* [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling
@ 2018-11-08 18:35 sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 01/20] octeontx2-af: Support to modify min/max allowed packet lengths sunil.kovvuri
                   ` (20 more replies)
  0 siblings, 21 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch set is a continuation of the three earlier-submitted
patch series adding a new admin function (AF) driver for the
Resource Virtualization Unit (RVU) of Marvell's OcteonTX2 SoCs:

1. octeontx2-af: Add RVU Admin Function driver
   https://www.spinics.net/lists/netdev/msg528272.html
2. octeontx2-af: NPA and NIX blocks initialization
   https://www.spinics.net/lists/netdev/msg529163.html
3. octeontx2-af: NPC parser and NIX blocks initialization
   https://www.spinics.net/lists/netdev/msg530252.html

This patch series adds support for the following:
RVU generic:
- Function Level Reset (FLR) IRQ handler
  When FLR is triggered for a PF, the AF receives an interrupt.
  This patch set adds logic to clean up the NPA, NIX and NPC
  block resources being used by that PF.

- Mailbox communication between the AF and its VFs
  Unlike the VFs of PF1-PFn, the AF (i.e. PF0) can communicate
  with its VFs directly. Support for this is added.

- AF VFs' I/O configuration
  These VFs are mapped to internal HW loopback (LBK) channels
  instead of CGX LMACs. Each pair of VFs works as the two ends
  of a hardwired interface: VF0's Tx is VF1's Rx and vice versa.

NPC block:
- MCAM entry management
  Support for allocating and freeing contiguous/non-contiguous
  MCAM entries, for lower/higher priority allocation relative to
  a reference entry, and for programming the entries.
- MCAM counter management and mapping/unmapping counters to MCAM entries
- Default KEY extract profile
- HW errata workarounds

NIX block:
- Minimum and maximum allowed packet length config
- HW errata workarounds

A few more changes, such as switching from spinlocks to mutexes,
are also part of this patch set.

Geetha sowjanya (2):
  octeontx2-af: Add FLR interrupt handler
  octeontx2-af: Teardown NPA, NIX LF upon receiving FLR

Kiran Kumar (1):
  octeontx2-af: Support to get NIX HW constants from AF

Linu Cherian (1):
  octeontx2-af: Add interrupt handlers for Master Enable event

Santosh Shukla (1):
  octeontx2-af: Add MKEX default profile

Stanislaw Kardach (1):
  octeontx2-af: Relax resource lock into mutex

Sunil Goutham (10):
  octeontx2-af: Support to modify min/max allowed packet lengths
  octeontx2-af: NPC MCAM entry alloc/free support
  octeontx2-af: MCAM entry installation support
  octeontx2-af: Support for NPC MCAM counters
  octeontx2-af: Map or unmap NPC MCAM entry and counter
  octeontx2-af: Alloc and config NPC MCAM entry at a time
  octeontx2-af: Support to enable/disable default MCAM entries
  octeontx2-af: Verify NPA/SSO/NIX PF_FUNC mapping
  octeontx2-af: Add FLR handling support for AF's VFs
  octeontx2-af: Workarounds for HW errata

Tomasz Duszynski (4):
  octeontx2-af: Add support for stripping STAG/CTAG
  octeontx2-af: Mbox communication support btw AF and it's VFs
  octeontx2-af: Enable sriov on AF to create VFs
  octeontx2-af: Configure AF VFs to talk over LBK channels

 drivers/net/ethernet/marvell/octeontx2/af/cgx.h    |    1 +
 drivers/net/ethernet/marvell/octeontx2/af/common.h |    7 +
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  200 ++-
 drivers/net/ethernet/marvell/octeontx2/af/npc.h    |   30 +
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |  937 +++++++++++--
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  118 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_cgx.c    |    6 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    |  495 ++++++-
 .../net/ethernet/marvell/octeontx2/af/rvu_npa.c    |   17 +
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 1386 +++++++++++++++++++-
 10 files changed, 2975 insertions(+), 222 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 01/20] octeontx2-af: Support to modify min/max allowed packet lengths
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 02/20] octeontx2-af: Support to get NIX HW constants from AF sunil.kovvuri
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch adds support for RVU PFs/VFs to modify the min/max
packet lengths allowed by HW. For VFs on PF0, the settings are
applied on the LBK link. An RX link's min/maxlen is configured
to the min/max across the PF and all of its VFs. On the TX side,
if requested, all SMQs attached to the requesting NIX LF are
updated with the new min/max lengths.

Transmit credits for Tx links are also updated based on the new maxlen.
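
For context, a minimal sketch of what a PF/VF-side request might look
like using the new NIX_SET_HW_FRS message. Only struct nix_frs_cfg and
NIC_HW_MAX_FRS come from this patch; the otx2_mbox_* helpers are
assumptions for illustration and not part of this series:

static int example_set_hw_frs(struct otx2_mbox *mbox, u16 maxlen)
{
	struct nix_frs_cfg *req;

	if (maxlen > NIC_HW_MAX_FRS)
		return -EINVAL;

	req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox); /* assumed helper */
	if (!req)
		return -ENOMEM;

	req->update_smq = true;		/* also update this LF's SMQs */
	req->update_minlen = false;	/* leave minlen untouched */
	req->sdp_link = false;		/* CGX/LBK link, not SDP */
	req->maxlen = maxlen;

	return otx2_sync_mbox_msg(mbox); /* assumed helper */
}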

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/cgx.h    |   1 +
 drivers/net/ethernet/marvell/octeontx2/af/common.h |   5 +
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  12 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   4 +
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 199 +++++++++++++++++++++
 5 files changed, 220 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
index 0a66d27..3bd38ed 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
@@ -22,6 +22,7 @@
 
 #define MAX_CGX				3
 #define MAX_LMAC_PER_CGX		4
+#define CGX_FIFO_LEN			65536 /* 64K for both Rx & Tx */
 #define CGX_OFFSET(x)			((x) * MAX_LMAC_PER_CGX)
 
 /* Registers */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index d39ada4..a8c89df 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -143,6 +143,11 @@ enum nix_scheduler {
 	NIX_TXSCH_LVL_CNT = 0x5,
 };
 
+/* Min/Max packet sizes, excluding FCS */
+#define	NIC_HW_MIN_FRS			40
+#define	NIC_HW_MAX_FRS			9212
+#define	SDP_HW_MAX_FRS			65535
+
 /* NIX RX action operation*/
 #define NIX_RX_ACTIONOP_DROP		(0x0ull)
 #define NIX_RX_ACTIONOP_UCAST		(0x1ull)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index a15a59c..a4e0fb5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -160,7 +160,8 @@ M(NIX_STATS_RST,	0x8007, msg_req, msg_rsp)			\
 M(NIX_VTAG_CFG,	0x8008, nix_vtag_config, msg_rsp)		\
 M(NIX_RSS_FLOWKEY_CFG,  0x8009, nix_rss_flowkey_cfg, msg_rsp)		\
 M(NIX_SET_MAC_ADDR,	0x800a, nix_set_mac_addr, msg_rsp)		\
-M(NIX_SET_RX_MODE,	0x800b, nix_rx_mode, msg_rsp)
+M(NIX_SET_RX_MODE,	0x800b, nix_rx_mode, msg_rsp)			\
+M(NIX_SET_HW_FRS,	0x800c, nix_frs_cfg, msg_rsp)
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES						\
@@ -522,4 +523,13 @@ struct nix_rx_mode {
 	u16	mode;
 };
 
+struct nix_frs_cfg {
+	struct mbox_msghdr hdr;
+	u8	update_smq;    /* Update SMQ's min/max lens */
+	u8	update_minlen; /* Set minlen also */
+	u8	sdp_link;      /* Set SDP RX link */
+	u16	maxlen;
+	u16	minlen;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 2c0580c..4f4829e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -122,6 +122,8 @@ struct rvu_pfvf {
 	u16		tx_chan_base;
 	u8              rx_chan_cnt; /* total number of RX channels */
 	u8              tx_chan_cnt; /* total number of TX channels */
+	u16		maxlen;
+	u16		minlen;
 
 	u8		mac_addr[ETH_ALEN]; /* MAC address of this PF/VF */
 
@@ -349,6 +351,8 @@ int rvu_mbox_handler_NIX_SET_MAC_ADDR(struct rvu *rvu,
 				      struct msg_rsp *rsp);
 int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req,
 				     struct msg_rsp *rsp);
+int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
+				    struct msg_rsp *rsp);
 
 /* NPC APIs */
 int rvu_npc_init(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index a5ab7ef..b8d8bb9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -168,14 +168,20 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
 
 	rvu_npc_install_bcast_match_entry(rvu, pcifunc,
 					  nixlf, pfvf->rx_chan_base);
+	pfvf->maxlen = NIC_HW_MIN_FRS;
+	pfvf->minlen = NIC_HW_MIN_FRS;
 
 	return 0;
 }
 
 static void nix_interface_deinit(struct rvu *rvu, u16 pcifunc, u8 nixlf)
 {
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
 	int err;
 
+	pfvf->maxlen = 0;
+	pfvf->minlen = 0;
+
 	/* Remove this PF_FUNC from bcast pkt replication list */
 	err = nix_update_bcast_mce_list(rvu, pcifunc, false);
 	if (err) {
@@ -1778,6 +1784,196 @@ int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req,
 	return 0;
 }
 
+static void nix_find_link_frs(struct rvu *rvu,
+			      struct nix_frs_cfg *req, u16 pcifunc)
+{
+	int pf = rvu_get_pf(pcifunc);
+	struct rvu_pfvf *pfvf;
+	int maxlen, minlen;
+	int numvfs, hwvf;
+	int vf;
+
+	/* Update with requester's min/max lengths */
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	pfvf->maxlen = req->maxlen;
+	if (req->update_minlen)
+		pfvf->minlen = req->minlen;
+
+	maxlen = req->maxlen;
+	minlen = req->update_minlen ? req->minlen : 0;
+
+	/* Get this PF's numVFs and starting hwvf */
+	rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf);
+
+	/* For each VF, compare requested max/minlen */
+	for (vf = 0; vf < numvfs; vf++) {
+		pfvf =  &rvu->hwvf[hwvf + vf];
+		if (pfvf->maxlen > maxlen)
+			maxlen = pfvf->maxlen;
+		if (req->update_minlen &&
+		    pfvf->minlen && pfvf->minlen < minlen)
+			minlen = pfvf->minlen;
+	}
+
+	/* Compare requested max/minlen with PF's max/minlen */
+	pfvf = &rvu->pf[pf];
+	if (pfvf->maxlen > maxlen)
+		maxlen = pfvf->maxlen;
+	if (req->update_minlen &&
+	    pfvf->minlen && pfvf->minlen < minlen)
+		minlen = pfvf->minlen;
+
+	/* Update the request with max/min across the PF and its VFs */
+	req->maxlen = maxlen;
+	if (req->update_minlen)
+		req->minlen = minlen;
+}
+
+int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
+				    struct msg_rsp *rsp)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc = req->hdr.pcifunc;
+	int pf = rvu_get_pf(pcifunc);
+	int blkaddr, schq, link = -1;
+	struct nix_txsch *txsch;
+	u64 cfg, lmac_fifo_len;
+	struct nix_hw *nix_hw;
+	u8 cgx = 0, lmac = 0;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	if (blkaddr < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	nix_hw = get_nix_hw(rvu->hw, blkaddr);
+	if (!nix_hw)
+		return -EINVAL;
+
+	if (!req->sdp_link && req->maxlen > NIC_HW_MAX_FRS)
+		return NIX_AF_ERR_FRS_INVALID;
+
+	if (req->update_minlen && req->minlen < NIC_HW_MIN_FRS)
+		return NIX_AF_ERR_FRS_INVALID;
+
+	/* Check if requester wants to update SMQ's */
+	if (!req->update_smq)
+		goto rx_frscfg;
+
+	/* Update min/maxlen in each of the SMQ attached to this PF/VF */
+	txsch = &nix_hw->txsch[NIX_TXSCH_LVL_SMQ];
+	spin_lock(&rvu->rsrc_lock);
+	for (schq = 0; schq < txsch->schq.max; schq++) {
+		if (txsch->pfvf_map[schq] != pcifunc)
+			continue;
+		cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq));
+		cfg = (cfg & ~(0xFFFFULL << 8)) | ((u64)req->maxlen << 8);
+		if (req->update_minlen)
+			cfg = (cfg & ~0x7FULL) | ((u64)req->minlen & 0x7F);
+		rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq), cfg);
+	}
+	spin_unlock(&rvu->rsrc_lock);
+
+rx_frscfg:
+	/* Check if config is for SDP link */
+	if (req->sdp_link) {
+		if (!hw->sdp_links)
+			return NIX_AF_ERR_RX_LINK_INVALID;
+		link = hw->cgx_links + hw->lbk_links;
+		goto linkcfg;
+	}
+
+	/* Check if the request is from CGX mapped RVU PF */
+	if (is_pf_cgxmapped(rvu, pf)) {
+		/* Get CGX and LMAC to which this PF is mapped and find link */
+		rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx, &lmac);
+		link = (cgx * hw->lmac_per_cgx) + lmac;
+	} else if (pf == 0) {
+		/* For VFs of PF0 ingress is LBK port, so config LBK link */
+		link = hw->cgx_links;
+	}
+
+	if (link < 0)
+		return NIX_AF_ERR_RX_LINK_INVALID;
+
+	nix_find_link_frs(rvu, req, pcifunc);
+
+linkcfg:
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link));
+	cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16);
+	if (req->update_minlen)
+		cfg = (cfg & ~0xFFFFULL) | req->minlen;
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), cfg);
+
+	if (req->sdp_link || pf == 0)
+		return 0;
+
+	/* Update transmit credits for CGX links */
+	lmac_fifo_len =
+		CGX_FIFO_LEN / cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_TX_LINKX_NORM_CREDIT(link));
+	cfg &= ~(0xFFFFFULL << 12);
+	cfg |=  ((lmac_fifo_len - req->maxlen) / 16) << 12;
+	rvu_write64(rvu, blkaddr, NIX_AF_TX_LINKX_NORM_CREDIT(link), cfg);
+	rvu_write64(rvu, blkaddr, NIX_AF_TX_LINKX_EXPR_CREDIT(link), cfg);
+
+	return 0;
+}
+
+static void nix_link_config(struct rvu *rvu, int blkaddr)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	int cgx, lmac_cnt, slink, link;
+	u64 tx_credits;
+
+	/* Set default min/max packet lengths allowed on NIX Rx links.
+	 *
+	 * With HW reset minlen value of 60 bytes, HW will treat ARP pkts
+	 * as undersize and report them to SW as error pkts, hence
+	 * setting it to 40 bytes.
+	 */
+	for (link = 0; link < (hw->cgx_links + hw->lbk_links); link++) {
+		rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link),
+			    NIC_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS);
+	}
+
+	if (hw->sdp_links) {
+		link = hw->cgx_links + hw->lbk_links;
+		rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link),
+			    SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS);
+	}
+
+	/* Set credits for Tx links assuming max packet length allowed.
+	 * This will be reconfigured based on MTU set for PF/VF.
+	 */
+	for (cgx = 0; cgx < hw->cgx; cgx++) {
+		lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
+		tx_credits = ((CGX_FIFO_LEN / lmac_cnt) - NIC_HW_MAX_FRS) / 16;
+		/* Enable credits and set credit pkt count to max allowed */
+		tx_credits =  (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1);
+		slink = cgx * hw->lmac_per_cgx;
+		for (link = slink; link < (slink + lmac_cnt); link++) {
+			rvu_write64(rvu, blkaddr,
+				    NIX_AF_TX_LINKX_NORM_CREDIT(link),
+				    tx_credits);
+			rvu_write64(rvu, blkaddr,
+				    NIX_AF_TX_LINKX_EXPR_CREDIT(link),
+				    tx_credits);
+		}
+	}
+
+	/* Set Tx credits for LBK link */
+	slink = hw->cgx_links;
+	for (link = slink; link < (slink + hw->lbk_links); link++) {
+		tx_credits = 1000; /* 10 * max LBK datarate = 10 * 100Gbps */
+		/* Enable credits and set credit pkt count to max allowed */
+		tx_credits =  (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1);
+		rvu_write64(rvu, blkaddr,
+			    NIX_AF_TX_LINKX_NORM_CREDIT(link), tx_credits);
+		rvu_write64(rvu, blkaddr,
+			    NIX_AF_TX_LINKX_EXPR_CREDIT(link), tx_credits);
+	}
+}
+
 static int nix_calibrate_x2p(struct rvu *rvu, int blkaddr)
 {
 	int idx, err;
@@ -1922,6 +2118,9 @@ int rvu_nix_init(struct rvu *rvu)
 			    (NPC_LID_LC << 8) | (NPC_LT_LC_IP << 4) | 0x0F);
 
 		nix_rx_flowkey_alg_cfg(rvu, blkaddr);
+
+		/* Initialize CGX/LBK/SDP link credits, min/max pkt lengths */
+		nix_link_config(rvu, blkaddr);
 	}
 	return 0;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 02/20] octeontx2-af: Support to get NIX HW constants from AF
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 01/20] octeontx2-af: Support to modify min/max allowed packet lengths sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 03/20] octeontx2-af: Relax resource lock into mutex sunil.kovvuri
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Kiran Kumar, Sunil Goutham

From: Kiran Kumar <kirankumark@marvell.com>

This patch reads HW limits, such as the number of Rx/Tx stats and
the number of queue interrupts supported per NIX LF, from AF
registers and returns them to the PF/VF in the NIX_LF_ALLOC response.
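
For reference, the bit fields read from NIX_AF_CONST1/CONST2 can be
decoded as in the standalone sketch below; the register values are
made up for illustration and the shifts mirror the hunk that follows:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t const1 = 0x0000001020000000ULL;	/* example value */
	uint64_t const2 = 0x0000000004046000ULL;	/* example value */

	uint8_t  lf_rx_stats = (const1 >> 32) & 0xFF;	/* NIX_AF_CONST1::LF_RX_STATS */
	uint8_t  lf_tx_stats = (const1 >> 24) & 0xFF;	/* NIX_AF_CONST1::LF_TX_STATS */
	uint16_t qints = (const2 >> 12) & 0xFFF;	/* NIX_AF_CONST2::QINTS */
	uint16_t cints = (const2 >> 24) & 0xFFF;	/* NIX_AF_CONST2::CINTS */

	printf("rx_stats=%u tx_stats=%u qints=%u cints=%u\n",
	       lf_rx_stats, lf_tx_stats, qints, cints);
	return 0;
}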

Signed-off-by: Kiran Kumar <kirankumark@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h    | 4 ++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c | 8 ++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index a4e0fb5..89db883 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -393,6 +393,10 @@ struct nix_lf_alloc_rsp {
 	u8	lso_tsov4_idx;
 	u8	lso_tsov6_idx;
 	u8      mac_addr[ETH_ALEN];
+	u8	lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
+	u8	lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
+	u16	cints; /* NIX_AF_CONST2::CINTS */
+	u16	qints; /* NIX_AF_CONST2::QINTS */
 };
 
 /* NIX AQ enqueue msg */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b8d8bb9..0b37a88 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -829,6 +829,14 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 	rsp->tx_chan_cnt = pfvf->tx_chan_cnt;
 	rsp->lso_tsov4_idx = NIX_LSO_FORMAT_IDX_TSOV4;
 	rsp->lso_tsov6_idx = NIX_LSO_FORMAT_IDX_TSOV6;
+	/* Get HW supported stat count */
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+	rsp->lf_rx_stats = ((cfg >> 32) & 0xFF);
+	rsp->lf_tx_stats = ((cfg >> 24) & 0xFF);
+	/* Get count of CQ IRQs and error IRQs supported per LF */
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST2);
+	rsp->qints = ((cfg >> 12) & 0xFFF);
+	rsp->cints = ((cfg >> 24) & 0xFFF);
 	return rc;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 03/20] octeontx2-af: Relax resource lock into mutex
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 01/20] octeontx2-af: Support to modify min/max allowed packet lengths sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 02/20] octeontx2-af: Support to get NIX HW constants from AF sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support sunil.kovvuri
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Stanislaw Kardach, Sunil Goutham

From: Stanislaw Kardach <skardach@marvell.com>

The resource locks do not need to be spinlocks as they are not
used in any interrupt handling routines (only in bottom halves).
Therefore relax them into mutexes so that later on we may use them
in routines that might sleep.
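
As an illustration of what this enables (not code taken from the
driver): once rsrc_lock is a mutex, a sleeping allocation is legal
while the lock is held, where a spinlock would have forced GFP_ATOMIC:

static struct mce *example_alloc_mce(struct rvu *rvu, u16 idx)
{
	struct mce *mce;

	mutex_lock(&rvu->rsrc_lock);
	mce = kzalloc(sizeof(*mce), GFP_KERNEL);	/* may sleep now */
	if (mce)
		mce->idx = idx;
	mutex_unlock(&rvu->rsrc_lock);
	return mce;
}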

Signed-off-by: Stanislaw Kardach <skardach@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    | 18 ++++++++-------
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  6 ++---
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 27 +++++++++++-----------
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    |  4 +++-
 4 files changed, 30 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index dc28fa2..c7f00895 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -153,17 +153,17 @@ int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot)
 	u16 match = 0;
 	int lf;
 
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 	for (lf = 0; lf < block->lf.max; lf++) {
 		if (block->fn_map[lf] == pcifunc) {
 			if (slot == match) {
-				spin_unlock(&rvu->rsrc_lock);
+				mutex_unlock(&rvu->rsrc_lock);
 				return lf;
 			}
 			match++;
 		}
 	}
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 	return -ENODEV;
 }
 
@@ -597,6 +597,8 @@ static void rvu_free_hw_resources(struct rvu *rvu)
 	dma_unmap_resource(rvu->dev, rvu->msix_base_iova,
 			   max_msix * PCI_MSIX_ENTRY_SIZE,
 			   DMA_BIDIRECTIONAL, 0);
+
+	mutex_destroy(&rvu->rsrc_lock);
 }
 
 static int rvu_setup_hw_resources(struct rvu *rvu)
@@ -752,7 +754,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	if (!rvu->hwvf)
 		return -ENOMEM;
 
-	spin_lock_init(&rvu->rsrc_lock);
+	mutex_init(&rvu->rsrc_lock);
 
 	err = rvu_setup_msix_resources(rvu);
 	if (err)
@@ -926,7 +928,7 @@ static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach,
 	struct rvu_block *block;
 	int blkid;
 
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 
 	/* Check for partial resource detach */
 	if (detach && detach->partial)
@@ -956,7 +958,7 @@ static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach,
 		rvu_detach_block(rvu, pcifunc, block->type);
 	}
 
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 	return 0;
 }
 
@@ -1119,7 +1121,7 @@ static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu,
 	if (!attach->modify)
 		rvu_detach_rsrcs(rvu, NULL, pcifunc);
 
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 
 	/* Check if the request can be accommodated */
 	err = rvu_check_rsrc_availability(rvu, attach, pcifunc);
@@ -1163,7 +1165,7 @@ static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu,
 	}
 
 exit:
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 	return err;
 }
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 4f4829e..12268de 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -64,7 +64,7 @@ struct nix_mcast {
 	struct qmem	*mcast_buf;
 	int		replay_pkind;
 	int		next_free_mce;
-	spinlock_t	mce_lock; /* Serialize MCE updates */
+	struct mutex	mce_lock; /* Serialize MCE updates */
 };
 
 struct nix_mce_list {
@@ -74,7 +74,7 @@ struct nix_mce_list {
 };
 
 struct npc_mcam {
-	spinlock_t	lock;	/* MCAM entries and counters update lock */
+	struct mutex	lock;	/* MCAM entries and counters update lock */
 	u8	keysize;	/* MCAM keysize 112/224/448 bits */
 	u8	banks;		/* Number of MCAM banks */
 	u8	banks_per_entry;/* Number of keywords in key */
@@ -174,7 +174,7 @@ struct rvu {
 	struct rvu_hwinfo       *hw;
 	struct rvu_pfvf		*pf;
 	struct rvu_pfvf		*hwvf;
-	spinlock_t		rsrc_lock; /* Serialize resource alloc/free */
+	struct mutex            rsrc_lock; /* Serialize resource alloc/free */
 
 	/* Mbox */
 	struct otx2_mbox	mbox;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 0b37a88..4d8ae2e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -109,12 +109,12 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
 	if (schq >= txsch->schq.max)
 		return false;
 
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 	if (txsch->pfvf_map[schq] != pcifunc) {
-		spin_unlock(&rvu->rsrc_lock);
+		mutex_unlock(&rvu->rsrc_lock);
 		return false;
 	}
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 	return true;
 }
 
@@ -953,7 +953,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu,
 	if (!nix_hw)
 		return -EINVAL;
 
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
 		txsch = &nix_hw->txsch[lvl];
 		req_schq = req->schq_contig[lvl] + req->schq[lvl];
@@ -1009,7 +1009,7 @@ int rvu_mbox_handler_NIX_TXSCH_ALLOC(struct rvu *rvu,
 err:
 	rc = NIX_AF_ERR_TLX_ALLOC_FAIL;
 exit:
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 	return rc;
 }
 
@@ -1034,7 +1034,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 		return NIX_AF_ERR_AF_LF_INVALID;
 
 	/* Disable TL2/3 queue links before SMQ flush*/
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 	for (lvl = NIX_TXSCH_LVL_TL4; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
 		if (lvl != NIX_TXSCH_LVL_TL2 && lvl != NIX_TXSCH_LVL_TL4)
 			continue;
@@ -1076,7 +1076,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 			txsch->pfvf_map[schq] = 0;
 		}
 	}
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 
 	/* Sync cached info for this LF in NDC-TX to LLC/DRAM */
 	rvu_write64(rvu, blkaddr, NIX_AF_NDC_TX_SYNC, BIT_ULL(12) | nixlf);
@@ -1308,7 +1308,7 @@ static int nix_update_mce_list(struct nix_mce_list *mce_list,
 		return 0;
 
 	/* Add a new one to the list, at the tail */
-	mce = kzalloc(sizeof(*mce), GFP_ATOMIC);
+	mce = kzalloc(sizeof(*mce), GFP_KERNEL);
 	if (!mce)
 		return -ENOMEM;
 	mce->idx = idx;
@@ -1354,7 +1354,7 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
 		return -EINVAL;
 	}
 
-	spin_lock(&mcast->mce_lock);
+	mutex_lock(&mcast->mce_lock);
 
 	err = nix_update_mce_list(mce_list, pcifunc, idx, add);
 	if (err)
@@ -1384,7 +1384,7 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
 	}
 
 end:
-	spin_unlock(&mcast->mce_lock);
+	mutex_unlock(&mcast->mce_lock);
 	return err;
 }
 
@@ -1469,7 +1469,7 @@ static int nix_setup_mcast(struct rvu *rvu, struct nix_hw *nix_hw, int blkaddr)
 		    BIT_ULL(63) | (mcast->replay_pkind << 24) |
 		    BIT_ULL(20) | MC_BUF_CNT);
 
-	spin_lock_init(&mcast->mce_lock);
+	mutex_init(&mcast->mce_lock);
 
 	return nix_setup_bcast_tables(rvu, nix_hw);
 }
@@ -1869,7 +1869,7 @@ int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
 
 	/* Update min/maxlen in each of the SMQ attached to this PF/VF */
 	txsch = &nix_hw->txsch[NIX_TXSCH_LVL_SMQ];
-	spin_lock(&rvu->rsrc_lock);
+	mutex_lock(&rvu->rsrc_lock);
 	for (schq = 0; schq < txsch->schq.max; schq++) {
 		if (txsch->pfvf_map[schq] != pcifunc)
 			continue;
@@ -1879,7 +1879,7 @@ int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
 			cfg = (cfg & ~0x7FULL) | ((u64)req->minlen & 0x7F);
 		rvu_write64(rvu, blkaddr, NIX_AF_SMQX_CFG(schq), cfg);
 	}
-	spin_unlock(&rvu->rsrc_lock);
+	mutex_unlock(&rvu->rsrc_lock);
 
 rx_frscfg:
 	/* Check if config is for SDP link */
@@ -2162,5 +2162,6 @@ void rvu_nix_freemem(struct rvu *rvu)
 		mcast = &nix_hw->mcast;
 		qmem_free(rvu->dev, mcast->mce_ctx);
 		qmem_free(rvu->dev, mcast->mcast_buf);
+		mutex_destroy(&mcast->mce_lock);
 	}
 }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 23ff47f..3a96dfd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -732,7 +732,7 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr)
 	mcam->nixlf_offset = mcam->entries;
 	mcam->pf_offset = mcam->nixlf_offset + nixlf_count;
 
-	spin_lock_init(&mcam->lock);
+	mutex_init(&mcam->lock);
 
 	return 0;
 }
@@ -811,6 +811,8 @@ int rvu_npc_init(struct rvu *rvu)
 void rvu_npc_freemem(struct rvu *rvu)
 {
 	struct npc_pkind *pkind = &rvu->hw->pkind;
+	struct npc_mcam *mcam = &rvu->hw->mcam;
 
 	kfree(pkind->rsrc.bmap);
+	mutex_destroy(&mcam->lock);
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (2 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 03/20] octeontx2-af: Relax resource lock into mutex sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 22:22   ` David Miller
  2018-11-08 18:35 ` [PATCH 05/20] octeontx2-af: MCAM entry installation support sunil.kovvuri
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch adds NPC MCAM entry management and support for
allocating and freeing entries via mailbox. Both contiguous and
non-contiguous allocations are supported. In the contiguous case,
if the request cannot be met, the maximum contiguous run of
available entries is allocated instead.

Allocating an index at higher or lower priority w.r.t. a reference
MCAM index is also supported.
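
A sketch of how a consumer might use the new message; only the
npc_mcam_alloc_entry_req/rsp format below comes from this patch,
the otx2_mbox_* helpers and the 'ucast_idx' argument are assumptions
for illustration:

static int example_alloc_mcam(struct otx2_mbox *mbox, u16 ucast_idx)
{
	struct npc_mcam_alloc_entry_req *req;

	req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox); /* assumed */
	if (!req)
		return -ENOMEM;

	req->contig = true;			/* one contiguous block */
	req->count = 4;				/* four entries */
	req->priority = NPC_MCAM_HIGHER_PRIO;	/* above the reference entry */
	req->ref_entry = ucast_idx;		/* e.g. this LF's ucast entry */

	/* On success rsp->entry holds the start index and rsp->count how
	 * many entries were actually granted (may be fewer than requested).
	 */
	return otx2_sync_mbox_msg(mbox);	/* assumed helper */
}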

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  48 ++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  19 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    |  11 +
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 514 ++++++++++++++++++++-
 4 files changed, 587 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 89db883..9eefbf7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -148,6 +148,9 @@ M(NPA_HWCTX_DISABLE,	0x403, hwctx_disable_req, msg_rsp)		\
 /* TIM mbox IDs (range 0x800 - 0x9FF) */				\
 /* CPT mbox IDs (range 0xA00 - 0xBFF) */				\
 /* NPC mbox IDs (range 0x6000 - 0x7FFF) */				\
+M(NPC_MCAM_ALLOC_ENTRY,	0x6000, npc_mcam_alloc_entry_req,		\
+				npc_mcam_alloc_entry_rsp)		\
+M(NPC_MCAM_FREE_ENTRY,	0x6001, npc_mcam_free_entry_req, msg_rsp)	\
 /* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 M(NIX_LF_ALLOC,		0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp)	\
 M(NIX_LF_FREE,		0x8001, msg_req, msg_rsp)			\
@@ -348,6 +351,8 @@ struct hwctx_disable_req {
 	u8 ctype;
 };
 
+/* NIX mbox message formats */
+
 /* NIX mailbox error codes
  * Range 401 - 500.
  */
@@ -536,4 +541,47 @@ struct nix_frs_cfg {
 	u16	minlen;
 };
 
+/* NPC mbox message structs */
+
+#define NPC_MCAM_ENTRY_INVALID	0xFFFF
+#define NPC_MCAM_INVALID_MAP	0xFFFF
+
+/* NPC mailbox error codes
+ * Range 701 - 800.
+ */
+enum npc_af_status {
+	NPC_MCAM_INVALID_REQ	= -701,
+	NPC_MCAM_ALLOC_DENIED	= -702,
+	NPC_MCAM_ALLOC_FAILED	= -703,
+	NPC_MCAM_PERM_DENIED	= -704,
+};
+
+struct npc_mcam_alloc_entry_req {
+	struct mbox_msghdr hdr;
+#define NPC_MAX_NONCONTIG_ENTRIES	256
+	u8  contig;   /* Contiguous entries ? */
+#define NPC_MCAM_ANY_PRIO		0
+#define NPC_MCAM_LOWER_PRIO		1
+#define NPC_MCAM_HIGHER_PRIO		2
+	u8  priority; /* Lower or higher w.r.t ref_entry */
+	u16 ref_entry;
+	u16 count;    /* Number of entries requested */
+};
+
+struct npc_mcam_alloc_entry_rsp {
+	struct mbox_msghdr hdr;
+	u16 entry; /* Entry allocated or start index if contiguous.
+		    * Invalid incase of non-contiguous.
+		    */
+	u16 count; /* Number of entries allocated */
+	u16 free_count; /* Number of entries available */
+	u16 entry_list[NPC_MAX_NONCONTIG_ENTRIES];
+};
+
+struct npc_mcam_free_entry_req {
+	struct mbox_msghdr hdr;
+	u16 entry; /* Entry index to be freed */
+	u8  all;   /* If all entries allocated to this PFVF to be freed */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 12268de..01a2fcd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -74,15 +74,25 @@ struct nix_mce_list {
 };
 
 struct npc_mcam {
+	struct rsrc_bmap counters;
 	struct mutex	lock;	/* MCAM entries and counters update lock */
+	unsigned long	*bmap;		/* bitmap, 0 => bmap_entries */
+	unsigned long	*bmap_reverse;	/* Reverse bitmap, bmap_entries => 0 */
+	u16	bmap_entries;	/* Number of unreserved MCAM entries */
+	u16	bmap_fcnt;	/* MCAM entries free count */
+	u16	*entry2pfvf_map;
+	u16	*cntr2pfvf_map;
 	u8	keysize;	/* MCAM keysize 112/224/448 bits */
 	u8	banks;		/* Number of MCAM banks */
 	u8	banks_per_entry;/* Number of keywords in key */
 	u16	banksize;	/* Number of MCAM entries in each bank */
 	u16	total_entries;	/* Total number of MCAM entries */
-	u16     entries;	/* Total minus reserved for NIX LFs */
 	u16	nixlf_offset;	/* Offset of nixlf rsvd uncast entries */
 	u16	pf_offset;	/* Offset of PF's rsvd bcast, promisc entries */
+	u16	lprio_count;
+	u16	lprio_start;
+	u16	hprio_count;
+	u16	hprio_end;
 };
 
 /* Structure for per RVU func info ie PF/VF */
@@ -315,6 +325,7 @@ int rvu_mbox_handler_NPA_LF_FREE(struct rvu *rvu, struct msg_req *req,
 				 struct msg_rsp *rsp);
 
 /* NIX APIs */
+bool is_nixlf_attached(struct rvu *rvu, u16 pcifunc);
 int rvu_nix_init(struct rvu *rvu);
 void rvu_nix_freemem(struct rvu *rvu);
 int rvu_get_nixlf_count(struct rvu *rvu);
@@ -369,4 +380,10 @@ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
 void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
 				    int group, int alg_idx, int mcam_index);
+int rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_alloc_entry_req *req,
+					  struct npc_mcam_alloc_entry_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
+					 struct npc_mcam_free_entry_req *req,
+					 struct msg_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 4d8ae2e..9de9aaf 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -55,6 +55,17 @@ struct mce {
 	u16			pcifunc;
 };
 
+bool is_nixlf_attached(struct rvu *rvu, u16 pcifunc)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	int blkaddr;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	if (!pfvf->nixlf || blkaddr < 0)
+		return false;
+	return true;
+}
+
 int rvu_get_nixlf_count(struct rvu *rvu)
 {
 	struct rvu_block *block;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 3a96dfd..82c1c10 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -26,6 +26,9 @@
 
 #define NPC_PARSE_RESULT_DMAC_OFFSET	8
 
+static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
+				      int blkaddr, u16 pcifunc);
+
 struct mcam_entry {
 #define NPC_MAX_KWS_IN_KEY	7 /* Number of keywords in max keywidth */
 	u64	kw[NPC_MAX_KWS_IN_KEY];
@@ -466,6 +469,13 @@ void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
 	if (blkaddr < 0)
 		return;
 
+	mutex_lock(&mcam->lock);
+
+	/* Disable and free all MCAM entries mapped to this 'pcifunc' */
+	npc_mcam_free_all_entries(rvu, mcam, blkaddr, pcifunc);
+
+	mutex_unlock(&mcam->lock);
+
 	/* Disable ucast MCAM match entry of this PF/VF */
 	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
 					 nixlf, NIXLF_UCAST_ENTRY);
@@ -690,13 +700,14 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr)
 {
 	int nixlf_count = rvu_get_nixlf_count(rvu);
 	struct npc_mcam *mcam = &rvu->hw->mcam;
-	int rsvd;
+	int rsvd, err;
 	u64 cfg;
 
 	/* Get HW limits */
 	cfg = rvu_read64(rvu, blkaddr, NPC_AF_CONST);
 	mcam->banks = (cfg >> 44) & 0xF;
 	mcam->banksize = (cfg >> 28) & 0xFFFF;
+	mcam->counters.max = (cfg >> 48) & 0xFFFF;
 
 	/* Actual number of MCAM entries vary by entry size */
 	cfg = (rvu_read64(rvu, blkaddr,
@@ -728,20 +739,69 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr)
 		return -ENOMEM;
 	}
 
-	mcam->entries = mcam->total_entries - rsvd;
-	mcam->nixlf_offset = mcam->entries;
+	mcam->bmap_entries = mcam->total_entries - rsvd;
+	mcam->nixlf_offset = mcam->bmap_entries;
 	mcam->pf_offset = mcam->nixlf_offset + nixlf_count;
 
+	/* Allocate bitmaps for managing MCAM entries */
+	mcam->bmap = devm_kcalloc(rvu->dev, BITS_TO_LONGS(mcam->bmap_entries),
+				  sizeof(long), GFP_KERNEL);
+	if (!mcam->bmap)
+		return -ENOMEM;
+
+	mcam->bmap_reverse = devm_kcalloc(rvu->dev,
+					  BITS_TO_LONGS(mcam->bmap_entries),
+					  sizeof(long), GFP_KERNEL);
+	if (!mcam->bmap_reverse)
+		return -ENOMEM;
+
+	mcam->bmap_fcnt = mcam->bmap_entries;
+
+	/* Alloc memory for saving entry to RVU PFFUNC allocation mapping */
+	mcam->entry2pfvf_map = devm_kcalloc(rvu->dev, mcam->bmap_entries,
+					    sizeof(u16), GFP_KERNEL);
+	if (!mcam->entry2pfvf_map)
+		return -ENOMEM;
+
+	/* Reserve 1/8th of MCAM entries at the bottom for low priority
+	 * allocations and another 1/8th at the top for high priority
+	 * allocations.
+	 */
+	mcam->lprio_count = mcam->bmap_entries / 8;
+	if (mcam->lprio_count > BITS_PER_LONG)
+		mcam->lprio_count = round_down(mcam->lprio_count,
+					       BITS_PER_LONG);
+	mcam->lprio_start = mcam->bmap_entries - mcam->lprio_count;
+	mcam->hprio_count = mcam->lprio_count;
+	mcam->hprio_end = mcam->hprio_count;
+
+	/* Allocate bitmap for managing MCAM counters and memory
+	 * for saving counter to RVU PFFUNC allocation mapping.
+	 */
+	err = rvu_alloc_bitmap(&mcam->counters);
+	if (err)
+		return err;
+
+	mcam->cntr2pfvf_map = devm_kcalloc(rvu->dev, mcam->counters.max,
+					   sizeof(u16), GFP_KERNEL);
+	if (!mcam->cntr2pfvf_map)
+		goto free_mem;
+
 	mutex_init(&mcam->lock);
 
 	return 0;
+
+free_mem:
+	kfree(mcam->counters.bmap);
+	return -ENOMEM;
 }
 
 int rvu_npc_init(struct rvu *rvu)
 {
 	struct npc_pkind *pkind = &rvu->hw->pkind;
 	u64 keyz = NPC_MCAM_KEY_X2;
-	int blkaddr, err;
+	int blkaddr, entry, bank, err;
+	u64 cfg;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0) {
@@ -749,6 +809,14 @@ int rvu_npc_init(struct rvu *rvu)
 		return -ENODEV;
 	}
 
+	/* First disable all MCAM entries, to stop traffic towards NIXLFs */
+	cfg = rvu_read64(rvu, blkaddr, NPC_AF_CONST);
+	for (bank = 0; bank < ((cfg >> 44) & 0xF); bank++) {
+		for (entry = 0; entry < ((cfg >> 28) & 0xFFFF); entry++)
+			rvu_write64(rvu, blkaddr,
+				    NPC_AF_MCAMEX_BANKX_CFG(entry, bank), 0);
+	}
+
 	/* Allocate resource bimap for pkind*/
 	pkind->rsrc.max = (rvu_read64(rvu, blkaddr,
 				      NPC_AF_CONST1) >> 12) & 0xFF;
@@ -814,5 +882,443 @@ void rvu_npc_freemem(struct rvu *rvu)
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 
 	kfree(pkind->rsrc.bmap);
+	kfree(mcam->counters.bmap);
 	mutex_destroy(&mcam->lock);
 }
+
+static int npc_mcam_verify_entry(struct npc_mcam *mcam,
+				 u16 pcifunc, int entry)
+{
+	/* Verify if entry is valid and if it is indeed
+	 * allocated to the requesting PFFUNC.
+	 */
+	if (entry >= mcam->bmap_entries)
+		return NPC_MCAM_INVALID_REQ;
+
+	if (pcifunc != mcam->entry2pfvf_map[entry])
+		return NPC_MCAM_PERM_DENIED;
+
+	return 0;
+}
+
+/* Sets MCAM entry in bitmap as used. Update
+ * reverse bitmap too. Should be called with
+ * 'mcam->lock' held.
+ */
+static void npc_mcam_set_bit(struct npc_mcam *mcam, u16 index)
+{
+	u16 entry, rentry;
+
+	entry = index;
+	rentry = mcam->bmap_entries - index - 1;
+
+	__set_bit(entry, mcam->bmap);
+	__set_bit(rentry, mcam->bmap_reverse);
+	mcam->bmap_fcnt--;
+}
+
+/* Sets MCAM entry in bitmap as free. Update
+ * reverse bitmap too. Should be called with
+ * 'mcam->lock' held.
+ */
+static void npc_mcam_clear_bit(struct npc_mcam *mcam, u16 index)
+{
+	u16 entry, rentry;
+
+	entry = index;
+	rentry = mcam->bmap_entries - index - 1;
+
+	__clear_bit(entry, mcam->bmap);
+	__clear_bit(rentry, mcam->bmap_reverse);
+	mcam->bmap_fcnt++;
+}
+
+static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
+				      int blkaddr, u16 pcifunc)
+{
+	u16 index;
+
+	/* Scan all MCAM entries and free the ones mapped to 'pcifunc' */
+	for (index = 0; index < mcam->bmap_entries; index++) {
+		if (mcam->entry2pfvf_map[index] == pcifunc) {
+			mcam->entry2pfvf_map[index] = NPC_MCAM_INVALID_MAP;
+			/* Free the entry in bitmap */
+			npc_mcam_clear_bit(mcam, index);
+			/* Disable the entry */
+			npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
+		}
+	}
+}
+
+/* Find area of contiguous free entries of size 'nr'.
+ * If not found return max contiguous free entries available.
+ */
+static u16 npc_mcam_find_zero_area(unsigned long *map, u16 size, u16 start,
+				   u16 nr, u16 *max_area)
+{
+	u16 max_area_start = 0;
+	u16 index, next, end;
+
+	*max_area = 0;
+
+again:
+	index = find_next_zero_bit(map, size, start);
+	if (index >= size)
+		return max_area_start;
+
+	end = ((index + nr) >= size) ? size : index + nr;
+	next = find_next_bit(map, end, index);
+	if (*max_area < (next - index)) {
+		*max_area = next - index;
+		max_area_start = index;
+	}
+
+	if (next < end) {
+		start = next + 1;
+		goto again;
+	}
+
+	return max_area_start;
+}
+
+/* Find number of free MCAM entries available
+ * within range i.e in between 'start' and 'end'.
+ */
+static u16 npc_mcam_get_free_count(unsigned long *map, u16 start, u16 end)
+{
+	u16 index, next;
+	u16 fcnt = 0;
+
+again:
+	if (start >= end)
+		return fcnt;
+
+	index = find_next_zero_bit(map, end, start);
+	if (index >= end)
+		return fcnt;
+
+	next = find_next_bit(map, end, index);
+	if (next <= end) {
+		fcnt += next - index;
+		start = next + 1;
+		goto again;
+	}
+
+	fcnt += end - index;
+	return fcnt;
+}
+
+static void
+npc_get_mcam_search_range_priority(struct npc_mcam *mcam,
+				   struct npc_mcam_alloc_entry_req *req,
+				   u16 *start, u16 *end, bool *reverse)
+{
+	u16 fcnt;
+
+	if (req->priority == NPC_MCAM_HIGHER_PRIO)
+		goto hprio;
+
+	/* For a low priority entry allocation
+	 * - If reference entry is not in hprio zone then
+	 *      search range: ref_entry to end.
+	 * - If reference entry is in hprio zone and if
+	 *   request can be accommodated in non-hprio zone then
+	 *      search range: 'start of middle zone' to 'end'
+	 * - else search in reverse, so that less number of hprio
+	 *   zone entries are allocated.
+	 */
+
+	*reverse = false;
+	*start = req->ref_entry + 1;
+	*end = mcam->bmap_entries;
+
+	if (req->ref_entry >= mcam->hprio_end)
+		return;
+
+	fcnt = npc_mcam_get_free_count(mcam->bmap,
+				       mcam->hprio_end, mcam->bmap_entries);
+	if (fcnt > req->count)
+		*start = mcam->hprio_end;
+	else
+		*reverse = true;
+	return;
+
+hprio:
+	/* For a high priority entry allocation, search is always
+	 * in reverse to preserve hprio zone entries.
+	 * - If reference entry is not in lprio zone then
+	 *      search range: 0 to ref_entry.
+	 * - If reference entry is in lprio zone and if
+	 *   request can be accommodated in middle zone then
+	 *      search range: 'hprio_end' to 'lprio_start'
+	 */
+
+	*reverse = true;
+	*start = 0;
+	*end = req->ref_entry;
+
+	if (req->ref_entry <= mcam->lprio_start)
+		return;
+
+	fcnt = npc_mcam_get_free_count(mcam->bmap,
+				       mcam->hprio_end, mcam->lprio_start);
+	if (fcnt < req->count)
+		return;
+	*start = mcam->hprio_end;
+	*end = mcam->lprio_start;
+}
+
+static int npc_mcam_alloc_entries(struct npc_mcam *mcam, u16 pcifunc,
+				  struct npc_mcam_alloc_entry_req *req,
+				  struct npc_mcam_alloc_entry_rsp *rsp)
+{
+	u16 entry_list[NPC_MAX_NONCONTIG_ENTRIES];
+	u16 fcnt, hp_fcnt, lp_fcnt;
+	u16 start, end, index;
+	int entry, next_start;
+	bool reverse = false;
+	unsigned long *bmap;
+	u16 max_contig;
+
+	mutex_lock(&mcam->lock);
+
+	/* Check if there are any free entries */
+	if (!mcam->bmap_fcnt) {
+		mutex_unlock(&mcam->lock);
+		return NPC_MCAM_ALLOC_FAILED;
+	}
+
+	/* MCAM entries are divided into high priority, middle and
+	 * low priority zones. Idea is to not allocate top and lower
+	 * most entries as much as possible, this is to increase
+	 * probability of honouring priority allocation requests.
+	 *
+	 * Two bitmaps are used for mcam entry management,
+	 * mcam->bmap for forward search i.e '0 to mcam->bmap_entries'.
+	 * mcam->bmap_reverse for reverse search i.e 'mcam->bmap_entries to 0'.
+	 *
+	 * Reverse bitmap is used to allocate entries
+	 * - when a higher priority entry is requested
+	 * - when available free entries are less.
+	 * Lower priority ones out of available free entries are always
+	 * chosen when 'high vs low' question arises.
+	 */
+
+	/* Get the search range for priority allocation request */
+	if (req->priority) {
+		npc_get_mcam_search_range_priority(mcam, req,
+						   &start, &end, &reverse);
+		goto alloc;
+	}
+
+	/* Find out the search range for non-priority allocation request
+	 *
+	 * Get MCAM free entry count in middle zone.
+	 */
+	lp_fcnt = npc_mcam_get_free_count(mcam->bmap,
+					  mcam->lprio_start,
+					  mcam->bmap_entries);
+	hp_fcnt = npc_mcam_get_free_count(mcam->bmap, 0, mcam->hprio_end);
+	fcnt = mcam->bmap_fcnt - lp_fcnt - hp_fcnt;
+
+	/* Check if request can be accommodated in the middle zone */
+	if (fcnt > req->count) {
+		start = mcam->hprio_end;
+		end = mcam->lprio_start;
+	} else if ((fcnt + (hp_fcnt / 2) + (lp_fcnt / 2)) > req->count) {
+		/* Expand search zone from half of hprio zone to
+		 * half of lprio zone.
+		 */
+		start = mcam->hprio_end / 2;
+		end = mcam->bmap_entries - (mcam->lprio_count / 2);
+		reverse = true;
+	} else {
+		/* Not enough free entries, search all entries in reverse,
+		 * so that low priority ones will get used up.
+		 */
+		reverse = true;
+		start = 0;
+		end = mcam->bmap_entries;
+	}
+
+alloc:
+	if (reverse) {
+		bmap = mcam->bmap_reverse;
+		start = mcam->bmap_entries - start;
+		end = mcam->bmap_entries - end;
+		index = start;
+		start = end;
+		end = index;
+	} else {
+		bmap = mcam->bmap;
+	}
+
+	if (req->contig) {
+		/* Allocate requested number of contiguous entries, if
+		 * unsuccessful find max contiguous entries available.
+		 */
+		index = npc_mcam_find_zero_area(bmap, end, start,
+						req->count, &max_contig);
+		rsp->count = max_contig;
+		if (reverse)
+			rsp->entry = mcam->bmap_entries - index - max_contig;
+		else
+			rsp->entry = index;
+	} else {
+		/* Allocate requested number of non-contiguous entries,
+		 * if unsuccessful allocate as many as possible.
+		 */
+		rsp->count = 0;
+		next_start = start;
+		for (entry = 0; entry < req->count; entry++) {
+			index = find_next_zero_bit(bmap, end, next_start);
+			if (index >= end)
+				break;
+
+			next_start = start + (index - start) + 1;
+
+			/* Save the entry's index */
+			if (reverse)
+				index = mcam->bmap_entries - index - 1;
+			entry_list[entry] = index;
+			rsp->count++;
+		}
+	}
+
+	/* If allocating requested number of entries is unsuccessful,
+	 * expand the search range to full bitmap length and retry.
+	 */
+	if (!req->priority && (rsp->count < req->count) &&
+	    ((end - start) != mcam->bmap_entries)) {
+		reverse = true;
+		start = 0;
+		end = mcam->bmap_entries;
+		goto alloc;
+	}
+
+	/* For priority entry allocation requests, if allocation is
+	 * failed then expand search to max possible range and retry.
+	 */
+	if (req->priority && rsp->count < req->count) {
+		if (req->priority == NPC_MCAM_LOWER_PRIO &&
+		    (start != (req->ref_entry + 1))) {
+			start = req->ref_entry + 1;
+			end = mcam->bmap_entries;
+			reverse = false;
+			goto alloc;
+		} else if ((req->priority == NPC_MCAM_HIGHER_PRIO) &&
+			   ((end - start) != req->ref_entry)) {
+			start = 0;
+			end = req->ref_entry;
+			reverse = true;
+			goto alloc;
+		}
+	}
+
+	/* Copy MCAM entry indices into mbox response entry_list.
+	 * Requester always expects indices in ascending order, so
+	 * reverse the list if reverse bitmap is used for allocation.
+	 */
+	if (!req->contig && rsp->count) {
+		index = 0;
+		for (entry = rsp->count - 1; entry >= 0; entry--) {
+			if (reverse)
+				rsp->entry_list[index++] = entry_list[entry];
+			else
+				rsp->entry_list[entry] = entry_list[entry];
+		}
+	}
+
+	/* Mark the allocated entries as used and set nixlf mapping */
+	for (entry = 0; entry < rsp->count; entry++) {
+		index = req->contig ?
+			(rsp->entry + entry) : rsp->entry_list[entry];
+		npc_mcam_set_bit(mcam, index);
+		mcam->entry2pfvf_map[index] = pcifunc;
+	}
+
+	/* Update available free count in mbox response */
+	rsp->free_count = mcam->bmap_fcnt;
+
+	mutex_unlock(&mcam->lock);
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_alloc_entry_req *req,
+					  struct npc_mcam_alloc_entry_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	rsp->entry = NPC_MCAM_ENTRY_INVALID;
+	rsp->free_count = 0;
+
+	/* Check if ref_entry is within range */
+	if (req->priority && req->ref_entry >= mcam->bmap_entries)
+		return NPC_MCAM_INVALID_REQ;
+
+	/* ref_entry can't be '0' if requested priority is high.
+	 * Can't be last entry if requested priority is low.
+	 */
+	if ((!req->ref_entry && req->priority == NPC_MCAM_HIGHER_PRIO) ||
+	    ((req->ref_entry == (mcam->bmap_entries - 1)) &&
+	     req->priority == NPC_MCAM_LOWER_PRIO))
+		return NPC_MCAM_INVALID_REQ;
+
+	/* Since list of allocated indices needs to be sent to requester,
+	 * max number of non-contiguous entries per mbox msg is limited.
+	 */
+	if (!req->contig && req->count > NPC_MAX_NONCONTIG_ENTRIES)
+		return NPC_MCAM_INVALID_REQ;
+
+	/* Alloc request from PFFUNC with no NIXLF attached should be denied */
+	if (!is_nixlf_attached(rvu, pcifunc))
+		return NPC_MCAM_ALLOC_DENIED;
+
+	return npc_mcam_alloc_entries(mcam, pcifunc, req, rsp);
+}
+
+int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
+					 struct npc_mcam_free_entry_req *req,
+					 struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, rc = 0;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	/* Free request from PFFUNC with no NIXLF attached, ignore */
+	if (!is_nixlf_attached(rvu, pcifunc))
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+
+	if (req->all)
+		goto free_all;
+
+	rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry);
+	if (rc)
+		goto exit;
+
+	mcam->entry2pfvf_map[req->entry] = 0;
+	npc_mcam_clear_bit(mcam, req->entry);
+	npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, false);
+
+	goto exit;
+
+free_all:
+	/* Free up all entries allocated to requesting PFFUNC */
+	npc_mcam_free_all_entries(rvu, mcam, blkaddr, pcifunc);
+exit:
+	mutex_unlock(&mcam->lock);
+	return rc;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 05/20] octeontx2-af: MCAM entry installation support
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (3 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 06/20] octeontx2-af: Support for NPC MCAM counters sunil.kovvuri
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Add support for an RVU PF/VF to enable, disable, configure and
shuffle MCAM entries via mbox commands. This patch adds the
mailbox message formats and the handling of these commands.

As of now, other than validating the MCAM entry index, information
such as the channel number in the MCAM config data sent by the
PF/VF is not validated.

Also, a maximum of 64 MCAM entries can be shuffled with a single
mbox command.
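
A sketch of programming a previously allocated entry with the new
NPC_MCAM_WRITE_ENTRY message; the mbox helpers and the key value are
illustrative assumptions, only the message format is from this patch:

static int example_write_mcam(struct otx2_mbox *mbox, u16 entry_idx)
{
	struct npc_mcam_write_entry_req *req;

	req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox); /* assumed */
	if (!req)
		return -ENOMEM;

	req->entry = entry_idx;		/* must be owned by this PFVF */
	req->intf = NIX_INTF_RX;	/* Rx interface rule */
	req->enable_entry = true;	/* enable right after writing */
	req->set_cntr = false;		/* no counter mapped yet */

	/* Key/mask layout within kw[] depends on the KEY extract profile;
	 * the values below are placeholders only.
	 */
	req->entry_data.kw[0] = 0x0a0b0c0d0e0fULL;
	req->entry_data.kw_mask[0] = ~0ULL;
	req->entry_data.action = 0;

	return otx2_sync_mbox_msg(mbox);	/* assumed helper */
}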

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  42 +++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  12 ++
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 179 ++++++++++++++++++++-
 3 files changed, 225 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 9eefbf7..76b666b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -151,6 +151,11 @@ M(NPA_HWCTX_DISABLE,	0x403, hwctx_disable_req, msg_rsp)		\
 M(NPC_MCAM_ALLOC_ENTRY,	0x6000, npc_mcam_alloc_entry_req,		\
 				npc_mcam_alloc_entry_rsp)		\
 M(NPC_MCAM_FREE_ENTRY,	0x6001, npc_mcam_free_entry_req, msg_rsp)	\
+M(NPC_MCAM_WRITE_ENTRY,	0x6002, npc_mcam_write_entry_req, msg_rsp)	\
+M(NPC_MCAM_ENA_ENTRY,   0x6003, npc_mcam_ena_dis_entry_req, msg_rsp)	\
+M(NPC_MCAM_DIS_ENTRY,   0x6004, npc_mcam_ena_dis_entry_req, msg_rsp)	\
+M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry_req,		\
+				npc_mcam_shift_entry_rsp)		\
 /* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 M(NIX_LF_ALLOC,		0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp)	\
 M(NIX_LF_FREE,		0x8001, msg_req, msg_rsp)			\
@@ -584,4 +589,41 @@ struct npc_mcam_free_entry_req {
 	u8  all;   /* If all entries allocated to this PFVF to be freed */
 };
 
+struct mcam_entry {
+#define NPC_MAX_KWS_IN_KEY	7 /* Number of keywords in max keywidth */
+	u64	kw[NPC_MAX_KWS_IN_KEY];
+	u64	kw_mask[NPC_MAX_KWS_IN_KEY];
+	u64	action;
+	u64	vtag_action;
+};
+
+struct npc_mcam_write_entry_req {
+	struct mbox_msghdr hdr;
+	struct mcam_entry entry_data;
+	u16 entry;	 /* MCAM entry to write this match key */
+	u16 cntr;	 /* Counter for this MCAM entry */
+	u8  intf;	 /* Rx or Tx interface */
+	u8  enable_entry;/* Enable this MCAM entry ? */
+	u8  set_cntr;    /* Set counter for this entry ? */
+};
+
+/* Enable/Disable a given entry */
+struct npc_mcam_ena_dis_entry_req {
+	struct mbox_msghdr hdr;
+	u16 entry;
+};
+
+struct npc_mcam_shift_entry_req {
+	struct mbox_msghdr hdr;
+#define NPC_MCAM_MAX_SHIFTS	64
+	u16 curr_entry[NPC_MCAM_MAX_SHIFTS];
+	u16 new_entry[NPC_MCAM_MAX_SHIFTS];
+	u16 shift_count; /* Number of entries to shift */
+};
+
+struct npc_mcam_shift_entry_rsp {
+	struct mbox_msghdr hdr;
+	u16 failed_entry_idx; /* Index in 'curr_entry', not entry itself */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 01a2fcd..b649c9c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -386,4 +386,16 @@ int rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(struct rvu *rvu,
 int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
 					 struct npc_mcam_free_entry_req *req,
 					 struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_WRITE_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_write_entry_req *req,
+					  struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_ENA_ENTRY(struct rvu *rvu,
+					struct npc_mcam_ena_dis_entry_req *req,
+					struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_DIS_ENTRY(struct rvu *rvu,
+					struct npc_mcam_ena_dis_entry_req *req,
+					struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_shift_entry_req *req,
+					  struct npc_mcam_shift_entry_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 82c1c10..af0d8a1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -29,14 +29,6 @@
 static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
 				      int blkaddr, u16 pcifunc);
 
-struct mcam_entry {
-#define NPC_MAX_KWS_IN_KEY	7 /* Number of keywords in max keywidth */
-	u64	kw[NPC_MAX_KWS_IN_KEY];
-	u64	kw_mask[NPC_MAX_KWS_IN_KEY];
-	u64	action;
-	u64	vtag_action;
-};
-
 void rvu_npc_set_pkind(struct rvu *rvu, int pkind, struct rvu_pfvf *pfvf)
 {
 	int blkaddr;
@@ -259,6 +251,46 @@ static void npc_config_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam,
 		npc_enable_mcam_entry(rvu, mcam, blkaddr, actindex, false);
 }
 
+static void npc_copy_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam,
+				int blkaddr, u16 src, u16 dest)
+{
+	int dbank = npc_get_bank(mcam, dest);
+	int sbank = npc_get_bank(mcam, src);
+	u64 cfg, sreg, dreg;
+	int bank, i;
+
+	src &= (mcam->banksize - 1);
+	dest &= (mcam->banksize - 1);
+
+	/* Copy INTF's, W0's, W1's CAM0 and CAM1 configuration */
+	for (bank = 0; bank < mcam->banks_per_entry; bank++) {
+		sreg = NPC_AF_MCAMEX_BANKX_CAMX_INTF(src, sbank + bank, 0);
+		dreg = NPC_AF_MCAMEX_BANKX_CAMX_INTF(dest, dbank + bank, 0);
+		for (i = 0; i < 6; i++) {
+			cfg = rvu_read64(rvu, blkaddr, sreg + (i * 8));
+			rvu_write64(rvu, blkaddr, dreg + (i * 8), cfg);
+		}
+	}
+
+	/* Copy action */
+	cfg = rvu_read64(rvu, blkaddr,
+			 NPC_AF_MCAMEX_BANKX_ACTION(src, sbank));
+	rvu_write64(rvu, blkaddr,
+		    NPC_AF_MCAMEX_BANKX_ACTION(dest, dbank), cfg);
+
+	/* Copy TAG action */
+	cfg = rvu_read64(rvu, blkaddr,
+			 NPC_AF_MCAMEX_BANKX_TAG_ACT(src, sbank));
+	rvu_write64(rvu, blkaddr,
+		    NPC_AF_MCAMEX_BANKX_TAG_ACT(dest, dbank), cfg);
+
+	/* Enable or disable */
+	cfg = rvu_read64(rvu, blkaddr,
+			 NPC_AF_MCAMEX_BANKX_CFG(src, sbank));
+	rvu_write64(rvu, blkaddr,
+		    NPC_AF_MCAMEX_BANKX_CFG(dest, dbank), cfg);
+}
+
 static u64 npc_get_mcam_action(struct rvu *rvu, struct npc_mcam *mcam,
 			       int blkaddr, int index)
 {
@@ -1322,3 +1354,134 @@ int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
 	mutex_unlock(&mcam->lock);
 	return rc;
 }
+
+int rvu_mbox_handler_NPC_MCAM_WRITE_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_write_entry_req *req,
+					  struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, rc;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry);
+	if (rc)
+		goto exit;
+
+	if (req->intf != NIX_INTF_RX && req->intf != NIX_INTF_TX) {
+		rc = NPC_MCAM_INVALID_REQ;
+		goto exit;
+	}
+
+	npc_config_mcam_entry(rvu, mcam, blkaddr, req->entry, req->intf,
+			      &req->entry_data, req->enable_entry);
+
+	rc = 0;
+exit:
+	mutex_unlock(&mcam->lock);
+	return rc;
+}
+
+int rvu_mbox_handler_NPC_MCAM_ENA_ENTRY(struct rvu *rvu,
+					struct npc_mcam_ena_dis_entry_req *req,
+					struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, rc;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry);
+	mutex_unlock(&mcam->lock);
+	if (rc)
+		return rc;
+
+	npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, true);
+
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_DIS_ENTRY(struct rvu *rvu,
+					struct npc_mcam_ena_dis_entry_req *req,
+					struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, rc;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	rc = npc_mcam_verify_entry(mcam, pcifunc, req->entry);
+	mutex_unlock(&mcam->lock);
+	if (rc)
+		return rc;
+
+	npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, false);
+
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
+					  struct npc_mcam_shift_entry_req *req,
+					  struct npc_mcam_shift_entry_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	u16 old_entry, new_entry;
+	int blkaddr, rc;
+	u16 index;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	if (req->shift_count > NPC_MCAM_MAX_SHIFTS)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	for (index = 0; index < req->shift_count; index++) {
+		old_entry = req->curr_entry[index];
+		new_entry = req->new_entry[index];
+
+		/* Check if both old and new entries are valid and
+		 * belong to this PFFUNC or not.
+		 */
+		rc = npc_mcam_verify_entry(mcam, pcifunc, old_entry);
+		if (rc)
+			break;
+
+		rc = npc_mcam_verify_entry(mcam, pcifunc, new_entry);
+		if (rc)
+			break;
+
+		/* Disable the new_entry */
+		npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, false);
+
+		/* Copy rule from old entry to new entry */
+		npc_copy_mcam_entry(rvu, mcam, blkaddr, old_entry, new_entry);
+
+		/* Enable new_entry and disable old_entry */
+		npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, true);
+		npc_enable_mcam_entry(rvu, mcam, blkaddr, old_entry, false);
+	}
+
+	/* If shift has failed then report the failed index */
+	if (index != req->shift_count) {
+		rc = NPC_MCAM_PERM_DENIED;
+		rsp->failed_entry_idx = index;
+	}
+
+	mutex_unlock(&mcam->lock);
+	return rc;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 06/20] octeontx2-af: Support for NPC MCAM counters
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (4 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 05/20] octeontx2-af: MCAM entry installation support sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 07/20] octeontx2-af: Map or unmap NPC MCAM entry and counter sunil.kovvuri
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

NPC HW has counters which can be mapped to MCAM
entries to gather entry match statistics. This
patch adds support to allocate, free, clear and retrieve
stats of NPC MCAM counters. New mailbox messages have
been added for this. Similar to MCAM entries, both
contiguous and non-contiguous counter allocation is
supported.
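
For illustration only (not part of this patch), a requesting PF/VF could
use the new messages roughly as below. Only the request/response structs
come from mbox.h; mbox_alloc_msg() and mbox_send_wait() are hypothetical
placeholders for whatever mailbox transport the requester already has:

	struct npc_mcam_alloc_counter_req *req;
	struct npc_mcam_alloc_counter_rsp rsp;
	struct npc_mcam_oper_counter_req *stats_req;
	struct npc_mcam_oper_counter_rsp stats_rsp;

	/* Ask AF for a single counter */
	req = mbox_alloc_msg(mbox, NPC_MCAM_ALLOC_COUNTER, sizeof(*req));
	req->contig = true;
	req->count = 1;
	mbox_send_wait(mbox, &req->hdr, &rsp.hdr);

	/* Later, fetch the 48-bit hit count of that counter */
	stats_req = mbox_alloc_msg(mbox, NPC_MCAM_COUNTER_STATS,
				   sizeof(*stats_req));
	stats_req->cntr = rsp.cntr;
	mbox_send_wait(mbox, &stats_req->hdr, &stats_rsp.hdr);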

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  32 +++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  10 ++
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 144 +++++++++++++++++++++
 3 files changed, 186 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 76b666b..9fa491e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -156,6 +156,12 @@ M(NPC_MCAM_ENA_ENTRY,   0x6003, npc_mcam_ena_dis_entry_req, msg_rsp)	\
 M(NPC_MCAM_DIS_ENTRY,   0x6004, npc_mcam_ena_dis_entry_req, msg_rsp)	\
 M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry_req,		\
 				npc_mcam_shift_entry_rsp)		\
+M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter_req,		\
+				  npc_mcam_alloc_counter_rsp)		\
+M(NPC_MCAM_FREE_COUNTER,  0x6007, npc_mcam_oper_counter_req, msg_rsp)	\
+M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_oper_counter_req, msg_rsp)	\
+M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_oper_counter_req,		\
+				  npc_mcam_oper_counter_rsp)		\
 /* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 M(NIX_LF_ALLOC,		0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp)	\
 M(NIX_LF_FREE,		0x8001, msg_req, msg_rsp)			\
@@ -626,4 +632,30 @@ struct npc_mcam_shift_entry_rsp {
 	u16 failed_entry_idx; /* Index in 'curr_entry', not entry itself */
 };
 
+struct npc_mcam_alloc_counter_req {
+	struct mbox_msghdr hdr;
+	u8  contig;	/* Contiguous counters ? */
+#define NPC_MAX_NONCONTIG_COUNTERS       64
+	u16 count;	/* Number of counters requested */
+};
+
+struct npc_mcam_alloc_counter_rsp {
+	struct mbox_msghdr hdr;
+	u16 cntr;   /* Counter allocated or start index if contiguous.
+		     * Invalid in case of non-contiguous.
+		     */
+	u16 count;  /* Number of counters allocated */
+	u16 cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
+};
+
+struct npc_mcam_oper_counter_req {
+	struct mbox_msghdr hdr;
+	u16 cntr;   /* Free a counter or clear/fetch it's stats */
+};
+
+struct npc_mcam_oper_counter_rsp {
+	struct mbox_msghdr hdr;
+	u64 stat;  /* valid only while fetching counter's stats */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index b649c9c..4e10fe9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -398,4 +398,14 @@ int rvu_mbox_handler_NPC_MCAM_DIS_ENTRY(struct rvu *rvu,
 int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
 					  struct npc_mcam_shift_entry_req *req,
 					  struct npc_mcam_shift_entry_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_ALLOC_COUNTER(struct rvu *rvu,
+				struct npc_mcam_alloc_counter_req *req,
+				struct npc_mcam_alloc_counter_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_FREE_COUNTER(struct rvu *rvu,
+		   struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_CLEAR_COUNTER(struct rvu *rvu,
+		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
+			struct npc_mcam_oper_counter_req *req,
+			struct npc_mcam_oper_counter_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index af0d8a1..abc62f2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -933,6 +933,21 @@ static int npc_mcam_verify_entry(struct npc_mcam *mcam,
 	return 0;
 }
 
+static int npc_mcam_verify_counter(struct npc_mcam *mcam,
+				   u16 pcifunc, int cntr)
+{
+	/* Verify if counter is valid and if it is indeed
+	 * allocated to the requesting PFFUNC.
+	 */
+	if (cntr >= mcam->counters.max)
+		return NPC_MCAM_INVALID_REQ;
+
+	if (pcifunc != mcam->cntr2pfvf_map[cntr])
+		return NPC_MCAM_PERM_DENIED;
+
+	return 0;
+}
+
 /* Sets MCAM entry in bitmap as used. Update
  * reverse bitmap too. Should be called with
  * 'mcam->lock' held.
@@ -1485,3 +1500,132 @@ int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
 	mutex_unlock(&mcam->lock);
 	return rc;
 }
+
+int rvu_mbox_handler_NPC_MCAM_ALLOC_COUNTER(struct rvu *rvu,
+			struct npc_mcam_alloc_counter_req *req,
+			struct npc_mcam_alloc_counter_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 pcifunc = req->hdr.pcifunc;
+	u16 max_contig, cntr;
+	int blkaddr, index;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	/* If the request is from a PFFUNC with no NIXLF attached, ignore */
+	if (!is_nixlf_attached(rvu, pcifunc))
+		return NPC_MCAM_INVALID_REQ;
+
+	/* Since list of allocated counter IDs needs to be sent to requester,
+	 * max number of non-contiguous counters per mbox msg is limited.
+	 */
+	if (!req->contig && req->count > NPC_MAX_NONCONTIG_COUNTERS)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+
+	/* Check if unused counters are available or not */
+	if (!rvu_rsrc_free_count(&mcam->counters)) {
+		mutex_unlock(&mcam->lock);
+		return NPC_MCAM_ALLOC_FAILED;
+	}
+
+	rsp->count = 0;
+
+	if (req->contig) {
+		/* Allocate requested number of contiguous counters, if
+		 * unsuccessful find max contiguous entries available.
+		 */
+		index = npc_mcam_find_zero_area(mcam->counters.bmap,
+						mcam->counters.max, 0,
+						req->count, &max_contig);
+		rsp->count = max_contig;
+		rsp->cntr = index;
+		for (cntr = index; cntr < (index + max_contig); cntr++) {
+			__set_bit(cntr, mcam->counters.bmap);
+			mcam->cntr2pfvf_map[cntr] = pcifunc;
+		}
+	} else {
+		/* Allocate requested number of non-contiguous counters,
+		 * if unsuccessful allocate as many as possible.
+		 */
+		for (cntr = 0; cntr < req->count; cntr++) {
+			index = rvu_alloc_rsrc(&mcam->counters);
+			if (index < 0)
+				break;
+			rsp->cntr_list[cntr] = index;
+			rsp->count++;
+			mcam->cntr2pfvf_map[index] = pcifunc;
+		}
+	}
+
+	mutex_unlock(&mcam->lock);
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_FREE_COUNTER(struct rvu *rvu,
+		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	int err;
+
+	mutex_lock(&mcam->lock);
+	err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr);
+	if (err) {
+		mutex_unlock(&mcam->lock);
+		return err;
+	}
+
+	/* Mark counter as free/unused */
+	mcam->cntr2pfvf_map[req->cntr] = NPC_MCAM_INVALID_MAP;
+	rvu_free_rsrc(&mcam->counters, req->cntr);
+	mutex_unlock(&mcam->lock);
+
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_CLEAR_COUNTER(struct rvu *rvu,
+		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	int blkaddr, err;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr);
+	mutex_unlock(&mcam->lock);
+	if (err)
+		return err;
+
+	rvu_write64(rvu, blkaddr, NPC_AF_MATCH_STATX(req->cntr), 0x00);
+
+	return 0;
+}
+
+int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
+			struct npc_mcam_oper_counter_req *req,
+			struct npc_mcam_oper_counter_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	int blkaddr, err;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr);
+	mutex_unlock(&mcam->lock);
+	if (err)
+		return err;
+
+	rsp->stat = rvu_read64(rvu, blkaddr, NPC_AF_MATCH_STATX(req->cntr));
+	rsp->stat &= BIT_ULL(48) - 1;
+
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 07/20] octeontx2-af: Map or unmap NPC MCAM entry and counter
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (5 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 06/20] octeontx2-af: Support for NPC MCAM counters sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time sunil.kovvuri
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Allocate memory to save the MCAM 'entry to counter' mapping and, since
multiple entries can map to the same counter, track each counter's
reference count as well.

The 'entry to counter' mapping is set up when an entry is installed and
the mbox msg sender has also requested a counter. The mapping is removed
when an entry or counter is freed or when an explicit mbox msg is
received to unmap them.
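
As a rough standalone model (not driver code) of the bookkeeping this
adds, namely an entry-to-counter map plus a per-counter reference count,
consider the following, which compiles and runs on its own:

	#include <stdio.h>

	#define NENTRIES	8
	#define NCOUNTERS	4
	#define INVALID_MAP	0xFFFF

	static unsigned short entry2cntr_map[NENTRIES];
	static unsigned short cntr_refcnt[NCOUNTERS];

	static void map_entry_and_cntr(int entry, int cntr)
	{
		entry2cntr_map[entry] = cntr;
		cntr_refcnt[cntr]++;
	}

	static void unmap_entry_and_cntr(int entry)
	{
		int cntr = entry2cntr_map[entry];

		if (cntr == INVALID_MAP)
			return;
		entry2cntr_map[entry] = INVALID_MAP;
		cntr_refcnt[cntr]--;
	}

	int main(void)
	{
		int i;

		for (i = 0; i < NENTRIES; i++)
			entry2cntr_map[i] = INVALID_MAP;

		map_entry_and_cntr(0, 2);	/* entries 0 and 5 share counter 2 */
		map_entry_and_cntr(5, 2);
		unmap_entry_and_cntr(0);	/* counter 2 still referenced once */
		printf("counter 2 refcnt = %u\n", (unsigned int)cntr_refcnt[2]);
		return 0;
	}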

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |   8 +
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   4 +
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 183 ++++++++++++++++++++-
 3 files changed, 191 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 9fa491e..a851a0b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -159,6 +159,7 @@ M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry_req,		\
 M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter_req,		\
 				  npc_mcam_alloc_counter_rsp)		\
 M(NPC_MCAM_FREE_COUNTER,  0x6007, npc_mcam_oper_counter_req, msg_rsp)	\
+M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter_req, msg_rsp)	\
 M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_oper_counter_req, msg_rsp)	\
 M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_oper_counter_req,		\
 				  npc_mcam_oper_counter_rsp)		\
@@ -658,4 +659,11 @@ struct npc_mcam_oper_counter_rsp {
 	u64 stat;  /* valid only while fetching counter's stats */
 };
 
+struct npc_mcam_unmap_counter_req {
+	struct mbox_msghdr hdr;
+	u16 cntr;
+	u16 entry; /* Entry and counter to be unmapped */
+	u8  all;   /* Unmap all entries using this counter ? */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 4e10fe9..203f441 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -81,7 +81,9 @@ struct npc_mcam {
 	u16	bmap_entries;	/* Number of unreserved MCAM entries */
 	u16	bmap_fcnt;	/* MCAM entries free count */
 	u16	*entry2pfvf_map;
+	u16	*entry2cntr_map;
 	u16	*cntr2pfvf_map;
+	u16	*cntr_refcnt;
 	u8	keysize;	/* MCAM keysize 112/224/448 bits */
 	u8	banks;		/* Number of MCAM banks */
 	u8	banks_per_entry;/* Number of keywords in key */
@@ -405,6 +407,8 @@ int rvu_mbox_handler_NPC_MCAM_FREE_COUNTER(struct rvu *rvu,
 		   struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp);
 int rvu_mbox_handler_NPC_MCAM_CLEAR_COUNTER(struct rvu *rvu,
 		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_UNMAP_COUNTER(struct rvu *rvu,
+		struct npc_mcam_unmap_counter_req *req, struct msg_rsp *rsp);
 int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
 			struct npc_mcam_oper_counter_req *req,
 			struct npc_mcam_oper_counter_rsp *rsp);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index abc62f2..79df1d4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -28,6 +28,8 @@
 
 static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
 				      int blkaddr, u16 pcifunc);
+static void npc_mcam_free_all_counters(struct rvu *rvu, struct npc_mcam *mcam,
+				       u16 pcifunc);
 
 void rvu_npc_set_pkind(struct rvu *rvu, int pkind, struct rvu_pfvf *pfvf)
 {
@@ -506,6 +508,9 @@ void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
 	/* Disable and free all MCAM entries mapped to this 'pcifunc' */
 	npc_mcam_free_all_entries(rvu, mcam, blkaddr, pcifunc);
 
+	/* Free all MCAM counters mapped to this 'pcifunc' */
+	npc_mcam_free_all_counters(rvu, mcam, pcifunc);
+
 	mutex_unlock(&mcam->lock);
 
 	/* Disable ucast MCAM match entry of this PF/VF */
@@ -819,6 +824,19 @@ static int npc_mcam_rsrcs_init(struct rvu *rvu, int blkaddr)
 	if (!mcam->cntr2pfvf_map)
 		goto free_mem;
 
+	/* Alloc memory for MCAM entry to counter mapping and for tracking
+	 * counter's reference count.
+	 */
+	mcam->entry2cntr_map = devm_kcalloc(rvu->dev, mcam->bmap_entries,
+					    sizeof(u16), GFP_KERNEL);
+	if (!mcam->entry2cntr_map)
+		goto free_mem;
+
+	mcam->cntr_refcnt = devm_kcalloc(rvu->dev, mcam->counters.max,
+					 sizeof(u16), GFP_KERNEL);
+	if (!mcam->cntr_refcnt)
+		goto free_mem;
+
 	mutex_init(&mcam->lock);
 
 	return 0;
@@ -948,6 +966,36 @@ static int npc_mcam_verify_counter(struct npc_mcam *mcam,
 	return 0;
 }
 
+static void npc_map_mcam_entry_and_cntr(struct rvu *rvu, struct npc_mcam *mcam,
+					int blkaddr, u16 entry, u16 cntr)
+{
+	u16 index = entry & (mcam->banksize - 1);
+	u16 bank = npc_get_bank(mcam, entry);
+
+	/* Set mapping and increment counter's refcnt */
+	mcam->entry2cntr_map[entry] = cntr;
+	mcam->cntr_refcnt[cntr]++;
+	/* Enable stats */
+	rvu_write64(rvu, blkaddr,
+		    NPC_AF_MCAMEX_BANKX_STAT_ACT(index, bank),
+		    BIT_ULL(9) | cntr);
+}
+
+static void npc_unmap_mcam_entry_and_cntr(struct rvu *rvu,
+					  struct npc_mcam *mcam,
+					  int blkaddr, u16 entry, u16 cntr)
+{
+	u16 index = entry & (mcam->banksize - 1);
+	u16 bank = npc_get_bank(mcam, entry);
+
+	/* Remove mapping and reduce counter's refcnt */
+	mcam->entry2cntr_map[entry] = NPC_MCAM_INVALID_MAP;
+	mcam->cntr_refcnt[cntr]--;
+	/* Disable stats */
+	rvu_write64(rvu, blkaddr,
+		    NPC_AF_MCAMEX_BANKX_STAT_ACT(index, bank), 0x00);
+}
+
 /* Sets MCAM entry in bitmap as used. Update
  * reverse bitmap too. Should be called with
  * 'mcam->lock' held.
@@ -983,7 +1031,7 @@ static void npc_mcam_clear_bit(struct npc_mcam *mcam, u16 index)
 static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
 				      int blkaddr, u16 pcifunc)
 {
-	u16 index;
+	u16 index, cntr;
 
 	/* Scan all MCAM entries and free the ones mapped to 'pcifunc' */
 	for (index = 0; index < mcam->bmap_entries; index++) {
@@ -993,6 +1041,33 @@ static void npc_mcam_free_all_entries(struct rvu *rvu, struct npc_mcam *mcam,
 			npc_mcam_clear_bit(mcam, index);
 			/* Disable the entry */
 			npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
+
+			/* Update entry2counter mapping */
+			cntr = mcam->entry2cntr_map[index];
+			if (cntr != NPC_MCAM_INVALID_MAP)
+				npc_unmap_mcam_entry_and_cntr(rvu, mcam,
+							      blkaddr, index,
+							      cntr);
+		}
+	}
+}
+
+static void npc_mcam_free_all_counters(struct rvu *rvu, struct npc_mcam *mcam,
+				       u16 pcifunc)
+{
+	u16 cntr;
+
+	/* Scan all MCAM counters and free the ones mapped to 'pcifunc' */
+	for (cntr = 0; cntr < mcam->counters.max; cntr++) {
+		if (mcam->cntr2pfvf_map[cntr] == pcifunc) {
+			mcam->cntr2pfvf_map[cntr] = NPC_MCAM_INVALID_MAP;
+			mcam->cntr_refcnt[cntr] = 0;
+			rvu_free_rsrc(&mcam->counters, cntr);
+			/* This API is expected to be called after freeing
+			 * MCAM entries, which in turn will remove
+			 * 'entry to counter' mapping.
+			 * No need to do it again.
+			 */
 		}
 	}
 }
@@ -1282,6 +1357,7 @@ static int npc_mcam_alloc_entries(struct npc_mcam *mcam, u16 pcifunc,
 			(rsp->entry + entry) : rsp->entry_list[entry];
 		npc_mcam_set_bit(mcam, index);
 		mcam->entry2pfvf_map[index] = pcifunc;
+		mcam->entry2cntr_map[index] = NPC_MCAM_INVALID_MAP;
 	}
 
 	/* Update available free count in mbox response */
@@ -1338,6 +1414,7 @@ int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 	u16 pcifunc = req->hdr.pcifunc;
 	int blkaddr, rc = 0;
+	u16 cntr;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0)
@@ -1360,6 +1437,12 @@ int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
 	npc_mcam_clear_bit(mcam, req->entry);
 	npc_enable_mcam_entry(rvu, mcam, blkaddr, req->entry, false);
 
+	/* Update entry2counter mapping */
+	cntr = mcam->entry2cntr_map[req->entry];
+	if (cntr != NPC_MCAM_INVALID_MAP)
+		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+					      req->entry, cntr);
+
 	goto exit;
 
 free_all:
@@ -1387,6 +1470,12 @@ int rvu_mbox_handler_NPC_MCAM_WRITE_ENTRY(struct rvu *rvu,
 	if (rc)
 		goto exit;
 
+	if (req->set_cntr &&
+	    npc_mcam_verify_counter(mcam, pcifunc, req->cntr)) {
+		rc = NPC_MCAM_INVALID_REQ;
+		goto exit;
+	}
+
 	if (req->intf != NIX_INTF_RX && req->intf != NIX_INTF_TX) {
 		rc = NPC_MCAM_INVALID_REQ;
 		goto exit;
@@ -1395,6 +1484,10 @@ int rvu_mbox_handler_NPC_MCAM_WRITE_ENTRY(struct rvu *rvu,
 	npc_config_mcam_entry(rvu, mcam, blkaddr, req->entry, req->intf,
 			      &req->entry_data, req->enable_entry);
 
+	if (req->set_cntr)
+		npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+					    req->entry, req->cntr);
+
 	rc = 0;
 exit:
 	mutex_unlock(&mcam->lock);
@@ -1454,8 +1547,8 @@ int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 	u16 pcifunc = req->hdr.pcifunc;
 	u16 old_entry, new_entry;
+	u16 index, cntr;
 	int blkaddr, rc;
-	u16 index;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0)
@@ -1480,12 +1573,27 @@ int rvu_mbox_handler_NPC_MCAM_SHIFT_ENTRY(struct rvu *rvu,
 		if (rc)
 			break;
 
+		/* new_entry should not have a counter mapped */
+		if (mcam->entry2cntr_map[new_entry] != NPC_MCAM_INVALID_MAP) {
+			rc = NPC_MCAM_PERM_DENIED;
+			break;
+		}
+
 		/* Disable the new_entry */
 		npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, false);
 
 		/* Copy rule from old entry to new entry */
 		npc_copy_mcam_entry(rvu, mcam, blkaddr, old_entry, new_entry);
 
+		/* Copy counter mapping, if any */
+		cntr = mcam->entry2cntr_map[old_entry];
+		if (cntr != NPC_MCAM_INVALID_MAP) {
+			npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+						      old_entry, cntr);
+			npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+						    new_entry, cntr);
+		}
+
 		/* Enable new_entry and disable old_entry */
 		npc_enable_mcam_entry(rvu, mcam, blkaddr, new_entry, true);
 		npc_enable_mcam_entry(rvu, mcam, blkaddr, old_entry, false);
@@ -1569,7 +1677,12 @@ int rvu_mbox_handler_NPC_MCAM_FREE_COUNTER(struct rvu *rvu,
 		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp)
 {
 	struct npc_mcam *mcam = &rvu->hw->mcam;
-	int err;
+	u16 index, entry = 0;
+	int blkaddr, err;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
 
 	mutex_lock(&mcam->lock);
 	err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr);
@@ -1581,11 +1694,73 @@ int rvu_mbox_handler_NPC_MCAM_FREE_COUNTER(struct rvu *rvu,
 	/* Mark counter as free/unused */
 	mcam->cntr2pfvf_map[req->cntr] = NPC_MCAM_INVALID_MAP;
 	rvu_free_rsrc(&mcam->counters, req->cntr);
-	mutex_unlock(&mcam->lock);
 
+	/* Disable stats of all MCAM entries that are using this counter */
+	while (entry < mcam->bmap_entries) {
+		if (!mcam->cntr_refcnt[req->cntr])
+			break;
+
+		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+		if (index >= mcam->bmap_entries)
+			break;
+		entry = index + 1;
+		if (mcam->entry2cntr_map[index] != req->cntr)
+			continue;
+
+		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+					      index, req->cntr);
+	}
+
+	mutex_unlock(&mcam->lock);
 	return 0;
 }
 
+int rvu_mbox_handler_NPC_MCAM_UNMAP_COUNTER(struct rvu *rvu,
+		struct npc_mcam_unmap_counter_req *req, struct msg_rsp *rsp)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 index, entry = 0;
+	int blkaddr, rc;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	mutex_lock(&mcam->lock);
+	rc = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr);
+	if (rc)
+		goto exit;
+
+	/* Unmap the MCAM entry and counter */
+	if (!req->all) {
+		rc = npc_mcam_verify_entry(mcam, req->hdr.pcifunc, req->entry);
+		if (rc)
+			goto exit;
+		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+					      req->entry, req->cntr);
+		goto exit;
+	}
+
+	/* Disable stats of all MCAM entries that are using this counter */
+	while (entry < mcam->bmap_entries) {
+		if (!mcam->cntr_refcnt[req->cntr])
+			break;
+
+		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+		if (index >= mcam->bmap_entries)
+			break;
+		entry = index + 1;
+		if (mcam->entry2cntr_map[index] != req->cntr)
+			continue;
+
+		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+					      index, req->cntr);
+	}
+exit:
+	mutex_unlock(&mcam->lock);
+	return rc;
+}
+
 int rvu_mbox_handler_NPC_MCAM_CLEAR_COUNTER(struct rvu *rvu,
 		struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp)
 {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (6 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 07/20] octeontx2-af: Map or unmap NPC MCAM entry and counter sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 20:43   ` Arnd Bergmann
  2018-11-08 18:35 ` [PATCH 09/20] octeontx2-af: Add MKEX default profile sunil.kovvuri
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

A new mailbox message is added to support allocating an MCAM entry
along with a counter and configuring both in one go. This reduces
the amount of mailbox communication involved in installing a new
MCAM rule.
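
A minimal sketch of how a requester might fill the new message follows;
mbox_alloc_msg() and mbox_send_wait() are hypothetical placeholders, and
only the structs and NIX_INTF_RX come from the driver headers:

	struct npc_mcam_alloc_and_write_entry_req *req;
	struct npc_mcam_alloc_and_write_entry_rsp rsp;
	u64 chan = 0;	/* LMAC channel to match, for example */

	req = mbox_alloc_msg(mbox, NPC_MCAM_ALLOC_AND_WRITE_ENTRY,
			     sizeof(*req));
	req->intf = NIX_INTF_RX;
	req->enable_entry = 1;
	req->alloc_cntr = 1;	/* also allocate and map a counter */
	req->ref_entry = 0;
	req->priority = 0;	/* no priority constraint assumed here */

	/* Match key/mask go into entry_data; in this series' key layout
	 * the LMAC channel is matched via the low 12 bits of KW0.
	 */
	req->entry_data.kw[0] = chan;
	req->entry_data.kw_mask[0] = 0xFFFULL;

	mbox_send_wait(mbox, &req->hdr, &rsp.hdr);
	/* rsp.entry and rsp.cntr now hold the allocated entry and counter */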

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   | 18 ++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  3 +
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 72 ++++++++++++++++++++++
 3 files changed, 93 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index a851a0b..2be2f71 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -163,6 +163,8 @@ M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter_req, msg_rsp)	\
 M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_oper_counter_req, msg_rsp)	\
 M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_oper_counter_req,		\
 				  npc_mcam_oper_counter_rsp)		\
+M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry_req,\
+					  npc_mcam_alloc_and_write_entry_rsp)\
 /* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 M(NIX_LF_ALLOC,		0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp)	\
 M(NIX_LF_FREE,		0x8001, msg_req, msg_rsp)			\
@@ -666,4 +668,20 @@ struct npc_mcam_unmap_counter_req {
 	u8  all;   /* Unmap all entries using this counter ? */
 };
 
+struct npc_mcam_alloc_and_write_entry_req {
+	struct mbox_msghdr hdr;
+	struct mcam_entry entry_data;
+	u16 ref_entry;
+	u8  priority;    /* Lower or higher w.r.t ref_entry */
+	u8  intf;	 /* Rx or Tx interface */
+	u8  enable_entry;/* Enable this MCAM entry ? */
+	u8  alloc_cntr;  /* Allocate counter and map ? */
+};
+
+struct npc_mcam_alloc_and_write_entry_rsp {
+	struct mbox_msghdr hdr;
+	u16 entry;
+	u16 cntr;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 203f441..75b6f6b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -412,4 +412,7 @@ int rvu_mbox_handler_NPC_MCAM_UNMAP_COUNTER(struct rvu *rvu,
 int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
 			struct npc_mcam_oper_counter_req *req,
 			struct npc_mcam_oper_counter_rsp *rsp);
+int rvu_mbox_handler_NPC_MCAM_ALLOC_AND_WRITE_ENTRY(struct rvu *rvu,
+			  struct npc_mcam_alloc_and_write_entry_req *req,
+			  struct npc_mcam_alloc_and_write_entry_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 79df1d4..289de15 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -1804,3 +1804,75 @@ int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
 
 	return 0;
 }
+
+int rvu_mbox_handler_NPC_MCAM_ALLOC_AND_WRITE_ENTRY(struct rvu *rvu,
+			  struct npc_mcam_alloc_and_write_entry_req *req,
+			  struct npc_mcam_alloc_and_write_entry_rsp *rsp)
+{
+	struct npc_mcam_alloc_counter_req cntr_req;
+	struct npc_mcam_alloc_counter_rsp cntr_rsp;
+	struct npc_mcam_alloc_entry_req entry_req;
+	struct npc_mcam_alloc_entry_rsp entry_rsp;
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	u16 entry = NPC_MCAM_ENTRY_INVALID;
+	u16 cntr = NPC_MCAM_ENTRY_INVALID;
+	int blkaddr, rc;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NPC_MCAM_INVALID_REQ;
+
+	if (req->intf != NIX_INTF_RX && req->intf != NIX_INTF_TX)
+		return NPC_MCAM_INVALID_REQ;
+
+	/* Try to allocate a MCAM entry */
+	entry_req.hdr.pcifunc = req->hdr.pcifunc;
+	entry_req.contig = true;
+	entry_req.priority = req->priority;
+	entry_req.ref_entry = req->ref_entry;
+	entry_req.count = 1;
+
+	rc = rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(rvu,
+						   &entry_req, &entry_rsp);
+	if (rc)
+		return rc;
+
+	if (!entry_rsp.count)
+		return NPC_MCAM_ALLOC_FAILED;
+
+	entry = entry_rsp.entry;
+
+	if (!req->alloc_cntr)
+		goto write_entry;
+
+	/* Now allocate counter */
+	cntr_req.hdr.pcifunc = req->hdr.pcifunc;
+	cntr_req.contig = true;
+	cntr_req.count = 1;
+
+	rc = rvu_mbox_handler_NPC_MCAM_ALLOC_COUNTER(rvu, &cntr_req, &cntr_rsp);
+	if (rc) {
+		/* Free allocated MCAM entry */
+		mutex_lock(&mcam->lock);
+		mcam->entry2pfvf_map[entry] = 0;
+		npc_mcam_clear_bit(mcam, entry);
+		mutex_unlock(&mcam->lock);
+		return rc;
+	}
+
+	cntr = cntr_rsp.cntr;
+
+write_entry:
+	mutex_lock(&mcam->lock);
+	npc_config_mcam_entry(rvu, mcam, blkaddr, entry, req->intf,
+			      &req->entry_data, req->enable_entry);
+
+	if (req->alloc_cntr)
+		npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr, entry, cntr);
+	mutex_unlock(&mcam->lock);
+
+	rsp->entry = entry;
+	rsp->cntr = cntr;
+
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 09/20] octeontx2-af: Add MKEX default profile
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (7 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 10/20] octeontx2-af: Support to enable/disable default MCAM entries sunil.kovvuri
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem
  Cc: arnd, linux-soc, Santosh Shukla, Yuri Tolstov, Sunil Goutham

From: Santosh Shukla <sshukla@marvell.com>

Added a basic default MKEX profile. This profile tells the
hardware what data to extract from a packet and where to
place it (bit offset) in the final KEY generated for the
parsed packet. Based on the bit placement of the packet
data, MCAM entries have to be programmed for matching.

Also added a msg so a PF/VF can retrieve this MKEX profile
and, in turn, process it to determine how an MCAM entry
has to be populated.
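
As a standalone illustration (not driver code) of how an extraction
entry is encoded, the snippet below mirrors the KEX_LD_CFG() packing
added by this patch: bytesm1 at bit 16, header offset at bit 8, enable
at bit 7, flags-enable at bit 6 and key offset in the low 6 bits. The
DMAC key offset of 8 is an assumption made for the example:

	#include <stdio.h>
	#include <stdint.h>

	/* Same packing as the KEX_LD_CFG() macro added in rvu_npc.c */
	#define KEX_LD_CFG(bytesm1, hdr_ofs, ena, flags_ena, key_ofs)	\
		(((bytesm1) << 16) | ((hdr_ofs) << 8) | ((ena) << 7) |	\
		 ((flags_ena) << 6) | ((key_ofs) & 0x3F))

	int main(void)
	{
		/* DMAC: 6 bytes (bytesm1 = 5) from header offset 0, placed
		 * at an assumed key byte offset of 8
		 */
		uint64_t dmac_cfg = KEX_LD_CFG(0x05, 0x0, 0x1, 0x0, 0x8);

		printf("LA/LT_LA_ETHER LD0 cfg = 0x%llx\n",
		       (unsigned long long)dmac_cfg);
		return 0;
	}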

Signed-off-by: Santosh Shukla <sshukla@marvell.com>
Signed-off-by: Yuri Tolstov <ytolstov@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  18 +++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   2 +
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 155 +++++++++++++++++----
 3 files changed, 150 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 2be2f71..9941f0a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -165,6 +165,7 @@ M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_oper_counter_req,		\
 				  npc_mcam_oper_counter_rsp)		\
 M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry_req,\
 					  npc_mcam_alloc_and_write_entry_rsp)\
+M(NPC_GET_KEX_CFG,	  0x600c, msg_req, npc_get_kex_cfg_rsp)		\
 /* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 M(NIX_LF_ALLOC,		0x8000, nix_lf_alloc_req, nix_lf_alloc_rsp)	\
 M(NIX_LF_FREE,		0x8001, msg_req, msg_rsp)			\
@@ -684,4 +685,21 @@ struct npc_mcam_alloc_and_write_entry_rsp {
 	u16 cntr;
 };
 
+struct npc_get_kex_cfg_rsp {
+	struct mbox_msghdr hdr;
+	u64 rx_keyx_cfg;   /* NPC_AF_INTF(0)_KEX_CFG */
+	u64 tx_keyx_cfg;   /* NPC_AF_INTF(1)_KEX_CFG */
+#define NPC_MAX_INTF	2
+#define NPC_MAX_LID	8
+#define NPC_MAX_LT	16
+#define NPC_MAX_LD	2
+#define NPC_MAX_LFL	16
+	/* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+	u64 kex_ld_flags[NPC_MAX_LD];
+	/* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
+	u64 intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
+	/* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
+	u64 intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 75b6f6b..074d792 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -415,4 +415,6 @@ int rvu_mbox_handler_NPC_MCAM_COUNTER_STATS(struct rvu *rvu,
 int rvu_mbox_handler_NPC_MCAM_ALLOC_AND_WRITE_ENTRY(struct rvu *rvu,
 			  struct npc_mcam_alloc_and_write_entry_req *req,
 			  struct npc_mcam_alloc_and_write_entry_rsp *rsp);
+int rvu_mbox_handler_NPC_GET_KEX_CFG(struct rvu *rvu, struct msg_req *req,
+				     struct npc_get_kex_cfg_rsp *rsp);
 #endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 289de15..814166e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -427,9 +427,28 @@ void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
 	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
 					 nixlf, NIXLF_BCAST_ENTRY);
 
-	/* Check for L2B bit and LMAC channel */
-	entry.kw[0] = BIT_ULL(25) | chan;
-	entry.kw_mask[0] = BIT_ULL(25) | 0xFFFULL;
+	/* Check for L2B bit and LMAC channel
+	 * NOTE: The MKEX default profile used here is a reduced, stop-gap
+	 * version, intended to accommodate more capability while ignoring
+	 * a few bits.
+	 * L2B, which per HRM sits at NPC_PARSE_KEX_S BIT_POS[25], is moved
+	 * to BIT_POS[13]; ERRCODE and ERRLEV are ignored so we don't lose
+	 * out on capability features needed for CoS (from an ODP PoV),
+	 * e.g. VLAN, DSCP.
+	 *
+	 * Reduced layout of the MKEX default profile
+	 * (i.e. CHAN, L2/3{B/M}, LA, LB, LC, LD):
+	 *
+	 * BIT_POS[31:28] : LD
+	 * BIT_POS[27:24] : LC
+	 * BIT_POS[23:20] : LB
+	 * BIT_POS[19:16] : LA
+	 * BIT_POS[15:12] : L3B, L3M, L2B, L2M
+	 * BIT_POS[11:00] : CHAN
+	 *
+	 */
+	entry.kw[0] = BIT_ULL(13) | chan;
+	entry.kw_mask[0] = ~entry.kw[0] & (BIT_ULL(13) | 0xFFFULL);
 
 	*(u64 *)&action = 0x00;
 #ifdef MCAST_MCE
@@ -538,14 +557,18 @@ void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
 	}
 }
 
-#define LDATA_EXTRACT_CONFIG(intf, lid, ltype, ld, cfg) \
+#define SET_KEX_LD(intf, lid, ltype, ld, cfg)	\
 	rvu_write64(rvu, blkaddr,			\
 		NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, ltype, ld), cfg)
 
-#define LDATA_FLAGS_CONFIG(intf, ld, flags, cfg)	\
+#define SET_KEX_LDFLAGS(intf, ld, flags, cfg)	\
 	rvu_write64(rvu, blkaddr,			\
 		NPC_AF_INTFX_LDATAX_FLAGSX_CFG(intf, ld, flags), cfg)
 
+#define KEX_LD_CFG(bytesm1, hdr_ofs, ena, flags_ena, key_ofs)		\
+			(((bytesm1) << 16) | ((hdr_ofs) << 8) | ((ena) << 7) | \
+			 ((flags_ena) << 6) | ((key_ofs) & 0x3F))
+
 static void npc_config_ldata_extract(struct rvu *rvu, int blkaddr)
 {
 	struct npc_mcam *mcam = &rvu->hw->mcam;
@@ -561,28 +584,66 @@ static void npc_config_ldata_extract(struct rvu *rvu, int blkaddr)
 	 */
 	for (lid = 0; lid < lid_count; lid++) {
 		for (ltype = 0; ltype < 16; ltype++) {
-			LDATA_EXTRACT_CONFIG(NIX_INTF_RX, lid, ltype, 0, 0ULL);
-			LDATA_EXTRACT_CONFIG(NIX_INTF_RX, lid, ltype, 1, 0ULL);
-			LDATA_EXTRACT_CONFIG(NIX_INTF_TX, lid, ltype, 0, 0ULL);
-			LDATA_EXTRACT_CONFIG(NIX_INTF_TX, lid, ltype, 1, 0ULL);
-
-			LDATA_FLAGS_CONFIG(NIX_INTF_RX, 0, ltype, 0ULL);
-			LDATA_FLAGS_CONFIG(NIX_INTF_RX, 1, ltype, 0ULL);
-			LDATA_FLAGS_CONFIG(NIX_INTF_TX, 0, ltype, 0ULL);
-			LDATA_FLAGS_CONFIG(NIX_INTF_TX, 1, ltype, 0ULL);
+			SET_KEX_LD(NIX_INTF_RX, lid, ltype, 0, 0ULL);
+			SET_KEX_LD(NIX_INTF_RX, lid, ltype, 1, 0ULL);
+			SET_KEX_LD(NIX_INTF_TX, lid, ltype, 0, 0ULL);
+			SET_KEX_LD(NIX_INTF_TX, lid, ltype, 1, 0ULL);
+
+			SET_KEX_LDFLAGS(NIX_INTF_RX, 0, ltype, 0ULL);
+			SET_KEX_LDFLAGS(NIX_INTF_RX, 1, ltype, 0ULL);
+			SET_KEX_LDFLAGS(NIX_INTF_TX, 0, ltype, 0ULL);
+			SET_KEX_LDFLAGS(NIX_INTF_TX, 1, ltype, 0ULL);
 		}
 	}
 
-	/* If we plan to extract Outer IPv4 tuple for TCP/UDP pkts
-	 * then 112bit key is not sufficient
-	 */
 	if (mcam->keysize != NPC_MCAM_KEY_X2)
 		return;
 
-	/* Start placing extracted data/flags from 64bit onwards, for now */
-	/* Extract DMAC from the packet */
-	cfg = (0x05 << 16) | BIT_ULL(7) | NPC_PARSE_RESULT_DMAC_OFFSET;
-	LDATA_EXTRACT_CONFIG(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 0, cfg);
+	/* Default MCAM KEX profile */
+	/* Layer A: Ethernet: */
+
+	/* DMAC: 6 bytes, KW1[47:0] */
+	cfg = KEX_LD_CFG(0x05, 0x0, 0x1, 0x0, NPC_PARSE_RESULT_DMAC_OFFSET);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 0, cfg);
+
+	/* Ethertype: 2 bytes, KW0[47:32] */
+	cfg = KEX_LD_CFG(0x01, 0xc, 0x1, 0x0, 0x4);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LA, NPC_LT_LA_ETHER, 1, cfg);
+
+	/* Layer B: Single VLAN (CTAG) */
+	/* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */
+	cfg = KEX_LD_CFG(0x03, 0x0, 0x1, 0x0, 0x4);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_CTAG, 0, cfg);
+
+	/* Layer B: Stacked VLAN (STAG|QinQ) */
+	/* CTAG VLAN[2..3] + Ethertype, 4 bytes, KW0[63:32] */
+	cfg = KEX_LD_CFG(0x03, 0x4, 0x1, 0x0, 0x4);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_STAG, 0, cfg);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LB, NPC_LT_LB_QINQ, 0, cfg);
+
+	/* Layer C: IPv4 */
+	/* SIP+DIP: 8 bytes, KW2[63:0] */
+	cfg = KEX_LD_CFG(0x07, 0xc, 0x1, 0x0, 0x10);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LC, NPC_LT_LC_IP, 0, cfg);
+	/* TOS: 1 byte, KW1[63:56] */
+	cfg = KEX_LD_CFG(0x0, 0x1, 0x1, 0x0, 0xf);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LC, NPC_LT_LC_IP, 1, cfg);
+
+	/* Layer D:UDP */
+	/* SPORT: 2 bytes, KW3[15:0] */
+	cfg = KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_UDP, 0, cfg);
+	/* DPORT: 2 bytes, KW3[31:16] */
+	cfg = KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_UDP, 1, cfg);
+
+	/* Layer D:TCP */
+	/* SPORT: 2 bytes, KW3[15:0] */
+	cfg = KEX_LD_CFG(0x1, 0x0, 0x1, 0x0, 0x18);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_TCP, 0, cfg);
+	/* DPORT: 2 bytes, KW3[31:16] */
+	cfg = KEX_LD_CFG(0x1, 0x2, 0x1, 0x0, 0x1a);
+	SET_KEX_LD(NIX_INTF_RX, NPC_LID_LD, NPC_LT_LD_TCP, 1, cfg);
 }
 
 static void npc_config_kpuaction(struct rvu *rvu, int blkaddr,
@@ -898,13 +959,12 @@ int rvu_npc_init(struct rvu *rvu)
 		    BIT_ULL(6) | BIT_ULL(2));
 
 	/* Set RX and TX side MCAM search key size.
-	 * Also enable parse key extract nibbles suchthat except
-	 * layer E to H, rest of the key is included for MCAM search.
+	 * LA..LD (ltype only) + Channel
 	 */
 	rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX),
-		    ((keyz & 0x3) << 32) | ((1ULL << 20) - 1));
+			((keyz & 0x3) << 32) | 0x49247);
 	rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_TX),
-		    ((keyz & 0x3) << 32) | ((1ULL << 20) - 1));
+			((keyz & 0x3) << 32) | ((1ULL << 19) - 1));
 
 	err = npc_mcam_rsrcs_init(rvu, blkaddr);
 	if (err)
@@ -1876,3 +1936,48 @@ int rvu_mbox_handler_NPC_MCAM_ALLOC_AND_WRITE_ENTRY(struct rvu *rvu,
 
 	return 0;
 }
+
+#define GET_KEX_CFG(intf) \
+	rvu_read64(rvu, BLKADDR_NPC, NPC_AF_INTFX_KEX_CFG(intf))
+
+#define GET_KEX_FLAGS(ld) \
+	rvu_read64(rvu, BLKADDR_NPC, NPC_AF_KEX_LDATAX_FLAGS_CFG(ld))
+
+#define GET_KEX_LD(intf, lid, lt, ld)	\
+	rvu_read64(rvu, BLKADDR_NPC,	\
+		NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld))
+
+#define GET_KEX_LDFLAGS(intf, ld, fl)	\
+	rvu_read64(rvu, BLKADDR_NPC,	\
+		NPC_AF_INTFX_LDATAX_FLAGSX_CFG(intf, ld, fl))
+
+int rvu_mbox_handler_NPC_GET_KEX_CFG(struct rvu *rvu, struct msg_req *req,
+				     struct npc_get_kex_cfg_rsp *rsp)
+{
+	int lid, lt, ld, fl;
+
+	rsp->rx_keyx_cfg = GET_KEX_CFG(NIX_INTF_RX);
+	rsp->tx_keyx_cfg = GET_KEX_CFG(NIX_INTF_TX);
+	for (lid = 0; lid < NPC_MAX_LID; lid++) {
+		for (lt = 0; lt < NPC_MAX_LT; lt++) {
+			for (ld = 0; ld < NPC_MAX_LD; ld++) {
+				rsp->intf_lid_lt_ld[NIX_INTF_RX][lid][lt][ld] =
+					GET_KEX_LD(NIX_INTF_RX, lid, lt, ld);
+				rsp->intf_lid_lt_ld[NIX_INTF_TX][lid][lt][ld] =
+					GET_KEX_LD(NIX_INTF_TX, lid, lt, ld);
+			}
+		}
+	}
+	for (ld = 0; ld < NPC_MAX_LD; ld++)
+		rsp->kex_ld_flags[ld] = GET_KEX_FLAGS(ld);
+
+	for (ld = 0; ld < NPC_MAX_LD; ld++) {
+		for (fl = 0; fl < NPC_MAX_LFL; fl++) {
+			rsp->intf_ld_flags[NIX_INTF_RX][ld][fl] =
+					GET_KEX_LDFLAGS(NIX_INTF_RX, ld, fl);
+			rsp->intf_ld_flags[NIX_INTF_TX][ld][fl] =
+					GET_KEX_LDFLAGS(NIX_INTF_TX, ld, fl);
+		}
+	}
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 10/20] octeontx2-af: Support to enable/disable default MCAM entries
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (8 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 09/20] octeontx2-af: Add MKEX default profile sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG sunil.kovvuri
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

A PF/VF with a NIXLF attached has default/reserved MCAM entries
for receiving Ucast/Bcast/Promisc traffic. Ideally traffic should be
forwarded to the NIXLF only after its contexts are initialized. This
patch keeps these default entries disabled and adds mbox messages
for a PF/VF to enable them once NPA/NIXLF initialization is done.
Likewise, while a PF/VF is being torn down, it can send the disable
mailbox message to stop receiving traffic.
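
A rough sketch of the intended PF/VF ordering (send_msg() is just a
placeholder for the usual mailbox round trip, not a real helper):

	/* Bring-up: initialize contexts first, only then allow traffic in */
	send_msg(NPA_LF_ALLOC);		/* aura/pool contexts           */
	send_msg(NIX_LF_ALLOC);		/* RQ/SQ/CQ contexts, RSS, etc. */
	send_msg(NIX_LF_START_RX);	/* enable default MCAM entries  */

	/* Teardown: stop traffic first, then free contexts */
	send_msg(NIX_LF_STOP_RX);	/* disable default MCAM entries */
	send_msg(NIX_LF_FREE);
	send_msg(NPA_LF_FREE);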

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  4 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  7 ++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 48 ++++++++++++
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 91 +++++++++++++++-------
 4 files changed, 122 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 9941f0a..737dbc9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -179,7 +179,9 @@ M(NIX_VTAG_CFG,	0x8008, nix_vtag_config, msg_rsp)		\
 M(NIX_RSS_FLOWKEY_CFG,  0x8009, nix_rss_flowkey_cfg, msg_rsp)		\
 M(NIX_SET_MAC_ADDR,	0x800a, nix_set_mac_addr, msg_rsp)		\
 M(NIX_SET_RX_MODE,	0x800b, nix_rx_mode, msg_rsp)			\
-M(NIX_SET_HW_FRS,	0x800c, nix_frs_cfg, msg_rsp)
+M(NIX_SET_HW_FRS,	0x800c, nix_frs_cfg, msg_rsp)			\
+M(NIX_LF_START_RX,	0x800d, msg_req, msg_rsp)			\
+M(NIX_LF_STOP_RX,	0x800e, msg_req, msg_rsp)
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES						\
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 074d792..12fbdba 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -366,6 +366,10 @@ int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req,
 				     struct msg_rsp *rsp);
 int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
 				    struct msg_rsp *rsp);
+int rvu_mbox_handler_NIX_LF_START_RX(struct rvu *rvu, struct msg_req *req,
+				     struct msg_rsp *rsp);
+int rvu_mbox_handler_NIX_LF_STOP_RX(struct rvu *rvu, struct msg_req *req,
+				    struct msg_rsp *rsp);
 
 /* NPC APIs */
 int rvu_npc_init(struct rvu *rvu);
@@ -377,9 +381,12 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
 void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
 				   int nixlf, u64 chan, bool allmulti);
 void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
+void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
 				       int nixlf, u64 chan);
 void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
+void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
+void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
 				    int group, int alg_idx, int mcam_index);
 int rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(struct rvu *rvu,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 9de9aaf..5853af4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -821,6 +821,9 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 	if (err)
 		goto free_mem;
 
+	/* Disable NPC entries as NIXLF's contexts are not initialized yet */
+	rvu_npc_disable_default_entries(rvu, pcifunc, nixlf);
+
 	goto exit;
 
 free_mem:
@@ -2176,3 +2179,48 @@ void rvu_nix_freemem(struct rvu *rvu)
 		mutex_destroy(&mcast->mce_lock);
 	}
 }
+
+static int nix_get_nixlf(struct rvu *rvu, u16 pcifunc, int *nixlf)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct rvu_hwinfo *hw = rvu->hw;
+	int blkaddr;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	if (!pfvf->nixlf || blkaddr < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	*nixlf = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+	if (*nixlf < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	return 0;
+}
+
+int rvu_mbox_handler_NIX_LF_START_RX(struct rvu *rvu, struct msg_req *req,
+				     struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	int nixlf, err;
+
+	err = nix_get_nixlf(rvu, pcifunc, &nixlf);
+	if (err)
+		return err;
+
+	rvu_npc_enable_default_entries(rvu, pcifunc, nixlf);
+	return 0;
+}
+
+int rvu_mbox_handler_NIX_LF_STOP_RX(struct rvu *rvu, struct msg_req *req,
+				    struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	int nixlf, err;
+
+	err = nix_get_nixlf(rvu, pcifunc, &nixlf);
+	if (err)
+		return err;
+
+	rvu_npc_disable_default_entries(rvu, pcifunc, nixlf);
+	return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 814166e..100ce29 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -384,7 +384,8 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
 			      NIX_INTF_RX, &entry, true);
 }
 
-void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf)
+static void npc_enadis_promisc_entry(struct rvu *rvu, u16 pcifunc,
+				     int nixlf, bool enable)
 {
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 	int blkaddr, index;
@@ -399,7 +400,17 @@ void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf)
 
 	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
 					 nixlf, NIXLF_PROMISC_ENTRY);
-	npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
+	npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable);
+}
+
+void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	npc_enadis_promisc_entry(rvu, pcifunc, nixlf, false);
+}
+
+void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	npc_enadis_promisc_entry(rvu, pcifunc, nixlf, true);
 }
 
 void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
@@ -512,11 +523,59 @@ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
 		    NPC_AF_MCAMEX_BANKX_ACTION(index, bank), *(u64 *)&action);
 }
 
-void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
+				       int nixlf, bool enable)
 {
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 	struct nix_rx_action action;
-	int blkaddr, index, bank;
+	int index, bank, blkaddr;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return;
+
+	/* Ucast MCAM match entry of this PF/VF */
+	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
+					 nixlf, NIXLF_UCAST_ENTRY);
+	npc_enable_mcam_entry(rvu, mcam, blkaddr, index, enable);
+
+	/* For PF, ena/dis promisc and bcast MCAM match entries */
+	if (pcifunc & RVU_PFVF_FUNC_MASK)
+		return;
+
+	/* For bcast, enable/disable only if its action is not
+	 * packet replication; in case the action is replication,
+	 * this PF's nixlf is instead removed from the bcast
+	 * replication list.
+	 */
+	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
+					 nixlf, NIXLF_BCAST_ENTRY);
+	bank = npc_get_bank(mcam, index);
+	*(u64 *)&action = rvu_read64(rvu, blkaddr,
+	     NPC_AF_MCAMEX_BANKX_ACTION(index & (mcam->banksize - 1), bank));
+	if (action.op != NIX_RX_ACTIONOP_MCAST)
+		npc_enable_mcam_entry(rvu, mcam,
+				      blkaddr, index, enable);
+	if (enable)
+		rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf);
+	else
+		rvu_npc_disable_promisc_entry(rvu, pcifunc, nixlf);
+}
+
+void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	npc_enadis_default_entries(rvu, pcifunc, nixlf, false);
+}
+
+void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	npc_enadis_default_entries(rvu, pcifunc, nixlf, true);
+}
+
+void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	int blkaddr;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0)
@@ -532,29 +591,7 @@ void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
 
 	mutex_unlock(&mcam->lock);
 
-	/* Disable ucast MCAM match entry of this PF/VF */
-	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
-					 nixlf, NIXLF_UCAST_ENTRY);
-	npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
-
-	/* For PF, disable promisc and bcast MCAM match entries */
-	if (!(pcifunc & RVU_PFVF_FUNC_MASK)) {
-		index = npc_get_nixlf_mcam_index(mcam, pcifunc,
-						 nixlf, NIXLF_BCAST_ENTRY);
-		/* For bcast, disable only if it's action is not
-		 * packet replication, incase if action is replication
-		 * then this PF's nixlf is removed from bcast replication
-		 * list.
-		 */
-		bank = npc_get_bank(mcam, index);
-		index &= (mcam->banksize - 1);
-		*(u64 *)&action = rvu_read64(rvu, blkaddr,
-				     NPC_AF_MCAMEX_BANKX_ACTION(index, bank));
-		if (action.op != NIX_RX_ACTIONOP_MCAST)
-			npc_enable_mcam_entry(rvu, mcam, blkaddr, index, false);
-
-		rvu_npc_disable_promisc_entry(rvu, pcifunc, nixlf);
-	}
+	rvu_npc_disable_default_entries(rvu, pcifunc, nixlf);
 }
 
 #define SET_KEX_LD(intf, lid, ltype, ld, cfg)	\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (9 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 10/20] octeontx2-af: Support to enable/disable default MCAM entries sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 20:47   ` Arnd Bergmann
  2018-11-08 18:35 ` [PATCH 12/20] octeontx2-af: Verify NPA/SSO/NIX PF_FUNC mapping sunil.kovvuri
                   ` (9 subsequent siblings)
  20 siblings, 1 reply; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Tomasz Duszynski, Sunil Goutham

From: Tomasz Duszynski <tduszynski@marvell.com>

This works by shadowing the existing UCAST MCAM entry
with a new one that additionally matches either NPC_LT_LB_CTAG
or NPC_LT_LB_STAG. For this to fully work, one needs to
send a properly configured NIX_VTAG_CFG message afterwards, i.e.
with strip and capture enabled and type set to 0.

On receiving a tagged packet, NIX will remove the outer VLAN and
capture the TCI in NIX_RX_PARSE_S.

Also simplified the RX Vtag configuration flow. With this, setting
the STRIP/CAPTURE VTAG actions separately is possible. The following
combinations are possible: STRIP, STRIP and CAPTURE, CAPTURE, or
nothing (0 disables the respective action).
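
A minimal sketch of the NIX_VTAG_CFG request this refers to; the
allocation/send helpers are placeholders, and VTAGSIZE_T4 plus the
rx.capture_vtag field are assumed from the existing struct (only
vtag_type and strip_vtag are visible in this diff):

	struct nix_vtag_config *req;

	req = mbox_alloc_msg(mbox, NIX_VTAG_CFG, sizeof(*req));
	req->vtag_size = VTAGSIZE_T4;	/* 4 octet VTAG */
	req->cfg_type = 1;		/* rx vlan cfg */
	req->rx.vtag_type = 0;		/* type programmed into the MCAM vtag action */
	req->rx.strip_vtag = 1;		/* strip outer VLAN */
	req->rx.capture_vtag = 1;	/* capture TCI into NIX_RX_PARSE_S */
	mbox_send_wait(mbox, &req->hdr, NULL);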

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  6 +-
 drivers/net/ethernet/marvell/octeontx2/af/npc.h    | 30 ++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  8 +++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 83 +++++++++++++++++-----
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 46 +++++++++++-
 5 files changed, 152 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 737dbc9..f2bf77d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -181,7 +181,8 @@ M(NIX_SET_MAC_ADDR,	0x800a, nix_set_mac_addr, msg_rsp)		\
 M(NIX_SET_RX_MODE,	0x800b, nix_rx_mode, msg_rsp)			\
 M(NIX_SET_HW_FRS,	0x800c, nix_frs_cfg, msg_rsp)			\
 M(NIX_LF_START_RX,	0x800d, msg_req, msg_rsp)			\
-M(NIX_LF_STOP_RX,	0x800e, msg_req, msg_rsp)
+M(NIX_LF_STOP_RX,	0x800e, msg_req, msg_rsp)			\
+M(NIX_RXVLAN_ALLOC,	0x8012, msg_req, msg_rsp)
 
 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES						\
@@ -499,6 +500,7 @@ struct nix_txschq_config {
 
 struct nix_vtag_config {
 	struct mbox_msghdr hdr;
+	/* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
 	u8 vtag_size;
 	/* cfg_type is '0' for tx vlan cfg
 	 * cfg_type is '1' for rx vlan cfg
@@ -519,7 +521,7 @@ struct nix_vtag_config {
 
 		/* valid when cfg_type is '1' */
 		struct {
-			/* rx vtag type index */
+			/* rx vtag type index, valid values are in 0..7 range */
 			u8 vtag_type;
 			/* rx vtag strip */
 			u8 strip_vtag :1;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
index f98b011..3f7e5e6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
@@ -259,4 +259,34 @@ struct nix_rx_action {
 #endif
 };
 
+struct nix_rx_vtag_action {
+#if defined(__BIG_ENDIAN_BITFIELD)
+	u64     rsvd_63_48      :16;
+	u64     vtag1_valid     :1;
+	u64     vtag1_type      :3;
+	u64     rsvd_43         :1;
+	u64     vtag1_lid       :3;
+	u64     vtag1_relptr    :8;
+	u64     rsvd_31_16      :16;
+	u64     vtag0_valid     :1;
+	u64     vtag0_type      :3;
+	u64     rsvd_11         :1;
+	u64     vtag0_lid       :3;
+	u64     vtag0_relptr    :8;
+#else
+	u64     vtag0_relptr    :8;
+	u64     vtag0_lid       :3;
+	u64     rsvd_11         :1;
+	u64     vtag0_type      :3;
+	u64     vtag0_valid     :1;
+	u64     rsvd_31_16      :16;
+	u64     vtag1_relptr    :8;
+	u64     vtag1_lid       :3;
+	u64     rsvd_43         :1;
+	u64     vtag1_type      :3;
+	u64     vtag1_valid     :1;
+	u64     rsvd_63_48      :16;
+#endif
+};
+
 #endif /* NPC_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 12fbdba..e213bf4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -142,6 +142,11 @@ struct rvu_pfvf {
 	/* Broadcast pkt replication info */
 	u16			bcast_mce_idx;
 	struct nix_mce_list	bcast_mce_list;
+
+	/* VLAN offload */
+	struct mcam_entry entry;
+	int rxvlan_index;
+	bool rxvlan;
 };
 
 struct nix_txsch {
@@ -356,6 +361,8 @@ int rvu_mbox_handler_NIX_STATS_RST(struct rvu *rvu, struct msg_req *req,
 int rvu_mbox_handler_NIX_VTAG_CFG(struct rvu *rvu,
 				  struct nix_vtag_config *req,
 				  struct msg_rsp *rsp);
+int rvu_mbox_handler_NIX_RXVLAN_ALLOC(struct rvu *rvu, struct msg_req *req,
+				      struct msg_rsp *rsp);
 int rvu_mbox_handler_NIX_RSS_FLOWKEY_CFG(struct rvu *rvu,
 					 struct nix_rss_flowkey_cfg *req,
 					 struct msg_rsp *rsp);
@@ -384,6 +391,7 @@ void rvu_npc_disable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_enable_promisc_entry(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_install_bcast_match_entry(struct rvu *rvu, u16 pcifunc,
 				       int nixlf, u64 chan);
+int rvu_npc_update_rxvlan(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_disable_mcam_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
 void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 5853af4..70a2997 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -192,6 +192,7 @@ static void nix_interface_deinit(struct rvu *rvu, u16 pcifunc, u8 nixlf)
 
 	pfvf->maxlen = 0;
 	pfvf->minlen = 0;
+	pfvf->rxvlan = false;
 
 	/* Remove this PF_FUNC from bcast pkt replication list */
 	err = nix_update_bcast_mce_list(rvu, pcifunc, false);
@@ -1209,28 +1210,15 @@ int rvu_mbox_handler_NIX_TXSCHQ_CFG(struct rvu *rvu,
 static int nix_rx_vtag_cfg(struct rvu *rvu, int nixlf, int blkaddr,
 			   struct nix_vtag_config *req)
 {
-	u64 regval = 0;
+	u64 regval = req->vtag_size;
 
-#define NIX_VTAGTYPE_MAX 0x8ull
-#define NIX_VTAGSIZE_MASK 0x7ull
-#define NIX_VTAGSTRIP_CAP_MASK 0x30ull
-
-	if (req->rx.vtag_type >= NIX_VTAGTYPE_MAX ||
-	    req->vtag_size > VTAGSIZE_T8)
+	if (req->rx.vtag_type > 7 || req->vtag_size > VTAGSIZE_T8)
 		return -EINVAL;
 
-	regval = rvu_read64(rvu, blkaddr,
-			    NIX_AF_LFX_RX_VTAG_TYPEX(nixlf, req->rx.vtag_type));
-
-	if (req->rx.strip_vtag && req->rx.capture_vtag)
-		regval |= BIT_ULL(4) | BIT_ULL(5);
-	else if (req->rx.strip_vtag)
+	if (req->rx.capture_vtag)
+		regval |= BIT_ULL(5);
+	if (req->rx.strip_vtag)
 		regval |= BIT_ULL(4);
-	else
-		regval &= ~(BIT_ULL(4) | BIT_ULL(5));
-
-	regval &= ~NIX_VTAGSIZE_MASK;
-	regval |= req->vtag_size & NIX_VTAGSIZE_MASK;
 
 	rvu_write64(rvu, blkaddr,
 		    NIX_AF_LFX_RX_VTAG_TYPEX(nixlf, req->rx.vtag_type), regval);
@@ -1770,6 +1758,9 @@ int rvu_mbox_handler_NIX_SET_MAC_ADDR(struct rvu *rvu,
 
 	rvu_npc_install_ucast_entry(rvu, pcifunc, nixlf,
 				    pfvf->rx_chan_base, req->mac_addr);
+
+	rvu_npc_update_rxvlan(rvu, pcifunc, nixlf);
+
 	return 0;
 }
 
@@ -1803,6 +1794,9 @@ int rvu_mbox_handler_NIX_SET_RX_MODE(struct rvu *rvu, struct nix_rx_mode *req,
 	else
 		rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
 					      pfvf->rx_chan_base, allmulti);
+
+	rvu_npc_update_rxvlan(rvu, pcifunc, nixlf);
+
 	return 0;
 }
 
@@ -1941,6 +1935,59 @@ int rvu_mbox_handler_NIX_SET_HW_FRS(struct rvu *rvu, struct nix_frs_cfg *req,
 	return 0;
 }
 
+int rvu_mbox_handler_NIX_RXVLAN_ALLOC(struct rvu *rvu, struct msg_req *req,
+				      struct msg_rsp *rsp)
+{
+	struct npc_mcam_alloc_entry_req alloc_req = { };
+	struct npc_mcam_alloc_entry_rsp alloc_rsp = { };
+	struct npc_mcam_free_entry_req free_req = { };
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, nixlf, err;
+	struct rvu_pfvf *pfvf;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	if (pfvf->rxvlan)
+		return 0;
+
+	/* alloc new mcam entry */
+	alloc_req.hdr.pcifunc = pcifunc;
+	alloc_req.count = 1;
+
+	err = rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(rvu, &alloc_req,
+						    &alloc_rsp);
+	if (err)
+		return err;
+
+	/* update entry to enable rxvlan offload */
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	if (blkaddr < 0) {
+		err = NIX_AF_ERR_AF_LF_INVALID;
+		goto free_entry;
+	}
+
+	nixlf = rvu_get_lf(rvu, &rvu->hw->block[blkaddr], pcifunc, 0);
+	if (nixlf < 0) {
+		err = NIX_AF_ERR_AF_LF_INVALID;
+		goto free_entry;
+	}
+
+	pfvf->rxvlan_index = alloc_rsp.entry_list[0];
+	/* all it means is that rxvlan_index is valid */
+	pfvf->rxvlan = true;
+
+	err = rvu_npc_update_rxvlan(rvu, pcifunc, nixlf);
+	if (err)
+		goto free_entry;
+
+	return 0;
+free_entry:
+	free_req.hdr.pcifunc = pcifunc;
+	free_req.entry = alloc_rsp.entry_list[0];
+	rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(rvu, &free_req, rsp);
+	pfvf->rxvlan = false;
+	return err;
+}
+
 static void nix_link_config(struct rvu *rvu, int blkaddr)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 100ce29..5dbb5cd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -306,7 +306,9 @@ static u64 npc_get_mcam_action(struct rvu *rvu, struct npc_mcam *mcam,
 void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
 				 int nixlf, u64 chan, u8 *mac_addr)
 {
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
 	struct npc_mcam *mcam = &rvu->hw->mcam;
+	struct nix_rx_vtag_action vtag_action;
 	struct mcam_entry entry = { {0} };
 	struct nix_rx_action action;
 	int blkaddr, index, kwi;
@@ -345,6 +347,20 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
 	entry.action = *(u64 *)&action;
 	npc_config_mcam_entry(rvu, mcam, blkaddr, index,
 			      NIX_INTF_RX, &entry, true);
+
+	/* add VLAN matching, setup action and save entry back for later */
+	entry.kw[0] |= (NPC_LT_LB_STAG | NPC_LT_LB_CTAG) << 20;
+	entry.kw_mask[0] |= (NPC_LT_LB_STAG & NPC_LT_LB_CTAG) << 20;
+
+	*(u64 *)&vtag_action = 0;
+	vtag_action.vtag0_valid = 1;
+	/* must match type set in NIX_VTAG_CFG */
+	vtag_action.vtag0_type = 0;
+	vtag_action.vtag0_lid = NPC_LID_LA;
+	vtag_action.vtag0_relptr = 12;
+	entry.vtag_action = *(u64 *)&vtag_action;
+
+	memcpy(&pfvf->entry, &entry, sizeof(entry));
 }
 
 void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
@@ -352,7 +368,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
 {
 	struct npc_mcam *mcam = &rvu->hw->mcam;
 	struct mcam_entry entry = { {0} };
-	struct nix_rx_action action;
+	struct nix_rx_action action = { };
 	int blkaddr, index, kwi;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
@@ -521,6 +537,8 @@ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
 
 	rvu_write64(rvu, blkaddr,
 		    NPC_AF_MCAMEX_BANKX_ACTION(index, bank), *(u64 *)&action);
+
+	rvu_npc_update_rxvlan(rvu, pcifunc, nixlf);
 }
 
 static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
@@ -560,6 +578,8 @@ static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
 		rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf);
 	else
 		rvu_npc_disable_promisc_entry(rvu, pcifunc, nixlf);
+
+	rvu_npc_update_rxvlan(rvu, pcifunc, nixlf);
 }
 
 void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
@@ -2018,3 +2038,27 @@ int rvu_mbox_handler_NPC_GET_KEX_CFG(struct rvu *rvu, struct msg_req *req,
 	}
 	return 0;
 }
+
+int rvu_npc_update_rxvlan(struct rvu *rvu, u16 pcifunc, int nixlf)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct npc_mcam *mcam = &rvu->hw->mcam;
+	int blkaddr, index;
+	bool enable;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	if (!pfvf->rxvlan)
+		return 0;
+
+	index = npc_get_nixlf_mcam_index(mcam, pcifunc, nixlf,
+					 NIXLF_UCAST_ENTRY);
+	pfvf->entry.action = npc_get_mcam_action(rvu, mcam, blkaddr, index);
+	enable = is_mcam_entry_enabled(rvu, mcam, blkaddr, index);
+	npc_config_mcam_entry(rvu, mcam, blkaddr, pfvf->rxvlan_index,
+			      NIX_INTF_RX, &pfvf->entry, enable);
+
+	return 0;
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 12/20] octeontx2-af: Verify NPA/SSO/NIX PF_FUNC mapping
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (10 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 13/20] octeontx2-af: Add FLR interrupt handler sunil.kovvuri
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

While mapping a NIX LF to a PF_FUNC with an NPA LF or an SSO LF
attached, verify that the PF_FUNC is valid and that it actually
has an LF of that block type attached to it.
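
As a sketch of the requester-side semantics being validated here (the helper
name is hypothetical; only the npa_func/sso_func fields of the real
NIX_LF_ALLOC request are shown):

static void set_nix_lf_mappings(struct nix_lf_alloc_req *req)
{
	/* '0' means no mapping is requested for that block */
	req->sso_func = 0;

	/* RVU_DEFAULT_PF_FUNC means "use this NIX LF's own PF_FUNC"; any
	 * other value must be a valid PF_FUNC that already has an NPA LF
	 * attached, otherwise AF returns NIX_AF_INVAL_NPA_PF_FUNC.
	 */
	req->npa_func = RVU_DEFAULT_PF_FUNC;
}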

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  2 ++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    | 38 ++++++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  1 +
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 32 +++++++++++-------
 4 files changed, 62 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index f2bf77d..96dd59e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -389,6 +389,8 @@ enum nix_af_status {
 	NIX_AF_INVAL_TXSCHQ_CFG     = -412,
 	NIX_AF_SMQ_FLUSH_FAILED     = -413,
 	NIX_AF_ERR_LF_RESET         = -414,
+	NIX_AF_INVAL_NPA_PF_FUNC    = -419,
+	NIX_AF_INVAL_SSO_PF_FUNC    = -420,
 };
 
 /* For NIX LF context alloc and init */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index c7f00895..bd1df1c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -337,6 +337,28 @@ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc)
 		return &rvu->pf[rvu_get_pf(pcifunc)];
 }
 
+static bool is_pf_func_valid(struct rvu *rvu, u16 pcifunc)
+{
+	int pf, vf, nvfs;
+	u64 cfg;
+
+	pf = rvu_get_pf(pcifunc);
+	if (pf >= rvu->hw->total_pfs)
+		return false;
+
+	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+		return true;
+
+	/* Check if VF is within number of VFs attached to this PF */
+	vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+	nvfs = (cfg >> 12) & 0xFF;
+	if (vf >= nvfs)
+		return false;
+
+	return true;
+}
+
 bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr)
 {
 	struct rvu_block *block;
@@ -860,6 +882,22 @@ static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype)
 	return 0;
 }
 
+bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype)
+{
+	struct rvu_pfvf *pfvf;
+
+	if (!is_pf_func_valid(rvu, pcifunc))
+		return false;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+
+	/* Check if this PFFUNC has a LF of type blktype attached */
+	if (!rvu_get_rsrc_mapcount(pfvf, blktype))
+		return false;
+
+	return true;
+}
+
 static int rvu_lookup_rsrc(struct rvu *rvu, struct rvu_block *block,
 			   int pcifunc, int slot)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index e213bf4..d54db21 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -255,6 +255,7 @@ int rvu_get_pf(u16 pcifunc);
 struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
 void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
 bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
+bool is_pffunc_map_valid(struct rvu *rvu, u16 pcifunc, int blktype);
 int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot);
 int rvu_lf_reset(struct rvu *rvu, struct rvu_block *block, int lf);
 int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 70a2997..d8d8947 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -694,6 +694,24 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 	if (nixlf < 0)
 		return NIX_AF_ERR_AF_LF_INVALID;
 
+	/* Check if requested 'NIXLF <=> NPALF' mapping is valid */
+	if (req->npa_func) {
+		/* If default, use 'this' NIXLF's PFFUNC */
+		if (req->npa_func == RVU_DEFAULT_PF_FUNC)
+			req->npa_func = pcifunc;
+		if (!is_pffunc_map_valid(rvu, req->npa_func, BLKTYPE_NPA))
+			return NIX_AF_INVAL_NPA_PF_FUNC;
+	}
+
+	/* Check if requested 'NIXLF <=> SSOLF' mapping is valid */
+	if (req->sso_func) {
+		/* If default, use 'this' NIXLF's PFFUNC */
+		if (req->sso_func == RVU_DEFAULT_PF_FUNC)
+			req->sso_func = pcifunc;
+		if (!is_pffunc_map_valid(rvu, req->sso_func, BLKTYPE_SSO))
+			return NIX_AF_INVAL_SSO_PF_FUNC;
+	}
+
 	/* If RSS is being enabled, check if requested config is valid.
 	 * RSS table size should be power of two, otherwise
 	 * RSS_GRP::OFFSET + adder might go beyond that group or
@@ -798,18 +816,10 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 	/* Enable LMTST for this NIX LF */
 	rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_CFG2(nixlf), BIT_ULL(0));
 
-	/* Set CQE/WQE size, NPA_PF_FUNC for SQBs and also SSO_PF_FUNC
-	 * If requester has sent a 'RVU_DEFAULT_PF_FUNC' use this NIX LF's
-	 * PCIFUNC itself.
-	 */
-	if (req->npa_func == RVU_DEFAULT_PF_FUNC)
-		cfg = pcifunc;
-	else
+	/* Set CQE/WQE size, NPA_PF_FUNC for SQBs and also SSO_PF_FUNC */
+	if (req->npa_func)
 		cfg = req->npa_func;
-
-	if (req->sso_func == RVU_DEFAULT_PF_FUNC)
-		cfg |= (u64)pcifunc << 16;
-	else
+	if (req->sso_func)
 		cfg |= (u64)req->sso_func << 16;
 
 	cfg |= (u64)req->xqe_sz << 33;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 13/20] octeontx2-af: Add FLR interrupt handler
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (11 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 12/20] octeontx2-af: Verify NPA/SSO/NIX PF_FUNC mapping sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 20:50   ` Arnd Bergmann
  2018-11-08 18:35 ` [PATCH 14/20] octeontx2-af: Teardown NPA, NIX LF upon receiving FLR sunil.kovvuri
                   ` (7 subsequent siblings)
  20 siblings, 1 reply; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Geetha sowjanya, Sunil Goutham

From: Geetha sowjanya <gakula@marvell.com>

Any RVU PF, upon receiving an FLR, will trigger an IRQ to the RVU AF.
This patch enables the FLR interrupt for all RVU PFs and registers a
handler. Upon receiving an IRQ, a workqueue is scheduled to
clean up all RVU blocks being used by the PF/VF which received
the FLR.

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 117 +++++++++++++++++++++++-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h |   5 +
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index bd1df1c..0f5923a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1601,6 +1601,44 @@ static void rvu_enable_mbox_intr(struct rvu *rvu)
 		    INTR_MASK(hw->total_pfs) & ~1ULL);
 }
 
+static void rvu_flr_handler(struct work_struct *work)
+{
+	struct rvu_work *flrwork = container_of(work, struct rvu_work, work);
+	struct rvu *rvu = flrwork->rvu;
+	u16 pf;
+
+	pf = flrwork - rvu->flr_wrk;
+
+	/* Signal FLR finish */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFTRPEND, BIT_ULL(pf));
+
+	/* Enable interrupt */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S,  BIT_ULL(pf));
+}
+
+static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	u64 intr;
+	u8  pf;
+
+	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT);
+
+	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+		if (intr & (1ULL << pf)) {
+			/* PF is already dead do only AF related operations */
+			queue_work(rvu->flr_wq, &rvu->flr_wrk[pf].work);
+			/* clear interrupt */
+			rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT,
+				    BIT_ULL(pf));
+			/* Disable the interrupt */
+			rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1C,
+				    BIT_ULL(pf));
+		}
+	}
+	return IRQ_HANDLED;
+}
+
 static void rvu_unregister_interrupts(struct rvu *rvu)
 {
 	int irq;
@@ -1609,6 +1647,10 @@ static void rvu_unregister_interrupts(struct rvu *rvu)
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1C,
 		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
 
+	/* Disable the PF FLR interrupt */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1C,
+		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+
 	for (irq = 0; irq < rvu->num_vec; irq++) {
 		if (rvu->irq_allocated[irq])
 			free_irq(pci_irq_vector(rvu->pdev, irq), rvu);
@@ -1660,6 +1702,27 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	/* Enable mailbox interrupts from all PFs */
 	rvu_enable_mbox_intr(rvu);
 
+	/* Register FLR interrupt handler */
+	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_PFFLR * NAME_SIZE],
+		"RVUAF FLR");
+	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_PFFLR),
+			  rvu_flr_intr_handler, 0,
+			  &rvu->irq_name[RVU_AF_INT_VEC_PFFLR * NAME_SIZE],
+			  rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for FLR\n");
+		goto fail;
+	}
+	rvu->irq_allocated[RVU_AF_INT_VEC_PFFLR] = true;
+
+	/* Enable FLR interrupt for all PFs*/
+	rvu_write64(rvu, BLKADDR_RVUM,
+		    RVU_AF_PFFLR_INT, INTR_MASK(rvu->hw->total_pfs));
+
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S,
+		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+
 	return 0;
 
 fail:
@@ -1667,6 +1730,51 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	return ret;
 }
 
+static void rvu_flr_wq_destroy(struct rvu *rvu)
+{
+	if (rvu->flr_wq) {
+		flush_workqueue(rvu->flr_wq);
+		destroy_workqueue(rvu->flr_wq);
+		rvu->flr_wq = NULL;
+	}
+	kfree(rvu->flr_wrk);
+}
+
+static int rvu_flr_init(struct rvu *rvu)
+{
+	u64 cfg;
+	int pf;
+
+	/* Enable FLR for all PFs*/
+	for (pf = 1; pf < rvu->hw->total_pfs; pf++) {
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+		rvu_write64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf),
+			    cfg | BIT_ULL(22));
+	}
+
+	rvu->flr_wq = alloc_workqueue("rvu_afpf_flr",
+				      WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+				       1);
+	if (!rvu->flr_wq)
+		return -ENOMEM;
+
+	rvu->flr_wrk = devm_kcalloc(rvu->dev, rvu->hw->total_pfs,
+				    sizeof(struct rvu_work), GFP_KERNEL);
+	if (!rvu->flr_wrk) {
+		destroy_workqueue(rvu->flr_wq);
+		return -ENOMEM;
+	}
+
+	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+		rvu->flr_wrk[pf].rvu = rvu;
+		INIT_WORK(&rvu->flr_wrk[pf].work, rvu_flr_handler);
+	}
+
+	mutex_init(&rvu->flr_lock);
+
+	return 0;
+}
+
 static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
@@ -1737,11 +1845,17 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_mbox;
 
-	err = rvu_register_interrupts(rvu);
+	err = rvu_flr_init(rvu);
 	if (err)
 		goto err_cgx;
 
+	err = rvu_register_interrupts(rvu);
+	if (err)
+		goto err_flr;
+
 	return 0;
+err_flr:
+	rvu_flr_wq_destroy(rvu);
 err_cgx:
 	rvu_cgx_wq_destroy(rvu);
 err_mbox:
@@ -1765,6 +1879,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	struct rvu *rvu = pci_get_drvdata(pdev);
 
 	rvu_unregister_interrupts(rvu);
+	rvu_flr_wq_destroy(rvu);
 	rvu_cgx_wq_destroy(rvu);
 	rvu_mbox_destroy(rvu);
 	rvu_reset_all_blocks(rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index d54db21..510ae33 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -200,6 +200,11 @@ struct rvu {
 	struct rvu_work		*mbox_wrk_up;
 	struct workqueue_struct *mbox_wq;
 
+	/* PF FLR */
+	struct rvu_work		*flr_wrk;
+	struct workqueue_struct *flr_wq;
+	struct mutex		flr_lock; /* Serialize FLRs */
+
 	/* MSI-X */
 	u16			num_vec;
 	char			*irq_name;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 14/20] octeontx2-af: Teardown NPA, NIX LF upon receiving FLR
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (12 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 13/20] octeontx2-af: Add FLR interrupt handler sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 15/20] octeontx2-af: Mbox communication support btw AF and it's VFs sunil.kovvuri
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem
  Cc: arnd, linux-soc, Geetha sowjanya, Stanislaw Kardach, Sunil Goutham

From: Geetha sowjanya <gakula@marvell.com>

Upon receiving an FLR IRQ for a RVU PF, tear down or clean up
resources held by that PF_FUNC. This patch cleans up:
NIX LF
 - Stop ingress/egress traffic
 - Disable NPC MCAM entries being used
 - Free Tx scheduler queues
 - Disable RQ/SQ/CQ HW contexts
NPA LF
 - Disable Pool/Aura HW contexts
Teardown of SSO/SSOW/TIM/CPT will be added in the future.

Also added a mailbox message for a RVU PF to request
AF to perform FLR for a RVU VF under it.

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Stanislaw Kardach <skardach@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   | 10 ++-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    | 82 +++++++++++++++++++++-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  2 +
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 48 +++++++++++++
 .../net/ethernet/marvell/octeontx2/af/rvu_npa.c    | 17 +++++
 5 files changed, 157 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 96dd59e..4ea7efd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -123,7 +123,8 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
 M(READY,		0x001, msg_req, ready_msg_rsp)			\
 M(ATTACH_RESOURCES,	0x002, rsrc_attach, msg_rsp)			\
 M(DETACH_RESOURCES,	0x003, rsrc_detach, msg_rsp)			\
-M(MSIX_OFFSET,		0x004, msg_req, msix_offset_rsp)		\
+M(MSIX_OFFSET,		0x005, msg_req, msix_offset_rsp)		\
+M(VF_FLR,		0x006, msg_req, msg_rsp)			\
 /* CGX mbox IDs (range 0x200 - 0x3FF) */				\
 M(CGX_START_RXTX,	0x200, msg_req, msg_rsp)			\
 M(CGX_STOP_RXTX,	0x201, msg_req, msg_rsp)			\
@@ -213,6 +214,13 @@ struct msg_rsp {
 	struct mbox_msghdr hdr;
 };
 
+/* RVU mailbox error codes
+ * Range 256 - 300.
+ */
+enum rvu_af_status {
+	RVU_INVALID_VF_ID           = -256,
+};
+
 struct ready_msg_rsp {
 	struct mbox_msghdr hdr;
 	u16    sclk_feq;	/* SCLK frequency */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 0f5923a..60340dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -29,6 +29,7 @@ static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
 				struct rvu_block *block, int lf);
 static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
 				  struct rvu_block *block, int lf);
+static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc);
 
 /* Supported devices */
 static const struct pci_device_id rvu_id_table[] = {
@@ -1320,6 +1321,26 @@ static int rvu_mbox_handler_MSIX_OFFSET(struct rvu *rvu, struct msg_req *req,
 	return 0;
 }
 
+static int rvu_mbox_handler_VF_FLR(struct rvu *rvu, struct msg_req *req,
+				   struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	u16 vf, numvfs;
+	u64 cfg;
+
+	vf = pcifunc & RVU_PFVF_FUNC_MASK;
+	cfg = rvu_read64(rvu, BLKADDR_RVUM,
+			 RVU_PRIV_PFX_CFG(rvu_get_pf(pcifunc)));
+	numvfs = (cfg >> 12) & 0xFF;
+
+	if (vf && vf <= numvfs)
+		__rvu_flr_handler(rvu, pcifunc);
+	else
+		return RVU_INVALID_VF_ID;
+
+	return 0;
+}
+
 static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 				struct mbox_msghdr *req)
 {
@@ -1601,14 +1622,73 @@ static void rvu_enable_mbox_intr(struct rvu *rvu)
 		    INTR_MASK(hw->total_pfs) & ~1ULL);
 }
 
+static void rvu_blklf_teardown(struct rvu *rvu, u16 pcifunc, u8 blkaddr)
+{
+	struct rvu_block *block;
+	int slot, lf, num_lfs;
+	int err;
+
+	block = &rvu->hw->block[blkaddr];
+	num_lfs = rvu_get_rsrc_mapcount(rvu_get_pfvf(rvu, pcifunc),
+					block->type);
+	if (!num_lfs)
+		return;
+	for (slot = 0; slot < num_lfs; slot++) {
+		lf = rvu_get_lf(rvu, block, pcifunc, slot);
+		if (lf < 0)
+			continue;
+
+		/* Cleanup LF and reset it */
+		if (block->addr == BLKADDR_NIX0)
+			rvu_nix_lf_teardown(rvu, pcifunc, block->addr, lf);
+		else if (block->addr == BLKADDR_NPA)
+			rvu_npa_lf_teardown(rvu, pcifunc, lf);
+
+		err = rvu_lf_reset(rvu, block, lf);
+		if (err) {
+			dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
+				block->addr, lf);
+		}
+	}
+}
+
+static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
+{
+	mutex_lock(&rvu->flr_lock);
+	/* Reset order should reflect inter-block dependencies:
+	 * 1. Reset any packet/work sources (NIX, CPT, TIM)
+	 * 2. Flush and reset SSO/SSOW
+	 * 3. Cleanup pools (NPA)
+	 */
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NIX0);
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_CPT0);
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_TIM);
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSOW);
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSO);
+	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
+	rvu_detach_rsrcs(rvu, NULL, pcifunc);
+	mutex_unlock(&rvu->flr_lock);
+}
+
 static void rvu_flr_handler(struct work_struct *work)
 {
 	struct rvu_work *flrwork = container_of(work, struct rvu_work, work);
 	struct rvu *rvu = flrwork->rvu;
-	u16 pf;
+	u16 pcifunc, numvfs, vf;
+	u64 cfg;
+	int pf;
 
 	pf = flrwork - rvu->flr_wrk;
 
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+	numvfs = (cfg >> 12) & 0xFF;
+	pcifunc  = pf << RVU_PFVF_PF_SHIFT;
+
+	for (vf = 0; vf < numvfs; vf++)
+		__rvu_flr_handler(rvu, (pcifunc | (vf + 1)));
+
+	__rvu_flr_handler(rvu, pcifunc);
+
 	/* Signal FLR finish */
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFTRPEND, BIT_ULL(pf));
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 510ae33..9e3843e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -325,6 +325,7 @@ int rvu_mbox_handler_CGX_INTLBK_DISABLE(struct rvu *rvu, struct msg_req *req,
 /* NPA APIs */
 int rvu_npa_init(struct rvu *rvu);
 void rvu_npa_freemem(struct rvu *rvu);
+void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf);
 int rvu_mbox_handler_NPA_AQ_ENQ(struct rvu *rvu,
 				struct npa_aq_enq_req *req,
 				struct npa_aq_enq_rsp *rsp);
@@ -342,6 +343,7 @@ bool is_nixlf_attached(struct rvu *rvu, u16 pcifunc);
 int rvu_nix_init(struct rvu *rvu);
 void rvu_nix_freemem(struct rvu *rvu);
 int rvu_get_nixlf_count(struct rvu *rvu);
+void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int npalf);
 int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 				  struct nix_lf_alloc_req *req,
 				  struct nix_lf_alloc_rsp *rsp);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index d8d8947..ad99cf7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -105,6 +105,17 @@ static inline struct nix_hw *get_nix_hw(struct rvu_hwinfo *hw, int blkaddr)
 	return NULL;
 }
 
+static void nix_rx_sync(struct rvu *rvu, int blkaddr)
+{
+	int err;
+
+	/*Sync all in flight RX packets to LLC/DRAM */
+	rvu_write64(rvu, blkaddr, NIX_AF_RX_SW_SYNC, BIT_ULL(0));
+	err = rvu_poll_reg(rvu, blkaddr, NIX_AF_RX_SW_SYNC, BIT_ULL(0), true);
+	if (err)
+		dev_err(rvu->dev, "NIX RX software sync failed\n");
+}
+
 static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
 			    int lvl, u16 pcifunc, u16 schq)
 {
@@ -2281,3 +2292,40 @@ int rvu_mbox_handler_NIX_LF_STOP_RX(struct rvu *rvu, struct msg_req *req,
 	rvu_npc_disable_default_entries(rvu, pcifunc, nixlf);
 	return 0;
 }
+
+void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct hwctx_disable_req ctx_req;
+	int err;
+
+	ctx_req.hdr.pcifunc = pcifunc;
+
+	/* Cleanup NPC MCAM entries, free Tx scheduler queues being used */
+	nix_interface_deinit(rvu, pcifunc, nixlf);
+	nix_rx_sync(rvu, blkaddr);
+	nix_txschq_free(rvu, pcifunc);
+
+	if (pfvf->sq_ctx) {
+		ctx_req.ctype = NIX_AQ_CTYPE_SQ;
+		err = nix_lf_hwctx_disable(rvu, &ctx_req);
+		if (err)
+			dev_err(rvu->dev, "SQ ctx disable failed\n");
+	}
+
+	if (pfvf->rq_ctx) {
+		ctx_req.ctype = NIX_AQ_CTYPE_RQ;
+		err = nix_lf_hwctx_disable(rvu, &ctx_req);
+		if (err)
+			dev_err(rvu->dev, "RQ ctx disable failed\n");
+	}
+
+	if (pfvf->cq_ctx) {
+		ctx_req.ctype = NIX_AQ_CTYPE_CQ;
+		err = nix_lf_hwctx_disable(rvu, &ctx_req);
+		if (err)
+			dev_err(rvu->dev, "CQ ctx disable failed\n");
+	}
+
+	nix_ctx_free(rvu, pfvf);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
index 7531fdc..887daaa 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
@@ -470,3 +470,20 @@ void rvu_npa_freemem(struct rvu *rvu)
 	block = &hw->block[blkaddr];
 	rvu_aq_free(rvu, block->aq);
 }
+
+void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct hwctx_disable_req ctx_req;
+
+	/* Disable all pools */
+	ctx_req.hdr.pcifunc = pcifunc;
+	ctx_req.ctype = NPA_AQ_CTYPE_POOL;
+	npa_lf_hwctx_disable(rvu, &ctx_req);
+
+	/* Disable all auras */
+	ctx_req.ctype = NPA_AQ_CTYPE_AURA;
+	npa_lf_hwctx_disable(rvu, &ctx_req);
+
+	npa_ctx_free(rvu, pfvf);
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 15/20] octeontx2-af: Mbox communication support btw AF and it's VFs
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (13 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 14/20] octeontx2-af: Teardown NPA, NIX LF upon receiving FLR sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 16/20] octeontx2-af: Enable sriov on AF to create VFs sunil.kovvuri
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem
  Cc: arnd, linux-soc, Tomasz Duszynski, Marko Kallio, Sunil Goutham

From: Tomasz Duszynski <tduszynski@marvell.com>

VFs attached to PFs other than the AF cannot communicate with the AF
directly. Instead they are supposed to first send the message to
the PF they reside on, and that PF forwards it to the AF.
Responses to messages are handled in the reverse order.

On the other hand, if the VFs are on the AF (PF0) itself, then direct
mailbox communication is possible since there is no other PF in the way.

This patch addresses this particular case and adds support for
handling it.
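
For reference, a sketch of how the sender's PF_FUNC ends up being stamped for
a message arriving on AF-VF mailbox device 'devid' (this mirrors the TYPE_AFVF
case in the handler below; the helper itself is illustrative only):

static u16 afvf_sender_pcifunc(int devid)
{
	/* The PF field stays 0 since these VFs sit directly under the AF;
	 * FUNC 0 is the PF itself, so VF<n> maps to FUNC n + 1.
	 */
	return (devid << RVU_PFVF_FUNC_SHIFT) + 1;
}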

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Signed-off-by: Marko Kallio <mkallio@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    | 295 +++++++++++++++------
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  17 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_cgx.c    |   6 +-
 3 files changed, 223 insertions(+), 95 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 60340dc..cf4df08 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -31,6 +31,15 @@ static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
 				  struct rvu_block *block, int lf);
 static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc);
 
+static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+			 int type, int num,
+			 void (mbox_handler)(struct work_struct *),
+			 void (mbox_up_handler)(struct work_struct *));
+enum {
+	TYPE_AFVF,
+	TYPE_AFPF,
+};
+
 /* Supported devices */
 static const struct pci_device_id rvu_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AF) },
@@ -1341,9 +1350,11 @@ static int rvu_mbox_handler_VF_FLR(struct rvu *rvu, struct msg_req *req,
 	return 0;
 }
 
-static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
+static int rvu_process_mbox_msg(struct otx2_mbox *mbox, int devid,
 				struct mbox_msghdr *req)
 {
+	struct rvu *rvu = pci_get_drvdata(mbox->pdev);
+
 	/* Check if valid, if not reply with a invalid msg */
 	if (req->sig != OTX2_MBOX_REQ_SIG)
 		goto bad_message;
@@ -1355,8 +1366,15 @@ static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 		int err;						\
 									\
 		rsp = (struct _rsp_type *)otx2_mbox_alloc_msg(		\
-			&rvu->mbox, devid,				\
+			mbox, devid,					\
 			sizeof(struct _rsp_type));			\
+		/* some handlers should complete even if reply */	\
+		/* could not be allocated */				\
+		if (!rsp &&						\
+		    _id != MBOX_MSG_DETACH_RESOURCES &&			\
+		    _id != MBOX_MSG_NIX_TXSCH_FREE &&			\
+		    _id != MBOX_MSG_VF_FLR)				\
+			return -ENOMEM;					\
 		if (rsp) {						\
 			rsp->hdr.id = _id;				\
 			rsp->hdr.sig = OTX2_MBOX_RSP_SIG;		\
@@ -1374,29 +1392,38 @@ static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 	}
 MBOX_MESSAGES
 #undef M
-		break;
+
 bad_message:
 	default:
-		otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc,
-				       req->id);
+		otx2_reply_invalid_msg(mbox, devid, req->pcifunc, req->id);
 		return -ENODEV;
 	}
 }
 
-static void rvu_mbox_handler(struct work_struct *work)
+static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
 {
-	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
 	struct rvu *rvu = mwork->rvu;
+	int offset, err, id, devid;
 	struct otx2_mbox_dev *mdev;
 	struct mbox_hdr *req_hdr;
 	struct mbox_msghdr *msg;
+	struct mbox_wq_info *mw;
 	struct otx2_mbox *mbox;
-	int offset, id, err;
-	u16 pf;
 
-	mbox = &rvu->mbox;
-	pf = mwork - rvu->mbox_wrk;
-	mdev = &mbox->dev[pf];
+	switch (type) {
+	case TYPE_AFPF:
+		mw = &rvu->afpf_wq_info;
+		break;
+	case TYPE_AFVF:
+		mw = &rvu->afvf_wq_info;
+		break;
+	default:
+		return;
+	}
+
+	devid = mwork - mw->mbox_wrk;
+	mbox = &mw->mbox;
+	mdev = &mbox->dev[devid];
 
 	/* Process received mbox messages */
 	req_hdr = mdev->mbase + mbox->rx_start;
@@ -1408,10 +1435,21 @@ static void rvu_mbox_handler(struct work_struct *work)
 	for (id = 0; id < req_hdr->num_msgs; id++) {
 		msg = mdev->mbase + offset;
 
-		/* Set which PF sent this message based on mbox IRQ */
-		msg->pcifunc &= ~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT);
-		msg->pcifunc |= (pf << RVU_PFVF_PF_SHIFT);
-		err = rvu_process_mbox_msg(rvu, pf, msg);
+		/* Set which PF/VF sent this message based on mbox IRQ */
+		switch (type) {
+		case TYPE_AFPF:
+			msg->pcifunc &=
+				~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT);
+			msg->pcifunc |= (devid << RVU_PFVF_PF_SHIFT);
+			break;
+		case TYPE_AFVF:
+			msg->pcifunc &=
+				~(RVU_PFVF_FUNC_MASK << RVU_PFVF_FUNC_SHIFT);
+			msg->pcifunc |= (devid << RVU_PFVF_FUNC_SHIFT) + 1;
+			break;
+		}
+
+		err = rvu_process_mbox_msg(mbox, devid, msg);
 		if (!err) {
 			offset = mbox->rx_start + msg->next_msgoff;
 			continue;
@@ -1419,31 +1457,57 @@ static void rvu_mbox_handler(struct work_struct *work)
 
 		if (msg->pcifunc & RVU_PFVF_FUNC_MASK)
 			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d:VF%d\n",
-				 err, otx2_mbox_id2name(msg->id), msg->id, pf,
+				 err, otx2_mbox_id2name(msg->id),
+				 msg->id, devid,
 				 (msg->pcifunc & RVU_PFVF_FUNC_MASK) - 1);
 		else
 			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d\n",
-				 err, otx2_mbox_id2name(msg->id), msg->id, pf);
+				 err, otx2_mbox_id2name(msg->id),
+				 msg->id, devid);
 	}
 
-	/* Send mbox responses to PF */
-	otx2_mbox_msg_send(mbox, pf);
+	/* Send mbox responses to VF/PF */
+	otx2_mbox_msg_send(mbox, devid);
 }
 
-static void rvu_mbox_up_handler(struct work_struct *work)
+static inline void rvu_afpf_mbox_handler(struct work_struct *work)
 {
 	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+
+	__rvu_mbox_handler(mwork, TYPE_AFPF);
+}
+
+static inline void rvu_afvf_mbox_handler(struct work_struct *work)
+{
+	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+
+	__rvu_mbox_handler(mwork, TYPE_AFVF);
+}
+
+static void __rvu_mbox_up_handler(struct rvu_work *mwork, int type)
+{
 	struct rvu *rvu = mwork->rvu;
 	struct otx2_mbox_dev *mdev;
 	struct mbox_hdr *rsp_hdr;
 	struct mbox_msghdr *msg;
+	struct mbox_wq_info *mw;
 	struct otx2_mbox *mbox;
-	int offset, id;
-	u16 pf;
+	int offset, id, devid;
+
+	switch (type) {
+	case TYPE_AFPF:
+		mw = &rvu->afpf_wq_info;
+		break;
+	case TYPE_AFVF:
+		mw = &rvu->afvf_wq_info;
+		break;
+	default:
+		return;
+	}
 
-	mbox = &rvu->mbox_up;
-	pf = mwork - rvu->mbox_wrk_up;
-	mdev = &mbox->dev[pf];
+	devid = mwork - mw->mbox_wrk_up;
+	mbox = &mw->mbox_up;
+	mdev = &mbox->dev[devid];
 
 	rsp_hdr = mdev->mbase + mbox->rx_start;
 	if (rsp_hdr->num_msgs == 0) {
@@ -1484,128 +1548,182 @@ static void rvu_mbox_up_handler(struct work_struct *work)
 		mdev->msgs_acked++;
 	}
 
-	otx2_mbox_reset(mbox, 0);
+	otx2_mbox_reset(mbox, devid);
 }
 
-static int rvu_mbox_init(struct rvu *rvu)
+static inline void rvu_afpf_mbox_up_handler(struct work_struct *work)
 {
-	struct rvu_hwinfo *hw = rvu->hw;
-	void __iomem *hwbase = NULL;
+	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+
+	__rvu_mbox_up_handler(mwork, TYPE_AFPF);
+}
+
+static inline void rvu_afvf_mbox_up_handler(struct work_struct *work)
+{
+	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+
+	__rvu_mbox_up_handler(mwork, TYPE_AFVF);
+}
+
+static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+			 int type, int num,
+			 void (mbox_handler)(struct work_struct *),
+			 void (mbox_up_handler)(struct work_struct *))
+{
+	void __iomem *hwbase = NULL, *reg_base;
+	int err, i, dir, dir_up;
 	struct rvu_work *mwork;
+	const char *name;
 	u64 bar4_addr;
-	int err, pf;
 
-	rvu->mbox_wq = alloc_workqueue("rvu_afpf_mailbox",
-				       WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
-				       hw->total_pfs);
-	if (!rvu->mbox_wq)
+	switch (type) {
+	case TYPE_AFPF:
+		name = "rvu_afpf_mailbox";
+		bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR);
+		dir = MBOX_DIR_AFPF;
+		dir_up = MBOX_DIR_AFPF_UP;
+		reg_base = rvu->afreg_base;
+		break;
+	case TYPE_AFVF:
+		name = "rvu_afvf_mailbox";
+		bar4_addr = rvupf_read64(rvu, RVU_PF_VF_BAR4_ADDR);
+		dir = MBOX_DIR_PFVF;
+		dir_up = MBOX_DIR_PFVF_UP;
+		reg_base = rvu->pfreg_base;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	mw->mbox_wq = alloc_workqueue(name,
+				      WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+				      num);
+	if (!mw->mbox_wq)
 		return -ENOMEM;
 
-	rvu->mbox_wrk = devm_kcalloc(rvu->dev, hw->total_pfs,
-				     sizeof(struct rvu_work), GFP_KERNEL);
-	if (!rvu->mbox_wrk) {
+	mw->mbox_wrk = devm_kcalloc(rvu->dev, num,
+				    sizeof(struct rvu_work), GFP_KERNEL);
+	if (!mw->mbox_wrk) {
 		err = -ENOMEM;
 		goto exit;
 	}
 
-	rvu->mbox_wrk_up = devm_kcalloc(rvu->dev, hw->total_pfs,
-					sizeof(struct rvu_work), GFP_KERNEL);
-	if (!rvu->mbox_wrk_up) {
+	mw->mbox_wrk_up = devm_kcalloc(rvu->dev, num,
+				       sizeof(struct rvu_work), GFP_KERNEL);
+	if (!mw->mbox_wrk_up) {
 		err = -ENOMEM;
 		goto exit;
 	}
 
-	/* Map mbox region shared with PFs */
-	bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR);
 	/* Mailbox is a reserved memory (in RAM) region shared between
 	 * RVU devices, shouldn't be mapped as device memory to allow
 	 * unaligned accesses.
 	 */
-	hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * hw->total_pfs);
+	hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * num);
 	if (!hwbase) {
 		dev_err(rvu->dev, "Unable to map mailbox region\n");
 		err = -ENOMEM;
 		goto exit;
 	}
 
-	err = otx2_mbox_init(&rvu->mbox, hwbase, rvu->pdev, rvu->afreg_base,
-			     MBOX_DIR_AFPF, hw->total_pfs);
+	err = otx2_mbox_init(&mw->mbox, hwbase, rvu->pdev, reg_base, dir, num);
 	if (err)
 		goto exit;
 
-	err = otx2_mbox_init(&rvu->mbox_up, hwbase, rvu->pdev, rvu->afreg_base,
-			     MBOX_DIR_AFPF_UP, hw->total_pfs);
+	err = otx2_mbox_init(&mw->mbox_up, hwbase, rvu->pdev,
+			     reg_base, dir_up, num);
 	if (err)
 		goto exit;
 
-	for (pf = 0; pf < hw->total_pfs; pf++) {
-		mwork = &rvu->mbox_wrk[pf];
+	for (i = 0; i < num; i++) {
+		mwork = &mw->mbox_wrk[i];
 		mwork->rvu = rvu;
-		INIT_WORK(&mwork->work, rvu_mbox_handler);
-	}
+		INIT_WORK(&mwork->work, mbox_handler);
 
-	for (pf = 0; pf < hw->total_pfs; pf++) {
-		mwork = &rvu->mbox_wrk_up[pf];
+		mwork = &mw->mbox_wrk_up[i];
 		mwork->rvu = rvu;
-		INIT_WORK(&mwork->work, rvu_mbox_up_handler);
+		INIT_WORK(&mwork->work, mbox_up_handler);
 	}
 
 	return 0;
 exit:
 	if (hwbase)
 		iounmap((void __iomem *)hwbase);
-	destroy_workqueue(rvu->mbox_wq);
+	destroy_workqueue(mw->mbox_wq);
 	return err;
 }
 
-static void rvu_mbox_destroy(struct rvu *rvu)
+static void rvu_mbox_destroy(struct mbox_wq_info *mw)
 {
-	if (rvu->mbox_wq) {
-		flush_workqueue(rvu->mbox_wq);
-		destroy_workqueue(rvu->mbox_wq);
-		rvu->mbox_wq = NULL;
+	if (mw->mbox_wq) {
+		flush_workqueue(mw->mbox_wq);
+		destroy_workqueue(mw->mbox_wq);
+		mw->mbox_wq = NULL;
 	}
 
-	if (rvu->mbox.hwbase)
-		iounmap((void __iomem *)rvu->mbox.hwbase);
+	if (mw->mbox.hwbase)
+		iounmap((void __iomem *)mw->mbox.hwbase);
 
-	otx2_mbox_destroy(&rvu->mbox);
-	otx2_mbox_destroy(&rvu->mbox_up);
+	otx2_mbox_destroy(&mw->mbox);
+	otx2_mbox_destroy(&mw->mbox_up);
 }
 
-static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+static void rvu_queue_work(struct mbox_wq_info *mw, int first,
+			   int mdevs, u64 intr)
 {
-	struct rvu *rvu = (struct rvu *)rvu_irq;
 	struct otx2_mbox_dev *mdev;
 	struct otx2_mbox *mbox;
 	struct mbox_hdr *hdr;
+	int i;
+
+	for (i = first; i < mdevs; i++) {
+		/* start from 0 */
+		if (!(intr & BIT_ULL(i - first)))
+			continue;
+
+		mbox = &mw->mbox;
+		mdev = &mbox->dev[i];
+		hdr = mdev->mbase + mbox->rx_start;
+		if (hdr->num_msgs)
+			queue_work(mw->mbox_wq, &mw->mbox_wrk[i].work);
+
+		mbox = &mw->mbox_up;
+		mdev = &mbox->dev[i];
+		hdr = mdev->mbase + mbox->rx_start;
+		if (hdr->num_msgs)
+			queue_work(mw->mbox_wq, &mw->mbox_wrk_up[i].work);
+	}
+}
+
+static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	int vfs = pci_num_vf(rvu->pdev);
 	u64 intr;
-	u8  pf;
 
 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
 	/* Clear interrupts */
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT, intr);
 
 	/* Sync with mbox memory region */
-	smp_wmb();
+	rmb();
 
-	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
-		if (intr & (1ULL << pf)) {
-			mbox = &rvu->mbox;
-			mdev = &mbox->dev[pf];
-			hdr = mdev->mbase + mbox->rx_start;
-			if (hdr->num_msgs)
-				queue_work(rvu->mbox_wq,
-					   &rvu->mbox_wrk[pf].work);
-			mbox = &rvu->mbox_up;
-			mdev = &mbox->dev[pf];
-			hdr = mdev->mbase + mbox->rx_start;
-			if (hdr->num_msgs)
-				queue_work(rvu->mbox_wq,
-					   &rvu->mbox_wrk_up[pf].work);
-		}
+	rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr);
+
+	/* Handle VF interrupts */
+	if (vfs > 64) {
+		intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1));
+		rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), intr);
+
+		rvu_queue_work(&rvu->afvf_wq_info, 64, vfs, intr);
+		vfs -= 64;
 	}
 
+	intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(0));
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(0), intr);
+
+	rvu_queue_work(&rvu->afvf_wq_info, 0, vfs, intr);
+
 	return IRQ_HANDLED;
 }
 
@@ -1917,7 +2035,10 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_release_regions;
 
-	err = rvu_mbox_init(rvu);
+	/* Init mailbox btw AF and PFs */
+	err = rvu_mbox_init(rvu, &rvu->afpf_wq_info, TYPE_AFPF,
+			    rvu->hw->total_pfs, rvu_afpf_mbox_handler,
+			    rvu_afpf_mbox_up_handler);
 	if (err)
 		goto err_hwsetup;
 
@@ -1939,7 +2060,7 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 err_cgx:
 	rvu_cgx_wq_destroy(rvu);
 err_mbox:
-	rvu_mbox_destroy(rvu);
+	rvu_mbox_destroy(&rvu->afpf_wq_info);
 err_hwsetup:
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
@@ -1961,7 +2082,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	rvu_unregister_interrupts(rvu);
 	rvu_flr_wq_destroy(rvu);
 	rvu_cgx_wq_destroy(rvu);
-	rvu_mbox_destroy(rvu);
+	rvu_mbox_destroy(&rvu->afpf_wq_info);
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 9e3843e..b491760 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -183,6 +183,16 @@ struct rvu_hwinfo {
 	struct npc_mcam  mcam;
 };
 
+struct mbox_wq_info {
+	struct otx2_mbox mbox;
+	struct rvu_work *mbox_wrk;
+
+	struct otx2_mbox mbox_up;
+	struct rvu_work *mbox_wrk_up;
+
+	struct workqueue_struct *mbox_wq;
+};
+
 struct rvu {
 	void __iomem		*afreg_base;
 	void __iomem		*pfreg_base;
@@ -194,11 +204,8 @@ struct rvu {
 	struct mutex            rsrc_lock; /* Serialize resource alloc/free */
 
 	/* Mbox */
-	struct otx2_mbox	mbox;
-	struct rvu_work		*mbox_wrk;
-	struct otx2_mbox        mbox_up;
-	struct rvu_work		*mbox_wrk_up;
-	struct workqueue_struct *mbox_wq;
+	struct mbox_wq_info	afpf_wq_info;
+	struct mbox_wq_info	afvf_wq_info;
 
 	/* PF FLR */
 	struct rvu_work		*flr_wrk;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
index 188185c..d341ba0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
@@ -27,7 +27,7 @@ static struct _req_type __maybe_unused					\
 	struct _req_type *req;						\
 									\
 	req = (struct _req_type *)otx2_mbox_alloc_msg_rsp(		\
-		&rvu->mbox_up, devid, sizeof(struct _req_type),		\
+		&rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \
 		sizeof(struct _rsp_type));				\
 	if (!req)							\
 		return NULL;						\
@@ -181,8 +181,8 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
 		if (!msg)
 			continue;
 		msg->link_info = *linfo;
-		otx2_mbox_msg_send(&rvu->mbox_up, pfid);
-		err = otx2_mbox_wait_for_rsp(&rvu->mbox_up, pfid);
+		otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pfid);
+		err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
 		if (err)
 			dev_warn(rvu->dev, "notification to pf %d failed\n",
 				 pfid);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 16/20] octeontx2-af: Enable sriov on AF to create VFs
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (14 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 15/20] octeontx2-af: Mbox communication support btw AF and it's VFs sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 17/20] octeontx2-af: Configure AF VFs to talk over LBK channels sunil.kovvuri
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Tomasz Duszynski, Sunil Goutham

From: Tomasz Duszynski <tduszynski@marvell.com>

Enable all AF VFs during probe. Since AF's VFs work in pairs
(e.g. packets sent on VF0 are received by VF1 and vice versa),
enable only an even number of VFs out of totalVFs, which should
also be no more than the number of loopback (LBK) channels.

Also enable the VFs' mailbox interrupts.
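
A condensed sketch of the VF count selection described above (illustrative
only; the driver does this inline in rvu_enable_sriov()):

static int afvf_count(int totalvfs, int lbk_chans)
{
	/* Never enable more VFs than there are LBK channels... */
	int vfs = min(totalvfs, lbk_chans);

	/* ...and round down to an even count so that every enabled VF has
	 * a loopback partner.
	 */
	return vfs & ~1;
}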

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 185 +++++++++++++++++++++++-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h |   3 +-
 2 files changed, 185 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index cf4df08..8e51f4e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1698,7 +1698,7 @@ static void rvu_queue_work(struct mbox_wq_info *mw, int first,
 static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
 {
 	struct rvu *rvu = (struct rvu *)rvu_irq;
-	int vfs = pci_num_vf(rvu->pdev);
+	int vfs = rvu->vfs;
 	u64 intr;
 
 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
@@ -1858,9 +1858,25 @@ static void rvu_unregister_interrupts(struct rvu *rvu)
 	rvu->num_vec = 0;
 }
 
+static int rvu_afvf_msix_vectors_num_ok(struct rvu *rvu)
+{
+	struct rvu_pfvf *pfvf = &rvu->pf[0];
+	int offset;
+
+	pfvf = &rvu->pf[0];
+	offset = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff;
+
+	/* Make sure there are enough MSIX vectors configured so that
+	 * VF interrupts can be handled. Offset equal to zero means
+	 * that PF vectors are not configured and overlapping AF vectors.
+	 */
+	return (pfvf->msix.max >= RVU_AF_INT_VEC_CNT + RVU_PF_INT_VEC_CNT) &&
+	       offset;
+}
+
 static int rvu_register_interrupts(struct rvu *rvu)
 {
-	int ret;
+	int ret, offset;
 
 	rvu->num_vec = pci_msix_vec_count(rvu->pdev);
 
@@ -1921,6 +1937,40 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S,
 		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
 
+	if (!rvu_afvf_msix_vectors_num_ok(rvu))
+		return 0;
+
+	/* Get PF MSIX vectors offset. */
+	offset = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff;
+
+	/* Register MBOX0 interrupt. */
+	offset += RVU_PF_INT_VEC_VFPF_MBOX0;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox0");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_mbox_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE],
+			  rvu);
+	if (ret)
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for Mbox0\n");
+
+	rvu->irq_allocated[offset] = true;
+
+	/* Register MBOX1 interrupt. MBOX1 IRQ number follows MBOX0 so
+	 * simply increment current offset by 1.
+	 */
+	offset += 1;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox1");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_mbox_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE],
+			  rvu);
+	if (ret)
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for Mbox1\n");
+
+	rvu->irq_allocated[offset] = true;
+
 	return 0;
 
 fail:
@@ -1973,6 +2023,129 @@ static int rvu_flr_init(struct rvu *rvu)
 	return 0;
 }
 
+static void rvu_disable_afvf_mbox_intr(struct rvu *rvu)
+{
+	int vfs = rvu->vfs;
+
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0), INTR_MASK(vfs));
+	if (vfs > 64)
+		rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1),
+			      INTR_MASK(vfs - 64));
+}
+
+static void rvu_enable_afvf_mbox_intr(struct rvu *rvu)
+{
+	int vfs = rvu->vfs;
+
+	/* Clear any pending interrupts and enable AF VF interrupts for
+	 * the first 64 VFs.
+	 */
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(0), INTR_MASK(vfs));
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(0), INTR_MASK(vfs));
+
+	/* Same for remaining VFs, if any. */
+	if (vfs <= 64)
+		return;
+
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(1),
+		      INTR_MASK(vfs - 64));
+}
+
+#define PCI_DEVID_OCTEONTX2_LBK 0xA061
+
+static int lbk_get_num_chans(void)
+{
+	struct pci_dev *pdev;
+	void __iomem *base;
+	int ret = -EIO;
+
+	pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_LBK,
+			      NULL);
+	if (!pdev)
+		goto err;
+
+	base = pci_ioremap_bar(pdev, 0);
+	if (!base)
+		goto err_put;
+
+	/* Read number of available LBK channels from LBK(0)_CONST register. */
+	ret = (readq(base + 0x10) >> 32) & 0xffff;
+	iounmap(base);
+err_put:
+	pci_dev_put(pdev);
+err:
+	return ret;
+}
+
+static int rvu_enable_sriov(struct rvu *rvu)
+{
+	struct pci_dev *pdev = rvu->pdev;
+	int err, chans, vfs;
+
+	if (!rvu_afvf_msix_vectors_num_ok(rvu)) {
+		dev_warn(&pdev->dev,
+			 "Skipping SRIOV enablement since not enough IRQs are available\n");
+		return 0;
+	}
+
+	chans = lbk_get_num_chans();
+	if (chans < 0)
+		return chans;
+
+	vfs = pci_sriov_get_totalvfs(pdev);
+
+	/* Limit VFs in case we have more VFs than LBK channels available. */
+	if (vfs > chans)
+		vfs = chans;
+
+	/* AF's VFs work in pairs and talk over consecutive loopback channels.
+	 * Thus we want to enable maximum even number of VFs. In case
+	 * odd number of VFs are available then the last VF on the list
+	 * remains disabled.
+	 */
+	if (vfs & 0x1) {
+		dev_warn(&pdev->dev,
+			 "Number of VFs should be even. Enabling %d out of %d.\n",
+			 vfs - 1, vfs);
+		vfs--;
+	}
+
+	if (!vfs)
+		return 0;
+
+	/* Save VFs number for reference in VF interrupts handlers.
+	 * Since interrupts might start arriving during SRIOV enablement
+	 * ordinary API cannot be used to get number of enabled VFs.
+	 */
+	rvu->vfs = vfs;
+
+	err = rvu_mbox_init(rvu, &rvu->afvf_wq_info, TYPE_AFVF, vfs,
+			    rvu_afvf_mbox_handler, rvu_afvf_mbox_up_handler);
+	if (err)
+		return err;
+
+	rvu_enable_afvf_mbox_intr(rvu);
+	/* Make sure IRQs are enabled before SRIOV. */
+	mb();
+
+	err = pci_enable_sriov(pdev, vfs);
+	if (err) {
+		rvu_disable_afvf_mbox_intr(rvu);
+		rvu_mbox_destroy(&rvu->afvf_wq_info);
+		return err;
+	}
+
+	return 0;
+}
+
+static void rvu_disable_sriov(struct rvu *rvu)
+{
+	rvu_disable_afvf_mbox_intr(rvu);
+	rvu_mbox_destroy(&rvu->afvf_wq_info);
+	pci_disable_sriov(rvu->pdev);
+}
+
 static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
@@ -2054,7 +2227,14 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_flr;
 
+	/* Enable AF's VFs (if any) */
+	err = rvu_enable_sriov(rvu);
+	if (err)
+		goto err_irq;
+
 	return 0;
+err_irq:
+	rvu_unregister_interrupts(rvu);
 err_flr:
 	rvu_flr_wq_destroy(rvu);
 err_cgx:
@@ -2083,6 +2263,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	rvu_flr_wq_destroy(rvu);
 	rvu_cgx_wq_destroy(rvu);
 	rvu_mbox_destroy(&rvu->afpf_wq_info);
+	rvu_disable_sriov(rvu);
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index b491760..ffed649 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -201,7 +201,8 @@ struct rvu {
 	struct rvu_hwinfo       *hw;
 	struct rvu_pfvf		*pf;
 	struct rvu_pfvf		*hwvf;
-	struct mutex            rsrc_lock; /* Serialize resource alloc/free */
+	struct mutex		rsrc_lock; /* Serialize resource alloc/free */
+	int			vfs; /* Number of VFs attached to RVU */
 
 	/* Mbox */
 	struct mbox_wq_info	afpf_wq_info;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 17/20] octeontx2-af: Configure AF VFs to talk over LBK channels
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (15 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 16/20] octeontx2-af: Enable sriov on AF to create VFs sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:35 ` [PATCH 18/20] octeontx2-af: Add FLR handling support for AF's VFs sunil.kovvuri
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Tomasz Duszynski, Sunil Goutham

From: Tomasz Duszynski <tduszynski@marvell.com>

Configure AF VFs such that they are able to talk over consecutive
loopback channels.

If 8 VFs are attached to AF then communication will work as below:

TX      RX
lbk0 -> lbk1
lbk1 -> lbk0

lbk2 -> lbk3
lbk3 -> lbk2

lbk4 -> lbk5
lbk5 -> lbk4

lbk6 -> lbk7
lbk7 -> lbk6
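
In terms of the channel macros, each VF receives on its own LBK channel
and transmits on its pair's channel, which the change in
nix_interface_init() below amounts to (sketch only):

	pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(0, vf);
	pfvf->tx_chan_base = NIX_CHAN_LBK_CHX(0, vf ^ 1); /* partner VF's channel */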

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/common.h |  2 ++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  5 +++++
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c    | 25 +++++++++++++++++++---
 .../net/ethernet/marvell/octeontx2/af/rvu_npc.c    | 12 +++++++----
 4 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index a8c89df..9bddb03 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -174,7 +174,9 @@ enum nix_scheduler {
 
 #define MAX_LMAC_PKIND			12
 #define NIX_LINK_CGX_LMAC(a, b)		(0 + 4 * (a) + (b))
+#define NIX_LINK_LBK(a)			(12 + (a))
 #define NIX_CHAN_CGX_LMAC_CHX(a, b, c)	(0x800 + 0x100 * (a) + 0x10 * (b) + (c))
+#define NIX_CHAN_LBK_CHX(a, b)		(0 + 0x100 * (a) + (b))
 
 /* NIX LSO format indices.
  * As of now TSO is the only one using, so statically assigning indices.
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index ffed649..cf9139e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -258,6 +258,11 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
 /* Function Prototypes
  * RVU
  */
+static inline int is_afvf(u16 pcifunc)
+{
+	return !(pcifunc & ~RVU_PFVF_FUNC_MASK);
+}
+
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
 int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
 void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index ad99cf7..1f73c05 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -144,7 +144,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
 {
 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
 	u8 cgx_id, lmac_id;
-	int pkind, pf;
+	int pkind, pf, vf;
 	int err;
 
 	pf = rvu_get_pf(pcifunc);
@@ -170,6 +170,14 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
 		rvu_npc_set_pkind(rvu, pkind, pfvf);
 		break;
 	case NIX_INTF_TYPE_LBK:
+		vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
+		pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(0, vf);
+		pfvf->tx_chan_base = vf & 0x1 ? NIX_CHAN_LBK_CHX(0, vf - 1) :
+						NIX_CHAN_LBK_CHX(0, vf + 1);
+		pfvf->rx_chan_cnt = 1;
+		pfvf->tx_chan_cnt = 1;
+		rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
+					      pfvf->rx_chan_base, false);
 		break;
 	}
 
@@ -684,7 +692,7 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 				  struct nix_lf_alloc_req *req,
 				  struct nix_lf_alloc_rsp *rsp)
 {
-	int nixlf, qints, hwctx_size, err, rc = 0;
+	int nixlf, qints, hwctx_size, intf, err, rc = 0;
 	struct rvu_hwinfo *hw = rvu->hw;
 	u16 pcifunc = req->hdr.pcifunc;
 	struct rvu_block *block;
@@ -839,7 +847,8 @@ int rvu_mbox_handler_NIX_LF_ALLOC(struct rvu *rvu,
 	/* Config Rx pkt length, csum checks and apad  enable / disable */
 	rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_CFG(nixlf), req->rx_cfg);
 
-	err = nix_interface_init(rvu, pcifunc, NIX_INTF_TYPE_CGX, nixlf);
+	intf = is_afvf(pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+	err = nix_interface_init(rvu, pcifunc, intf, nixlf);
 	if (err)
 		goto free_mem;
 
@@ -1354,6 +1363,10 @@ static int nix_update_bcast_mce_list(struct rvu *rvu, u16 pcifunc, bool add)
 	struct rvu_pfvf *pfvf;
 	int blkaddr;
 
+	/* Broadcast pkt replication is not needed for AF's VFs, hence skip */
+	if (is_afvf(pcifunc))
+		return 0;
+
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
 	if (blkaddr < 0)
 		return 0;
@@ -1966,6 +1979,12 @@ int rvu_mbox_handler_NIX_RXVLAN_ALLOC(struct rvu *rvu, struct msg_req *req,
 	int blkaddr, nixlf, err;
 	struct rvu_pfvf *pfvf;
 
+	/* LBK VFs do not have separate MCAM UCAST entry hence
+	 * skip allocating rxvlan for them
+	 */
+	if (is_afvf(pcifunc))
+		return 0;
+
 	pfvf = rvu_get_pfvf(rvu, pcifunc);
 	if (pfvf->rxvlan)
 		return 0;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 5dbb5cd..e61f5b3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -314,6 +314,10 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
 	int blkaddr, index, kwi;
 	u64 mac = 0;
 
+	/* AF's VFs work in promiscuous mode */
+	if (is_afvf(pcifunc))
+		return;
+
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0)
 		return;
@@ -371,12 +375,12 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
 	struct nix_rx_action action = { };
 	int blkaddr, index, kwi;
 
-	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
-	if (blkaddr < 0)
+	/* Only PF or AF VF can add a promiscuous entry */
+	if ((pcifunc & RVU_PFVF_FUNC_MASK) && !is_afvf(pcifunc))
 		return;
 
-	/* Only PF or AF VF can add a promiscuous entry */
-	if (pcifunc & RVU_PFVF_FUNC_MASK)
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+	if (blkaddr < 0)
 		return;
 
 	index = npc_get_nixlf_mcam_index(mcam, pcifunc,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 18/20] octeontx2-af: Add FLR handling support for AF's VFs
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (16 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 17/20] octeontx2-af: Configure AF VFs to talk over LBK channels sunil.kovvuri
@ 2018-11-08 18:35 ` sunil.kovvuri
  2018-11-08 18:36 ` [PATCH 19/20] octeontx2-af: Add interrupt handlers for Master Enable event sunil.kovvuri
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:35 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Added support to handle FLR for AF's VFs (i.e. LBK VFs).
This covers only the FLR interrupt enable/disable and handler
registration; the actual HW resource cleanup and LF teardown
logic is already in place.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 129 ++++++++++++++++++++----
 1 file changed, 110 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 8e51f4e..1ef104a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1788,6 +1788,23 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
 	mutex_unlock(&rvu->flr_lock);
 }
 
+static void rvu_afvf_flr_handler(struct rvu *rvu, int vf)
+{
+	int reg = 0;
+
+	/* pcifunc = 0(PF0) | (vf + 1) */
+	__rvu_flr_handler(rvu, vf + 1);
+
+	if (vf >= 64) {
+		reg = 1;
+		vf = vf - 64;
+	}
+
+	/* Signal FLR finish and enable IRQ */
+	rvupf_write64(rvu, RVU_PF_VFTRPENDX(reg), BIT_ULL(vf));
+	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(reg), BIT_ULL(vf));
+}
+
 static void rvu_flr_handler(struct work_struct *work)
 {
 	struct rvu_work *flrwork = container_of(work, struct rvu_work, work);
@@ -1797,6 +1814,10 @@ static void rvu_flr_handler(struct work_struct *work)
 	int pf;
 
 	pf = flrwork - rvu->flr_wrk;
+	if (pf >= rvu->hw->total_pfs) {
+		rvu_afvf_flr_handler(rvu, pf - rvu->hw->total_pfs);
+		return;
+	}
 
 	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
 	numvfs = (cfg >> 12) & 0xFF;
@@ -1814,6 +1835,29 @@ static void rvu_flr_handler(struct work_struct *work)
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S,  BIT_ULL(pf));
 }
 
+static void rvu_afvf_queue_flr_work(struct rvu *rvu, int start_vf, int numvfs)
+{
+	int dev, vf, reg = 0;
+	u64 intr;
+
+	if (start_vf >= 64)
+		reg = 1;
+
+	intr = rvupf_read64(rvu, RVU_PF_VFFLR_INTX(reg));
+	if (!intr)
+		return;
+
+	for (vf = 0; vf < numvfs; vf++) {
+		if (!(intr & BIT_ULL(vf)))
+			continue;
+		dev = vf + start_vf + rvu->hw->total_pfs;
+		queue_work(rvu->flr_wq, &rvu->flr_wrk[dev].work);
+		/* Clear and disable the interrupt */
+		rvupf_write64(rvu, RVU_PF_VFFLR_INTX(reg), BIT_ULL(vf));
+		rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(reg), BIT_ULL(vf));
+	}
+}
+
 static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq)
 {
 	struct rvu *rvu = (struct rvu *)rvu_irq;
@@ -1821,6 +1865,8 @@ static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq)
 	u8  pf;
 
 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT);
+	if (!intr)
+		goto afvf_flr;
 
 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
 		if (intr & (1ULL << pf)) {
@@ -1834,6 +1880,12 @@ static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq)
 				    BIT_ULL(pf));
 		}
 	}
+
+afvf_flr:
+	rvu_afvf_queue_flr_work(rvu, 0, 64);
+	if (rvu->vfs > 64)
+		rvu_afvf_queue_flr_work(rvu, 64, rvu->vfs - 64);
+
 	return IRQ_HANDLED;
 }
 
@@ -1876,7 +1928,7 @@ static int rvu_afvf_msix_vectors_num_ok(struct rvu *rvu)
 
 static int rvu_register_interrupts(struct rvu *rvu)
 {
-	int ret, offset;
+	int ret, offset, pf_vec_start;
 
 	rvu->num_vec = pci_msix_vec_count(rvu->pdev);
 
@@ -1941,10 +1993,11 @@ static int rvu_register_interrupts(struct rvu *rvu)
 		return 0;
 
 	/* Get PF MSIX vectors offset. */
-	offset = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff;
+	pf_vec_start = rvu_read64(rvu, BLKADDR_RVUM,
+				  RVU_PRIV_PFX_INT_CFG(0)) & 0x3ff;
 
 	/* Register MBOX0 interrupt. */
-	offset += RVU_PF_INT_VEC_VFPF_MBOX0;
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFPF_MBOX0;
 	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox0");
 	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
 			  rvu_mbox_intr_handler, 0,
@@ -1959,7 +2012,7 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	/* Register MBOX1 interrupt. MBOX1 IRQ number follows MBOX0 so
 	 * simply increment current offset by 1.
 	 */
-	offset += 1;
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFPF_MBOX1;
 	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF Mbox1");
 	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
 			  rvu_mbox_intr_handler, 0,
@@ -1971,10 +2024,35 @@ static int rvu_register_interrupts(struct rvu *rvu)
 
 	rvu->irq_allocated[offset] = true;
 
+	/* Register FLR interrupt handler for AF's VFs */
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFFLR0;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF FLR0");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_flr_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE], rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for RVUAFVF FLR0\n");
+		goto fail;
+	}
+	rvu->irq_allocated[offset] = true;
+
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFFLR1;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF FLR1");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_flr_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE], rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for RVUAFVF FLR1\n");
+		goto fail;
+	}
+	rvu->irq_allocated[offset] = true;
+
 	return 0;
 
 fail:
-	pci_free_irq_vectors(rvu->pdev);
+	rvu_unregister_interrupts(rvu);
 	return ret;
 }
 
@@ -1985,16 +2063,16 @@ static void rvu_flr_wq_destroy(struct rvu *rvu)
 		destroy_workqueue(rvu->flr_wq);
 		rvu->flr_wq = NULL;
 	}
-	kfree(rvu->flr_wrk);
 }
 
 static int rvu_flr_init(struct rvu *rvu)
 {
+	int dev, num_devs;
 	u64 cfg;
 	int pf;
 
 	/* Enable FLR for all PFs*/
-	for (pf = 1; pf < rvu->hw->total_pfs; pf++) {
+	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
 		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
 		rvu_write64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf),
 			    cfg | BIT_ULL(22));
@@ -2006,16 +2084,17 @@ static int rvu_flr_init(struct rvu *rvu)
 	if (!rvu->flr_wq)
 		return -ENOMEM;
 
-	rvu->flr_wrk = devm_kcalloc(rvu->dev, rvu->hw->total_pfs,
+	num_devs = rvu->hw->total_pfs + pci_sriov_get_totalvfs(rvu->pdev);
+	rvu->flr_wrk = devm_kcalloc(rvu->dev, num_devs,
 				    sizeof(struct rvu_work), GFP_KERNEL);
 	if (!rvu->flr_wrk) {
 		destroy_workqueue(rvu->flr_wq);
 		return -ENOMEM;
 	}
 
-	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
-		rvu->flr_wrk[pf].rvu = rvu;
-		INIT_WORK(&rvu->flr_wrk[pf].work, rvu_flr_handler);
+	for (dev = 0; dev < num_devs; dev++) {
+		rvu->flr_wrk[dev].rvu = rvu;
+		INIT_WORK(&rvu->flr_wrk[dev].work, rvu_flr_handler);
 	}
 
 	mutex_init(&rvu->flr_lock);
@@ -2023,26 +2102,35 @@ static int rvu_flr_init(struct rvu *rvu)
 	return 0;
 }
 
-static void rvu_disable_afvf_mbox_intr(struct rvu *rvu)
+static void rvu_disable_afvf_intr(struct rvu *rvu)
 {
 	int vfs = rvu->vfs;
 
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0), INTR_MASK(vfs));
-	if (vfs > 64)
-		rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1),
-			      INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(0), INTR_MASK(vfs));
+	if (vfs <= 64)
+		return;
+
+	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1),
+		      INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(1), INTR_MASK(vfs - 64));
 }
 
-static void rvu_enable_afvf_mbox_intr(struct rvu *rvu)
+static void rvu_enable_afvf_intr(struct rvu *rvu)
 {
 	int vfs = rvu->vfs;
 
 	/* Clear any pending interrupts and enable AF VF interrupts for
 	 * the first 64 VFs.
 	 */
+	/* Mbox */
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(0), INTR_MASK(vfs));
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(0), INTR_MASK(vfs));
 
+	/* FLR */
+	rvupf_write64(rvu, RVU_PF_VFFLR_INTX(0), INTR_MASK(vfs));
+	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(0), INTR_MASK(vfs));
+
 	/* Same for remaining VFs, if any. */
 	if (vfs <= 64)
 		return;
@@ -2050,6 +2138,9 @@ static void rvu_enable_afvf_mbox_intr(struct rvu *rvu)
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), INTR_MASK(vfs - 64));
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1SX(1),
 		      INTR_MASK(vfs - 64));
+
+	rvupf_write64(rvu, RVU_PF_VFFLR_INTX(1), INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(1), INTR_MASK(vfs - 64));
 }
 
 #define PCI_DEVID_OCTEONTX2_LBK 0xA061
@@ -2125,13 +2216,13 @@ static int rvu_enable_sriov(struct rvu *rvu)
 	if (err)
 		return err;
 
-	rvu_enable_afvf_mbox_intr(rvu);
+	rvu_enable_afvf_intr(rvu);
 	/* Make sure IRQs are enabled before SRIOV. */
 	mb();
 
 	err = pci_enable_sriov(pdev, vfs);
 	if (err) {
-		rvu_disable_afvf_mbox_intr(rvu);
+		rvu_disable_afvf_intr(rvu);
 		rvu_mbox_destroy(&rvu->afvf_wq_info);
 		return err;
 	}
@@ -2141,7 +2232,7 @@ static int rvu_enable_sriov(struct rvu *rvu)
 
 static void rvu_disable_sriov(struct rvu *rvu)
 {
-	rvu_disable_afvf_mbox_intr(rvu);
+	rvu_disable_afvf_intr(rvu);
 	rvu_mbox_destroy(&rvu->afvf_wq_info);
 	pci_disable_sriov(rvu->pdev);
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 19/20] octeontx2-af: Add interrupt handlers for Master Enable event
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (17 preceding siblings ...)
  2018-11-08 18:35 ` [PATCH 18/20] octeontx2-af: Add FLR handling support for AF's VFs sunil.kovvuri
@ 2018-11-08 18:36 ` sunil.kovvuri
  2018-11-08 18:36 ` [PATCH 20/20] octeontx2-af: Workarounds for HW errata sunil.kovvuri
  2018-11-08 21:02 ` [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling Arnd Bergmann
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:36 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Linu Cherian, Sunil Goutham

From: Linu Cherian <lcherian@marvell.com>

- Add interrupt handlers for Master Enable events from PFs
  and Master Enable events from VFs of AF
- Master Enable is required for the MSIX delivery to work
- Master Enable bit trap handler doesn't have to do anything
  other than clearing the TRPEND bit, since the enable/disable
  requirements are already taken care of via mbox requests and the FLR
  handler.

Signed-off-by: Linu Cherian <lcherian@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 113 ++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 1ef104a..395819d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1889,6 +1889,67 @@ static irqreturn_t rvu_flr_intr_handler(int irq, void *rvu_irq)
 	return IRQ_HANDLED;
 }
 
+static void rvu_me_handle_vfset(struct rvu *rvu, int idx, u64 intr)
+{
+	int vf;
+
+	/* Nothing to be done here other than clearing the
+	 * TRPEND bit.
+	 */
+	for (vf = 0; vf < 64; vf++) {
+		if (intr & (1ULL << vf)) {
+			/* clear the trpend due to ME(master enable) */
+			rvupf_write64(rvu, RVU_PF_VFTRPENDX(idx), BIT_ULL(vf));
+			/* clear interrupt */
+			rvupf_write64(rvu, RVU_PF_VFME_INTX(idx), BIT_ULL(vf));
+		}
+	}
+}
+
+/* Handles ME interrupts from VFs of AF */
+static irqreturn_t rvu_me_vf_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	int vfset;
+	u64 intr;
+
+	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT);
+
+	for (vfset = 0; vfset <= 1; vfset++) {
+		intr = rvupf_read64(rvu, RVU_PF_VFME_INTX(vfset));
+		if (intr)
+			rvu_me_handle_vfset(rvu, vfset, intr);
+	}
+
+	return IRQ_HANDLED;
+}
+
+/* Handles ME interrupts from PFs */
+static irqreturn_t rvu_me_pf_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	u64 intr;
+	u8  pf;
+
+	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT);
+
+	/* Nothing to be done here other than clearing the
+	 * TRPEND bit.
+	 */
+	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+		if (intr & (1ULL << pf)) {
+			/* clear the trpend due to ME(master enable) */
+			rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFTRPEND,
+				    BIT_ULL(pf));
+			/* clear interrupt */
+			rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT,
+				    BIT_ULL(pf));
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
 static void rvu_unregister_interrupts(struct rvu *rvu)
 {
 	int irq;
@@ -1901,6 +1962,10 @@ static void rvu_unregister_interrupts(struct rvu *rvu)
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1C,
 		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
 
+	/* Disable the PF ME interrupt */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT_ENA_W1C,
+		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+
 	for (irq = 0; irq < rvu->num_vec; irq++) {
 		if (rvu->irq_allocated[irq])
 			free_irq(pci_irq_vector(rvu->pdev, irq), rvu);
@@ -1989,6 +2054,26 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFFLR_INT_ENA_W1S,
 		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
 
+	/* Register ME interrupt handler */
+	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_PFME * NAME_SIZE],
+		"RVUAF ME");
+	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_PFME),
+			  rvu_me_pf_intr_handler, 0,
+			  &rvu->irq_name[RVU_AF_INT_VEC_PFME * NAME_SIZE],
+			  rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for ME\n");
+	}
+	rvu->irq_allocated[RVU_AF_INT_VEC_PFME] = true;
+
+	/* Enable ME interrupt for all PFs*/
+	rvu_write64(rvu, BLKADDR_RVUM,
+		    RVU_AF_PFME_INT, INTR_MASK(rvu->hw->total_pfs));
+
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFME_INT_ENA_W1S,
+		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+
 	if (!rvu_afvf_msix_vectors_num_ok(rvu))
 		return 0;
 
@@ -2049,6 +2134,30 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	}
 	rvu->irq_allocated[offset] = true;
 
+	/* Register ME interrupt handler for AF's VFs */
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFME0;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF ME0");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_me_vf_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE], rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for RVUAFVF ME0\n");
+		goto fail;
+	}
+	rvu->irq_allocated[offset] = true;
+
+	offset = pf_vec_start + RVU_PF_INT_VEC_VFME1;
+	sprintf(&rvu->irq_name[offset * NAME_SIZE], "RVUAFVF ME1");
+	ret = request_irq(pci_irq_vector(rvu->pdev, offset),
+			  rvu_me_vf_intr_handler, 0,
+			  &rvu->irq_name[offset * NAME_SIZE], rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for RVUAFVF ME1\n");
+		goto fail;
+	}
+	rvu->irq_allocated[offset] = true;
 	return 0;
 
 fail:
@@ -2108,12 +2217,14 @@ static void rvu_disable_afvf_intr(struct rvu *rvu)
 
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0), INTR_MASK(vfs));
 	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(0), INTR_MASK(vfs));
+	rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1CX(0), INTR_MASK(vfs));
 	if (vfs <= 64)
 		return;
 
 	rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1),
 		      INTR_MASK(vfs - 64));
 	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1CX(1), INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1CX(1), INTR_MASK(vfs - 64));
 }
 
 static void rvu_enable_afvf_intr(struct rvu *rvu)
@@ -2130,6 +2241,7 @@ static void rvu_enable_afvf_intr(struct rvu *rvu)
 	/* FLR */
 	rvupf_write64(rvu, RVU_PF_VFFLR_INTX(0), INTR_MASK(vfs));
 	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(0), INTR_MASK(vfs));
+	rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1SX(0), INTR_MASK(vfs));
 
 	/* Same for remaining VFs, if any. */
 	if (vfs <= 64)
@@ -2141,6 +2253,7 @@ static void rvu_enable_afvf_intr(struct rvu *rvu)
 
 	rvupf_write64(rvu, RVU_PF_VFFLR_INTX(1), INTR_MASK(vfs - 64));
 	rvupf_write64(rvu, RVU_PF_VFFLR_INT_ENA_W1SX(1), INTR_MASK(vfs - 64));
+	rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1SX(1), INTR_MASK(vfs - 64));
 }
 
 #define PCI_DEVID_OCTEONTX2_LBK 0xA061
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 20/20] octeontx2-af: Workarounds for HW errata
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (18 preceding siblings ...)
  2018-11-08 18:36 ` [PATCH 19/20] octeontx2-af: Add interrupt handlers for Master Enable event sunil.kovvuri
@ 2018-11-08 18:36 ` sunil.kovvuri
  2018-11-08 21:02 ` [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling Arnd Bergmann
  20 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-11-08 18:36 UTC (permalink / raw)
  To: netdev, davem; +Cc: arnd, linux-soc, Sunil Goutham, Jerin Jacob

From: Sunil Goutham <sgoutham@marvell.com>

Errata 35038
  Software sets NIX_AF_RX_SW_SYNC[ENA] to sync (flush) in-flight packets
  in the RX data path before configuration changes (e.g. disabling one or
  more RQs). Hardware clears [ENA] to indicate the sync is done.

  An issue exists whereby NIX may clear NIX_AF_RX_SW_SYNC [ENA] too
  early.

Errata 35057
  NIX may corrupt internal state when conditional clocks turn off.
  So turn on all clocks by default.

Errata 35786
  The parse nibble enable configuration in NPC for key generation has to
  be identical for both Rx and Tx interfaces.

Also corrected the endianness configuration for NIX, i.e. NIX_AF_CFG[AF_BE]
is bit 8 and not bit 1.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h     | 12 ++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c | 18 ++++++++++++++++--
 drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c | 12 +++++++++---
 3 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index cf9139e..aa9f8c2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -11,6 +11,7 @@
 #ifndef RVU_H
 #define RVU_H
 
+#include <linux/pci.h>
 #include "rvu_struct.h"
 #include "common.h"
 #include "mbox.h"
@@ -18,6 +19,9 @@
 /* PCI device IDs */
 #define	PCI_DEVID_OCTEONTX2_RVU_AF		0xA065
 
+/* Subsystem Device ID */
+#define PCI_SUBSYS_DEVID_96XX                  0xB200
+
 /* PCI BAR nos */
 #define	PCI_AF_REG_BAR_NUM			0
 #define	PCI_PF_REG_BAR_NUM			2
@@ -255,6 +259,14 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
 	return readq(rvu->pfreg_base + offset);
 }
 
+static inline bool is_rvu_9xxx_A0(struct rvu *rvu)
+{
+	struct pci_dev *pdev = rvu->pdev;
+
+	return (pdev->revision == 0x00) &&
+		(pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX);
+}
+
 /* Function Prototypes
  * RVU
  */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 1f73c05..63df40d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -114,6 +114,12 @@ static void nix_rx_sync(struct rvu *rvu, int blkaddr)
 	err = rvu_poll_reg(rvu, blkaddr, NIX_AF_RX_SW_SYNC, BIT_ULL(0), true);
 	if (err)
 		dev_err(rvu->dev, "NIX RX software sync failed\n");
+
+	/* As per a HW errata in 9xxx A0 silicon, HW may clear SW_SYNC[ENA]
+	 * bit too early. Hence wait for 50us more.
+	 */
+	if (is_rvu_9xxx_A0(rvu))
+		usleep_range(50, 60);
 }
 
 static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
@@ -2135,10 +2141,10 @@ static int nix_aq_init(struct rvu *rvu, struct rvu_block *block)
 	/* Set admin queue endianness */
 	cfg = rvu_read64(rvu, block->addr, NIX_AF_CFG);
 #ifdef __BIG_ENDIAN
-	cfg |= BIT_ULL(1);
+	cfg |= BIT_ULL(8);
 	rvu_write64(rvu, block->addr, NIX_AF_CFG, cfg);
 #else
-	cfg &= ~BIT_ULL(1);
+	cfg &= ~BIT_ULL(8);
 	rvu_write64(rvu, block->addr, NIX_AF_CFG, cfg);
 #endif
 
@@ -2175,6 +2181,14 @@ int rvu_nix_init(struct rvu *rvu)
 		return 0;
 	block = &hw->block[blkaddr];
 
+	/* As per a HW errata in 9xxx A0 silicon, NIX may corrupt
+	 * internal state when conditional clocks are turned off.
+	 * Hence enable them.
+	 */
+	if (is_rvu_9xxx_A0(rvu))
+		rvu_write64(rvu, blkaddr, NIX_AF_CFG,
+			    rvu_read64(rvu, blkaddr, NIX_AF_CFG) | 0x5EULL);
+
 	/* Calibrate X2P bus to check if CGX/LBK links are fine */
 	err = nix_calibrate_x2p(rvu, blkaddr);
 	if (err)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index e61f5b3..bff253c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -973,7 +973,7 @@ int rvu_npc_init(struct rvu *rvu)
 	struct npc_pkind *pkind = &rvu->hw->pkind;
 	u64 keyz = NPC_MCAM_KEY_X2;
 	int blkaddr, entry, bank, err;
-	u64 cfg;
+	u64 cfg, nibble_ena;
 
 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
 	if (blkaddr < 0) {
@@ -1022,10 +1022,16 @@ int rvu_npc_init(struct rvu *rvu)
 	/* Set RX and TX side MCAM search key size.
 	 * LA..LD (ltype only) + Channel
 	 */
+	nibble_ena = 0x49247;
 	rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_RX),
-			((keyz & 0x3) << 32) | 0x49247);
+			((keyz & 0x3) << 32) | nibble_ena);
+	/* Due to an errata (35786) in A0 pass silicon, parse nibble enable
+	 * configuration has to be identical for both Rx and Tx interfaces.
+	 */
+	if (!is_rvu_9xxx_A0(rvu))
+		nibble_ena = (1ULL << 19) - 1;
 	rvu_write64(rvu, blkaddr, NPC_AF_INTFX_KEX_CFG(NIX_INTF_TX),
-			((keyz & 0x3) << 32) | ((1ULL << 19) - 1));
+			((keyz & 0x3) << 32) | nibble_ena);
 
 	err = npc_mcam_rsrcs_init(rvu, blkaddr);
 	if (err)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-08 18:35 ` [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time sunil.kovvuri
@ 2018-11-08 20:43   ` Arnd Bergmann
  2018-11-09  4:20     ` Sunil Kovvuri
  0 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-08 20:43 UTC (permalink / raw)
  To: Sunil Kovvuri; +Cc: Networking, David Miller, linux-soc, Sunil Goutham

On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
> @@ -666,4 +668,20 @@ struct npc_mcam_unmap_counter_req {
>         u8  all;   /* Unmap all entries using this counter ? */
>  };
>
> +struct npc_mcam_alloc_and_write_entry_req {
> +       struct mbox_msghdr hdr;
> +       struct mcam_entry entry_data;
> +       u16 ref_entry;
> +       u8  priority;    /* Lower or higher w.r.t ref_entry */
> +       u8  intf;        /* Rx or Tx interface */
> +       u8  enable_entry;/* Enable this MCAM entry ? */
> +       u8  alloc_cntr;  /* Allocate counter and map ? */
> +};

I noticed that this structure requires padding at the end because
struct mbox_msghdr has a 32-bit alignment requirement. For
data structures in an interface, I'd recommend avoiding that kind
of padding and adding reserved fields or widening the types
accordingly.

I also noticed a similar problem in struct mbox_msghdr. Maybe
use the 'pahole' tool to check for this kind of padding in the
API structures.

         Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG
  2018-11-08 18:35 ` [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG sunil.kovvuri
@ 2018-11-08 20:47   ` Arnd Bergmann
  2018-11-09  4:29     ` Sunil Kovvuri
  0 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-08 20:47 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: Networking, David Miller, linux-soc, tduszynski, Sunil Goutham

On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:

> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> index f98b011..3f7e5e6 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> @@ -259,4 +259,34 @@ struct nix_rx_action {
>  #endif
>  };
>
> +struct nix_rx_vtag_action {
> +#if defined(__BIG_ENDIAN_BITFIELD)
> +       u64     rsvd_63_48      :16;
> +       u64     vtag1_valid     :1;
> +       u64     vtag1_type      :3;
> +       u64     rsvd_43         :1;
> +       u64     vtag1_lid       :3;
> +       u64     vtag1_relptr    :8;
> +       u64     rsvd_31_16      :16;
> +       u64     vtag0_valid     :1;
> +       u64     vtag0_type      :3;
> +       u64     rsvd_11         :1;
> +       u64     vtag0_lid       :3;
> +       u64     vtag0_relptr    :8;
> +#else
> +       u64     vtag0_relptr    :8;
> +       u64     vtag0_lid       :3;
> +       u64     rsvd_11         :1;
> +       u64     vtag0_type      :3;
> +       u64     vtag0_valid     :1;
> +       u64     rsvd_31_16      :16;
> +       u64     vtag1_relptr    :8;
> +       u64     vtag1_lid       :3;
> +       u64     rsvd_43         :1;
> +       u64     vtag1_type      :3;
> +       u64     vtag1_valid     :1;
> +       u64     rsvd_63_48      :16;
> +#endif
> +};

Here is another instance of bitfields in an interface structure. As
before, please try to avoid doing that and use bit shifts and masks
instead.

       Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 13/20] octeontx2-af: Add FLR interrupt handler
  2018-11-08 18:35 ` [PATCH 13/20] octeontx2-af: Add FLR interrupt handler sunil.kovvuri
@ 2018-11-08 20:50   ` Arnd Bergmann
  0 siblings, 0 replies; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-08 20:50 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: Networking, David Miller, linux-soc, Geetha sowjanya, Sunil Goutham

On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
>
> From: Geetha sowjanya <gakula@marvell.com>
>
> All RVU PF's upon receiving a FLR will trigger an IRQ to RVU AF.
> This patch enables all RVU PF's FLR interrupt and registers a
> handler. Upon receiving an IRQ a workqueue is scheduled to
> cleanup all RVU blocks being used by the PF/VF which received
> the FLR.

I assume this all makes sense, but I find it hard to read the changelog
text with all those acronyms. I probably knew what an FLR and an RVU
are in the earlier review rounds, but it would be nice to reword changelogs
like this one so they make more sense to readers that are not already
familiar with the rest of the driver.

      Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling
  2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
                   ` (19 preceding siblings ...)
  2018-11-08 18:36 ` [PATCH 20/20] octeontx2-af: Workarounds for HW errata sunil.kovvuri
@ 2018-11-08 21:02 ` Arnd Bergmann
  2018-11-09  4:34   ` Sunil Kovvuri
  20 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-08 21:02 UTC (permalink / raw)
  To: Sunil Kovvuri; +Cc: Networking, David Miller, linux-soc, Sunil Goutham

On Thu, Nov 8, 2018 at 7:36 PM <sunil.kovvuri@gmail.com> wrote:
>
> From: Sunil Goutham <sgoutham@marvell.com>

Hmm, I noticed that you use a different address as the patch author
and the submitter. I'm guessing that "Sunil Goutham" and
"Sunil Kovvuri" actually refer to the same person, and you just
need to pick which of the two email addresses you want to use
for public communication, but that's not obvious here.

However, if there are actually two different Sunil's here, then
you need to add that second Signed-off-by.

I've taken a look at all the patches now, and found very little
sticking out that warranted a comment from my side, and
no real show-stoppers. That said, I found this series overall
much harder to understand than the previous ones, and don't
even know what to ask about it. My feeling is that it's probably
all fine, but that is purely based on a review of the individual
pieces, not the overall design and how they fit together. With the
earlier patches that I managed to get a better understanding
of, that seemed reasonable as well.

      Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support
  2018-11-08 18:35 ` [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support sunil.kovvuri
@ 2018-11-08 22:22   ` David Miller
  0 siblings, 0 replies; 36+ messages in thread
From: David Miller @ 2018-11-08 22:22 UTC (permalink / raw)
  To: sunil.kovvuri; +Cc: netdev, arnd, linux-soc, sgoutham

From: sunil.kovvuri@gmail.com
Date: Fri,  9 Nov 2018 00:05:45 +0530

> +int rvu_mbox_handler_NPC_MCAM_ALLOC_ENTRY(struct rvu *rvu,
> +					  struct npc_mcam_alloc_entry_req *req,
> +					  struct npc_mcam_alloc_entry_rsp *rsp);
> +int rvu_mbox_handler_NPC_MCAM_FREE_ENTRY(struct rvu *rvu,
> +					 struct npc_mcam_free_entry_req *req,
> +					 struct msg_rsp *rsp);

Please don't use capitalized letters or StUdLyCaPs for function and variable
names.

Keep capitalized letters for CPP macros.

This is how programmers visually can see if something is a real C declaration
or some CPP stuff.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-08 20:43   ` Arnd Bergmann
@ 2018-11-09  4:20     ` Sunil Kovvuri
  2018-11-09 11:02       ` Arnd Bergmann
  0 siblings, 1 reply; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-09  4:20 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 2:13 AM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
> > @@ -666,4 +668,20 @@ struct npc_mcam_unmap_counter_req {
> >         u8  all;   /* Unmap all entries using this counter ? */
> >  };
> >
> > +struct npc_mcam_alloc_and_write_entry_req {
> > +       struct mbox_msghdr hdr;
> > +       struct mcam_entry entry_data;
> > +       u16 ref_entry;
> > +       u8  priority;    /* Lower or higher w.r.t ref_entry */
> > +       u8  intf;        /* Rx or Tx interface */
> > +       u8  enable_entry;/* Enable this MCAM entry ? */
> > +       u8  alloc_cntr;  /* Allocate counter and map ? */
> > +};
>
> I noticed that this structure requires padding at the end because
> struct mbox_msghdr has a 32-bit alignment requirement. For
> data structures in an interface, I'd recommend avoiding that kind
> of padding and adding reserved fields or widening the types
> accordingly.
>

When there are multiple messages in the mailbox, each message starts
at a 16-byte aligned offset. So struct mbox_msghdr is always aligned.
I think adding reserved fields is not needed here.

===
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
                                            int size, int size_rsp)
{
        size = ALIGN(size, MBOX_MSG_ALIGN);
===

Is this what you were referring to ?

Sunil.

> I also noticed a similar problem in struct mbox_msghdr. Maybe
> use the 'pahole' tool to check for this kind of padding in the
> API structures.
>
>          Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG
  2018-11-08 20:47   ` Arnd Bergmann
@ 2018-11-09  4:29     ` Sunil Kovvuri
  2018-11-09 11:12       ` Arnd Bergmann
  0 siblings, 1 reply; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-09  4:29 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Tomasz Duszynski,
	Sunil Goutham

On Fri, Nov 9, 2018 at 2:17 AM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> > index f98b011..3f7e5e6 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
> > @@ -259,4 +259,34 @@ struct nix_rx_action {
> >  #endif
> >  };
> >
> > +struct nix_rx_vtag_action {
> > +#if defined(__BIG_ENDIAN_BITFIELD)
> > +       u64     rsvd_63_48      :16;
> > +       u64     vtag1_valid     :1;
> > +       u64     vtag1_type      :3;
> > +       u64     rsvd_43         :1;
> > +       u64     vtag1_lid       :3;
> > +       u64     vtag1_relptr    :8;
> > +       u64     rsvd_31_16      :16;
> > +       u64     vtag0_valid     :1;
> > +       u64     vtag0_type      :3;
> > +       u64     rsvd_11         :1;
> > +       u64     vtag0_lid       :3;
> > +       u64     vtag0_relptr    :8;
> > +#else
> > +       u64     vtag0_relptr    :8;
> > +       u64     vtag0_lid       :3;
> > +       u64     rsvd_11         :1;
> > +       u64     vtag0_type      :3;
> > +       u64     vtag0_valid     :1;
> > +       u64     rsvd_31_16      :16;
> > +       u64     vtag1_relptr    :8;
> > +       u64     vtag1_lid       :3;
> > +       u64     rsvd_43         :1;
> > +       u64     vtag1_type      :3;
> > +       u64     vtag1_valid     :1;
> > +       u64     rsvd_63_48      :16;
> > +#endif
> > +};
>
> Here is another instance of bitfields in an interface structure. As
> before, please try to avoid doing that and use bit shifts and masks
> instead.
>
>        Arnd

No, this struct is not part of the communication interface.
It is used to fill a register in a somewhat more readable fashion
than plain bit shifts.

===
struct nix_rx_vtag_action vtag_action;

        *(u64 *)&vtag_action = 0;
        vtag_action.vtag0_valid = 1;
        /* must match type set in NIX_VTAG_CFG */
        vtag_action.vtag0_type = 0;
        vtag_action.vtag0_lid = NPC_LID_LA;
        vtag_action.vtag0_relptr = 12;
        entry.vtag_action = *(u64 *)&vtag_action;

        /* Set TAG 'action' */
        rvu_write64(rvu, blkaddr, NPC_AF_MCAMEX_BANKX_TAG_ACT(index, actbank),
                    entry->vtag_action);
===

Thanks,
Sunil.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling
  2018-11-08 21:02 ` [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling Arnd Bergmann
@ 2018-11-09  4:34   ` Sunil Kovvuri
  2018-11-09 11:08     ` Arnd Bergmann
  0 siblings, 1 reply; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-09  4:34 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 2:32 AM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Thu, Nov 8, 2018 at 7:36 PM <sunil.kovvuri@gmail.com> wrote:
> >
> > From: Sunil Goutham <sgoutham@marvell.com>
>
> Hmm, I noticed that you use a different address as the patch author
> and the submitter. I'm guessing that "Sunil Goutham" and
> "Sunil Kovvuri" actually refer to the same person, and you just
> need to pick which of the two email addresses you want to use
> for public communication, but that's not obvious here.
>
> However, if there are actually two different Sunil's here, then
> you need to add that second Signed-off-by.
>

No, it's just me.
Sometimes code indentation becomes messy and difficult to read if I use
the corporate mail server to submit patches. So I have been using Gmail.

> I've taken a look at all the patches now, and found very little
> sticking out that warranted a comment from my side, and
> no real show-stoppers. That said, I found this series overall
> much harder to understand than the previous ones, and don't
> even know what to ask about it. My feeling is that it's probably
> all fine, but that is  purely based on a review  of the individual
> pieces, not the overall design and how they fit together. With the
> earlier patches that I managed to get a better understanding
> of, that seemed reasonable as well.
>
>       Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-09  4:20     ` Sunil Kovvuri
@ 2018-11-09 11:02       ` Arnd Bergmann
  2018-11-09 17:13         ` Sunil Kovvuri
  0 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-09 11:02 UTC (permalink / raw)
  To: Sunil Kovvuri; +Cc: Networking, David Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 5:21 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
>
> On Fri, Nov 9, 2018 at 2:13 AM Arnd Bergmann <arnd@arndb.de> wrote:
> >
> > On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
> > > @@ -666,4 +668,20 @@ struct npc_mcam_unmap_counter_req {
> > >         u8  all;   /* Unmap all entries using this counter ? */
> > >  };
> > >
> > > +struct npc_mcam_alloc_and_write_entry_req {
> > > +       struct mbox_msghdr hdr;
> > > +       struct mcam_entry entry_data;
> > > +       u16 ref_entry;
> > > +       u8  priority;    /* Lower or higher w.r.t ref_entry */
> > > +       u8  intf;        /* Rx or Tx interface */
> > > +       u8  enable_entry;/* Enable this MCAM entry ? */
> > > +       u8  alloc_cntr;  /* Allocate counter and map ? */
> > > +};
> >
> > I noticed that this structure requires padding at the end because
> > struct mbox_msghdr has a 32-bit alignment requirement. For
> > data structures in an interface, I'd recommend avoiding that kind
> > of padding and adding reserved fields or widening the types
> > accordingly.
> >
>
> When there are multiple messages in the mailbox, each message starts
> at a 16byte aligned offset. So struct mbox_msghdr is always aligned.
> I think adding reserved fields is not needed here.
>
> ===
> struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
>                                             int size, int size_rsp)
> {
>         size = ALIGN(size, MBOX_MSG_ALIGN);
> ===
>
> Is this what you were referring to ?
>

No, I mean the padding at the end of the structure. An example
would be a structure like

struct s {
    u16 a;
    u32 b;
    u16 c;
};

Since b is aligned to four bytes, you get padding between a and b.
On top of that, you also get padding after c to make the size of
structure itself be a multiple of its alignment. For interfaces, we
should avoid both kinds of padding. This can be done by marking
members as __packed (usually I don't recommend that), by
changing the size of members, or by adding explicit 'reserved'
fields in place of the padding.
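
For example, the struct above could be laid out without any implicit
padding like this (untested, just to illustrate the idea):

struct s {
    u16 a;
    u16 reserved0; /* explicit instead of compiler-inserted padding */
    u32 b;
    u16 c;
    u16 reserved1; /* avoids the implicit tail padding */
};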

> > I also noticed a similar problem in struct mbox_msghdr. Maybe
> > use the 'pahole' tool to check for this kind of padding in the
> > API structures.

     Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling
  2018-11-09  4:34   ` Sunil Kovvuri
@ 2018-11-09 11:08     ` Arnd Bergmann
  0 siblings, 0 replies; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-09 11:08 UTC (permalink / raw)
  To: Sunil Kovvuri; +Cc: Networking, David Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 5:35 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> On Fri, Nov 9, 2018 at 2:32 AM Arnd Bergmann <arnd@arndb.de> wrote:
> > On Thu, Nov 8, 2018 at 7:36 PM <sunil.kovvuri@gmail.com> wrote:
> > > From: Sunil Goutham <sgoutham@marvell.com>
> >
> > Hmm, I noticed that you use a different address as the patch author
> > and the submitter. I'm guessing that "Sunil Goutham" and
> > "Sunil Kovvuri" actually refer to the same person, and you just
> > need to pick which of the two email addresses you want to use
> > for public communication, but that's not obvious here.
> >
> > However, if there are actually two different Sunil's here, then
> > you need to add that second Signed-off-by.
> >
>
> No, it's just me.
> Sometimes code indentation becomes messy and difficult to read, if i use
> corporate mail server to submit patches. So i have been using gmail.

Ok, I see. Ideally you should try to get the company mail server fixed
of course. A possible workaround is to add your Marvell address as
an alias in Gmail, which allows 'git send-email' to send out mails with
the other address as the sender. This may however fail if the Marvell
mail server uses SPF, as mail clients might then consider your
mails as forged.
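
Roughly like this in ~/.gitconfig (untested, adjust the server and user
to whatever you actually use):

[sendemail]
	from = Sunil Goutham <sgoutham@marvell.com>
	smtpUser = sunil.kovvuri@gmail.com
	smtpServer = smtp.gmail.com
	smtpServerPort = 587
	smtpEncryption = tls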

        Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG
  2018-11-09  4:29     ` Sunil Kovvuri
@ 2018-11-09 11:12       ` Arnd Bergmann
  2018-11-09 17:06         ` Sunil Kovvuri
  0 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-09 11:12 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: Networking, David Miller, linux-soc, tduszynski, Sunil Goutham

On Fri, Nov 9, 2018 at 5:29 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> On Fri, Nov 9, 2018 at 2:17 AM Arnd Bergmann <arnd@arndb.de> wrote:
> > On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:

> >
> > Here is another instance of bitfields in an interface structure. As
> > before, please try to avoid doing that and use bit shifts and masks
> > instead.
> >
> >        Arnd
>
> No, this struct is not part of communication interface.
> This is used to fill up a register in a bit more readable fashion
> instead of plain bit shifts.

But this is still an interface, isn't it? Writing to the register
implies that there is some hardware that interprets the
bits, so they have to be in the right place.

> ===
> struct nix_rx_vtag_action vtag_action;
>
>         *(u64 *)&vtag_action = 0;
>         vtag_action.vtag0_valid = 1;
>         /* must match type set in NIX_VTAG_CFG */
>         vtag_action.vtag0_type = 0;
>         vtag_action.vtag0_lid = NPC_LID_LA;
>         vtag_action.vtag0_relptr = 12;
>         entry.vtag_action = *(u64 *)&vtag_action;
>
>         /* Set TAG 'action' */
>         rvu_write64(rvu, blkaddr, NPC_AF_MCAMEX_BANKX_TAG_ACT(index, actbank),
>                     entry->vtag_action);

I assume this rvu_write64() does a cpu_to_le64() swap on big-endian,
so the contents again are in the wrong place. I don't see any non-reserved
fields that span an 8-bit boundary, so you can probably rearrange the bits
to make it work, but generally speaking it's better to not rely on how the
compiler lays out bit fields.
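
Something along these lines (untested, macro names made up) would keep
the layout independent of how the compiler packs bitfields:

#include <linux/bitfield.h>

#define RX_VTAG0_RELPTR_MASK	GENMASK_ULL(7, 0)
#define RX_VTAG0_LID_MASK	GENMASK_ULL(10, 8)
#define RX_VTAG0_TYPE_MASK	GENMASK_ULL(14, 12)
#define RX_VTAG0_VALID_BIT	BIT_ULL(15)

	/* build the TAG_ACT value with plain masks/shifts */
	u64 vtag_action = FIELD_PREP(RX_VTAG0_RELPTR_MASK, 12) |
			  FIELD_PREP(RX_VTAG0_LID_MASK, NPC_LID_LA) |
			  FIELD_PREP(RX_VTAG0_TYPE_MASK, 0) |
			  RX_VTAG0_VALID_BIT;

	rvu_write64(rvu, blkaddr,
		    NPC_AF_MCAMEX_BANKX_TAG_ACT(index, actbank), vtag_action);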

        Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG
  2018-11-09 11:12       ` Arnd Bergmann
@ 2018-11-09 17:06         ` Sunil Kovvuri
  0 siblings, 0 replies; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-09 17:06 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Tomasz Duszynski,
	Sunil Goutham

On Fri, Nov 9, 2018 at 4:42 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Fri, Nov 9, 2018 at 5:29 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> > On Fri, Nov 9, 2018 at 2:17 AM Arnd Bergmann <arnd@arndb.de> wrote:
> > > On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
>
> > >
> > > Here is another instance of bitfields in an interface structure. As
> > > before, please try to avoid doing that and use bit shifts and masks
> > > instead.
> > >
> > >        Arnd
> >
> > No, this struct is not part of communication interface.
> > This is used to fill up a register in a bit more readable fashion
> > instead of plain bit shifts.
>
> But this is still an interface, isn't it? Writing to the register
> implies that there is some hardware that interprets the
> bits, so they have to be in the right place.
>
> > ===
> > struct nix_rx_vtag_action vtag_action;
> >
> >         *(u64 *)&vtag_action = 0;
> >         vtag_action.vtag0_valid = 1;
> >         /* must match type set in NIX_VTAG_CFG */
> >         vtag_action.vtag0_type = 0;
> >         vtag_action.vtag0_lid = NPC_LID_LA;
> >         vtag_action.vtag0_relptr = 12;
> >         entry.vtag_action = *(u64 *)&vtag_action;
> >
> >         /* Set TAG 'action' */
> >         rvu_write64(rvu, blkaddr, NPC_AF_MCAMEX_BANKX_TAG_ACT(index, actbank),
> >                     entry->vtag_action);
>
> I assume this rvu_write64() does a cpu_to_le64() swap on big-endian,
> so the contents again are in the wrong place. I don't see any non-reserved
> fields that span an 8-bit boundary, so you can probably rearrange the bits
> to make it work, but generally speaking it's better to not rely on how the
> compiler lays out bit fields.
>
>         Arnd

Agreed.
Will fix and submit a new series.

Thanks,
Sunil.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-09 11:02       ` Arnd Bergmann
@ 2018-11-09 17:13         ` Sunil Kovvuri
  2018-11-09 21:06           ` Arnd Bergmann
  0 siblings, 1 reply; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-09 17:13 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 4:32 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Fri, Nov 9, 2018 at 5:21 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> >
> > On Fri, Nov 9, 2018 at 2:13 AM Arnd Bergmann <arnd@arndb.de> wrote:
> > >
> > > On Thu, Nov 8, 2018 at 7:37 PM <sunil.kovvuri@gmail.com> wrote:
> > > > @@ -666,4 +668,20 @@ struct npc_mcam_unmap_counter_req {
> > > >         u8  all;   /* Unmap all entries using this counter ? */
> > > >  };
> > > >
> > > > +struct npc_mcam_alloc_and_write_entry_req {
> > > > +       struct mbox_msghdr hdr;
> > > > +       struct mcam_entry entry_data;
> > > > +       u16 ref_entry;
> > > > +       u8  priority;    /* Lower or higher w.r.t ref_entry */
> > > > +       u8  intf;        /* Rx or Tx interface */
> > > > +       u8  enable_entry;/* Enable this MCAM entry ? */
> > > > +       u8  alloc_cntr;  /* Allocate counter and map ? */
> > > > +};
> > >
> > > I noticed that this structure requires padding at the end because
> > > struct mbox_msghdr has a 32-bit alignment requirement. For
> > > data structures in an interface, I'd recommend avoiding that kind
> > > of padding and adding reserved fields or widening the types
> > > accordingly.
> > >
> >
> > When there are multiple messages in the mailbox, each message starts
> > at a 16byte aligned offset. So struct mbox_msghdr is always aligned.
> > I think adding reserved fields is not needed here.
> >
> > ===
> > struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
> >                                             int size, int size_rsp)
> > {
> >         size = ALIGN(size, MBOX_MSG_ALIGN);
> > ===
> >
> > Is this what you were referring to ?
> >
>
> No, I mean the padding at the end of the structure. An example
> would be a structure like
>
> struct s {
>     u16 a;
>     u32 b;
>     u16 c;
> };
>
> Since b is aligned to four bytes, you get padding between a and b.
> On top of that, you also get padding after c to make the size of
> structure itself be a multiple of its alignment. For interfaces, we
> should avoid both kinds of padding. This can be done by marking
> members as __packed (usually I don't recommend that), by
> changing the size of members, or by adding explicit 'reserved'
> fields in place of the padding.
>
> > > I also noticed a similar problem in struct mbox_msghdr. Maybe
> > > use the 'pahole' tool to check for this kind of padding in the
> > > API structures.
>
>      Arnd

Got your point now and agree that the padding has to be avoided.
But this is a big change, and the structure pointed out above is not
the only one; this applies to all structures in the file.

Would it be okay if I submit a separate patch after this series
addressing all the structures?

Thanks,
Sunil.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-09 17:13         ` Sunil Kovvuri
@ 2018-11-09 21:06           ` Arnd Bergmann
  2018-11-12 10:31             ` Sunil Kovvuri
  0 siblings, 1 reply; 36+ messages in thread
From: Arnd Bergmann @ 2018-11-09 21:06 UTC (permalink / raw)
  To: Sunil Kovvuri; +Cc: Networking, David Miller, linux-soc, Sunil Goutham

On Fri, Nov 9, 2018 at 6:13 PM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> On Fri, Nov 9, 2018 at 4:32 PM Arnd Bergmann <arnd@arndb.de> wrote:
> > On Fri, Nov 9, 2018 at 5:21 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:

> >
> > Since b is aligned to four bytes, you get padding between a and b.
> > On top of that, you also get padding after c to make the size of
> > structure itself be a multiple of its alignment. For interfaces, we
> > should avoid both kinds of padding. This can be done by marking
> > members as __packed (usually I don't recommend that), by
> > changing the size of members, or by adding explicit 'reserved'
> > fields in place of the padding.
> >
> > > > I also noticed a similar problem in struct mbox_msghdr. Maybe
> > > > use the 'pahole' tool to check for this kind of padding in the
> > > > API structures.
>
> Got your point now, and I agree that the padding has to be avoided.
> But this is a big change, and the structure pointed out above is not
> the only one; this applies to all structures in the file.
>
> Would it be okay if I submit a separate patch after this series
> addressing all structures?

It depends on how you want to address it. If you want to
change the structure layout, then I think it would be better
integrated into the series as that is an incompatible interface
change. If you just want to add reserved members to make
the padding explicit, that could be a follow-up.
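
To make the two options concrete with the struct s example above (the names
are only placeholders, not the driver's real structures):

/* option 1: repack the members so there are no holes at all;
 * offsets change, so this is an incompatible layout change
 */
struct s_repacked {
        u32 b;
        u16 a;
        u16 c;
};                              /* 8 bytes, no padding */

/* option 2: keep the current offsets and only name the padding;
 * the layout stays the same, so it could go in as a follow-up
 */
struct s_reserved {
        u16 a;
        u16 reserved0;
        u32 b;
        u16 c;
        u16 reserved1;
};                              /* still 12 bytes, same offsets for a, b, c */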

        Arnd


* Re: [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time
  2018-11-09 21:06           ` Arnd Bergmann
@ 2018-11-12 10:31             ` Sunil Kovvuri
  0 siblings, 0 replies; 36+ messages in thread
From: Sunil Kovvuri @ 2018-11-12 10:31 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Netdev List, David S. Miller, linux-soc, Sunil Goutham

On Sat, Nov 10, 2018 at 2:36 AM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Fri, Nov 9, 2018 at 6:13 PM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
> > On Fri, Nov 9, 2018 at 4:32 PM Arnd Bergmann <arnd@arndb.de> wrote:
> > > On Fri, Nov 9, 2018 at 5:21 AM Sunil Kovvuri <sunil.kovvuri@gmail.com> wrote:
>
> > >
> > > Since b is aligned to four bytes, you get padding between a and b.
> > > On top of that, you also get padding after c to make the size of
> > > the structure itself a multiple of its alignment. For interfaces, we
> > > should avoid both kinds of padding. This can be done by marking
> > > members as __packed (usually I don't recommend that), by
> > > changing the size of members, or by adding explicit 'reserved'
> > > fields in place of the padding.
> > >
> > > > > I also noticed a similar problem in struct mbox_msghdr. Maybe
> > > > > use the 'pahole' tool to check for this kind of padding in the
> > > > > API structures.
> >
> > Got your point now, and I agree that the padding has to be avoided.
> > But this is a big change, and the structure pointed out above is not
> > the only one; this applies to all structures in the file.
> >
> > Would it be okay if I submit a separate patch after this series
> > addressing all structures?
>
> It depends on how you want to address it. If you want to
> change the structure layout, then I think it would be better
> integrated into the series as that is an incompatible interface
> change. If you just want to add reserved members to make
> the padding explicit, that could be a follow-up.
>
>         Arnd

Upon further thought, I think padding is needed here, but adding reserved fields
alone is not enough. I will probably have to align all struct elements to 64 bits
with __attribute__ ((aligned(8))) to avoid unaligned access faults when the device
is attached to a guest. VFIO maps BARs as device memory, so irrespective of
whatever mapping I do in the driver, reads/writes to the mailbox region might
result in unaligned access faults.

Checking if there are other ways of solving it.
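
As a rough sketch of that direction (struct and member names here are purely
illustrative, not the driver's actual mailbox definitions), each member could
be widened to a naturally aligned 64-bit slot:

struct mcam_req_example {
        struct mbox_msghdr hdr;
        u64 intf;               /* Rx or Tx interface, was u8 */
        u64 enable_entry;       /* enable this MCAM entry? was u8 */
        u64 alloc_cntr;         /* allocate counter and map? was u8 */
} __attribute__ ((aligned(8)));

That keeps accesses to the widened members naturally aligned, at the cost of
making each message larger.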

Thanks,
Sunil.


end of thread, other threads:[~2018-11-12 20:24 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
2018-11-08 18:35 [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling sunil.kovvuri
2018-11-08 18:35 ` [PATCH 01/20] octeontx2-af: Support to modify min/max allowed packet lengths sunil.kovvuri
2018-11-08 18:35 ` [PATCH 02/20] octeontx2-af: Support to get NIX HW constants from AF sunil.kovvuri
2018-11-08 18:35 ` [PATCH 03/20] octeontx2-af: Relax resource lock into mutex sunil.kovvuri
2018-11-08 18:35 ` [PATCH 04/20] octeontx2-af: NPC MCAM entry alloc/free support sunil.kovvuri
2018-11-08 22:22   ` David Miller
2018-11-08 18:35 ` [PATCH 05/20] octeontx2-af: MCAM entry installation support sunil.kovvuri
2018-11-08 18:35 ` [PATCH 06/20] octeontx2-af: Support for NPC MCAM counters sunil.kovvuri
2018-11-08 18:35 ` [PATCH 07/20] octeontx2-af: Map or unmap NPC MCAM entry and counter sunil.kovvuri
2018-11-08 18:35 ` [PATCH 08/20] octeontx2-af: Alloc and config NPC MCAM entry at a time sunil.kovvuri
2018-11-08 20:43   ` Arnd Bergmann
2018-11-09  4:20     ` Sunil Kovvuri
2018-11-09 11:02       ` Arnd Bergmann
2018-11-09 17:13         ` Sunil Kovvuri
2018-11-09 21:06           ` Arnd Bergmann
2018-11-12 10:31             ` Sunil Kovvuri
2018-11-08 18:35 ` [PATCH 09/20] octeontx2-af: Add MKEX default profile sunil.kovvuri
2018-11-08 18:35 ` [PATCH 10/20] octeontx2-af: Support to enable/disable default MCAM entries sunil.kovvuri
2018-11-08 18:35 ` [PATCH 11/20] octeontx2-af: Add support for stripping STAG/CTAG sunil.kovvuri
2018-11-08 20:47   ` Arnd Bergmann
2018-11-09  4:29     ` Sunil Kovvuri
2018-11-09 11:12       ` Arnd Bergmann
2018-11-09 17:06         ` Sunil Kovvuri
2018-11-08 18:35 ` [PATCH 12/20] octeontx2-af: Verify NPA/SSO/NIX PF_FUNC mapping sunil.kovvuri
2018-11-08 18:35 ` [PATCH 13/20] octeontx2-af: Add FLR interrupt handler sunil.kovvuri
2018-11-08 20:50   ` Arnd Bergmann
2018-11-08 18:35 ` [PATCH 14/20] octeontx2-af: Teardown NPA, NIX LF upon receiving FLR sunil.kovvuri
2018-11-08 18:35 ` [PATCH 15/20] octeontx2-af: Mbox communication support btw AF and it's VFs sunil.kovvuri
2018-11-08 18:35 ` [PATCH 16/20] octeontx2-af: Enable sriov on AF to create VFs sunil.kovvuri
2018-11-08 18:35 ` [PATCH 17/20] octeontx2-af: Configure AF VFs to talk over LBK channels sunil.kovvuri
2018-11-08 18:35 ` [PATCH 18/20] octeontx2-af: Add FLR handling support for AF's VFs sunil.kovvuri
2018-11-08 18:36 ` [PATCH 19/20] octeontx2-af: Add interrupt handlers for Master Enable event sunil.kovvuri
2018-11-08 18:36 ` [PATCH 20/20] octeontx2-af: Workarounds for HW errata sunil.kovvuri
2018-11-08 21:02 ` [PATCH 00/20] octeontx2-af: NPC MCAM support and FLR handling Arnd Bergmann
2018-11-09  4:34   ` Sunil Kovvuri
2018-11-09 11:08     ` Arnd Bergmann
