* [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform
@ 2022-09-28  3:58 Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 1/8] octeontx2-af: cn10k: Introduce driver for macsec block Subbaraya Sundeep
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Subbaraya Sundeep

The CN10K-B and CNF10K-B variants of CN10K silicon have a macsec block
(MCS) to encrypt and decrypt packets at the MAC/hardware level. This
block is a global resource with hardware resources like SecYs, SCs and
SAs, and it sits between the NIX block and the RPM LMAC. CN10K-B silicon
has only one MCS block which receives packets from all LMACs, whereas
CNF10K-B has seven MCS blocks, one per LMAC. Both MCS blocks are similar
in operation except for a few register offsets, and some configurations
require writing to different registers. This patchset introduces macsec
hardware offloading support: the AF driver manages the hardware
resources and the PF driver consumes them when macsec hardware
offloading is needed.
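
In sketch form, the datapath and driver split described above:

  NIX block <--> MCS (SecYs/SCs/SAs) <--> RPM LMAC
                  ^
                  |  AF driver owns and configures the MCS block;
                  |  PF netdev driver requests resources via mbox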

Patch 1 adds a basic PCI driver for both CN10K-B and CNF10K-B
silicons and initializes the hardware block.
Patches 2 and 3 add mailboxes to init, reset and manage
resources of the MCS block.
Patch 4 adds a low priority rule in the MCS TCAM so that
traffic which does not need macsec processing can be sent/received.
Patch 5 adds macsec stats collection support.
Patch 6 adds interrupt handling support; any event in which an
AF consumer is interested can be notified via a mbox notification.
Patch 7 adds debugfs support which helps in debugging the packet
path.
Patch 8 introduces the macsec hardware offload feature for the
PF netdev driver.

v2 changes:
 Fix build error by changing #ifdef CONFIG_MACSEC to
 #if IS_ENABLED(CONFIG_MACSEC)
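
 For context, IS_ENABLED(CONFIG_MACSEC) evaluates true for both
 CONFIG_MACSEC=y and CONFIG_MACSEC=m, while #ifdef CONFIG_MACSEC misses
 the modular case. A minimal sketch of the pattern (the declaration
 below is illustrative, not lifted from this series):

	#if IS_ENABLED(CONFIG_MACSEC)	/* y or m */
	int cn10k_mcs_init(struct otx2_nic *pfvf);
	#else
	static inline int cn10k_mcs_init(struct otx2_nic *pfvf) { return 0; }
	#endif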


Thanks,
Sundeep

Geetha sowjanya (7):
  octeontx2-af: cn10k: Introduce driver for macsec block.
  octeontx2-af: cn10k: mcs: Add mailboxes for port related operations
  octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources
  octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic
  octeontx2-af: cn10k: mcs: Support for stats collection
  octeontx2-af: cn10k: mcs: Handle MCS block interrupts
  octeontx2-af: cn10k: mcs: Add debugfs support

Subbaraya Sundeep (1):
  octeontx2-pf: mcs: Introduce MACSEC hardware offloading

 drivers/net/ethernet/marvell/octeontx2/af/Makefile |    2 +-
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  471 +++++-
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 1601 +++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  246 +++
 .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c    |  214 +++
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    | 1102 +++++++++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c |  888 +++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |   20 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   21 +
 .../ethernet/marvell/octeontx2/af/rvu_debugfs.c    |  346 ++++
 .../net/ethernet/marvell/octeontx2/nic/Makefile    |    1 +
 .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c  | 1668 ++++++++++++++++++++
 .../ethernet/marvell/octeontx2/nic/otx2_common.c   |    1 +
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |   90 ++
 .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c   |   16 +
 15 files changed, 6680 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c

-- 
2.7.4



* [net-next PATCH v2 1/8] octeontx2-af: cn10k: Introduce driver for macsec block.
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 2/8] octeontx2-af: cn10k: mcs: Add mailboxes for port related operations Subbaraya Sundeep
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Vamsi Attunuru, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

CN10K-B and CNF10K-B have a macsec block (MCS) to encrypt and
decrypt packets at the MAC level. This block is a global resource
with hardware resources like SecYs, SCs and SAs, and it sits
between the NIX block and the RPM LMAC. CN10K-B silicon has only one
MCS block which receives packets from all LMACs, whereas CNF10K-B has
seven MCS blocks, one per LMAC. Both MCS blocks are similar in
operation except for a few register offsets, and some configurations
require writing to different registers. Those differences between the
IPs are handled using separate ops. This patch adds the basic driver
and does the initial hardware calibration and parser configuration.
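
In sketch form (this mirrors mcs_probe() in the diff below), the per-IP
differences are hidden behind an ops table selected by the PCI subsystem
device ID:

	if (pdev->subsystem_device == PCI_SUBSYS_DEVID_CN10K_B)
		mcs->mcs_ops = &cn10kb_mcs_ops;
	else
		mcs->mcs_ops = cnf10kb_get_mac_ops();

	mcs->mcs_ops->mcs_set_hw_capabilities(mcs);
	mcs->mcs_ops->mcs_parser_cfg(mcs);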

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/Makefile |   2 +-
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 368 +++++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  92 ++++++
 .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c    |  65 ++++
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    |  78 +++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c |  52 +++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |  14 +
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   8 +
 8 files changed, 678 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
index 4020356..3cf4c82 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
@@ -11,4 +11,4 @@ rvu_mbox-y := mbox.o rvu_trace.o
 rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
 		  rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
 		  rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
-		  rvu_sdp.o rvu_npc_hash.o
+		  rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
new file mode 100644
index 0000000..a4e919c
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -0,0 +1,368 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell MCS driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "mcs.h"
+#include "mcs_reg.h"
+
+#define DRV_NAME	"Marvell MCS Driver"
+
+#define PCI_CFG_REG_BAR_NUM	0
+
+static const struct pci_device_id mcs_id_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_MCS) },
+	{ 0, }  /* end of table */
+};
+
+static LIST_HEAD(mcs_list);
+
+static void *alloc_mem(struct mcs *mcs, int n)
+{
+	return devm_kcalloc(mcs->dev, n, sizeof(u16), GFP_KERNEL);
+}
+
+static int mcs_alloc_struct_mem(struct mcs *mcs, struct mcs_rsrc_map *res)
+{
+	struct hwinfo *hw = mcs->hw;
+	int err;
+
+	res->flowid2pf_map = alloc_mem(mcs, hw->tcam_entries);
+	if (!res->flowid2pf_map)
+		return -ENOMEM;
+
+	res->secy2pf_map = alloc_mem(mcs, hw->secy_entries);
+	if (!res->secy2pf_map)
+		return -ENOMEM;
+
+	res->sc2pf_map = alloc_mem(mcs, hw->sc_entries);
+	if (!res->sc2pf_map)
+		return -ENOMEM;
+
+	res->sa2pf_map = alloc_mem(mcs, hw->sa_entries);
+	if (!res->sa2pf_map)
+		return -ENOMEM;
+
+	res->flowid2secy_map = alloc_mem(mcs, hw->tcam_entries);
+	if (!res->flowid2secy_map)
+		return -ENOMEM;
+
+	res->flow_ids.max = hw->tcam_entries - MCS_RSRC_RSVD_CNT;
+	err = rvu_alloc_bitmap(&res->flow_ids);
+	if (err)
+		return err;
+
+	res->secy.max = hw->secy_entries - MCS_RSRC_RSVD_CNT;
+	err = rvu_alloc_bitmap(&res->secy);
+	if (err)
+		return err;
+
+	res->sc.max = hw->sc_entries;
+	err = rvu_alloc_bitmap(&res->sc);
+	if (err)
+		return err;
+
+	res->sa.max = hw->sa_entries;
+	err = rvu_alloc_bitmap(&res->sa);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+int mcs_get_blkcnt(void)
+{
+	struct mcs *mcs;
+	int idmax = -ENODEV;
+
+	/* Check MCS block is present in hardware */
+	if (!pci_dev_present(mcs_id_table))
+		return 0;
+
+	list_for_each_entry(mcs, &mcs_list, mcs_list)
+		if (mcs->mcs_id > idmax)
+			idmax = mcs->mcs_id;
+
+	if (idmax < 0)
+		return 0;
+
+	return idmax + 1;
+}
+
+struct mcs *mcs_get_pdata(int mcs_id)
+{
+	struct mcs *mcs_dev;
+
+	list_for_each_entry(mcs_dev, &mcs_list, mcs_list) {
+		if (mcs_dev->mcs_id == mcs_id)
+			return mcs_dev;
+	}
+	return NULL;
+}
+
+/* Set lmac to bypass/operational mode */
+void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode)
+{
+	u64 reg;
+
+	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(lmac_id * 2);
+	mcs_reg_write(mcs, reg, (u64)mode);
+}
+
+void cn10kb_mcs_parser_cfg(struct mcs *mcs)
+{
+	u64 reg, val;
+
+	/* VLAN CTag */
+	val = BIT_ULL(0) | (0x8100ull & 0xFFFF) << 1 | BIT_ULL(17);
+	/* RX */
+	reg = MCSX_PEX_RX_SLAVE_VLAN_CFGX(0);
+	mcs_reg_write(mcs, reg, val);
+
+	/* TX */
+	reg = MCSX_PEX_TX_SLAVE_VLAN_CFGX(0);
+	mcs_reg_write(mcs, reg, val);
+
+	/* VLAN STag */
+	val = BIT_ULL(0) | (0x88a8ull & 0xFFFF) << 1 | BIT_ULL(18);
+	/* RX */
+	reg = MCSX_PEX_RX_SLAVE_VLAN_CFGX(1);
+	mcs_reg_write(mcs, reg, val);
+
+	/* TX */
+	reg = MCSX_PEX_TX_SLAVE_VLAN_CFGX(1);
+	mcs_reg_write(mcs, reg, val);
+}
+
+static void mcs_lmac_init(struct mcs *mcs, int lmac_id)
+{
+	u64 reg;
+
+	/* Port mode 25GB */
+	reg = MCSX_PAB_RX_SLAVE_PORT_CFGX(lmac_id);
+	mcs_reg_write(mcs, reg, 0);
+
+	if (mcs->hw->mcs_blks > 1) {
+		reg = MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(lmac_id);
+		mcs_reg_write(mcs, reg, 0xe000e);
+		return;
+	}
+
+	reg = MCSX_PAB_TX_SLAVE_PORT_CFGX(lmac_id);
+	mcs_reg_write(mcs, reg, 0);
+}
+
+int mcs_set_lmac_channels(int mcs_id, u16 base)
+{
+	struct mcs *mcs;
+	int lmac;
+	u64 cfg;
+
+	mcs = mcs_get_pdata(mcs_id);
+	if (!mcs)
+		return -ENODEV;
+	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) {
+		cfg = mcs_reg_read(mcs, MCSX_LINK_LMACX_CFG(lmac));
+		cfg &= ~(MCSX_LINK_LMAC_BASE_MASK | MCSX_LINK_LMAC_RANGE_MASK);
+		cfg |=	FIELD_PREP(MCSX_LINK_LMAC_RANGE_MASK, ilog2(16));
+		cfg |=	FIELD_PREP(MCSX_LINK_LMAC_BASE_MASK, base);
+		mcs_reg_write(mcs, MCSX_LINK_LMACX_CFG(lmac), cfg);
+		base += 16;
+	}
+	return 0;
+}
+
+static int mcs_x2p_calibration(struct mcs *mcs)
+{
+	unsigned long timeout = jiffies + usecs_to_jiffies(20000);
+	int i, err = 0;
+	u64 val;
+
+	/* set X2P calibration */
+	val = mcs_reg_read(mcs, MCSX_MIL_GLOBAL);
+	val |= BIT_ULL(5);
+	mcs_reg_write(mcs, MCSX_MIL_GLOBAL, val);
+
+	/* Wait for calibration to complete */
+	while (!(mcs_reg_read(mcs, MCSX_MIL_RX_GBL_STATUS) & BIT_ULL(0))) {
+		if (time_before(jiffies, timeout)) {
+			usleep_range(80, 100);
+			continue;
+		} else {
+			err = -EBUSY;
+			dev_err(mcs->dev, "MCS X2P calibration timed out\n");
+			return err;
+		}
+	}
+
+	val = mcs_reg_read(mcs, MCSX_MIL_RX_GBL_STATUS);
+	for (i = 0; i < mcs->hw->mcs_x2p_intf; i++) {
+		if (val & BIT_ULL(1 + i))
+			continue;
+		err = -EBUSY;
+		dev_err(mcs->dev, "MCS:%d didn't respond to X2P calibration\n", i);
+	}
+	/* Clear X2P calibrate */
+	mcs_reg_write(mcs, MCSX_MIL_GLOBAL, mcs_reg_read(mcs, MCSX_MIL_GLOBAL) & ~BIT_ULL(5));
+
+	return err;
+}
+
+static void mcs_set_external_bypass(struct mcs *mcs, u8 bypass)
+{
+	u64 val;
+
+	/* Set MCS to external bypass */
+	val = mcs_reg_read(mcs, MCSX_MIL_GLOBAL);
+	if (bypass)
+		val |= BIT_ULL(6);
+	else
+		val &= ~BIT_ULL(6);
+	mcs_reg_write(mcs, MCSX_MIL_GLOBAL, val);
+}
+
+static void mcs_global_cfg(struct mcs *mcs)
+{
+	/* Disable external bypass */
+	mcs_set_external_bypass(mcs, false);
+
+	/* Set MCS to perform standard IEEE802.1AE macsec processing */
+	if (mcs->hw->mcs_blks == 1) {
+		mcs_reg_write(mcs, MCSX_IP_MODE, BIT_ULL(3));
+		return;
+	}
+
+	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_CAL_ENTRY, 0xe4);
+	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_CAL_LEN, 4);
+}
+
+void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs)
+{
+	struct hwinfo *hw = mcs->hw;
+
+	hw->tcam_entries = 128;		/* TCAM entries */
+	hw->secy_entries  = 128;	/* SecY entries */
+	hw->sc_entries = 128;		/* SC CAM entries */
+	hw->sa_entries = 256;		/* SA entries */
+	hw->lmac_cnt = 20;		/* lmacs/ports per mcs block */
+	hw->mcs_x2p_intf = 5;		/* x2p calibration intf */
+	hw->mcs_blks = 1;		/* MCS blocks */
+}
+
+struct mcs_ops cn10kb_mcs_ops = {
+	.mcs_set_hw_capabilities	= cn10kb_mcs_set_hw_capabilities,
+	.mcs_parser_cfg			= cn10kb_mcs_parser_cfg,
+};
+
+static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	int lmac, err = 0;
+	struct mcs *mcs;
+
+	mcs = devm_kzalloc(dev, sizeof(*mcs), GFP_KERNEL);
+	if (!mcs)
+		return -ENOMEM;
+
+	mcs->hw = devm_kzalloc(dev, sizeof(struct hwinfo), GFP_KERNEL);
+	if (!mcs->hw)
+		return -ENOMEM;
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Failed to enable PCI device\n");
+		pci_set_drvdata(pdev, NULL);
+		return err;
+	}
+
+	err = pci_request_regions(pdev, DRV_NAME);
+	if (err) {
+		dev_err(dev, "PCI request regions failed 0x%x\n", err);
+		goto exit;
+	}
+
+	mcs->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+	if (!mcs->reg_base) {
+		dev_err(dev, "mcs: Cannot map CSR memory space, aborting\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	pci_set_drvdata(pdev, mcs);
+	mcs->pdev = pdev;
+	mcs->dev = &pdev->dev;
+
+	if (pdev->subsystem_device == PCI_SUBSYS_DEVID_CN10K_B)
+		mcs->mcs_ops = &cn10kb_mcs_ops;
+	else
+		mcs->mcs_ops = cnf10kb_get_mac_ops();
+
+	/* Set hardware capabilities */
+	mcs->mcs_ops->mcs_set_hw_capabilities(mcs);
+
+	mcs_global_cfg(mcs);
+
+	/* Perform X2P calibration */
+	err = mcs_x2p_calibration(mcs);
+	if (err)
+		goto err_x2p;
+
+	mcs->mcs_id = (pci_resource_start(pdev, PCI_CFG_REG_BAR_NUM) >> 24)
+			& MCS_ID_MASK;
+
+	/* Set mcs tx side resources */
+	err = mcs_alloc_struct_mem(mcs, &mcs->tx);
+	if (err)
+		goto err_x2p;
+
+	/* Set mcs rx side resources */
+	err = mcs_alloc_struct_mem(mcs, &mcs->rx);
+	if (err)
+		goto err_x2p;
+
+	/* per port config */
+	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++)
+		mcs_lmac_init(mcs, lmac);
+
+	/* Parser configuration */
+	mcs->mcs_ops->mcs_parser_cfg(mcs);
+
+	list_add(&mcs->mcs_list, &mcs_list);
+
+	return 0;
+
+err_x2p:
+	/* Enable external bypass */
+	mcs_set_external_bypass(mcs, true);
+exit:
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+	return err;
+}
+
+static void mcs_remove(struct pci_dev *pdev)
+{
+	struct mcs *mcs = pci_get_drvdata(pdev);
+
+	/* Set MCS to external bypass */
+	mcs_set_external_bypass(mcs, true);
+	pci_free_irq_vectors(pdev);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+}
+
+struct pci_driver mcs_driver = {
+	.name = DRV_NAME,
+	.id_table = mcs_id_table,
+	.probe = mcs_probe,
+	.remove = mcs_remove,
+};
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
new file mode 100644
index 0000000..002fee8
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell CN10K MCS driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#ifndef MCS_H
+#define MCS_H
+
+#include <linux/bits.h>
+#include "rvu.h"
+
+#define PCI_DEVID_CN10K_MCS		0xA096
+
+#define MCSX_LINK_LMAC_RANGE_MASK	GENMASK_ULL(19, 16)
+#define MCSX_LINK_LMAC_BASE_MASK	GENMASK_ULL(11, 0)
+
+#define MCS_ID_MASK			0x7
+
+/* Reserved resources for default bypass entry */
+#define MCS_RSRC_RSVD_CNT		1
+
+struct mcs_rsrc_map {
+	u16 *flowid2pf_map;
+	u16 *secy2pf_map;
+	u16 *sc2pf_map;
+	u16 *sa2pf_map;
+	u16 *flowid2secy_map;	/* flowid to secy mapping */
+	struct rsrc_bmap	flow_ids;
+	struct rsrc_bmap	secy;
+	struct rsrc_bmap	sc;
+	struct rsrc_bmap	sa;
+};
+
+struct hwinfo {
+	u8 tcam_entries;
+	u8 secy_entries;
+	u8 sc_entries;
+	u16 sa_entries;
+	u8 mcs_x2p_intf;
+	u8 lmac_cnt;
+	u8 mcs_blks;
+	unsigned long	lmac_bmap; /* bitmap of enabled mcs lmac */
+};
+
+struct mcs {
+	void __iomem		*reg_base;
+	struct pci_dev		*pdev;
+	struct device		*dev;
+	struct hwinfo		*hw;
+	struct mcs_rsrc_map	tx;
+	struct mcs_rsrc_map	rx;
+	u8			mcs_id;
+	struct mcs_ops		*mcs_ops;
+	struct list_head	mcs_list;
+};
+
+struct mcs_ops {
+	void	(*mcs_set_hw_capabilities)(struct mcs *mcs);
+	void	(*mcs_parser_cfg)(struct mcs *mcs);
+};
+
+extern struct pci_driver mcs_driver;
+
+static inline void mcs_reg_write(struct mcs *mcs, u64 offset, u64 val)
+{
+	writeq(val, mcs->reg_base + offset);
+}
+
+static inline u64 mcs_reg_read(struct mcs *mcs, u64 offset)
+{
+	return readq(mcs->reg_base + offset);
+}
+
+/* MCS APIs */
+struct mcs *mcs_get_pdata(int mcs_id);
+int mcs_get_blkcnt(void);
+int mcs_set_lmac_channels(int mcs_id, u16 base);
+
+int mcs_install_flowid_bypass_entry(struct mcs *mcs);
+void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode);
+
+/* CN10K-B APIs */
+void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs);
+void cn10kb_mcs_parser_cfg(struct mcs *mcs);
+
+/* CNF10K-B APIs */
+struct mcs_ops *cnf10kb_get_mac_ops(void);
+void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs);
+void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
+
+#endif /* MCS_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
new file mode 100644
index 0000000..68bcee0
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell MCS driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#include "mcs.h"
+#include "mcs_reg.h"
+
+static struct mcs_ops		cnf10kb_mcs_ops   = {
+	.mcs_set_hw_capabilities	= cnf10kb_mcs_set_hw_capabilities,
+	.mcs_parser_cfg			= cnf10kb_mcs_parser_cfg,
+};
+
+struct mcs_ops *cnf10kb_get_mac_ops(void)
+{
+	return &cnf10kb_mcs_ops;
+}
+
+void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs)
+{
+	struct hwinfo *hw = mcs->hw;
+
+	hw->tcam_entries = 64;		/* TCAM entries */
+	hw->secy_entries  = 64;		/* SecY entries */
+	hw->sc_entries = 64;		/* SC CAM entries */
+	hw->sa_entries = 128;		/* SA entries */
+	hw->lmac_cnt = 4;		/* lmacs/ports per mcs block */
+	hw->mcs_x2p_intf = 1;		/* x2p calibration intf */
+	hw->mcs_blks = 7;		/* MCS blocks */
+}
+
+void cnf10kb_mcs_parser_cfg(struct mcs *mcs)
+{
+	u64 reg, val;
+
+	/* VLAN Ctag */
+	val = (0x8100ull & 0xFFFF) | BIT_ULL(20) | BIT_ULL(22);
+
+	reg = MCSX_PEX_RX_SLAVE_CUSTOM_TAGX(0);
+	mcs_reg_write(mcs, reg, val);
+
+	reg = MCSX_PEX_TX_SLAVE_CUSTOM_TAGX(0);
+	mcs_reg_write(mcs, reg, val);
+
+	/* VLAN STag */
+	val = (0x88a8ull & 0xFFFF) | BIT_ULL(20) | BIT_ULL(23);
+
+	/* RX */
+	reg = MCSX_PEX_RX_SLAVE_CUSTOM_TAGX(1);
+	mcs_reg_write(mcs, reg, val);
+
+	/* TX */
+	reg = MCSX_PEX_TX_SLAVE_CUSTOM_TAGX(1);
+	mcs_reg_write(mcs, reg, val);
+
+	/* Enable custom tags 0 and 1 and sectag */
+	val = BIT_ULL(0) | BIT_ULL(1) | BIT_ULL(12);
+
+	reg = MCSX_PEX_RX_SLAVE_ETYPE_ENABLE;
+	mcs_reg_write(mcs, reg, val);
+
+	reg = MCSX_PEX_TX_SLAVE_ETYPE_ENABLE;
+	mcs_reg_write(mcs, reg, val);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
new file mode 100644
index 0000000..61bf8ab
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell MCS driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#ifndef MCS_REG_H
+#define MCS_REG_H
+
+#include <linux/bits.h>
+
+/* Registers */
+#define MCSX_IP_MODE					0x900c8ull
+
+#define MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x808ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa68ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_MIL_GLOBAL	({				\
+	u64 offset;					\
+							\
+	offset = 0x80000ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x60000ull;			\
+	offset; })
+
+#define MCSX_LINK_LMACX_CFG(a) ({			\
+	u64 offset;					\
+							\
+	offset = 0x90000ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x70000ull;			\
+	offset += (a) * 0x800ull;			\
+	offset; })
+
+#define MCSX_MIL_RX_GBL_STATUS ({			\
+	u64 offset;					\
+							\
+	offset = 0x800c8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x600c8ull;			\
+	offset; })
+
+/* PAB */
+#define MCSX_PAB_RX_SLAVE_PORT_CFGX(a) ({	\
+	u64 offset;				\
+						\
+	offset = 0x1718ull;			\
+	if (mcs->hw->mcs_blks > 1)		\
+		offset = 0x280ull;		\
+	offset += (a) * 0x40ull;		\
+	offset; })
+
+#define MCSX_PAB_TX_SLAVE_PORT_CFGX(a)			(0x2930ull + (a) * 0x40ull)
+
+/* PEX registers */
+#define MCSX_PEX_RX_SLAVE_VLAN_CFGX(a)          (0x3b58ull + (a) * 0x8ull)
+#define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a)          (0x46f8ull + (a) * 0x8ull)
+
+/* CNF10K-B */
+#define MCSX_PEX_RX_SLAVE_CUSTOM_TAGX(a)        (0x4c8ull + (a) * 0x8ull)
+#define MCSX_PEX_TX_SLAVE_CUSTOM_TAGX(a)        (0x748ull + (a) * 0x8ull)
+#define MCSX_PEX_RX_SLAVE_ETYPE_ENABLE          0x6e8ull
+#define MCSX_PEX_TX_SLAVE_ETYPE_ENABLE          0x968ull
+
+/* BBE */
+#define MCSX_BBE_RX_SLAVE_PADDING_CTL			0xe08ull
+#define MCSX_BBE_TX_SLAVE_PADDING_CTL			0x12f8ull
+#define MCSX_BBE_RX_SLAVE_CAL_ENTRY			0x180ull
+#define MCSX_BBE_RX_SLAVE_CAL_LEN			0x188ull
+#define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a)		(0x290ull + (a) * 0x40ull)
+
+#endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
new file mode 100644
index 0000000..c3f5b39
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell CN10K MCS driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "mcs.h"
+#include "rvu.h"
+#include "lmac_common.h"
+
+static void rvu_mcs_set_lmac_bmap(struct rvu *rvu)
+{
+	struct mcs *mcs = mcs_get_pdata(0);
+	unsigned long lmac_bmap = 0;
+	int cgx, lmac, port;
+
+	for (port = 0; port < mcs->hw->lmac_cnt; port++) {
+		cgx = port / rvu->hw->lmac_per_cgx;
+		lmac = port % rvu->hw->lmac_per_cgx;
+		if (!is_lmac_valid(rvu_cgx_pdata(cgx, rvu), lmac))
+			continue;
+		set_bit(port, &lmac_bmap);
+	}
+	mcs->hw->lmac_bmap = lmac_bmap;
+}
+
+int rvu_mcs_init(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	int err = 0;
+
+	rvu->mcs_blk_cnt = mcs_get_blkcnt();
+
+	if (!rvu->mcs_blk_cnt)
+		return 0;
+
+	/* Needed only for CN10K-B */
+	if (rvu->mcs_blk_cnt == 1) {
+		err = mcs_set_lmac_channels(0, hw->cgx_chan_base);
+		if (err)
+			return err;
+		/* Set active lmacs */
+		rvu_mcs_set_lmac_bmap(rvu);
+	}
+
+	return err;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 7282a82..5d74641 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -16,6 +16,7 @@
 #include "rvu.h"
 #include "rvu_reg.h"
 #include "ptp.h"
+#include "mcs.h"
 
 #include "rvu_trace.h"
 #include "rvu_npc_hash.h"
@@ -1159,6 +1160,12 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 
 	rvu_program_channels(rvu);
 
+	err = rvu_mcs_init(rvu);
+	if (err) {
+		dev_err(rvu->dev, "%s: Failed to initialize mcs\n", __func__);
+		goto nix_err;
+	}
+
 	return 0;
 
 nix_err:
@@ -3354,12 +3361,18 @@ static int __init rvu_init_module(void)
 	if (err < 0)
 		goto ptp_err;
 
+	err = pci_register_driver(&mcs_driver);
+	if (err < 0)
+		goto mcs_err;
+
 	err =  pci_register_driver(&rvu_driver);
 	if (err < 0)
 		goto rvu_err;
 
 	return 0;
 rvu_err:
+	pci_unregister_driver(&mcs_driver);
+mcs_err:
 	pci_unregister_driver(&ptp_driver);
 ptp_err:
 	pci_unregister_driver(&cgx_driver);
@@ -3370,6 +3383,7 @@ static int __init rvu_init_module(void)
 static void __exit rvu_cleanup_module(void)
 {
 	pci_unregister_driver(&rvu_driver);
+	pci_unregister_driver(&mcs_driver);
 	pci_unregister_driver(&ptp_driver);
 	pci_unregister_driver(&cgx_driver);
 }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index d15bc44..9a150da 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -25,6 +25,8 @@
 /* Subsystem Device ID */
 #define PCI_SUBSYS_DEVID_96XX                  0xB200
 #define PCI_SUBSYS_DEVID_CN10K_A	       0xB900
+#define PCI_SUBSYS_DEVID_CNF10K_B              0xBC00
+#define PCI_SUBSYS_DEVID_CN10K_B               0xBD00
 
 /* PCI BAR nos */
 #define	PCI_AF_REG_BAR_NUM			0
@@ -497,6 +499,8 @@ struct rvu {
 
 	struct ptp		*ptp;
 
+	int			mcs_blk_cnt;
+
 #ifdef CONFIG_DEBUG_FS
 	struct rvu_debugfs	rvu_dbg;
 #endif
@@ -868,4 +872,8 @@ void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc);
 int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
 			   u64 pkind, u8 var_len_off, u8 var_len_off_mask,
 			   u8 shift_dir);
+
+/* CN10K MCS */
+int rvu_mcs_init(struct rvu *rvu);
+
 #endif /* RVU_H */
-- 
2.7.4



* [net-next PATCH v2 2/8] octeontx2-af: cn10k: mcs: Add mailboxes for port related operations
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 1/8] octeontx2-af: cn10k: Introduce driver for macsec block Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 3/8] octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources Subbaraya Sundeep
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Vamsi Attunuru, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

There is a set of configurations to be done at the MCS port level,
such as bringing a port out of reset and putting a port into
operational or bypass mode. This patch adds all the port related
mailbox message handlers so that AF consumers can use them.
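
As a usage sketch (not part of this patch): a PF-side consumer would
reach these handlers through the mbox helpers generated by the M()
macro, e.g. to move an MCS port/LMAC to operational mode; this assumes
the existing otx2 helpers otx2_mbox_alloc_msg_*() and
otx2_sync_mbox_msg():

	struct mcs_set_lmac_mode *req;
	int err;

	mutex_lock(&pfvf->mbox.lock);
	req = otx2_mbox_alloc_msg_mcs_set_lmac_mode(&pfvf->mbox);
	if (!req) {
		mutex_unlock(&pfvf->mbox.lock);
		return -ENOMEM;
	}
	req->mcs_id = mcs_id;		/* which MCS block */
	req->lmac_id = lmac_id;		/* port within that block */
	req->mode = 0;			/* 1: bypass, 0: operational */
	err = otx2_sync_mbox_msg(&pfvf->mbox);
	mutex_unlock(&pfvf->mbox.lock);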

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   | 111 +++++++++++++++++-
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    |  94 +++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  10 ++
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    |  36 ++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 129 ++++++++++++++++++++-
 5 files changed, 376 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index e26c3b0..207cd4f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -293,9 +293,21 @@ M(NIX_BANDPROF_ALLOC,	0x801d, nix_bandprof_alloc, nix_bandprof_alloc_req, \
 M(NIX_BANDPROF_FREE,	0x801e, nix_bandprof_free, nix_bandprof_free_req,   \
 				msg_rsp)				    \
 M(NIX_BANDPROF_GET_HWINFO, 0x801f, nix_bandprof_get_hwinfo, msg_req,		\
-				nix_bandprof_get_hwinfo_rsp)
-
-/* Messages initiated by AF (range 0xC00 - 0xDFF) */
+				nix_bandprof_get_hwinfo_rsp)		    \
+/* MCS mbox IDs (range 0xA000 - 0xBFFF) */					\
+M(MCS_SET_ACTIVE_LMAC,	0xa00a,	mcs_set_active_lmac, mcs_set_active_lmac,	\
+				msg_rsp)					\
+M(MCS_GET_HW_INFO,	0xa00b,	mcs_get_hw_info, msg_req, mcs_hw_info)		\
+M(MCS_SET_LMAC_MODE,	0xa013, mcs_set_lmac_mode, mcs_set_lmac_mode, msg_rsp)	\
+M(MCS_PORT_RESET,	0xa018, mcs_port_reset, mcs_port_reset_req, msg_rsp)	\
+M(MCS_PORT_CFG_SET,	0xa019, mcs_port_cfg_set, mcs_port_cfg_set_req, msg_rsp)\
+M(MCS_PORT_CFG_GET,	0xa020, mcs_port_cfg_get, mcs_port_cfg_get_req,		\
+				mcs_port_cfg_get_rsp)				\
+M(MCS_CUSTOM_TAG_CFG_GET, 0xa021, mcs_custom_tag_cfg_get,			\
+				  mcs_custom_tag_cfg_get_req,			\
+				  mcs_custom_tag_cfg_get_rsp)
+
+/* Messages initiated by AF (range 0xC00 - 0xEFF) */
 #define MBOX_UP_CGX_MESSAGES						\
 M(CGX_LINK_EVENT,	0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp)
 
@@ -1657,4 +1669,97 @@ enum cgx_af_status {
 	LMAC_AF_ERR_EXACT_MATCH_TBL_LOOK_UP_FAILED = -1110,
 };
 
+enum mcs_direction {
+	MCS_RX,
+	MCS_TX,
+};
+
+struct mcs_hw_info {
+	struct mbox_msghdr hdr;
+	u8 num_mcs_blks;	/* Number of MCS blocks */
+	u8 tcam_entries;	/* RX/TX Tcam entries per mcs block */
+	u8 secy_entries;	/* RX/TX SECY entries per mcs block */
+	u8 sc_entries;		/* RX/TX SC CAM entries per mcs block */
+	u8 sa_entries;		/* PN table entries = SA entries */
+	u64 rsvd[16];
+};
+
+struct mcs_set_active_lmac {
+	struct mbox_msghdr hdr;
+	u32 lmac_bmap;	/* bitmap of active lmac per mcs block */
+	u8 mcs_id;
+	u16 chan_base; /* MCS channel base */
+	u64 rsvd;
+};
+
+struct mcs_set_lmac_mode {
+	struct mbox_msghdr hdr;
+	u8 mode;	/* 1:Bypass 0:Operational */
+	u8 lmac_id;
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_port_reset_req {
+	struct mbox_msghdr hdr;
+	u8 reset;
+	u8 mcs_id;
+	u8 port_id;
+	u64 rsvd;
+};
+
+struct mcs_port_cfg_set_req {
+	struct mbox_msghdr hdr;
+	u8 cstm_tag_rel_mode_sel;
+	u8 custom_hdr_enb;
+	u8 fifo_skid;
+	u8 port_mode;
+	u8 port_id;
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_port_cfg_get_req {
+	struct mbox_msghdr hdr;
+	u8 port_id;
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_port_cfg_get_rsp {
+	struct mbox_msghdr hdr;
+	u8 cstm_tag_rel_mode_sel;
+	u8 custom_hdr_enb;
+	u8 fifo_skid;
+	u8 port_mode;
+	u8 port_id;
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_custom_tag_cfg_get_req {
+	struct mbox_msghdr hdr;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_custom_tag_cfg_get_rsp {
+	struct mbox_msghdr hdr;
+	u16 cstm_etype[8];
+	u8 cstm_indx[8];
+	u8 cstm_etype_en;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+/* MCS mailbox error codes
+ * Range 1201 - 1300.
+ */
+enum mcs_af_status {
+	MCS_AF_ERR_INVALID_MCSID        = -1201,
+	MCS_AF_ERR_NOT_MAPPED           = -1202,
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
index a4e919c..89a3c54 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -107,6 +107,100 @@ struct mcs *mcs_get_pdata(int mcs_id)
 	return NULL;
 }
 
+void mcs_set_port_cfg(struct mcs *mcs, struct mcs_port_cfg_set_req *req)
+{
+	u64 val = 0;
+
+	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PORT_CFGX(req->port_id),
+		      req->port_mode & MCS_PORT_MODE_MASK);
+
+	req->cstm_tag_rel_mode_sel &= 0x3;
+
+	if (mcs->hw->mcs_blks > 1) {
+		req->fifo_skid &= MCS_PORT_FIFO_SKID_MASK;
+		val = (u32)req->fifo_skid << 0x10;
+		val |= req->fifo_skid;
+		mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(req->port_id), val);
+		mcs_reg_write(mcs, MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(req->port_id),
+			      req->cstm_tag_rel_mode_sel);
+		val = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION);
+
+		if (req->custom_hdr_enb)
+			val |= BIT_ULL(req->port_id);
+		else
+			val &= ~BIT_ULL(req->port_id);
+
+		mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION, val);
+	} else {
+		val = mcs_reg_read(mcs, MCSX_PEX_TX_SLAVE_PORT_CONFIG(req->port_id));
+		val |= (req->cstm_tag_rel_mode_sel << 2);
+		mcs_reg_write(mcs, MCSX_PEX_TX_SLAVE_PORT_CONFIG(req->port_id), val);
+	}
+}
+
+void mcs_get_port_cfg(struct mcs *mcs, struct mcs_port_cfg_get_req *req,
+		      struct mcs_port_cfg_get_rsp *rsp)
+{
+	u64 reg = 0;
+
+	rsp->port_mode = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PORT_CFGX(req->port_id)) &
+			 MCS_PORT_MODE_MASK;
+
+	if (mcs->hw->mcs_blks > 1) {
+		reg = MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(req->port_id);
+		rsp->fifo_skid = mcs_reg_read(mcs, reg) & MCS_PORT_FIFO_SKID_MASK;
+		reg = MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(req->port_id);
+		rsp->cstm_tag_rel_mode_sel = mcs_reg_read(mcs, reg) & 0x3;
+		if (mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION) & BIT_ULL(req->port_id))
+			rsp->custom_hdr_enb = 1;
+	} else {
+		reg = MCSX_PEX_TX_SLAVE_PORT_CONFIG(req->port_id);
+		rsp->cstm_tag_rel_mode_sel = mcs_reg_read(mcs, reg) >> 2;
+	}
+
+	rsp->port_id = req->port_id;
+	rsp->mcs_id = req->mcs_id;
+}
+
+void mcs_get_custom_tag_cfg(struct mcs *mcs, struct mcs_custom_tag_cfg_get_req *req,
+			    struct mcs_custom_tag_cfg_get_rsp *rsp)
+{
+	u64 reg = 0, val = 0;
+	u8 idx;
+
+	for (idx = 0; idx < MCS_MAX_CUSTOM_TAGS; idx++) {
+		if (mcs->hw->mcs_blks > 1)
+			reg  = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_CUSTOM_TAGX(idx) :
+				MCSX_PEX_TX_SLAVE_CUSTOM_TAGX(idx);
+		else
+			reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_VLAN_CFGX(idx) :
+				MCSX_PEX_TX_SLAVE_VLAN_CFGX(idx);
+
+		val = mcs_reg_read(mcs, reg);
+		if (mcs->hw->mcs_blks > 1) {
+			rsp->cstm_etype[idx] = val & GENMASK(15, 0);
+			rsp->cstm_indx[idx] = (val >> 0x16) & 0x3;
+			reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_ETYPE_ENABLE :
+				MCSX_PEX_TX_SLAVE_ETYPE_ENABLE;
+			rsp->cstm_etype_en = mcs_reg_read(mcs, reg) & 0xFF;
+		} else {
+			rsp->cstm_etype[idx] = (val >> 0x1) & GENMASK(15, 0);
+			rsp->cstm_indx[idx] = (val >> 0x11) & 0x3;
+			rsp->cstm_etype_en |= (val & 0x1) << idx;
+		}
+	}
+
+	rsp->mcs_id = req->mcs_id;
+	rsp->dir = req->dir;
+}
+
+void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset)
+{
+	u64 reg = MCSX_MCS_TOP_SLAVE_PORT_RESET(port_id);
+
+	mcs_reg_write(mcs, reg, reset & 0x1);
+}
+
 /* Set lmac to bypass/operational mode */
 void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
index 002fee8..c11d507 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -17,6 +17,10 @@
 
 #define MCS_ID_MASK			0x7
 
+#define MCS_PORT_MODE_MASK		0x3
+#define MCS_PORT_FIFO_SKID_MASK		0x3F
+#define MCS_MAX_CUSTOM_TAGS		0x8
+
 /* Reserved resources for default bypass entry */
 #define MCS_RSRC_RSVD_CNT		1
 
@@ -79,6 +83,12 @@ int mcs_set_lmac_channels(int mcs_id, u16 base);
 
 int mcs_install_flowid_bypass_entry(struct mcs *mcs);
 void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode);
+void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset);
+void mcs_set_port_cfg(struct mcs *mcs, struct mcs_port_cfg_set_req *req);
+void mcs_get_port_cfg(struct mcs *mcs, struct mcs_port_cfg_get_req *req,
+		      struct mcs_port_cfg_get_rsp *rsp);
+void mcs_get_custom_tag_cfg(struct mcs *mcs, struct mcs_custom_tag_cfg_get_req *req,
+			    struct mcs_custom_tag_cfg_get_rsp *rsp);
 
 /* CN10K-B APIs */
 void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
index 61bf8ab..1ce3442 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -11,6 +11,15 @@
 
 /* Registers */
 #define MCSX_IP_MODE					0x900c8ull
+#define MCSX_MCS_TOP_SLAVE_PORT_RESET(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x408ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa28ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
 
 #define MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(a) ({		\
 	u64 offset;					\
@@ -29,6 +38,23 @@
 		offset = 0x60000ull;			\
 	offset; })
 
+#define MCSX_MIL_RX_LMACX_CFG(a) ({			\
+	u64 offset;					\
+							\
+	offset = 0x900a8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x700a8ull;			\
+	offset += (a) * 0x800ull;			\
+	offset; })
+
+#define MCSX_HIL_GLOBAL ({				\
+	u64 offset;					\
+							\
+	offset = 0xc0000ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa0000ull;			\
+	offset; })
+
 #define MCSX_LINK_LMACX_CFG(a) ({			\
 	u64 offset;					\
 							\
@@ -61,6 +87,16 @@
 /* PEX registers */
 #define MCSX_PEX_RX_SLAVE_VLAN_CFGX(a)          (0x3b58ull + (a) * 0x8ull)
 #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a)          (0x46f8ull + (a) * 0x8ull)
+#define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a)	(0x788ull + (a) * 0x8ull)
+#define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a)		(0x4738ull + (a) * 0x8ull)
+
+#define MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION ({		\
+	u64 offset;					\
+							\
+	offset = 0x3b50ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x4c0ull;			\
+	offset; })
 
 /* CNF10K-B */
 #define MCSX_PEX_RX_SLAVE_CUSTOM_TAGX(a)        (0x4c8ull + (a) * 0x8ull)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index c3f5b39..9eaa8ee 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -13,6 +13,126 @@
 #include "rvu.h"
 #include "lmac_common.h"
 
+int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu,
+				       struct mcs_set_lmac_mode *req,
+				       struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (BIT_ULL(req->lmac_id) & mcs->hw->lmac_bmap)
+		mcs_set_lmac_mode(mcs, req->lmac_id, req->mode);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_hw_info(struct rvu *rvu,
+				     struct msg_req *req,
+				     struct mcs_hw_info *rsp)
+{
+	struct mcs *mcs;
+
+	if (!rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_NOT_MAPPED;
+
+	/* MCS resources are the same across all blocks */
+	mcs = mcs_get_pdata(0);
+	rsp->num_mcs_blks = rvu->mcs_blk_cnt;
+	rsp->tcam_entries = mcs->hw->tcam_entries;
+	rsp->secy_entries = mcs->hw->secy_entries;
+	rsp->sc_entries = mcs->hw->sc_entries;
+	rsp->sa_entries = mcs->hw->sa_entries;
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_port_reset(struct rvu *rvu, struct mcs_port_reset_req *req,
+				    struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mcs_reset_port(mcs, req->port_id, req->reset);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_set_active_lmac(struct rvu *rvu,
+					 struct mcs_set_active_lmac *req,
+					 struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	if (!mcs)
+		return MCS_AF_ERR_NOT_MAPPED;
+
+	mcs->hw->lmac_bmap = req->lmac_bmap;
+	mcs_set_lmac_channels(req->mcs_id, req->chan_base);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_port_cfg_set(struct rvu *rvu, struct mcs_port_cfg_set_req *req,
+				      struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->lmac_cnt <= req->port_id || !(mcs->hw->lmac_bmap & BIT_ULL(req->port_id)))
+		return -EINVAL;
+
+	mcs_set_port_cfg(mcs, req);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_port_cfg_get(struct rvu *rvu, struct mcs_port_cfg_get_req *req,
+				      struct mcs_port_cfg_get_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->lmac_cnt <= req->port_id || !(mcs->hw->lmac_bmap & BIT_ULL(req->port_id)))
+		return -EINVAL;
+
+	mcs_get_port_cfg(mcs, req, rsp);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_custom_tag_cfg_get(struct rvu *rvu, struct mcs_custom_tag_cfg_get_req *req,
+					    struct mcs_custom_tag_cfg_get_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mcs_get_custom_tag_cfg(mcs, req, rsp);
+
+	return 0;
+}
+
 static void rvu_mcs_set_lmac_bmap(struct rvu *rvu)
 {
 	struct mcs *mcs = mcs_get_pdata(0);
@@ -32,7 +152,8 @@ static void rvu_mcs_set_lmac_bmap(struct rvu *rvu)
 int rvu_mcs_init(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
-	int err = 0;
+	int lmac, err = 0, mcs_id;
+	struct mcs *mcs;
 
 	rvu->mcs_blk_cnt = mcs_get_blkcnt();
 
@@ -48,5 +169,11 @@ int rvu_mcs_init(struct rvu *rvu)
 		rvu_mcs_set_lmac_bmap(rvu);
 	}
 
+	for (mcs_id = 0; mcs_id < rvu->mcs_blk_cnt; mcs_id++) {
+		mcs = mcs_get_pdata(mcs_id);
+		for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++)
+			mcs_set_lmac_mode(mcs, lmac, 0);
+	}
+
 	return err;
 }
-- 
2.7.4



* [net-next PATCH v2 3/8] octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 1/8] octeontx2-af: cn10k: Introduce driver for macsec block Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 2/8] octeontx2-af: cn10k: mcs: Add mailboxes for port related operations Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 4/8] octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic Subbaraya Sundeep
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Vamsi Attunuru, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

To establish a macsec connection association, the netdev driver
needs hardware resources like SecYs, TCAM flows, SCs and SAs.
This patch manages allocating, freeing and configuring those
resources. AF consumers can request resources and configure them
via these mailbox messages, and AF can keep allocating until it
runs out of hardware resources.
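
As a usage sketch (not part of this patch): a consumer would typically
reserve one resource of every type in a single request and then program
them with the write messages below; this assumes the PF-side
otx2_mbox_alloc_msg_*() helpers generated by the M() macro and
otx2_sync_mbox_msg():

	struct mcs_alloc_rsrc_req *req;
	int err;

	mutex_lock(&pfvf->mbox.lock);
	req = otx2_mbox_alloc_msg_mcs_alloc_resources(&pfvf->mbox);
	if (!req) {
		mutex_unlock(&pfvf->mbox.lock);
		return -ENOMEM;
	}
	req->mcs_id = mcs_id;
	req->dir = MCS_TX;
	req->all = 1;	/* one flow id, SecY, SC and SA(s) in one shot */
	err = otx2_sync_mbox_msg(&pfvf->mbox);
	mutex_unlock(&pfvf->mbox.lock);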

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   | 211 ++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 447 +++++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  59 ++-
 .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c    |  55 +++
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    | 385 ++++++++++++++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 374 +++++++++++++++++
 6 files changed, 1530 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 207cd4f..3213b1512 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -295,10 +295,38 @@ M(NIX_BANDPROF_FREE,	0x801e, nix_bandprof_free, nix_bandprof_free_req,   \
 M(NIX_BANDPROF_GET_HWINFO, 0x801f, nix_bandprof_get_hwinfo, msg_req,		\
 				nix_bandprof_get_hwinfo_rsp)		    \
 /* MCS mbox IDs (range 0xA000 - 0xBFFF) */					\
+M(MCS_ALLOC_RESOURCES,	0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req,	\
+				mcs_alloc_rsrc_rsp)				\
+M(MCS_FREE_RESOURCES,	0xa001, mcs_free_resources, mcs_free_rsrc_req, msg_rsp) \
+M(MCS_FLOWID_ENTRY_WRITE, 0xa002, mcs_flowid_entry_write, mcs_flowid_entry_write_req,	\
+				msg_rsp)					\
+M(MCS_SECY_PLCY_WRITE,	0xa003, mcs_secy_plcy_write, mcs_secy_plcy_write_req,	\
+				msg_rsp)					\
+M(MCS_RX_SC_CAM_WRITE,	0xa004, mcs_rx_sc_cam_write, mcs_rx_sc_cam_write_req,	\
+				msg_rsp)					\
+M(MCS_SA_PLCY_WRITE,	0xa005, mcs_sa_plcy_write, mcs_sa_plcy_write_req,	\
+				msg_rsp)					\
+M(MCS_TX_SC_SA_MAP_WRITE, 0xa006, mcs_tx_sc_sa_map_write, mcs_tx_sc_sa_map,	\
+				  msg_rsp)					\
+M(MCS_RX_SC_SA_MAP_WRITE, 0xa007, mcs_rx_sc_sa_map_write, mcs_rx_sc_sa_map,	\
+				  msg_rsp)					\
+M(MCS_FLOWID_ENA_ENTRY,	0xa008, mcs_flowid_ena_entry, mcs_flowid_ena_dis_entry,	\
+				msg_rsp)					\
+M(MCS_PN_TABLE_WRITE,	0xa009, mcs_pn_table_write, mcs_pn_table_write_req,	\
+				msg_rsp)					\
 M(MCS_SET_ACTIVE_LMAC,	0xa00a,	mcs_set_active_lmac, mcs_set_active_lmac,	\
 				msg_rsp)					\
 M(MCS_GET_HW_INFO,	0xa00b,	mcs_get_hw_info, msg_req, mcs_hw_info)		\
 M(MCS_SET_LMAC_MODE,	0xa013, mcs_set_lmac_mode, mcs_set_lmac_mode, msg_rsp)	\
+M(MCS_SET_PN_THRESHOLD, 0xa014, mcs_set_pn_threshold, mcs_set_pn_threshold,	\
+				msg_rsp)					\
+M(MCS_ALLOC_CTRL_PKT_RULE, 0xa015, mcs_alloc_ctrl_pkt_rule,			\
+				   mcs_alloc_ctrl_pkt_rule_req,			\
+				   mcs_alloc_ctrl_pkt_rule_rsp)			\
+M(MCS_FREE_CTRL_PKT_RULE, 0xa016, mcs_free_ctrl_pkt_rule,			\
+				  mcs_free_ctrl_pkt_rule_req, msg_rsp)		\
+M(MCS_CTRL_PKT_RULE_WRITE, 0xa017, mcs_ctrl_pkt_rule_write,			\
+				   mcs_ctrl_pkt_rule_write_req, msg_rsp)	\
 M(MCS_PORT_RESET,	0xa018, mcs_port_reset, mcs_port_reset_req, msg_rsp)	\
 M(MCS_PORT_CFG_SET,	0xa019, mcs_port_cfg_set, mcs_port_cfg_set_req, msg_rsp)\
 M(MCS_PORT_CFG_GET,	0xa020, mcs_port_cfg_get, mcs_port_cfg_get_req,		\
@@ -1674,6 +1702,133 @@ enum mcs_direction {
 	MCS_TX,
 };
 
+enum mcs_rsrc_type {
+	MCS_RSRC_TYPE_FLOWID,
+	MCS_RSRC_TYPE_SECY,
+	MCS_RSRC_TYPE_SC,
+	MCS_RSRC_TYPE_SA,
+};
+
+struct mcs_alloc_rsrc_req {
+	struct mbox_msghdr hdr;
+	u8 rsrc_type;
+	u8 rsrc_cnt;	/* Resources count */
+	u8 mcs_id;	/* MCS block ID	*/
+	u8 dir;		/* Macsec ingress or egress side */
+	u8 all;		/* Allocate all resource type one each */
+	u64 rsvd;
+};
+
+struct mcs_alloc_rsrc_rsp {
+	struct mbox_msghdr hdr;
+	u8 flow_ids[128];	/* Index of reserved entries */
+	u8 secy_ids[128];
+	u8 sc_ids[128];
+	u8 sa_ids[256];
+	u8 rsrc_type;
+	u8 rsrc_cnt;		/* No of entries reserved */
+	u8 mcs_id;
+	u8 dir;
+	u8 all;
+	u8 rsvd[256];		/* reserved fields for future expansion */
+};
+
+struct mcs_free_rsrc_req {
+	struct mbox_msghdr hdr;
+	u8 rsrc_id;		/* Index of the entry to be freed */
+	u8 rsrc_type;
+	u8 mcs_id;
+	u8 dir;
+	u8 all;			/* Free all the cam resources */
+	u64 rsvd;
+};
+
+struct mcs_flowid_entry_write_req {
+	struct mbox_msghdr hdr;
+	u64 data[4];
+	u64 mask[4];
+	u64 sci;	/* CNF10K-B for tx_secy_mem_map */
+	u8 flow_id;
+	u8 secy_id;	/* secyid for which flowid is mapped */
+	u8 sc_id;	/* Valid if dir = MCS_TX, SC_CAM id mapped to flowid */
+	u8 ena;		/* Enable tcam entry */
+	u8 ctrl_pkt;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_secy_plcy_write_req {
+	struct mbox_msghdr hdr;
+	u64 plcy;
+	u8 secy_id;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+/* RX SC_CAM mapping */
+struct mcs_rx_sc_cam_write_req {
+	struct mbox_msghdr hdr;
+	u64 sci;	/* SCI */
+	u64 secy_id;	/* secy index mapped to SC */
+	u8 sc_id;	/* SC CAM entry index */
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_sa_plcy_write_req {
+	struct mbox_msghdr hdr;
+	u64 plcy[2][9];		/* Support 2 SA policy */
+	u8 sa_index[2];
+	u8 sa_cnt;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_tx_sc_sa_map {
+	struct mbox_msghdr hdr;
+	u8 sa_index0;
+	u8 sa_index1;
+	u8 rekey_ena;
+	u8 sa_index0_vld;
+	u8 sa_index1_vld;
+	u8 tx_sa_active;
+	u64 sectag_sci;
+	u8 sc_id;	/* used as index for SA_MEM_MAP */
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_rx_sc_sa_map {
+	struct mbox_msghdr hdr;
+	u8 sa_index;
+	u8 sa_in_use;
+	u8 sc_id;
+	u8 an;		/* value range 0-3, sc_id + an used as index SA_MEM_MAP */
+	u8 mcs_id;
+	u64 rsvd;
+};
+
+struct mcs_flowid_ena_dis_entry {
+	struct mbox_msghdr hdr;
+	u8 flow_id;
+	u8 ena;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_pn_table_write_req {
+	struct mbox_msghdr hdr;
+	u64 next_pn;
+	u8 pn_id;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
 struct mcs_hw_info {
 	struct mbox_msghdr hdr;
 	u8 num_mcs_blks;	/* Number of MCS blocks */
@@ -1762,4 +1917,60 @@ enum mcs_af_status {
 	MCS_AF_ERR_NOT_MAPPED           = -1202,
 };
 
+struct mcs_set_pn_threshold {
+	struct mbox_msghdr hdr;
+	u64 threshold;
+	u8 xpn; /* '1' for setting xpn threshold */
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+enum mcs_ctrl_pkt_rule_type {
+	MCS_CTRL_PKT_RULE_TYPE_ETH,
+	MCS_CTRL_PKT_RULE_TYPE_DA,
+	MCS_CTRL_PKT_RULE_TYPE_RANGE,
+	MCS_CTRL_PKT_RULE_TYPE_COMBO,
+	MCS_CTRL_PKT_RULE_TYPE_MAC,
+};
+
+struct mcs_alloc_ctrl_pkt_rule_req {
+	struct mbox_msghdr hdr;
+	u8 rule_type;
+	u8 mcs_id;	/* MCS block ID	*/
+	u8 dir;		/* Macsec ingress or egress side */
+	u64 rsvd;
+};
+
+struct mcs_alloc_ctrl_pkt_rule_rsp {
+	struct mbox_msghdr hdr;
+	u8 rule_idx;
+	u8 rule_type;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_free_ctrl_pkt_rule_req {
+	struct mbox_msghdr hdr;
+	u8 rule_idx;
+	u8 rule_type;
+	u8 mcs_id;
+	u8 dir;
+	u8 all;
+	u64 rsvd;
+};
+
+struct mcs_ctrl_pkt_rule_write_req {
+	struct mbox_msghdr hdr;
+	u64 data0;
+	u64 data1;
+	u64 data2;
+	u8 rule_idx;
+	u8 rule_type;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
index 89a3c54..66d5038 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -24,6 +24,429 @@ static const struct pci_device_id mcs_id_table[] = {
 
 static LIST_HEAD(mcs_list);
 
+void mcs_pn_table_write(struct mcs *mcs, u8 pn_id, u64 next_pn, u8 dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX)
+		reg = MCSX_CPM_RX_SLAVE_SA_PN_TABLE_MEMX(pn_id);
+	else
+		reg = MCSX_CPM_TX_SLAVE_SA_PN_TABLE_MEMX(pn_id);
+	mcs_reg_write(mcs, reg, next_pn);
+}
+
+void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *map)
+{
+	u64 reg, val;
+
+	val = (map->sa_index0 & 0xFF) |
+	      (map->sa_index1 & 0xFF) << 9 |
+	      (map->rekey_ena & 0x1) << 18 |
+	      (map->sa_index0_vld & 0x1) << 19 |
+	      (map->sa_index1_vld & 0x1) << 20 |
+	      (map->tx_sa_active & 0x1) << 21 |
+	      map->sectag_sci << 22;
+	reg = MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(map->sc_id);
+	mcs_reg_write(mcs, reg, val);
+
+	val = map->sectag_sci >> 42;
+	reg = MCSX_CPM_TX_SLAVE_SA_MAP_MEM_1X(map->sc_id);
+	mcs_reg_write(mcs, reg, val);
+}
+
+void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map)
+{
+	u64 val, reg;
+
+	val = (map->sa_index & 0xFF) | map->sa_in_use << 9;
+
+	reg = MCSX_CPM_RX_SLAVE_SA_MAP_MEMX((4 * map->sc_id) + map->an);
+	mcs_reg_write(mcs, reg, val);
+}
+
+void mcs_sa_plcy_write(struct mcs *mcs, u64 *plcy, int sa_id, int dir)
+{
+	int reg_id;
+	u64 reg;
+
+	if (dir == MCS_RX) {
+		for (reg_id = 0; reg_id < 8; reg_id++) {
+			reg =  MCSX_CPM_RX_SLAVE_SA_PLCY_MEMX(reg_id, sa_id);
+			mcs_reg_write(mcs, reg, plcy[reg_id]);
+		}
+	} else {
+		for (reg_id = 0; reg_id < 9; reg_id++) {
+			reg =  MCSX_CPM_TX_SLAVE_SA_PLCY_MEMX(reg_id, sa_id);
+			mcs_reg_write(mcs, reg, plcy[reg_id]);
+		}
+	}
+}
+
+void mcs_ena_dis_sc_cam_entry(struct mcs *mcs, int sc_id, int ena)
+{
+	u64 reg, val;
+
+	reg = MCSX_CPM_RX_SLAVE_SC_CAM_ENA(0);
+	if (sc_id > 63)
+		reg = MCSX_CPM_RX_SLAVE_SC_CAM_ENA(1);
+
+	if (ena)
+		val = mcs_reg_read(mcs, reg) | BIT_ULL(sc_id % 64);
+	else
+		val = mcs_reg_read(mcs, reg) & ~BIT_ULL(sc_id % 64);
+
+	mcs_reg_write(mcs, reg, val);
+}
+
+void mcs_rx_sc_cam_write(struct mcs *mcs, u64 sci, u64 secy, int sc_id)
+{
+	mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_SC_CAMX(0, sc_id), sci);
+	mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_SC_CAMX(1, sc_id), secy);
+	/* Enable SC CAM */
+	mcs_ena_dis_sc_cam_entry(mcs, sc_id, true);
+}
+
+void mcs_secy_plcy_write(struct mcs *mcs, u64 plcy, int secy_id, int dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX)
+		reg = MCSX_CPM_RX_SLAVE_SECY_PLCY_MEM_0X(secy_id);
+	else
+		reg = MCSX_CPM_TX_SLAVE_SECY_PLCY_MEMX(secy_id);
+
+	mcs_reg_write(mcs, reg, plcy);
+
+	if (mcs->hw->mcs_blks == 1 && dir == MCS_RX)
+		mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_SECY_PLCY_MEM_1X(secy_id), 0x0ull);
+}
+
+void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir)
+{
+	u64 reg, val;
+
+	val = (map->secy & 0x7F) | (map->ctrl_pkt & 0x1) << 8;
+	if (dir == MCS_RX) {
+		reg = MCSX_CPM_RX_SLAVE_SECY_MAP_MEMX(map->flow_id);
+	} else {
+		val |= (map->sc & 0x7F) << 9;
+		reg = MCSX_CPM_TX_SLAVE_SECY_MAP_MEM_0X(map->flow_id);
+	}
+
+	mcs_reg_write(mcs, reg, val);
+}
+
+void mcs_ena_dis_flowid_entry(struct mcs *mcs, int flow_id, int dir, int ena)
+{
+	u64 reg, val;
+
+	if (dir == MCS_RX) {
+		reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_ENA_0;
+		if (flow_id > 63)
+			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_ENA_1;
+	} else {
+		reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_ENA_0;
+		if (flow_id > 63)
+			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_ENA_1;
+	}
+
+	/* Enable/Disable the tcam entry */
+	if (ena)
+		val = mcs_reg_read(mcs, reg) | BIT_ULL(flow_id % 64);
+	else
+		val = mcs_reg_read(mcs, reg) & ~BIT_ULL(flow_id % 64);
+
+	mcs_reg_write(mcs, reg, val);
+}
+
+void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id, int dir)
+{
+	int reg_id;
+	u64 reg;
+
+	if (dir == MCS_RX) {
+		for (reg_id = 0; reg_id < 4; reg_id++) {
+			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
+			mcs_reg_write(mcs, reg, data[reg_id]);
+			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+			mcs_reg_write(mcs, reg, mask[reg_id]);
+		}
+	} else {
+		for (reg_id = 0; reg_id < 4; reg_id++) {
+			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
+			mcs_reg_write(mcs, reg, data[reg_id]);
+			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+			mcs_reg_write(mcs, reg, mask[reg_id]);
+		}
+	}
+}
+
+void mcs_clear_secy_plcy(struct mcs *mcs, int secy_id, int dir)
+{
+	struct mcs_rsrc_map *map;
+	int flow_id;
+
+	if (dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	/* Clear secy memory to zero */
+	mcs_secy_plcy_write(mcs, 0, secy_id, dir);
+
+	/* Disable the tcam entry using this secy */
+	for (flow_id = 0; flow_id < map->flow_ids.max; flow_id++) {
+		if (map->flowid2secy_map[flow_id] != secy_id)
+			continue;
+		mcs_ena_dis_flowid_entry(mcs, flow_id, dir, false);
+	}
+}
+
+int mcs_alloc_ctrlpktrule(struct rsrc_bmap *rsrc, u16 *pf_map, u16 offset, u16 pcifunc)
+{
+	int rsrc_id;
+
+	if (!rsrc->bmap)
+		return -EINVAL;
+
+	rsrc_id = bitmap_find_next_zero_area(rsrc->bmap, rsrc->max, offset, 1, 0);
+	if (rsrc_id >= rsrc->max)
+		return -ENOSPC;
+
+	bitmap_set(rsrc->bmap, rsrc_id, 1);
+	pf_map[rsrc_id] = pcifunc;
+
+	return rsrc_id;
+}
+
+int mcs_free_ctrlpktrule(struct mcs *mcs, struct mcs_free_ctrl_pkt_rule_req *req)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs_rsrc_map *map;
+	u64 dis, reg;
+	int id, rc;
+
+	reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_RULE_ENABLE : MCSX_PEX_TX_SLAVE_RULE_ENABLE;
+	map = (req->dir == MCS_RX) ? &mcs->rx : &mcs->tx;
+
+	if (req->all) {
+		for (id = 0; id < map->ctrlpktrule.max; id++) {
+			if (map->ctrlpktrule2pf_map[id] != pcifunc)
+				continue;
+			mcs_free_rsrc(&map->ctrlpktrule, map->ctrlpktrule2pf_map, id, pcifunc);
+			dis = mcs_reg_read(mcs, reg);
+			dis &= ~BIT_ULL(id);
+			mcs_reg_write(mcs, reg, dis);
+		}
+		return 0;
+	}
+
+	rc = mcs_free_rsrc(&map->ctrlpktrule, map->ctrlpktrule2pf_map, req->rule_idx, pcifunc);
+	dis = mcs_reg_read(mcs, reg);
+	dis &= ~BIT_ULL(req->rule_idx);
+	mcs_reg_write(mcs, reg, dis);
+
+	return rc;
+}
+
+int mcs_ctrlpktrule_write(struct mcs *mcs, struct mcs_ctrl_pkt_rule_write_req *req)
+{
+	u64 reg, enb;
+	u64 idx;
+
+	switch (req->rule_type) {
+	case MCS_CTRL_PKT_RULE_TYPE_ETH:
+		req->data0 &= GENMASK(15, 0);
+		if (req->data0 != ETH_P_PAE)
+			return -EINVAL;
+
+		idx = req->rule_idx - MCS_CTRLPKT_ETYPE_RULE_OFFSET;
+		reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(idx) :
+		      MCSX_PEX_TX_SLAVE_RULE_ETYPE_CFGX(idx);
+
+		mcs_reg_write(mcs, reg, req->data0);
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_DA:
+		if (!(req->data0 & BIT_ULL(40)))
+			return -EINVAL;
+
+		idx = req->rule_idx - MCS_CTRLPKT_DA_RULE_OFFSET;
+		reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_RULE_DAX(idx) :
+		      MCSX_PEX_TX_SLAVE_RULE_DAX(idx);
+
+		mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_RANGE:
+		if (!(req->data0 & BIT_ULL(40)) || !(req->data1 & BIT_ULL(40)))
+			return -EINVAL;
+
+		idx = req->rule_idx - MCS_CTRLPKT_DA_RANGE_RULE_OFFSET;
+		if (req->dir == MCS_RX) {
+			reg = MCSX_PEX_RX_SLAVE_RULE_DA_RANGE_MINX(idx);
+			mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_RX_SLAVE_RULE_DA_RANGE_MAXX(idx);
+			mcs_reg_write(mcs, reg, req->data1 & GENMASK_ULL(47, 0));
+		} else {
+			reg = MCSX_PEX_TX_SLAVE_RULE_DA_RANGE_MINX(idx);
+			mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_TX_SLAVE_RULE_DA_RANGE_MAXX(idx);
+			mcs_reg_write(mcs, reg, req->data1 & GENMASK_ULL(47, 0));
+		}
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_COMBO:
+		req->data2 &= GENMASK(15, 0);
+		if (req->data2 != ETH_P_PAE || !(req->data0 & BIT_ULL(40)) ||
+		    !(req->data1 & BIT_ULL(40)))
+			return -EINVAL;
+
+		idx = req->rule_idx - MCS_CTRLPKT_COMBO_RULE_OFFSET;
+		if (req->dir == MCS_RX) {
+			reg = MCSX_PEX_RX_SLAVE_RULE_COMBO_MINX(idx);
+			mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_RX_SLAVE_RULE_COMBO_MAXX(idx);
+			mcs_reg_write(mcs, reg, req->data1 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_RX_SLAVE_RULE_COMBO_ETX(idx);
+			mcs_reg_write(mcs, reg, req->data2);
+		} else {
+			reg = MCSX_PEX_TX_SLAVE_RULE_COMBO_MINX(idx);
+			mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_TX_SLAVE_RULE_COMBO_MAXX(idx);
+			mcs_reg_write(mcs, reg, req->data1 & GENMASK_ULL(47, 0));
+			reg = MCSX_PEX_TX_SLAVE_RULE_COMBO_ETX(idx);
+			mcs_reg_write(mcs, reg, req->data2);
+		}
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_MAC:
+		if (!(req->data0 & BIT_ULL(40)))
+			return -EINVAL;
+
+		idx = req->rule_idx - MCS_CTRLPKT_MAC_EN_RULE_OFFSET;
+		reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_RULE_MAC :
+		      MCSX_PEX_TX_SLAVE_RULE_MAC;
+
+		mcs_reg_write(mcs, reg, req->data0 & GENMASK_ULL(47, 0));
+		break;
+	}
+
+	reg = (req->dir == MCS_RX) ? MCSX_PEX_RX_SLAVE_RULE_ENABLE : MCSX_PEX_TX_SLAVE_RULE_ENABLE;
+
+	enb = mcs_reg_read(mcs, reg);
+	enb |= BIT_ULL(req->rule_idx);
+	mcs_reg_write(mcs, reg, enb);
+
+	return 0;
+}
+
+int mcs_free_rsrc(struct rsrc_bmap *rsrc, u16 *pf_map, int rsrc_id, u16 pcifunc)
+{
+	/* Check if the rsrc_id is mapped to PF/VF */
+	if (pf_map[rsrc_id] != pcifunc)
+		return -EINVAL;
+
+	rvu_free_rsrc(rsrc, rsrc_id);
+	pf_map[rsrc_id] = 0;
+	return 0;
+}
+
+/* Free all the cam resources mapped to pf */
+int mcs_free_all_rsrc(struct mcs *mcs, int dir, u16 pcifunc)
+{
+	struct mcs_rsrc_map *map;
+	int id;
+
+	if (dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	/* free tcam entries */
+	for (id = 0; id < map->flow_ids.max; id++) {
+		if (map->flowid2pf_map[id] != pcifunc)
+			continue;
+		mcs_free_rsrc(&map->flow_ids, map->flowid2pf_map,
+			      id, pcifunc);
+		mcs_ena_dis_flowid_entry(mcs, id, dir, false);
+	}
+
+	/* free secy entries */
+	for (id = 0; id < map->secy.max; id++) {
+		if (map->secy2pf_map[id] != pcifunc)
+			continue;
+		mcs_free_rsrc(&map->secy, map->secy2pf_map,
+			      id, pcifunc);
+		mcs_clear_secy_plcy(mcs, id, dir);
+	}
+
+	/* free sc entries */
+	for (id = 0; id < map->sc.max; id++) {
+		if (map->sc2pf_map[id] != pcifunc)
+			continue;
+		mcs_free_rsrc(&map->sc, map->sc2pf_map, id, pcifunc);
+
+		/* Disable SC CAM only on RX side */
+		if (dir == MCS_RX)
+			mcs_ena_dis_sc_cam_entry(mcs, id, false);
+	}
+
+	/* free sa entries */
+	for (id = 0; id < map->sa.max; id++) {
+		if (map->sa2pf_map[id] != pcifunc)
+			continue;
+		mcs_free_rsrc(&map->sa, map->sa2pf_map, id, pcifunc);
+	}
+	return 0;
+}
+
+int mcs_alloc_rsrc(struct rsrc_bmap *rsrc, u16 *pf_map, u16 pcifunc)
+{
+	int rsrc_id;
+
+	rsrc_id = rvu_alloc_rsrc(rsrc);
+	if (rsrc_id < 0)
+		return -ENOMEM;
+	pf_map[rsrc_id] = pcifunc;
+	return rsrc_id;
+}
+
+int mcs_alloc_all_rsrc(struct mcs *mcs, u8 *flow_id, u8 *secy_id,
+		       u8 *sc_id, u8 *sa1_id, u8 *sa2_id, u16 pcifunc, int dir)
+{
+	struct mcs_rsrc_map *map;
+	int id;
+
+	if (dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	id = mcs_alloc_rsrc(&map->flow_ids, map->flowid2pf_map, pcifunc);
+	if (id < 0)
+		return -ENOMEM;
+	*flow_id = id;
+
+	id = mcs_alloc_rsrc(&map->secy, map->secy2pf_map, pcifunc);
+	if (id < 0)
+		return -ENOMEM;
+	*secy_id = id;
+
+	id = mcs_alloc_rsrc(&map->sc, map->sc2pf_map, pcifunc);
+	if (id < 0)
+		return -ENOMEM;
+	*sc_id = id;
+
+	id = mcs_alloc_rsrc(&map->sa, map->sa2pf_map, pcifunc);
+	if (id < 0)
+		return -ENOMEM;
+	*sa1_id = id;
+
+	id = mcs_alloc_rsrc(&map->sa, map->sa2pf_map, pcifunc);
+	if (id < 0)
+		return -ENOMEM;
+	*sa2_id = id;
+
+	return 0;
+}
+
 static void *alloc_mem(struct mcs *mcs, int n)
 {
 	return devm_kcalloc(mcs->dev, n, sizeof(u16), GFP_KERNEL);
@@ -54,6 +477,10 @@ static int mcs_alloc_struct_mem(struct mcs *mcs, struct mcs_rsrc_map *res)
 	if (!res->flowid2secy_map)
 		return -ENOMEM;
 
+	res->ctrlpktrule2pf_map = alloc_mem(mcs, MCS_MAX_CTRLPKT_RULES);
+	if (!res->ctrlpktrule2pf_map)
+		return -ENOMEM;
+
 	res->flow_ids.max = hw->tcam_entries - MCS_RSRC_RSVD_CNT;
 	err = rvu_alloc_bitmap(&res->flow_ids);
 	if (err)
@@ -74,6 +501,11 @@ static int mcs_alloc_struct_mem(struct mcs *mcs, struct mcs_rsrc_map *res)
 	if (err)
 		return err;
 
+	res->ctrlpktrule.max = MCS_MAX_CTRLPKT_RULES;
+	err = rvu_alloc_bitmap(&res->ctrlpktrule);
+	if (err)
+		return err;
+
 	return 0;
 }
 
@@ -210,6 +642,18 @@ void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode)
 	mcs_reg_write(mcs, reg, (u64)mode);
 }
 
+void mcs_pn_threshold_set(struct mcs *mcs, struct mcs_set_pn_threshold *pn)
+{
+	u64 reg;
+
+	if (pn->dir == MCS_RX)
+		reg = pn->xpn ? MCSX_CPM_RX_SLAVE_XPN_THRESHOLD : MCSX_CPM_RX_SLAVE_PN_THRESHOLD;
+	else
+		reg = pn->xpn ? MCSX_CPM_TX_SLAVE_XPN_THRESHOLD : MCSX_CPM_TX_SLAVE_PN_THRESHOLD;
+
+	mcs_reg_write(mcs, reg, pn->threshold);
+}
+
 void cn10kb_mcs_parser_cfg(struct mcs *mcs)
 {
 	u64 reg, val;
@@ -353,6 +797,9 @@ void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs)
 struct mcs_ops cn10kb_mcs_ops = {
 	.mcs_set_hw_capabilities	= cn10kb_mcs_set_hw_capabilities,
 	.mcs_parser_cfg			= cn10kb_mcs_parser_cfg,
+	.mcs_tx_sa_mem_map_write	= cn10kb_mcs_tx_sa_mem_map_write,
+	.mcs_rx_sa_mem_map_write	= cn10kb_mcs_rx_sa_mem_map_write,
+	.mcs_flowid_secy_map		= cn10kb_mcs_flowid_secy_map,
 };
 
 static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
index c11d507..615a3ad 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -21,19 +21,47 @@
 #define MCS_PORT_FIFO_SKID_MASK		0x3F
 #define MCS_MAX_CUSTOM_TAGS		0x8
 
+#define MCS_CTRLPKT_ETYPE_RULE_MAX	8
+#define MCS_CTRLPKT_DA_RULE_MAX		8
+#define MCS_CTRLPKT_DA_RANGE_RULE_MAX	4
+#define MCS_CTRLPKT_COMBO_RULE_MAX	4
+#define MCS_CTRLPKT_MAC_RULE_MAX	1
+
+#define MCS_MAX_CTRLPKT_RULES	(MCS_CTRLPKT_ETYPE_RULE_MAX + \
+				MCS_CTRLPKT_DA_RULE_MAX + \
+				MCS_CTRLPKT_DA_RANGE_RULE_MAX + \
+				MCS_CTRLPKT_COMBO_RULE_MAX + \
+				MCS_CTRLPKT_MAC_RULE_MAX)
+
+#define MCS_CTRLPKT_ETYPE_RULE_OFFSET		0
+#define MCS_CTRLPKT_DA_RULE_OFFSET		8
+#define MCS_CTRLPKT_DA_RANGE_RULE_OFFSET	16
+#define MCS_CTRLPKT_COMBO_RULE_OFFSET		20
+#define MCS_CTRLPKT_MAC_EN_RULE_OFFSET		24
+
 /* Reserved resources for default bypass entry */
 #define MCS_RSRC_RSVD_CNT		1
 
+struct secy_mem_map {
+	u8 flow_id;
+	u8 secy;
+	u8 ctrl_pkt;
+	u8 sc;
+	u64 sci;
+};
+
 struct mcs_rsrc_map {
 	u16 *flowid2pf_map;
 	u16 *secy2pf_map;
 	u16 *sc2pf_map;
 	u16 *sa2pf_map;
 	u16 *flowid2secy_map;	/* bitmap flowid mapped to secy*/
+	u16 *ctrlpktrule2pf_map;
 	struct rsrc_bmap	flow_ids;
 	struct rsrc_bmap	secy;
 	struct rsrc_bmap	sc;
 	struct rsrc_bmap	sa;
+	struct rsrc_bmap	ctrlpktrule;
 };
 
 struct hwinfo {
@@ -62,6 +90,9 @@ struct mcs {
 struct mcs_ops {
 	void	(*mcs_set_hw_capabilities)(struct mcs *mcs);
 	void	(*mcs_parser_cfg)(struct mcs *mcs);
+	void	(*mcs_tx_sa_mem_map_write)(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
+	void	(*mcs_rx_sa_mem_map_write)(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
+	void	(*mcs_flowid_secy_map)(struct mcs *mcs, struct secy_mem_map *map, int dir);
 };
 
 extern struct pci_driver mcs_driver;
@@ -80,7 +111,24 @@ static inline u64 mcs_reg_read(struct mcs *mcs, u64 offset)
 struct mcs *mcs_get_pdata(int mcs_id);
 int mcs_get_blkcnt(void);
 int mcs_set_lmac_channels(int mcs_id, u16 base);
-
+int mcs_alloc_rsrc(struct rsrc_bmap *rsrc, u16 *pf_map, u16 pcifunc);
+int mcs_free_rsrc(struct rsrc_bmap *rsrc, u16 *pf_map, int rsrc_id, u16 pcifunc);
+int mcs_alloc_all_rsrc(struct mcs *mcs, u8 *flowid, u8 *secy_id,
+		       u8 *sc_id, u8 *sa1_id, u8 *sa2_id, u16 pcifunc, int dir);
+int mcs_free_all_rsrc(struct mcs *mcs, int dir, u16 pcifunc);
+void mcs_clear_secy_plcy(struct mcs *mcs, int secy_id, int dir);
+void mcs_ena_dis_flowid_entry(struct mcs *mcs, int id, int dir, int ena);
+void mcs_ena_dis_sc_cam_entry(struct mcs *mcs, int id, int ena);
+void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int id, int dir);
+void mcs_secy_plcy_write(struct mcs *mcs, u64 plcy, int id, int dir);
+void mcs_rx_sc_cam_write(struct mcs *mcs, u64 sci, u64 secy, int sc_id);
+void mcs_sa_plcy_write(struct mcs *mcs, u64 *plcy, int sa, int dir);
+void mcs_map_sc_to_sa(struct mcs *mcs, u64 *sa_map, int sc, int dir);
+void mcs_pn_table_write(struct mcs *mcs, u8 pn_id, u64 next_pn, u8 dir);
+void mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
+void mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
+void mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
+void mcs_pn_threshold_set(struct mcs *mcs, struct mcs_set_pn_threshold *pn);
 int mcs_install_flowid_bypass_entry(struct mcs *mcs);
 void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode);
 void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset);
@@ -89,14 +137,23 @@ void mcs_get_port_cfg(struct mcs *mcs, struct mcs_port_cfg_get_req *req,
 		      struct mcs_port_cfg_get_rsp *rsp);
 void mcs_get_custom_tag_cfg(struct mcs *mcs, struct mcs_custom_tag_cfg_get_req *req,
 			    struct mcs_custom_tag_cfg_get_rsp *rsp);
+int mcs_alloc_ctrlpktrule(struct rsrc_bmap *rsrc, u16 *pf_map, u16 offset, u16 pcifunc);
+int mcs_free_ctrlpktrule(struct mcs *mcs, struct mcs_free_ctrl_pkt_rule_req *req);
+int mcs_ctrlpktrule_write(struct mcs *mcs, struct mcs_ctrl_pkt_rule_write_req *req);
 
 /* CN10K-B APIs */
 void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs);
+void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
+void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
+void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 void cn10kb_mcs_parser_cfg(struct mcs *mcs);
 
 /* CNF10K-B APIs */
 struct mcs_ops *cnf10kb_get_mac_ops(void);
 void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs);
+void cnf10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
+void cnf10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
+void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
 
 #endif /* MCS_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
index 68bcee0..d6d03d0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
@@ -10,6 +10,9 @@
 static struct mcs_ops		cnf10kb_mcs_ops   = {
 	.mcs_set_hw_capabilities	= cnf10kb_mcs_set_hw_capabilities,
 	.mcs_parser_cfg			= cnf10kb_mcs_parser_cfg,
+	.mcs_tx_sa_mem_map_write	= cnf10kb_mcs_tx_sa_mem_map_write,
+	.mcs_rx_sa_mem_map_write	= cnf10kb_mcs_rx_sa_mem_map_write,
+	.mcs_flowid_secy_map		= cnf10kb_mcs_flowid_secy_map,
 };
 
 struct mcs_ops *cnf10kb_get_mac_ops(void)
@@ -63,3 +66,55 @@ void cnf10kb_mcs_parser_cfg(struct mcs *mcs)
 	reg = MCSX_PEX_TX_SLAVE_ETYPE_ENABLE;
 	mcs_reg_write(mcs, reg, val);
 }
+
+void cnf10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir)
+{
+	u64 reg, val;
+
+	val = (map->secy & 0x3F) | (map->ctrl_pkt & 0x1) << 6;
+	if (dir == MCS_RX) {
+		reg = MCSX_CPM_RX_SLAVE_SECY_MAP_MEMX(map->flow_id);
+	} else {
+		reg = MCSX_CPM_TX_SLAVE_SECY_MAP_MEM_0X(map->flow_id);
+		mcs_reg_write(mcs, reg, map->sci);
+		val |= (map->sc & 0x3F) << 7;
+		reg = MCSX_CPM_TX_SLAVE_SECY_MAP_MEM_1X(map->flow_id);
+	}
+
+	mcs_reg_write(mcs, reg, val);
+}
+
+void cnf10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *map)
+{
+	u64 reg, val;
+
+	val = (map->sa_index0 & 0x7F) | (map->sa_index1 & 0x7F) << 7;
+
+	reg = MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(map->sc_id);
+	mcs_reg_write(mcs, reg, val);
+
+	if (map->rekey_ena) {
+		reg = MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0;
+		val = mcs_reg_read(mcs, reg);
+		val |= BIT_ULL(map->sc_id);
+		mcs_reg_write(mcs, reg, val);
+	}
+
+	if (map->sa_index0_vld)
+		mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX0_VLDX(map->sc_id), BIT_ULL(0));
+
+	if (map->sa_index1_vld)
+		mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX1_VLDX(map->sc_id), BIT_ULL(0));
+
+	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_SA_ACTIVEX(map->sc_id), map->tx_sa_active);
+}
+
+void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map)
+{
+	u64 val, reg;
+
+	val = (map->sa_index & 0x7F) | (map->sa_in_use << 7);
+
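+	/* Each SC owns four consecutive AN slots in the SA map memory */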
+	reg = MCSX_CPM_RX_SLAVE_SA_MAP_MEMX((4 * map->sc_id) + map->an);
+	mcs_reg_write(mcs, reg, val);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
index 1ce3442..e192a68 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -89,6 +89,163 @@
 #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a)          (0x46f8ull + (a) * 0x8ull)
 #define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a)	(0x788ull + (a) * 0x8ull)
 #define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a)		(0x4738ull + (a) * 0x8ull)
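+
+/* The offset macros below expand to GNU statement expressions and rely
+ * on a local 'mcs' pointer being in scope; CNF10K-B (mcs_blks > 1) has
+ * a different register layout than CN10K-B, hence the two base values.
+ */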
+#define MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x3fc0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x558ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_DAX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4000ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x598ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_DA_RANGE_MINX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4040ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5d8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_DA_RANGE_MAXX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4048ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5e0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_COMBO_MINX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4080ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x648ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_COMBO_MAXX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4088ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x650ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_COMBO_ETX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4090ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x658ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_MAC ({	\
+	u64 offset;					\
+							\
+	offset = 0x40e0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x6d8ull;			\
+	offset; })
+
+#define MCSX_PEX_RX_SLAVE_RULE_ENABLE ({	\
+	u64 offset;					\
+							\
+	offset = 0x40e8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x6e0ull;			\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_ETYPE_CFGX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4b60ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x7d8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_DAX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4ba0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x818ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_DA_RANGE_MINX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4be0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x858ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_DA_RANGE_MAXX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4be8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x860ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_COMBO_MINX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4c20ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x8c8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_COMBO_MAXX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4c28ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x8d0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_COMBO_ETX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x4c30ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x8d8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_MAC ({	\
+	u64 offset;					\
+							\
+	offset = 0x4c80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x958ull;			\
+	offset; })
+
+#define MCSX_PEX_TX_SLAVE_RULE_ENABLE ({	\
+	u64 offset;					\
+							\
+	offset = 0x4c88ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x960ull;			\
+	offset; })
 
 #define MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION ({		\
 	u64 offset;					\
@@ -111,4 +268,232 @@
 #define MCSX_BBE_RX_SLAVE_CAL_LEN			0x188ull
 #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a)		(0x290ull + (a) * 0x40ull)
 
+/* CPM registers */
+#define MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(a, b) ({	\
+	u64 offset;					\
+							\
+	offset = 0x30740ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x3bf8ull;			\
+	offset += (a) * 0x8ull + (b) * 0x20ull;		\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(a, b) ({	\
+	u64 offset;					\
+							\
+	offset = 0x34740ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x43f8ull;			\
+	offset += (a) * 0x8ull + (b) * 0x20ull;		\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_FLOWID_TCAM_ENA_0 ({		\
+	u64 offset;					\
+							\
+	offset = 0x30700ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x3bd8ull;			\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SC_CAMX(a, b)	({		\
+	u64 offset;					\
+							\
+	offset = 0x38780ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x4c08ull;			\
+	offset += (a) * 0x8ull + (b) * 0x10ull;	\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SC_CAM_ENA(a)	({		\
+	u64 offset;					\
+							\
+	offset = 0x38740ull + (a) * 0x8ull;		\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x4bf8ull;			\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SECY_MAP_MEMX(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x23ee0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xbd0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SECY_PLCY_MEM_0X(a) ({	\
+	u64 offset;					\
+							\
+	offset = (0x246e0ull + (a) * 0x10ull);		\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = (0xdd0ull + (a) * 0x8ull);	\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SA_KEY_LOCKOUTX(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x23e90ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xbb0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SA_MAP_MEMX(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x256e0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xfd0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SA_PLCY_MEMX(a, b) ({		\
+	u64 offset;					\
+							\
+	offset = 0x27700ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x17d8ull;			\
+	offset += (a) * 0x8ull + (b) * 0x40ull;	\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_SA_PN_TABLE_MEMX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x2f700ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x37d8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_XPN_THRESHOLD	({		\
+	u64 offset;					\
+							\
+	offset = 0x23e40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xb90ull;			\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_PN_THRESHOLD	({		\
+	u64 offset;					\
+							\
+	offset = 0x23e48ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xb98ull;			\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_PN_THRESH_REACHEDX(a)	({	\
+	u64 offset;					\
+							\
+	offset = 0x23e50ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xba0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_FLOWID_TCAM_ENA_1		0x30708ull
+#define MCSX_CPM_RX_SLAVE_SECY_PLCY_MEM_1X(a)		(0x246e8ull + (a) * 0x10ull)
+
+/* TX registers */
+#define MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(a, b) ({	\
+	u64 offset;					\
+							\
+	offset = 0x51d50ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa7c0ull;			\
+	offset += (a) * 0x8ull + (b) * 0x20ull;		\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(a, b) ({	\
+	u64 offset;					\
+							\
+	offset = 0x55d50ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xafc0ull;			\
+	offset += (a) * 0x8ull + (b) * 0x20ull;		\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_FLOWID_TCAM_ENA_0 ({		\
+	u64 offset;					\
+							\
+	offset = 0x51d10ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa7a0ull;			\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SECY_MAP_MEM_0X(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x3e508ull + (a) * 0x8ull;		\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5550ull + (a) * 0x10ull;	\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SECY_PLCY_MEMX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x3ed08ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5950ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SA_KEY_LOCKOUTX(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x3e4c0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5538ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(a) ({		\
+	u64 offset;					\
+							\
+	offset = 0x3fd10ull + (a) * 0x10ull;		\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x6150ull + (a) * 0x8ull;	\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SA_PLCY_MEMX(a, b) ({		\
+	u64 offset;					\
+							\
+	offset = 0x40d10ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x63a0ull;			\
+	offset += (a) * 0x8ull + (b) * 0x80ull;		\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SA_PN_TABLE_MEMX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x50d10ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa3a0ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_XPN_THRESHOLD ({		\
+	u64 offset;					\
+							\
+	offset = 0x3e4b0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5528ull;			\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_PN_THRESHOLD ({		\
+	u64 offset;					\
+							\
+	offset = 0x3e4b8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x5530ull;			\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_SA_MAP_MEM_1X(a)		(0x3fd18ull + (a) * 0x10ull)
+#define MCSX_CPM_TX_SLAVE_SECY_MAP_MEM_1X(a)		(0x5558ull + (a) * 0x10ull)
+#define MCSX_CPM_TX_SLAVE_FLOWID_TCAM_ENA_1		0x51d18ull
+#define MCSX_CPM_TX_SLAVE_TX_SA_ACTIVEX(a)		(0x5b50ull + (a) * 0x8ull)
+#define MCSX_CPM_TX_SLAVE_SA_INDEX0_VLDX(a)		(0x5d50ull + (a) * 0x8ull)
+#define MCSX_CPM_TX_SLAVE_SA_INDEX1_VLDX(a)		(0x5f50ull + (a) * 0x8ull)
+#define MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0		0x5500ull
+
 #endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index 9eaa8ee..3c307e7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -133,6 +133,380 @@ int rvu_mbox_handler_mcs_custom_tag_cfg_get(struct rvu *rvu, struct mcs_custom_t
 	return 0;
 }
 
+int rvu_mbox_handler_mcs_flowid_ena_entry(struct rvu *rvu,
+					  struct mcs_flowid_ena_dis_entry *req,
+					  struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	mcs_ena_dis_flowid_entry(mcs, req->flow_id, req->dir, req->ena);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_pn_table_write(struct rvu *rvu,
+					struct mcs_pn_table_write_req *req,
+					struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	mcs_pn_table_write(mcs, req->pn_id, req->next_pn, req->dir);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_set_pn_threshold(struct rvu *rvu,
+					  struct mcs_set_pn_threshold *req,
+					  struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mcs_pn_threshold_set(mcs, req);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_rx_sc_sa_map_write(struct rvu *rvu,
+					    struct mcs_rx_sc_sa_map *req,
+					    struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	mcs->mcs_ops->mcs_rx_sa_mem_map_write(mcs, req);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_tx_sc_sa_map_write(struct rvu *rvu,
+					    struct mcs_tx_sc_sa_map *req,
+					    struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	mcs->mcs_ops->mcs_tx_sa_mem_map_write(mcs, req);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_sa_plcy_write(struct rvu *rvu,
+				       struct mcs_sa_plcy_write_req *req,
+				       struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+	int i;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	for (i = 0; i < req->sa_cnt; i++)
+		mcs_sa_plcy_write(mcs, &req->plcy[i][0],
+				  req->sa_index[i], req->dir);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_rx_sc_cam_write(struct rvu *rvu,
+					 struct mcs_rx_sc_cam_write_req *req,
+					 struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+	mcs_rx_sc_cam_write(mcs, req->sci, req->secy_id, req->sc_id);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_secy_plcy_write(struct rvu *rvu,
+					 struct mcs_secy_plcy_write_req *req,
+					 struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mcs_secy_plcy_write(mcs, req->plcy,
+			    req->secy_id, req->dir);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_flowid_entry_write(struct rvu *rvu,
+					    struct mcs_flowid_entry_write_req *req,
+					    struct msg_rsp *rsp)
+{
+	struct secy_mem_map map;
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	/* TODO validate the flowid */
+	mcs_flowid_entry_write(mcs, req->data, req->mask,
+			       req->flow_id, req->dir);
+	map.secy = req->secy_id;
+	map.sc = req->sc_id;
+	map.ctrl_pkt = req->ctrl_pkt;
+	map.flow_id = req->flow_id;
+	map.sci = req->sci;
+	mcs->mcs_ops->mcs_flowid_secy_map(mcs, &map, req->dir);
+	if (req->ena)
+		mcs_ena_dis_flowid_entry(mcs, req->flow_id,
+					 req->dir, true);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_free_resources(struct rvu *rvu,
+					struct mcs_free_rsrc_req *req,
+					struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs_rsrc_map *map;
+	struct mcs *mcs;
+	int rc;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (req->dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	mutex_lock(&rvu->rsrc_lock);
+	/* Free all the cam resources mapped to PF/VF */
+	if (req->all) {
+		rc = mcs_free_all_rsrc(mcs, req->dir, pcifunc);
+		goto exit;
+	}
+
+	switch (req->rsrc_type) {
+	case MCS_RSRC_TYPE_FLOWID:
+		rc = mcs_free_rsrc(&map->flow_ids, map->flowid2pf_map, req->rsrc_id, pcifunc);
+		mcs_ena_dis_flowid_entry(mcs, req->rsrc_id, req->dir, false);
+		break;
+	case MCS_RSRC_TYPE_SECY:
+		rc = mcs_free_rsrc(&map->secy, map->secy2pf_map, req->rsrc_id, pcifunc);
+		mcs_clear_secy_plcy(mcs, req->rsrc_id, req->dir);
+		break;
+	case MCS_RSRC_TYPE_SC:
+		rc = mcs_free_rsrc(&map->sc, map->sc2pf_map, req->rsrc_id, pcifunc);
+		/* Disable SC CAM only on RX side */
+		if (req->dir == MCS_RX)
+			mcs_ena_dis_sc_cam_entry(mcs, req->rsrc_id, false);
+		break;
+	case MCS_RSRC_TYPE_SA:
+		rc = mcs_free_rsrc(&map->sa, map->sa2pf_map, req->rsrc_id, pcifunc);
+		break;
+	}
+exit:
+	mutex_unlock(&rvu->rsrc_lock);
+	return rc;
+}
+
+int rvu_mbox_handler_mcs_alloc_resources(struct rvu *rvu,
+					 struct mcs_alloc_rsrc_req *req,
+					 struct mcs_alloc_rsrc_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs_rsrc_map *map;
+	struct mcs *mcs;
+	int rsrc_id = 0, i;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (req->dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	mutex_lock(&rvu->rsrc_lock);
+
+	if (req->all) {
+		rsrc_id = mcs_alloc_all_rsrc(mcs, &rsp->flow_ids[0],
+					     &rsp->secy_ids[0],
+					     &rsp->sc_ids[0],
+					     &rsp->sa_ids[0],
+					     &rsp->sa_ids[1],
+					     pcifunc, req->dir);
+		goto exit;
+	}
+
+	switch (req->rsrc_type) {
+	case MCS_RSRC_TYPE_FLOWID:
+		for (i = 0; i < req->rsrc_cnt; i++) {
+			rsrc_id = mcs_alloc_rsrc(&map->flow_ids, map->flowid2pf_map, pcifunc);
+			if (rsrc_id < 0)
+				goto exit;
+			rsp->flow_ids[i] = rsrc_id;
+			rsp->rsrc_cnt++;
+		}
+		break;
+	case MCS_RSRC_TYPE_SECY:
+		for (i = 0; i < req->rsrc_cnt; i++) {
+			rsrc_id = mcs_alloc_rsrc(&map->secy, map->secy2pf_map, pcifunc);
+			if (rsrc_id < 0)
+				goto exit;
+			rsp->secy_ids[i] = rsrc_id;
+			rsp->rsrc_cnt++;
+		}
+		break;
+	case MCS_RSRC_TYPE_SC:
+		for (i = 0; i < req->rsrc_cnt; i++) {
+			rsrc_id = mcs_alloc_rsrc(&map->sc, map->sc2pf_map, pcifunc);
+			if (rsrc_id < 0)
+				goto exit;
+			rsp->sc_ids[i] = rsrc_id;
+			rsp->rsrc_cnt++;
+		}
+		break;
+	case MCS_RSRC_TYPE_SA:
+		for (i = 0; i < req->rsrc_cnt; i++) {
+			rsrc_id = mcs_alloc_rsrc(&map->sa, map->sa2pf_map, pcifunc);
+			if (rsrc_id < 0)
+				goto exit;
+			rsp->sa_ids[i] = rsrc_id;
+			rsp->rsrc_cnt++;
+		}
+		break;
+	}
+
+	rsp->rsrc_type = req->rsrc_type;
+	rsp->dir = req->dir;
+	rsp->mcs_id = req->mcs_id;
+	rsp->all = req->all;
+
+exit:
+	if (rsrc_id < 0)
+		dev_err(rvu->dev, "Failed to allocate the mcs resources for PCIFUNC:%d\n", pcifunc);
+	mutex_unlock(&rvu->rsrc_lock);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_alloc_ctrl_pkt_rule(struct rvu *rvu,
+					     struct mcs_alloc_ctrl_pkt_rule_req *req,
+					     struct mcs_alloc_ctrl_pkt_rule_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs_rsrc_map *map;
+	struct mcs *mcs;
+	int rsrc_id;
+	u16 offset = 0;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	map = (req->dir == MCS_RX) ? &mcs->rx : &mcs->tx;
+
+	mutex_lock(&rvu->rsrc_lock);
+
+	switch (req->rule_type) {
+	case MCS_CTRL_PKT_RULE_TYPE_ETH:
+		offset = MCS_CTRLPKT_ETYPE_RULE_OFFSET;
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_DA:
+		offset = MCS_CTRLPKT_DA_RULE_OFFSET;
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_RANGE:
+		offset = MCS_CTRLPKT_DA_RANGE_RULE_OFFSET;
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_COMBO:
+		offset = MCS_CTRLPKT_COMBO_RULE_OFFSET;
+		break;
+	case MCS_CTRL_PKT_RULE_TYPE_MAC:
+		offset = MCS_CTRLPKT_MAC_EN_RULE_OFFSET;
+		break;
+	}
+
+	rsrc_id = mcs_alloc_ctrlpktrule(&map->ctrlpktrule, map->ctrlpktrule2pf_map, offset,
+					pcifunc);
+	if (rsrc_id < 0)
+		goto exit;
+
+	rsp->rule_idx = rsrc_id;
+	rsp->rule_type = req->rule_type;
+	rsp->dir = req->dir;
+	rsp->mcs_id = req->mcs_id;
+
+	mutex_unlock(&rvu->rsrc_lock);
+	return 0;
+exit:
+	if (rsrc_id < 0)
+		dev_err(rvu->dev, "Failed to allocate the mcs ctrl pkt rule for PCIFUNC:%d\n",
+			pcifunc);
+	mutex_unlock(&rvu->rsrc_lock);
+	return rsrc_id;
+}
+
+int rvu_mbox_handler_mcs_free_ctrl_pkt_rule(struct rvu *rvu,
+					    struct mcs_free_ctrl_pkt_rule_req *req,
+					    struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+	int rc;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mutex_lock(&rvu->rsrc_lock);
+
+	rc = mcs_free_ctrlpktrule(mcs, req);
+
+	mutex_unlock(&rvu->rsrc_lock);
+
+	return rc;
+}
+
+int rvu_mbox_handler_mcs_ctrl_pkt_rule_write(struct rvu *rvu,
+					     struct mcs_ctrl_pkt_rule_write_req *req,
+					     struct msg_rsp *rsp)
+{
+	struct mcs *mcs;
+	int rc;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	rc = mcs_ctrlpktrule_write(mcs, req);
+
+	return rc;
+}
+
 static void rvu_mcs_set_lmac_bmap(struct rvu *rvu)
 {
 	struct mcs *mcs = mcs_get_pdata(0);
-- 
2.7.4



* [net-next PATCH v2 4/8] octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
                   ` (2 preceding siblings ...)
  2022-09-28  3:58 ` [net-next PATCH v2 3/8] octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 5/8] octeontx2-af: cn10k: mcs: Support for stats collection Subbaraya Sundeep
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

Out of all the TCAM entries, reserve the last TX and RX TCAM flow
entries (low priority) so that normal traffic can be sent out and
received. Traffic which needs macsec processing hits the
higher-priority TCAM flows. Also install an FLR handler to free
the resources allocated to a PF/VF.
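
As a point of reference, the ternary-match behavior this bypass entry
relies on can be sketched as below; treating a set mask bit as "don't
care" is an assumption consistent with the all-ones masks programmed
into the MASK registers, and the helper is illustrative, not a driver
API:

#include <stdbool.h>
#include <stdint.h>

/* Models one TCAM entry: only bit positions where the mask is 0 must
 * match the programmed data, so an all-ones mask matches every key.
 */
static bool tcam_entry_hit(uint64_t key, uint64_t data, uint64_t mask)
{
	return ((key ^ data) & ~mask) == 0;
}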

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 45 ++++++++++++++++++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 23 +++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |  1 +
 3 files changed, 69 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
index 66d5038..7c82a25 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -181,6 +181,51 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id,
 	}
 }
 
+int mcs_install_flowid_bypass_entry(struct mcs *mcs)
+{
+	int flow_id, secy_id, reg_id;
+	struct secy_mem_map map;
+	u64 reg, plcy = 0;
+
+	/* Flow entry */
+	flow_id = mcs->hw->tcam_entries - MCS_RSRC_RSVD_CNT;
+	for (reg_id = 0; reg_id < 4; reg_id++) {
+		reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+		mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0));
+	}
+	for (reg_id = 0; reg_id < 4; reg_id++) {
+		reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
+		mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0));
+	}
+	/* secy */
+	secy_id = mcs->hw->secy_entries - MCS_RSRC_RSVD_CNT;
+
+	/* Set validate frames to NULL and enable control port */
+	plcy = 0x7ull;
+	if (mcs->hw->mcs_blks > 1)
+		plcy = BIT_ULL(0) | 0x3ull << 4;
+	mcs_secy_plcy_write(mcs, plcy, secy_id, MCS_RX);
+
+	/* Enable control port and set mtu to max */
+	plcy = BIT_ULL(0) | GENMASK_ULL(43, 28);
+	if (mcs->hw->mcs_blks > 1)
+		plcy = BIT_ULL(0) | GENMASK_ULL(63, 48);
+	mcs_secy_plcy_write(mcs, plcy, secy_id, MCS_TX);
+
+	/* Map flowid to secy */
+	map.secy = secy_id;
+	map.ctrl_pkt = 0;
+	map.flow_id = flow_id;
+	mcs->mcs_ops->mcs_flowid_secy_map(mcs, &map, MCS_RX);
+	map.sc = secy_id;
+	mcs->mcs_ops->mcs_flowid_secy_map(mcs, &map, MCS_TX);
+
+	/* Enable Flowid entry */
+	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_RX, true);
+	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_TX, true);
+	return 0;
+}
+
 void mcs_clear_secy_plcy(struct mcs *mcs, int secy_id, int dir)
 {
 	struct mcs_rsrc_map *map;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index 3c307e7..8a7d455 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -133,6 +133,27 @@ int rvu_mbox_handler_mcs_custom_tag_cfg_get(struct rvu *rvu, struct mcs_custom_t
 	return 0;
 }
 
+int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc)
+{
+	struct mcs *mcs;
+	int mcs_id;
+
+	/* CNF10K-B mcs0-6 are mapped to RPM2-8 */
+	if (rvu->mcs_blk_cnt > 1) {
+		for (mcs_id = 0; mcs_id < rvu->mcs_blk_cnt; mcs_id++) {
+			mcs = mcs_get_pdata(mcs_id);
+			mcs_free_all_rsrc(mcs, MCS_RX, pcifunc);
+			mcs_free_all_rsrc(mcs, MCS_TX, pcifunc);
+		}
+	} else {
+		/* CN10K-B has only one mcs block */
+		mcs = mcs_get_pdata(0);
+		mcs_free_all_rsrc(mcs, MCS_RX, pcifunc);
+		mcs_free_all_rsrc(mcs, MCS_TX, pcifunc);
+	}
+	return 0;
+}
+
 int rvu_mbox_handler_mcs_flowid_ena_entry(struct rvu *rvu,
 					  struct mcs_flowid_ena_dis_entry *req,
 					  struct msg_rsp *rsp)
@@ -543,8 +564,10 @@ int rvu_mcs_init(struct rvu *rvu)
 		rvu_mcs_set_lmac_bmap(rvu);
 	}
 
+	/* Install default tcam bypass entry and set port to operational mode */
 	for (mcs_id = 0; mcs_id < rvu->mcs_blk_cnt; mcs_id++) {
 		mcs = mcs_get_pdata(mcs_id);
+		mcs_install_flowid_bypass_entry(mcs);
 		for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++)
 			mcs_set_lmac_mode(mcs, lmac, 0);
 	}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 9a150da..4aefe47 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -875,5 +875,6 @@ int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
 
 /* CN10K MCS */
 int rvu_mcs_init(struct rvu *rvu);
+int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
 
 #endif /* RVU_H */
-- 
2.7.4



* [net-next PATCH v2 5/8] octeontx2-af: cn10k: mcs: Support for stats collection
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
                   ` (3 preceding siblings ...)
  2022-09-28  3:58 ` [net-next PATCH v2 4/8] octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts Subbaraya Sundeep
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Ankur Dwivedi, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

Add mailbox messages to return the resource stats to the
caller: stats of SecYs, SCs and SAs as per the macsec standard,
TCAM flow id hit/miss counters, and a mailbox to clear the
stats.
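
For context, a consumer fills the common mcs_stats_req header with the
resource id, MCS block and direction, then reads back the typed
response. A minimal sketch follows; the alloc helper name is assumed
to be generated from the mbox M() macro table and is not part of this
patch:

/* Sketch: fetch RX SecY stats from AF for one SecY index */
static int mcs_fetch_rx_secy_stats(struct otx2_nic *pfvf, u8 secy_id,
				   struct mcs_secy_stats *out)
{
	struct mcs_secy_stats *rsp;
	struct mcs_stats_req *req;
	int err = 0;

	mutex_lock(&pfvf->mbox.lock);
	req = otx2_mbox_alloc_msg_mcs_get_secy_stats(&pfvf->mbox);
	if (!req) {
		err = -ENOMEM;
		goto out;
	}
	req->id = secy_id;
	req->mcs_id = 0;	/* CN10K-B has a single MCS block */
	req->dir = MCS_RX;

	err = otx2_sync_mbox_msg(&pfvf->mbox);
	if (err)
		goto out;

	rsp = (struct mcs_secy_stats *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
							 0, &req->hdr);
	if (IS_ERR(rsp)) {
		err = PTR_ERR(rsp);
		goto out;
	}
	*out = *rsp;	/* e.g. out->pkt_nosa_cnt is InPktsNoSA */
out:
	mutex_unlock(&pfvf->mbox.lock);
	return err;
}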

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   | 111 ++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 310 +++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  13 +
 .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c    |  26 ++
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    | 435 +++++++++++++++++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 153 ++++++++
 6 files changed, 1048 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 3213b1512..e01a705 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -317,6 +317,15 @@ M(MCS_PN_TABLE_WRITE,	0xa009, mcs_pn_table_write, mcs_pn_table_write_req,	\
 M(MCS_SET_ACTIVE_LMAC,	0xa00a,	mcs_set_active_lmac, mcs_set_active_lmac,	\
 				msg_rsp)					\
 M(MCS_GET_HW_INFO,	0xa00b,	mcs_get_hw_info, msg_req, mcs_hw_info)		\
+M(MCS_GET_FLOWID_STATS, 0xa00c, mcs_get_flowid_stats, mcs_stats_req,		\
+				mcs_flowid_stats)				\
+M(MCS_GET_SECY_STATS,	0xa00d, mcs_get_secy_stats, mcs_stats_req,		\
+				mcs_secy_stats)					\
+M(MCS_GET_SC_STATS,	0xa00e, mcs_get_sc_stats, mcs_stats_req, mcs_sc_stats)	\
+M(MCS_GET_SA_STATS,	0xa00f, mcs_get_sa_stats, mcs_stats_req, mcs_sa_stats)	\
+M(MCS_GET_PORT_STATS,	0xa010, mcs_get_port_stats, mcs_stats_req,		\
+				mcs_port_stats)					\
+M(MCS_CLEAR_STATS,	0xa011,	mcs_clear_stats, mcs_clear_stats, msg_rsp)	\
 M(MCS_SET_LMAC_MODE,	0xa013, mcs_set_lmac_mode, mcs_set_lmac_mode, msg_rsp)	\
 M(MCS_SET_PN_THRESHOLD, 0xa014, mcs_set_pn_threshold, mcs_set_pn_threshold,	\
 				msg_rsp)					\
@@ -1973,4 +1982,106 @@ struct mcs_ctrl_pkt_rule_write_req {
 	u64 rsvd;
 };
 
+struct mcs_stats_req {
+	struct mbox_msghdr hdr;
+	u8 id;
+	u8 mcs_id;
+	u8 dir;
+	u64 rsvd;
+};
+
+struct mcs_flowid_stats {
+	struct mbox_msghdr hdr;
+	u64 tcam_hit_cnt;
+	u64 rsvd;
+};
+
+struct mcs_secy_stats {
+	struct mbox_msghdr hdr;
+	u64 ctl_pkt_bcast_cnt;
+	u64 ctl_pkt_mcast_cnt;
+	u64 ctl_pkt_ucast_cnt;
+	u64 ctl_octet_cnt;
+	u64 unctl_pkt_bcast_cnt;
+	u64 unctl_pkt_mcast_cnt;
+	u64 unctl_pkt_ucast_cnt;
+	u64 unctl_octet_cnt;
+	/* Valid only for RX */
+	u64 octet_decrypted_cnt;
+	u64 octet_validated_cnt;
+	u64 pkt_port_disabled_cnt;
+	u64 pkt_badtag_cnt;
+	u64 pkt_nosa_cnt;
+	u64 pkt_nosaerror_cnt;
+	u64 pkt_tagged_ctl_cnt;
+	u64 pkt_untaged_cnt;
+	u64 pkt_ctl_cnt;	/* CN10K-B */
+	u64 pkt_notag_cnt;	/* CNF10K-B */
+	/* Valid only for TX */
+	u64 octet_encrypted_cnt;
+	u64 octet_protected_cnt;
+	u64 pkt_noactivesa_cnt;
+	u64 pkt_toolong_cnt;
+	u64 pkt_untagged_cnt;
+	u64 rsvd[4];
+};
+
+struct mcs_port_stats {
+	struct mbox_msghdr hdr;
+	u64 tcam_miss_cnt;
+	u64 parser_err_cnt;
+	u64 preempt_err_cnt;  /* CNF10K-B */
+	u64 sectag_insert_err_cnt;
+	u64 rsvd[4];
+};
+
+/* Only for CN10K-B */
+struct mcs_sa_stats {
+	struct mbox_msghdr hdr;
+	/* RX */
+	u64 pkt_invalid_cnt;
+	u64 pkt_nosaerror_cnt;
+	u64 pkt_notvalid_cnt;
+	u64 pkt_ok_cnt;
+	u64 pkt_nosa_cnt;
+	/* TX */
+	u64 pkt_encrypt_cnt;
+	u64 pkt_protected_cnt;
+	u64 rsvd[4];
+};
+
+struct mcs_sc_stats {
+	struct mbox_msghdr hdr;
+	/* RX */
+	u64 hit_cnt;
+	u64 pkt_invalid_cnt;
+	u64 pkt_late_cnt;
+	u64 pkt_notvalid_cnt;
+	u64 pkt_unchecked_cnt;
+	u64 pkt_delay_cnt;	/* CNF10K-B */
+	u64 pkt_ok_cnt;		/* CNF10K-B */
+	u64 octet_decrypt_cnt;	/* CN10K-B */
+	u64 octet_validate_cnt;	/* CN10K-B */
+	/* TX */
+	u64 pkt_encrypt_cnt;
+	u64 pkt_protected_cnt;
+	u64 octet_encrypt_cnt;		/* CN10K-B */
+	u64 octet_protected_cnt;	/* CN10K-B */
+	u64 rsvd[4];
+};
+
+struct mcs_clear_stats {
+	struct mbox_msghdr hdr;
+#define MCS_FLOWID_STATS	0
+#define MCS_SECY_STATS		1
+#define MCS_SC_STATS		2
+#define MCS_SA_STATS		3
+#define MCS_PORT_STATS		4
+	u8 type;	/* FLOWID, SECY, SC, SA, PORT */
+	u8 id;		/* For type = PORT: if id is 0xFF (invalid), port number is derived from pcifunc */
+	u8 mcs_id;
+	u8 dir;
+	u8 all;		/* Clear stats of all the resources mapped to the PF */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
index 7c82a25..2cc713d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -24,6 +24,311 @@ static const struct pci_device_id mcs_id_table[] = {
 
 static LIST_HEAD(mcs_list);
 
+void mcs_get_tx_secy_stats(struct mcs *mcs, struct mcs_secy_stats *stats, int id)
+{
+	u64 reg;
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLBCPKTSX(id);
+	stats->ctl_pkt_bcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLMCPKTSX(id);
+	stats->ctl_pkt_mcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLOCTETSX(id);
+	stats->ctl_octet_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLUCPKTSX(id);
+	stats->ctl_pkt_ucast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLBCPKTSX(id);
+	stats->unctl_pkt_bcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLMCPKTSX(id);
+	stats->unctl_pkt_mcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLOCTETSX(id);
+	stats->unctl_octet_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLUCPKTSX(id);
+	stats->unctl_pkt_ucast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSECYENCRYPTEDX(id);
+	stats->octet_encrypted_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSECYPROTECTEDX(id);
+	stats->octet_protected_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYNOACTIVESAX(id);
+	stats->pkt_noactivesa_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYTOOLONGX(id);
+	stats->pkt_toolong_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYUNTAGGEDX(id);
+	stats->pkt_untagged_cnt = mcs_reg_read(mcs, reg);
+}
+
+void mcs_get_rx_secy_stats(struct mcs *mcs, struct mcs_secy_stats *stats, int id)
+{
+	u64 reg;
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINCTLBCPKTSX(id);
+	stats->ctl_pkt_bcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINCTLMCPKTSX(id);
+	stats->ctl_pkt_mcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINCTLOCTETSX(id);
+	stats->ctl_octet_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINCTLUCPKTSX(id);
+	stats->ctl_pkt_ucast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLBCPKTSX(id);
+	stats->unctl_pkt_bcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLMCPKTSX(id);
+	stats->unctl_pkt_mcast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLOCTETSX(id);
+	stats->unctl_octet_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLUCPKTSX(id);
+	stats->unctl_pkt_ucast_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INOCTETSSECYDECRYPTEDX(id);
+	stats->octet_decrypted_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INOCTETSSECYVALIDATEX(id);
+	stats->octet_validated_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSCTRLPORTDISABLEDX(id);
+	stats->pkt_port_disabled_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYBADTAGX(id);
+	stats->pkt_badtag_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOSAX(id);
+	stats->pkt_nosa_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOSAERRORX(id);
+	stats->pkt_nosaerror_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYTAGGEDCTLX(id);
+	stats->pkt_tagged_ctl_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDORNOTAGX(id);
+	stats->pkt_untaged_cnt = mcs_reg_read(mcs, reg);
+
+	reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYCTLX(id);
+	stats->pkt_ctl_cnt = mcs_reg_read(mcs, reg);
+
+	if (mcs->hw->mcs_blks > 1) {
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOTAGX(id);
+		stats->pkt_notag_cnt = mcs_reg_read(mcs, reg);
+	}
+}
+
+void mcs_get_flowid_stats(struct mcs *mcs, struct mcs_flowid_stats *stats,
+			  int id, int dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX)
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSFLOWIDTCAMHITX(id);
+	else
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSFLOWIDTCAMHITX(id);
+
+	stats->tcam_hit_cnt = mcs_reg_read(mcs, reg);
+}
+
+void mcs_get_port_stats(struct mcs *mcs, struct mcs_port_stats *stats,
+			int id, int dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX) {
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSFLOWIDTCAMMISSX(id);
+		stats->tcam_miss_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSPARSEERRX(id);
+		stats->parser_err_cnt = mcs_reg_read(mcs, reg);
+		if (mcs->hw->mcs_blks > 1) {
+			reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSEARLYPREEMPTERRX(id);
+			stats->preempt_err_cnt = mcs_reg_read(mcs, reg);
+		}
+	} else {
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSFLOWIDTCAMMISSX(id);
+		stats->tcam_miss_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSPARSEERRX(id);
+		stats->parser_err_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECTAGINSERTIONERRX(id);
+		stats->sectag_insert_err_cnt = mcs_reg_read(mcs, reg);
+	}
+}
+
+void mcs_get_sa_stats(struct mcs *mcs, struct mcs_sa_stats *stats, int id, int dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX) {
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSAINVALIDX(id);
+		stats->pkt_invalid_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTUSINGSAERRORX(id);
+		stats->pkt_nosaerror_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTVALIDX(id);
+		stats->pkt_notvalid_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSAOKX(id);
+		stats->pkt_ok_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSAUNUSEDSAX(id);
+		stats->pkt_nosa_cnt = mcs_reg_read(mcs, reg);
+	} else {
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAENCRYPTEDX(id);
+		stats->pkt_encrypt_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAPROTECTEDX(id);
+		stats->pkt_protected_cnt = mcs_reg_read(mcs, reg);
+	}
+}
+
+void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats,
+		      int id, int dir)
+{
+	u64 reg;
+
+	if (dir == MCS_RX) {
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCCAMHITX(id);
+		stats->hit_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCINVALIDX(id);
+		stats->pkt_invalid_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCLATEORDELAYEDX(id);
+		stats->pkt_late_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCNOTVALIDX(id);
+		stats->pkt_notvalid_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDOROKX(id);
+		stats->pkt_unchecked_cnt = mcs_reg_read(mcs, reg);
+
+		if (mcs->hw->mcs_blks > 1) {
+			reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCDELAYEDX(id);
+			stats->pkt_delay_cnt = mcs_reg_read(mcs, reg);
+
+			reg = MCSX_CSE_RX_MEM_SLAVE_INPKTSSCOKX(id);
+			stats->pkt_ok_cnt = mcs_reg_read(mcs, reg);
+		}
+		if (mcs->hw->mcs_blks == 1) {
+			reg = MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCDECRYPTEDX(id);
+			stats->octet_decrypt_cnt = mcs_reg_read(mcs, reg);
+
+			reg = MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCVALIDATEX(id);
+			stats->octet_validate_cnt = mcs_reg_read(mcs, reg);
+		}
+	} else {
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSCENCRYPTEDX(id);
+		stats->pkt_encrypt_cnt = mcs_reg_read(mcs, reg);
+
+		reg = MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSCPROTECTEDX(id);
+		stats->pkt_protected_cnt = mcs_reg_read(mcs, reg);
+
+		if (mcs->hw->mcs_blks == 1) {
+			reg = MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSCENCRYPTEDX(id);
+			stats->octet_encrypt_cnt = mcs_reg_read(mcs, reg);
+
+			reg = MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSCPROTECTEDX(id);
+			stats->octet_protected_cnt = mcs_reg_read(mcs, reg);
+		}
+	}
+}
+
+void mcs_clear_stats(struct mcs *mcs, u8 type, u8 id, int dir)
+{
+	struct mcs_flowid_stats flowid_st;
+	struct mcs_port_stats port_st;
+	struct mcs_secy_stats secy_st;
+	struct mcs_sc_stats sc_st;
+	struct mcs_sa_stats sa_st;
+	u64 reg;
+
+	if (dir == MCS_RX)
+		reg = MCSX_CSE_RX_SLAVE_CTRL;
+	else
+		reg = MCSX_CSE_TX_SLAVE_CTRL;
+
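+	/* Bit 0 of the CSE CTRL register appears to arm clear-on-read,
+	 * so the stat reads below reset each counter.
+	 */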
+	mcs_reg_write(mcs, reg, BIT_ULL(0));
+
+	switch (type) {
+	case MCS_FLOWID_STATS:
+		mcs_get_flowid_stats(mcs, &flowid_st, id, dir);
+		break;
+	case MCS_SECY_STATS:
+		if (dir == MCS_RX)
+			mcs_get_rx_secy_stats(mcs, &secy_st, id);
+		else
+			mcs_get_tx_secy_stats(mcs, &secy_st, id);
+		break;
+	case MCS_SC_STATS:
+		mcs_get_sc_stats(mcs, &sc_st, id, dir);
+		break;
+	case MCS_SA_STATS:
+		mcs_get_sa_stats(mcs, &sa_st, id, dir);
+		break;
+	case MCS_PORT_STATS:
+		mcs_get_port_stats(mcs, &port_st, id, dir);
+		break;
+	}
+
+	mcs_reg_write(mcs, reg, 0x0);
+}
+
+int mcs_clear_all_stats(struct mcs *mcs, u16 pcifunc, int dir)
+{
+	struct mcs_rsrc_map *map;
+	int id;
+
+	if (dir == MCS_RX)
+		map = &mcs->rx;
+	else
+		map = &mcs->tx;
+
+	/* Clear FLOWID stats */
+	for (id = 0; id < map->flow_ids.max; id++) {
+		if (map->flowid2pf_map[id] != pcifunc)
+			continue;
+		mcs_clear_stats(mcs, MCS_FLOWID_STATS, id, dir);
+	}
+
+	/* Clear SECY stats */
+	for (id = 0; id < map->secy.max; id++) {
+		if (map->secy2pf_map[id] != pcifunc)
+			continue;
+		mcs_clear_stats(mcs, MCS_SECY_STATS, id, dir);
+	}
+
+	/* Clear SC stats */
+	for (id = 0; id < map->sc.max; id++) {
+		if (map->sc2pf_map[id] != pcifunc)
+			continue;
+		mcs_clear_stats(mcs, MCS_SC_STATS, id, dir);
+	}
+
+	/* Clear SA stats */
+	for (id = 0; id < map->sa.max; id++) {
+		if (map->sa2pf_map[id] != pcifunc)
+			continue;
+		mcs_clear_stats(mcs, MCS_SA_STATS, id, dir);
+	}
+	return 0;
+}
+
 void mcs_pn_table_write(struct mcs *mcs, u8 pn_id, u64 next_pn, u8 dir)
 {
 	u64 reg;
@@ -816,6 +1121,10 @@ static void mcs_global_cfg(struct mcs *mcs)
 	/* Disable external bypass */
 	mcs_set_external_bypass(mcs, false);
 
+	/* Reset TX/RX stats memory */
+	mcs_reg_write(mcs, MCSX_CSE_RX_SLAVE_STATS_CLEAR, 0x1F);
+	mcs_reg_write(mcs, MCSX_CSE_TX_SLAVE_STATS_CLEAR, 0x1F);
+
 	/* Set MCS to perform standard IEEE802.1AE macsec processing */
 	if (mcs->hw->mcs_blks == 1) {
 		mcs_reg_write(mcs, MCSX_IP_MODE, BIT_ULL(3));
@@ -921,6 +1230,7 @@ static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	mcs->mcs_ops->mcs_parser_cfg(mcs);
 
 	list_add(&mcs->mcs_list, &mcs_list);
+	mutex_init(&mcs->stats_lock);
 
 	return 0;
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
index 615a3ad..28600ef 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -85,6 +85,8 @@ struct mcs {
 	u8			mcs_id;
 	struct mcs_ops		*mcs_ops;
 	struct list_head	mcs_list;
+	/* Lock for mcs stats */
+	struct mutex		stats_lock;
 };
 
 struct mcs_ops {
@@ -156,4 +158,15 @@ void cnf10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int
 void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
 
+/* Stats APIs */
+void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir);
+void mcs_get_sa_stats(struct mcs *mcs, struct mcs_sa_stats *stats, int id, int dir);
+void mcs_get_port_stats(struct mcs *mcs, struct mcs_port_stats *stats, int id, int dir);
+void mcs_get_flowid_stats(struct mcs *mcs, struct mcs_flowid_stats *stats, int id, int dir);
+void mcs_get_rx_secy_stats(struct mcs *mcs, struct mcs_secy_stats *stats, int id);
+void mcs_get_tx_secy_stats(struct mcs *mcs, struct mcs_secy_stats *stats, int id);
+void mcs_clear_stats(struct mcs *mcs, u8 type, u8 id, int dir);
+int mcs_clear_all_stats(struct mcs *mcs, u16 pcifunc, int dir);
+int mcs_set_force_clk_en(struct mcs *mcs, bool set);
+
 #endif /* MCS_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
index d6d03d0..22ca3b1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
@@ -118,3 +118,29 @@ void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *m
 	reg = MCSX_CPM_RX_SLAVE_SA_MAP_MEMX((4 * map->sc_id) + map->an);
 	mcs_reg_write(mcs, reg, val);
 }
+
+int mcs_set_force_clk_en(struct mcs *mcs, bool set)
+{
+	unsigned long timeout = jiffies + usecs_to_jiffies(2000);
+	u64 val;
+
+	val = mcs_reg_read(mcs, MCSX_MIL_GLOBAL);
+
+	if (set) {
+		val |= BIT_ULL(4);
+		mcs_reg_write(mcs, MCSX_MIL_GLOBAL, val);
+
+		/* Poll till mcsx_mil_ip_gbl_status.mcs_ip_stats_ready value is 1 */
+		while (!(mcs_reg_read(mcs, MCSX_MIL_IP_GBL_STATUS) & BIT_ULL(0))) {
+			if (time_after(jiffies, timeout)) {
+				dev_err(mcs->dev, "MCS set force clk enable failed\n");
+				break;
+			}
+		}
+	} else {
+		val &= ~BIT_ULL(4);
+		mcs_reg_write(mcs, MCSX_MIL_GLOBAL, val);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
index e192a68..12be9f9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -72,6 +72,14 @@
 		offset = 0x600c8ull;			\
 	offset; })
 
+#define MCSX_MIL_IP_GBL_STATUS ({			\
+	u64 offset;					\
+							\
+	offset = 0x800d0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x600d0ull;			\
+	offset; })
+
 /* PAB */
 #define MCSX_PAB_RX_SLAVE_PORT_CFGX(a) ({	\
 	u64 offset;				\
@@ -496,4 +504,431 @@
 #define MCSX_CPM_TX_SLAVE_SA_INDEX1_VLDX(a)		(0x5f50 + (a) * 0x8ull)
 #define MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0		0x5500ull
 
+/* CSE */
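+/* Many CSE stats counters sit at different offsets on CN10K-B
+ * (single MCS block) and CNF10K-B (mcs->hw->mcs_blks > 1); the
+ * statement expressions below select the right offset at runtime.
+ */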
+#define MCSX_CSE_RX_MEM_SLAVE_IFINCTLBCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x9e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xc218ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_IFINCTLMCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x9680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xc018ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_IFINCTLOCTETSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x6e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xbc18ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_IFINCTLUCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x8e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xbe18ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define	MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLBCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x8680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xca18ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define	MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLMCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x7e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xc818ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define	MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLOCTETSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x6680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xc418ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define	MCSX_CSE_RX_MEM_SLAVE_IFINUNCTLUCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x7680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xc618ull;			\
+	offset +=  (a) * 0x8ull;			\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSECYDECRYPTEDX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0x5e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xdc18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSECYVALIDATEX(a)({ \
+	u64 offset;					\
+							\
+	offset = 0x5680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xda18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSCTRLPORTDISABLEDX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0xd680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xce18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSFLOWIDTCAMHITX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0x16a80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xec78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSFLOWIDTCAMMISSX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0x16680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xec38ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSPARSEERRX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x16880ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xec18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCCAMHITX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0xfe80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xde18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCINVALIDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x10680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xe418ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCNOTVALIDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x10e80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xe218ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYBADTAGX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0xae80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xd418ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOSAX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0xc680ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xd618ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOSAERRORX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0xce80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xd818ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYTAGGEDCTLX(a) ({ \
+	u64 offset;					\
+							\
+	offset = 0xbe80ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xcc18ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_RX_SLAVE_CTRL	({			\
+	u64 offset;					\
+							\
+	offset = 0x52a0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x9c0ull;			\
+	offset; })
+
+#define MCSX_CSE_RX_SLAVE_STATS_CLEAR	({		\
+	u64 offset;					\
+							\
+	offset = 0x52b8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x9d8ull;			\
+	offset; })
+
+#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCDECRYPTEDX(a)	(0xe680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INOCTETSSCVALIDATEX(a)	(0xde80ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDORNOTAGX(a)	(0xa680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYNOTAGX(a)	(0xd218 + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYUNTAGGEDX(a)	(0xd018ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCUNCHECKEDOROKX(a)	(0xee80ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSECYCTLX(a)		(0xb680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCLATEORDELAYEDX(a) (0xf680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSAINVALIDX(a)	(0x12680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTUSINGSAERRORX(a) (0x15680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSANOTVALIDX(a)	(0x13680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSAOKX(a)		(0x11680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSAUNUSEDSAX(a)	(0x14680ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSEARLYPREEMPTERRX(a) (0xec58ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCOKX(a)		(0xea18ull + (a) * 0x8ull)
+#define MCSX_CSE_RX_MEM_SLAVE_INPKTSSCDELAYEDX(a)	(0xe618ull + (a) * 0x8ull)
+
+/* CSE TX */
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTCOMMONOCTETSX(a)	(0x18440ull + (a) * 0x8ull)
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLBCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1c440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xf478ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLMCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1bc40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xf278ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLOCTETSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x19440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xee78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTCTLUCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1b440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xf078ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLBCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1ac40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xfc78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLMCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1a440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xfa78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLOCTETSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x18c40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xf678ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_IFOUTUNCTLUCPKTSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x19c40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xf878ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSECYENCRYPTEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x17c40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10878ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSECYPROTECTEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x17440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10678ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSCTRLPORTDISABLEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1e440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xfe78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSFLOWIDTCAMHITX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x23240ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10ed8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSFLOWIDTCAMMISSX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x22c40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10e98ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSPARSEERRX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x22e40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10e78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSCENCRYPTEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x20440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10c78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSCPROTECTEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1fc40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10a78ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECTAGINSERTIONERRX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x23040ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x110d8ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYNOACTIVESAX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1dc40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10278ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYTOOLONGX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1d440ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10478ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSECYUNTAGGEDX(a) ({	\
+	u64 offset;					\
+							\
+	offset = 0x1cc40ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0x10078ull;			\
+	offset += (a) * 0x8ull;				\
+	offset; })
+
+#define MCSX_CSE_TX_SLAVE_CTRL	({	\
+	u64 offset;					\
+							\
+	offset = 0x54a0ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa00ull;			\
+	offset; })
+
+#define MCSX_CSE_TX_SLAVE_STATS_CLEAR ({		\
+	u64 offset;					\
+							\
+	offset = 0x54b8ull;				\
+	if (mcs->hw->mcs_blks > 1)			\
+		offset = 0xa18ull;			\
+	offset; })
+
+#define MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSCENCRYPTEDX(a)	(0x1f440ull + (a) * 0x8ull)
+#define MCSX_CSE_TX_MEM_SLAVE_OUTOCTETSSCPROTECTEDX(a)	(0x1ec40ull + (a) * 0x8ull)
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSEARLYPREEMPTERRX(a) (0x10eb8ull + (a) * 0x8ull)
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAENCRYPTEDX(a)	(0x21c40ull + (a) * 0x8ull)
+#define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAPROTECTEDX(a)	(0x20c40ull + (a) * 0x8ull)
+
 #endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index 8a7d455..939c9b6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -64,6 +64,159 @@ int rvu_mbox_handler_mcs_port_reset(struct rvu *rvu, struct mcs_port_reset_req *
 	return 0;
 }
 
+int rvu_mbox_handler_mcs_clear_stats(struct rvu *rvu,
+				     struct mcs_clear_stats *req,
+				     struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	mutex_lock(&mcs->stats_lock);
+	if (req->all)
+		mcs_clear_all_stats(mcs, pcifunc, req->dir);
+	else
+		mcs_clear_stats(mcs, req->type, req->id, req->dir);
+
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_flowid_stats(struct rvu *rvu,
+					  struct mcs_stats_req *req,
+					  struct mcs_flowid_stats *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	/* In CNF10K-B, before reading the statistics,
+	 * MCSX_MIL_GLOBAL.FORCE_CLK_EN_IP needs to be set
+	 * to get accurate statistics
+	 */
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, true);
+
+	mutex_lock(&mcs->stats_lock);
+	mcs_get_flowid_stats(mcs, rsp, req->id, req->dir);
+	mutex_unlock(&mcs->stats_lock);
+
+	/* Clear MCSX_MIL_GLOBAL.FORCE_CLK_EN_IP after reading
+	 * the statistics
+	 */
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, false);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_secy_stats(struct rvu *rvu,
+					struct mcs_stats_req *req,
+					struct mcs_secy_stats *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, true);
+
+	mutex_lock(&mcs->stats_lock);
+
+	if (req->dir == MCS_RX)
+		mcs_get_rx_secy_stats(mcs, rsp, req->id);
+	else
+		mcs_get_tx_secy_stats(mcs, rsp, req->id);
+
+	mutex_unlock(&mcs->stats_lock);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, false);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_sc_stats(struct rvu *rvu,
+				      struct mcs_stats_req *req,
+				      struct mcs_sc_stats *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, true);
+
+	mutex_lock(&mcs->stats_lock);
+	mcs_get_sc_stats(mcs, rsp, req->id, req->dir);
+	mutex_unlock(&mcs->stats_lock);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, false);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_sa_stats(struct rvu *rvu,
+				      struct mcs_stats_req *req,
+				      struct mcs_sa_stats *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, true);
+
+	mutex_lock(&mcs->stats_lock);
+	mcs_get_sa_stats(mcs, rsp, req->id, req->dir);
+	mutex_unlock(&mcs->stats_lock);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, false);
+
+	return 0;
+}
+
+int rvu_mbox_handler_mcs_get_port_stats(struct rvu *rvu,
+					struct mcs_stats_req *req,
+					struct mcs_port_stats *rsp)
+{
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, true);
+
+	mutex_lock(&mcs->stats_lock);
+	mcs_get_port_stats(mcs, rsp, req->id, req->dir);
+	mutex_unlock(&mcs->stats_lock);
+
+	if (mcs->hw->mcs_blks > 1)
+		mcs_set_force_clk_en(mcs, false);
+
+	return 0;
+}
+
 int rvu_mbox_handler_mcs_set_active_lmac(struct rvu *rvu,
 					 struct mcs_set_active_lmac *req,
 					 struct msg_rsp *rsp)
-- 
2.7.4
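
All the stats mbox handlers above share one shape: validate mcs_id,
take stats_lock, read the counters, and on CNF10K-B bracket the read
with mcs_set_force_clk_en(). A minimal PF-side sketch of consuming
one of them follows; the otx2_mbox_alloc_msg_mcs_get_sc_stats()
allocator name assumes the usual otx2 M() mbox convention and is not
part of this patch:

	/* Hypothetical PF-side sketch: fetch RX SC stats for sc_id
	 * from MCS block 0; error handling trimmed.
	 */
	struct mcs_sc_stats *rsp;
	struct mcs_stats_req *req;

	mutex_lock(&pfvf->mbox.lock);
	req = otx2_mbox_alloc_msg_mcs_get_sc_stats(&pfvf->mbox);
	if (req) {
		req->id = sc_id;
		req->mcs_id = 0;
		req->dir = MCS_RX;
		if (!otx2_sync_mbox_msg(&pfvf->mbox))
			rsp = (struct mcs_sc_stats *)
			      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0,
						&req->hdr);
	}
	mutex_unlock(&pfvf->mbox.lock);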


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
                   ` (4 preceding siblings ...)
  2022-09-28  3:58 ` [net-next PATCH v2 5/8] octeontx2-af: cn10k: mcs: Support for stats collection Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-29 16:13   ` Jakub Kicinski
  2022-09-28  3:58 ` [net-next PATCH v2 7/8] octeontx2-af: cn10k: mcs: Add debugfs support Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 8/8] octeontx2-pf: mcs: Introduce MACSEC hardware offloading Subbaraya Sundeep
  7 siblings, 1 reply; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Vamsi Attunuru, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

Hardware triggers an interrupt for events such as a PN wrapping
to zero or a PN crossing its configured threshold. The interrupt
is received by MCS AF, which finds the PF/VF to which the SA is
mapped and notifies it using the mcs_intr_notify mbox message.

A PF/VF can use the mcs_intr_cfg mbox message to configure the
list of interrupts for which it wants to receive notifications
from AF.
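
For example, a PF that wants PN-threshold notifications could send
(a sketch only; the otx2_mbox_alloc_msg_mcs_intr_cfg() allocator
name assumes the usual PF-side M() mbox convention and is not part
of this patch):

	/* Hypothetical PF-side sketch: enable PN-threshold events
	 * on MCS block 0, LMAC 0; error handling trimmed.
	 */
	struct mcs_intr_cfg *req;

	mutex_lock(&pfvf->mbox.lock);
	req = otx2_mbox_alloc_msg_mcs_intr_cfg(&pfvf->mbox);
	if (req) {
		req->intr_mask = MCS_CPM_TX_PN_THRESH_REACHED_INT |
				 MCS_CPM_RX_PN_THRESH_REACHED_INT;
		req->mcs_id = 0;
		req->lmac_id = 0;
		otx2_sync_mbox_msg(&pfvf->mbox);
	}
	mutex_unlock(&pfvf->mbox.lock);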

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/mbox.h   |  38 +++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.c    | 337 +++++++++++++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/mcs.h    |  74 +++++
 .../ethernet/marvell/octeontx2/af/mcs_cnf10kb.c    |  86 +++++-
 .../net/ethernet/marvell/octeontx2/af/mcs_reg.h    | 168 ++++++++++
 .../net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c | 159 ++++++++++
 drivers/net/ethernet/marvell/octeontx2/af/rvu.c    |   6 +-
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   8 +
 8 files changed, 864 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index e01a705..8d5d5a0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -326,6 +326,7 @@ M(MCS_GET_SA_STATS,	0xa00f, mcs_get_sa_stats, mcs_stats_req, mcs_sa_stats)	\
 M(MCS_GET_PORT_STATS,	0xa010, mcs_get_port_stats, mcs_stats_req,		\
 				mcs_port_stats)					\
 M(MCS_CLEAR_STATS,	0xa011,	mcs_clear_stats, mcs_clear_stats, msg_rsp)	\
+M(MCS_INTR_CFG,		0xa012, mcs_intr_cfg, mcs_intr_cfg, msg_rsp)		\
 M(MCS_SET_LMAC_MODE,	0xa013, mcs_set_lmac_mode, mcs_set_lmac_mode, msg_rsp)	\
 M(MCS_SET_PN_THRESHOLD, 0xa014, mcs_set_pn_threshold, mcs_set_pn_threshold,	\
 				msg_rsp)					\
@@ -351,11 +352,15 @@ M(CGX_LINK_EVENT,	0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp)
 #define MBOX_UP_CPT_MESSAGES						\
 M(CPT_INST_LMTST,	0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp)
 
+#define MBOX_UP_MCS_MESSAGES						\
+M(MCS_INTR_NOTIFY,	0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp)
+
 enum {
 #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
 MBOX_MESSAGES
 MBOX_UP_CGX_MESSAGES
 MBOX_UP_CPT_MESSAGES
+MBOX_UP_MCS_MESSAGES
 #undef M
 };
 
@@ -2084,4 +2089,37 @@ struct mcs_clear_stats {
 	u8 all;		/* All resources stats mapped to PF are cleared */
 };
 
+struct mcs_intr_cfg {
+	struct mbox_msghdr hdr;
+#define MCS_CPM_RX_SECTAG_V_EQ1_INT		BIT_ULL(0)
+#define MCS_CPM_RX_SECTAG_E_EQ0_C_EQ1_INT	BIT_ULL(1)
+#define MCS_CPM_RX_SECTAG_SL_GTE48_INT		BIT_ULL(2)
+#define MCS_CPM_RX_SECTAG_ES_EQ1_SC_EQ1_INT	BIT_ULL(3)
+#define MCS_CPM_RX_SECTAG_SC_EQ1_SCB_EQ1_INT	BIT_ULL(4)
+#define MCS_CPM_RX_PACKET_XPN_EQ0_INT		BIT_ULL(5)
+#define MCS_CPM_RX_PN_THRESH_REACHED_INT	BIT_ULL(6)
+#define MCS_CPM_TX_PACKET_XPN_EQ0_INT		BIT_ULL(7)
+#define MCS_CPM_TX_PN_THRESH_REACHED_INT	BIT_ULL(8)
+#define MCS_CPM_TX_SA_NOT_VALID_INT		BIT_ULL(9)
+#define MCS_BBE_RX_DFIFO_OVERFLOW_INT		BIT_ULL(10)
+#define MCS_BBE_RX_PLFIFO_OVERFLOW_INT		BIT_ULL(11)
+#define MCS_BBE_TX_DFIFO_OVERFLOW_INT		BIT_ULL(12)
+#define MCS_BBE_TX_PLFIFO_OVERFLOW_INT		BIT_ULL(13)
+#define MCS_PAB_RX_CHAN_OVERFLOW_INT		BIT_ULL(14)
+#define MCS_PAB_TX_CHAN_OVERFLOW_INT		BIT_ULL(15)
+	u64 intr_mask;		/* Interrupt enable mask */
+	u8 mcs_id;
+	u8 lmac_id;
+	u64 rsvd;
+};
+
+struct mcs_intr_info {
+	struct mbox_msghdr hdr;
+	u64 intr_mask;
+	int sa_id;
+	u8 mcs_id;
+	u8 lmac_id;
+	u64 rsvd;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
index 2cc713d..38f98d4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -797,6 +797,289 @@ int mcs_alloc_all_rsrc(struct mcs *mcs, u8 *flow_id, u8 *secy_id,
 	return 0;
 }
 
+static void cn10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs)
+{
+	struct mcs_intr_event event = { 0 };
+	struct rsrc_bmap *sc_bmap;
+	u64 val;
+	int sc;
+
+	sc_bmap = &mcs->tx.sc;
+
+	event.mcs_id = mcs->mcs_id;
+	event.intr_mask = MCS_CPM_TX_PACKET_XPN_EQ0_INT;
+
+	for_each_set_bit(sc, sc_bmap->bmap, mcs->hw->sc_entries) {
+		val = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(sc));
+
+		if (mcs->tx_sa_active[sc])
+			/* SA_index1 was used and got expired */
+			event.sa_id = (val >> 9) & 0xFF;
+		else
+			/* SA_index0 was used and got expired */
+			event.sa_id = val & 0xFF;
+
+		event.pcifunc = mcs->tx.sa2pf_map[event.sa_id];
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+static void cn10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs)
+{
+	struct mcs_intr_event event = { 0 };
+	struct rsrc_bmap *sc_bmap;
+	u64 val, status;
+	int sc;
+
+	sc_bmap = &mcs->tx.sc;
+
+	event.mcs_id = mcs->mcs_id;
+	event.intr_mask = MCS_CPM_TX_PN_THRESH_REACHED_INT;
+
+	/* TX SA interrupt is raised only if autorekey is enabled.
+	 * MCS_CPM_TX_SLAVE_SA_MAP_MEM_0X[sc].tx_sa_active bit gets toggled if
+	 * one of the two SAs mapped to the SC expires. tx_sa_active=0
+	 * implies the SA in SA_index1 expired; otherwise the SA in
+	 * SA_index0 expired.
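+	 *
+	 * SA_MAP_MEM_0X field layout assumed by the shifts below:
+	 * bits [7:0] SA_index0, bits [16:9] SA_index1,
+	 * bit 18 auto_rekey_enable, bit 21 tx_sa_active.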
+	 */
+	for_each_set_bit(sc, sc_bmap->bmap, mcs->hw->sc_entries) {
+		val = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(sc));
+		/* Skip SCs on which auto-rekey is not enabled */
+		if (!((val >> 18) & 0x1))
+			continue;
+
+		status = (val >> 21) & 0x1;
+
+		/* Check if tx_sa_active status had changed */
+		if (status == mcs->tx_sa_active[sc])
+			continue;
+		/* SA_index0 is expired */
+		if (status)
+			event.sa_id = val & 0xFF;
+		else
+			event.sa_id = (val >> 9) & 0xFF;
+
+		event.pcifunc = mcs->tx.sa2pf_map[event.sa_id];
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+static void mcs_rx_pn_thresh_reached_handler(struct mcs *mcs)
+{
+	struct mcs_intr_event event = { 0 };
+	int sa, reg;
+	u64 intr;
+
+	/* Check expired SAs */
+	for (reg = 0; reg < (mcs->hw->sa_entries / 64); reg++) {
+		/* Bit high in *PN_THRESH_REACHEDX implies
+		 * corresponding SAs are expired.
+		 */
+		intr = mcs_reg_read(mcs, MCSX_CPM_RX_SLAVE_PN_THRESH_REACHEDX(reg));
+		for (sa = 0; sa < 64; sa++) {
+			if (!(intr & BIT_ULL(sa)))
+				continue;
+
+			event.mcs_id = mcs->mcs_id;
+			event.intr_mask = MCS_CPM_RX_PN_THRESH_REACHED_INT;
+			event.sa_id = sa + (reg * 64);
+			event.pcifunc = mcs->rx.sa2pf_map[event.sa_id];
+			mcs_add_intr_wq_entry(mcs, &event);
+		}
+	}
+}
+
+static void mcs_rx_misc_intr_handler(struct mcs *mcs, u64 intr)
+{
+	struct mcs_intr_event event = { 0 };
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	if (intr & MCS_CPM_RX_INT_SECTAG_V_EQ1)
+		event.intr_mask = MCS_CPM_RX_SECTAG_V_EQ1_INT;
+	if (intr & MCS_CPM_RX_INT_SECTAG_E_EQ0_C_EQ1)
+		event.intr_mask |= MCS_CPM_RX_SECTAG_E_EQ0_C_EQ1_INT;
+	if (intr & MCS_CPM_RX_INT_SL_GTE48)
+		event.intr_mask |= MCS_CPM_RX_SECTAG_SL_GTE48_INT;
+	if (intr & MCS_CPM_RX_INT_ES_EQ1_SC_EQ1)
+		event.intr_mask |= MCS_CPM_RX_SECTAG_ES_EQ1_SC_EQ1_INT;
+	if (intr & MCS_CPM_RX_INT_SC_EQ1_SCB_EQ1)
+		event.intr_mask |= MCS_CPM_RX_SECTAG_SC_EQ1_SCB_EQ1_INT;
+	if (intr & MCS_CPM_RX_INT_PACKET_XPN_EQ0)
+		event.intr_mask |= MCS_CPM_RX_PACKET_XPN_EQ0_INT;
+
+	mcs_add_intr_wq_entry(mcs, &event);
+}
+
+static void mcs_tx_misc_intr_handler(struct mcs *mcs, u64 intr)
+{
+	struct mcs_intr_event event = { 0 };
+
+	if (!(intr & MCS_CPM_TX_INT_SA_NOT_VALID))
+		return;
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	event.intr_mask = MCS_CPM_TX_SA_NOT_VALID_INT;
+
+	mcs_add_intr_wq_entry(mcs, &event);
+}
+
+static void mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
+{
+	struct mcs_intr_event event = { 0 };
+	int i;
+
+	if (!(intr & MCS_BBE_INT_MASK))
+		return;
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	for (i = 0; i < MCS_MAX_BBE_INT; i++) {
+		if (!(intr & BIT_ULL(i)))
+			continue;
+
+		/* Lower nibble denotes data fifo overflow interrupts and
+		 * upper nibble indicates policy fifo overflow interrupts.
+		 */
+		if (intr & 0xFULL)
+			event.intr_mask = (dir == MCS_RX) ?
+					  MCS_BBE_RX_DFIFO_OVERFLOW_INT :
+					  MCS_BBE_TX_DFIFO_OVERFLOW_INT;
+		else
+			event.intr_mask = (dir == MCS_RX) ?
+					  MCS_BBE_RX_PLFIFO_OVERFLOW_INT :
+					  MCS_BBE_TX_PLFIFO_OVERFLOW_INT;
+
+		/* Notify which lmac_id ran into the BBE fatal error */
+		event.lmac_id = i & 0x3ULL;
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+static void mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
+{
+	struct mcs_intr_event event = { 0 };
+	int i;
+
+	if (!(intr & MCS_PAB_INT_MASK))
+		return;
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	for (i = 0; i < MCS_MAX_PAB_INT; i++) {
+		if (!(intr & BIT_ULL(i)))
+			continue;
+
+		event.intr_mask = (dir == MCS_RX) ? MCS_PAB_RX_CHAN_OVERFLOW_INT :
+				  MCS_PAB_TX_CHAN_OVERFLOW_INT;
+
+		/* Notify which lmac_id ran into the PAB fatal error */
+		event.lmac_id = i;
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
+{
+	struct mcs *mcs = (struct mcs *)mcs_irq;
+	u64 intr, cpm_intr, bbe_intr, pab_intr;
+
+	/* Disable and clear the interrupt */
+	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1C, BIT_ULL(0));
+	mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
+
+	/* Check which block raised the interrupt */
+	intr = mcs_reg_read(mcs, MCSX_TOP_SLAVE_INT_SUM);
+
+	/* CPM RX */
+	if (intr & MCS_CPM_RX_INT_ENA) {
+		/* Check for PN thresh interrupt bit */
+		cpm_intr = mcs_reg_read(mcs, MCSX_CPM_RX_SLAVE_RX_INT);
+
+		if (cpm_intr & MCS_CPM_RX_INT_PN_THRESH_REACHED)
+			mcs_rx_pn_thresh_reached_handler(mcs);
+
+		if (cpm_intr & MCS_CPM_RX_INT_ALL)
+			mcs_rx_misc_intr_handler(mcs, cpm_intr);
+
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT, cpm_intr);
+	}
+
+	/* CPM TX */
+	if (intr & MCS_CPM_TX_INT_ENA) {
+		cpm_intr = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_TX_INT);
+
+		if (cpm_intr & MCS_CPM_TX_INT_PN_THRESH_REACHED) {
+			if (mcs->hw->mcs_blks > 1)
+				cnf10kb_mcs_tx_pn_thresh_reached_handler(mcs);
+			else
+				cn10kb_mcs_tx_pn_thresh_reached_handler(mcs);
+		}
+
+		if (cpm_intr & MCS_CPM_TX_INT_SA_NOT_VALID)
+			mcs_tx_misc_intr_handler(mcs, cpm_intr);
+
+		if (cpm_intr & MCS_CPM_TX_INT_PACKET_XPN_EQ0) {
+			if (mcs->hw->mcs_blks > 1)
+				cnf10kb_mcs_tx_pn_wrapped_handler(mcs);
+			else
+				cn10kb_mcs_tx_pn_wrapped_handler(mcs);
+		}
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT, cpm_intr);
+	}
+
+	/* BBE RX */
+	if (intr & MCS_BBE_RX_INT_ENA) {
+		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_RX_SLAVE_BBE_INT);
+		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
+
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW, 0);
+		mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT, bbe_intr);
+	}
+
+	/* BBE TX */
+	if (intr & MCS_BBE_TX_INT_ENA) {
+		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_TX_SLAVE_BBE_INT);
+		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
+
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW, 0);
+		mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT, bbe_intr);
+	}
+
+	/* PAB RX */
+	if (intr & MCS_PAB_RX_INT_ENA) {
+		pab_intr = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PAB_INT);
+		mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
+
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW, 0);
+		mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT, pab_intr);
+	}
+
+	/* PAB TX */
+	if (intr & MCS_PAB_TX_INT_ENA) {
+		pab_intr = mcs_reg_read(mcs, MCSX_PAB_TX_SLAVE_PAB_INT);
+		mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
+
+		/* Clear the interrupt */
+		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW, 0);
+		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT, pab_intr);
+	}
+
+	/* Enable the interrupt */
+	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
+
+	return IRQ_HANDLED;
+}
+
 static void *alloc_mem(struct mcs *mcs, int n)
 {
 	return devm_kcalloc(mcs->dev, n, sizeof(u16), GFP_KERNEL);
@@ -859,6 +1142,56 @@ static int mcs_alloc_struct_mem(struct mcs *mcs, struct mcs_rsrc_map *res)
 	return 0;
 }
 
+static int mcs_register_interrupts(struct mcs *mcs)
+{
+	int ret = 0;
+
+	mcs->num_vec = pci_msix_vec_count(mcs->pdev);
+
+	ret = pci_alloc_irq_vectors(mcs->pdev, mcs->num_vec,
+				    mcs->num_vec, PCI_IRQ_MSIX);
+	if (ret < 0) {
+		dev_err(mcs->dev, "MCS Request for %d msix vector failed err:%d\n",
+			mcs->num_vec, ret);
+		return ret;
+	}
+
+	ret = request_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP),
+			  mcs_ip_intr_handler, 0, "MCS_IP", mcs);
+	if (ret) {
+		dev_err(mcs->dev, "MCS IP irq registration failed\n");
+		goto exit;
+	}
+
+	/* MCS enable IP interrupts */
+	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
+
+	/* Enable CPM Rx/Tx interrupts */
+	mcs_reg_write(mcs, MCSX_TOP_SLAVE_INT_SUM_ENB,
+		      MCS_CPM_RX_INT_ENA | MCS_CPM_TX_INT_ENA |
+		      MCS_BBE_RX_INT_ENA | MCS_BBE_TX_INT_ENA |
+		      MCS_PAB_RX_INT_ENA | MCS_PAB_TX_INT_ENA);
+
+	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT_ENB, 0x7ULL);
+	mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT_ENB, 0x7FULL);
+
+	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xff);
+	mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xff);
+
+	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xff);
+	mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xff);
+
+	mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries);
+	if (!mcs->tx_sa_active) {
+		ret = -ENOMEM;
+		goto exit;
+	}
+
+	return ret;
+exit:
+	pci_free_irq_vectors(mcs->pdev);
+	mcs->num_vec = 0;
+	return ret;
+}
+
 int mcs_get_blkcnt(void)
 {
 	struct mcs *mcs;
@@ -1229,6 +1562,10 @@ static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	/* Parser configuration */
 	mcs->mcs_ops->mcs_parser_cfg(mcs);
 
+	err = mcs_register_interrupts(mcs);
+	if (err)
+		goto exit;
+
 	list_add(&mcs->mcs_list, &mcs_list);
 	mutex_init(&mcs->stats_lock);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
index 28600ef..64dc2b8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -16,6 +16,7 @@
 #define MCSX_LINK_LMAC_BASE_MASK	GENMASK_ULL(11, 0)
 
 #define MCS_ID_MASK			0x7
+#define MCS_MAX_PFS                     128
 
 #define MCS_PORT_MODE_MASK		0x3
 #define MCS_PORT_FIFO_SKID_MASK		0x3F
@@ -42,6 +43,69 @@
 /* Reserved resources for default bypass entry */
 #define MCS_RSRC_RSVD_CNT		1
 
+/* MCS Interrupt Vector Enumeration */
+enum mcs_int_vec_e {
+	MCS_INT_VEC_MIL_RX_GBL		= 0x0,
+	MCS_INT_VEC_MIL_RX_LMACX	= 0x1,
+	MCS_INT_VEC_MIL_TX_LMACX	= 0x5,
+	MCS_INT_VEC_HIL_RX_GBL		= 0x9,
+	MCS_INT_VEC_HIL_RX_LMACX	= 0xa,
+	MCS_INT_VEC_HIL_TX_GBL		= 0xe,
+	MCS_INT_VEC_HIL_TX_LMACX	= 0xf,
+	MCS_INT_VEC_IP			= 0x13,
+	MCS_INT_VEC_CNT			= 0x14,
+};
+
+#define MCS_MAX_BBE_INT			8ULL
+#define MCS_BBE_INT_MASK		0xFFULL
+
+#define MCS_MAX_PAB_INT			4ULL
+#define MCS_PAB_INT_MASK		0xFULL
+
+#define MCS_BBE_RX_INT_ENA		BIT_ULL(0)
+#define MCS_BBE_TX_INT_ENA		BIT_ULL(1)
+#define MCS_CPM_RX_INT_ENA		BIT_ULL(2)
+#define MCS_CPM_TX_INT_ENA		BIT_ULL(3)
+#define MCS_PAB_RX_INT_ENA		BIT_ULL(4)
+#define MCS_PAB_TX_INT_ENA		BIT_ULL(5)
+
+#define MCS_CPM_TX_INT_PACKET_XPN_EQ0		BIT_ULL(0)
+#define MCS_CPM_TX_INT_PN_THRESH_REACHED	BIT_ULL(1)
+#define MCS_CPM_TX_INT_SA_NOT_VALID		BIT_ULL(2)
+
+#define MCS_CPM_RX_INT_SECTAG_V_EQ1		BIT_ULL(0)
+#define MCS_CPM_RX_INT_SECTAG_E_EQ0_C_EQ1	BIT_ULL(1)
+#define MCS_CPM_RX_INT_SL_GTE48			BIT_ULL(2)
+#define MCS_CPM_RX_INT_ES_EQ1_SC_EQ1		BIT_ULL(3)
+#define MCS_CPM_RX_INT_SC_EQ1_SCB_EQ1		BIT_ULL(4)
+#define MCS_CPM_RX_INT_PACKET_XPN_EQ0		BIT_ULL(5)
+#define MCS_CPM_RX_INT_PN_THRESH_REACHED	BIT_ULL(6)
+
+#define MCS_CPM_RX_INT_ALL	(MCS_CPM_RX_INT_SECTAG_V_EQ1 |		\
+				 MCS_CPM_RX_INT_SECTAG_E_EQ0_C_EQ1 |    \
+				 MCS_CPM_RX_INT_SL_GTE48 |		\
+				 MCS_CPM_RX_INT_ES_EQ1_SC_EQ1 |		\
+				 MCS_CPM_RX_INT_SC_EQ1_SCB_EQ1 |	\
+				 MCS_CPM_RX_INT_PACKET_XPN_EQ0 |	\
+				 MCS_CPM_RX_INT_PN_THRESH_REACHED)
+
+struct mcs_pfvf {
+	u64 intr_mask;	/* Enabled Interrupt mask */
+};
+
+struct mcs_intr_event {
+	u16 pcifunc;
+	u64 intr_mask;
+	u64 sa_id;
+	u8 mcs_id;
+	u8 lmac_id;
+};
+
+struct mcs_intrq_entry {
+	struct list_head node;
+	struct mcs_intr_event intr_event;
+};
+
 struct secy_mem_map {
 	u8 flow_id;
 	u8 secy;
@@ -82,11 +146,17 @@ struct mcs {
 	struct hwinfo		*hw;
 	struct mcs_rsrc_map	tx;
 	struct mcs_rsrc_map	rx;
+	u16                     pf_map[MCS_MAX_PFS]; /* List of PCIFUNC mapped to MCS */
 	u8			mcs_id;
 	struct mcs_ops		*mcs_ops;
 	struct list_head	mcs_list;
 	/* Lock for mcs stats */
 	struct mutex		stats_lock;
+	struct mcs_pfvf		*pf;
+	struct mcs_pfvf		*vf;
+	u16			num_vec;
+	void			*rvu;
+	u16			*tx_sa_active;
 };
 
 struct mcs_ops {
@@ -157,6 +227,8 @@ void cnf10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *m
 void cnf10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
 void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
+void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs);
+void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs);
 
 /* Stats APIs */
 void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir);
@@ -169,4 +241,6 @@ void mcs_clear_stats(struct mcs *mcs, u8 type, u8 id, int dir);
 int mcs_clear_all_stats(struct mcs *mcs, u16 pcifunc, int dir);
 int mcs_set_force_clk_en(struct mcs *mcs, bool set);
 
+int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event);
+
 #endif /* MCS_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
index 22ca3b1..69ba36d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
@@ -93,18 +93,18 @@ void cnf10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *m
 	reg = MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(map->sc_id);
 	mcs_reg_write(mcs, reg, val);
 
-	if (map->rekey_ena) {
-		reg = MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0;
-		val = mcs_reg_read(mcs, reg);
+	reg = MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0;
+	val = mcs_reg_read(mcs, reg);
+
+	if (map->rekey_ena)
 		val |= BIT_ULL(map->sc_id);
-		mcs_reg_write(mcs, reg, val);
-	}
+	else
+		val &= ~BIT_ULL(map->sc_id);
 
-	if (map->sa_index0_vld)
-		mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX0_VLDX(map->sc_id), BIT_ULL(0));
+	mcs_reg_write(mcs, reg, val);
 
-	if (map->sa_index1_vld)
-		mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX1_VLDX(map->sc_id), BIT_ULL(0));
+	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX0_VLDX(map->sc_id), map->sa_index0_vld);
+	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_SA_INDEX1_VLDX(map->sc_id), map->sa_index1_vld);
 
 	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_SA_ACTIVEX(map->sc_id), map->tx_sa_active);
 }
@@ -144,3 +144,71 @@ int mcs_set_force_clk_en(struct mcs *mcs, bool set)
 
 	return 0;
 }
+
+/* TX SA interrupt is raised only if autorekey is enabled.
+ * MCS_CPM_TX_SLAVE_SA_MAP_MEM_0X[sc].tx_sa_active bit gets toggled if
+ * one of the two SAs mapped to the SC expires. tx_sa_active=0
+ * implies the SA in SA_index1 expired; otherwise the SA in
+ * SA_index0 expired.
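+ *
+ * On CNF10K-B the SA indices in SA_MAP_MEM_0X are 7 bits wide
+ * (SA_index0 at bits [6:0], SA_index1 at bits [13:7]), hence the
+ * 0x7F masks below, unlike the 8-bit fields on CN10K-B.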
+ */
+void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs)
+{
+	struct mcs_intr_event event = { 0 };
+	struct rsrc_bmap *sc_bmap;
+	unsigned long rekey_ena;
+	u64 val, sa_status;
+	int sc;
+
+	sc_bmap = &mcs->tx.sc;
+
+	event.mcs_id = mcs->mcs_id;
+	event.intr_mask = MCS_CPM_TX_PN_THRESH_REACHED_INT;
+
+	rekey_ena = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_AUTO_REKEY_ENABLE_0);
+
+	for_each_set_bit(sc, sc_bmap->bmap, mcs->hw->sc_entries) {
+		/* Skip SCs on which auto-rekey is not enabled */
+		if (!test_bit(sc, &rekey_ena))
+			continue;
+		sa_status = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_TX_SA_ACTIVEX(sc));
+		/* Check if tx_sa_active status had changed */
+		if (sa_status == mcs->tx_sa_active[sc])
+			continue;
+
+		/* SA_index0 is expired */
+		val = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(sc));
+		if (sa_status)
+			event.sa_id = val & 0x7F;
+		else
+			event.sa_id = (val >> 7) & 0x7F;
+
+		event.pcifunc = mcs->tx.sa2pf_map[event.sa_id];
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs)
+{
+	struct mcs_intr_event event = { 0 };
+	struct rsrc_bmap *sc_bmap;
+	u64 val;
+	int sc;
+
+	sc_bmap = &mcs->tx.sc;
+
+	event.mcs_id = mcs->mcs_id;
+	event.intr_mask = MCS_CPM_TX_PACKET_XPN_EQ0_INT;
+
+	for_each_set_bit(sc, sc_bmap->bmap, mcs->hw->sc_entries) {
+		val = mcs_reg_read(mcs, MCSX_CPM_TX_SLAVE_SA_MAP_MEM_0X(sc));
+
+		if (mcs->tx_sa_active[sc])
+			/* SA_index1 was used and got expired */
+			event.sa_id = (val >> 7) & 0x7F;
+		else
+			/* SA_index0 was used and got expired */
+			event.sa_id = val & 0x7F;
+
+		event.pcifunc = mcs->tx.sa2pf_map[event.sa_id];
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
index 12be9f9..c95a8b8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -276,6 +276,102 @@
 #define MCSX_BBE_RX_SLAVE_CAL_LEN			0x188ull
 #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a)		(0x290ull + (a) * 0x40ull)
 
+#define MCSX_BBE_RX_SLAVE_BBE_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0xe00ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x160ull;	\
+	offset; })
+
+#define MCSX_BBE_RX_SLAVE_BBE_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0xe08ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x168ull;	\
+	offset; })
+
+#define MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW ({	\
+	u64 offset;			\
+					\
+	offset = 0xe08ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x178ull;	\
+	offset; })
+
+#define MCSX_BBE_TX_SLAVE_BBE_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0x1278ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x1e0ull;	\
+	offset; })
+
+#define MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW ({	\
+	u64 offset;			\
+					\
+	offset = 0x1278ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x1f8ull;	\
+	offset; })
+
+#define MCSX_BBE_TX_SLAVE_BBE_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0x1280ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x1e8ull;	\
+	offset; })
+
+#define MCSX_PAB_RX_SLAVE_PAB_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0x16f0ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x260ull;	\
+	offset; })
+
+#define MCSX_PAB_RX_SLAVE_PAB_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0x16f8ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x268ull;	\
+	offset; })
+
+#define MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW ({	\
+	u64 offset;			\
+					\
+	offset = 0x16f8ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x278ull;	\
+	offset; })
+
+#define MCSX_PAB_TX_SLAVE_PAB_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0x2908ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x380ull;	\
+	offset; })
+
+#define MCSX_PAB_TX_SLAVE_PAB_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0x2910ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x388ull;	\
+	offset; })
+
+#define MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW ({	\
+	u64 offset;			\
+					\
+	offset = 0x16f8ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x398ull;	\
+	offset; })
+
 /* CPM registers */
 #define MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(a, b) ({	\
 	u64 offset;					\
@@ -931,4 +1027,76 @@
 #define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAENCRYPTEDX(a)	(0x21c40ull + (a) * 0x8ull)
 #define MCSX_CSE_TX_MEM_SLAVE_OUTPKTSSAPROTECTEDX(a)	(0x20c40ull + (a) * 0x8ull)
 
+#define MCSX_IP_INT ({			\
+	u64 offset;			\
+					\
+	offset = 0x80028ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x60028ull;	\
+	offset; })
+
+#define MCSX_IP_INT_ENA_W1S ({		\
+	u64 offset;			\
+					\
+	offset = 0x80040ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x60040ull;	\
+	offset; })
+
+#define MCSX_IP_INT_ENA_W1C ({		\
+	u64 offset;			\
+					\
+	offset = 0x80038ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x60038ull;	\
+	offset; })
+
+#define MCSX_TOP_SLAVE_INT_SUM ({	\
+	u64 offset;			\
+					\
+	offset = 0xc20ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0xab8ull;	\
+	offset; })
+
+#define MCSX_TOP_SLAVE_INT_SUM_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0xc28ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0xac0ull;	\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_RX_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0x23c00ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x0ad8ull;	\
+	offset; })
+
+#define MCSX_CPM_RX_SLAVE_RX_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0x23c08ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0xae0ull;	\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_TX_INT ({	\
+	u64 offset;			\
+					\
+	offset = 0x3d490ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x54a0ull;	\
+	offset; })
+
+#define MCSX_CPM_TX_SLAVE_TX_INT_ENB ({	\
+	u64 offset;			\
+					\
+	offset = 0x3d498ull;		\
+	if (mcs->hw->mcs_blks > 1)	\
+		offset = 0x54a8ull;	\
+	offset; })
+
 #endif
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index 939c9b6..70de24d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -13,6 +13,25 @@
 #include "rvu.h"
 #include "lmac_common.h"
 
+#define M(_name, _id, _fn_name, _req_type, _rsp_type)			\
+static struct _req_type __maybe_unused					\
+*otx2_mbox_alloc_msg_ ## _fn_name(struct rvu *rvu, int devid)		\
+{									\
+	struct _req_type *req;						\
+									\
+	req = (struct _req_type *)otx2_mbox_alloc_msg_rsp(		\
+		&rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \
+		sizeof(struct _rsp_type));				\
+	if (!req)							\
+		return NULL;						\
+	req->hdr.sig = OTX2_MBOX_REQ_SIG;				\
+	req->hdr.id = _id;						\
+	return req;							\
+}
+
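+/* The M() expansion above generates an allocator per message, e.g.
+ * otx2_mbox_alloc_msg_mcs_intr_notify() for MCS_INTR_NOTIFY in
+ * MBOX_UP_MCS_MESSAGES.
+ */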
+MBOX_UP_MCS_MESSAGES
+#undef M
+
 int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu,
 				       struct mcs_set_lmac_mode *req,
 				       struct msg_rsp *rsp)
@@ -30,6 +49,113 @@ int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu,
 	return 0;
 }
 
+int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event)
+{
+	struct mcs_intrq_entry *qentry;
+	u16 pcifunc = event->pcifunc;
+	struct rvu *rvu = mcs->rvu;
+	struct mcs_pfvf *pfvf;
+
+	/* Check if it is PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK)
+		pfvf = &mcs->vf[rvu_get_hwvf(rvu, pcifunc)];
+	else
+		pfvf = &mcs->pf[rvu_get_pf(pcifunc)];
+
+	event->intr_mask &= pfvf->intr_mask;
+
+	/* Check whether the PF/VF enabled notification for these interrupts */
+	if (!(pfvf->intr_mask && event->intr_mask))
+		return 0;
+
+	qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC);
+	if (!qentry)
+		return -ENOMEM;
+
+	qentry->intr_event = *event;
+	spin_lock(&rvu->mcs_intrq_lock);
+	list_add_tail(&qentry->node, &rvu->mcs_intrq_head);
+	spin_unlock(&rvu->mcs_intrq_lock);
+	queue_work(rvu->mcs_intr_wq, &rvu->mcs_intr_work);
+
+	return 0;
+}
+
+static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+{
+	struct mcs_intr_info *req;
+	int err, pf;
+
+	pf = rvu_get_pf(event->pcifunc);
+
+	req = otx2_mbox_alloc_msg_mcs_intr_notify(rvu, pf);
+	if (!req)
+		return -ENOMEM;
+
+	req->mcs_id = event->mcs_id;
+	req->intr_mask = event->intr_mask;
+	req->sa_id = event->sa_id;
+	req->hdr.pcifunc = event->pcifunc;
+	req->lmac_id = event->lmac_id;
+	otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pf);
+
+	return 0;
+}
+
+static void mcs_intr_handler_task(struct work_struct *work)
+{
+	struct rvu *rvu = container_of(work, struct rvu, mcs_intr_work);
+	struct mcs_intrq_entry *qentry;
+	struct mcs_intr_event *event;
+	unsigned long flags;
+
+	do {
+		spin_lock_irqsave(&rvu->mcs_intrq_lock, flags);
+		qentry = list_first_entry_or_null(&rvu->mcs_intrq_head,
+						  struct mcs_intrq_entry,
+						  node);
+		if (qentry)
+			list_del(&qentry->node);
+
+		spin_unlock_irqrestore(&rvu->mcs_intrq_lock, flags);
+		if (!qentry)
+			break; /* nothing more to process */
+
+		event = &qentry->intr_event;
+
+		mcs_notify_pfvf(event, rvu);
+		kfree(qentry);
+	} while (1);
+}
+
+int rvu_mbox_handler_mcs_intr_cfg(struct rvu *rvu,
+				  struct mcs_intr_cfg *req,
+				  struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct mcs_pfvf *pfvf;
+	struct mcs *mcs;
+
+	if (req->mcs_id >= rvu->mcs_blk_cnt)
+		return MCS_AF_ERR_INVALID_MCSID;
+
+	mcs = mcs_get_pdata(req->mcs_id);
+
+	/* Check if it is PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK)
+		pfvf = &mcs->vf[rvu_get_hwvf(rvu, pcifunc)];
+	else
+		pfvf = &mcs->pf[rvu_get_pf(pcifunc)];
+
+	mcs->pf_map[0] = pcifunc;
+	pfvf->intr_mask = req->intr_mask;
+
+	return 0;
+}
+
 int rvu_mbox_handler_mcs_get_hw_info(struct rvu *rvu,
 				     struct msg_req *req,
 				     struct mcs_hw_info *rsp)
@@ -376,6 +502,7 @@ int rvu_mbox_handler_mcs_tx_sc_sa_map_write(struct rvu *rvu,
 
 	mcs = mcs_get_pdata(req->mcs_id);
 	mcs->mcs_ops->mcs_tx_sa_mem_map_write(mcs, req);
+	mcs->tx_sa_active[req->sc_id] = req->tx_sa_active;
 
 	return 0;
 }
@@ -723,7 +850,39 @@ int rvu_mcs_init(struct rvu *rvu)
 		mcs_install_flowid_bypass_entry(mcs);
 		for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++)
 			mcs_set_lmac_mode(mcs, lmac, 0);
+
+		mcs->rvu = rvu;
+
+		/* Allocate memory for PFVF data */
+		mcs->pf = devm_kcalloc(mcs->dev, hw->total_pfs,
+				       sizeof(struct mcs_pfvf), GFP_KERNEL);
+		if (!mcs->pf)
+			return -ENOMEM;
+
+		mcs->vf = devm_kcalloc(mcs->dev, hw->total_vfs,
+				       sizeof(struct mcs_pfvf), GFP_KERNEL);
+		if (!mcs->vf)
+			return -ENOMEM;
+	}
+
+	/* Initialize the wq for handling mcs interrupts */
+	INIT_LIST_HEAD(&rvu->mcs_intrq_head);
+	INIT_WORK(&rvu->mcs_intr_work, mcs_intr_handler_task);
+	rvu->mcs_intr_wq = alloc_workqueue("mcs_intr_wq", 0, 0);
+	if (!rvu->mcs_intr_wq) {
+		dev_err(rvu->dev, "mcs alloc workqueue failed\n");
+		return -ENOMEM;
 	}
 
 	return err;
 }
+
+void rvu_mcs_exit(struct rvu *rvu)
+{
+	if (!rvu->mcs_intr_wq)
+		return;
+
+	flush_workqueue(rvu->mcs_intr_wq);
+	destroy_workqueue(rvu->mcs_intr_wq);
+	rvu->mcs_intr_wq = NULL;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 5d74641..3f5e09b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -24,8 +24,6 @@
 #define DRV_NAME	"rvu_af"
 #define DRV_STRING      "Marvell OcteonTX2 RVU Admin Function Driver"
 
-static int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
-
 static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
 				struct rvu_block *block, int lf);
 static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
@@ -419,7 +417,7 @@ void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf)
 		*hwvf = cfg & 0xFFF;
 }
 
-static int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
+int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
 {
 	int pf, func;
 	u64 cfg;
@@ -3300,6 +3298,7 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 err_hwsetup:
 	rvu_cgx_exit(rvu);
 	rvu_fwdata_exit(rvu);
+	rvu_mcs_exit(rvu);
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
 	rvu_clear_rvum_blk_revid(rvu);
@@ -3326,6 +3325,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	rvu_flr_wq_destroy(rvu);
 	rvu_cgx_exit(rvu);
 	rvu_fwdata_exit(rvu);
+	rvu_mcs_exit(rvu);
 	rvu_mbox_destroy(&rvu->afpf_wq_info);
 	rvu_disable_sriov(rvu);
 	rvu_reset_all_blocks(rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 4aefe47..d0268c4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -508,6 +508,12 @@ struct rvu {
 
 	/* RVU switch implementation over NPC with DMAC rules */
 	struct rvu_switch	rswitch;
+
+	struct			work_struct mcs_intr_work;
+	struct			workqueue_struct *mcs_intr_wq;
+	struct list_head	mcs_intrq_head;
+	/* mcs interrupt queue lock */
+	spinlock_t		mcs_intrq_lock;
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -872,9 +878,11 @@ void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc);
 int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
 			   u64 pkind, u8 var_len_off, u8 var_len_off_mask,
 			   u8 shift_dir);
+int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
 
 /* CN10K MCS */
 int rvu_mcs_init(struct rvu *rvu);
 int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
+void rvu_mcs_exit(struct rvu *rvu);
 
 #endif /* RVU_H */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [net-next PATCH v2 7/8] octeontx2-af: cn10k: mcs: Add debugfs support
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
                   ` (5 preceding siblings ...)
  2022-09-28  3:58 ` [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  2022-09-28  3:58 ` [net-next PATCH v2 8/8] octeontx2-pf: mcs: Introduce MACSEC hardware offloading Subbaraya Sundeep
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Geetha sowjanya, Subbaraya Sundeep

From: Geetha sowjanya <gakula@marvell.com>

This patch adds debugfs entries to dump MCS SecY, SC, SA, flow id
and port stats. This helps in debugging the packet path and in
figuring out where exactly a packet was dropped.
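
The entries land under the shared octeontx2 debugfs root, roughly
(illustrative paths; the exact file names come from the
rvu_dbg_mcs_* registrations, one mcs%d directory per block):

  /sys/kernel/debug/octeontx2/mcs/mcs0/rx/{flowid,secy,sc,sa,port}_stats
  /sys/kernel/debug/octeontx2/mcs/mcs0/tx/{flowid,secy,sc,sa,port}_stats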

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
---
 drivers/net/ethernet/marvell/octeontx2/af/rvu.h    |   4 +
 .../ethernet/marvell/octeontx2/af/rvu_debugfs.c    | 346 +++++++++++++++++++++
 2 files changed, 350 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index d0268c4..76474385 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -64,6 +64,10 @@ struct rvu_debugfs {
 	struct dentry *nix;
 	struct dentry *npc;
 	struct dentry *cpt;
+	struct dentry *mcs_root;
+	struct dentry *mcs;
+	struct dentry *mcs_rx;
+	struct dentry *mcs_tx;
 	struct dump_ctx npa_aura_ctx;
 	struct dump_ctx npa_pool_ctx;
 	struct dump_ctx nix_cq_ctx;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index f42a09f..c944884 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -19,6 +19,7 @@
 #include "lmac_common.h"
 #include "npc.h"
 #include "rvu_npc_hash.h"
+#include "mcs.h"
 
 #define DEBUGFS_DIR_NAME "octeontx2"
 
@@ -227,6 +228,350 @@ static const struct file_operations rvu_dbg_##name##_fops = { \
 
 static void print_nix_qsize(struct seq_file *filp, struct rvu_pfvf *pfvf);
 
+static int rvu_dbg_mcs_port_stats_display(struct seq_file *filp, void *unused, int dir)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_port_stats stats;
+	int lmac;
+
+	seq_puts(filp, "\n port stats\n");
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(lmac, &mcs->hw->lmac_bmap, mcs->hw->lmac_cnt) {
+		mcs_get_port_stats(mcs, &stats, lmac, dir);
+		seq_printf(filp, "port%d: Tcam Miss: %lld\n", lmac, stats.tcam_miss_cnt);
+		seq_printf(filp, "port%d: Parser errors: %lld\n", lmac, stats.parser_err_cnt);
+
+		if (dir == MCS_RX && mcs->hw->mcs_blks > 1)
+			seq_printf(filp, "port%d: Preempt error: %lld\n", lmac,
+				   stats.preempt_err_cnt);
+		if (dir == MCS_TX)
+			seq_printf(filp, "port%d: Sectag insert error: %lld\n", lmac,
+				   stats.sectag_insert_err_cnt);
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+static int rvu_dbg_mcs_rx_port_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_port_stats_display(filp, unused, MCS_RX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_rx_port_stats, mcs_rx_port_stats_display, NULL);
+
+static int rvu_dbg_mcs_tx_port_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_port_stats_display(filp, unused, MCS_TX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_tx_port_stats, mcs_tx_port_stats_display, NULL);
+
+static int rvu_dbg_mcs_sa_stats_display(struct seq_file *filp, void *unused, int dir)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_sa_stats stats;
+	struct rsrc_bmap *map;
+	int sa_id;
+
+	if (dir == MCS_TX) {
+		mutex_lock(&mcs->stats_lock);
+		map = &mcs->tx.sa;
+		seq_puts(filp, "\n TX SA stats\n");
+		for_each_set_bit(sa_id, map->bmap, mcs->hw->sa_entries) {
+			mcs_get_sa_stats(mcs, &stats, sa_id, MCS_TX);
+			seq_printf(filp, "sa%d: Pkts encrypted: %lld\n", sa_id,
+				   stats.pkt_encrypt_cnt);
+
+			seq_printf(filp, "sa%d: Pkts protected: %lld\n", sa_id,
+				   stats.pkt_protected_cnt);
+		}
+		mutex_unlock(&mcs->stats_lock);
+		return 0;
+	}
+
+	/* RX stats */
+	map = &mcs->rx.sa;
+	mutex_lock(&mcs->stats_lock);
+	seq_puts(filp, "\n RX SA stats\n");
+	for_each_set_bit(sa_id, map->bmap, mcs->hw->sa_entries) {
+		mcs_get_sa_stats(mcs, &stats, sa_id, MCS_RX);
+		seq_printf(filp, "sa%d: Invalid pkts: %lld\n", sa_id, stats.pkt_invalid_cnt);
+		seq_printf(filp, "sa%d: Pkts no sa error: %lld\n", sa_id, stats.pkt_nosaerror_cnt);
+		seq_printf(filp, "sa%d: Pkts not valid: %lld\n", sa_id, stats.pkt_notvalid_cnt);
+		seq_printf(filp, "sa%d: Pkts ok: %lld\n", sa_id, stats.pkt_ok_cnt);
+		seq_printf(filp, "sa%d: Pkts no sa: %lld\n", sa_id, stats.pkt_nosa_cnt);
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+static int rvu_dbg_mcs_rx_sa_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_sa_stats_display(filp, unused, MCS_RX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_rx_sa_stats, mcs_rx_sa_stats_display, NULL);
+
+static int rvu_dbg_mcs_tx_sa_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_sa_stats_display(filp, unused, MCS_TX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_tx_sa_stats, mcs_tx_sa_stats_display, NULL);
+
+static int rvu_dbg_mcs_tx_sc_stats_display(struct seq_file *filp, void *unused)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_sc_stats stats;
+	struct rsrc_bmap *map;
+	int sc_id;
+
+	map = &mcs->tx.sc;
+	seq_puts(filp, "\n SC stats\n");
+
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(sc_id, map->bmap, mcs->hw->sc_entries) {
+		mcs_get_sc_stats(mcs, &stats, sc_id, MCS_TX);
+		seq_printf(filp, "\n=======sc%d======\n\n", sc_id);
+		seq_printf(filp, "sc%d: Pkts encrypted: %lld\n", sc_id, stats.pkt_encrypt_cnt);
+		seq_printf(filp, "sc%d: Pkts protected: %lld\n", sc_id, stats.pkt_protected_cnt);
+
+		if (mcs->hw->mcs_blks == 1) {
+			seq_printf(filp, "sc%d: Octets encrypted: %lld\n", sc_id,
+				   stats.octet_encrypt_cnt);
+			seq_printf(filp, "sc%d: Octets protected: %lld\n", sc_id,
+				   stats.octet_protected_cnt);
+		}
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_tx_sc_stats, mcs_tx_sc_stats_display, NULL);
+
+static int rvu_dbg_mcs_rx_sc_stats_display(struct seq_file *filp, void *unused)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_sc_stats stats;
+	struct rsrc_bmap *map;
+	int sc_id;
+
+	map = &mcs->rx.sc;
+	seq_puts(filp, "\n SC stats\n");
+
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(sc_id, map->bmap, mcs->hw->sc_entries) {
+		mcs_get_sc_stats(mcs, &stats, sc_id, MCS_RX);
+		seq_printf(filp, "\n=======sc%d======\n\n", sc_id);
+		seq_printf(filp, "sc%d: Cam hits: %lld\n", sc_id, stats.hit_cnt);
+		seq_printf(filp, "sc%d: Invalid pkts: %lld\n", sc_id, stats.pkt_invalid_cnt);
+		seq_printf(filp, "sc%d: Late pkts: %lld\n", sc_id, stats.pkt_late_cnt);
+		seq_printf(filp, "sc%d: Notvalid pkts: %lld\n", sc_id, stats.pkt_notvalid_cnt);
+		seq_printf(filp, "sc%d: Unchecked pkts: %lld\n", sc_id, stats.pkt_unchecked_cnt);
+
+		if (mcs->hw->mcs_blks > 1) {
+			seq_printf(filp, "sc%d: Delay pkts: %lld\n", sc_id, stats.pkt_delay_cnt);
+			seq_printf(filp, "sc%d: Pkts ok: %lld\n", sc_id, stats.pkt_ok_cnt);
+		}
+		if (mcs->hw->mcs_blks == 1) {
+			seq_printf(filp, "sc%d: Octets decrypted: %lld\n", sc_id,
+				   stats.octet_decrypt_cnt);
+			seq_printf(filp, "sc%d: Octets validated: %lld\n", sc_id,
+				   stats.octet_validate_cnt);
+		}
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_rx_sc_stats, mcs_rx_sc_stats_display, NULL);
+
+static int rvu_dbg_mcs_flowid_stats_display(struct seq_file *filp, void *unused, int dir)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_flowid_stats stats;
+	struct rsrc_bmap *map;
+	int flow_id;
+
+	seq_puts(filp, "\n Flowid stats\n");
+
+	if (dir == MCS_RX)
+		map = &mcs->rx.flow_ids;
+	else
+		map = &mcs->tx.flow_ids;
+
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(flow_id, map->bmap, mcs->hw->tcam_entries) {
+		mcs_get_flowid_stats(mcs, &stats, flow_id, dir);
+		seq_printf(filp, "Flowid%d: Hit:%lld\n", flow_id, stats.tcam_hit_cnt);
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+static int rvu_dbg_mcs_tx_flowid_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_flowid_stats_display(filp, unused, MCS_TX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_tx_flowid_stats, mcs_tx_flowid_stats_display, NULL);
+
+static int rvu_dbg_mcs_rx_flowid_stats_display(struct seq_file *filp, void *unused)
+{
+	return rvu_dbg_mcs_flowid_stats_display(filp, unused, MCS_RX);
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_rx_flowid_stats, mcs_rx_flowid_stats_display, NULL);
+
+static int rvu_dbg_mcs_tx_secy_stats_display(struct seq_file *filp, void *unused)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_secy_stats stats;
+	struct rsrc_bmap *map;
+	int secy_id;
+
+	map = &mcs->tx.secy;
+	seq_puts(filp, "\n MCS TX secy stats\n");
+
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(secy_id, map->bmap, mcs->hw->secy_entries) {
+		mcs_get_tx_secy_stats(mcs, &stats, secy_id);
+		seq_printf(filp, "\n=======Secy%d======\n\n", secy_id);
+		seq_printf(filp, "secy%d: Ctrl bcast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_bcast_cnt);
+		seq_printf(filp, "secy%d: Ctrl Mcast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_mcast_cnt);
+		seq_printf(filp, "secy%d: Ctrl ucast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_ucast_cnt);
+		seq_printf(filp, "secy%d: Ctrl octets: %lld\n", secy_id, stats.ctl_octet_cnt);
+		seq_printf(filp, "secy%d: Unctrl bcast cnt: %lld\n", secy_id,
+			   stats.unctl_pkt_bcast_cnt);
+		seq_printf(filp, "secy%d: Unctrl mcast pkts: %lld\n", secy_id,
+			   stats.unctl_pkt_mcast_cnt);
+		seq_printf(filp, "secy%d: Unctrl ucast pkts: %lld\n", secy_id,
+			   stats.unctl_pkt_ucast_cnt);
+		seq_printf(filp, "secy%d: Unctrl octets: %lld\n", secy_id, stats.unctl_octet_cnt);
+		seq_printf(filp, "secy%d: Octets encrypted: %lld\n", secy_id,
+			   stats.octet_encrypted_cnt);
+		seq_printf(filp, "secy%d: Octets protected: %lld\n", secy_id,
+			   stats.octet_protected_cnt);
+		seq_printf(filp, "secy%d: Pkts with no active SA: %lld\n", secy_id,
+			   stats.pkt_noactivesa_cnt);
+		seq_printf(filp, "secy%d: Pkts too long: %lld\n", secy_id, stats.pkt_toolong_cnt);
+		seq_printf(filp, "secy%d: Pkts untagged: %lld\n", secy_id, stats.pkt_untagged_cnt);
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_tx_secy_stats, mcs_tx_secy_stats_display, NULL);
+
+static int rvu_dbg_mcs_rx_secy_stats_display(struct seq_file *filp, void *unused)
+{
+	struct mcs *mcs = filp->private;
+	struct mcs_secy_stats stats;
+	struct rsrc_bmap *map;
+	int secy_id;
+
+	map = &mcs->rx.secy;
+	seq_puts(filp, "\n MCS RX secy stats\n");
+
+	mutex_lock(&mcs->stats_lock);
+	for_each_set_bit(secy_id, map->bmap, mcs->hw->secy_entries) {
+		mcs_get_rx_secy_stats(mcs, &stats, secy_id);
+		seq_printf(filp, "\n=======Secy%d======\n\n", secy_id);
+		seq_printf(filp, "secy%d: Ctrl bcast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_bcast_cnt);
+		seq_printf(filp, "secy%d: Ctrl Mcast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_mcast_cnt);
+		seq_printf(filp, "secy%d: Ctrl ucast pkts: %lld\n", secy_id,
+			   stats.ctl_pkt_ucast_cnt);
+		seq_printf(filp, "secy%d: Ctrl octets: %lld\n", secy_id, stats.ctl_octet_cnt);
+		seq_printf(filp, "secy%d: Unctrl bcast cnt: %lld\n", secy_id,
+			   stats.unctl_pkt_bcast_cnt);
+		seq_printf(filp, "secy%d: Unctrl mcast pkts: %lld\n", secy_id,
+			   stats.unctl_pkt_mcast_cnt);
+		seq_printf(filp, "secy%d: Unctrl ucast pkts: %lld\n", secy_id,
+			   stats.unctl_pkt_ucast_cnt);
+		seq_printf(filp, "secy%d: Unctrl octets: %lld\n", secy_id, stats.unctl_octet_cnt);
+		seq_printf(filp, "secy%d: Octets decrypted: %lld\n", secy_id,
+			   stats.octet_decrypted_cnt);
+		seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id,
+			   stats.octet_validated_cnt);
+		seq_printf(filp, "secy%d: Pkts on disabled port: %lld\n", secy_id,
+			   stats.pkt_port_disabled_cnt);
+		seq_printf(filp, "secy%d: Pkts with bad tag: %lld\n", secy_id, stats.pkt_badtag_cnt);
+		seq_printf(filp, "secy%d: Pkts with no SA: %lld\n", secy_id, stats.pkt_nosa_cnt);
+		seq_printf(filp, "secy%d: Pkts with nosaerror: %lld\n", secy_id,
+			   stats.pkt_nosaerror_cnt);
+		seq_printf(filp, "secy%d: Tagged ctrl pkts: %lld\n", secy_id,
+			   stats.pkt_tagged_ctl_cnt);
+		seq_printf(filp, "secy%d: Untagged pkts: %lld\n", secy_id, stats.pkt_untaged_cnt);
+		seq_printf(filp, "secy%d: Ctrl pkts: %lld\n", secy_id, stats.pkt_ctl_cnt);
+		if (mcs->hw->mcs_blks > 1)
+			seq_printf(filp, "secy%d: pkts notag: %lld\n", secy_id,
+				   stats.pkt_notag_cnt);
+	}
+	mutex_unlock(&mcs->stats_lock);
+	return 0;
+}
+
+RVU_DEBUG_SEQ_FOPS(mcs_rx_secy_stats, mcs_rx_secy_stats_display, NULL);
+
+static void rvu_dbg_mcs_init(struct rvu *rvu)
+{
+	struct mcs *mcs;
+	char dname[10];
+	int i;
+
+	if (!rvu->mcs_blk_cnt)
+		return;
+
+	rvu->rvu_dbg.mcs_root = debugfs_create_dir("mcs", rvu->rvu_dbg.root);
+
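+	/* Expose per-block stats under mcs/mcs<N>/{rx_stats,tx_stats} */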
+	for (i = 0; i < rvu->mcs_blk_cnt; i++) {
+		mcs = mcs_get_pdata(i);
+
+		sprintf(dname, "mcs%d", i);
+		rvu->rvu_dbg.mcs = debugfs_create_dir(dname,
+						      rvu->rvu_dbg.mcs_root);
+
+		rvu->rvu_dbg.mcs_rx = debugfs_create_dir("rx_stats", rvu->rvu_dbg.mcs);
+
+		debugfs_create_file("flowid", 0600, rvu->rvu_dbg.mcs_rx, mcs,
+				    &rvu_dbg_mcs_rx_flowid_stats_fops);
+
+		debugfs_create_file("secy", 0600, rvu->rvu_dbg.mcs_rx, mcs,
+				    &rvu_dbg_mcs_rx_secy_stats_fops);
+
+		debugfs_create_file("sc", 0600, rvu->rvu_dbg.mcs_rx, mcs,
+				    &rvu_dbg_mcs_rx_sc_stats_fops);
+
+		debugfs_create_file("sa", 0600, rvu->rvu_dbg.mcs_rx, mcs,
+				    &rvu_dbg_mcs_rx_sa_stats_fops);
+
+		debugfs_create_file("port", 0600, rvu->rvu_dbg.mcs_rx, mcs,
+				    &rvu_dbg_mcs_rx_port_stats_fops);
+
+		rvu->rvu_dbg.mcs_tx = debugfs_create_dir("tx_stats", rvu->rvu_dbg.mcs);
+
+		debugfs_create_file("flowid", 0600, rvu->rvu_dbg.mcs_tx, mcs,
+				    &rvu_dbg_mcs_tx_flowid_stats_fops);
+
+		debugfs_create_file("secy", 0600, rvu->rvu_dbg.mcs_tx, mcs,
+				    &rvu_dbg_mcs_tx_secy_stats_fops);
+
+		debugfs_create_file("sc", 0600, rvu->rvu_dbg.mcs_tx, mcs,
+				    &rvu_dbg_mcs_tx_sc_stats_fops);
+
+		debugfs_create_file("sa", 0600, rvu->rvu_dbg.mcs_tx, mcs,
+				    &rvu_dbg_mcs_tx_sa_stats_fops);
+
+		debugfs_create_file("port", 0600, rvu->rvu_dbg.mcs_tx, mcs,
+				    &rvu_dbg_mcs_tx_port_stats_fops);
+	}
+}
+
 #define LMT_MAPTBL_ENTRY_SIZE 16
 /* Dump LMTST map table */
 static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp,
@@ -3053,6 +3398,7 @@ void rvu_dbg_init(struct rvu *rvu)
 	rvu_dbg_npc_init(rvu);
 	rvu_dbg_cpt_init(rvu, BLKADDR_CPT0);
 	rvu_dbg_cpt_init(rvu, BLKADDR_CPT1);
+	rvu_dbg_mcs_init(rvu);
 }
 
 void rvu_dbg_exit(struct rvu *rvu)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [net-next PATCH v2 8/8] octeontx2-pf: mcs: Introduce MACSEC hardware offloading
  2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
                   ` (6 preceding siblings ...)
  2022-09-28  3:58 ` [net-next PATCH v2 7/8] octeontx2-af: cn10k: mcs: Add debugfs support Subbaraya Sundeep
@ 2022-09-28  3:58 ` Subbaraya Sundeep
  7 siblings, 0 replies; 10+ messages in thread
From: Subbaraya Sundeep @ 2022-09-28  3:58 UTC (permalink / raw)
  To: davem, kuba, edumazet, pabeni, netdev
  Cc: sgoutham, naveenm, Subbaraya Sundeep

This patch introduces the macsec offload feature in the cn10k
PF netdev driver. The macsec offload ops for adding, deleting
and updating SecYs, SCs, SAs and stats are supported. XPN support
will be added in later patches. Some stats share the same counter
in hardware, which means that, depending on the SecY mode, the same
counter represents a different stat. Hence, when the SecY mode or
policy is changed, a snapshot of the current stats is captured.
Also, since there is no provision to specify a unique flow-id/SCI
per packet to the hardware, a different MAC address needs to be set
for each macsec interface.
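
For reference, an illustrative way to exercise the offload with
iproute2 is shown below (device names, SCIs and keys here are
placeholders, not defaults):

  ip link add link eth0 macsec0 type macsec encrypt on
  ip macsec offload macsec0 mac
  ip macsec add macsec0 tx sa 0 pn 1 on key 01 <128-bit hex key>
  ip macsec add macsec0 rx sci <peer sci> on
  ip macsec add macsec0 rx sci <peer sci> sa 0 pn 1 on key 02 <128-bit hex key>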

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 .../net/ethernet/marvell/octeontx2/nic/Makefile    |    1 +
 .../ethernet/marvell/octeontx2/nic/cn10k_macsec.c  | 1668 ++++++++++++++++++++
 .../ethernet/marvell/octeontx2/nic/otx2_common.c   |    1 +
 .../ethernet/marvell/octeontx2/nic/otx2_common.h   |   90 ++
 .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c   |   16 +
 5 files changed, 1776 insertions(+)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
index d463dc72..73fdb87 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
@@ -13,5 +13,6 @@ rvu_nicvf-y := otx2_vf.o otx2_devlink.o
 
 rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
 rvu_nicvf-$(CONFIG_DCB) += otx2_dcbnl.o
+rvu_nicpf-$(CONFIG_MACSEC) += cn10k_macsec.o
 
 ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
new file mode 100644
index 0000000..6cc6af9
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
@@ -0,0 +1,1668 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell MACSEC hardware offload driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#include <linux/rtnetlink.h>
+#include <linux/bitfield.h>
+#include <net/macsec.h>
+#include "otx2_common.h"
+
+#define MCS_TCAM0_MAC_SA_MASK		GENMASK_ULL(63, 48)
+#define MCS_TCAM1_MAC_SA_MASK		GENMASK_ULL(31, 0)
+#define MCS_TCAM1_ETYPE_MASK		GENMASK_ULL(47, 32)
+
+#define MCS_SA_MAP_MEM_SA_USE		BIT_ULL(9)
+
+#define MCS_RX_SECY_PLCY_RW_MASK	GENMASK_ULL(49, 18)
+#define MCS_RX_SECY_PLCY_RP		BIT_ULL(17)
+#define MCS_RX_SECY_PLCY_AUTH_ENA	BIT_ULL(16)
+#define MCS_RX_SECY_PLCY_CIP		GENMASK_ULL(8, 5)
+#define MCS_RX_SECY_PLCY_VAL		GENMASK_ULL(2, 1)
+#define MCS_RX_SECY_PLCY_ENA		BIT_ULL(0)
+
+#define MCS_TX_SECY_PLCY_MTU		GENMASK_ULL(43, 28)
+#define MCS_TX_SECY_PLCY_ST_TCI		GENMASK_ULL(27, 22)
+#define MCS_TX_SECY_PLCY_ST_OFFSET	GENMASK_ULL(21, 15)
+#define MCS_TX_SECY_PLCY_INS_MODE	BIT_ULL(14)
+#define MCS_TX_SECY_PLCY_AUTH_ENA	BIT_ULL(13)
+#define MCS_TX_SECY_PLCY_CIP		GENMASK_ULL(5, 2)
+#define MCS_TX_SECY_PLCY_PROTECT	BIT_ULL(1)
+#define MCS_TX_SECY_PLCY_ENA		BIT_ULL(0)
+
+#define MCS_GCM_AES_128			0
+#define MCS_GCM_AES_256			1
+#define MCS_GCM_AES_XPN_128		2
+#define MCS_GCM_AES_XPN_256		3
+
+#define MCS_TCI_ES			0x40 /* end station */
+#define MCS_TCI_SC			0x20 /* SCI present */
+#define MCS_TCI_SCB			0x10 /* epon */
+#define MCS_TCI_E			0x08 /* encryption */
+#define MCS_TCI_C			0x04 /* changed text */
+
+static struct cn10k_mcs_txsc *cn10k_mcs_get_txsc(struct cn10k_mcs_cfg *cfg,
+						 struct macsec_secy *secy)
+{
+	struct cn10k_mcs_txsc *txsc;
+
+	list_for_each_entry(txsc, &cfg->txsc_list, entry) {
+		if (txsc->sw_secy == secy)
+			return txsc;
+	}
+
+	return NULL;
+}
+
+static struct cn10k_mcs_rxsc *cn10k_mcs_get_rxsc(struct cn10k_mcs_cfg *cfg,
+						 struct macsec_secy *secy,
+						 struct macsec_rx_sc *rx_sc)
+{
+	struct cn10k_mcs_rxsc *rxsc;
+
+	list_for_each_entry(rxsc, &cfg->rxsc_list, entry) {
+		if (rxsc->sw_rxsc == rx_sc && rxsc->sw_secy == secy)
+			return rxsc;
+	}
+
+	return NULL;
+}
+
+static const char *rsrc_name(enum mcs_rsrc_type rsrc_type)
+{
+	switch (rsrc_type) {
+	case MCS_RSRC_TYPE_FLOWID:
+		return "FLOW";
+	case MCS_RSRC_TYPE_SC:
+		return "SC";
+	case MCS_RSRC_TYPE_SECY:
+		return "SECY";
+	case MCS_RSRC_TYPE_SA:
+		return "SA";
+	default:
+		return "Unknown";
+	}
+}
+
+static int cn10k_mcs_alloc_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir,
+				enum mcs_rsrc_type type, u16 *rsrc_id)
+{
+	struct mbox *mbox = &pfvf->mbox;
+	struct mcs_alloc_rsrc_req *req;
+	struct mcs_alloc_rsrc_rsp *rsp;
+	int ret = -ENOMEM;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_alloc_resources(mbox);
+	if (!req)
+		goto fail;
+
+	req->rsrc_type = type;
+	req->rsrc_cnt  = 1;
+	req->dir = dir;
+
+	ret = otx2_sync_mbox_msg(mbox);
+	if (ret)
+		goto fail;
+
+	rsp = (struct mcs_alloc_rsrc_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+							     0, &req->hdr);
+	if (IS_ERR(rsp) || req->rsrc_cnt != rsp->rsrc_cnt ||
+	    req->rsrc_type != rsp->rsrc_type || req->dir != rsp->dir) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	switch (rsp->rsrc_type) {
+	case MCS_RSRC_TYPE_FLOWID:
+		*rsrc_id = rsp->flow_ids[0];
+		break;
+	case MCS_RSRC_TYPE_SC:
+		*rsrc_id = rsp->sc_ids[0];
+		break;
+	case MCS_RSRC_TYPE_SECY:
+		*rsrc_id = rsp->secy_ids[0];
+		break;
+	case MCS_RSRC_TYPE_SA:
+		*rsrc_id = rsp->sa_ids[0];
+		break;
+	default:
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	mutex_unlock(&mbox->lock);
+
+	return 0;
+fail:
+	dev_err(pfvf->dev, "Failed to allocate %s %s resource\n",
+		dir == MCS_TX ? "TX" : "RX", rsrc_name(type));
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static void cn10k_mcs_free_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir,
+				enum mcs_rsrc_type type, u16 hw_rsrc_id,
+				bool all)
+{
+	struct mbox *mbox = &pfvf->mbox;
+	struct mcs_free_rsrc_req *req;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_free_resources(mbox);
+	if (!req)
+		goto fail;
+
+	req->rsrc_id = hw_rsrc_id;
+	req->rsrc_type = type;
+	req->dir = dir;
+	if (all)
+		req->all = 1;
+
+	if (otx2_sync_mbox_msg(&pfvf->mbox))
+		goto fail;
+
+	mutex_unlock(&mbox->lock);
+
+	return;
+fail:
+	dev_err(pfvf->dev, "Failed to free %s %s resource\n",
+		dir == MCS_TX ? "TX" : "RX", rsrc_name(type));
+	mutex_unlock(&mbox->lock);
+}
+
+static int cn10k_mcs_alloc_txsa(struct otx2_nic *pfvf, u16 *hw_sa_id)
+{
+	return cn10k_mcs_alloc_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SA, hw_sa_id);
+}
+
+static int cn10k_mcs_alloc_rxsa(struct otx2_nic *pfvf, u16 *hw_sa_id)
+{
+	return cn10k_mcs_alloc_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SA, hw_sa_id);
+}
+
+static void cn10k_mcs_free_txsa(struct otx2_nic *pfvf, u16 hw_sa_id)
+{
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SA, hw_sa_id, false);
+}
+
+static void cn10k_mcs_free_rxsa(struct otx2_nic *pfvf, u16 hw_sa_id)
+{
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SA, hw_sa_id, false);
+}
+
+static int cn10k_mcs_write_rx_secy(struct otx2_nic *pfvf,
+				   struct macsec_secy *secy, u8 hw_secy_id)
+{
+	struct mcs_secy_plcy_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	u64 policy;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_secy_plcy_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	policy = FIELD_PREP(MCS_RX_SECY_PLCY_RW_MASK, secy->replay_window);
+	if (secy->replay_protect)
+		policy |= MCS_RX_SECY_PLCY_RP;
+
+	policy |= MCS_RX_SECY_PLCY_AUTH_ENA;
+	policy |= FIELD_PREP(MCS_RX_SECY_PLCY_CIP, MCS_GCM_AES_128);
+	policy |= FIELD_PREP(MCS_RX_SECY_PLCY_VAL, secy->validate_frames);
+
+	policy |= MCS_RX_SECY_PLCY_ENA;
+
+	req->plcy = policy;
+	req->secy_id = hw_secy_id;
+	req->dir = MCS_RX;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf,
+				     struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id)
+{
+	struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc;
+	struct mcs_flowid_entry_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_flowid_entry_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
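+	/* Un-mask only the ethertype field so that the entry matches
+	 * MACsec (ETH_P_MACSEC) traffic alone.
+	 */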
+	req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC);
+	req->mask[1] = ~0ULL;
+	req->mask[1] &= ~MCS_TCAM1_ETYPE_MASK;
+
+	req->mask[0] = ~0ULL;
+	req->mask[2] = ~0ULL;
+	req->mask[3] = ~0ULL;
+
+	req->flow_id = rxsc->hw_flow_id;
+	req->secy_id = hw_secy_id;
+	req->sc_id = rxsc->hw_sc_id;
+	req->dir = MCS_RX;
+
+	if (sw_rx_sc->active)
+		req->ena = 1;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_sc_cam(struct otx2_nic *pfvf,
+				  struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id)
+{
+	struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc;
+	struct mcs_rx_sc_cam_write_req *sc_req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	sc_req = otx2_mbox_alloc_msg_mcs_rx_sc_cam_write(mbox);
+	if (!sc_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	sc_req->sci = cpu_to_be64(sw_rx_sc->sci);
+	sc_req->sc_id = rxsc->hw_sc_id;
+	sc_req->secy_id = hw_secy_id;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_rx_sa_plcy(struct otx2_nic *pfvf,
+				      struct macsec_secy *secy,
+				      struct cn10k_mcs_rxsc *rxsc,
+				      u8 assoc_num, bool sa_in_use)
+{
+	unsigned char *src = rxsc->sa_key[assoc_num];
+	struct mcs_sa_plcy_write_req *plcy_req;
+	struct mcs_rx_sc_sa_map *map_req;
+	struct mbox *mbox = &pfvf->mbox;
+	u8 reg, key_len;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	plcy_req = otx2_mbox_alloc_msg_mcs_sa_plcy_write(mbox);
+	if (!plcy_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	map_req = otx2_mbox_alloc_msg_mcs_rx_sc_sa_map_write(mbox);
+	if (!map_req) {
+		otx2_mbox_reset(&mbox->mbox, 0);
+		ret = -ENOMEM;
+		goto fail;
+	}
+
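+	/* Copy the SA key into the policy words, 8 bytes per iteration */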
+	for (reg = 0, key_len = 0; key_len < secy->key_len; key_len += 8) {
+		memcpy((u8 *)&plcy_req->plcy[0][reg],
+		       (src + reg * 8), 8);
+		reg++;
+	}
+
+	plcy_req->sa_index[0] = rxsc->hw_sa_id[assoc_num];
+	plcy_req->sa_cnt = 1;
+	plcy_req->dir = MCS_RX;
+
+	map_req->sa_index = rxsc->hw_sa_id[assoc_num];
+	map_req->sa_in_use = sa_in_use;
+	map_req->sc_id = rxsc->hw_sc_id;
+	map_req->an = assoc_num;
+
+	/* Send two messages together */
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_rx_sa_pn(struct otx2_nic *pfvf,
+				    struct cn10k_mcs_rxsc *rxsc,
+				    u8 assoc_num, u64 next_pn)
+{
+	struct mcs_pn_table_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_pn_table_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->pn_id = rxsc->hw_sa_id[assoc_num];
+	req->next_pn = next_pn;
+	req->dir = MCS_RX;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_tx_secy(struct otx2_nic *pfvf,
+				   struct macsec_secy *secy,
+				   struct cn10k_mcs_txsc *txsc)
+{
+	struct mcs_secy_plcy_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	struct macsec_tx_sc *sw_tx_sc;
+	/* Insert SecTag after 12 bytes (DA + SA) */
+	u8 tag_offset = 12;
+	u8 sectag_tci = 0;
+	u64 policy;
+	int ret;
+
+	sw_tx_sc = &secy->tx_sc;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_secy_plcy_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (sw_tx_sc->send_sci) {
+		sectag_tci |= MCS_TCI_SC;
+	} else {
+		if (sw_tx_sc->end_station)
+			sectag_tci |= MCS_TCI_ES;
+		if (sw_tx_sc->scb)
+			sectag_tci |= MCS_TCI_SCB;
+	}
+
+	if (sw_tx_sc->encrypt)
+		sectag_tci |= (MCS_TCI_E | MCS_TCI_C);
+
+	policy = FIELD_PREP(MCS_TX_SECY_PLCY_MTU, secy->netdev->mtu);
+	/* Write SecTag excluding AN bits(1..0) */
+	policy |= FIELD_PREP(MCS_TX_SECY_PLCY_ST_TCI, sectag_tci >> 2);
+	policy |= FIELD_PREP(MCS_TX_SECY_PLCY_ST_OFFSET, tag_offset);
+	policy |= MCS_TX_SECY_PLCY_INS_MODE;
+	policy |= MCS_TX_SECY_PLCY_AUTH_ENA;
+	policy |= FIELD_PREP(MCS_TX_SECY_PLCY_CIP, MCS_GCM_AES_128);
+
+	if (secy->protect_frames)
+		policy |= MCS_TX_SECY_PLCY_PROTECT;
+
+	/* If the encoding SA does not exist or is not active, and protect
+	 * is not set, then frames can be sent out as is. Hence enable the
+	 * policy irrespective of secy->operational when !protect.
+	 */
+	if (!secy->protect_frames || secy->operational)
+		policy |= MCS_TX_SECY_PLCY_ENA;
+
+	req->plcy = policy;
+	req->secy_id = txsc->hw_secy_id_tx;
+	req->dir = MCS_TX;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_tx_flowid(struct otx2_nic *pfvf,
+				     struct macsec_secy *secy,
+				     struct cn10k_mcs_txsc *txsc)
+{
+	struct mcs_flowid_entry_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	u64 mac_sa;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_flowid_entry_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	mac_sa = ether_addr_to_u64(secy->netdev->dev_addr);
+
+	req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_SA_MASK, mac_sa);
+	req->data[1] = FIELD_PREP(MCS_TCAM1_MAC_SA_MASK, mac_sa >> 16);
+
+	req->mask[0] = ~0ULL;
+	req->mask[0] &= ~MCS_TCAM0_MAC_SA_MASK;
+
+	req->mask[1] = ~0ULL;
+	req->mask[1] &= ~MCS_TCAM1_MAC_SA_MASK;
+
+	req->mask[2] = ~0ULL;
+	req->mask[3] = ~0ULL;
+
+	req->flow_id = txsc->hw_flow_id;
+	req->secy_id = txsc->hw_secy_id_tx;
+	req->sc_id = txsc->hw_sc_id;
+	req->sci = cpu_to_be64(secy->sci);
+	req->dir = MCS_TX;
+	/* This can be enabled since stack xmits packets only when interface is up */
+	req->ena = 1;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_link_tx_sa2sc(struct otx2_nic *pfvf,
+				   struct macsec_secy *secy,
+				   struct cn10k_mcs_txsc *txsc,
+				   u8 sa_num, bool sa_active)
+{
+	struct mcs_tx_sc_sa_map *map_req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	/* Link the encoding_sa only to SC out of all SAs */
+	if (txsc->encoding_sa != sa_num)
+		return 0;
+
+	mutex_lock(&mbox->lock);
+
+	map_req = otx2_mbox_alloc_msg_mcs_tx_sc_sa_map_write(mbox);
+	if (!map_req) {
+		otx2_mbox_reset(&mbox->mbox, 0);
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	map_req->sa_index0 = txsc->hw_sa_id[sa_num];
+	map_req->sa_index0_vld = sa_active;
+	map_req->sectag_sci = cpu_to_be64(secy->sci);
+	map_req->sc_id = txsc->hw_sc_id;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_write_tx_sa_plcy(struct otx2_nic *pfvf,
+				      struct macsec_secy *secy,
+				      struct cn10k_mcs_txsc *txsc,
+				      u8 assoc_num)
+{
+	unsigned char *src = txsc->sa_key[assoc_num];
+	struct mcs_sa_plcy_write_req *plcy_req;
+	struct mbox *mbox = &pfvf->mbox;
+	u8 reg, key_len;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	plcy_req = otx2_mbox_alloc_msg_mcs_sa_plcy_write(mbox);
+	if (!plcy_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
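+	/* Copy the SA key into the policy words, 8 bytes per iteration */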
+	for (reg = 0, key_len = 0; key_len < secy->key_len; key_len += 8) {
+		memcpy((u8 *)&plcy_req->plcy[0][reg], (src + reg * 8), 8);
+		reg++;
+	}
+
+	plcy_req->plcy[0][8] = assoc_num;
+	plcy_req->sa_index[0] = txsc->hw_sa_id[assoc_num];
+	plcy_req->sa_cnt = 1;
+	plcy_req->dir = MCS_TX;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_write_tx_sa_pn(struct otx2_nic *pfvf,
+				struct cn10k_mcs_txsc *txsc,
+				u8 assoc_num, u64 next_pn)
+{
+	struct mcs_pn_table_write_req *req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_pn_table_write(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->pn_id = txsc->hw_sa_id[assoc_num];
+	req->next_pn = next_pn;
+	req->dir = MCS_TX;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_ena_dis_flowid(struct otx2_nic *pfvf, u16 hw_flow_id,
+				    bool enable, enum mcs_direction dir)
+{
+	struct mcs_flowid_ena_dis_entry *req;
+	struct mbox *mbox = &pfvf->mbox;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_flowid_ena_entry(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->flow_id = hw_flow_id;
+	req->ena = enable;
+	req->dir = dir;
+
+	ret = otx2_sync_mbox_msg(mbox);
+
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_sa_stats(struct otx2_nic *pfvf, u8 hw_sa_id,
+			      struct mcs_sa_stats *rsp_p,
+			      enum mcs_direction dir, bool clear)
+{
+	struct mcs_clear_stats *clear_req;
+	struct mbox *mbox = &pfvf->mbox;
+	struct mcs_stats_req *req;
+	struct mcs_sa_stats *rsp;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_get_sa_stats(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->id = hw_sa_id;
+	req->dir = dir;
+
+	if (!clear)
+		goto send_msg;
+
+	clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox);
+	if (!clear_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+	clear_req->id = hw_sa_id;
+	clear_req->dir = dir;
+	clear_req->type = MCS_RSRC_TYPE_SA;
+
+send_msg:
+	ret = otx2_sync_mbox_msg(mbox);
+	if (ret)
+		goto fail;
+
+	rsp = (struct mcs_sa_stats *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+						       0, &req->hdr);
+	if (IS_ERR(rsp)) {
+		ret = PTR_ERR(rsp);
+		goto fail;
+	}
+
+	memcpy(rsp_p, rsp, sizeof(*rsp_p));
+
+	mutex_unlock(&mbox->lock);
+
+	return 0;
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_sc_stats(struct otx2_nic *pfvf, u8 hw_sc_id,
+			      struct mcs_sc_stats *rsp_p,
+			      enum mcs_direction dir, bool clear)
+{
+	struct mcs_clear_stats *clear_req;
+	struct mbox *mbox = &pfvf->mbox;
+	struct mcs_stats_req *req;
+	struct mcs_sc_stats *rsp;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_get_sc_stats(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->id = hw_sc_id;
+	req->dir = dir;
+
+	if (!clear)
+		goto send_msg;
+
+	clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox);
+	if (!clear_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+	clear_req->id = hw_sc_id;
+	clear_req->dir = dir;
+	clear_req->type = MCS_RSRC_TYPE_SC;
+
+send_msg:
+	ret = otx2_sync_mbox_msg(mbox);
+	if (ret)
+		goto fail;
+
+	rsp = (struct mcs_sc_stats *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+						       0, &req->hdr);
+	if (IS_ERR(rsp)) {
+		ret = PTR_ERR(rsp);
+		goto fail;
+	}
+
+	memcpy(rsp_p, rsp, sizeof(*rsp_p));
+
+	mutex_unlock(&mbox->lock);
+
+	return 0;
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static int cn10k_mcs_secy_stats(struct otx2_nic *pfvf, u8 hw_secy_id,
+				struct mcs_secy_stats *rsp_p,
+				enum mcs_direction dir, bool clear)
+{
+	struct mcs_clear_stats *clear_req;
+	struct mbox *mbox = &pfvf->mbox;
+	struct mcs_secy_stats *rsp;
+	struct mcs_stats_req *req;
+	int ret;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_get_secy_stats(mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	req->id = hw_secy_id;
+	req->dir = dir;
+
+	if (!clear)
+		goto send_msg;
+
+	clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox);
+	if (!clear_req) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+	clear_req->id = hw_secy_id;
+	clear_req->dir = dir;
+	clear_req->type = MCS_RSRC_TYPE_SECY;
+
+send_msg:
+	ret = otx2_sync_mbox_msg(mbox);
+	if (ret)
+		goto fail;
+
+	rsp = (struct mcs_secy_stats *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+							 0, &req->hdr);
+	if (IS_ERR(rsp)) {
+		ret = PTR_ERR(rsp);
+		goto fail;
+	}
+
+	memcpy(rsp_p, rsp, sizeof(*rsp_p));
+
+	mutex_unlock(&mbox->lock);
+
+	return 0;
+fail:
+	mutex_unlock(&mbox->lock);
+	return ret;
+}
+
+static struct cn10k_mcs_txsc *cn10k_mcs_create_txsc(struct otx2_nic *pfvf)
+{
+	struct cn10k_mcs_txsc *txsc;
+	int ret;
+
+	txsc = kzalloc(sizeof(*txsc), GFP_KERNEL);
+	if (!txsc)
+		return ERR_PTR(-ENOMEM);
+
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
+				   &txsc->hw_flow_id);
+	if (ret)
+		goto fail;
+
+	/* For each SecY, one TX SecY and one RX SecY HW resource is needed */
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SECY,
+				   &txsc->hw_secy_id_tx);
+	if (ret)
+		goto free_flowid;
+
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SECY,
+				   &txsc->hw_secy_id_rx);
+	if (ret)
+		goto free_tx_secy;
+
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SC,
+				   &txsc->hw_sc_id);
+	if (ret)
+		goto free_rx_secy;
+
+	return txsc;
+free_rx_secy:
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SECY,
+			    txsc->hw_secy_id_rx, false);
+free_tx_secy:
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SECY,
+			    txsc->hw_secy_id_tx, false);
+free_flowid:
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
+			    txsc->hw_flow_id, false);
+fail:
+	return ERR_PTR(ret);
+}
+
+/* Free the Tx SC and its SA resources (if any) back to the AF */
+static void cn10k_mcs_delete_txsc(struct otx2_nic *pfvf,
+				  struct cn10k_mcs_txsc *txsc)
+{
+	u8 sa_bmap = txsc->sa_bmap;
+	u8 sa_num = 0;
+
+	while (sa_bmap) {
+		if (sa_bmap & 1) {
+			cn10k_mcs_write_tx_sa_plcy(pfvf, txsc->sw_secy,
+						   txsc, sa_num);
+			cn10k_mcs_free_txsa(pfvf, txsc->hw_sa_id[sa_num]);
+		}
+		sa_num++;
+		sa_bmap >>= 1;
+	}
+
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SC,
+			    txsc->hw_sc_id, false);
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SECY,
+			    txsc->hw_secy_id_rx, false);
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SECY,
+			    txsc->hw_secy_id_tx, false);
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
+			    txsc->hw_flow_id, false);
+}
+
+static struct cn10k_mcs_rxsc *cn10k_mcs_create_rxsc(struct otx2_nic *pfvf)
+{
+	struct cn10k_mcs_rxsc *rxsc;
+	int ret;
+
+	rxsc = kzalloc(sizeof(*rxsc), GFP_KERNEL);
+	if (!rxsc)
+		return ERR_PTR(-ENOMEM);
+
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
+				   &rxsc->hw_flow_id);
+	if (ret)
+		goto fail;
+
+	ret = cn10k_mcs_alloc_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SC,
+				   &rxsc->hw_sc_id);
+	if (ret)
+		goto free_flowid;
+
+	return rxsc;
+free_flowid:
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
+			    rxsc->hw_flow_id, false);
+fail:
+	return ERR_PTR(ret);
+}
+
+/* Free the Rx SC and its SA resources (if any) back to the AF */
+static void cn10k_mcs_delete_rxsc(struct otx2_nic *pfvf,
+				  struct cn10k_mcs_rxsc *rxsc)
+{
+	u8 sa_bmap = rxsc->sa_bmap;
+	u8 sa_num = 0;
+
+	while (sa_bmap) {
+		if (sa_bmap & 1) {
+			cn10k_mcs_write_rx_sa_plcy(pfvf, rxsc->sw_secy, rxsc,
+						   sa_num, false);
+			cn10k_mcs_free_rxsa(pfvf, rxsc->hw_sa_id[sa_num]);
+		}
+		sa_num++;
+		sa_bmap >>= 1;
+	}
+
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SC,
+			    rxsc->hw_sc_id, false);
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
+			    rxsc->hw_flow_id, false);
+}
+
+static int cn10k_mcs_secy_tx_cfg(struct otx2_nic *pfvf, struct macsec_secy *secy,
+				 struct cn10k_mcs_txsc *txsc,
+				 struct macsec_tx_sa *sw_tx_sa, u8 sa_num)
+{
+	if (sw_tx_sa) {
+		cn10k_mcs_write_tx_sa_plcy(pfvf, secy, txsc, sa_num);
+		cn10k_write_tx_sa_pn(pfvf, txsc, sa_num,
+				     sw_tx_sa->next_pn_halves.lower);
+		cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc, sa_num,
+					sw_tx_sa->active);
+	}
+
+	cn10k_mcs_write_tx_secy(pfvf, secy, txsc);
+	cn10k_mcs_write_tx_flowid(pfvf, secy, txsc);
+	/* When updating secy, change RX secy also */
+	cn10k_mcs_write_rx_secy(pfvf, secy, txsc->hw_secy_id_rx);
+
+	return 0;
+}
+
+static int cn10k_mcs_secy_rx_cfg(struct otx2_nic *pfvf,
+				 struct macsec_secy *secy, u8 hw_secy_id)
+{
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct cn10k_mcs_rxsc *mcs_rx_sc;
+	struct macsec_rx_sc *sw_rx_sc;
+	struct macsec_rx_sa *sw_rx_sa;
+	u8 sa_num;
+
+	for (sw_rx_sc = rcu_dereference_bh(secy->rx_sc); sw_rx_sc && sw_rx_sc->active;
+	     sw_rx_sc = rcu_dereference_bh(sw_rx_sc->next)) {
+		mcs_rx_sc = cn10k_mcs_get_rxsc(cfg, secy, sw_rx_sc);
+		if (unlikely(!mcs_rx_sc))
+			continue;
+
+		for (sa_num = 0; sa_num < CN10K_MCS_SA_PER_SC; sa_num++) {
+			sw_rx_sa = rcu_dereference_bh(sw_rx_sc->sa[sa_num]);
+			if (!sw_rx_sa)
+				continue;
+
+			cn10k_mcs_write_rx_sa_plcy(pfvf, secy, mcs_rx_sc,
+						   sa_num, sw_rx_sa->active);
+			cn10k_mcs_write_rx_sa_pn(pfvf, mcs_rx_sc, sa_num,
+						 sw_rx_sa->next_pn_halves.lower);
+		}
+
+		cn10k_mcs_write_rx_flowid(pfvf, mcs_rx_sc, hw_secy_id);
+		cn10k_mcs_write_sc_cam(pfvf, mcs_rx_sc, hw_secy_id);
+	}
+
+	return 0;
+}
+
+static int cn10k_mcs_disable_rxscs(struct otx2_nic *pfvf,
+				   struct macsec_secy *secy,
+				   bool delete)
+{
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct cn10k_mcs_rxsc *mcs_rx_sc;
+	struct macsec_rx_sc *sw_rx_sc;
+	int ret;
+
+	for (sw_rx_sc = rcu_dereference_bh(secy->rx_sc); sw_rx_sc && sw_rx_sc->active;
+	     sw_rx_sc = rcu_dereference_bh(sw_rx_sc->next)) {
+		mcs_rx_sc = cn10k_mcs_get_rxsc(cfg, secy, sw_rx_sc);
+		if (unlikely(!mcs_rx_sc))
+			continue;
+
+		ret = cn10k_mcs_ena_dis_flowid(pfvf, mcs_rx_sc->hw_flow_id,
+					       false, MCS_RX);
+		if (ret)
+			dev_err(pfvf->dev, "Failed to disable TCAM for SC %d\n",
+				mcs_rx_sc->hw_sc_id);
+		if (delete) {
+			cn10k_mcs_delete_rxsc(pfvf, mcs_rx_sc);
+			list_del(&mcs_rx_sc->entry);
+			kfree(mcs_rx_sc);
+		}
+	}
+
+	return 0;
+}
+
+static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy,
+				 struct cn10k_mcs_txsc *txsc)
+{
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct mcs_secy_stats rx_rsp = { 0 };
+	struct mcs_sc_stats sc_rsp = { 0 };
+	struct cn10k_mcs_rxsc *rxsc;
+
+	/* Because some stats share counters in the hardware, take a
+	 * snapshot of the current stats and reset them when updating
+	 * the secy policy. The stats updated below are the ones
+	 * affected by the shared counters.
+	 */
+
+	/* Check if sync is really needed */
+	if (secy->validate_frames == txsc->last_validate_frames &&
+	    secy->protect_frames == txsc->last_protect_frames)
+		return;
+
+	cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true);
+
+	txsc->stats.InPktsBadTag += rx_rsp.pkt_badtag_cnt;
+	txsc->stats.InPktsUnknownSCI += rx_rsp.pkt_nosa_cnt;
+	txsc->stats.InPktsNoSCI += rx_rsp.pkt_nosaerror_cnt;
+	if (txsc->last_validate_frames == MACSEC_VALIDATE_STRICT)
+		txsc->stats.InPktsNoTag += rx_rsp.pkt_untaged_cnt;
+	else
+		txsc->stats.InPktsUntagged += rx_rsp.pkt_untaged_cnt;
+
+	list_for_each_entry(rxsc, &cfg->rxsc_list, entry) {
+		cn10k_mcs_sc_stats(pfvf, rxsc->hw_sc_id, &sc_rsp, MCS_RX, true);
+
+		rxsc->stats.InOctetsValidated += sc_rsp.octet_validate_cnt;
+		rxsc->stats.InOctetsDecrypted += sc_rsp.octet_decrypt_cnt;
+
+		rxsc->stats.InPktsInvalid += sc_rsp.pkt_invalid_cnt;
+		rxsc->stats.InPktsNotValid += sc_rsp.pkt_notvalid_cnt;
+
+		if (txsc->last_protect_frames)
+			rxsc->stats.InPktsLate += sc_rsp.pkt_late_cnt;
+		else
+			rxsc->stats.InPktsDelayed += sc_rsp.pkt_late_cnt;
+
+		if (txsc->last_validate_frames == MACSEC_VALIDATE_CHECK)
+			rxsc->stats.InPktsUnchecked += sc_rsp.pkt_unchecked_cnt;
+		else
+			rxsc->stats.InPktsOK += sc_rsp.pkt_unchecked_cnt;
+	}
+
+	txsc->last_validate_frames = secy->validate_frames;
+	txsc->last_protect_frames = secy->protect_frames;
+}
+
+static int cn10k_mdo_open(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct macsec_tx_sa *sw_tx_sa;
+	struct cn10k_mcs_txsc *txsc;
+	u8 sa_num;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	sa_num = txsc->encoding_sa;
+	sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
+
+	err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num);
+	if (err)
+		return err;
+
+	return cn10k_mcs_secy_rx_cfg(pfvf, secy, txsc->hw_secy_id_rx);
+}
+
+static int cn10k_mdo_stop(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct cn10k_mcs_txsc *txsc;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	err = cn10k_mcs_ena_dis_flowid(pfvf, txsc->hw_flow_id, false, MCS_TX);
+	if (err)
+		return err;
+
+	return cn10k_mcs_disable_rxscs(pfvf, ctx->secy, false);
+}
+
+static int cn10k_mdo_add_secy(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct cn10k_mcs_txsc *txsc;
+
+	if (secy->icv_len != MACSEC_DEFAULT_ICV_LEN)
+		return -EOPNOTSUPP;
+
+	/* Stick to 16 bytes key len until XPN support is added */
+	if (secy->key_len != 16)
+		return -EOPNOTSUPP;
+
+	if (secy->xpn)
+		return -EOPNOTSUPP;
+
+	txsc = cn10k_mcs_create_txsc(pfvf);
+	if (IS_ERR(txsc))
+		return -ENOSPC;
+
+	txsc->sw_secy = secy;
+	txsc->encoding_sa = secy->tx_sc.encoding_sa;
+	txsc->last_validate_frames = secy->validate_frames;
+	txsc->last_protect_frames = secy->protect_frames;
+
+	list_add(&txsc->entry, &cfg->txsc_list);
+
+	if (netif_running(secy->netdev))
+		return cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, NULL, 0);
+
+	return 0;
+}
+
+static int cn10k_mdo_upd_secy(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct macsec_tx_sa *sw_tx_sa;
+	struct cn10k_mcs_txsc *txsc;
+	u8 sa_num;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, secy);
+	if (!txsc)
+		return -ENOENT;
+
+	txsc->encoding_sa = secy->tx_sc.encoding_sa;
+
+	sa_num = txsc->encoding_sa;
+	sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
+
+	if (netif_running(secy->netdev)) {
+		cn10k_mcs_sync_stats(pfvf, secy, txsc);
+
+		err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_del_secy(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct cn10k_mcs_txsc *txsc;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	cn10k_mcs_ena_dis_flowid(pfvf, txsc->hw_flow_id, false, MCS_TX);
+	cn10k_mcs_disable_rxscs(pfvf, ctx->secy, true);
+	cn10k_mcs_delete_txsc(pfvf, txsc);
+	list_del(&txsc->entry);
+	kfree(txsc);
+
+	return 0;
+}
+
+static int cn10k_mdo_add_txsa(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct macsec_tx_sa *sw_tx_sa = ctx->sa.tx_sa;
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_txsc *txsc;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, secy);
+	if (!txsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	if (cn10k_mcs_alloc_txsa(pfvf, &txsc->hw_sa_id[sa_num]))
+		return -ENOSPC;
+
+	memcpy(&txsc->sa_key[sa_num], ctx->sa.key, secy->key_len);
+	txsc->sa_bmap |= 1 << sa_num;
+
+	if (netif_running(secy->netdev)) {
+		err = cn10k_mcs_write_tx_sa_plcy(pfvf, secy, txsc, sa_num);
+		if (err)
+			return err;
+
+		err = cn10k_write_tx_sa_pn(pfvf, txsc, sa_num,
+					   sw_tx_sa->next_pn_halves.lower);
+		if (err)
+			return err;
+
+		err = cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc,
+					      sa_num, sw_tx_sa->active);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_upd_txsa(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct macsec_tx_sa *sw_tx_sa = ctx->sa.tx_sa;
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_txsc *txsc;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, secy);
+	if (!txsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	if (netif_running(secy->netdev)) {
+		/* Keys cannot be changed after creation */
+		err = cn10k_write_tx_sa_pn(pfvf, txsc, sa_num,
+					   sw_tx_sa->next_pn_halves.lower);
+		if (err)
+			return err;
+
+		err = cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc,
+					      sa_num, sw_tx_sa->active);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_del_txsa(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_txsc *txsc;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	cn10k_mcs_free_txsa(pfvf, txsc->hw_sa_id[sa_num]);
+	txsc->sa_bmap &= ~(1 << sa_num);
+
+	return 0;
+}
+
+static int cn10k_mdo_add_rxsc(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct cn10k_mcs_rxsc *rxsc;
+	struct cn10k_mcs_txsc *txsc;
+	int err;
+
+	txsc = cn10k_mcs_get_txsc(cfg, secy);
+	if (!txsc)
+		return -ENOENT;
+
+	rxsc = cn10k_mcs_create_rxsc(pfvf);
+	if (IS_ERR(rxsc))
+		return -ENOSPC;
+
+	rxsc->sw_secy = ctx->secy;
+	rxsc->sw_rxsc = ctx->rx_sc;
+	list_add(&rxsc->entry, &cfg->rxsc_list);
+
+	if (netif_running(secy->netdev)) {
+		err = cn10k_mcs_write_rx_flowid(pfvf, rxsc, txsc->hw_secy_id_rx);
+		if (err)
+			return err;
+
+		err = cn10k_mcs_write_sc_cam(pfvf, rxsc, txsc->hw_secy_id_rx);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_upd_rxsc(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	bool enable = ctx->rx_sc->active;
+	struct cn10k_mcs_rxsc *rxsc;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, secy, ctx->rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	if (netif_running(secy->netdev))
+		return cn10k_mcs_ena_dis_flowid(pfvf, rxsc->hw_flow_id,
+						enable, MCS_RX);
+
+	return 0;
+}
+
+static int cn10k_mdo_del_rxsc(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct cn10k_mcs_rxsc *rxsc;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, ctx->secy, ctx->rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	cn10k_mcs_ena_dis_flowid(pfvf, rxsc->hw_flow_id, false, MCS_RX);
+	cn10k_mcs_delete_rxsc(pfvf, rxsc);
+	list_del(&rxsc->entry);
+	kfree(rxsc);
+
+	return 0;
+}
+
+static int cn10k_mdo_add_rxsa(struct macsec_context *ctx)
+{
+	struct macsec_rx_sc *sw_rx_sc = ctx->sa.rx_sa->sc;
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_rx_sa *rx_sa = ctx->sa.rx_sa;
+	u64 next_pn = rx_sa->next_pn_halves.lower;
+	struct macsec_secy *secy = ctx->secy;
+	bool sa_in_use = rx_sa->active;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_rxsc *rxsc;
+	int err;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, secy, sw_rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	if (cn10k_mcs_alloc_rxsa(pfvf, &rxsc->hw_sa_id[sa_num]))
+		return -ENOSPC;
+
+	memcpy(&rxsc->sa_key[sa_num], ctx->sa.key, ctx->secy->key_len);
+	rxsc->sa_bmap |= 1 << sa_num;
+
+	if (netif_running(secy->netdev)) {
+		err = cn10k_mcs_write_rx_sa_plcy(pfvf, secy, rxsc,
+						 sa_num, sa_in_use);
+		if (err)
+			return err;
+
+		err = cn10k_mcs_write_rx_sa_pn(pfvf, rxsc, sa_num, next_pn);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_upd_rxsa(struct macsec_context *ctx)
+{
+	struct macsec_rx_sc *sw_rx_sc = ctx->sa.rx_sa->sc;
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_rx_sa *rx_sa = ctx->sa.rx_sa;
+	u64 next_pn = rx_sa->next_pn_halves.lower;
+	struct macsec_secy *secy = ctx->secy;
+	bool sa_in_use = rx_sa->active;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_rxsc *rxsc;
+	int err;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, secy, sw_rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	if (netif_running(secy->netdev)) {
+		err = cn10k_mcs_write_rx_sa_plcy(pfvf, secy, rxsc, sa_num, sa_in_use);
+		if (err)
+			return err;
+
+		err = cn10k_mcs_write_rx_sa_pn(pfvf, rxsc, sa_num, next_pn);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cn10k_mdo_del_rxsa(struct macsec_context *ctx)
+{
+	struct macsec_rx_sc *sw_rx_sc = ctx->sa.rx_sa->sc;
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_rxsc *rxsc;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, ctx->secy, sw_rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	cn10k_mcs_write_rx_sa_plcy(pfvf, ctx->secy, rxsc, sa_num, false);
+	cn10k_mcs_free_rxsa(pfvf, rxsc->hw_sa_id[sa_num]);
+
+	rxsc->sa_bmap &= ~(1 << sa_num);
+
+	return 0;
+}
+
+static int cn10k_mdo_get_dev_stats(struct macsec_context *ctx)
+{
+	struct mcs_secy_stats tx_rsp = { 0 }, rx_rsp = { 0 };
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct cn10k_mcs_txsc *txsc;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_tx, &tx_rsp, MCS_TX, false);
+	ctx->stats.dev_stats->OutPktsUntagged = tx_rsp.pkt_untagged_cnt;
+	ctx->stats.dev_stats->OutPktsTooLong = tx_rsp.pkt_toolong_cnt;
+
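+	/* These RX SecY counters are shared across SecY modes, hence
+	 * clear them on read and accumulate in software.
+	 */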
+	cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true);
+	txsc->stats.InPktsBadTag += rx_rsp.pkt_badtag_cnt;
+	txsc->stats.InPktsUnknownSCI += rx_rsp.pkt_nosa_cnt;
+	txsc->stats.InPktsNoSCI += rx_rsp.pkt_nosaerror_cnt;
+	if (secy->validate_frames == MACSEC_VALIDATE_STRICT)
+		txsc->stats.InPktsNoTag += rx_rsp.pkt_untaged_cnt;
+	else
+		txsc->stats.InPktsUntagged += rx_rsp.pkt_untaged_cnt;
+	txsc->stats.InPktsOverrun = 0;
+
+	ctx->stats.dev_stats->InPktsNoTag = txsc->stats.InPktsNoTag;
+	ctx->stats.dev_stats->InPktsUntagged = txsc->stats.InPktsUntagged;
+	ctx->stats.dev_stats->InPktsBadTag = txsc->stats.InPktsBadTag;
+	ctx->stats.dev_stats->InPktsUnknownSCI = txsc->stats.InPktsUnknownSCI;
+	ctx->stats.dev_stats->InPktsNoSCI = txsc->stats.InPktsNoSCI;
+	ctx->stats.dev_stats->InPktsOverrun = txsc->stats.InPktsOverrun;
+
+	return 0;
+}
+
+static int cn10k_mdo_get_tx_sc_stats(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct mcs_sc_stats rsp = { 0 };
+	struct cn10k_mcs_txsc *txsc;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	cn10k_mcs_sc_stats(pfvf, txsc->hw_sc_id, &rsp, MCS_TX, false);
+
+	ctx->stats.tx_sc_stats->OutPktsProtected = rsp.pkt_protected_cnt;
+	ctx->stats.tx_sc_stats->OutPktsEncrypted = rsp.pkt_encrypt_cnt;
+	ctx->stats.tx_sc_stats->OutOctetsProtected = rsp.octet_protected_cnt;
+	ctx->stats.tx_sc_stats->OutOctetsEncrypted = rsp.octet_encrypt_cnt;
+
+	return 0;
+}
+
+static int cn10k_mdo_get_tx_sa_stats(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct mcs_sa_stats rsp = { 0 };
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_txsc *txsc;
+
+	txsc = cn10k_mcs_get_txsc(cfg, ctx->secy);
+	if (!txsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	cn10k_mcs_sa_stats(pfvf, txsc->hw_sa_id[sa_num], &rsp, MCS_TX, false);
+
+	ctx->stats.tx_sa_stats->OutPktsProtected = rsp.pkt_protected_cnt;
+	ctx->stats.tx_sa_stats->OutPktsEncrypted = rsp.pkt_encrypt_cnt;
+
+	return 0;
+}
+
+static int cn10k_mdo_get_rx_sc_stats(struct macsec_context *ctx)
+{
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_secy *secy = ctx->secy;
+	struct mcs_sc_stats rsp = { 0 };
+	struct cn10k_mcs_rxsc *rxsc;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, secy, ctx->rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	cn10k_mcs_sc_stats(pfvf, rxsc->hw_sc_id, &rsp, MCS_RX, true);
+
+	rxsc->stats.InOctetsValidated += rsp.octet_validate_cnt;
+	rxsc->stats.InOctetsDecrypted += rsp.octet_decrypt_cnt;
+
+	rxsc->stats.InPktsInvalid += rsp.pkt_invalid_cnt;
+	rxsc->stats.InPktsNotValid += rsp.pkt_notvalid_cnt;
+
+	if (secy->protect_frames)
+		rxsc->stats.InPktsLate += rsp.pkt_late_cnt;
+	else
+		rxsc->stats.InPktsDelayed += rsp.pkt_late_cnt;
+
+	if (secy->validate_frames == MACSEC_VALIDATE_CHECK)
+		rxsc->stats.InPktsUnchecked += rsp.pkt_unchecked_cnt;
+	else
+		rxsc->stats.InPktsOK += rsp.pkt_unchecked_cnt;
+
+	ctx->stats.rx_sc_stats->InOctetsValidated = rxsc->stats.InOctetsValidated;
+	ctx->stats.rx_sc_stats->InOctetsDecrypted = rxsc->stats.InOctetsDecrypted;
+	ctx->stats.rx_sc_stats->InPktsInvalid = rxsc->stats.InPktsInvalid;
+	ctx->stats.rx_sc_stats->InPktsNotValid = rxsc->stats.InPktsNotValid;
+	ctx->stats.rx_sc_stats->InPktsLate = rxsc->stats.InPktsLate;
+	ctx->stats.rx_sc_stats->InPktsDelayed = rxsc->stats.InPktsDelayed;
+	ctx->stats.rx_sc_stats->InPktsUnchecked = rxsc->stats.InPktsUnchecked;
+	ctx->stats.rx_sc_stats->InPktsOK = rxsc->stats.InPktsOK;
+
+	return 0;
+}
+
+static int cn10k_mdo_get_rx_sa_stats(struct macsec_context *ctx)
+{
+	struct macsec_rx_sc *sw_rx_sc = ctx->sa.rx_sa->sc;
+	struct otx2_nic *pfvf = netdev_priv(ctx->netdev);
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct mcs_sa_stats rsp = { 0 };
+	u8 sa_num = ctx->sa.assoc_num;
+	struct cn10k_mcs_rxsc *rxsc;
+
+	rxsc = cn10k_mcs_get_rxsc(cfg, ctx->secy, sw_rx_sc);
+	if (!rxsc)
+		return -ENOENT;
+
+	if (sa_num >= CN10K_MCS_SA_PER_SC)
+		return -EOPNOTSUPP;
+
+	cn10k_mcs_sa_stats(pfvf, rxsc->hw_sa_id[sa_num], &rsp, MCS_RX, false);
+
+	ctx->stats.rx_sa_stats->InPktsOK = rsp.pkt_ok_cnt;
+	ctx->stats.rx_sa_stats->InPktsInvalid = rsp.pkt_invalid_cnt;
+	ctx->stats.rx_sa_stats->InPktsNotValid = rsp.pkt_notvalid_cnt;
+	ctx->stats.rx_sa_stats->InPktsNotUsingSA = rsp.pkt_nosaerror_cnt;
+	ctx->stats.rx_sa_stats->InPktsUnusedSA = rsp.pkt_nosa_cnt;
+
+	return 0;
+}
+
+static const struct macsec_ops cn10k_mcs_ops = {
+	.mdo_dev_open = cn10k_mdo_open,
+	.mdo_dev_stop = cn10k_mdo_stop,
+	.mdo_add_secy = cn10k_mdo_add_secy,
+	.mdo_upd_secy = cn10k_mdo_upd_secy,
+	.mdo_del_secy = cn10k_mdo_del_secy,
+	.mdo_add_rxsc = cn10k_mdo_add_rxsc,
+	.mdo_upd_rxsc = cn10k_mdo_upd_rxsc,
+	.mdo_del_rxsc = cn10k_mdo_del_rxsc,
+	.mdo_add_rxsa = cn10k_mdo_add_rxsa,
+	.mdo_upd_rxsa = cn10k_mdo_upd_rxsa,
+	.mdo_del_rxsa = cn10k_mdo_del_rxsa,
+	.mdo_add_txsa = cn10k_mdo_add_txsa,
+	.mdo_upd_txsa = cn10k_mdo_upd_txsa,
+	.mdo_del_txsa = cn10k_mdo_del_txsa,
+	.mdo_get_dev_stats = cn10k_mdo_get_dev_stats,
+	.mdo_get_tx_sc_stats = cn10k_mdo_get_tx_sc_stats,
+	.mdo_get_tx_sa_stats = cn10k_mdo_get_tx_sa_stats,
+	.mdo_get_rx_sc_stats = cn10k_mdo_get_rx_sc_stats,
+	.mdo_get_rx_sa_stats = cn10k_mdo_get_rx_sa_stats,
+};
+
+void cn10k_handle_mcs_event(struct otx2_nic *pfvf, struct mcs_intr_info *event)
+{
+	struct cn10k_mcs_cfg *cfg = pfvf->macsec_cfg;
+	struct macsec_tx_sa *sw_tx_sa = NULL;
+	struct macsec_secy *secy = NULL;
+	struct cn10k_mcs_txsc *txsc;
+	u8 an;
+
+	if (!test_bit(CN10K_HW_MACSEC, &pfvf->hw.cap_flag))
+		return;
+
+	if (!(event->intr_mask & MCS_CPM_TX_PACKET_XPN_EQ0_INT))
+		return;
+
+	/* Find the SecY to which the expired hardware SA is mapped */
+	list_for_each_entry(txsc, &cfg->txsc_list, entry) {
+		for (an = 0; an < CN10K_MCS_SA_PER_SC; an++)
+			if (txsc->hw_sa_id[an] == event->sa_id) {
+				secy = txsc->sw_secy;
+				sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[an]);
+			}
+	}
+
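+	/* Inform the macsec core that the PN of this SA has wrapped */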
+	if (secy && sw_tx_sa)
+		macsec_pn_wrapped(secy, sw_tx_sa);
+}
+
+int cn10k_mcs_init(struct otx2_nic *pfvf)
+{
+	struct mbox *mbox = &pfvf->mbox;
+	struct cn10k_mcs_cfg *cfg;
+	struct mcs_intr_cfg *req;
+
+	if (!test_bit(CN10K_HW_MACSEC, &pfvf->hw.cap_flag))
+		return 0;
+
+	cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+	if (!cfg)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&cfg->txsc_list);
+	INIT_LIST_HEAD(&cfg->rxsc_list);
+	pfvf->macsec_cfg = cfg;
+
+	pfvf->netdev->features |= NETIF_F_HW_MACSEC;
+	pfvf->netdev->macsec_ops = &cn10k_mcs_ops;
+
+	mutex_lock(&mbox->lock);
+
+	req = otx2_mbox_alloc_msg_mcs_intr_cfg(mbox);
+	if (!req)
+		goto fail;
+
+	req->intr_mask = MCS_CPM_TX_PACKET_XPN_EQ0_INT;
+
+	if (otx2_sync_mbox_msg(mbox))
+		goto fail;
+
+	mutex_unlock(&mbox->lock);
+
+	return 0;
+fail:
+	dev_err(pfvf->dev, "Cannot notify PN wrapped event\n");
+	mutex_unlock(&mbox->lock);
+	return 0;
+}
+
+void cn10k_mcs_free(struct otx2_nic *pfvf)
+{
+	if (!test_bit(CN10K_HW_MACSEC, &pfvf->hw.cap_flag))
+		return;
+
+	cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_SECY, 0, true);
+	cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_SECY, 0, true);
+	kfree(pfvf->macsec_cfg);
+	pfvf->macsec_cfg = NULL;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index bc3e6aae..9ac9e66 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1827,4 +1827,5 @@ otx2_mbox_up_handler_ ## _fn_name(struct otx2_nic *pfvf,		\
 }									\
 EXPORT_SYMBOL(otx2_mbox_up_handler_ ## _fn_name);
 MBOX_UP_CGX_MESSAGES
+MBOX_UP_MCS_MESSAGES
 #undef M
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 4c7691a..282db6f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -19,6 +19,7 @@
 #include <net/devlink.h>
 #include <linux/time64.h>
 #include <linux/dim.h>
+#include <uapi/linux/if_macsec.h>
 
 #include <mbox.h>
 #include <npc.h>
@@ -33,6 +34,7 @@
 #define PCI_DEVID_OCTEONTX2_RVU_AFVF		0xA0F8
 
 #define PCI_SUBSYS_DEVID_96XX_RVU_PFVF		0xB200
+#define PCI_SUBSYS_DEVID_CN10K_B_RVU_PFVF	0xBD00
 
 /* PCI BAR nos */
 #define PCI_CFG_REG_BAR_NUM                     2
@@ -244,6 +246,7 @@ struct otx2_hw {
 #define CN10K_LMTST		2
 #define CN10K_RPM		3
 #define CN10K_PTP_ONESTEP	4
+#define CN10K_HW_MACSEC		5
 	unsigned long		cap_flag;
 
 #define LMT_LINE_SIZE		128
@@ -351,6 +354,66 @@ struct dev_hw_ops {
 	void	(*aura_freeptr)(void *dev, int aura, u64 buf);
 };
 
+#define CN10K_MCS_SA_PER_SC	4
+
+/* Stats which need to be accumulated in software because
+ * of shared counters in hardware.
+ */
+struct cn10k_txsc_stats {
+	u64 InPktsUntagged;
+	u64 InPktsNoTag;
+	u64 InPktsBadTag;
+	u64 InPktsUnknownSCI;
+	u64 InPktsNoSCI;
+	u64 InPktsOverrun;
+};
+
+struct cn10k_rxsc_stats {
+	u64 InOctetsValidated;
+	u64 InOctetsDecrypted;
+	u64 InPktsUnchecked;
+	u64 InPktsDelayed;
+	u64 InPktsOK;
+	u64 InPktsInvalid;
+	u64 InPktsLate;
+	u64 InPktsNotValid;
+	u64 InPktsNotUsingSA;
+	u64 InPktsUnusedSA;
+};
+
+struct cn10k_mcs_txsc {
+	struct macsec_secy *sw_secy;
+	struct cn10k_txsc_stats stats;
+	struct list_head entry;
+	enum macsec_validation_type last_validate_frames;
+	bool last_protect_frames;
+	u16 hw_secy_id_tx;
+	u16 hw_secy_id_rx;
+	u16 hw_flow_id;
+	u16 hw_sc_id;
+	u16 hw_sa_id[CN10K_MCS_SA_PER_SC];
+	u8 sa_bmap;
+	u8 sa_key[CN10K_MCS_SA_PER_SC][MACSEC_MAX_KEY_LEN];
+	u8 encoding_sa;
+};
+
+struct cn10k_mcs_rxsc {
+	struct macsec_secy *sw_secy;
+	struct macsec_rx_sc *sw_rxsc;
+	struct cn10k_rxsc_stats stats;
+	struct list_head entry;
+	u16 hw_flow_id;
+	u16 hw_sc_id;
+	u16 hw_sa_id[CN10K_MCS_SA_PER_SC];
+	u8 sa_bmap;
+	u8 sa_key[CN10K_MCS_SA_PER_SC][MACSEC_MAX_KEY_LEN];
+};
+
+struct cn10k_mcs_cfg {
+	struct list_head txsc_list;
+	struct list_head rxsc_list;
+};
+
 struct otx2_nic {
 	void __iomem		*reg_base;
 	struct net_device	*netdev;
@@ -438,6 +501,10 @@ struct otx2_nic {
 
 	/* napi event count. It is needed for adaptive irq coalescing. */
 	u32 napi_events;
+
+#if IS_ENABLED(CONFIG_MACSEC)
+	struct cn10k_mcs_cfg	*macsec_cfg;
+#endif
 };
 
 static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
@@ -477,6 +544,11 @@ static inline bool is_dev_otx2(struct pci_dev *pdev)
 		midr == PCI_REVISION_ID_95XXMM || midr == PCI_REVISION_ID_95XXO);
 }
 
+static inline bool is_dev_cn10kb(struct pci_dev *pdev)
+{
+	return pdev->subsystem_device == PCI_SUBSYS_DEVID_CN10K_B_RVU_PFVF;
+}
+
 static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
 {
 	struct otx2_hw *hw = &pfvf->hw;
@@ -508,6 +580,9 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf)
 		__set_bit(CN10K_RPM, &hw->cap_flag);
 		__set_bit(CN10K_PTP_ONESTEP, &hw->cap_flag);
 	}
+
+	if (is_dev_cn10kb(pfvf->pdev))
+		__set_bit(CN10K_HW_MACSEC, &hw->cap_flag);
 }
 
 /* Register read/write APIs */
@@ -763,6 +838,7 @@ otx2_mbox_up_handler_ ## _fn_name(struct otx2_nic *pfvf,		\
 				struct _rsp_type *rsp);			\
 
 MBOX_UP_CGX_MESSAGES
+MBOX_UP_MCS_MESSAGES
 #undef M
 
 /* Time to wait before watchdog kicks off */
@@ -945,4 +1021,18 @@ int otx2_pfc_txschq_alloc(struct otx2_nic *pfvf);
 int otx2_pfc_txschq_update(struct otx2_nic *pfvf);
 int otx2_pfc_txschq_stop(struct otx2_nic *pfvf);
 #endif
+
+#if IS_ENABLED(CONFIG_MACSEC)
+/* MACSEC offload support */
+int cn10k_mcs_init(struct otx2_nic *pfvf);
+void cn10k_mcs_free(struct otx2_nic *pfvf);
+void cn10k_handle_mcs_event(struct otx2_nic *pfvf, struct mcs_intr_info *event);
+#else
+static inline int cn10k_mcs_init(struct otx2_nic *pfvf) { return 0; }
+static inline void cn10k_mcs_free(struct otx2_nic *pfvf) {}
+static inline void cn10k_handle_mcs_event(struct otx2_nic *pfvf,
+					  struct mcs_intr_info *event)
+{}
+#endif /* CONFIG_MACSEC */
+
 #endif /* OTX2_COMMON_H */
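
A design note on the structures added above: cn10k_txsc_stats and
cn10k_rxsc_stats exist because several flows can share one hardware
counter, so the driver must fold hardware values into a per-SecY
software copy before a counter is reset or reassigned. Below is a
sketch of that accumulate-on-read idea, using a hypothetical hardware
snapshot type (the real mbox response types are defined elsewhere in
this series):

/* Hypothetical snapshot of shared hardware SecY counters */
struct example_hw_secy_snapshot {
	u64 pkt_untagged;
	u64 pkt_notag;
	u64 pkt_badtag;
};

/* Fold hardware values into the software copy so totals survive
 * the hardware counter being reused by another flow.
 */
static void example_accumulate(struct cn10k_txsc_stats *sw,
			       const struct example_hw_secy_snapshot *hw)
{
	sw->InPktsUntagged += hw->pkt_untagged;
	sw->InPktsNoTag    += hw->pkt_notag;
	sw->InPktsBadTag   += hw->pkt_badtag;
}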
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 88ce472..2d59fe5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -858,6 +858,15 @@ static void otx2_handle_link_event(struct otx2_nic *pf)
 	}
 }
 
+int otx2_mbox_up_handler_mcs_intr_notify(struct otx2_nic *pf,
+					 struct mcs_intr_info *event,
+					 struct msg_rsp *rsp)
+{
+	cn10k_handle_mcs_event(pf, event);
+
+	return 0;
+}
+
 int otx2_mbox_up_handler_cgx_link_event(struct otx2_nic *pf,
 					struct cgx_link_info_msg *msg,
 					struct msg_rsp *rsp)
@@ -917,6 +926,7 @@ static int otx2_process_mbox_msg_up(struct otx2_nic *pf,
 		return err;						\
 	}
 MBOX_UP_CGX_MESSAGES
+MBOX_UP_MCS_MESSAGES
 #undef M
 		break;
 	default:
@@ -2765,6 +2775,10 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_ptp_destroy;
 
+	err = cn10k_mcs_init(pf);
+	if (err)
+		goto err_del_mcam_entries;
+
 	if (pf->flags & OTX2_FLAG_NTUPLE_SUPPORT)
 		netdev->hw_features |= NETIF_F_NTUPLE;
 
@@ -2979,6 +2993,8 @@ static void otx2_remove(struct pci_dev *pdev)
 		otx2_config_pause_frm(pf);
 	}
 
+	cn10k_mcs_free(pf);
+
 #ifdef CONFIG_DCB
 	/* Disable PFC config */
 	if (pf->pfc_en) {
-- 
2.7.4
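
A note on the glue in the otx2_pf.c hunks above: otx2_probe() and
otx2_remove() call cn10k_mcs_init()/cn10k_mcs_free() unconditionally
because otx2_common.h supplies static-inline stubs when MACsec is
disabled. The IS_ENABLED() guard used there matters because it also
covers the modular configuration; a reduced illustration:

#include <linux/kconfig.h>	/* IS_ENABLED() */

#ifdef CONFIG_MACSEC
/* reached only when CONFIG_MACSEC=y; misses the =m case */
#endif

#if IS_ENABLED(CONFIG_MACSEC)
/* reached for both CONFIG_MACSEC=y and CONFIG_MACSEC=m */
#endif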



* Re: [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts
  2022-09-28  3:58 ` [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts Subbaraya Sundeep
@ 2022-09-29 16:13   ` Jakub Kicinski
  0 siblings, 0 replies; 10+ messages in thread
From: Jakub Kicinski @ 2022-09-29 16:13 UTC (permalink / raw)
  To: Subbaraya Sundeep
  Cc: davem, edumazet, pabeni, netdev, sgoutham, naveenm,
	Geetha sowjanya, Vamsi Attunuru

On Wed, 28 Sep 2022 09:28:08 +0530 Subbaraya Sundeep wrote:
> From: Geetha sowjanya <gakula@marvell.com>
> 
> Hardware triggers an interrupt for events such as the PN wrapping
> to zero or the PN crossing a configured threshold. The interrupt is
> received by the MCS AF, which then finds the PF/VF to which the SA
> is mapped and notifies it using the mcs_intr_notify mbox message.
> 
> Using the mcs_intr_cfg mbox message, a PF/VF can configure the list
> of interrupts for which it wants to receive notifications from the
> AF.

clang is still upset at a couple of patches here, for instance:

drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c:102:6: warning: variable 'err' is uninitialized when used here [-Wuninitialized]
        if (err)
            ^~~
drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c:87:9: note: initialize the variable 'err' to silence this warning
        int err, pf;
               ^
                = 0
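
For reference, a reduced example of the shape clang flags here
(hypothetical code, not the actual mcs_rvu_if.c function), together
with a fix that sets err on every path rather than silencing the
warning with the suggested = 0:

int example_get_pf(void);		/* hypothetical helpers */
int example_get_pf_checked(int *pf);

/* The flagged shape: err is read before anything assigns it */
int example_bad(void)
{
	int err, pf;

	pf = example_get_pf();
	if (err)		/* -Wuninitialized: err never set */
		return err;
	return pf;
}

/* Better: initialize err from the call that can fail */
int example_good(void)
{
	int pf;
	int err = example_get_pf_checked(&pf);

	if (err)
		return err;
	return pf;
}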


end of thread, other threads:[~2022-09-29 16:13 UTC | newest]

Thread overview: 10+ messages
2022-09-28  3:58 [net-next PATCH v2 0/8] Introduce macsec hardware offload for cn10k platform Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 1/8] octeontx2-af: cn10k: Introduce driver for macsec block Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 2/8] octeontx2-af: cn10k: mcs: Add mailboxes for port related operations Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 3/8] octeontx2-af: cn10k: mcs: Manage the MCS block hardware resources Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 4/8] octeontx2-af: cn10k: mcs: Install a default TCAM for normal traffic Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 5/8] octeontx2-af: cn10k: mcs: Support for stats collection Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 6/8] octeontx2-af: cn10k: mcs: Handle MCS block interrupts Subbaraya Sundeep
2022-09-29 16:13   ` Jakub Kicinski
2022-09-28  3:58 ` [net-next PATCH v2 7/8] octeontx2-af: cn10k: mcs: Add debugfs support Subbaraya Sundeep
2022-09-28  3:58 ` [net-next PATCH v2 8/8] octeontx2-pf: mcs: Introduce MACSEC hardware offloading Subbaraya Sundeep
