* [PATCH v2 00/15] soc: octeontx2: Add RVU admin function driver
@ 2018-09-04 11:54 ` sunil.kovvuri at gmail.com
  0 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

The resource virtualization unit (RVU) on Marvell's OcteonTX2 SoC supports
multiple PCIe SR-IOV physical functions (PFs) and virtual functions (VFs).
PF0 is called the administrative/admin function (AF) and has privileged
access to registers used to provision the different RVU functional blocks
to each PF/VF.

This admin function (AF) driver acts as configuration/administrative
software which provisions functional blocks to a PF/VF on demand, so that
it can work as one of the following:
 - A basic network controller (i.e. NIC).
 - A NIC with packet filtering, shaping and scheduling capabilities.
 - A crypto device.
 - A combination of the above.

PFs/VFs communicate with the admin function via a shared memory region
(a rough, hypothetical sketch of that exchange follows the list below).
This patch series adds logic for the following:
 - RVU AF driver with functional block provisioning support.
 - Mailbox infrastructure for communication between the AF and PFs.
 - CGX driver, which provides information about physical network
   interfaces; the AF processes this and forwards the required info
   to the PF/VF drivers.
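
To make the shared-memory idea concrete, here is a minimal, hypothetical
sketch of how a PF might frame a request to the AF; the structure and
function names below are illustrative assumptions, not the layout defined
by mbox.h in this series:

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical illustration only -- not the actual mbox.h layout.
 * A PF writes a small header plus payload into a shared memory window
 * and rings a doorbell register so the AF knows a request is pending.
 */
struct example_mbox_msghdr {
	u16 pcifunc;	/* which PF/VF sent this (assumed field) */
	u16 id;		/* message type, e.g. "attach an LF" (assumed) */
	u16 size;	/* size of header plus payload (assumed) */
	int rc;		/* response code filled in by the AF (assumed) */
};

static void example_send_to_af(void __iomem *shared_base,
			       void __iomem *doorbell,
			       struct example_mbox_msghdr *msg)
{
	/* Copy the request into the region shared with the AF */
	memcpy_toio(shared_base, msg, msg->size);
	/* Notify the AF; the AF fills in 'rc' and signals completion */
	writeq(1, doorbell);
}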

This is the first set of patches out of a total of roughly 70.

Note: This driver neither receives nor processes any data, i.e. no I/O;
      it only does the hardware configuration.

Changes from v1:
 1. Merged the RVU admin function and CGX drivers into a single module.
    - Suggested by Arnd Bergmann
 2. Pulled the mbox communication APIs into a separate module, to remove
    the admin function driver dependency in a VM where no AF is attached.
    - Suggested by Arnd Bergmann

Aleksey Makarov (2):
  soc: octeontx2: Add mailbox support infra
  soc: octeontx2: Convert mbox msg id check to a macro

Geetha sowjanya (1):
  soc: octeontx2: Reconfig MSIX base with IOVA

Linu Cherian (3):
  soc: octeontx2: Set RVU PFs to CGX LMACs mapping
  soc: octeontx2: Add support for CGX link management
  soc: octeontx2: Register for CGX lmac events

Sunil Goutham (9):
  soc: octeontx2: Add Marvell OcteonTX2 RVU AF driver
  soc: octeontx2: Reset all RVU blocks
  soc: octeontx2: Gather RVU blocks HW info
  soc: octeontx2: Add mailbox IRQ and msg handlers
  soc: octeontx2: Scan blocks for LFs provisioned to PF/VF
  soc: octeontx2: Add RVU block LF provisioning support
  soc: octeontx2: Configure block LF's MSIX vector offset
  soc: octeontx2: Add Marvell OcteonTX2 CGX driver
  MAINTAINERS: Add entry for Marvell OcteonTX2 Admin Function driver

 MAINTAINERS                                |   10 +
 drivers/soc/Kconfig                        |    1 +
 drivers/soc/Makefile                       |    1 +
 drivers/soc/marvell/Kconfig                |   18 +
 drivers/soc/marvell/Makefile               |    2 +
 drivers/soc/marvell/octeontx2/Makefile     |   10 +
 drivers/soc/marvell/octeontx2/cgx.c        |  517 +++++++++
 drivers/soc/marvell/octeontx2/cgx.h        |   65 ++
 drivers/soc/marvell/octeontx2/cgx_fw_if.h  |  225 ++++
 drivers/soc/marvell/octeontx2/mbox.c       |  303 +++++
 drivers/soc/marvell/octeontx2/mbox.h       |  211 ++++
 drivers/soc/marvell/octeontx2/rvu.c        | 1637 ++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu.h        |  158 +++
 drivers/soc/marvell/octeontx2/rvu_cgx.c    |  194 ++++
 drivers/soc/marvell/octeontx2/rvu_reg.h    |  442 ++++++++
 drivers/soc/marvell/octeontx2/rvu_struct.h |   78 ++
 16 files changed, 3872 insertions(+)
 create mode 100644 drivers/soc/marvell/Kconfig
 create mode 100644 drivers/soc/marvell/Makefile
 create mode 100644 drivers/soc/marvell/octeontx2/Makefile
 create mode 100644 drivers/soc/marvell/octeontx2/cgx.c
 create mode 100644 drivers/soc/marvell/octeontx2/cgx.h
 create mode 100644 drivers/soc/marvell/octeontx2/cgx_fw_if.h
 create mode 100644 drivers/soc/marvell/octeontx2/mbox.c
 create mode 100644 drivers/soc/marvell/octeontx2/mbox.h
 create mode 100644 drivers/soc/marvell/octeontx2/rvu.c
 create mode 100644 drivers/soc/marvell/octeontx2/rvu.h
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_cgx.c
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_reg.h
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_struct.h

-- 
2.7.4



* [PATCH v2 01/15] soc: octeontx2: Add Marvell OcteonTX2 RVU AF driver
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch adds a basic template for the Marvell OcteonTX2
resource virtualization unit (RVU) admin function (AF)
driver: just the driver registration and probe.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/Kconfig                    |   1 +
 drivers/soc/Makefile                   |   1 +
 drivers/soc/marvell/Kconfig            |  13 ++++
 drivers/soc/marvell/Makefile           |   2 +
 drivers/soc/marvell/octeontx2/Makefile |   8 +++
 drivers/soc/marvell/octeontx2/rvu.c    | 126 +++++++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu.h    |  31 ++++++++
 7 files changed, 182 insertions(+)
 create mode 100644 drivers/soc/marvell/Kconfig
 create mode 100644 drivers/soc/marvell/Makefile
 create mode 100644 drivers/soc/marvell/octeontx2/Makefile
 create mode 100644 drivers/soc/marvell/octeontx2/rvu.c
 create mode 100644 drivers/soc/marvell/octeontx2/rvu.h

diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index c07b4a8..42f2d0b 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -6,6 +6,7 @@ source "drivers/soc/atmel/Kconfig"
 source "drivers/soc/bcm/Kconfig"
 source "drivers/soc/fsl/Kconfig"
 source "drivers/soc/imx/Kconfig"
+source "drivers/soc/marvell/Kconfig"
 source "drivers/soc/mediatek/Kconfig"
 source "drivers/soc/qcom/Kconfig"
 source "drivers/soc/renesas/Kconfig"
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 113e884..5e18cbb 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -12,6 +12,7 @@ obj-y				+= fsl/
 obj-$(CONFIG_ARCH_GEMINI)	+= gemini/
 obj-$(CONFIG_ARCH_MXC)		+= imx/
 obj-$(CONFIG_SOC_XWAY)		+= lantiq/
+obj-y				+= marvell/
 obj-y				+= mediatek/
 obj-$(CONFIG_ARCH_MESON)	+= amlogic/
 obj-y				+= qcom/
diff --git a/drivers/soc/marvell/Kconfig b/drivers/soc/marvell/Kconfig
new file mode 100644
index 0000000..4499caf
--- /dev/null
+++ b/drivers/soc/marvell/Kconfig
@@ -0,0 +1,13 @@
+#
+# MARVELL SoC drivers
+#
+
+menu "Marvell SoC drivers"
+
+config OCTEONTX2_AF
+	tristate "OcteonTX2 RVU Admin Function driver"
+	depends on ARM64 && PCI
+	help
+	  This driver supports Marvell's OcteonTX2 Resource Virtualization
+	  Unit's admin function manager which manages all RVU HW resources.
+endmenu
diff --git a/drivers/soc/marvell/Makefile b/drivers/soc/marvell/Makefile
new file mode 100644
index 0000000..16e0ca0
--- /dev/null
+++ b/drivers/soc/marvell/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y		+= octeontx2/
diff --git a/drivers/soc/marvell/octeontx2/Makefile b/drivers/soc/marvell/octeontx2/Makefile
new file mode 100644
index 0000000..dacbd16
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Marvell's OcteonTX2 RVU Admin Function driver
+#
+
+obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
+
+octeontx2_af-y := rvu.o
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
new file mode 100644
index 0000000..5af4da6
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/pci.h>
+#include <linux/sysfs.h>
+
+#include "rvu.h"
+
+#define DRV_NAME	"octeontx2-af"
+#define DRV_STRING      "Marvell OcteonTX2 RVU Admin Function Driver"
+#define DRV_VERSION	"1.0"
+
+/* Supported devices */
+static const struct pci_device_id rvu_id_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AF) },
+	{ 0, }  /* end of table */
+};
+
+MODULE_AUTHOR("Marvell International Ltd.");
+MODULE_DESCRIPTION(DRV_STRING);
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION(DRV_VERSION);
+MODULE_DEVICE_TABLE(pci, rvu_id_table);
+
+static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	struct rvu *rvu;
+	int    err;
+
+	rvu = devm_kzalloc(dev, sizeof(*rvu), GFP_KERNEL);
+	if (!rvu)
+		return -ENOMEM;
+
+	pci_set_drvdata(pdev, rvu);
+	rvu->pdev = pdev;
+	rvu->dev = &pdev->dev;
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Failed to enable PCI device\n");
+		goto err_freemem;
+	}
+
+	err = pci_request_regions(pdev, DRV_NAME);
+	if (err) {
+		dev_err(dev, "PCI request regions failed 0x%x\n", err);
+		goto err_disable_device;
+	}
+
+	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (err) {
+		dev_err(dev, "Unable to set DMA mask\n");
+		goto err_release_regions;
+	}
+
+	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
+	if (err) {
+		dev_err(dev, "Unable to set consistent DMA mask\n");
+		goto err_release_regions;
+	}
+
+	/* Map Admin function CSRs */
+	rvu->afreg_base = pcim_iomap(pdev, PCI_AF_REG_BAR_NUM, 0);
+	rvu->pfreg_base = pcim_iomap(pdev, PCI_PF_REG_BAR_NUM, 0);
+	if (!rvu->afreg_base || !rvu->pfreg_base) {
+		dev_err(dev, "Unable to map admin function CSRs, aborting\n");
+		err = -ENOMEM;
+		goto err_release_regions;
+	}
+
+	return 0;
+
+err_release_regions:
+	pci_release_regions(pdev);
+err_disable_device:
+	pci_disable_device(pdev);
+err_freemem:
+	pci_set_drvdata(pdev, NULL);
+	devm_kfree(dev, rvu);
+	return err;
+}
+
+static void rvu_remove(struct pci_dev *pdev)
+{
+	struct rvu *rvu = pci_get_drvdata(pdev);
+
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+
+	devm_kfree(&pdev->dev, rvu);
+}
+
+static struct pci_driver rvu_driver = {
+	.name = DRV_NAME,
+	.id_table = rvu_id_table,
+	.probe = rvu_probe,
+	.remove = rvu_remove,
+};
+
+static int __init rvu_init_module(void)
+{
+	pr_info("%s: %s\n", DRV_NAME, DRV_STRING);
+
+	return pci_register_driver(&rvu_driver);
+}
+
+static void __exit rvu_cleanup_module(void)
+{
+	pci_unregister_driver(&rvu_driver);
+}
+
+module_init(rvu_init_module);
+module_exit(rvu_cleanup_module);
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
new file mode 100644
index 0000000..4a4b0ad
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef RVU_H
+#define RVU_H
+
+/* PCI device IDs */
+#define	PCI_DEVID_OCTEONTX2_RVU_AF		0xA065
+
+/* PCI BAR nos */
+#define	PCI_AF_REG_BAR_NUM			0
+#define	PCI_PF_REG_BAR_NUM			2
+#define	PCI_MBOX_BAR_NUM			4
+
+#define NAME_SIZE				32
+
+struct rvu {
+	void __iomem		*afreg_base;
+	void __iomem		*pfreg_base;
+	struct pci_dev		*pdev;
+	struct device		*dev;
+};
+
+#endif /* RVU_H */
-- 
2.7.4



* [PATCH v2 02/15] soc: octeontx2: Reset all RVU blocks
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Go through all BLKADDRs, check which ones are implemented
on this silicon, and do a HW reset of each implemented block.
Also add all RVU AF and PF register offsets.
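
For reference, the reset sequence boils down to the write-then-poll
pattern below. This is only a condensed restatement of
rvu_block_reset()/rvu_poll_reg() from the diff, plus an illustrative
timeout message that the patch itself does not print:

/* Condensed sketch of the pattern added in this patch: set bit 0 of
 * the block's *_AF_BLK_RST register to trigger the reset, then poll
 * until bit 63 reads as zero (rvu_poll_reg() retries ~100 times with
 * a 1us delay, so roughly 100us worst case).
 */
static void example_reset_block(struct rvu *rvu, int blkaddr, u64 rst_reg)
{
	rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
	if (rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true))
		dev_err(rvu->dev, "block %d reset timed out\n", blkaddr);
}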

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/rvu.c        |  78 ++++++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu.h        |  37 ++++++++++
 drivers/soc/marvell/octeontx2/rvu_reg.h    | 113 +++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu_struct.h |  36 +++++++++
 4 files changed, 264 insertions(+)
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_reg.h
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_struct.h

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 5af4da6..d40fabf 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -16,6 +16,7 @@
 #include <linux/sysfs.h>
 
 #include "rvu.h"
+#include "rvu_reg.h"
 
 #define DRV_NAME	"octeontx2-af"
 #define DRV_STRING      "Marvell OcteonTX2 RVU Admin Function Driver"
@@ -33,6 +34,70 @@ MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 MODULE_DEVICE_TABLE(pci, rvu_id_table);
 
+/* Poll a RVU block's register 'offset', for a 'zero'
+ * or 'nonzero' at bits specified by 'mask'
+ */
+int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
+{
+	void __iomem *reg;
+	int timeout = 100;
+	u64 reg_val;
+
+	reg = rvu->afreg_base + ((block << 28) | offset);
+	while (timeout) {
+		reg_val = readq(reg);
+		if (zero && !(reg_val & mask))
+			return 0;
+		if (!zero && (reg_val & mask))
+			return 0;
+		udelay(1);
+		cpu_relax();
+		timeout--;
+	}
+	return -EBUSY;
+}
+
+static void rvu_check_block_implemented(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int blkid;
+	u64 cfg;
+
+	/* For each block check if 'implemented' bit is set */
+	for (blkid = 0; blkid < BLK_COUNT; blkid++) {
+		block = &hw->block[blkid];
+		cfg = rvupf_read64(rvu, RVU_PF_BLOCK_ADDRX_DISC(blkid));
+		if (cfg & BIT_ULL(11))
+			block->implemented = true;
+	}
+}
+
+static void rvu_block_reset(struct rvu *rvu, int blkaddr, u64 rst_reg)
+{
+	struct rvu_block *block = &rvu->hw->block[blkaddr];
+
+	if (!block->implemented)
+		return;
+
+	rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
+	rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true);
+}
+
+static void rvu_reset_all_blocks(struct rvu *rvu)
+{
+	/* Do a HW reset of all RVU blocks */
+	rvu_block_reset(rvu, BLKADDR_NPA, NPA_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_NIX0, NIX_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_NPC, NPC_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_SSO, SSO_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_TIM, TIM_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_CPT0, CPT_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_NDC0, NDC_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_NDC1, NDC_AF_BLK_RST);
+	rvu_block_reset(rvu, BLKADDR_NDC2, NDC_AF_BLK_RST);
+}
+
 static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
@@ -43,6 +108,12 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (!rvu)
 		return -ENOMEM;
 
+	rvu->hw = devm_kzalloc(dev, sizeof(struct rvu_hwinfo), GFP_KERNEL);
+	if (!rvu->hw) {
+		devm_kfree(dev, rvu);
+		return -ENOMEM;
+	}
+
 	pci_set_drvdata(pdev, rvu);
 	rvu->pdev = pdev;
 	rvu->dev = &pdev->dev;
@@ -80,6 +151,11 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_release_regions;
 	}
 
+	/* Check which blocks the HW supports */
+	rvu_check_block_implemented(rvu);
+
+	rvu_reset_all_blocks(rvu);
+
 	return 0;
 
 err_release_regions:
@@ -88,6 +164,7 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	pci_disable_device(pdev);
 err_freemem:
 	pci_set_drvdata(pdev, NULL);
+	devm_kfree(&pdev->dev, rvu->hw);
 	devm_kfree(dev, rvu);
 	return err;
 }
@@ -100,6 +177,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
 
+	devm_kfree(&pdev->dev, rvu->hw);
 	devm_kfree(&pdev->dev, rvu);
 }
 
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 4a4b0ad..e2c54d0 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -11,6 +11,8 @@
 #ifndef RVU_H
 #define RVU_H
 
+#include "rvu_struct.h"
+
 /* PCI device IDs */
 #define	PCI_DEVID_OCTEONTX2_RVU_AF		0xA065
 
@@ -21,11 +23,46 @@
 
 #define NAME_SIZE				32
 
+struct rvu_block {
+	bool implemented;
+};
+
+struct rvu_hwinfo {
+	struct rvu_block block[BLK_COUNT]; /* Block info */
+};
+
 struct rvu {
 	void __iomem		*afreg_base;
 	void __iomem		*pfreg_base;
 	struct pci_dev		*pdev;
 	struct device		*dev;
+	struct rvu_hwinfo       *hw;
 };
 
+static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
+{
+	writeq(val, rvu->afreg_base + ((block << 28) | offset));
+}
+
+static inline u64 rvu_read64(struct rvu *rvu, u64 block, u64 offset)
+{
+	return readq(rvu->afreg_base + ((block << 28) | offset));
+}
+
+static inline void rvupf_write64(struct rvu *rvu, u64 offset, u64 val)
+{
+	writeq(val, rvu->pfreg_base + offset);
+}
+
+static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
+{
+	return readq(rvu->pfreg_base + offset);
+}
+
+/* Function Prototypes
+ * RVU
+ */
+
+int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
+
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_reg.h b/drivers/soc/marvell/octeontx2/rvu_reg.h
new file mode 100644
index 0000000..e7abd7e
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/rvu_reg.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef RVU_REG_H
+#define RVU_REG_H
+
+/* Admin function registers */
+#define RVU_AF_MSIXTR_BASE                  (0x10)
+#define RVU_AF_ECO                          (0x20)
+#define RVU_AF_BLK_RST                      (0x30)
+#define RVU_AF_PF_BAR4_ADDR                 (0x40)
+#define RVU_AF_RAS                          (0x100)
+#define RVU_AF_RAS_W1S                      (0x108)
+#define RVU_AF_RAS_ENA_W1S                  (0x110)
+#define RVU_AF_RAS_ENA_W1C                  (0x118)
+#define RVU_AF_GEN_INT                      (0x120)
+#define RVU_AF_GEN_INT_W1S                  (0x128)
+#define RVU_AF_GEN_INT_ENA_W1S              (0x130)
+#define RVU_AF_GEN_INT_ENA_W1C              (0x138)
+#define	RVU_AF_AFPF_MBOX0		    (0x02000)
+#define	RVU_AF_AFPF_MBOX1		    (0x02008)
+#define RVU_AF_AFPFX_MBOXX(a, b)            (0x2000 | (a) << 4 | (b) << 3)
+#define RVU_AF_PFME_STATUS                  (0x2800)
+#define RVU_AF_PFTRPEND                     (0x2810)
+#define RVU_AF_PFTRPEND_W1S                 (0x2820)
+#define RVU_AF_PF_RST                       (0x2840)
+#define RVU_AF_HWVF_RST                     (0x2850)
+#define RVU_AF_PFAF_MBOX_INT                (0x2880)
+#define RVU_AF_PFAF_MBOX_INT_W1S            (0x2888)
+#define RVU_AF_PFAF_MBOX_INT_ENA_W1S        (0x2890)
+#define RVU_AF_PFAF_MBOX_INT_ENA_W1C        (0x2898)
+#define RVU_AF_PFFLR_INT                    (0x28a0)
+#define RVU_AF_PFFLR_INT_W1S                (0x28a8)
+#define RVU_AF_PFFLR_INT_ENA_W1S            (0x28b0)
+#define RVU_AF_PFFLR_INT_ENA_W1C            (0x28b8)
+#define RVU_AF_PFME_INT                     (0x28c0)
+#define RVU_AF_PFME_INT_W1S                 (0x28c8)
+#define RVU_AF_PFME_INT_ENA_W1S             (0x28d0)
+#define RVU_AF_PFME_INT_ENA_W1C             (0x28d8)
+
+/* Admin function's privileged PF/VF registers */
+#define RVU_PRIV_CONST                      (0x8000000)
+#define RVU_PRIV_GEN_CFG                    (0x8000010)
+#define RVU_PRIV_CLK_CFG                    (0x8000020)
+#define RVU_PRIV_ACTIVE_PC                  (0x8000030)
+#define RVU_PRIV_PFX_CFG(a)                 (0x8000100 | (a) << 16)
+#define RVU_PRIV_PFX_MSIX_CFG(a)            (0x8000110 | (a) << 16)
+#define RVU_PRIV_PFX_ID_CFG(a)              (0x8000120 | (a) << 16)
+#define RVU_PRIV_PFX_INT_CFG(a)             (0x8000200 | (a) << 16)
+#define RVU_PRIV_PFX_NIX_CFG                (0x8000300)
+#define RVU_PRIV_PFX_NPA_CFG		    (0x8000310)
+#define RVU_PRIV_PFX_SSO_CFG                (0x8000320)
+#define RVU_PRIV_PFX_SSOW_CFG               (0x8000330)
+#define RVU_PRIV_PFX_TIM_CFG                (0x8000340)
+#define RVU_PRIV_PFX_CPT_CFG                (0x8000350)
+#define RVU_PRIV_BLOCK_TYPEX_REV(a)         (0x8000400 | (a) << 3)
+#define RVU_PRIV_HWVFX_INT_CFG(a)           (0x8001280 | (a) << 16)
+#define RVU_PRIV_HWVFX_NIX_CFG              (0x8001300)
+#define RVU_PRIV_HWVFX_NPA_CFG              (0x8001310)
+#define RVU_PRIV_HWVFX_SSO_CFG              (0x8001320)
+#define RVU_PRIV_HWVFX_SSOW_CFG             (0x8001330)
+#define RVU_PRIV_HWVFX_TIM_CFG              (0x8001340)
+#define RVU_PRIV_HWVFX_CPT_CFG              (0x8001350)
+
+/* RVU PF registers */
+#define	RVU_PF_VFX_PFVF_MBOX0		    (0x00000)
+#define	RVU_PF_VFX_PFVF_MBOX1		    (0x00008)
+#define RVU_PF_VFX_PFVF_MBOXX(a, b)         (0x0 | (a) << 12 | (b) << 3)
+#define RVU_PF_VF_BAR4_ADDR                 (0x10)
+#define RVU_PF_BLOCK_ADDRX_DISC(a)          (0x200 | (a) << 3)
+#define RVU_PF_VFME_STATUSX(a)              (0x800 | (a) << 3)
+#define RVU_PF_VFTRPENDX(a)                 (0x820 | (a) << 3)
+#define RVU_PF_VFTRPEND_W1SX(a)             (0x840 | (a) << 3)
+#define RVU_PF_VFPF_MBOX_INTX(a)            (0x880 | (a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_W1SX(a)        (0x8A0 | (a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a)    (0x8C0 | (a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a)    (0x8E0 | (a) << 3)
+#define RVU_PF_VFFLR_INTX(a)                (0x900 | (a) << 3)
+#define RVU_PF_VFFLR_INT_W1SX(a)            (0x920 | (a) << 3)
+#define RVU_PF_VFFLR_INT_ENA_W1SX(a)        (0x940 | (a) << 3)
+#define RVU_PF_VFFLR_INT_ENA_W1CX(a)        (0x960 | (a) << 3)
+#define RVU_PF_VFME_INTX(a)                 (0x980 | (a) << 3)
+#define RVU_PF_VFME_INT_W1SX(a)             (0x9A0 | (a) << 3)
+#define RVU_PF_VFME_INT_ENA_W1SX(a)         (0x9C0 | (a) << 3)
+#define RVU_PF_VFME_INT_ENA_W1CX(a)         (0x9E0 | (a) << 3)
+#define RVU_PF_PFAF_MBOX0                   (0xC00)
+#define RVU_PF_PFAF_MBOX1                   (0xC08)
+#define RVU_PF_PFAF_MBOXX(a)                (0xC00 | (a) << 3)
+#define RVU_PF_INT                          (0xc20)
+#define RVU_PF_INT_W1S                      (0xc28)
+#define RVU_PF_INT_ENA_W1S                  (0xc30)
+#define RVU_PF_INT_ENA_W1C                  (0xc38)
+#define RVU_PF_MSIX_VECX_ADDR(a)            (0x000 | (a) << 4)
+#define RVU_PF_MSIX_VECX_CTL(a)             (0x008 | (a) << 4)
+#define RVU_PF_MSIX_PBAX(a)                 (0xF0000 | (a) << 3)
+
+
+#define NPA_AF_BLK_RST                  (0x0000)
+#define NIX_AF_BLK_RST                  (0x00B0)
+#define SSO_AF_BLK_RST                  (0x10f8)
+#define TIM_AF_BLK_RST                  (0x10)
+#define CPT_AF_BLK_RST                  (0x46000)
+#define NDC_AF_BLK_RST                  (0x002F0)
+#define NPC_AF_BLK_RST                  (0x00040)
+
+#endif /* RVU_REG_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_struct.h b/drivers/soc/marvell/octeontx2/rvu_struct.h
new file mode 100644
index 0000000..4122559
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/rvu_struct.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef RVU_STRUCT_H
+#define RVU_STRUCT_H
+
+/*
+ * RVU Block Address Enumeration
+ */
+enum rvu_block_addr_e {
+	BLKADDR_RVUM    = 0x0ULL,
+	BLKADDR_LMT     = 0x1ULL,
+	BLKADDR_MSIX    = 0x2ULL,
+	BLKADDR_NPA     = 0x3ULL,
+	BLKADDR_NIX0    = 0x4ULL,
+	BLKADDR_NIX1    = 0x5ULL,
+	BLKADDR_NPC     = 0x6ULL,
+	BLKADDR_SSO     = 0x7ULL,
+	BLKADDR_SSOW    = 0x8ULL,
+	BLKADDR_TIM     = 0x9ULL,
+	BLKADDR_CPT0    = 0xaULL,
+	BLKADDR_CPT1    = 0xbULL,
+	BLKADDR_NDC0    = 0xcULL,
+	BLKADDR_NDC1    = 0xdULL,
+	BLKADDR_NDC2    = 0xeULL,
+	BLK_COUNT	= 0xfULL,
+};
+
+#endif /* RVU_STRUCT_H */
-- 
2.7.4



* [PATCH v2 03/15] soc: octeontx2: Gather RVU blocks HW info
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch gathers the NPA/NIX/SSO/SSOW/TIM/CPT RVU blocks'
HW info, such as the number of LFs. Important register offsets
are saved for later use, to avoid code duplication for each block.
A bitmap is allocated for each block, which will later be used
to allocate an LF to an RVU PF/VF.

Also add the RVU NIX/NPA block registers and a few registers
of other blocks.
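
As a rough illustration of how such a per-block bitmap could later be
consumed (the actual allocation logic lands in a later patch of this
series, so treat the helper below as an assumption, built only on the
struct rsrc_bmap introduced here and the standard kernel bitmap API):

#include <linux/bitops.h>
#include <linux/errno.h>

/* Hypothetical sketch: grab a free LF slot from a block's bitmap.
 * Returns the LF index, or -ENOSPC when the block has no free LFs.
 */
static int example_rvu_alloc_lf(struct rsrc_bmap *lf)
{
	int id;

	id = find_first_zero_bit(lf->bmap, lf->max);
	if (id >= lf->max)
		return -ENOSPC;
	set_bit(id, lf->bmap);	/* mark this LF as in use */
	return id;
}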

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/rvu.c     | 167 ++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu.h     |  21 ++
 drivers/soc/marvell/octeontx2/rvu_reg.h | 335 +++++++++++++++++++++++++++++++-
 3 files changed, 518 insertions(+), 5 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index d40fabf..fa5f40b 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -57,6 +57,15 @@ int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
 	return -EBUSY;
 }
 
+int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
+{
+	rsrc->bmap = kcalloc(BITS_TO_LONGS(rsrc->max),
+			     sizeof(long), GFP_KERNEL);
+	if (!rsrc->bmap)
+		return -ENOMEM;
+	return 0;
+}
+
 static void rvu_check_block_implemented(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
@@ -98,6 +107,157 @@ static void rvu_reset_all_blocks(struct rvu *rvu)
 	rvu_block_reset(rvu, BLKADDR_NDC2, NDC_AF_BLK_RST);
 }
 
+static void rvu_free_hw_resources(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int id;
+
+	/* Free all bitmaps */
+	for (id = 0; id < BLK_COUNT; id++) {
+		block = &hw->block[id];
+		kfree(block->lf.bmap);
+	}
+}
+
+static int rvu_setup_hw_resources(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int err;
+	u64 cfg;
+
+	/* Get HW supported max RVU PF & VF count */
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
+	hw->total_pfs = (cfg >> 32) & 0xFF;
+	hw->total_vfs = (cfg >> 20) & 0xFFF;
+	hw->max_vfs_per_pf = (cfg >> 40) & 0xFF;
+
+	/* Init NPA LF's bitmap */
+	block = &hw->block[BLKADDR_NPA];
+	if (!block->implemented)
+		goto nix;
+	cfg = rvu_read64(rvu, BLKADDR_NPA, NPA_AF_CONST);
+	block->lf.max = (cfg >> 16) & 0xFFF;
+	block->addr = BLKADDR_NPA;
+	block->lfshift = 8;
+	block->lookup_reg = NPA_AF_RVU_LF_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_NPA_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NPA_CFG;
+	block->lfcfg_reg = NPA_PRIV_LFX_CFG;
+	block->msixcfg_reg = NPA_PRIV_LFX_INT_CFG;
+	block->lfreset_reg = NPA_AF_LF_RST;
+	sprintf(block->name, "NPA");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+nix:
+	/* Init NIX LF's bitmap */
+	block = &hw->block[BLKADDR_NIX0];
+	if (!block->implemented)
+		goto sso;
+	cfg = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
+	block->lf.max = cfg & 0xFFF;
+	block->addr = BLKADDR_NIX0;
+	block->lfshift = 8;
+	block->lookup_reg = NIX_AF_RVU_LF_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_NIX_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NIX_CFG;
+	block->lfcfg_reg = NIX_PRIV_LFX_CFG;
+	block->msixcfg_reg = NIX_PRIV_LFX_INT_CFG;
+	block->lfreset_reg = NIX_AF_LF_RST;
+	sprintf(block->name, "NIX");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+sso:
+	/* Init SSO group's bitmap */
+	block = &hw->block[BLKADDR_SSO];
+	if (!block->implemented)
+		goto ssow;
+	cfg = rvu_read64(rvu, BLKADDR_SSO, SSO_AF_CONST);
+	block->lf.max = cfg & 0xFFFF;
+	block->addr = BLKADDR_SSO;
+	block->multislot = true;
+	block->lfshift = 3;
+	block->lookup_reg = SSO_AF_RVU_LF_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_SSO_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_SSO_CFG;
+	block->lfcfg_reg = SSO_PRIV_LFX_HWGRP_CFG;
+	block->msixcfg_reg = SSO_PRIV_LFX_HWGRP_INT_CFG;
+	block->lfreset_reg = SSO_AF_LF_HWGRP_RST;
+	sprintf(block->name, "SSO GROUP");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+ssow:
+	/* Init SSO workslot's bitmap */
+	block = &hw->block[BLKADDR_SSOW];
+	if (!block->implemented)
+		goto tim;
+	block->lf.max = (cfg >> 56) & 0xFF;
+	block->addr = BLKADDR_SSOW;
+	block->multislot = true;
+	block->lfshift = 3;
+	block->lookup_reg = SSOW_AF_RVU_LF_HWS_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_SSOW_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_SSOW_CFG;
+	block->lfcfg_reg = SSOW_PRIV_LFX_HWS_CFG;
+	block->msixcfg_reg = SSOW_PRIV_LFX_HWS_INT_CFG;
+	block->lfreset_reg = SSOW_AF_LF_HWS_RST;
+	sprintf(block->name, "SSOWS");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+tim:
+	/* Init TIM LF's bitmap */
+	block = &hw->block[BLKADDR_TIM];
+	if (!block->implemented)
+		goto cpt;
+	cfg = rvu_read64(rvu, BLKADDR_TIM, TIM_AF_CONST);
+	block->lf.max = cfg & 0xFFFF;
+	block->addr = BLKADDR_TIM;
+	block->multislot = true;
+	block->lfshift = 3;
+	block->lookup_reg = TIM_AF_RVU_LF_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_TIM_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_TIM_CFG;
+	block->lfcfg_reg = TIM_PRIV_LFX_CFG;
+	block->msixcfg_reg = TIM_PRIV_LFX_INT_CFG;
+	block->lfreset_reg = TIM_AF_LF_RST;
+	sprintf(block->name, "TIM");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+cpt:
+	/* Init CPT LF's bitmap */
+	block = &hw->block[BLKADDR_CPT0];
+	if (!block->implemented)
+		return 0;
+	cfg = rvu_read64(rvu, BLKADDR_CPT0, CPT_AF_CONSTANTS0);
+	block->lf.max = cfg & 0xFF;
+	block->addr = BLKADDR_CPT0;
+	block->multislot = true;
+	block->lfshift = 3;
+	block->lookup_reg = CPT_AF_RVU_LF_CFG_DEBUG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_CPT_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_CPT_CFG;
+	block->lfcfg_reg = CPT_PRIV_LFX_CFG;
+	block->msixcfg_reg = CPT_PRIV_LFX_INT_CFG;
+	block->lfreset_reg = CPT_AF_LF_RST;
+	sprintf(block->name, "CPT");
+	err = rvu_alloc_bitmap(&block->lf);
+	if (err)
+		return err;
+
+	return 0;
+}
+
 static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
@@ -156,6 +316,10 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	rvu_reset_all_blocks(rvu);
 
+	err = rvu_setup_hw_resources(rvu);
+	if (err)
+		goto err_release_regions;
+
 	return 0;
 
 err_release_regions:
@@ -173,6 +337,9 @@ static void rvu_remove(struct pci_dev *pdev)
 {
 	struct rvu *rvu = pci_get_drvdata(pdev);
 
+	rvu_reset_all_blocks(rvu);
+	rvu_free_hw_resources(rvu);
+
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index e2c54d0..592b820 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -23,11 +23,31 @@
 
 #define NAME_SIZE				32
 
+struct rsrc_bmap {
+	unsigned long *bmap;	/* Pointer to resource bitmap */
+	u16  max;		/* Max resource id or count */
+};
+
 struct rvu_block {
+	struct rsrc_bmap lf;
+	bool multislot;
 	bool implemented;
+	u8   addr;  /* RVU_BLOCK_ADDR_E */
+	u8   lfshift;
+	u64  lookup_reg;
+	u64  pf_lfcnt_reg;
+	u64  vf_lfcnt_reg;
+	u64  lfcfg_reg;
+	u64  msixcfg_reg;
+	u64  lfreset_reg;
+	unsigned char name[NAME_SIZE];
 };
 
 struct rvu_hwinfo {
+	u8	total_pfs;   /* MAX RVU PFs HW supports */
+	u16	total_vfs;   /* Max RVU VFs HW supports */
+	u16	max_vfs_per_pf; /* Max VFs that can be attached to a PF */
+
 	struct rvu_block block[BLK_COUNT]; /* Block info */
 };
 
@@ -63,6 +83,7 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
  * RVU
  */
 
+int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_reg.h b/drivers/soc/marvell/octeontx2/rvu_reg.h
index e7abd7e..a7995d1 100644
--- a/drivers/soc/marvell/octeontx2/rvu_reg.h
+++ b/drivers/soc/marvell/octeontx2/rvu_reg.h
@@ -101,12 +101,337 @@
 #define RVU_PF_MSIX_VECX_CTL(a)             (0x008 | (a) << 4)
 #define RVU_PF_MSIX_PBAX(a)                 (0xF0000 | (a) << 3)
 
-
+/* NPA block's admin function registers */
 #define NPA_AF_BLK_RST                  (0x0000)
-#define NIX_AF_BLK_RST                  (0x00B0)
-#define SSO_AF_BLK_RST                  (0x10f8)
-#define TIM_AF_BLK_RST                  (0x10)
-#define CPT_AF_BLK_RST                  (0x46000)
+#define NPA_AF_CONST                    (0x0010)
+#define NPA_AF_CONST1                   (0x0018)
+#define NPA_AF_LF_RST                   (0x0020)
+#define NPA_AF_GEN_CFG                  (0x0030)
+#define NPA_AF_NDC_CFG                  (0x0040)
+#define NPA_AF_INP_CTL                  (0x00D0)
+#define NPA_AF_ACTIVE_CYCLES_PC         (0x00F0)
+#define NPA_AF_AVG_DELAY                (0x0100)
+#define NPA_AF_GEN_INT                  (0x0140)
+#define NPA_AF_GEN_INT_W1S              (0x0148)
+#define NPA_AF_GEN_INT_ENA_W1S          (0x0150)
+#define NPA_AF_GEN_INT_ENA_W1C          (0x0158)
+#define NPA_AF_RVU_INT                  (0x0160)
+#define NPA_AF_RVU_INT_W1S              (0x0168)
+#define NPA_AF_RVU_INT_ENA_W1S          (0x0170)
+#define NPA_AF_RVU_INT_ENA_W1C          (0x0178)
+#define NPA_AF_ERR_INT			(0x0180)
+#define NPA_AF_ERR_INT_W1S		(0x0188)
+#define NPA_AF_ERR_INT_ENA_W1S		(0x0190)
+#define NPA_AF_ERR_INT_ENA_W1C		(0x0198)
+#define NPA_AF_RAS			(0x01A0)
+#define NPA_AF_RAS_W1S			(0x01A8)
+#define NPA_AF_RAS_ENA_W1S		(0x01B0)
+#define NPA_AF_RAS_ENA_W1C		(0x01B8)
+#define NPA_AF_BP_TEST                  (0x0200)
+#define NPA_AF_ECO                      (0x0300)
+#define NPA_AF_AQ_CFG                   (0x0600)
+#define NPA_AF_AQ_BASE                  (0x0610)
+#define NPA_AF_AQ_STATUS		(0x0620)
+#define NPA_AF_AQ_DOOR                  (0x0630)
+#define NPA_AF_AQ_DONE_WAIT             (0x0640)
+#define NPA_AF_AQ_DONE                  (0x0650)
+#define NPA_AF_AQ_DONE_ACK              (0x0660)
+#define NPA_AF_AQ_DONE_INT              (0x0680)
+#define NPA_AF_AQ_DONE_INT_W1S          (0x0688)
+#define NPA_AF_AQ_DONE_ENA_W1S          (0x0690)
+#define NPA_AF_AQ_DONE_ENA_W1C          (0x0698)
+#define NPA_AF_LFX_AURAS_CFG(a)         (0x4000 | (a) << 18)
+#define NPA_AF_LFX_LOC_AURAS_BASE(a)    (0x4010 | (a) << 18)
+#define NPA_AF_LFX_QINTS_CFG(a)         (0x4100 | (a) << 18)
+#define NPA_AF_LFX_QINTS_BASE(a)        (0x4110 | (a) << 18)
+#define NPA_PRIV_AF_INT_CFG             (0x10000)
+#define NPA_PRIV_LFX_CFG		(0x10010)
+#define NPA_PRIV_LFX_INT_CFG		(0x10020)
+#define NPA_AF_RVU_LF_CFG_DEBUG         (0x10030)
+
+
+/* NIX block's admin function registers */
+#define NIX_AF_CFG			(0x0000)
+#define NIX_AF_STATUS			(0x0010)
+#define NIX_AF_NDC_CFG			(0x0018)
+#define NIX_AF_CONST			(0x0020)
+#define NIX_AF_CONST1			(0x0028)
+#define NIX_AF_CONST2			(0x0030)
+#define NIX_AF_CONST3			(0x0038)
+#define NIX_AF_SQ_CONST			(0x0040)
+#define NIX_AF_CQ_CONST			(0x0048)
+#define NIX_AF_RQ_CONST			(0x0050)
+#define NIX_AF_PSE_CONST		(0x0060)
+#define NIX_AF_TL1_CONST		(0x0070)
+#define NIX_AF_TL2_CONST		(0x0078)
+#define NIX_AF_TL3_CONST		(0x0080)
+#define NIX_AF_TL4_CONST		(0x0088)
+#define NIX_AF_MDQ_CONST		(0x0090)
+#define NIX_AF_MC_MIRROR_CONST		(0x0098)
+#define NIX_AF_LSO_CFG			(0x00A8)
+#define NIX_AF_BLK_RST			(0x00B0)
+#define NIX_AF_TX_TSTMP_CFG		(0x00C0)
+#define NIX_AF_RX_CFG			(0x00D0)
+#define NIX_AF_AVG_DELAY		(0x00E0)
+#define NIX_AF_CINT_DELAY		(0x00F0)
+#define NIX_AF_RX_MCAST_BASE		(0x0100)
+#define NIX_AF_RX_MCAST_CFG		(0x0110)
+#define NIX_AF_RX_MCAST_BUF_BASE	(0x0120)
+#define NIX_AF_RX_MCAST_BUF_CFG		(0x0130)
+#define NIX_AF_RX_MIRROR_BUF_BASE	(0x0140)
+#define NIX_AF_RX_MIRROR_BUF_CFG	(0x0148)
+#define NIX_AF_LF_RST			(0x0150)
+#define NIX_AF_GEN_INT			(0x0160)
+#define NIX_AF_GEN_INT_W1S		(0x0168)
+#define NIX_AF_GEN_INT_ENA_W1S		(0x0170)
+#define NIX_AF_GEN_INT_ENA_W1C		(0x0178)
+#define NIX_AF_ERR_INT			(0x0180)
+#define NIX_AF_ERR_INT_W1S		(0x0188)
+#define NIX_AF_ERR_INT_ENA_W1S		(0x0190)
+#define NIX_AF_ERR_INT_ENA_W1C		(0x0198)
+#define NIX_AF_RAS			(0x01A0)
+#define NIX_AF_RAS_W1S			(0x01A8)
+#define NIX_AF_RAS_ENA_W1S		(0x01B0)
+#define NIX_AF_RAS_ENA_W1C		(0x01B8)
+#define NIX_AF_RVU_INT			(0x01C0)
+#define NIX_AF_RVU_INT_W1S		(0x01C8)
+#define NIX_AF_RVU_INT_ENA_W1S		(0x01D0)
+#define NIX_AF_RVU_INT_ENA_W1C		(0x01D8)
+#define NIX_AF_TCP_TIMER		(0x01E0)
+#define NIX_AF_RX_WQE_TAG_CTL		(0x01F0)
+#define NIX_AF_RX_DEF_OL2		(0x0200)
+#define NIX_AF_RX_DEF_OIP4		(0x0210)
+#define NIX_AF_RX_DEF_IIP4		(0x0220)
+#define NIX_AF_RX_DEF_OIP6		(0x0230)
+#define NIX_AF_RX_DEF_IIP6		(0x0240)
+#define NIX_AF_RX_DEF_OTCP		(0x0250)
+#define NIX_AF_RX_DEF_ITCP		(0x0260)
+#define NIX_AF_RX_DEF_OUDP		(0x0270)
+#define NIX_AF_RX_DEF_IUDP		(0x0280)
+#define NIX_AF_RX_DEF_OSCTP		(0x0290)
+#define NIX_AF_RX_DEF_ISCTP		(0x02A0)
+#define NIX_AF_RX_DEF_IPSECX		(0x02B0)
+#define NIX_AF_RX_IPSEC_GEN_CFG		(0x0300)
+#define NIX_AF_RX_CPTX_INST_ADDR	(0x0310)
+#define NIX_AF_NDC_TX_SYNC		(0x03F0)
+#define NIX_AF_AQ_CFG			(0x0400)
+#define NIX_AF_AQ_BASE			(0x0410)
+#define NIX_AF_AQ_STATUS		(0x0420)
+#define NIX_AF_AQ_DOOR			(0x0430)
+#define NIX_AF_AQ_DONE_WAIT		(0x0440)
+#define NIX_AF_AQ_DONE			(0x0450)
+#define NIX_AF_AQ_DONE_ACK		(0x0460)
+#define NIX_AF_AQ_DONE_TIMER		(0x0470)
+#define NIX_AF_AQ_DONE_INT		(0x0480)
+#define NIX_AF_AQ_DONE_INT_W1S		(0x0488)
+#define NIX_AF_AQ_DONE_ENA_W1S		(0x0490)
+#define NIX_AF_AQ_DONE_ENA_W1C		(0x0498)
+#define NIX_AF_RX_LINKX_SLX_SPKT_CNT	(0x0500)
+#define NIX_AF_RX_LINKX_SLX_SXQE_CNT	(0x0510)
+#define NIX_AF_RX_MCAST_JOBSX_SW_CNT	(0x0520)
+#define NIX_AF_RX_MIRROR_JOBSX_SW_CNT	(0x0530)
+#define NIX_AF_RX_LINKX_CFG(a)		(0x0540 | (a) << 16)
+#define NIX_AF_RX_SW_SYNC		(0x0550)
+#define NIX_AF_RX_SW_SYNC_DONE		(0x0560)
+#define NIX_AF_SEB_ECO			(0x0600)
+#define NIX_AF_SEB_TEST_BP		(0x0610)
+#define NIX_AF_NORM_TX_FIFO_STATUS	(0x0620)
+#define NIX_AF_EXPR_TX_FIFO_STATUS	(0x0630)
+#define NIX_AF_SDP_TX_FIFO_STATUS	(0x0640)
+#define NIX_AF_TX_NPC_CAPTURE_CONFIG	(0x0660)
+#define NIX_AF_TX_NPC_CAPTURE_INFO	(0x0670)
+
+#define NIX_AF_DEBUG_NPC_RESP_DATAX(a)          (0x680 | (a) << 3)
+#define NIX_AF_SMQX_CFG(a)                      (0x700 | (a) << 16)
+#define NIX_AF_PSE_CHANNEL_LEVEL                (0x800)
+#define NIX_AF_PSE_SHAPER_CFG                   (0x810)
+#define NIX_AF_TX_EXPR_CREDIT			(0x830)
+#define NIX_AF_MARK_FORMATX_CTL(a)              (0x900 | (a) << 18)
+#define NIX_AF_TX_LINKX_NORM_CREDIT(a)		(0xA00 | (a) << 16)
+#define NIX_AF_TX_LINKX_EXPR_CREDIT(a)		(0xA10 | (a) << 16)
+#define NIX_AF_TX_LINKX_SW_XOFF(a)              (0xA20 | (a) << 16)
+#define NIX_AF_TX_LINKX_HW_XOFF(a)              (0xA30 | (a) << 16)
+#define NIX_AF_SDP_LINK_CREDIT                  (0xa40)
+#define NIX_AF_SDP_SW_XOFFX(a)                  (0xA60 | (a) << 3)
+#define NIX_AF_SDP_HW_XOFFX(a)                  (0xAC0 | (a) << 3)
+#define NIX_AF_TL4X_BP_STATUS(a)                (0xB00 | (a) << 16)
+#define NIX_AF_TL4X_SDP_LINK_CFG(a)             (0xB10 | (a) << 16)
+#define NIX_AF_TL1X_SCHEDULE(a)                 (0xC00 | (a) << 16)
+#define NIX_AF_TL1X_SHAPE(a)                    (0xC10 | (a) << 16)
+#define NIX_AF_TL1X_CIR(a)                      (0xC20 | (a) << 16)
+#define NIX_AF_TL1X_SHAPE_STATE(a)              (0xC50 | (a) << 16)
+#define NIX_AF_TL1X_SW_XOFF(a)                  (0xC70 | (a) << 16)
+#define NIX_AF_TL1X_TOPOLOGY(a)                 (0xC80 | (a) << 16)
+#define NIX_AF_TL1X_GREEN(a)                    (0xC90 | (a) << 16)
+#define NIX_AF_TL1X_YELLOW(a)                   (0xCA0 | (a) << 16)
+#define NIX_AF_TL1X_RED(a)                      (0xCB0 | (a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG0(a)                (0xCC0 | (a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG1(a)                (0xCC8 | (a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG2(a)                (0xCD0 | (a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG3(a)                (0xCD8 | (a) << 16)
+#define NIX_AF_TL1A_DEBUG                       (0xce0)
+#define NIX_AF_TL1B_DEBUG                       (0xcf0)
+#define NIX_AF_TL1_DEBUG_GREEN                  (0xd00)
+#define NIX_AF_TL1_DEBUG_NODE                   (0xd10)
+#define NIX_AF_TL1X_DROPPED_PACKETS(a)          (0xD20 | (a) << 16)
+#define NIX_AF_TL1X_DROPPED_BYTES(a)            (0xD30 | (a) << 16)
+#define NIX_AF_TL1X_RED_PACKETS(a)              (0xD40 | (a) << 16)
+#define NIX_AF_TL1X_RED_BYTES(a)                (0xD50 | (a) << 16)
+#define NIX_AF_TL1X_YELLOW_PACKETS(a)           (0xD60 | (a) << 16)
+#define NIX_AF_TL1X_YELLOW_BYTES(a)             (0xD70 | (a) << 16)
+#define NIX_AF_TL1X_GREEN_PACKETS(a)            (0xD80 | (a) << 16)
+#define NIX_AF_TL1X_GREEN_BYTES(a)              (0xD90 | (a) << 16)
+#define NIX_AF_TL2X_SCHEDULE(a)                 (0xE00 | (a) << 16)
+#define NIX_AF_TL2X_SHAPE(a)                    (0xE10 | (a) << 16)
+#define NIX_AF_TL2X_CIR(a)                      (0xE20 | (a) << 16)
+#define NIX_AF_TL2X_PIR(a)                      (0xE30 | (a) << 16)
+#define NIX_AF_TL2X_SCHED_STATE(a)              (0xE40 | (a) << 16)
+#define NIX_AF_TL2X_SHAPE_STATE(a)              (0xE50 | (a) << 16)
+#define NIX_AF_TL2X_POINTERS(a)                 (0xE60 | (a) << 16)
+#define NIX_AF_TL2X_SW_XOFF(a)                  (0xE70 | (a) << 16)
+#define NIX_AF_TL2X_TOPOLOGY(a)                 (0xE80 | (a) << 16)
+#define NIX_AF_TL2X_PARENT(a)                   (0xE88 | (a) << 16)
+#define NIX_AF_TL2X_GREEN(a)                    (0xE90 | (a) << 16)
+#define NIX_AF_TL2X_YELLOW(a)                   (0xEA0 | (a) << 16)
+#define NIX_AF_TL2X_RED(a)                      (0xEB0 | (a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG0(a)                (0xEC0 | (a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG1(a)                (0xEC8 | (a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG2(a)                (0xED0 | (a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG3(a)                (0xED8 | (a) << 16)
+#define NIX_AF_TL2A_DEBUG                       (0xee0)
+#define NIX_AF_TL2B_DEBUG                       (0xef0)
+#define NIX_AF_TL3X_SCHEDULE(a)                 (0x1000 | (a) << 16)
+#define NIX_AF_TL3X_SHAPE(a)                    (0x1010 | (a) << 16)
+#define NIX_AF_TL3X_CIR(a)                      (0x1020 | (a) << 16)
+#define NIX_AF_TL3X_PIR(a)                      (0x1030 | (a) << 16)
+#define NIX_AF_TL3X_SCHED_STATE(a)              (0x1040 | (a) << 16)
+#define NIX_AF_TL3X_SHAPE_STATE(a)              (0x1050 | (a) << 16)
+#define NIX_AF_TL3X_POINTERS(a)                 (0x1060 | (a) << 16)
+#define NIX_AF_TL3X_SW_XOFF(a)                  (0x1070 | (a) << 16)
+#define NIX_AF_TL3X_TOPOLOGY(a)                 (0x1080 | (a) << 16)
+#define NIX_AF_TL3X_PARENT(a)                   (0x1088 | (a) << 16)
+#define NIX_AF_TL3X_GREEN(a)                    (0x1090 | (a) << 16)
+#define NIX_AF_TL3X_YELLOW(a)                   (0x10A0 | (a) << 16)
+#define NIX_AF_TL3X_RED(a)                      (0x10B0 | (a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG0(a)                (0x10C0 | (a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG1(a)                (0x10C8 | (a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG2(a)                (0x10D0 | (a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG3(a)                (0x10D8 | (a) << 16)
+#define NIX_AF_TL3A_DEBUG                       (0x10e0)
+#define NIX_AF_TL3B_DEBUG                       (0x10f0)
+#define NIX_AF_TL4X_SCHEDULE(a)                 (0x1200 | (a) << 16)
+#define NIX_AF_TL4X_SHAPE(a)                    (0x1210 | (a) << 16)
+#define NIX_AF_TL4X_CIR(a)                      (0x1220 | (a) << 16)
+#define NIX_AF_TL4X_PIR(a)                      (0x1230 | (a) << 16)
+#define NIX_AF_TL4X_SCHED_STATE(a)              (0x1240 | (a) << 16)
+#define NIX_AF_TL4X_SHAPE_STATE(a)              (0x1250 | (a) << 16)
+#define NIX_AF_TL4X_POINTERS(a)                 (0x1260 | (a) << 16)
+#define NIX_AF_TL4X_SW_XOFF(a)                  (0x1270 | (a) << 16)
+#define NIX_AF_TL4X_TOPOLOGY(a)                 (0x1280 | (a) << 16)
+#define NIX_AF_TL4X_PARENT(a)                   (0x1288 | (a) << 16)
+#define NIX_AF_TL4X_GREEN(a)                    (0x1290 | (a) << 16)
+#define NIX_AF_TL4X_YELLOW(a)                   (0x12A0 | (a) << 16)
+#define NIX_AF_TL4X_RED(a)                      (0x12B0 | (a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG0(a)                (0x12C0 | (a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG1(a)                (0x12C8 | (a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG2(a)                (0x12D0 | (a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG3(a)                (0x12D8 | (a) << 16)
+#define NIX_AF_TL4A_DEBUG                       (0x12e0)
+#define NIX_AF_TL4B_DEBUG                       (0x12f0)
+#define NIX_AF_MDQX_SCHEDULE(a)                 (0x1400 | (a) << 16)
+#define NIX_AF_MDQX_SHAPE(a)                    (0x1410 | (a) << 16)
+#define NIX_AF_MDQX_CIR(a)                      (0x1420 | (a) << 16)
+#define NIX_AF_MDQX_PIR(a)                      (0x1430 | (a) << 16)
+#define NIX_AF_MDQX_SCHED_STATE(a)              (0x1440 | (a) << 16)
+#define NIX_AF_MDQX_SHAPE_STATE(a)              (0x1450 | (a) << 16)
+#define NIX_AF_MDQX_POINTERS(a)                 (0x1460 | (a) << 16)
+#define NIX_AF_MDQX_SW_XOFF(a)                  (0x1470 | (a) << 16)
+#define NIX_AF_MDQX_PARENT(a)                   (0x1480 | (a) << 16)
+#define NIX_AF_MDQX_MD_DEBUG(a)                 (0x14C0 | (a) << 16)
+#define NIX_AF_MDQX_PTR_FIFO(a)                 (0x14D0 | (a) << 16)
+#define NIX_AF_MDQA_DEBUG                       (0x14e0)
+#define NIX_AF_MDQB_DEBUG                       (0x14f0)
+#define NIX_AF_TL3_TL2X_CFG(a)                  (0x1600 | (a) << 18)
+#define NIX_AF_TL3_TL2X_BP_STATUS(a)            (0x1610 | (a) << 16)
+#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b)         (0x1700 | (a) << 16 | (b) << 3)
+#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b)    (0x1800 | (a) << 18 | (b) << 3)
+#define NIX_AF_TX_MCASTX(a)                     (0x1900 | (a) << 15)
+#define NIX_AF_TX_VTAG_DEFX_CTL(a)              (0x1A00 | (a) << 16)
+#define NIX_AF_TX_VTAG_DEFX_DATA(a)             (0x1A10 | (a) << 16)
+#define NIX_AF_RX_BPIDX_STATUS(a)               (0x1A20 | (a) << 17)
+#define NIX_AF_RX_CHANX_CFG(a)                  (0x1A30 | (a) << 15)
+#define NIX_AF_CINT_TIMERX(a)                   (0x1A40 | (a) << 18)
+#define NIX_AF_LSO_FORMATX_FIELDX(a, b)         (0x1B00 | (a) << 16 | (b) << 3)
+#define NIX_AF_LFX_CFG(a)		(0x4000 | (a) << 17)
+#define NIX_AF_LFX_SQS_CFG(a)		(0x4020 | (a) << 17)
+#define NIX_AF_LFX_TX_CFG2(a)		(0x4028 | (a) << 17)
+#define NIX_AF_LFX_SQS_BASE(a)		(0x4030 | (a) << 17)
+#define NIX_AF_LFX_RQS_CFG(a)		(0x4040 | (a) << 17)
+#define NIX_AF_LFX_RQS_BASE(a)		(0x4050 | (a) << 17)
+#define NIX_AF_LFX_CQS_CFG(a)		(0x4060 | (a) << 17)
+#define NIX_AF_LFX_CQS_BASE(a)		(0x4070 | (a) << 17)
+#define NIX_AF_LFX_TX_CFG(a)		(0x4080 | (a) << 17)
+#define NIX_AF_LFX_TX_PARSE_CFG(a)	(0x4090 | (a) << 17)
+#define NIX_AF_LFX_RX_CFG(a)		(0x40A0 | (a) << 17)
+#define NIX_AF_LFX_RSS_CFG(a)		(0x40C0 | (a) << 17)
+#define NIX_AF_LFX_RSS_BASE(a)		(0x40D0 | (a) << 17)
+#define NIX_AF_LFX_QINTS_CFG(a)		(0x4100 | (a) << 17)
+#define NIX_AF_LFX_QINTS_BASE(a)	(0x4110 | (a) << 17)
+#define NIX_AF_LFX_CINTS_CFG(a)		(0x4120 | (a) << 17)
+#define NIX_AF_LFX_CINTS_BASE(a)	(0x4130 | (a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_CFG0(a)	(0x4140 | (a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_CFG1(a)	(0x4148 | (a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a)	(0x4150 | (a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a)	(0x4158 | (a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a)	(0x4170 | (a) << 17)
+#define NIX_AF_LFX_TX_STATUS(a)		(0x4180 | (a) << 17)
+#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b)	(0x4200 | (a) << 17 | (b) << 3)
+#define NIX_AF_LFX_LOCKX(a, b)		(0x4300 | (a) << 17 | (b) << 3)
+#define NIX_AF_LFX_TX_STATX(a, b)	(0x4400 | (a) << 17 | (b) << 3)
+#define NIX_AF_LFX_RX_STATX(a, b)	(0x4500 | (a) << 17 | (b) << 3)
+#define NIX_AF_LFX_RSS_GRPX(a, b)	(0x4600 | (a) << 17 | (b) << 3)
+#define NIX_AF_RX_NPC_MC_RCV		(0x4700)
+#define NIX_AF_RX_NPC_MC_DROP		(0x4710)
+#define NIX_AF_RX_NPC_MIRROR_RCV	(0x4720)
+#define NIX_AF_RX_NPC_MIRROR_DROP	(0x4730)
+#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a)	(0x4800 | (a) << 16)
+
+#define NIX_PRIV_AF_INT_CFG		(0x8000000)
+#define NIX_PRIV_LFX_CFG		(0x8000010)
+#define NIX_PRIV_LFX_INT_CFG		(0x8000020)
+#define NIX_AF_RVU_LF_CFG_DEBUG		(0x8000030)
+
+/* SSO */
+#define SSO_AF_CONST			(0x1000)
+#define SSO_AF_CONST1			(0x1008)
+#define SSO_AF_BLK_RST			(0x10f8)
+#define SSO_AF_LF_HWGRP_RST		(0x10e0)
+#define SSO_AF_RVU_LF_CFG_DEBUG		(0x3800)
+#define SSO_PRIV_LFX_HWGRP_CFG		(0x10000)
+#define SSO_PRIV_LFX_HWGRP_INT_CFG	(0x20000)
+
+/* SSOW */
+#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG	(0x0010)
+#define SSOW_AF_LF_HWS_RST		(0x0030)
+#define SSOW_PRIV_LFX_HWS_CFG		(0x1000)
+#define SSOW_PRIV_LFX_HWS_INT_CFG	(0x2000)
+
+/* TIM */
+#define TIM_AF_CONST			(0x90)
+#define TIM_PRIV_LFX_CFG		(0x20000)
+#define TIM_PRIV_LFX_INT_CFG		(0x24000)
+#define TIM_AF_RVU_LF_CFG_DEBUG		(0x30000)
+#define TIM_AF_BLK_RST			(0x10)
+#define TIM_AF_LF_RST			(0x20)
+
+/* CPT */
+#define CPT_AF_CONSTANTS0		(0x0000)
+#define CPT_PRIV_LFX_CFG		(0x41000)
+#define CPT_PRIV_LFX_INT_CFG		(0x43000)
+#define CPT_AF_RVU_LF_CFG_DEBUG		(0x45000)
+#define CPT_AF_LF_RST			(0x44000)
+#define CPT_AF_BLK_RST			(0x46000)
+
 #define NDC_AF_BLK_RST                  (0x002F0)
 #define NPC_AF_BLK_RST                  (0x00040)
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 04/15] soc: octeontx2: Add mailbox support infra
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Aleksey Makarov,
	Sunil Goutham, Lukasz Bartosik

From: Aleksey Makarov <amakarov@marvell.com>

This patch adds mailbox support infrastructure APIs.
Each RVU device has a dedicated 64KB mailbox region
shared with its peer for communication. RVU AF has
a separate mailbox region shared with each RVU PF,
and an RVU PF has a separate region shared with each
of its VFs.

This set of APIs is used by this driver (RVU AF) and
by other RVU PF/VF drivers, e.g. netdev, crypto etc.
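
A rough usage sketch (not part of this patch) of how a PF driver might
drive these APIs. The send_ready_msg() wrapper, its hwbase/reg_base
parameters (the PF's ioremapped mbox region and CSR base) and the use
of the READY message are illustrative assumptions; the wait below only
completes once a mbox interrupt handler, added in a later patch,
updates msgs_acked:

#include "mbox.h"

/* Hypothetical helper: send a READY request from PF to AF and wait
 * for its ack. 'hwbase' and 'reg_base' are assumed to be mapped from
 * the PF's PCI BARs elsewhere.
 */
static int send_ready_msg(struct pci_dev *pdev, void *hwbase, void *reg_base)
{
	struct otx2_mbox mbox;
	struct msg_req *req;
	int err;

	/* Single peer (the AF), PF -> AF direction */
	err = otx2_mbox_init(&mbox, hwbase, pdev, reg_base, MBOX_DIR_PFAF, 1);
	if (err)
		return err;

	/* Reserve room for the request and its response in the shared region */
	req = (struct msg_req *)otx2_mbox_alloc_msg_rsp(&mbox, 0, sizeof(*req),
							sizeof(struct msg_rsp));
	if (!req) {
		otx2_mbox_destroy(&mbox);
		return -ENOMEM;
	}
	req->hdr.id = MBOX_MSG_READY;
	req->hdr.sig = OTX2_MBOX_REQ_SIG;

	/* Ring AF's doorbell, then sleep until the response is acked */
	otx2_mbox_msg_send(&mbox, 0);
	err = otx2_mbox_wait_for_rsp(&mbox, 0);

	otx2_mbox_destroy(&mbox);
	return err;
}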

Signed-off-by: Aleksey Makarov <amakarov@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
---
 drivers/soc/marvell/Kconfig             |   4 +
 drivers/soc/marvell/octeontx2/Makefile  |   2 +
 drivers/soc/marvell/octeontx2/mbox.c    | 303 ++++++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/mbox.h    | 142 +++++++++++++++
 drivers/soc/marvell/octeontx2/rvu_reg.h |   4 +
 5 files changed, 455 insertions(+)
 create mode 100644 drivers/soc/marvell/octeontx2/mbox.c
 create mode 100644 drivers/soc/marvell/octeontx2/mbox.h

diff --git a/drivers/soc/marvell/Kconfig b/drivers/soc/marvell/Kconfig
index 4499caf..428d22e 100644
--- a/drivers/soc/marvell/Kconfig
+++ b/drivers/soc/marvell/Kconfig
@@ -4,8 +4,12 @@
 
 menu "Marvell SoC drivers"
 
+config OCTEONTX2_MBOX
+	tristate
+
 config OCTEONTX2_AF
 	tristate "OcteonTX2 RVU Admin Function driver"
+	select OCTEONTX2_MBOX
 	depends on ARM64 && PCI
 	help
 	  This driver supports Marvell's OcteonTX2 Resource Virtualization
diff --git a/drivers/soc/marvell/octeontx2/Makefile b/drivers/soc/marvell/octeontx2/Makefile
index dacbd16..ac17cb9 100644
--- a/drivers/soc/marvell/octeontx2/Makefile
+++ b/drivers/soc/marvell/octeontx2/Makefile
@@ -3,6 +3,8 @@
 # Makefile for Marvell's OcteonTX2 RVU Admin Function driver
 #
 
+obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
 obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
 
+octeontx2_mbox-y := mbox.o
 octeontx2_af-y := rvu.o
diff --git a/drivers/soc/marvell/octeontx2/mbox.c b/drivers/soc/marvell/octeontx2/mbox.c
new file mode 100644
index 0000000..0722fa4
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/mbox.c
@@ -0,0 +1,303 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+
+#include "rvu_reg.h"
+#include "mbox.h"
+
+static const u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+
+void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	struct mbox_hdr *tx_hdr =
+		(struct mbox_hdr *)(mdev->mbase  + mbox->tx_start);
+	struct mbox_hdr *rx_hdr =
+		(struct mbox_hdr *)(mdev->mbase  + mbox->rx_start);
+
+	spin_lock(&mdev->mbox_lock);
+	mdev->msg_size = 0;
+	mdev->rsp_size = 0;
+	tx_hdr->num_msgs = 0;
+	rx_hdr->num_msgs = 0;
+	spin_unlock(&mdev->mbox_lock);
+}
+EXPORT_SYMBOL(otx2_mbox_reset);
+
+void otx2_mbox_destroy(struct otx2_mbox *mbox)
+{
+	mbox->reg_base = NULL;
+	mbox->hwbase = NULL;
+
+	kfree(mbox->dev);
+	mbox->dev = NULL;
+}
+EXPORT_SYMBOL(otx2_mbox_destroy);
+
+int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
+		   void *reg_base, int direction, int ndevs)
+{
+	int devid;
+	struct otx2_mbox_dev *mdev;
+
+	switch (direction) {
+	case MBOX_DIR_AFPF:
+	case MBOX_DIR_PFVF:
+		mbox->tx_start = MBOX_DOWN_TX_START;
+		mbox->rx_start = MBOX_DOWN_RX_START;
+		mbox->tx_size  = MBOX_DOWN_TX_SIZE;
+		mbox->rx_size  = MBOX_DOWN_RX_SIZE;
+		break;
+	case MBOX_DIR_PFAF:
+	case MBOX_DIR_VFPF:
+		mbox->tx_start = MBOX_DOWN_RX_START;
+		mbox->rx_start = MBOX_DOWN_TX_START;
+		mbox->tx_size  = MBOX_DOWN_RX_SIZE;
+		mbox->rx_size  = MBOX_DOWN_TX_SIZE;
+		break;
+	case MBOX_DIR_AFPF_UP:
+	case MBOX_DIR_PFVF_UP:
+		mbox->tx_start = MBOX_UP_TX_START;
+		mbox->rx_start = MBOX_UP_RX_START;
+		mbox->tx_size  = MBOX_UP_TX_SIZE;
+		mbox->rx_size  = MBOX_UP_RX_SIZE;
+		break;
+	case MBOX_DIR_PFAF_UP:
+	case MBOX_DIR_VFPF_UP:
+		mbox->tx_start = MBOX_UP_RX_START;
+		mbox->rx_start = MBOX_UP_TX_START;
+		mbox->tx_size  = MBOX_UP_RX_SIZE;
+		mbox->rx_size  = MBOX_UP_TX_SIZE;
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	switch (direction) {
+	case MBOX_DIR_AFPF:
+	case MBOX_DIR_AFPF_UP:
+		mbox->trigger = RVU_AF_AFPF_MBOX0;
+		mbox->tr_shift = 4;
+		break;
+	case MBOX_DIR_PFAF:
+	case MBOX_DIR_PFAF_UP:
+		mbox->trigger = RVU_PF_PFAF_MBOX1;
+		mbox->tr_shift = 0;
+		break;
+	case MBOX_DIR_PFVF:
+	case MBOX_DIR_PFVF_UP:
+		mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
+		mbox->tr_shift = 12;
+		break;
+	case MBOX_DIR_VFPF:
+	case MBOX_DIR_VFPF_UP:
+		mbox->trigger = RVU_VF_VFPF_MBOX1;
+		mbox->tr_shift = 0;
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	mbox->reg_base = reg_base;
+	mbox->hwbase = hwbase;
+	mbox->pdev = pdev;
+
+	mbox->dev = kcalloc(ndevs, sizeof(struct otx2_mbox_dev), GFP_KERNEL);
+	if (!mbox->dev) {
+		otx2_mbox_destroy(mbox);
+		return -ENOMEM;
+	}
+
+	mbox->ndevs = ndevs;
+	for (devid = 0; devid < ndevs; devid++) {
+		mdev = &mbox->dev[devid];
+		mdev->mbase = mbox->hwbase + (devid * MBOX_SIZE);
+		spin_lock_init(&mdev->mbox_lock);
+		/* Init header to reset value */
+		otx2_mbox_reset(mbox, devid);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(otx2_mbox_init);
+
+int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	int timeout = 0, sleep = 1;
+
+	while (mdev->num_msgs != mdev->msgs_acked) {
+		msleep(sleep);
+		timeout += sleep;
+		if (timeout >= MBOX_RSP_TIMEOUT)
+			return -EIO;
+	}
+	return 0;
+}
+EXPORT_SYMBOL(otx2_mbox_wait_for_rsp);
+
+int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	unsigned long timeout = jiffies + 1 * HZ;
+
+	while (!time_after(jiffies, timeout)) {
+		if (mdev->num_msgs == mdev->msgs_acked)
+			return 0;
+		cpu_relax();
+	}
+	return -EIO;
+}
+EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
+
+void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	struct mbox_hdr *tx_hdr =
+		(struct mbox_hdr *)(mdev->mbase  + mbox->tx_start);
+	struct mbox_hdr *rx_hdr =
+		(struct mbox_hdr *)(mdev->mbase  + mbox->rx_start);
+
+	spin_lock(&mdev->mbox_lock);
+	/* Reset header for next messages */
+	mdev->msg_size = 0;
+	mdev->rsp_size = 0;
+	mdev->msgs_acked = 0;
+
+	/* Sync mbox data into memory */
+	smp_wmb();
+
+	/* num_msgs != 0 signals to the peer that the buffer has a number of
+	 * messages.  So this should be written after writing all the messages
+	 * to the shared memory.
+	 */
+	tx_hdr->num_msgs = mdev->num_msgs;
+	rx_hdr->num_msgs = 0;
+	spin_unlock(&mdev->mbox_lock);
+
+	/* The interrupt should be fired after num_msgs is written
+	 * to the shared memory
+	 */
+	writeq(1, (void __iomem *)mbox->reg_base +
+	       (mbox->trigger | (devid << mbox->tr_shift)));
+}
+EXPORT_SYMBOL(otx2_mbox_msg_send);
+
+struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+					    int size, int size_rsp)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	struct mbox_msghdr *msghdr = NULL;
+
+	spin_lock(&mdev->mbox_lock);
+	size = ALIGN(size, MBOX_MSG_ALIGN);
+	size_rsp = ALIGN(size_rsp, MBOX_MSG_ALIGN);
+	/* Check if there is space in mailbox */
+	if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset)
+		goto exit;
+	if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset)
+		goto exit;
+
+	if (mdev->msg_size == 0)
+		mdev->num_msgs = 0;
+	mdev->num_msgs++;
+
+	msghdr = (struct mbox_msghdr *)(mdev->mbase + mbox->tx_start +
+					msgs_offset + mdev->msg_size);
+	/* Clear the whole msg region */
+	memset(msghdr, 0, sizeof(*msghdr) + size);
+	/* Init message header with reset values */
+	msghdr->ver = OTX2_MBOX_VERSION;
+	mdev->msg_size += size;
+	mdev->rsp_size += size_rsp;
+	msghdr->next_msgoff = mdev->msg_size + msgs_offset;
+exit:
+	spin_unlock(&mdev->mbox_lock);
+
+	return msghdr;
+}
+EXPORT_SYMBOL(otx2_mbox_alloc_msg_rsp);
+
+struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
+				      struct mbox_msghdr *msg)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	unsigned long imsg = mbox->tx_start + msgs_offset;
+	unsigned long irsp = mbox->rx_start + msgs_offset;
+	u16 msgs;
+
+	if (mdev->num_msgs != mdev->msgs_acked)
+		return ERR_PTR(-ENODEV);
+
+	for (msgs = 0; msgs < mdev->msgs_acked; msgs++) {
+		struct mbox_msghdr *pmsg = mdev->mbase + imsg;
+		struct mbox_msghdr *prsp = mdev->mbase + irsp;
+
+		if (msg == pmsg) {
+			if (pmsg->id != prsp->id)
+				return ERR_PTR(-ENODEV);
+			return prsp;
+		}
+
+		imsg = pmsg->next_msgoff;
+		irsp = prsp->next_msgoff;
+	}
+
+	return ERR_PTR(-ENODEV);
+}
+EXPORT_SYMBOL(otx2_mbox_get_rsp);
+
+int
+otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, u16 pcifunc, u16 id)
+{
+	struct msg_rsp *rsp;
+
+	rsp = (struct msg_rsp *)
+	       otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
+	if (!rsp)
+		return -ENOMEM;
+	rsp->hdr.id = id;
+	rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
+	rsp->hdr.rc = MBOX_MSG_INVALID;
+	rsp->hdr.pcifunc = pcifunc;
+	return 0;
+}
+EXPORT_SYMBOL(otx2_reply_invalid_msg);
+
+bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
+{
+	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+	bool ret;
+
+	spin_lock(&mdev->mbox_lock);
+	ret = mdev->num_msgs != 0;
+	spin_unlock(&mdev->mbox_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(otx2_mbox_nonempty);
+
+const char *otx2_mbox_id2name(u16 id)
+{
+	switch (id) {
+#define M(_name, _id, _1, _2) case _id: return # _name;
+	MBOX_MESSAGES
+#undef M
+	default:
+		return "INVALID ID";
+	}
+}
+EXPORT_SYMBOL(otx2_mbox_id2name);
+
+MODULE_AUTHOR("Marvell International Ltd.");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/marvell/octeontx2/mbox.h b/drivers/soc/marvell/octeontx2/mbox.h
new file mode 100644
index 0000000..8e205fd
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/mbox.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef MBOX_H
+#define MBOX_H
+
+#include <linux/etherdevice.h>
+#include <linux/sizes.h>
+
+#include "rvu_struct.h"
+
+#define MBOX_SIZE		SZ_64K
+
+/* Down mbox regions; AF/PF: PF initiated, PF/VF: VF initiated */
+#define MBOX_DOWN_RX_START	0
+#define MBOX_DOWN_RX_SIZE	(46 * SZ_1K)
+#define MBOX_DOWN_TX_START	(MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
+#define MBOX_DOWN_TX_SIZE	(16 * SZ_1K)
+/* Up mbox regions; AF/PF: AF initiated, PF/VF: PF initiated */
+#define MBOX_UP_RX_START	(MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
+#define MBOX_UP_RX_SIZE		SZ_1K
+#define MBOX_UP_TX_START	(MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
+#define MBOX_UP_TX_SIZE		SZ_1K
+
+#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
+# error "incorrect mailbox area sizes"
+#endif
+
+#define MBOX_RSP_TIMEOUT	1000 /* in ms, Time to wait for mbox response */
+
+#define MBOX_MSG_ALIGN		16  /* Align mbox msg start to 16 bytes */
+
+/* Mailbox directions */
+#define MBOX_DIR_AFPF		0  /* AF replies to PF */
+#define MBOX_DIR_PFAF		1  /* PF sends messages to AF */
+#define MBOX_DIR_PFVF		2  /* PF replies to VF */
+#define MBOX_DIR_VFPF		3  /* VF sends messages to PF */
+#define MBOX_DIR_AFPF_UP	4  /* AF sends messages to PF */
+#define MBOX_DIR_PFAF_UP	5  /* PF replies to AF */
+#define MBOX_DIR_PFVF_UP	6  /* PF sends messages to VF */
+#define MBOX_DIR_VFPF_UP	7  /* VF replies to PF */
+
+struct otx2_mbox_dev {
+	void	    *mbase;   /* This dev's mbox region */
+	spinlock_t  mbox_lock;
+	u16         msg_size; /* Total msg size to be sent */
+	u16         rsp_size; /* Total size reserved for responses */
+	u16         num_msgs; /* No of msgs sent or waiting for response */
+	u16         msgs_acked; /* No of msgs for which response is received */
+};
+
+struct otx2_mbox {
+	struct pci_dev *pdev;
+	void   *hwbase;  /* Mbox region advertised by HW */
+	void   *reg_base;/* CSR base for this dev */
+	u64    trigger;  /* Trigger mbox notification */
+	u16    tr_shift; /* Mbox trigger shift */
+	u64    rx_start; /* Offset of Rx region in mbox memory */
+	u64    tx_start; /* Offset of Tx region in mbox memory */
+	u16    rx_size;  /* Size of Rx region */
+	u16    tx_size;  /* Size of Tx region */
+	u16    ndevs;    /* The number of peers */
+	struct otx2_mbox_dev *dev;
+};
+
+/* Header which precedes all mbox messages */
+struct mbox_hdr {
+	u16  num_msgs;   /* No of msgs embedded */
+};
+
+/* Header which precedes every msg and is also part of it */
+struct mbox_msghdr {
+	u16 pcifunc;     /* Who's sending this msg */
+	u16 id;          /* Mbox message ID */
+#define OTX2_MBOX_REQ_SIG (0xdead)
+#define OTX2_MBOX_RSP_SIG (0xbeef)
+	u16 sig;         /* Signature, for detecting corrupted msgs */
+#define OTX2_MBOX_VERSION (0x0001)
+	u16 ver;         /* Version of msg's structure for this ID */
+	u16 next_msgoff; /* Offset of next msg within mailbox region */
+	int rc;          /* Msg processed response code */
+};
+
+void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
+void otx2_mbox_destroy(struct otx2_mbox *mbox);
+int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
+		   void *reg_base, int direction, int ndevs);
+void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
+int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
+int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
+struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+					    int size, int size_rsp);
+struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
+				      struct mbox_msghdr *msg);
+int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid,
+			   u16 pcifunc, u16 id);
+bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid);
+const char *otx2_mbox_id2name(u16 id);
+static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
+						      int devid, int size)
+{
+	return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
+}
+
+/* Mailbox message types */
+#define MBOX_MSG_MASK				0xFFFF
+#define MBOX_MSG_INVALID			0xFFFE
+#define MBOX_MSG_MAX				0xFFFF
+
+#define MBOX_MESSAGES							\
+M(READY,		0x001, msg_req, msg_rsp)
+
+enum {
+#define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id,
+MBOX_MESSAGES
+#undef M
+};
+
+/* Mailbox message formats */
+
+/* Generic request msg used for those mbox messages which
+ * don't send any data in the request.
+ */
+struct msg_req {
+	struct mbox_msghdr hdr;
+};
+
+/* Generic response msg used as an ack or response for those mbox
+ * messages which don't have a specific rsp msg format.
+ */
+struct msg_rsp {
+	struct mbox_msghdr hdr;
+};
+
+#endif /* MBOX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_reg.h b/drivers/soc/marvell/octeontx2/rvu_reg.h
index a7995d1..3bfb1e0 100644
--- a/drivers/soc/marvell/octeontx2/rvu_reg.h
+++ b/drivers/soc/marvell/octeontx2/rvu_reg.h
@@ -101,6 +101,10 @@
 #define RVU_PF_MSIX_VECX_CTL(a)             (0x008 | (a) << 4)
 #define RVU_PF_MSIX_PBAX(a)                 (0xF0000 | (a) << 3)
 
+/* RVU VF registers */
+#define	RVU_VF_VFPF_MBOX0		    (0x00000)
+#define	RVU_VF_VFPF_MBOX1		    (0x00008)
+
 /* NPA block's admin function registers */
 #define NPA_AF_BLK_RST                  (0x0000)
 #define NPA_AF_CONST                    (0x0010)
-- 
2.7.4


* [PATCH v2 05/15] soc: octeontx2: Add mailbox IRQ and msg handlers
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch adds support for mailbox interrupt and message
handling. The mailbox region is mapped and a workqueue is
registered for message handling. The mailbox IRQ of each RVU PF
is enabled and an interrupt handler is registered. When the IRQ
fires, work is queued on the mbox workqueue so the messages get
processed.
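
As a reference, a minimal sketch (not part of this patch) of how a
peer PF driver might drive this path using the mbox APIs; it assumes
the PF's mbox was set up with otx2_mbox_init(..., MBOX_DIR_PFAF, 1)
and the helper name is hypothetical:

/* Hypothetical PF-side helper: send READY to the AF (devid 0) and
 * wait for the AF's work-queued handler to reply.
 */
static int pf_send_ready_msg(struct otx2_mbox *mbox)
{
	struct msg_req *req;
	struct mbox_msghdr *rsp;

	req = (struct msg_req *)otx2_mbox_alloc_msg(mbox, 0, sizeof(*req));
	if (!req)
		return -ENOMEM;
	req->hdr.id = MBOX_MSG_READY;
	req->hdr.sig = OTX2_MBOX_REQ_SIG;

	otx2_mbox_msg_send(mbox, 0);		/* rings the AF's mbox IRQ */
	if (otx2_mbox_wait_for_rsp(mbox, 0))	/* AF handler sends a reply */
		return -EIO;

	rsp = otx2_mbox_get_rsp(mbox, 0, &req->hdr);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);
	return rsp->rc;
}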

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/mbox.h       |  14 +-
 drivers/soc/marvell/octeontx2/rvu.c        | 254 +++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/rvu.h        |  22 +++
 drivers/soc/marvell/octeontx2/rvu_struct.h |  22 +++
 4 files changed, 309 insertions(+), 3 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/mbox.h b/drivers/soc/marvell/octeontx2/mbox.h
index 8e205fd..fc593f0 100644
--- a/drivers/soc/marvell/octeontx2/mbox.h
+++ b/drivers/soc/marvell/octeontx2/mbox.h
@@ -33,6 +33,8 @@
 # error "incorrect mailbox area sizes"
 #endif
 
+#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
+
 #define MBOX_RSP_TIMEOUT	1000 /* in ms, Time to wait for mbox response */
 
 #define MBOX_MSG_ALIGN		16  /* Align mbox msg start to 16bytes */
@@ -90,8 +92,9 @@ struct mbox_msghdr {
 
 void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
 void otx2_mbox_destroy(struct otx2_mbox *mbox);
-int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
-		   void *reg_base, int direction, int ndevs);
+int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
+		   struct pci_dev *pdev, void __force *reg_base,
+		   int direction, int ndevs);
 void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
@@ -115,7 +118,7 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
 #define MBOX_MSG_MAX				0xFFFF
 
 #define MBOX_MESSAGES							\
-M(READY,		0x001, msg_req, msg_rsp)
+M(READY,		0x001, msg_req, ready_msg_rsp)
 
 enum {
 #define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id,
@@ -139,4 +142,9 @@ struct msg_rsp {
 	struct mbox_msghdr hdr;
 };
 
+struct ready_msg_rsp {
+	struct mbox_msghdr hdr;
+	u16    sclk_feq;	/* SCLK frequency */
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index fa5f40b..e795c2f 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -258,6 +258,245 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	return 0;
 }
 
+static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
+				struct mbox_msghdr *req)
+{
+	/* Check if valid, if not reply with an invalid msg */
+	if (req->sig != OTX2_MBOX_REQ_SIG)
+		goto bad_message;
+
+	if (req->id == MBOX_MSG_READY)
+		return 0;
+
+bad_message:
+	otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc,
+			       req->id);
+	return -ENODEV;
+}
+
+static void rvu_mbox_handler(struct work_struct *work)
+{
+	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+	struct rvu *rvu = mwork->rvu;
+	struct otx2_mbox_dev *mdev;
+	struct mbox_hdr *req_hdr;
+	struct mbox_msghdr *msg;
+	struct otx2_mbox *mbox;
+	int offset, id, err;
+	u16 pf;
+
+	mbox = &rvu->mbox;
+	pf = mwork - rvu->mbox_wrk;
+	mdev = &mbox->dev[pf];
+
+	/* Process received mbox messages */
+	req_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+	if (req_hdr->num_msgs == 0)
+		return;
+
+	offset = mbox->rx_start + ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+	for (id = 0; id < req_hdr->num_msgs; id++) {
+		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+
+		/* Set which PF sent this message based on mbox IRQ */
+		msg->pcifunc &= ~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT);
+		msg->pcifunc |= (pf << RVU_PFVF_PF_SHIFT);
+		err = rvu_process_mbox_msg(rvu, pf, msg);
+		if (!err) {
+			offset = mbox->rx_start + msg->next_msgoff;
+			continue;
+		}
+
+		if (msg->pcifunc & RVU_PFVF_FUNC_MASK)
+			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d:VF%d\n",
+				 err, otx2_mbox_id2name(msg->id), msg->id, pf,
+				 (msg->pcifunc & RVU_PFVF_FUNC_MASK) - 1);
+		else
+			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d\n",
+				 err, otx2_mbox_id2name(msg->id), msg->id, pf);
+	}
+
+	/* Send mbox responses to PF */
+	otx2_mbox_msg_send(mbox, pf);
+}
+
+static int rvu_mbox_init(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_work *mwork;
+	void __iomem *hwbase = NULL;
+	u64 bar4_addr;
+	int err, pf;
+
+	rvu->mbox_wq = alloc_workqueue("rvu_afpf_mailbox",
+				       WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+				       hw->total_pfs);
+	if (!rvu->mbox_wq)
+		return -ENOMEM;
+
+	rvu->mbox_wrk = devm_kcalloc(rvu->dev, hw->total_pfs,
+				     sizeof(struct rvu_work), GFP_KERNEL);
+	if (!rvu->mbox_wrk) {
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* Map mbox region shared with PFs */
+	bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR);
+	/* Mailbox is a reserved memory (in RAM) region shared between
+	 * RVU devices and shouldn't be mapped as device memory, to allow
+	 * unaligned accesses.
+	 */
+	hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * hw->total_pfs);
+	if (!hwbase) {
+		dev_err(rvu->dev, "Unable to map mailbox region\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	err = otx2_mbox_init(&rvu->mbox, hwbase, rvu->pdev, rvu->afreg_base,
+			     MBOX_DIR_AFPF, hw->total_pfs);
+	if (err)
+		goto exit;
+
+	for (pf = 0; pf < hw->total_pfs; pf++) {
+		mwork = &rvu->mbox_wrk[pf];
+		mwork->rvu = rvu;
+		INIT_WORK(&mwork->work, rvu_mbox_handler);
+	}
+
+	return 0;
+exit:
+	if (hwbase)
+		iounmap((void __iomem *)hwbase);
+	destroy_workqueue(rvu->mbox_wq);
+	return err;
+}
+
+static void rvu_mbox_destroy(struct rvu *rvu)
+{
+	if (rvu->mbox_wq) {
+		flush_workqueue(rvu->mbox_wq);
+		destroy_workqueue(rvu->mbox_wq);
+		rvu->mbox_wq = NULL;
+	}
+
+	if (rvu->mbox.hwbase)
+		iounmap((void __iomem *)rvu->mbox.hwbase);
+
+	otx2_mbox_destroy(&rvu->mbox);
+}
+
+static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	struct otx2_mbox_dev *mdev;
+	struct otx2_mbox *mbox;
+	struct mbox_hdr *hdr;
+	u64 intr;
+	u8  pf;
+
+	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
+	/* Clear interrupts */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT, intr);
+
+	/* Sync with mbox memory region */
+	smp_wmb();
+
+	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+		if (intr & (1ULL << pf)) {
+			mbox = &rvu->mbox;
+			mdev = &mbox->dev[pf];
+			hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+			if (hdr->num_msgs)
+				queue_work(rvu->mbox_wq,
+					   &rvu->mbox_wrk[pf].work);
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+static void rvu_enable_mbox_intr(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+
+	/* Clear spurious irqs, if any */
+	rvu_write64(rvu, BLKADDR_RVUM,
+		    RVU_AF_PFAF_MBOX_INT, INTR_MASK(hw->total_pfs));
+
+	/* Enable mailbox interrupt for all PFs except PF0, i.e. AF itself */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1S,
+		    INTR_MASK(hw->total_pfs) & ~1ULL);
+}
+
+static void rvu_unregister_interrupts(struct rvu *rvu)
+{
+	int irq;
+
+	/* Disable the Mbox interrupt */
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1C,
+		    INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
+
+	for (irq = 0; irq < rvu->num_vec; irq++) {
+		if (rvu->irq_allocated[irq])
+			free_irq(pci_irq_vector(rvu->pdev, irq), rvu);
+	}
+
+	pci_free_irq_vectors(rvu->pdev);
+	rvu->num_vec = 0;
+}
+
+static int rvu_register_interrupts(struct rvu *rvu)
+{
+	int ret;
+
+	rvu->num_vec = pci_msix_vec_count(rvu->pdev);
+
+	rvu->irq_name = devm_kmalloc_array(rvu->dev, rvu->num_vec,
+					   NAME_SIZE, GFP_KERNEL);
+	if (!rvu->irq_name)
+		return -ENOMEM;
+
+	rvu->irq_allocated = devm_kcalloc(rvu->dev, rvu->num_vec,
+					  sizeof(bool), GFP_KERNEL);
+	if (!rvu->irq_allocated)
+		return -ENOMEM;
+
+	/* Enable MSI-X */
+	ret = pci_alloc_irq_vectors(rvu->pdev, rvu->num_vec,
+				    rvu->num_vec, PCI_IRQ_MSIX);
+	if (ret < 0) {
+		dev_err(rvu->dev,
+			"RVUAF: Request for %d msix vectors failed, ret %d\n",
+			rvu->num_vec, ret);
+		return ret;
+	}
+
+	/* Register mailbox interrupt handler */
+	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
+	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
+			  rvu_mbox_intr_handler, 0,
+			  &rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
+	if (ret) {
+		dev_err(rvu->dev,
+			"RVUAF: IRQ registration failed for mbox irq\n");
+		goto fail;
+	}
+
+	rvu->irq_allocated[RVU_AF_INT_VEC_MBOX] = true;
+
+	/* Enable mailbox interrupts from all PFs */
+	rvu_enable_mbox_intr(rvu);
+
+	return 0;
+
+fail:
+	pci_free_irq_vectors(rvu->pdev);
+	return ret;
+}
+
 static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
@@ -320,8 +559,21 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_release_regions;
 
+	err = rvu_mbox_init(rvu);
+	if (err)
+		goto err_hwsetup;
+
+	err = rvu_register_interrupts(rvu);
+	if (err)
+		goto err_mbox;
+
 	return 0;
 
+err_mbox:
+	rvu_mbox_destroy(rvu);
+err_hwsetup:
+	rvu_reset_all_blocks(rvu);
+	rvu_free_hw_resources(rvu);
 err_release_regions:
 	pci_release_regions(pdev);
 err_disable_device:
@@ -337,6 +589,8 @@ static void rvu_remove(struct pci_dev *pdev)
 {
 	struct rvu *rvu = pci_get_drvdata(pdev);
 
+	rvu_unregister_interrupts(rvu);
+	rvu_mbox_destroy(rvu);
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
 
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 592b820..2fb9407 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -12,6 +12,7 @@
 #define RVU_H
 
 #include "rvu_struct.h"
+#include "mbox.h"
 
 /* PCI device IDs */
 #define	PCI_DEVID_OCTEONTX2_RVU_AF		0xA065
@@ -23,6 +24,17 @@
 
 #define NAME_SIZE				32
 
+/* PF_FUNC */
+#define RVU_PFVF_PF_SHIFT	10
+#define RVU_PFVF_PF_MASK	0x3F
+#define RVU_PFVF_FUNC_SHIFT	0
+#define RVU_PFVF_FUNC_MASK	0x3FF
+
+struct rvu_work {
+	struct	work_struct work;
+	struct	rvu *rvu;
+};
+
 struct rsrc_bmap {
 	unsigned long *bmap;	/* Pointer to resource bitmap */
 	u16  max;		/* Max resource id or count */
@@ -57,6 +69,16 @@ struct rvu {
 	struct pci_dev		*pdev;
 	struct device		*dev;
 	struct rvu_hwinfo       *hw;
+
+	/* Mbox */
+	struct otx2_mbox	mbox;
+	struct rvu_work		*mbox_wrk;
+	struct workqueue_struct *mbox_wq;
+
+	/* MSI-X */
+	u16			num_vec;
+	char			*irq_name;
+	bool			*irq_allocated;
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
diff --git a/drivers/soc/marvell/octeontx2/rvu_struct.h b/drivers/soc/marvell/octeontx2/rvu_struct.h
index 4122559..1dc1518 100644
--- a/drivers/soc/marvell/octeontx2/rvu_struct.h
+++ b/drivers/soc/marvell/octeontx2/rvu_struct.h
@@ -33,4 +33,26 @@ enum rvu_block_addr_e {
 	BLK_COUNT	= 0xfULL,
 };
 
+/* RVU Admin function Interrupt Vector Enumeration */
+enum rvu_af_int_vec_e {
+	RVU_AF_INT_VEC_POISON = 0x0,
+	RVU_AF_INT_VEC_PFFLR  = 0x1,
+	RVU_AF_INT_VEC_PFME   = 0x2,
+	RVU_AF_INT_VEC_GEN    = 0x3,
+	RVU_AF_INT_VEC_MBOX   = 0x4,
+};
+
+/**
+ * RVU PF Interrupt Vector Enumeration
+ */
+enum rvu_pf_int_vec_e {
+	RVU_PF_INT_VEC_VFFLR0     = 0x0,
+	RVU_PF_INT_VEC_VFFLR1     = 0x1,
+	RVU_PF_INT_VEC_VFME0      = 0x2,
+	RVU_PF_INT_VEC_VFME1      = 0x3,
+	RVU_PF_INT_VEC_VFPF_MBOX0 = 0x4,
+	RVU_PF_INT_VEC_VFPF_MBOX1 = 0x5,
+	RVU_PF_INT_VEC_AFPF_MBOX  = 0x6,
+};
+
 #endif /* RVU_STRUCT_H */
-- 
2.7.4


* [PATCH v2 06/15] soc: octeontx2: Convert mbox msg id check to a macro
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Aleksey Makarov

From: Aleksey Makarov <amakarov@marvell.com>

With tens of mailbox messages expected to be handled in the future,
checking each message id explicitly could become a lengthy switch
statement. Hence a macro is added to auto-generate the switch case
for each msg id.
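
As an illustration (the FOO_CFG name and its types are hypothetical,
not part of this patch), adding a new message then only needs an entry
in MBOX_MESSAGES in mbox.h plus a handler in rvu.c following the
rvu_mbox_handler_<NAME> naming convention; the switch case itself is
generated by the M() macro:

#define MBOX_MESSAGES						\
M(READY,	0x001, msg_req, ready_msg_rsp)			\
M(FOO_CFG,	0x002, foo_cfg_req, msg_rsp)

static int rvu_mbox_handler_FOO_CFG(struct rvu *rvu,
				    struct foo_cfg_req *req,
				    struct msg_rsp *rsp)
{
	/* Validate the request and program the HW here; a non-zero
	 * return value is copied into rsp->hdr.rc by the generated case.
	 */
	return 0;
}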

Signed-off-by: Aleksey Makarov <amakarov@marvell.com>
---
 drivers/soc/marvell/octeontx2/rvu.c | 44 ++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index e795c2f..25f79bf 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -258,6 +258,12 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	return 0;
 }
 
+static int rvu_mbox_handler_READY(struct rvu *rvu, struct msg_req *req,
+				  struct ready_msg_rsp *rsp)
+{
+	return 0;
+}
+
 static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 				struct mbox_msghdr *req)
 {
@@ -265,13 +271,39 @@ static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 	if (req->sig != OTX2_MBOX_REQ_SIG)
 		goto bad_message;
 
-	if (req->id == MBOX_MSG_READY)
-		return 0;
-
+	switch (req->id) {
+#define M(_name, _id, _req_type, _rsp_type)				\
+	case _id: {							\
+		struct _rsp_type *rsp;					\
+		int err;						\
+									\
+		rsp = (struct _rsp_type *)otx2_mbox_alloc_msg(		\
+			&rvu->mbox, devid,				\
+			sizeof(struct _rsp_type));			\
+		if (rsp) {						\
+			rsp->hdr.id = _id;				\
+			rsp->hdr.sig = OTX2_MBOX_RSP_SIG;		\
+			rsp->hdr.pcifunc = req->pcifunc;		\
+			rsp->hdr.rc = 0;				\
+		}							\
+									\
+		err = rvu_mbox_handler_ ## _name(rvu,			\
+						 (struct _req_type *)req, \
+						 rsp);			\
+		if (rsp && err)						\
+			rsp->hdr.rc = err;				\
+									\
+		return rsp ? err : -ENOMEM;				\
+	}
+MBOX_MESSAGES
+#undef M
+		break;
 bad_message:
-	otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc,
-			       req->id);
-	return -ENODEV;
+	default:
+		otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc,
+				       req->id);
+		return -ENODEV;
+	}
 }
 
 static void rvu_mbox_handler(struct work_struct *work)
-- 
2.7.4


* [PATCH v2 07/15] soc: octeontx2: Scan blocks for LFs provisioned to PF/VF
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Scan all RVU blocks to find any 'LF to RVU PF/VF' mapping done by
low-level firmware. If any are found, mark them as used in the
respective block's LF bitmap and also save the mapped PF/VF's PF_FUNC info.

This is done to avoid reattaching a block LF to a different RVU PF/VF.
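
For clarity, a short sketch (illustrative only, not part of this
patch) of how the scan decodes the per-LF lookup register and the
PF_FUNC value it saves, using the bit layout assumed by the code in
this patch:

/* Bit 63 set means the LF is attached; bits 23:8 hold the owning
 * PF_FUNC, which splits into a PF number and a function (VF) number.
 */
static void rvu_print_lf_owner(u64 cfg)
{
	u16 pcifunc = (cfg >> 8) & 0xFFFF;
	int pf = (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
	int func = pcifunc & RVU_PFVF_FUNC_MASK;

	if (!(cfg & BIT_ULL(63)))
		return;		/* LF not provisioned by firmware */

	if (func)
		pr_info("LF attached to PF%d:VF%d\n", pf, func - 1);
	else
		pr_info("LF attached to PF%d\n", pf);
}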

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/rvu.c        | 148 ++++++++++++++++++++++++++++-
 drivers/soc/marvell/octeontx2/rvu.h        |  16 ++++
 drivers/soc/marvell/octeontx2/rvu_struct.h |  18 ++++
 3 files changed, 180 insertions(+), 2 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 25f79bf..9539ab9 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -22,6 +22,8 @@
 #define DRV_STRING      "Marvell OcteonTX2 RVU Admin Function Driver"
 #define DRV_VERSION	"1.0"
 
+static int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
+
 /* Supported devices */
 static const struct pci_device_id rvu_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AF) },
@@ -66,6 +68,91 @@ int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
 	return 0;
 }
 
+static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
+				struct rvu_block *block, u16 pcifunc,
+				u16 lf, bool attach)
+{
+	int devnum, num_lfs = 0;
+	bool is_pf;
+	u64 reg;
+
+	if (lf >= block->lf.max) {
+		dev_err(&rvu->pdev->dev,
+			"%s: FATAL: LF %d is >= %s's max lfs i.e %d\n",
+			__func__, lf, block->name, block->lf.max);
+		return;
+	}
+
+	/* Check if this is for a RVU PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK) {
+		is_pf = false;
+		devnum = rvu_get_hwvf(rvu, pcifunc);
+	} else {
+		is_pf = true;
+		devnum = rvu_get_pf(pcifunc);
+	}
+
+	block->fn_map[lf] = attach ? pcifunc : 0;
+
+	switch (block->type) {
+	case BLKTYPE_NPA:
+		pfvf->npalf = attach ? true : false;
+		num_lfs = pfvf->npalf;
+		break;
+	case BLKTYPE_NIX:
+		pfvf->nixlf = attach ? true : false;
+		num_lfs = pfvf->nixlf;
+		break;
+	case BLKTYPE_SSO:
+		attach ? pfvf->sso++ : pfvf->sso--;
+		num_lfs = pfvf->sso;
+		break;
+	case BLKTYPE_SSOW:
+		attach ? pfvf->ssow++ : pfvf->ssow--;
+		num_lfs = pfvf->ssow;
+		break;
+	case BLKTYPE_TIM:
+		attach ? pfvf->timlfs++ : pfvf->timlfs--;
+		num_lfs = pfvf->timlfs;
+		break;
+	case BLKTYPE_CPT:
+		attach ? pfvf->cptlfs++ : pfvf->cptlfs--;
+		num_lfs = pfvf->cptlfs;
+		break;
+	}
+
+	reg = is_pf ? block->pf_lfcnt_reg : block->vf_lfcnt_reg;
+	rvu_write64(rvu, BLKADDR_RVUM, reg | (devnum << 16), num_lfs);
+}
+
+inline int rvu_get_pf(u16 pcifunc)
+{
+	return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+}
+
+static int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
+{
+	int pf, func;
+	u64 cfg;
+
+	pf = rvu_get_pf(pcifunc);
+	func = pcifunc & RVU_PFVF_FUNC_MASK;
+
+	/* Get first HWVF attached to this PF */
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+
+	return ((cfg & 0xFFF) + func - 1);
+}
+
+struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc)
+{
+	/* Check if it is a PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK)
+		return &rvu->hwvf[rvu_get_hwvf(rvu, pcifunc)];
+	else
+		return &rvu->pf[rvu_get_pf(pcifunc)];
+}
+
 static void rvu_check_block_implemented(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
@@ -107,6 +194,28 @@ static void rvu_reset_all_blocks(struct rvu *rvu)
 	rvu_block_reset(rvu, BLKADDR_NDC2, NDC_AF_BLK_RST);
 }
 
+static void rvu_scan_block(struct rvu *rvu, struct rvu_block *block)
+{
+	struct rvu_pfvf *pfvf;
+	u64 cfg;
+	int lf;
+
+	for (lf = 0; lf < block->lf.max; lf++) {
+		cfg = rvu_read64(rvu, block->addr,
+				 block->lfcfg_reg | (lf << block->lfshift));
+		if (!(cfg & BIT_ULL(63)))
+			continue;
+
+		/* Set this resource as being used */
+		__set_bit(lf, block->lf.bmap);
+
+		/* Get the PF/VF this LF is attached to */
+		pfvf = rvu_get_pfvf(rvu, (cfg >> 8) & 0xFFFF);
+		rvu_update_rsrc_map(rvu, pfvf, block,
+				    (cfg >> 8) & 0xFFFF, lf, true);
+	}
+}
+
 static void rvu_free_hw_resources(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
@@ -124,7 +233,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	struct rvu_block *block;
-	int err;
+	int blkid, err;
 	u64 cfg;
 
 	/* Get HW supported max RVU PF & VF count */
@@ -140,6 +249,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	cfg = rvu_read64(rvu, BLKADDR_NPA, NPA_AF_CONST);
 	block->lf.max = (cfg >> 16) & 0xFFF;
 	block->addr = BLKADDR_NPA;
+	block->type = BLKTYPE_NPA;
 	block->lfshift = 8;
 	block->lookup_reg = NPA_AF_RVU_LF_CFG_DEBUG;
 	block->pf_lfcnt_reg = RVU_PRIV_PFX_NPA_CFG;
@@ -160,6 +270,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	cfg = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
 	block->lf.max = cfg & 0xFFF;
 	block->addr = BLKADDR_NIX0;
+	block->type = BLKTYPE_NIX;
 	block->lfshift = 8;
 	block->lookup_reg = NIX_AF_RVU_LF_CFG_DEBUG;
 	block->pf_lfcnt_reg = RVU_PRIV_PFX_NIX_CFG;
@@ -180,6 +291,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	cfg = rvu_read64(rvu, BLKADDR_SSO, SSO_AF_CONST);
 	block->lf.max = cfg & 0xFFFF;
 	block->addr = BLKADDR_SSO;
+	block->type = BLKTYPE_SSO;
 	block->multislot = true;
 	block->lfshift = 3;
 	block->lookup_reg = SSO_AF_RVU_LF_CFG_DEBUG;
@@ -200,6 +312,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 		goto tim;
 	block->lf.max = (cfg >> 56) & 0xFF;
 	block->addr = BLKADDR_SSOW;
+	block->type = BLKTYPE_SSOW;
 	block->multislot = true;
 	block->lfshift = 3;
 	block->lookup_reg = SSOW_AF_RVU_LF_HWS_CFG_DEBUG;
@@ -221,6 +334,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	cfg = rvu_read64(rvu, BLKADDR_TIM, TIM_AF_CONST);
 	block->lf.max = cfg & 0xFFFF;
 	block->addr = BLKADDR_TIM;
+	block->type = BLKTYPE_TIM;
 	block->multislot = true;
 	block->lfshift = 3;
 	block->lookup_reg = TIM_AF_RVU_LF_CFG_DEBUG;
@@ -238,10 +352,11 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	/* Init CPT LF's bitmap */
 	block = &hw->block[BLKADDR_CPT0];
 	if (!block->implemented)
-		return 0;
+		goto init;
 	cfg = rvu_read64(rvu, BLKADDR_CPT0, CPT_AF_CONSTANTS0);
 	block->lf.max = cfg & 0xFF;
 	block->addr = BLKADDR_CPT0;
+	block->type = BLKTYPE_CPT;
 	block->multislot = true;
 	block->lfshift = 3;
 	block->lookup_reg = CPT_AF_RVU_LF_CFG_DEBUG;
@@ -255,6 +370,35 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	if (err)
 		return err;
 
+init:
+	/* Allocate memory for PFVF data */
+	rvu->pf = devm_kcalloc(rvu->dev, hw->total_pfs,
+			       sizeof(struct rvu_pfvf), GFP_KERNEL);
+	if (!rvu->pf)
+		return -ENOMEM;
+
+	rvu->hwvf = devm_kcalloc(rvu->dev, hw->total_vfs,
+				 sizeof(struct rvu_pfvf), GFP_KERNEL);
+	if (!rvu->hwvf)
+		return -ENOMEM;
+
+	for (blkid = 0; blkid < BLK_COUNT; blkid++) {
+		block = &hw->block[blkid];
+		if (!block->lf.bmap)
+			continue;
+
+		/* Allocate memory for block LF/slot to pcifunc mapping info */
+		block->fn_map = devm_kcalloc(rvu->dev, block->lf.max,
+					     sizeof(u16), GFP_KERNEL);
+		if (!block->fn_map)
+			return -ENOMEM;
+
+		/* Scan all blocks to check if low level firmware has
+		 * already provisioned any of the resources to a PF/VF.
+		 */
+		rvu_scan_block(rvu, block);
+	}
+
 	return 0;
 }
 
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 2fb9407..ce9897b 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -42,9 +42,11 @@ struct rsrc_bmap {
 
 struct rvu_block {
 	struct rsrc_bmap lf;
+	u16  *fn_map; /* LF to pcifunc mapping */
 	bool multislot;
 	bool implemented;
 	u8   addr;  /* RVU_BLOCK_ADDR_E */
+	u8   type;  /* RVU_BLOCK_TYPE_E */
 	u8   lfshift;
 	u64  lookup_reg;
 	u64  pf_lfcnt_reg;
@@ -55,6 +57,16 @@ struct rvu_block {
 	unsigned char name[NAME_SIZE];
 };
 
+/* Structure for per RVU func info, i.e. PF/VF */
+struct rvu_pfvf {
+	bool		npalf; /* Only one NPALF per RVU_FUNC */
+	bool		nixlf; /* Only one NIXLF per RVU_FUNC */
+	u16		sso;
+	u16		ssow;
+	u16		cptlfs;
+	u16		timlfs;
+};
+
 struct rvu_hwinfo {
 	u8	total_pfs;   /* MAX RVU PFs HW supports */
 	u16	total_vfs;   /* Max RVU VFs HW supports */
@@ -69,6 +81,8 @@ struct rvu {
 	struct pci_dev		*pdev;
 	struct device		*dev;
 	struct rvu_hwinfo       *hw;
+	struct rvu_pfvf		*pf;
+	struct rvu_pfvf		*hwvf;
 
 	/* Mbox */
 	struct otx2_mbox	mbox;
@@ -107,5 +121,7 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
 
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
+int rvu_get_pf(u16 pcifunc);
+struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
 
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_struct.h b/drivers/soc/marvell/octeontx2/rvu_struct.h
index 1dc1518..77ac751 100644
--- a/drivers/soc/marvell/octeontx2/rvu_struct.h
+++ b/drivers/soc/marvell/octeontx2/rvu_struct.h
@@ -33,6 +33,24 @@ enum rvu_block_addr_e {
 	BLK_COUNT	= 0xfULL,
 };
 
+/*
+ * RVU Block Type Enumeration
+ */
+enum rvu_block_type_e {
+	BLKTYPE_RVUM = 0x0,
+	BLKTYPE_MSIX = 0x1,
+	BLKTYPE_LMT  = 0x2,
+	BLKTYPE_NIX  = 0x3,
+	BLKTYPE_NPA  = 0x4,
+	BLKTYPE_NPC  = 0x5,
+	BLKTYPE_SSO  = 0x6,
+	BLKTYPE_SSOW = 0x7,
+	BLKTYPE_TIM  = 0x8,
+	BLKTYPE_CPT  = 0x9,
+	BLKTYPE_NDC  = 0xa,
+	BLKTYPE_MAX  = 0xa,
+};
+
 /* RVU Admin function Interrupt Vector Enumeration */
 enum rvu_af_int_vec_e {
 	RVU_AF_INT_VEC_POISON = 0x0,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread
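
For reference, a minimal standalone sketch (illustrative only, not part of the
patch) of the LF ownership decode that the scan above relies on. The bit
layout is taken from this series (bit 63 marks an attached LF, bits 8-23 carry
the owning pcifunc, the low bits the slot, as written by rvu_attach_block in a
later patch); the struct and function names below are made up for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct lf_owner {
	bool valid;         /* LF currently attached to some PF/VF */
	uint16_t pcifunc;   /* owning RVU PF/VF */
	uint16_t slot;      /* slot number as seen by that PF/VF */
};

/* Decode one LF's CFG value into its ownership info */
static struct lf_owner decode_lf_cfg(uint64_t cfg)
{
	struct lf_owner o;

	o.valid = !!(cfg & (1ULL << 63));
	o.pcifunc = (cfg >> 8) & 0xFFFF;
	o.slot = cfg & 0xFF;
	return o;
}

int main(void)
{
	/* Example: LF attached to pcifunc 0x401 at slot 2 */
	uint64_t cfg = (1ULL << 63) | (0x401ULL << 8) | 2;
	struct lf_owner o = decode_lf_cfg(cfg);

	printf("valid=%d pcifunc=0x%x slot=%u\n",
	       o.valid, (unsigned)o.pcifunc, (unsigned)o.slot);
	return 0;
}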

* [PATCH v2 08/15] soc: octeontx2: Add RVU block LF provisioning support
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Added support for a RVU PF/VF to request the AF via mailbox
to attach or detach NPA/NIX/SSO/SSOW/TIM/CPT block LFs.
Partial detachment and modifying the number of LFs currently
attached for a certain block type are also supported.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/mbox.h    |  45 ++-
 drivers/soc/marvell/octeontx2/rvu.c     | 472 +++++++++++++++++++++++++++++++-
 drivers/soc/marvell/octeontx2/rvu.h     |   8 +-
 drivers/soc/marvell/octeontx2/rvu_reg.h |   8 +-
 4 files changed, 523 insertions(+), 10 deletions(-)

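For orientation before the diff: a hedged sketch of how a PF/VF driver might
fill out the resource request this patch introduces. The struct below is a
local stand-in mirroring the rsrc_attach fields added to mbox.h (the real one
is led by a struct mbox_msghdr and uses single-bit flags), and the actual
send/response path over the shared-memory mailbox is omitted.

#include <stdint.h>
#include <string.h>

struct rsrc_attach_example {
	uint8_t  modify;
	uint8_t  npalf;
	uint8_t  nixlf;
	uint16_t sso;
	uint16_t ssow;
	uint16_t timlfs;
	uint16_t cptlfs;
};

int main(void)
{
	struct rsrc_attach_example req;

	memset(&req, 0, sizeof(req));
	req.npalf = 1;	/* one NPA LF (at most one per RVU func) */
	req.nixlf = 1;	/* one NIX LF (at most one per RVU func) */
	req.sso   = 4;	/* four SSO LFs */
	req.ssow  = 2;	/* two SSOW (work slot) LFs */
	/* modify left at 0: the AF treats this as a first request and
	 * detaches anything already attached to this function.
	 */
	return 0;
}
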
diff --git a/drivers/soc/marvell/octeontx2/mbox.h b/drivers/soc/marvell/octeontx2/mbox.h
index fc593f0..7280d49 100644
--- a/drivers/soc/marvell/octeontx2/mbox.h
+++ b/drivers/soc/marvell/octeontx2/mbox.h
@@ -118,7 +118,17 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
 #define MBOX_MSG_MAX				0xFFFF
 
 #define MBOX_MESSAGES							\
-M(READY,		0x001, msg_req, ready_msg_rsp)
+/* Generic mbox IDs (range 0x000 - 0x1FF) */				\
+M(READY,		0x001, msg_req, ready_msg_rsp)			\
+M(ATTACH_RESOURCES,	0x002, rsrc_attach, msg_rsp)			\
+M(DETACH_RESOURCES,	0x003, rsrc_detach, msg_rsp)			\
+/* CGX mbox IDs (range 0x200 - 0x3FF) */				\
+/* NPA mbox IDs (range 0x400 - 0x5FF) */				\
+/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */				\
+/* TIM mbox IDs (range 0x800 - 0x9FF) */				\
+/* CPT mbox IDs (range 0xA00 - 0xBFF) */				\
+/* NPC mbox IDs (range 0x6000 - 0x7FFF) */				\
+/* NIX mbox IDs (range 0x8000 - 0xFFFF) */				\
 
 enum {
 #define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id,
@@ -147,4 +157,37 @@ struct ready_msg_rsp {
 	u16    sclk_feq;	/* SCLK frequency */
 };
 
+/* Structure for requesting resource provisioning.
+ * 'modify' flag to be used when either requesting more
+ * or to detach some of a certain resource type.
+ * Rest of the fields specify how many of what type to
+ * be attached.
+ */
+struct rsrc_attach {
+	struct mbox_msghdr hdr;
+	u8   modify:1;
+	u8   npalf:1;
+	u8   nixlf:1;
+	u16  sso;
+	u16  ssow;
+	u16  timlfs;
+	u16  cptlfs;
+};
+
+/* Structure for relinquishing resources.
+ * 'partial' flag to be used when relinquishing resources of
+ * only certain types. If not set, all resources of all
+ * types provisioned to the RVU function will be detached.
+ */
+struct rsrc_detach {
+	struct mbox_msghdr hdr;
+	u8 partial:1;
+	u8 npalf:1;
+	u8 nixlf:1;
+	u8 sso:1;
+	u8 ssow:1;
+	u8 timlfs:1;
+	u8 cptlfs:1;
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 9539ab9..39dc45d 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -59,6 +59,41 @@ int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
 	return -EBUSY;
 }
 
+int rvu_alloc_rsrc(struct rsrc_bmap *rsrc)
+{
+	int id;
+
+	if (!rsrc->bmap)
+		return -EINVAL;
+
+	id = find_first_zero_bit(rsrc->bmap, rsrc->max);
+	if (id >= rsrc->max)
+		return -ENOSPC;
+
+	__set_bit(id, rsrc->bmap);
+
+	return id;
+}
+
+void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id)
+{
+	if (!rsrc->bmap)
+		return;
+
+	__clear_bit(id, rsrc->bmap);
+}
+
+int rvu_rsrc_free_count(struct rsrc_bmap *rsrc)
+{
+	int used;
+
+	if (!rsrc->bmap)
+		return 0;
+
+	used = bitmap_weight(rsrc->bmap, rsrc->max);
+	return (rsrc->max - used);
+}
+
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
 {
 	rsrc->bmap = kcalloc(BITS_TO_LONGS(rsrc->max),
@@ -68,6 +103,78 @@ int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
 	return 0;
 }
 
+/* Convert BLOCK_TYPE_E to a BLOCK_ADDR_E.
+ * Some silicon variants of OcteonTX2 support
+ * multiple blocks of the same type.
+ *
+ * @pcifunc has to be zero when no LF is yet attached.
+ */
+int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc)
+{
+	int devnum, blkaddr = -ENODEV;
+	u64 cfg, reg;
+	bool is_pf;
+
+	switch (blktype) {
+	case BLKTYPE_NPA:
+		blkaddr = BLKADDR_NPA;
+		goto exit;
+	case BLKTYPE_NIX:
+		/* For now assume NIX0 */
+		if (!pcifunc) {
+			blkaddr = BLKADDR_NIX0;
+			goto exit;
+		}
+		break;
+	case BLKTYPE_SSO:
+		blkaddr = BLKADDR_SSO;
+		goto exit;
+	case BLKTYPE_SSOW:
+		blkaddr = BLKADDR_SSOW;
+		goto exit;
+	case BLKTYPE_TIM:
+		blkaddr = BLKADDR_TIM;
+		goto exit;
+	case BLKTYPE_CPT:
+		/* For now assume CPT0 */
+		if (!pcifunc) {
+			blkaddr = BLKADDR_CPT0;
+			goto exit;
+		}
+		break;
+	}
+
+	/* Check if this is a RVU PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK) {
+		is_pf = false;
+		devnum = rvu_get_hwvf(rvu, pcifunc);
+	} else {
+		is_pf = true;
+		devnum = rvu_get_pf(pcifunc);
+	}
+
+	/* Check if the 'pcifunc' has a NIX LF from 'BLKADDR_NIX0' */
+	if (blktype == BLKTYPE_NIX) {
+		reg = is_pf ? RVU_PRIV_PFX_NIX0_CFG : RVU_PRIV_HWVFX_NIX0_CFG;
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, reg | (devnum << 16));
+		if (cfg)
+			blkaddr = BLKADDR_NIX0;
+	}
+
+	/* Check if the 'pcifunc' has a CPT LF from 'BLKADDR_CPT0' */
+	if (blktype == BLKTYPE_CPT) {
+		reg = is_pf ? RVU_PRIV_PFX_CPT0_CFG : RVU_PRIV_HWVFX_CPT0_CFG;
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, reg | (devnum << 16));
+		if (cfg)
+			blkaddr = BLKADDR_CPT0;
+	}
+
+exit:
+	if (is_block_implemented(rvu->hw, blkaddr))
+		return blkaddr;
+	return -ENODEV;
+}
+
 static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
 				struct rvu_block *block, u16 pcifunc,
 				u16 lf, bool attach)
@@ -153,6 +260,17 @@ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc)
 		return &rvu->pf[rvu_get_pf(pcifunc)];
 }
 
+bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr)
+{
+	struct rvu_block *block;
+
+	if ((blkaddr < BLKADDR_RVUM) || (blkaddr >= BLK_COUNT))
+		return false;
+
+	block = &hw->block[blkaddr];
+	return block->implemented;
+}
+
 static void rvu_check_block_implemented(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
@@ -273,8 +391,8 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	block->type = BLKTYPE_NIX;
 	block->lfshift = 8;
 	block->lookup_reg = NIX_AF_RVU_LF_CFG_DEBUG;
-	block->pf_lfcnt_reg = RVU_PRIV_PFX_NIX_CFG;
-	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NIX_CFG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_NIX0_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NIX0_CFG;
 	block->lfcfg_reg = NIX_PRIV_LFX_CFG;
 	block->msixcfg_reg = NIX_PRIV_LFX_INT_CFG;
 	block->lfreset_reg = NIX_AF_LF_RST;
@@ -360,8 +478,8 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	block->multislot = true;
 	block->lfshift = 3;
 	block->lookup_reg = CPT_AF_RVU_LF_CFG_DEBUG;
-	block->pf_lfcnt_reg = RVU_PRIV_PFX_CPT_CFG;
-	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_CPT_CFG;
+	block->pf_lfcnt_reg = RVU_PRIV_PFX_CPT0_CFG;
+	block->vf_lfcnt_reg = RVU_PRIV_HWVFX_CPT0_CFG;
 	block->lfcfg_reg = CPT_PRIV_LFX_CFG;
 	block->msixcfg_reg = CPT_PRIV_LFX_INT_CFG;
 	block->lfreset_reg = CPT_AF_LF_RST;
@@ -399,6 +517,8 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 		rvu_scan_block(rvu, block);
 	}
 
+	spin_lock_init(&rvu->rsrc_lock);
+
 	return 0;
 }
 
@@ -408,6 +528,350 @@ static int rvu_mbox_handler_READY(struct rvu *rvu, struct msg_req *req,
 	return 0;
 }
 
+/* Get current count of a RVU block's LF/slots
+ * provisioned to a given RVU func.
+ */
+static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype)
+{
+	switch (blktype) {
+	case BLKTYPE_NPA:
+		return pfvf->npalf ? 1 : 0;
+	case BLKTYPE_NIX:
+		return pfvf->nixlf ? 1 : 0;
+	case BLKTYPE_SSO:
+		return pfvf->sso;
+	case BLKTYPE_SSOW:
+		return pfvf->ssow;
+	case BLKTYPE_TIM:
+		return pfvf->timlfs;
+	case BLKTYPE_CPT:
+		return pfvf->cptlfs;
+	}
+	return 0;
+}
+
+static int rvu_lookup_rsrc(struct rvu *rvu, struct rvu_block *block,
+			   int pcifunc, int slot)
+{
+	u64 val;
+
+	val = ((u64)pcifunc << 24) | (slot << 16) | (1ULL << 13);
+	rvu_write64(rvu, block->addr, block->lookup_reg, val);
+	/* Wait for the lookup to finish */
+	/* TODO: put some timeout here */
+	while (rvu_read64(rvu, block->addr, block->lookup_reg) & (1ULL << 13))
+		;
+
+	val = rvu_read64(rvu, block->addr, block->lookup_reg);
+
+	/* Check LF valid bit */
+	if (!(val & (1ULL << 12)))
+		return -1;
+
+	return (val & 0xFFF);
+}
+
+static void rvu_detach_block(struct rvu *rvu, int pcifunc, int blktype)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int slot, lf, num_lfs;
+	int blkaddr;
+
+	blkaddr = rvu_get_blkaddr(rvu, blktype, pcifunc);
+	if (blkaddr < 0)
+		return;
+
+	block = &hw->block[blkaddr];
+
+	num_lfs = rvu_get_rsrc_mapcount(pfvf, block->type);
+	if (!num_lfs)
+		return;
+
+	for (slot = 0; slot < num_lfs; slot++) {
+		lf = rvu_lookup_rsrc(rvu, block, pcifunc, slot);
+		if (lf < 0) /* This should never happen */
+			continue;
+
+		/* Disable the LF */
+		rvu_write64(rvu, blkaddr, block->lfcfg_reg |
+			    (lf << block->lfshift), 0x00ULL);
+
+		/* Update SW maintained mapping info as well */
+		rvu_update_rsrc_map(rvu, pfvf, block,
+				    pcifunc, lf, false);
+
+		/* Free the resource */
+		rvu_free_rsrc(&block->lf, lf);
+	}
+}
+
+static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach,
+			    u16 pcifunc)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	bool is_pf, detach_all = true;
+	struct rvu_block *block;
+	int devnum, blkid;
+
+	/* Check if this is for a RVU PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK) {
+		is_pf = false;
+		devnum = rvu_get_hwvf(rvu, pcifunc);
+	} else {
+		is_pf = true;
+		devnum = rvu_get_pf(pcifunc);
+	}
+
+	spin_lock(&rvu->rsrc_lock);
+
+	/* Check for partial resource detach */
+	if (detach && detach->partial)
+		detach_all = false;
+
+	/* Check for RVU block's LFs attached to this func,
+	 * if so, detach them.
+	 */
+	for (blkid = 0; blkid < BLK_COUNT; blkid++) {
+		block = &hw->block[blkid];
+		if (!block->lf.bmap)
+			continue;
+		if (!detach_all && detach) {
+			if ((blkid == BLKADDR_NPA) && !detach->npalf)
+				continue;
+			else if ((blkid == BLKADDR_NIX0) && !detach->nixlf)
+				continue;
+			else if ((blkid == BLKADDR_SSO) && !detach->sso)
+				continue;
+			else if ((blkid == BLKADDR_SSOW) && !detach->ssow)
+				continue;
+			else if ((blkid == BLKADDR_TIM) && !detach->timlfs)
+				continue;
+			else if ((blkid == BLKADDR_CPT0) && !detach->cptlfs)
+				continue;
+		}
+		rvu_detach_block(rvu, pcifunc, block->type);
+	}
+
+	spin_unlock(&rvu->rsrc_lock);
+	return 0;
+}
+
+static int rvu_mbox_handler_DETACH_RESOURCES(struct rvu *rvu,
+					     struct rsrc_detach *detach,
+					     struct msg_rsp *rsp)
+{
+	return rvu_detach_rsrcs(rvu, detach, detach->hdr.pcifunc);
+}
+
+static void rvu_attach_block(struct rvu *rvu, int pcifunc,
+			     int blktype, int num_lfs)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int slot, lf;
+	int blkaddr;
+	u64 cfg;
+
+	if (!num_lfs)
+		return;
+
+	blkaddr = rvu_get_blkaddr(rvu, blktype, 0);
+	if (blkaddr < 0)
+		return;
+
+	block = &hw->block[blkaddr];
+	if (!block->lf.bmap)
+		return;
+
+	for (slot = 0; slot < num_lfs; slot++) {
+		/* Allocate the resource */
+		lf = rvu_alloc_rsrc(&block->lf);
+		if (lf < 0)
+			return;
+
+		cfg = (1ULL << 63) | (pcifunc << 8) | slot;
+		rvu_write64(rvu, blkaddr, block->lfcfg_reg |
+			    (lf << block->lfshift), cfg);
+		rvu_update_rsrc_map(rvu, pfvf, block,
+				    pcifunc, lf, true);
+	}
+}
+
+static int rvu_check_rsrc_availability(struct rvu *rvu,
+				       struct rsrc_attach *req, u16 pcifunc)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	int free_lfs, mappedlfs;
+
+	/* Only one NPA LF can be attached */
+	if (req->npalf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NPA)) {
+		block = &hw->block[BLKADDR_NPA];
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		if (!free_lfs)
+			goto fail;
+	} else if (req->npalf) {
+		dev_err(&rvu->pdev->dev,
+			"Func 0x%x: Invalid req, already has NPA\n",
+			 pcifunc);
+		return -EINVAL;
+	}
+
+	/* Only one NIX LF can be attached */
+	if (req->nixlf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NIX)) {
+		block = &hw->block[BLKADDR_NIX0];
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		if (!free_lfs)
+			goto fail;
+	} else if (req->nixlf) {
+		dev_err(&rvu->pdev->dev,
+			"Func 0x%x: Invalid req, already has NIX\n",
+			pcifunc);
+		return -EINVAL;
+	}
+
+	if (req->sso) {
+		block = &hw->block[BLKADDR_SSO];
+		/* Is request within limits ? */
+		if (req->sso > block->lf.max) {
+			dev_err(&rvu->pdev->dev,
+				"Func 0x%x: Invalid SSO req, %d > max %d\n",
+				 pcifunc, req->sso, block->lf.max);
+			return -EINVAL;
+		}
+		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		/* Check if additional resources are available */
+		if ((req->sso > mappedlfs) &&
+		    ((req->sso - mappedlfs) > free_lfs))
+			goto fail;
+	}
+
+	if (req->ssow) {
+		block = &hw->block[BLKADDR_SSOW];
+		if (req->ssow > block->lf.max) {
+			dev_err(&rvu->pdev->dev,
+				"Func 0x%x: Invalid SSOW req, %d > max %d\n",
+				 pcifunc, req->ssow, block->lf.max);
+			return -EINVAL;
+		}
+		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		if ((req->ssow > mappedlfs) &&
+		    ((req->ssow - mappedlfs) > free_lfs))
+			goto fail;
+	}
+
+	if (req->timlfs) {
+		block = &hw->block[BLKADDR_TIM];
+		if (req->timlfs > block->lf.max) {
+			dev_err(&rvu->pdev->dev,
+				"Func 0x%x: Invalid TIMLF req, %d > max %d\n",
+				 pcifunc, req->timlfs, block->lf.max);
+			return -EINVAL;
+		}
+		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		if ((req->timlfs > mappedlfs) &&
+		    ((req->timlfs - mappedlfs) > free_lfs))
+			goto fail;
+	}
+
+	if (req->cptlfs) {
+		block = &hw->block[BLKADDR_CPT0];
+		if (req->cptlfs > block->lf.max) {
+			dev_err(&rvu->pdev->dev,
+				"Func 0x%x: Invalid CPTLF req, %d > max %d\n",
+				 pcifunc, req->cptlfs, block->lf.max);
+			return -EINVAL;
+		}
+		mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
+		free_lfs = rvu_rsrc_free_count(&block->lf);
+		if ((req->cptlfs > mappedlfs) &&
+		    ((req->cptlfs - mappedlfs) > free_lfs))
+			goto fail;
+	}
+
+	return 0;
+
+fail:
+	dev_info(rvu->dev, "Request for %s failed\n", block->name);
+	return -ENOSPC;
+}
+
+static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu,
+					     struct rsrc_attach *attach,
+					     struct msg_rsp *rsp)
+{
+	u16 pcifunc = attach->hdr.pcifunc;
+	int devnum, err;
+	bool is_pf;
+
+	/* If first request, detach all existing attached resources */
+	if (!attach->modify)
+		rvu_detach_rsrcs(rvu, NULL, pcifunc);
+
+	/* Check if this is for a RVU PF or VF */
+	if (pcifunc & RVU_PFVF_FUNC_MASK) {
+		is_pf = false;
+		devnum = rvu_get_hwvf(rvu, pcifunc);
+	} else {
+		is_pf = true;
+		devnum = rvu_get_pf(pcifunc);
+	}
+
+	spin_lock(&rvu->rsrc_lock);
+
+	/* Check if the request can be accommodated */
+	err = rvu_check_rsrc_availability(rvu, attach, pcifunc);
+	if (err)
+		goto exit;
+
+	/* Now attach the requested resources */
+	if (attach->npalf)
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_NPA, 1);
+
+	if (attach->nixlf)
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_NIX, 1);
+
+	if (attach->sso) {
+		/* RVU func doesn't know which exact LF or slot is attached
+		 * to it, it always sees as slot 0,1,2. So for a 'modify'
+		 * to it, it always sees them as slots 0, 1, 2 and so on. So
+		 * for a 'modify' request, simply detach all existing attached
+		 * LFs/slots and attach afresh.
+		if (attach->modify)
+			rvu_detach_block(rvu, pcifunc, BLKTYPE_SSO);
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_SSO, attach->sso);
+	}
+
+	if (attach->ssow) {
+		if (attach->modify)
+			rvu_detach_block(rvu, pcifunc, BLKTYPE_SSOW);
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_SSOW, attach->ssow);
+	}
+
+	if (attach->timlfs) {
+		if (attach->modify)
+			rvu_detach_block(rvu, pcifunc, BLKTYPE_TIM);
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_TIM, attach->timlfs);
+	}
+
+	if (attach->cptlfs) {
+		if (attach->modify)
+			rvu_detach_block(rvu, pcifunc, BLKTYPE_CPT);
+		rvu_attach_block(rvu, pcifunc, BLKTYPE_CPT, attach->cptlfs);
+	}
+
+exit:
+	spin_unlock(&rvu->rsrc_lock);
+	return err;
+}
+
 static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 				struct mbox_msghdr *req)
 {
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index ce9897b..0f76704 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -83,6 +83,7 @@ struct rvu {
 	struct rvu_hwinfo       *hw;
 	struct rvu_pfvf		*pf;
 	struct rvu_pfvf		*hwvf;
+	spinlock_t		rsrc_lock; /* Serialize resource alloc/free */
 
 	/* Mbox */
 	struct otx2_mbox	mbox;
@@ -120,8 +121,13 @@ static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
  */
 
 int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
-int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
+int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
+void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
+int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
 int rvu_get_pf(u16 pcifunc);
 struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
+bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
+int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
+int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_reg.h b/drivers/soc/marvell/octeontx2/rvu_reg.h
index 3bfb1e0..b28b310 100644
--- a/drivers/soc/marvell/octeontx2/rvu_reg.h
+++ b/drivers/soc/marvell/octeontx2/rvu_reg.h
@@ -54,20 +54,20 @@
 #define RVU_PRIV_PFX_MSIX_CFG(a)            (0x8000110 | (a) << 16)
 #define RVU_PRIV_PFX_ID_CFG(a)              (0x8000120 | (a) << 16)
 #define RVU_PRIV_PFX_INT_CFG(a)             (0x8000200 | (a) << 16)
-#define RVU_PRIV_PFX_NIX_CFG                (0x8000300)
+#define RVU_PRIV_PFX_NIX0_CFG               (0x8000300)
 #define RVU_PRIV_PFX_NPA_CFG		    (0x8000310)
 #define RVU_PRIV_PFX_SSO_CFG                (0x8000320)
 #define RVU_PRIV_PFX_SSOW_CFG               (0x8000330)
 #define RVU_PRIV_PFX_TIM_CFG                (0x8000340)
-#define RVU_PRIV_PFX_CPT_CFG                (0x8000350)
+#define RVU_PRIV_PFX_CPT0_CFG               (0x8000350)
 #define RVU_PRIV_BLOCK_TYPEX_REV(a)         (0x8000400 | (a) << 3)
 #define RVU_PRIV_HWVFX_INT_CFG(a)           (0x8001280 | (a) << 16)
-#define RVU_PRIV_HWVFX_NIX_CFG              (0x8001300)
+#define RVU_PRIV_HWVFX_NIX0_CFG             (0x8001300)
 #define RVU_PRIV_HWVFX_NPA_CFG              (0x8001310)
 #define RVU_PRIV_HWVFX_SSO_CFG              (0x8001320)
 #define RVU_PRIV_HWVFX_SSOW_CFG             (0x8001330)
 #define RVU_PRIV_HWVFX_TIM_CFG              (0x8001340)
-#define RVU_PRIV_HWVFX_CPT_CFG              (0x8001350)
+#define RVU_PRIV_HWVFX_CPT0_CFG             (0x8001350)
 
 /* RVU PF registers */
 #define	RVU_PF_VFX_PFVF_MBOX0		    (0x00000)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread
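
The accounting in rvu_check_rsrc_availability() above reduces to one rule:
extra LFs are needed only for the part of a request that exceeds what the
function already has mapped, and that extra part must fit in the block's free
pool. A standalone restatement of that check (illustrative only, hypothetical
names):

#include <stdbool.h>
#include <stdint.h>

/* Mirrors the (req > mapped) && ((req - mapped) > free) test above */
static bool lf_request_fits(uint16_t requested, uint16_t mapped,
			    uint16_t free_lfs)
{
	if (requested <= mapped)
		return true;	/* already holds enough (or is shrinking) */
	return (requested - mapped) <= free_lfs;
}

int main(void)
{
	/* 4 SSO LFs requested, 1 already mapped, only 2 free: cannot fit */
	return lf_request_fits(4, 1, 2) ? 1 : 0;	/* exits 0 */
}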

* [PATCH v2 09/15] soc: octeontx2: Configure block LF's MSIX vector offset
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Firmware configures a certain number of MSIX vectors for each
enabled RVU PF/VF. When a block LF is attached to a PF/VF, the
number of MSIX vectors needed by that LF is set aside (out of the
PF/VF's total MSIX vectors) and the LF's msix_offset is configured
in HW.

Also added support for a RVU PF/VF to retrieve a block LF's
MSIX vector offset information from the AF via mbox.

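A rough model (not driver code) of the bookkeeping described above: each
PF/VF owns a pool of MSIX vectors, and attaching a block LF carves a
contiguous range out of that pool whose starting index becomes the LF's
msix_offset. The driver does this with a kernel bitmap; the array-based
version and all names below are made up for illustration.

#include <stdbool.h>

#define MSIX_POOL_MAX 64

/* Per-function MSIX vector pool; used[i] means vector i is taken. */
struct msix_pool {
	bool used[MSIX_POOL_MAX];
};

/* Reserve 'nvecs' contiguous vectors and return the starting index
 * (the value that would be programmed as the LF's msix_offset), or
 * -1 when the pool cannot fit the request.
 */
static int msix_alloc_contig(struct msix_pool *p, int nvecs)
{
	int start, i;

	for (start = 0; start + nvecs <= MSIX_POOL_MAX; start++) {
		for (i = 0; i < nvecs; i++)
			if (p->used[start + i])
				break;
		if (i < nvecs)
			continue;	/* range not fully free, slide on */
		for (i = 0; i < nvecs; i++)
			p->used[start + i] = true;
		return start;
	}
	return -1;
}

int main(void)
{
	struct msix_pool pool = { { false } };

	/* e.g. an LF needing 3 vectors lands at offset 0, the next LF
	 * needing 4 vectors at offset 3 (vector counts are made up).
	 */
	int npa_off = msix_alloc_contig(&pool, 3);
	int nix_off = msix_alloc_contig(&pool, 4);

	return (npa_off == 0 && nix_off == 3) ? 0 : 1;
}
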
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 drivers/soc/marvell/octeontx2/mbox.h       |  18 ++
 drivers/soc/marvell/octeontx2/rvu.c        | 333 ++++++++++++++++++++++++++++-
 drivers/soc/marvell/octeontx2/rvu.h        |   7 +
 drivers/soc/marvell/octeontx2/rvu_struct.h |   2 +
 4 files changed, 357 insertions(+), 3 deletions(-)

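Before the mbox.h diff that defines it: a hedged sketch of how a PF/VF might
consume the new MSIX_OFFSET response. The struct here is a trimmed local
mirror of msix_offset_rsp (the real one carries a mbox_msghdr and
MAX_RVU_BLKLF_CNT-sized arrays), and treating MSIX_VECTOR_INVALID as "LF not
attached" is an assumption based on how rvu_get_msix_offset() uses it below.

#include <stdint.h>
#include <stdio.h>

#define MSIX_VECTOR_INVALID 0xFFFF

struct msix_offsets_example {
	uint16_t npa_msixoff;
	uint16_t nix_msixoff;
	uint8_t  cptlfs;
	uint16_t cptlf_msixoff[4];
};

/* Report where each attached LF's vectors begin inside this
 * function's own MSIX table.
 */
static void show_msix_offsets(const struct msix_offsets_example *rsp)
{
	int slot;

	if (rsp->npa_msixoff != MSIX_VECTOR_INVALID)
		printf("NPA LF vectors start at %u\n",
		       (unsigned)rsp->npa_msixoff);
	if (rsp->nix_msixoff != MSIX_VECTOR_INVALID)
		printf("NIX LF vectors start at %u\n",
		       (unsigned)rsp->nix_msixoff);
	for (slot = 0; slot < rsp->cptlfs; slot++)
		printf("CPT LF slot %d vectors start at %u\n",
		       slot, (unsigned)rsp->cptlf_msixoff[slot]);
}

int main(void)
{
	struct msix_offsets_example rsp = {
		.npa_msixoff = 4,
		.nix_msixoff = 8,
		.cptlfs = 1,
		.cptlf_msixoff = { 40 },
	};

	show_msix_offsets(&rsp);
	return 0;
}
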
diff --git a/drivers/soc/marvell/octeontx2/mbox.h b/drivers/soc/marvell/octeontx2/mbox.h
index 7280d49..bedf0ee 100644
--- a/drivers/soc/marvell/octeontx2/mbox.h
+++ b/drivers/soc/marvell/octeontx2/mbox.h
@@ -122,6 +122,7 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
 M(READY,		0x001, msg_req, ready_msg_rsp)			\
 M(ATTACH_RESOURCES,	0x002, rsrc_attach, msg_rsp)			\
 M(DETACH_RESOURCES,	0x003, rsrc_detach, msg_rsp)			\
+M(MSIX_OFFSET,		0x004, msg_req, msix_offset_rsp)		\
 /* CGX mbox IDs (range 0x200 - 0x3FF) */				\
 /* NPA mbox IDs (range 0x400 - 0x5FF) */				\
 /* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */				\
@@ -190,4 +191,21 @@ struct rsrc_detach {
 	u8 cptlfs:1;
 };
 
+#define MSIX_VECTOR_INVALID	0xFFFF
+#define MAX_RVU_BLKLF_CNT	256
+
+struct msix_offset_rsp {
+	struct mbox_msghdr hdr;
+	u16  npa_msixoff;
+	u16  nix_msixoff;
+	u8   sso;
+	u8   ssow;
+	u8   timlfs;
+	u8   cptlfs;
+	u16  sso_msixoff[MAX_RVU_BLKLF_CNT];
+	u16  ssow_msixoff[MAX_RVU_BLKLF_CNT];
+	u16  timlf_msixoff[MAX_RVU_BLKLF_CNT];
+	u16  cptlf_msixoff[MAX_RVU_BLKLF_CNT];
+};
+
 #endif /* MBOX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 39dc45d..8ac3524 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -24,6 +24,11 @@
 
 static int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
 
+static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+				struct rvu_block *block, int lf);
+static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+				  struct rvu_block *block, int lf);
+
 /* Supported devices */
 static const struct pci_device_id rvu_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AF) },
@@ -75,6 +80,45 @@ int rvu_alloc_rsrc(struct rsrc_bmap *rsrc)
 	return id;
 }
 
+static int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc)
+{
+	int start;
+
+	if (!rsrc->bmap)
+		return -EINVAL;
+
+	start = bitmap_find_next_zero_area(rsrc->bmap, rsrc->max, 0, nrsrc, 0);
+	if (start >= rsrc->max)
+		return -ENOSPC;
+
+	bitmap_set(rsrc->bmap, start, nrsrc);
+	return start;
+}
+
+static void rvu_free_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc, int start)
+{
+	if (!rsrc->bmap)
+		return;
+	if (start >= rsrc->max)
+		return;
+
+	bitmap_clear(rsrc->bmap, start, nrsrc);
+}
+
+static bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc)
+{
+	int start;
+
+	if (!rsrc->bmap)
+		return false;
+
+	start = bitmap_find_next_zero_area(rsrc->bmap, rsrc->max, 0, nrsrc, 0);
+	if (start >= rsrc->max)
+		return false;
+
+	return true;
+}
+
 void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id)
 {
 	if (!rsrc->bmap)
@@ -103,6 +147,26 @@ int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
 	return 0;
 }
 
+/* Get block LF's HW index from a PF_FUNC's block slot number */
+int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot)
+{
+	int lf;
+	u16 match = 0;
+
+	spin_lock(&rvu->rsrc_lock);
+	for (lf = 0; lf < block->lf.max; lf++) {
+		if (block->fn_map[lf] == pcifunc) {
+			if (slot == match) {
+				spin_unlock(&rvu->rsrc_lock);
+				return lf;
+			}
+			match++;
+		}
+	}
+	spin_unlock(&rvu->rsrc_lock);
+	return -ENODEV;
+}
+
 /* Convert BLOCK_TYPE_E to a BLOCK_ADDR_E.
  * Some silicon variants of OcteonTX2 supports
  * multiple blocks of same type.
@@ -237,6 +301,16 @@ inline int rvu_get_pf(u16 pcifunc)
 	return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
 }
 
+void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf)
+{
+	u64 cfg;
+
+	/* Get numVFs attached to this PF and first HWVF */
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+	*numvfs = (cfg >> 12) & 0xFF;
+	*hwvf = cfg & 0xFFF;
+}
+
 static int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
 {
 	int pf, func;
@@ -331,20 +405,150 @@ static void rvu_scan_block(struct rvu *rvu, struct rvu_block *block)
 		pfvf = rvu_get_pfvf(rvu, (cfg >> 8) & 0xFFFF);
 		rvu_update_rsrc_map(rvu, pfvf, block,
 				    (cfg >> 8) & 0xFFFF, lf, true);
+
+		/* Set start MSIX vector for this LF within this PF/VF */
+		rvu_set_msix_offset(rvu, pfvf, block, lf);
 	}
 }
 
+static void rvu_check_min_msix_vec(struct rvu *rvu, int nvecs, int pf, int vf)
+{
+	int min_vecs;
+
+	if (!vf)
+		goto check_pf;
+
+	if (!nvecs) {
+		dev_warn(rvu->dev,
+			 "PF%d:VF%d is configured with zero msix vectors, %d\n",
+			 pf, vf - 1, nvecs);
+	}
+	return;
+
+check_pf:
+	if (pf == 0)
+		min_vecs = RVU_AF_INT_VEC_CNT + RVU_PF_INT_VEC_CNT;
+	else
+		min_vecs = RVU_PF_INT_VEC_CNT;
+
+	if (!(nvecs < min_vecs))
+		return;
+	dev_warn(rvu->dev,
+		 "PF%d is configured with too few vectors, %d, min is %d\n",
+		 pf, nvecs, min_vecs);
+}
+
+static int rvu_setup_msix_resources(struct rvu *rvu)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	int pf, vf, numvfs, hwvf, err;
+	struct rvu_pfvf *pfvf;
+	int nvecs, offset;
+	u64 cfg;
+
+	for (pf = 0; pf < hw->total_pfs; pf++) {
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+		/* If PF is not enabled, nothing to do */
+		if (!((cfg >> 20) & 0x01))
+			continue;
+
+		rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf);
+
+		pfvf = &rvu->pf[pf];
+		/* Get num of MSIX vectors attached to this PF */
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_MSIX_CFG(pf));
+		pfvf->msix.max = ((cfg >> 32) & 0xFFF) + 1;
+		rvu_check_min_msix_vec(rvu, pfvf->msix.max, pf, 0);
+
+		/* Alloc msix bitmap for this PF */
+		err = rvu_alloc_bitmap(&pfvf->msix);
+		if (err)
+			return err;
+
+		/* Allocate memory for MSIX vector to RVU block LF mapping */
+		pfvf->msix_lfmap = devm_kcalloc(rvu->dev, pfvf->msix.max,
+						sizeof(u16), GFP_KERNEL);
+		if (!pfvf->msix_lfmap)
+			return -ENOMEM;
+
+		/* For PF0 (AF) firmware will set msix vector offsets for
+		 * AF, block AF and PF0_INT vectors, so jump to VFs.
+		 */
+		if (!pf)
+			goto setup_vfmsix;
+
+		/* Set MSIX offset for PF's 'RVU_PF_INT_VEC' vectors.
+		 * These are allocated on driver init and never freed,
+		 * so no need to set 'msix_lfmap' for these.
+		 */
+		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(pf));
+		nvecs = (cfg >> 12) & 0xFF;
+		cfg &= ~0x7FFULL;
+		offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
+		rvu_write64(rvu, BLKADDR_RVUM,
+			    RVU_PRIV_PFX_INT_CFG(pf), cfg | offset);
+setup_vfmsix:
+		/* Alloc msix bitmap for VFs */
+		for (vf = 0; vf < numvfs; vf++) {
+			pfvf =  &rvu->hwvf[hwvf + vf];
+			/* Get num of MSIX vectors attached to this VF */
+			cfg = rvu_read64(rvu, BLKADDR_RVUM,
+					 RVU_PRIV_PFX_MSIX_CFG(pf));
+			pfvf->msix.max = (cfg & 0xFFF) + 1;
+			rvu_check_min_msix_vec(rvu, pfvf->msix.max, pf, vf + 1);
+
+			/* Alloc msix bitmap for this VF */
+			err = rvu_alloc_bitmap(&pfvf->msix);
+			if (err)
+				return err;
+
+			pfvf->msix_lfmap =
+				devm_kcalloc(rvu->dev, pfvf->msix.max,
+					     sizeof(u16), GFP_KERNEL);
+			if (!pfvf->msix_lfmap)
+				return -ENOMEM;
+
+			/* Set MSIX offset for HWVF's 'RVU_VF_INT_VEC' vectors.
+			 * These are allocated on driver init and never freed,
+			 * so no need to set 'msix_lfmap' for these.
+			 */
+			cfg = rvu_read64(rvu, BLKADDR_RVUM,
+					 RVU_PRIV_HWVFX_INT_CFG(hwvf + vf));
+			nvecs = (cfg >> 12) & 0xFF;
+			cfg &= ~0x7FFULL;
+			offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
+			rvu_write64(rvu, BLKADDR_RVUM,
+				    RVU_PRIV_HWVFX_INT_CFG(hwvf + vf),
+				    cfg | offset);
+		}
+	}
+
+	return 0;
+}
+
 static void rvu_free_hw_resources(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	struct rvu_block *block;
+	struct rvu_pfvf  *pfvf;
 	int id;
 
-	/* Free all bitmaps */
+	/* Free block LF bitmaps */
 	for (id = 0; id < BLK_COUNT; id++) {
 		block = &hw->block[id];
 		kfree(block->lf.bmap);
 	}
+
+	/* Free MSIX bitmaps */
+	for (id = 0; id < hw->total_pfs; id++) {
+		pfvf = &rvu->pf[id];
+		kfree(pfvf->msix.bmap);
+	}
+
+	for (id = 0; id < hw->total_vfs; id++) {
+		pfvf = &rvu->hwvf[id];
+		kfree(pfvf->msix.bmap);
+	}
 }
 
 static int rvu_setup_hw_resources(struct rvu *rvu)
@@ -500,6 +704,12 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 	if (!rvu->hwvf)
 		return -ENOMEM;
 
+	spin_lock_init(&rvu->rsrc_lock);
+
+	err = rvu_setup_msix_resources(rvu);
+	if (err)
+		return err;
+
 	for (blkid = 0; blkid < BLK_COUNT; blkid++) {
 		block = &hw->block[blkid];
 		if (!block->lf.bmap)
@@ -517,8 +727,6 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 		rvu_scan_block(rvu, block);
 	}
 
-	spin_lock_init(&rvu->rsrc_lock);
-
 	return 0;
 }
 
@@ -604,6 +812,9 @@ static void rvu_detach_block(struct rvu *rvu, int pcifunc, int blktype)
 
 		/* Free the resource */
 		rvu_free_rsrc(&block->lf, lf);
+
+		/* Clear MSIX vector offset for this LF */
+		rvu_clear_msix_offset(rvu, pfvf, block, lf);
 	}
 }
 
@@ -697,6 +908,9 @@ static void rvu_attach_block(struct rvu *rvu, int pcifunc,
 			    (lf << block->lfshift), cfg);
 		rvu_update_rsrc_map(rvu, pfvf, block,
 				    pcifunc, lf, true);
+
+		/* Set start MSIX vector for this LF within this PF/VF */
+		rvu_set_msix_offset(rvu, pfvf, block, lf);
 	}
 }
 
@@ -872,6 +1086,119 @@ static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu,
 	return err;
 }
 
+static u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+			       int blkaddr, int lf)
+{
+	u16 vec;
+
+	if (lf < 0)
+		return MSIX_VECTOR_INVALID;
+
+	for (vec = 0; vec < pfvf->msix.max; vec++) {
+		if (pfvf->msix_lfmap[vec] == MSIX_BLKLF(blkaddr, lf))
+			return vec;
+	}
+	return MSIX_VECTOR_INVALID;
+}
+
+static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+				struct rvu_block *block, int lf)
+{
+	u16 nvecs, vec, offset;
+	u64 cfg;
+
+	cfg = rvu_read64(rvu, block->addr, block->msixcfg_reg |
+			 (lf << block->lfshift));
+	nvecs = (cfg >> 12) & 0xFF;
+
+	/* Check and alloc MSIX vectors, must be contiguous */
+	if (!rvu_rsrc_check_contig(&pfvf->msix, nvecs))
+		return;
+
+	offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
+
+	/* Config MSIX offset in LF */
+	rvu_write64(rvu, block->addr, block->msixcfg_reg |
+		    (lf << block->lfshift), (cfg & ~0x7FFULL) | offset);
+
+	/* Update the bitmap as well */
+	for (vec = 0; vec < nvecs; vec++)
+		pfvf->msix_lfmap[offset + vec] = MSIX_BLKLF(block->addr, lf);
+}
+
+static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+				  struct rvu_block *block, int lf)
+{
+	u16 nvecs, vec, offset;
+	u64 cfg;
+
+	cfg = rvu_read64(rvu, block->addr, block->msixcfg_reg |
+			 (lf << block->lfshift));
+	nvecs = (cfg >> 12) & 0xFF;
+
+	/* Clear MSIX offset in LF */
+	rvu_write64(rvu, block->addr, block->msixcfg_reg |
+		    (lf << block->lfshift), cfg & ~0x7FFULL);
+
+	offset = rvu_get_msix_offset(rvu, pfvf, block->addr, lf);
+
+	/* Update the mapping */
+	for (vec = 0; vec < nvecs; vec++)
+		pfvf->msix_lfmap[offset + vec] = 0;
+
+	/* Free the same in MSIX bitmap */
+	rvu_free_rsrc_contig(&pfvf->msix, nvecs, offset);
+}
+
+static int rvu_mbox_handler_MSIX_OFFSET(struct rvu *rvu, struct msg_req *req,
+					struct msix_offset_rsp *rsp)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc = req->hdr.pcifunc;
+	struct rvu_pfvf *pfvf;
+	int lf, slot;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	if (!pfvf->msix.bmap)
+		return 0;
+
+	/* Set MSIX offsets for each block's LFs attached to this PF/VF */
+	lf = rvu_get_lf(rvu, &hw->block[BLKADDR_NPA], pcifunc, 0);
+	rsp->npa_msixoff = rvu_get_msix_offset(rvu, pfvf, BLKADDR_NPA, lf);
+
+	lf = rvu_get_lf(rvu, &hw->block[BLKADDR_NIX0], pcifunc, 0);
+	rsp->nix_msixoff = rvu_get_msix_offset(rvu, pfvf, BLKADDR_NIX0, lf);
+
+	rsp->sso = pfvf->sso;
+	for (slot = 0; slot < rsp->sso; slot++) {
+		lf = rvu_get_lf(rvu, &hw->block[BLKADDR_SSO], pcifunc, slot);
+		rsp->sso_msixoff[slot] =
+			rvu_get_msix_offset(rvu, pfvf, BLKADDR_SSO, lf);
+	}
+
+	rsp->ssow = pfvf->ssow;
+	for (slot = 0; slot < rsp->ssow; slot++) {
+		lf = rvu_get_lf(rvu, &hw->block[BLKADDR_SSOW], pcifunc, slot);
+		rsp->ssow_msixoff[slot] =
+			rvu_get_msix_offset(rvu, pfvf, BLKADDR_SSOW, lf);
+	}
+
+	rsp->timlfs = pfvf->timlfs;
+	for (slot = 0; slot < rsp->timlfs; slot++) {
+		lf = rvu_get_lf(rvu, &hw->block[BLKADDR_TIM], pcifunc, slot);
+		rsp->timlf_msixoff[slot] =
+			rvu_get_msix_offset(rvu, pfvf, BLKADDR_TIM, lf);
+	}
+
+	rsp->cptlfs = pfvf->cptlfs;
+	for (slot = 0; slot < rsp->cptlfs; slot++) {
+		lf = rvu_get_lf(rvu, &hw->block[BLKADDR_CPT0], pcifunc, slot);
+		rsp->cptlf_msixoff[slot] =
+			rvu_get_msix_offset(rvu, pfvf, BLKADDR_CPT0, lf);
+	}
+	return 0;
+}
+
 static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
 				struct mbox_msghdr *req)
 {
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 0f76704..7435e83 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -65,6 +65,11 @@ struct rvu_pfvf {
 	u16		ssow;
 	u16		cptlfs;
 	u16		timlfs;
+
+	/* Block LF's MSIX vector info */
+	struct rsrc_bmap msix;      /* Bitmap for MSIX vector alloc */
+#define MSIX_BLKLF(blkaddr, lf) (((blkaddr) << 8) | ((lf) & 0xFF))
+	u16		 *msix_lfmap; /* Vector to block LF mapping */
 };
 
 struct rvu_hwinfo {
@@ -126,7 +131,9 @@ void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
 int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
 int rvu_get_pf(u16 pcifunc);
 struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
+void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
 bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
+int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot);
 int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
diff --git a/drivers/soc/marvell/octeontx2/rvu_struct.h b/drivers/soc/marvell/octeontx2/rvu_struct.h
index 77ac751..f61c862 100644
--- a/drivers/soc/marvell/octeontx2/rvu_struct.h
+++ b/drivers/soc/marvell/octeontx2/rvu_struct.h
@@ -58,6 +58,7 @@ enum rvu_af_int_vec_e {
 	RVU_AF_INT_VEC_PFME   = 0x2,
 	RVU_AF_INT_VEC_GEN    = 0x3,
 	RVU_AF_INT_VEC_MBOX   = 0x4,
+	RVU_AF_INT_VEC_CNT    = 0x5,
 };
 
 /**
@@ -71,6 +72,7 @@ enum rvu_pf_int_vec_e {
 	RVU_PF_INT_VEC_VFPF_MBOX0 = 0x4,
 	RVU_PF_INT_VEC_VFPF_MBOX1 = 0x5,
 	RVU_PF_INT_VEC_AFPF_MBOX  = 0x6,
+	RVU_PF_INT_VEC_CNT	  = 0x7,
 };
 
 #endif /* RVU_STRUCT_H */
-- 
2.7.4
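
As a consumer-side sketch (illustration only, not part of the patch above): once a
PF/VF driver has a struct msix_offset_rsp filled in by the MSIX_OFFSET handler, the
reported offsets index directly into that PF/VF's own MSI-X table, and a value of
MSIX_VECTOR_INVALID (0xFFFF) means no LF of that block is attached. The helper name
and the request_irq() plumbing below are assumptions, not part of this series.

#include <linux/interrupt.h>
#include <linux/pci.h>
#include "mbox.h"	/* struct msix_offset_rsp, MSIX_VECTOR_INVALID */

/* Hypothetical consumer-side helper: register an IRQ for the NPA LF
 * using the offset reported by the AF in struct msix_offset_rsp.
 */
static int setup_npa_lf_irq(struct pci_dev *pdev, struct msix_offset_rsp *rsp,
			    irq_handler_t handler, void *data)
{
	int irq;

	if (rsp->npa_msixoff == MSIX_VECTOR_INVALID)
		return -ENOENT;	/* no NPA LF attached to this PF/VF */

	/* Map the MSI-X table index to a Linux IRQ number */
	irq = pci_irq_vector(pdev, rsp->npa_msixoff);
	if (irq < 0)
		return irq;

	return request_irq(irq, handler, 0, "npa-lf", data);
}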


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* [PATCH v2 10/15] soc: octeontx2: Reconfig MSIX base with IOVA
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Geetha sowjanya,
	Sunil Goutham

From: Geetha sowjanya <gakula@marvell.com>

HW interprets the RVU_AF_MSIXTR_BASE address as an IOVA, hence
create an IOMMU mapping for the physical address configured by
firmware and reconfigure RVU_AF_MSIXTR_BASE with that IOVA.

Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
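A note on sizing (illustration only): the region remapped here is the whole RVU
MSI-X table, whose length follows from RVU_PRIV_CONST[19:0] (total number of MSI-X
entries) times PCI_MSIX_ENTRY_SIZE (16 bytes per entry). The helper name below is
an assumption, not part of this patch.

/* Hypothetical helper showing how the length passed to
 * dma_map_single()/dma_unmap_single() in this patch is derived.
 */
static inline size_t rvu_msixtr_size(u64 priv_const)
{
	u32 max_msix = priv_const & 0xFFFFF;	/* RVU_PRIV_CONST[19:0] */

	return (size_t)max_msix * PCI_MSIX_ENTRY_SIZE;	/* 16 bytes/entry */
}
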
 drivers/soc/marvell/octeontx2/rvu.c | 33 ++++++++++++++++++++++++++++++---
 drivers/soc/marvell/octeontx2/rvu.h |  1 +
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 8ac3524..40684c9 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -442,9 +442,10 @@ static int rvu_setup_msix_resources(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	int pf, vf, numvfs, hwvf, err;
+	int nvecs, offset, max_msix;
 	struct rvu_pfvf *pfvf;
-	int nvecs, offset;
-	u64 cfg;
+	u64 cfg, phy_addr;
+	dma_addr_t iova;
 
 	for (pf = 0; pf < hw->total_pfs; pf++) {
 		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
@@ -523,6 +524,22 @@ static int rvu_setup_msix_resources(struct rvu *rvu)
 		}
 	}
 
+	/* HW interprets the RVU_AF_MSIXTR_BASE address as an IOVA, hence
+	 * create an IOMMU mapping for the physical address configured by
+	 * firmware and reconfigure RVU_AF_MSIXTR_BASE with that IOVA.
+	 */
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
+	max_msix = cfg & 0xFFFFF;
+	phy_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_MSIXTR_BASE);
+	iova = dma_map_single(rvu->dev, (void *)phy_addr,
+			      max_msix * PCI_MSIX_ENTRY_SIZE,
+			      DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(rvu->dev, iova))
+		return -ENOMEM;
+
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_MSIXTR_BASE, (u64)iova);
+	rvu->msix_base_iova = iova;
+
 	return 0;
 }
 
@@ -531,7 +548,8 @@ static void rvu_free_hw_resources(struct rvu *rvu)
 	struct rvu_hwinfo *hw = rvu->hw;
 	struct rvu_block *block;
 	struct rvu_pfvf  *pfvf;
-	int id;
+	int id, max_msix;
+	u64 cfg;
 
 	/* Free block LF bitmaps */
 	for (id = 0; id < BLK_COUNT; id++) {
@@ -549,6 +567,15 @@ static void rvu_free_hw_resources(struct rvu *rvu)
 		pfvf = &rvu->hwvf[id];
 		kfree(pfvf->msix.bmap);
 	}
+
+	/* Unmap MSIX vector base IOVA mapping */
+	if (!rvu->msix_base_iova)
+		return;
+	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
+	max_msix = cfg & 0xFFFFF;
+	dma_unmap_single(rvu->dev, rvu->msix_base_iova,
+			 max_msix * PCI_MSIX_ENTRY_SIZE,
+			 DMA_BIDIRECTIONAL);
 }
 
 static int rvu_setup_hw_resources(struct rvu *rvu)
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 7435e83..92c2022 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -99,6 +99,7 @@ struct rvu {
 	u16			num_vec;
 	char			*irq_name;
 	bool			*irq_allocated;
+	dma_addr_t		msix_base_iova;
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 11/15] soc: octeontx2: Add Marvell OcteonTX2 CGX driver
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

This patch adds a basic template for Marvell OcteonTX2's
CGX ethernet interface driver, with just the probe routine.
The RVU AF driver will use APIs exported by this driver
for things like PF to physical interface mapping,
loopback mode, interface stats etc. Hence both drivers
are merged into a single module.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
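A rough sketch of how the RVU AF side is expected to consume the interface this
driver exports; the accessors cgx_get_cgx_cnt()/cgx_get_pdata()/cgx_get_lmac_cnt()
are added later in this series, and the loop below is illustrative only, not part
of this patch.

/* Illustrative only: enumerate probed CGX devices and their LMAC counts
 * from the AF driver, using accessors exported by this CGX module.
 */
static void rvu_af_scan_cgx(struct rvu *rvu)
{
	int cgx, nlmacs;

	for (cgx = 0; cgx < cgx_get_cgx_cnt(); cgx++) {
		void *cgxd = cgx_get_pdata(cgx);

		if (!cgxd)
			continue;

		nlmacs = cgx_get_lmac_cnt(cgxd);
		dev_info(rvu->dev, "CGX%d: %d LMACs\n", cgx, nlmacs);
	}
}
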
 drivers/soc/marvell/Kconfig            |   3 +-
 drivers/soc/marvell/octeontx2/Makefile |   2 +-
 drivers/soc/marvell/octeontx2/cgx.c    | 102 +++++++++++++++++++++++++++++++++
 drivers/soc/marvell/octeontx2/cgx.h    |  22 +++++++
 drivers/soc/marvell/octeontx2/rvu.c    |  14 ++++-
 5 files changed, 140 insertions(+), 3 deletions(-)
 create mode 100644 drivers/soc/marvell/octeontx2/cgx.c
 create mode 100644 drivers/soc/marvell/octeontx2/cgx.h

diff --git a/drivers/soc/marvell/Kconfig b/drivers/soc/marvell/Kconfig
index 428d22e..8f36f3a 100644
--- a/drivers/soc/marvell/Kconfig
+++ b/drivers/soc/marvell/Kconfig
@@ -11,7 +11,8 @@ config OCTEONTX2_AF
 	tristate "OcteonTX2 RVU Admin Function driver"
 	select OCTEONTX2_MBOX
 	depends on ARM64 && PCI
-	help
+	---help---
 	  This driver supports Marvell's OcteonTX2 Resource Virtualization
 	  Unit's admin function manager which manages all RVU HW resources.
+
 endmenu
diff --git a/drivers/soc/marvell/octeontx2/Makefile b/drivers/soc/marvell/octeontx2/Makefile
index ac17cb9..8646421 100644
--- a/drivers/soc/marvell/octeontx2/Makefile
+++ b/drivers/soc/marvell/octeontx2/Makefile
@@ -7,4 +7,4 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
 obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
 
 octeontx2_mbox-y := mbox.o
-octeontx2_af-y := rvu.o
+octeontx2_af-y := cgx.o rvu.o
diff --git a/drivers/soc/marvell/octeontx2/cgx.c b/drivers/soc/marvell/octeontx2/cgx.c
new file mode 100644
index 0000000..47aa4cb
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/cgx.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell OcteonTx2 CGX driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/phy.h>
+#include <linux/of.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+
+#include "cgx.h"
+
+#define DRV_NAME	"octeontx2-cgx"
+#define DRV_STRING      "Marvell OcteonTX2 CGX/MAC Driver"
+#define DRV_VERSION	"1.0"
+
+struct cgx {
+	void __iomem		*reg_base;
+	struct pci_dev		*pdev;
+	u8			cgx_id;
+};
+
+/* Supported devices */
+static const struct pci_device_id cgx_id_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) },
+	{ 0, }  /* end of table */
+};
+
+MODULE_AUTHOR("Marvell International Ltd.");
+MODULE_DESCRIPTION(DRV_STRING);
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION(DRV_VERSION);
+MODULE_DEVICE_TABLE(pci, cgx_id_table);
+
+static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	int err;
+	struct device *dev = &pdev->dev;
+	struct cgx *cgx;
+
+	cgx = devm_kzalloc(dev, sizeof(*cgx), GFP_KERNEL);
+	if (!cgx)
+		return -ENOMEM;
+	cgx->pdev = pdev;
+
+	pci_set_drvdata(pdev, cgx);
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Failed to enable PCI device\n");
+		pci_set_drvdata(pdev, NULL);
+		return err;
+	}
+
+	err = pci_request_regions(pdev, DRV_NAME);
+	if (err) {
+		dev_err(dev, "PCI request regions failed 0x%x\n", err);
+		goto err_disable_device;
+	}
+
+	/* MAP configuration registers */
+	cgx->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+	if (!cgx->reg_base) {
+		dev_err(dev, "CGX: Cannot map CSR memory space, aborting\n");
+		err = -ENOMEM;
+		goto err_release_regions;
+	}
+
+	return 0;
+
+err_release_regions:
+	pci_release_regions(pdev);
+err_disable_device:
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+	return err;
+}
+
+static void cgx_remove(struct pci_dev *pdev)
+{
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+}
+
+struct pci_driver cgx_driver = {
+	.name = DRV_NAME,
+	.id_table = cgx_id_table,
+	.probe = cgx_probe,
+	.remove = cgx_remove,
+};
diff --git a/drivers/soc/marvell/octeontx2/cgx.h b/drivers/soc/marvell/octeontx2/cgx.h
new file mode 100644
index 0000000..a7d4b39
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/cgx.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 CGX driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef CGX_H
+#define CGX_H
+
+ /* PCI device IDs */
+#define	PCI_DEVID_OCTEONTX2_CGX			0xA059
+
+/* PCI BAR nos */
+#define PCI_CFG_REG_BAR_NUM			0
+
+extern struct pci_driver cgx_driver;
+
+#endif /* CGX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index 40684c9..daa6fd3 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -15,6 +15,7 @@
 #include <linux/pci.h>
 #include <linux/sysfs.h>
 
+#include "cgx.h"
 #include "rvu.h"
 #include "rvu_reg.h"
 
@@ -1605,14 +1606,25 @@ static struct pci_driver rvu_driver = {
 
 static int __init rvu_init_module(void)
 {
+	int err;
+
 	pr_info("%s: %s\n", DRV_NAME, DRV_STRING);
 
-	return pci_register_driver(&rvu_driver);
+	err = pci_register_driver(&cgx_driver);
+	if (err < 0)
+		return err;
+
+	err =  pci_register_driver(&rvu_driver);
+	if (err < 0)
+		pci_unregister_driver(&cgx_driver);
+
+	return err;
 }
 
 static void __exit rvu_cleanup_module(void)
 {
 	pci_unregister_driver(&rvu_driver);
+	pci_unregister_driver(&cgx_driver);
 }
 
 module_init(rvu_init_module);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 12/15] soc: octeontx2: Set RVU PFs to CGX LMACs mapping
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Linu Cherian,
	Geetha sowjanya

From: Linu Cherian <lcherian@marvell.com>

Each enabled CGX LMAC is considered a physical interface,
and RVU PFs are mapped to these. VFs of these SRIOV PFs
will be virtual interfaces and share the CGX LMAC with
their parent PF.

This mapping info will be used later on for Rx/Tx pkt steering.

Signed-off-by: Linu Cherian <lcherian@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
---
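For reference, a sketch of the reverse lookup implied by the map layout described
above: pf2cgxlmac_map[pf] packs (cgx_id, lmac_id) exactly as cgxlmac_id_to_bmap()
does, with 0xFF marking a PF that has no CGX LMAC. The helper name below is an
assumption, not part of this patch.

/* Hypothetical decode of a pf2cgxlmac_map[] entry back into its
 * CGX and LMAC ids; returns false for an unmapped PF (entry 0xFF).
 */
static inline bool rvu_pf_to_cgxlmac(u8 map, u8 *cgx_id, u8 *lmac_id)
{
	if (map == 0xFF)
		return false;

	*cgx_id  = (map >> 4) & 0xF;
	*lmac_id = map & 0xF;
	return true;
}
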
 drivers/soc/marvell/octeontx2/Makefile  |  2 +-
 drivers/soc/marvell/octeontx2/cgx.c     | 59 ++++++++++++++++++++
 drivers/soc/marvell/octeontx2/cgx.h     | 15 ++++-
 drivers/soc/marvell/octeontx2/rvu.c     |  4 ++
 drivers/soc/marvell/octeontx2/rvu.h     | 12 ++++
 drivers/soc/marvell/octeontx2/rvu_cgx.c | 97 +++++++++++++++++++++++++++++++++
 6 files changed, 186 insertions(+), 3 deletions(-)
 create mode 100644 drivers/soc/marvell/octeontx2/rvu_cgx.c

diff --git a/drivers/soc/marvell/octeontx2/Makefile b/drivers/soc/marvell/octeontx2/Makefile
index 8646421..eaac264 100644
--- a/drivers/soc/marvell/octeontx2/Makefile
+++ b/drivers/soc/marvell/octeontx2/Makefile
@@ -7,4 +7,4 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
 obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
 
 octeontx2_mbox-y := mbox.o
-octeontx2_af-y := cgx.o rvu.o
+octeontx2_af-y := cgx.o rvu.o rvu_cgx.o
diff --git a/drivers/soc/marvell/octeontx2/cgx.c b/drivers/soc/marvell/octeontx2/cgx.c
index 47aa4cb..c5e0ebb 100644
--- a/drivers/soc/marvell/octeontx2/cgx.c
+++ b/drivers/soc/marvell/octeontx2/cgx.c
@@ -29,8 +29,12 @@ struct cgx {
 	void __iomem		*reg_base;
 	struct pci_dev		*pdev;
 	u8			cgx_id;
+	u8			lmac_count;
+	struct list_head	cgx_list;
 };
 
+static LIST_HEAD(cgx_list);
+
 /* Supported devices */
 static const struct pci_device_id cgx_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) },
@@ -43,6 +47,53 @@ MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 MODULE_DEVICE_TABLE(pci, cgx_id_table);
 
+static u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset)
+{
+	return readq(cgx->reg_base + (lmac << 18) + offset);
+}
+
+int cgx_get_cgx_cnt(void)
+{
+	struct cgx *cgx_dev;
+	int count = 0;
+
+	list_for_each_entry(cgx_dev, &cgx_list, cgx_list)
+		count++;
+
+	return count;
+}
+EXPORT_SYMBOL(cgx_get_cgx_cnt);
+
+int cgx_get_lmac_cnt(void *cgxd)
+{
+	struct cgx *cgx = cgxd;
+
+	if (!cgx)
+		return -ENODEV;
+
+	return cgx->lmac_count;
+}
+EXPORT_SYMBOL(cgx_get_lmac_cnt);
+
+void *cgx_get_pdata(int cgx_id)
+{
+	struct cgx *cgx_dev;
+
+	list_for_each_entry(cgx_dev, &cgx_list, cgx_list) {
+		if (cgx_dev->cgx_id == cgx_id)
+			return cgx_dev;
+	}
+	return NULL;
+}
+EXPORT_SYMBOL(cgx_get_pdata);
+
+static void cgx_lmac_init(struct cgx *cgx)
+{
+	cgx->lmac_count = cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0x7;
+	if (cgx->lmac_count > MAX_LMAC_PER_CGX)
+		cgx->lmac_count = MAX_LMAC_PER_CGX;
+}
+
 static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	int err;
@@ -77,9 +128,14 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_release_regions;
 	}
 
+	list_add(&cgx->cgx_list, &cgx_list);
+	cgx->cgx_id = cgx_get_cgx_cnt() - 1;
+	cgx_lmac_init(cgx);
+
 	return 0;
 
 err_release_regions:
+	list_del(&cgx->cgx_list);
 	pci_release_regions(pdev);
 err_disable_device:
 	pci_disable_device(pdev);
@@ -89,6 +145,9 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 static void cgx_remove(struct pci_dev *pdev)
 {
+	struct cgx *cgx = pci_get_drvdata(pdev);
+
+	list_del(&cgx->cgx_list);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
diff --git a/drivers/soc/marvell/octeontx2/cgx.h b/drivers/soc/marvell/octeontx2/cgx.h
index a7d4b39..acdc16e 100644
--- a/drivers/soc/marvell/octeontx2/cgx.h
+++ b/drivers/soc/marvell/octeontx2/cgx.h
@@ -12,11 +12,22 @@
 #define CGX_H
 
  /* PCI device IDs */
-#define	PCI_DEVID_OCTEONTX2_CGX			0xA059
+#define	PCI_DEVID_OCTEONTX2_CGX		0xA059
 
 /* PCI BAR nos */
-#define PCI_CFG_REG_BAR_NUM			0
+#define PCI_CFG_REG_BAR_NUM		0
+
+#define MAX_CGX				3
+#define MAX_LMAC_PER_CGX		4
+#define CGX_OFFSET(x)			((x) * MAX_LMAC_PER_CGX)
+
+/* Registers */
+#define CGXX_CMRX_RX_ID_MAP		0x060
+#define CGXX_CMRX_RX_LMACS		0x128
 
 extern struct pci_driver cgx_driver;
 
+int cgx_get_cgx_cnt(void);
+int cgx_get_lmac_cnt(void *cgxd);
+void *cgx_get_pdata(int cgx_id);
 #endif /* CGX_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index daa6fd3..faf7d0f 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -1558,6 +1558,10 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (err)
 		goto err_hwsetup;
 
+	err = rvu_cgx_probe(rvu);
+	if (err)
+		goto err_mbox;
+
 	err = rvu_register_interrupts(rvu);
 	if (err)
 		goto err_mbox;
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 92c2022..385f597 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -100,6 +100,16 @@ struct rvu {
 	char			*irq_name;
 	bool			*irq_allocated;
 	dma_addr_t		msix_base_iova;
+
+	/* CGX */
+#define PF_CGXMAP_BASE		1 /* PF 0 is reserved for RVU PF */
+	u8			cgx_mapped_pfs;
+	u8			cgx_cnt; /* available cgx ports */
+	u8			*pf2cgxlmac_map; /* pf to cgx_lmac map */
+	u16			*cgxlmac2pf_map; /* bitmap of mapped pfs for
+						  * every cgx lmac port
+						  */
+	void			**cgx_idmap; /* cgx id to cgx data map table */
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -138,4 +148,6 @@ int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot);
 int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
 int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
+/* CGX APIs */
+int rvu_cgx_probe(struct rvu *rvu);
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_cgx.c b/drivers/soc/marvell/octeontx2/rvu_cgx.c
new file mode 100644
index 0000000..bf81507
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/rvu_cgx.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell OcteonTx2 RVU Admin Function driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "rvu.h"
+#include "cgx.h"
+
+static inline u8 cgxlmac_id_to_bmap(u8 cgx_id, u8 lmac_id)
+{
+	return ((cgx_id & 0xF) << 4) | (lmac_id & 0xF);
+}
+
+static void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu)
+{
+	if (cgx_id >= rvu->cgx_cnt)
+		return NULL;
+
+	return rvu->cgx_idmap[cgx_id];
+}
+
+static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
+{
+	int cgx_cnt = rvu->cgx_cnt;
+	int cgx, lmac_cnt, lmac;
+	int pf = PF_CGXMAP_BASE;
+	int size;
+
+	if (!cgx_cnt)
+		return 0;
+
+	if (cgx_cnt > 0xF || MAX_LMAC_PER_CGX > 0xF)
+		return -EINVAL;
+
+	/* Alloc map table
+	 * An additional entry is required since PF id starts from 1 and
+	 * hence entry at offset 0 is invalid.
+	 */
+	size = (cgx_cnt * MAX_LMAC_PER_CGX + 1) * sizeof(u8);
+	rvu->pf2cgxlmac_map = devm_kzalloc(rvu->dev, size, GFP_KERNEL);
+	if (!rvu->pf2cgxlmac_map)
+		return -ENOMEM;
+
+	/* Initialize offset 0 with an invalid cgx and lmac id */
+	rvu->pf2cgxlmac_map[0] = 0xFF;
+
+	/* Reverse map table */
+	rvu->cgxlmac2pf_map = devm_kzalloc(rvu->dev,
+				  cgx_cnt * MAX_LMAC_PER_CGX * sizeof(u16),
+				  GFP_KERNEL);
+	if (!rvu->cgxlmac2pf_map)
+		return -ENOMEM;
+
+	rvu->cgx_mapped_pfs = 0;
+	for (cgx = 0; cgx < cgx_cnt; cgx++) {
+		lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
+		for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) {
+			rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac);
+			rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf;
+			rvu->cgx_mapped_pfs++;
+		}
+	}
+	return 0;
+}
+
+int rvu_cgx_probe(struct rvu *rvu)
+{
+	int i;
+
+	/* find available cgx ports */
+	rvu->cgx_cnt = cgx_get_cgx_cnt();
+	if (!rvu->cgx_cnt) {
+		dev_info(rvu->dev, "No CGX devices found!\n");
+		return -ENODEV;
+	}
+
+	rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt * sizeof(void *),
+				      GFP_KERNEL);
+	if (!rvu->cgx_idmap)
+		return -ENOMEM;
+
+	/* Initialize the cgxdata table */
+	for (i = 0; i < rvu->cgx_cnt; i++)
+		rvu->cgx_idmap[i] = cgx_get_pdata(i);
+
+	/* Map CGX LMAC interfaces to RVU PFs */
+	return rvu_map_cgx_lmac_pf(rvu);
+}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

+				  GFP_KERNEL);
+	if (!rvu->cgxlmac2pf_map)
+		return -ENOMEM;
+
+	rvu->cgx_mapped_pfs = 0;
+	for (cgx = 0; cgx < cgx_cnt; cgx++) {
+		lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
+		for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) {
+			rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac);
+			rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf;
+			rvu->cgx_mapped_pfs++;
+		}
+	}
+	return 0;
+}
+
+int rvu_cgx_probe(struct rvu *rvu)
+{
+	int i;
+
+	/* find available cgx ports */
+	rvu->cgx_cnt = cgx_get_cgx_cnt();
+	if (!rvu->cgx_cnt) {
+		dev_info(rvu->dev, "No CGX devices found!\n");
+		return -ENODEV;
+	}
+
+	rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt * sizeof(void *),
+				      GFP_KERNEL);
+	if (!rvu->cgx_idmap)
+		return -ENOMEM;
+
+	/* Initialize the cgxdata table */
+	for (i = 0; i < rvu->cgx_cnt; i++)
+		rvu->cgx_idmap[i] = cgx_get_pdata(i);
+
+	/* Map CGX LMAC interfaces to RVU PFs */
+	return rvu_map_cgx_lmac_pf(rvu);
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 13/15] soc: octeontx2: Add support for CGX link management
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Linu Cherian, Nithya Mani

From: Linu Cherian <lcherian@marvell.com>

CGX LMAC initialization, link status polling etc. is done
by low-level secure firmware. For link management this patch
adds an interface / communication mechanism between the
firmware and this kernel CGX driver; a minimal sketch of the
handshake is included after the summary below.

- Firmware interface specification is defined in cgx_fw_if.h.
- Support to send/receive commands/events to/from firmware.
- Events/commands implemented:
  * link up
  * link down
  * reading firmware version
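
As a rough illustration of the mechanism (not part of the patch), the
non-secure side of the command handshake can be sketched as below.
Names such as fwi_send_cmd(), scratch_cmd_write() and struct
fwi_channel are hypothetical stand-ins; the driver's real code is
cgx_fwi_cmd_send() in the diff that follows, and memory barriers and
ownership checks are omitted here for brevity.

/* Minimal sketch of the firmware command handshake, kernel side.
 * Illustrative only; register layouts live in cgx_fw_if.h.
 */
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/wait.h>

struct fwi_channel {
	struct mutex lock;		/* serialize commands */
	wait_queue_head_t wq;		/* woken by the FWI interrupt */
	bool cmd_pend;			/* cleared by the IRQ handler */
	u64 resp;			/* response copied by the IRQ handler */
};

/* Hypothetical accessor standing in for cgx_write() on the command CSR */
void scratch_cmd_write(struct fwi_channel *ch, u64 val);

static int fwi_send_cmd(struct fwi_channel *ch, u64 cmd, u64 *resp,
			unsigned long timeout_ms)
{
	int err;

	err = mutex_lock_interruptible(&ch->lock);
	if (err)
		return err;

	/* Hand the register over to firmware and wait for its event */
	ch->cmd_pend = true;
	scratch_cmd_write(ch, cmd);

	if (!wait_event_timeout(ch->wq, !ch->cmd_pend,
				msecs_to_jiffies(timeout_ms))) {
		err = -EIO;
		goto out;
	}

	*resp = ch->resp;	/* filled in by the interrupt handler */
out:
	mutex_unlock(&ch->lock);
	return err;
}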

Signed-off-by: Linu Cherian <lcherian@marvell.com>
Signed-off-by: Nithya Mani <nmani@marvell.com>
---
 drivers/soc/marvell/octeontx2/cgx.c       | 364 +++++++++++++++++++++++++++++-
 drivers/soc/marvell/octeontx2/cgx.h       |  32 +++
 drivers/soc/marvell/octeontx2/cgx_fw_if.h | 225 ++++++++++++++++++
 3 files changed, 617 insertions(+), 4 deletions(-)
 create mode 100644 drivers/soc/marvell/octeontx2/cgx_fw_if.h

diff --git a/drivers/soc/marvell/octeontx2/cgx.c b/drivers/soc/marvell/octeontx2/cgx.c
index c5e0ebb..26af8fa 100644
--- a/drivers/soc/marvell/octeontx2/cgx.c
+++ b/drivers/soc/marvell/octeontx2/cgx.c
@@ -25,16 +25,43 @@
 #define DRV_STRING      "Marvell OcteonTX2 CGX/MAC Driver"
 #define DRV_VERSION	"1.0"
 
+/**
+ * struct lmac
+ * @wq_cmd_cmplt:	waitq to keep the process blocked until cmd completion
+ * @cmd_lock:		Lock to serialize the command interface
+ * @resp:		command response
+ * @event_cb:		callback for linkchange events
+ * @cmd_pend:		flag set before new command is started
+ *			flag cleared after command response is received
+ * @cgx:		parent cgx port
+ * @lmac_id:		lmac port id
+ * @name:		lmac port name
+ */
+struct lmac {
+	wait_queue_head_t wq_cmd_cmplt;
+	struct mutex cmd_lock;
+	struct cgx_evt_sts resp;
+	struct cgx_event_cb event_cb;
+	bool cmd_pend;
+	struct cgx *cgx;
+	u8 lmac_id;
+	char *name;
+};
+
 struct cgx {
 	void __iomem		*reg_base;
 	struct pci_dev		*pdev;
 	u8			cgx_id;
 	u8			lmac_count;
+	struct lmac		*lmac_idmap[MAX_LMAC_PER_CGX];
 	struct list_head	cgx_list;
 };
 
 static LIST_HEAD(cgx_list);
 
+/* CGX PHY management internal APIs */
+static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool en);
+
 /* Supported devices */
 static const struct pci_device_id cgx_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) },
@@ -47,11 +74,24 @@ MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 MODULE_DEVICE_TABLE(pci, cgx_id_table);
 
+static void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
+{
+	writeq(val, cgx->reg_base + (lmac << 18) + offset);
+}
+
 static u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset)
 {
 	return readq(cgx->reg_base + (lmac << 18) + offset);
 }
 
+static inline struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx)
+{
+	if (!cgx || lmac_id >= MAX_LMAC_PER_CGX)
+		return NULL;
+
+	return cgx->lmac_idmap[lmac_id];
+}
+
 int cgx_get_cgx_cnt(void)
 {
 	struct cgx *cgx_dev;
@@ -87,18 +127,318 @@ void *cgx_get_pdata(int cgx_id)
 }
 EXPORT_SYMBOL(cgx_get_pdata);
 
-static void cgx_lmac_init(struct cgx *cgx)
+/* CGX Firmware interface low level support */
+static int cgx_fwi_cmd_send(struct cgx_cmd *cmd, struct cgx_evt_sts *rsp,
+			    struct lmac *lmac)
+{
+	struct cgx *cgx = lmac->cgx;
+	union cgx_cmdreg creg;
+	union cgx_evtreg ereg;
+	struct device *dev;
+	int err = 0;
+
+	/* Ensure no other command is in progress */
+	err = mutex_lock_interruptible(&lmac->cmd_lock);
+	if (err)
+		return err;
+
+	/* Ensure command register is free */
+	creg.val = cgx_read(cgx, lmac->lmac_id,  CGX_COMMAND_REG);
+	if (creg.cmd.own != CGX_CMD_OWN_NS) {
+		err = -EBUSY;
+		goto unlock;
+	}
+
+	/* Update ownership in command request */
+	cmd->own = CGX_CMD_OWN_FIRMWARE;
+
+	/* Mark this lmac as pending, before we start */
+	lmac->cmd_pend = true;
+
+	/* Start command in hardware */
+	creg.cmd = *cmd;
+	cgx_write(cgx, lmac->lmac_id, CGX_COMMAND_REG, creg.val);
+	creg.val = cgx_read(cgx, lmac->lmac_id,  CGX_COMMAND_REG);
+
+	/* Ensure command is completed without errors */
+	if (!wait_event_timeout(lmac->wq_cmd_cmplt, !lmac->cmd_pend,
+				msecs_to_jiffies(CGX_CMD_TIMEOUT))) {
+		dev = &cgx->pdev->dev;
+		ereg.val = cgx_read(cgx, lmac->lmac_id,  CGX_EVENT_REG);
+		if (ereg.val) {
+			dev_err(dev, "cgx port %d:%d: No event for response\n",
+				cgx->cgx_id, lmac->lmac_id);
+			/* copy event */
+			lmac->resp = ereg.evt_sts;
+		} else {
+			dev_err(dev, "cgx port %d:%d cmd timeout\n",
+				cgx->cgx_id, lmac->lmac_id);
+			err = -EIO;
+			goto unlock;
+		}
+	}
+
+	/* we have a valid command response */
+	smp_rmb(); /* Ensure the latest updates are visible */
+	*rsp = lmac->resp;
+
+unlock:
+	mutex_unlock(&lmac->cmd_lock);
+
+	return err;
+}
+
+static inline int cgx_fwi_cmd_generic(struct cgx_cmd *req,
+				      struct cgx_evt_sts *rsp,
+				      struct cgx *cgx, int lmac_id)
+{
+	struct lmac *lmac;
+	int err;
+
+	lmac = lmac_pdata(lmac_id, cgx);
+	if (!lmac)
+		return -ENODEV;
+
+	err = cgx_fwi_cmd_send(req, rsp, lmac);
+
+	/* Check for valid response */
+	if (!err) {
+		if (rsp->stat == CGX_STAT_FAIL)
+			return -EIO;
+		else
+			return 0;
+	}
+
+	return err;
+}
+
+/* Hardware event handlers */
+static inline void cgx_link_change_handler(struct cgx_lnk_sts *lstat,
+					   struct lmac *lmac)
+{
+	struct cgx *cgx = lmac->cgx;
+	struct cgx_link_event event;
+	struct device *dev = &cgx->pdev->dev;
+
+	event.lstat = *lstat;
+	event.cgx_id = cgx->cgx_id;
+	event.lmac_id = lmac->lmac_id;
+
+	if (!lmac->event_cb.notify_link_chg) {
+		dev_dbg(dev, "cgx port %d:%d Link change handler null",
+			cgx->cgx_id, lmac->lmac_id);
+		if (lstat->err_type != CGX_ERR_NONE) {
+			dev_err(dev, "cgx port %d:%d Link error %d\n",
+				cgx->cgx_id, lmac->lmac_id, lstat->err_type);
+		}
+		dev_info(dev, "cgx port %d:%d Link status %s, speed %x\n",
+			 cgx->cgx_id, lmac->lmac_id,
+			lstat->link_up ? "UP" : "DOWN", lstat->speed);
+		return;
+	}
+
+	if (lmac->event_cb.notify_link_chg(&event, lmac->event_cb.data))
+		dev_err(dev, "event notification failure\n");
+}
+
+static inline bool cgx_cmdresp_is_linkevent(struct cgx_evt_sts *rsp)
+{
+	if (rsp->id == CGX_CMD_LINK_BRING_UP ||
+	    rsp->id == CGX_CMD_LINK_BRING_DOWN)
+		return true;
+	else
+		return false;
+}
+
+static inline bool cgx_event_is_linkevent(struct cgx_evt_sts *evt)
+{
+	if (evt->id == CGX_EVT_LINK_CHANGE)
+		return true;
+	else
+		return false;
+}
+
+static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
+{
+	struct lmac *lmac = data;
+	struct cgx *cgx = lmac->cgx;
+	struct cgx_evt_sts event;
+	union cgx_evtreg ereg;
+	struct device *dev;
+
+	ereg.val = cgx_read(cgx, lmac->lmac_id, CGX_EVENT_REG);
+	if (!ereg.evt_sts.ack)
+		return IRQ_NONE;
+
+	dev = &cgx->pdev->dev;
+
+	event = ereg.evt_sts;
+
+	switch (event.evt_type) {
+	case CGX_EVT_CMD_RESP:
+		/* Copy the response. Since only one command is active at a
+		 * time, there is no way a response can get overwritten
+		 */
+		lmac->resp = event;
+		/* Ensure response is updated before thread context starts */
+		smp_wmb();
+
+		/* There wont be separate events for link change initiated from
+		 * software; Hence report the command responses as events
+		 */
+		if (cgx_cmdresp_is_linkevent(&event))
+			cgx_link_change_handler(&ereg.link_sts, lmac);
+
+		/* Release thread waiting for completion  */
+		lmac->cmd_pend = false;
+		wake_up_interruptible(&lmac->wq_cmd_cmplt);
+		break;
+	case CGX_EVT_ASYNC:
+		if (cgx_event_is_linkevent(&event))
+			cgx_link_change_handler(&ereg.link_sts, lmac);
+		break;
+	default:
+		dev_err(dev, "cgx port %d:%d Unknown event received\n",
+			cgx->cgx_id, lmac->lmac_id);
+	}
+
+	/* Any new event or command response will be posted by firmware
+	 * only after the current status is acked.
+	 * Ack the interrupt register as well.
+	 */
+	cgx_write(lmac->cgx, lmac->lmac_id, CGX_EVENT_REG, 0);
+	cgx_write(lmac->cgx, lmac->lmac_id, CGXX_CMRX_INT, FW_CGX_INT);
+
+	return IRQ_HANDLED;
+}
+
+/* APIs for PHY management using CGX firmware interface */
+
+/* callback registration for hardware events like link change */
+int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id)
 {
+	struct lmac *lmac;
+	struct cgx *cgx = cgxd;
+
+	lmac = lmac_pdata(lmac_id, cgx);
+	if (!lmac)
+		return -ENODEV;
+
+	lmac->event_cb = *cb;
+
+	return 0;
+}
+EXPORT_SYMBOL(cgx_lmac_evh_register);
+
+static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool enable)
+{
+	struct cgx_cmd req = { 0 };
+	struct cgx_evt_sts rsp;
+
+	if (enable)
+		req.id = CGX_CMD_LINK_BRING_UP;
+	else
+		req.id = CGX_CMD_LINK_BRING_DOWN;
+
+	return cgx_fwi_cmd_generic(&req, &rsp, cgx, lmac_id);
+}
+EXPORT_SYMBOL(cgx_fwi_link_change);
+
+static inline int cgx_fwi_read_version(struct cgx_ver_s *ver, struct cgx *cgx)
+{
+	struct cgx_cmd req = { 0 };
+	union cgx_evtreg event;
+	int err;
+
+	req.id = CGX_CMD_GET_FW_VER;
+
+	err = cgx_fwi_cmd_generic(&req, &event.evt_sts, cgx, 0);
+	if (!err)
+		*ver = event.ver;
+
+	return err;
+}
+
+static int cgx_lmac_verify_fwi_version(struct cgx *cgx)
+{
+	struct cgx_ver_s ver;
+	struct device *dev = &cgx->pdev->dev;
+	int err;
+
+	if (!cgx->lmac_count)
+		return 0;
+
+	err = cgx_fwi_read_version(&ver, cgx);
+	dev_dbg(dev, "Firmware command interface version = %d.%d\n",
+		ver.major_ver, ver.minor_ver);
+	if (err || ver.major_ver != CGX_FIRMWARE_MAJOR_VER ||
+	    ver.minor_ver != CGX_FIRMWARE_MINOR_VER)
+		return -EIO;
+	else
+		return 0;
+}
+
+static int cgx_lmac_init(struct cgx *cgx)
+{
+	struct lmac *lmac;
+	int i, err;
+
 	cgx->lmac_count = cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0x7;
 	if (cgx->lmac_count > MAX_LMAC_PER_CGX)
 		cgx->lmac_count = MAX_LMAC_PER_CGX;
+
+	for (i = 0; i < cgx->lmac_count; i++) {
+		lmac = kcalloc(1, sizeof(struct lmac), GFP_KERNEL);
+		if (!lmac)
+			return -ENOMEM;
+		lmac->name = kcalloc(1, sizeof("cgx_fwi_xxx_yyy"), GFP_KERNEL);
+		if (!lmac->name)
+			return -ENOMEM;
+		sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i);
+		lmac->lmac_id = i;
+		lmac->cgx = cgx;
+		init_waitqueue_head(&lmac->wq_cmd_cmplt);
+		mutex_init(&lmac->cmd_lock);
+		err = request_irq(pci_irq_vector(cgx->pdev,
+						 CGX_LMAC_FWI + i * 9),
+				   cgx_fwi_event_handler, 0, lmac->name, lmac);
+		if (err)
+			return err;
+
+		/* Enable interrupt */
+		cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S,
+			  FW_CGX_INT);
+
+		/* Add reference */
+		cgx->lmac_idmap[i] = lmac;
+	}
+
+	return cgx_lmac_verify_fwi_version(cgx);
+}
+
+static int cgx_lmac_exit(struct cgx *cgx)
+{
+	struct lmac *lmac;
+	int i;
+
+	/* Free all lmac related resources */
+	for (i = 0; i < cgx->lmac_count; i++) {
+		lmac = cgx->lmac_idmap[i];
+		if (!lmac)
+			continue;
+		free_irq(pci_irq_vector(cgx->pdev, CGX_LMAC_FWI + i * 9), lmac);
+		kfree(lmac->name);
+		kfree(lmac);
+	}
+
+	return 0;
 }
 
 static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	int err;
 	struct device *dev = &pdev->dev;
 	struct cgx *cgx;
+	int err, nvec;
 
 	cgx = devm_kzalloc(dev, sizeof(*cgx), GFP_KERNEL);
 	if (!cgx)
@@ -128,14 +468,28 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_release_regions;
 	}
 
+	nvec = CGX_NVEC;
+	err = pci_alloc_irq_vectors(pdev, nvec, nvec, PCI_IRQ_MSIX);
+	if (err < 0 || err != nvec) {
+		dev_err(dev, "Request for %d msix vectors failed, err %d\n",
+			nvec, err);
+		goto err_release_regions;
+	}
+
 	list_add(&cgx->cgx_list, &cgx_list);
 	cgx->cgx_id = cgx_get_cgx_cnt() - 1;
-	cgx_lmac_init(cgx);
+
+	err = cgx_lmac_init(cgx);
+	if (err)
+		goto err_release_lmac;
+
 
 	return 0;
 
-err_release_regions:
+err_release_lmac:
+	cgx_lmac_exit(cgx);
 	list_del(&cgx->cgx_list);
+err_release_regions:
 	pci_release_regions(pdev);
 err_disable_device:
 	pci_disable_device(pdev);
@@ -147,7 +501,9 @@ static void cgx_remove(struct pci_dev *pdev)
 {
 	struct cgx *cgx = pci_get_drvdata(pdev);
 
+	cgx_lmac_exit(cgx);
 	list_del(&cgx->cgx_list);
+	pci_free_irq_vectors(pdev);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
 	pci_set_drvdata(pdev, NULL);
diff --git a/drivers/soc/marvell/octeontx2/cgx.h b/drivers/soc/marvell/octeontx2/cgx.h
index acdc16e..a2a7a6d 100644
--- a/drivers/soc/marvell/octeontx2/cgx.h
+++ b/drivers/soc/marvell/octeontx2/cgx.h
@@ -11,6 +11,8 @@
 #ifndef CGX_H
 #define CGX_H
 
+#include "cgx_fw_if.h"
+
  /* PCI device IDs */
 #define	PCI_DEVID_OCTEONTX2_CGX		0xA059
 
@@ -22,12 +24,42 @@
 #define CGX_OFFSET(x)			((x) * MAX_LMAC_PER_CGX)
 
 /* Registers */
+#define CGXX_CMRX_INT			0x040
+#define  FW_CGX_INT				BIT_ULL(1)
+#define CGXX_CMRX_INT_ENA_W1S		0x058
 #define CGXX_CMRX_RX_ID_MAP		0x060
 #define CGXX_CMRX_RX_LMACS		0x128
+#define CGXX_SCRATCH0_REG		0x1050
+#define CGXX_SCRATCH1_REG		0x1058
+#define CGX_CONST			0x2000
+
+#define CGX_COMMAND_REG			CGXX_SCRATCH1_REG
+#define CGX_EVENT_REG			CGXX_SCRATCH0_REG
+#define CGX_CMD_TIMEOUT			2200 /* msecs */
+
+#define CGX_NVEC			37
+#define CGX_LMAC_FWI			0
+
+struct cgx_link_event {
+	struct cgx_lnk_sts lstat;
+	u8 cgx_id;
+	u8 lmac_id;
+};
+
+/**
+ * struct cgx_event_cb
+ * @notify_link_chg:	callback for link change notification
+ * @data:	data passed to callback function
+ */
+struct cgx_event_cb {
+	int (*notify_link_chg)(struct cgx_link_event *event, void *data);
+	void *data;
+};
 
 extern struct pci_driver cgx_driver;
 
 int cgx_get_cgx_cnt(void);
 int cgx_get_lmac_cnt(void *cgxd);
 void *cgx_get_pdata(int cgx_id);
+int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id);
 #endif /* CGX_H */
diff --git a/drivers/soc/marvell/octeontx2/cgx_fw_if.h b/drivers/soc/marvell/octeontx2/cgx_fw_if.h
new file mode 100644
index 0000000..771dd50
--- /dev/null
+++ b/drivers/soc/marvell/octeontx2/cgx_fw_if.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Marvell OcteonTx2 CGX driver
+ *
+ * Copyright (C) 2018 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CGX_FW_INTF_H__
+#define __CGX_FW_INTF_H__
+
+#define CGX_FIRMWARE_MAJOR_VER		1
+#define CGX_FIRMWARE_MINOR_VER		0
+
+#define CGX_EVENT_ACK                   1UL
+
+/* CGX error types. set for cmd response status as CGX_STAT_FAIL */
+enum cgx_error_type {
+	CGX_ERR_NONE,
+	CGX_ERR_LMAC_NOT_ENABLED,
+	CGX_ERR_LMAC_MODE_INVALID,
+	CGX_ERR_REQUEST_ID_INVALID,
+	CGX_ERR_PREV_ACK_NOT_CLEAR,
+	CGX_ERR_PHY_LINK_DOWN,
+	CGX_ERR_PCS_RESET_FAIL,
+	CGX_ERR_AN_CPT_FAIL,
+	CGX_ERR_TX_NOT_IDLE,
+	CGX_ERR_RX_NOT_IDLE,
+	CGX_ERR_SPUX_BR_BLKLOCK_FAIL,
+	CGX_ERR_SPUX_RX_ALIGN_FAIL,
+	CGX_ERR_SPUX_TX_FAULT,
+	CGX_ERR_SPUX_RX_FAULT,
+	CGX_ERR_SPUX_RESET_FAIL,
+	CGX_ERR_SPUX_AN_RESET_FAIL,
+	CGX_ERR_SPUX_USX_AN_RESET_FAIL,
+	CGX_ERR_SMUX_RX_LINK_NOT_OK,
+	CGX_ERR_PCS_RECV_LINK_FAIL,
+	CGX_ERR_TRAINING_FAIL,
+	CGX_ERR_RX_EQU_FAIL,
+	CGX_ERR_SPUX_BER_FAIL,
+	CGX_ERR_SPUX_RSFEC_ALGN_FAIL,   /* = 22 */
+};
+
+/* LINK speed types */
+enum cgx_link_speed {
+	CGX_LINK_NONE,
+	CGX_LINK_10M,
+	CGX_LINK_100M,
+	CGX_LINK_1G,
+	CGX_LINK_2HG,
+	CGX_LINK_5G,
+	CGX_LINK_10G,
+	CGX_LINK_20G,
+	CGX_LINK_25G,
+	CGX_LINK_40G,
+	CGX_LINK_50G,
+	CGX_LINK_100G,
+	CGX_LINK_SPEED_MAX,
+};
+
+/* REQUEST ID types. Input to firmware */
+enum cgx_cmd_id {
+	CGX_CMD_NONE,
+	CGX_CMD_GET_FW_VER,
+	CGX_CMD_GET_MAC_ADDR,
+	CGX_CMD_SET_MTU,
+	CGX_CMD_GET_LINK_STS,		/* optional to user */
+	CGX_CMD_LINK_BRING_UP,
+	CGX_CMD_LINK_BRING_DOWN,
+	CGX_CMD_INTERNAL_LBK,
+	CGX_CMD_EXTERNAL_LBK,
+	CGX_CMD_HIGIG,
+	CGX_CMD_LINK_STATE_CHANGE,
+	CGX_CMD_MODE_CHANGE,		/* hot plug support */
+	CGX_CMD_INTF_SHUTDOWN,
+	CGX_CMD_IRQ_ENABLE,
+	CGX_CMD_IRQ_DISABLE,
+};
+
+/* async event ids */
+enum cgx_evt_id {
+	CGX_EVT_NONE,
+	CGX_EVT_LINK_CHANGE,
+};
+
+/* event types - cause of interrupt */
+enum cgx_evt_type {
+	CGX_EVT_ASYNC,
+	CGX_EVT_CMD_RESP
+};
+
+enum cgx_stat {
+	CGX_STAT_SUCCESS,
+	CGX_STAT_FAIL
+};
+
+enum cgx_cmd_own {
+	CGX_CMD_OWN_NS,
+	CGX_CMD_OWN_FIRMWARE,
+};
+
+/* scratchx(0) CSR used for ATF->non-secure SW communication.
+ * This acts as the status register
+ * Provides details on command ack/status, link status, error details
+ */
+struct cgx_evt_sts {
+	uint64_t ack:1;
+	uint64_t evt_type:1;		/* cgx_evt_type */
+	uint64_t stat:1;		/* cgx_stat */
+	uint64_t id:6;			/* cgx_evt_id/cgx_cmd_id */
+	uint64_t reserved:55;
+};
+
+/* Response to command IDs with command status as CGX_STAT_FAIL
+ *
+ * Not applicable for commands :
+ * CGX_CMD_LINK_BRING_UP/DOWN/CGX_EVT_LINK_CHANGE
+ * check struct cgx_lnk_sts comments
+ */
+struct cgx_err_sts_s {
+	uint64_t reserved1:9;
+	uint64_t type:10;		/* cgx_error_type */
+	uint64_t reserved2:35;
+};
+
+/* Response to cmd ID as CGX_CMD_GET_FW_VER with cmd status as
+ * CGX_STAT_SUCCESS
+ */
+struct cgx_ver_s {
+	uint64_t reserved1:9;
+	uint64_t major_ver:4;
+	uint64_t minor_ver:4;
+	uint64_t reserved2:47;
+};
+
+/* Response to cmd ID as CGX_CMD_GET_MAC_ADDR with cmd status as
+ * CGX_STAT_SUCCESS
+ */
+struct cgx_mac_addr_s {
+	uint64_t reserved1:9;
+	uint64_t local_mac_addr:48;
+	uint64_t reserved2:7;
+};
+
+/* Response to cmd ID - CGX_CMD_LINK_BRING_UP/DOWN, event ID CGX_EVT_LINK_CHANGE
+ * status can be either CGX_STAT_FAIL or CGX_STAT_SUCCESS
+ *
+ * In case of CGX_STAT_FAIL, it indicates CGX configuration failed
+ * when processing link up/down/change command.
+ * Both err_type and current link status will be updated
+ *
+ * In case of CGX_STAT_SUCCESS, err_type will be CGX_ERR_NONE and current
+ * link status will be updated
+ */
+struct cgx_lnk_sts {
+	uint64_t reserved1:9;
+	uint64_t link_up:1;
+	uint64_t full_duplex:1;
+	uint64_t speed:4;		/* cgx_link_speed */
+	uint64_t err_type:10;
+	uint64_t reserved2:39;
+};
+
+union cgx_evtreg {
+	u64 val;
+	struct cgx_evt_sts evt_sts; /* common for all commands/events */
+	struct cgx_lnk_sts link_sts; /* response to LINK_BRINGUP/DOWN/CHANGE */
+	struct cgx_ver_s ver;		/* response to CGX_CMD_GET_FW_VER */
+	struct cgx_mac_addr_s mac_addr;	/* response to CGX_CMD_GET_MAC_ADDR */
+	struct cgx_err_sts_s err;	/* response if evt_status = CMD_FAIL */
+};
+
+/* scratchx(1) CSR used for non-secure SW->ATF communication
+ * This CSR acts as a command register
+ */
+struct cgx_cmd {
+	uint64_t own:2;			/* cgx_csr_own */
+	uint64_t id:6;			/* cgx_request_id */
+	uint64_t reserved2:56;
+};
+
+/* Any command using enable/disable as an argument need
+ * to pass the option via this structure.
+ * Ex: Loopback, HiGig...
+ */
+struct cgx_ctl_args {
+	uint64_t reserved1:8;
+	uint64_t enable:1;
+	uint64_t reserved2:55;
+};
+
+/* command argument to be passed for cmd ID - CGX_CMD_SET_MTU */
+struct cgx_mtu_args {
+	uint64_t reserved1:8;
+	uint64_t size:16;
+	uint64_t reserved2:40;
+};
+
+/* command argument to be passed for cmd ID - CGX_CMD_LINK_CHANGE */
+struct cgx_link_change_args {
+	uint64_t reserved1:8;
+	uint64_t link_up:1;
+	uint64_t full_duplex:1;
+	uint64_t speed:4;		/* cgx_link_speed */
+	uint64_t reserved2:50;
+};
+
+struct cgx_irq_cfg {
+	uint64_t reserved1:8;
+	uint64_t irq_phys:32;
+	uint64_t reserved2:24;
+};
+
+union cgx_cmdreg {
+	u64 val;
+	struct cgx_cmd cmd;
+	struct cgx_ctl_args cmd_args;
+	struct cgx_mtu_args mtu_size;
+	struct cgx_irq_cfg irq_cfg; /* Input to CGX_CMD_IRQ_ENABLE */
+	struct cgx_link_change_args lnk_args;/* Input to CGX_CMD_LINK_CHANGE */
+};
+
+#endif /* __CGX_FW_INTF_H__ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 14/15] soc: octeontx2: Register for CGX lmac events
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Linu Cherian

From: Linu Cherian <lcherian@marvell.com>

Added support in the RVU AF driver to register for
CGX LMAC link status change events from firmware
and to manage them. The processing part will be added
in follow-up patches.

- Introduced an event queue for posting events from CGX LMACs.
  The queueing mechanism ensures that events can be posted and
  the firmware acked immediately, so event reception and
  processing are decoupled.
- Events get added to the queue by the notification callback.
  The notification callback is expected to be atomic, since it
  is called from interrupt context.
- Events are dequeued and processed in a worker thread (see the
  sketch after this list).
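
A minimal, self-contained sketch of this post-from-IRQ /
drain-in-worker pattern is shown below. It is illustrative only:
evq_post(), evq_work_fn() and the u64 payload are hypothetical names,
and the driver's actual code is cgx_lmac_postevent() and
cgx_evhandler_task() in rvu_cgx.c in the diff that follows.

/* Illustrative sketch only; not the driver code. */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct evq {
	spinlock_t lock;
	struct list_head head;
	struct work_struct work;
	struct workqueue_struct *wq;
};

struct evq_entry {
	struct list_head node;
	u64 payload;		/* stands in for struct cgx_link_event */
};

/* Producer: runs in IRQ context, so GFP_ATOMIC and no sleeping */
static int evq_post(struct evq *q, u64 payload)
{
	struct evq_entry *e = kmalloc(sizeof(*e), GFP_ATOMIC);

	if (!e)
		return -ENOMEM;
	e->payload = payload;
	spin_lock(&q->lock);
	list_add_tail(&e->node, &q->head);
	spin_unlock(&q->lock);
	queue_work(q->wq, &q->work);	/* firmware can be acked right away */
	return 0;
}

/* Consumer: process context, drains the queue entry by entry */
static void evq_work_fn(struct work_struct *work)
{
	struct evq *q = container_of(work, struct evq, work);
	struct evq_entry *e;
	unsigned long flags;

	for (;;) {
		spin_lock_irqsave(&q->lock, flags);
		e = list_first_entry_or_null(&q->head, struct evq_entry, node);
		if (e)
			list_del(&e->node);
		spin_unlock_irqrestore(&q->lock, flags);
		if (!e)
			break;
		/* process e->payload here */
		kfree(e);
	}
}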

Signed-off-by: Linu Cherian <lcherian@marvell.com>
---
 drivers/soc/marvell/octeontx2/rvu.c     |   6 +-
 drivers/soc/marvell/octeontx2/rvu.h     |   5 ++
 drivers/soc/marvell/octeontx2/rvu_cgx.c | 101 +++++++++++++++++++++++++++++++-
 3 files changed, 108 insertions(+), 4 deletions(-)

diff --git a/drivers/soc/marvell/octeontx2/rvu.c b/drivers/soc/marvell/octeontx2/rvu.c
index faf7d0f..282982f 100644
--- a/drivers/soc/marvell/octeontx2/rvu.c
+++ b/drivers/soc/marvell/octeontx2/rvu.c
@@ -1564,10 +1564,11 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	err = rvu_register_interrupts(rvu);
 	if (err)
-		goto err_mbox;
+		goto err_cgx;
 
 	return 0;
-
+err_cgx:
+	rvu_cgx_wq_destroy(rvu);
 err_mbox:
 	rvu_mbox_destroy(rvu);
 err_hwsetup:
@@ -1589,6 +1590,7 @@ static void rvu_remove(struct pci_dev *pdev)
 	struct rvu *rvu = pci_get_drvdata(pdev);
 
 	rvu_unregister_interrupts(rvu);
+	rvu_cgx_wq_destroy(rvu);
 	rvu_mbox_destroy(rvu);
 	rvu_reset_all_blocks(rvu);
 	rvu_free_hw_resources(rvu);
diff --git a/drivers/soc/marvell/octeontx2/rvu.h b/drivers/soc/marvell/octeontx2/rvu.h
index 385f597..d169fa9 100644
--- a/drivers/soc/marvell/octeontx2/rvu.h
+++ b/drivers/soc/marvell/octeontx2/rvu.h
@@ -110,6 +110,10 @@ struct rvu {
 						  * every cgx lmac port
 						  */
 	void			**cgx_idmap; /* cgx id to cgx data map table */
+	struct			work_struct cgx_evh_work;
+	struct			workqueue_struct *cgx_evh_wq;
+	spinlock_t		cgx_evq_lock; /* cgx event queue lock */
+	struct list_head	cgx_evq_head; /* cgx event queue head */
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -150,4 +154,5 @@ int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
 
 /* CGX APIs */
 int rvu_cgx_probe(struct rvu *rvu);
+void rvu_cgx_wq_destroy(struct rvu *rvu);
 #endif /* RVU_H */
diff --git a/drivers/soc/marvell/octeontx2/rvu_cgx.c b/drivers/soc/marvell/octeontx2/rvu_cgx.c
index bf81507..2359806e 100644
--- a/drivers/soc/marvell/octeontx2/rvu_cgx.c
+++ b/drivers/soc/marvell/octeontx2/rvu_cgx.c
@@ -15,6 +15,11 @@
 #include "rvu.h"
 #include "cgx.h"
 
+struct cgx_evq_entry {
+	struct list_head evq_node;
+	struct cgx_link_event link_event;
+};
+
 static inline u8 cgxlmac_id_to_bmap(u8 cgx_id, u8 lmac_id)
 {
 	return ((cgx_id & 0xF) << 4) | (lmac_id & 0xF);
@@ -72,9 +77,95 @@ static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
 	return 0;
 }
 
+/* This is called from interrupt context and is expected to be atomic */
+static int cgx_lmac_postevent(struct cgx_link_event *event, void *data)
+{
+	struct rvu *rvu = data;
+	struct cgx_evq_entry *qentry;
+
+	/* post event to the event queue */
+	qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC);
+	if (!qentry)
+		return -ENOMEM;
+	qentry->link_event = *event;
+	spin_lock(&rvu->cgx_evq_lock);
+	list_add_tail(&qentry->evq_node, &rvu->cgx_evq_head);
+	spin_unlock(&rvu->cgx_evq_lock);
+
+	/* start worker to process the events */
+	queue_work(rvu->cgx_evh_wq, &rvu->cgx_evh_work);
+
+	return 0;
+}
+
+static void cgx_evhandler_task(struct work_struct *work)
+{
+	struct rvu *rvu = container_of(work, struct rvu, cgx_evh_work);
+	struct cgx_evq_entry *qentry;
+	struct cgx_link_event *event;
+	unsigned long flags;
+
+	do {
+		/* Dequeue an event */
+		spin_lock_irqsave(&rvu->cgx_evq_lock, flags);
+		qentry = list_first_entry_or_null(&rvu->cgx_evq_head,
+						  struct cgx_evq_entry,
+						  evq_node);
+		if (qentry)
+			list_del(&qentry->evq_node);
+		spin_unlock_irqrestore(&rvu->cgx_evq_lock, flags);
+		if (!qentry)
+			break; /* nothing more to process */
+
+		event = &qentry->link_event;
+
+		/* Do nothing for now */
+		kfree(qentry);
+	} while (1);
+}
+
+static void cgx_lmac_event_handler_init(struct rvu *rvu)
+{
+	struct cgx_event_cb cb;
+	int cgx, lmac, err;
+	void *cgxd;
+
+	spin_lock_init(&rvu->cgx_evq_lock);
+	INIT_LIST_HEAD(&rvu->cgx_evq_head);
+	INIT_WORK(&rvu->cgx_evh_work, cgx_evhandler_task);
+	rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", 0, 0);
+	if (!rvu->cgx_evh_wq) {
+		dev_err(rvu->dev, "alloc workqueue failed");
+		return;
+	}
+
+	cb.notify_link_chg = cgx_lmac_postevent; /* link change call back */
+	cb.data = rvu;
+
+	for (cgx = 0; cgx < rvu->cgx_cnt; cgx++) {
+		cgxd = rvu_cgx_pdata(cgx, rvu);
+		for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) {
+			err = cgx_lmac_evh_register(&cb, cgxd, lmac);
+			if (err)
+				dev_err(rvu->dev,
+					"%d:%d handler register failed\n",
+					cgx, lmac);
+		}
+	}
+}
+
+void rvu_cgx_wq_destroy(struct rvu *rvu)
+{
+	if (rvu->cgx_evh_wq) {
+		flush_workqueue(rvu->cgx_evh_wq);
+		destroy_workqueue(rvu->cgx_evh_wq);
+		rvu->cgx_evh_wq = NULL;
+	}
+}
+
 int rvu_cgx_probe(struct rvu *rvu)
 {
-	int i;
+	int i, err;
 
 	/* find available cgx ports */
 	rvu->cgx_cnt = cgx_get_cgx_cnt();
@@ -93,5 +184,11 @@ int rvu_cgx_probe(struct rvu *rvu)
 		rvu->cgx_idmap[i] = cgx_get_pdata(i);
 
 	/* Map CGX LMAC interfaces to RVU PFs */
-	return rvu_map_cgx_lmac_pf(rvu);
+	err = rvu_map_cgx_lmac_pf(rvu);
+	if (err)
+		return err;
+
+	/* Register for CGX events */
+	cgx_lmac_event_handler_init(rvu);
+	return 0;
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v2 15/15] MAINTAINERS: Add entry for Marvell OcteonTX2 Admin Function driver
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 11:54   ` sunil.kovvuri at gmail.com
  -1 siblings, 0 replies; 36+ messages in thread
From: sunil.kovvuri @ 2018-09-04 11:54 UTC (permalink / raw)
  To: linux-kernel, arnd, olof
  Cc: linux-arm-kernel, linux-soc, andrew, davem, Sunil Goutham

From: Sunil Goutham <sgoutham@marvell.com>

Added a maintainers entry for the Marvell OcteonTX2 SOC's RVU
admin function driver.

Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index e178f2b..38f874c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8748,6 +8748,16 @@ S:	Supported
 F:	drivers/mmc/host/sdhci-xenon*
 F:	Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.txt
 
+MARVELL OCTEONTX2 RVU ADMIN FUNCTION DRIVER
+M:	Sunil Goutham <sgoutham@marvell.com>
+M:	Linu Cherian <lcherian@marvell.com>
+M:	Geetha sowjanya <gakula@marvell.com>
+M:	Jerin Jacob <jerinj@marvell.com>
+L:	linux-kernel@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/soc/marvell/octeontx2
+
 MATROX FRAMEBUFFER DRIVER
 L:	linux-fbdev@vger.kernel.org
 S:	Orphan
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread
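
[Editor's note -- not part of the submitted patch] With this block in place,
any file under the new directory should resolve to the people and lists above
via the kernel's get_maintainer.pl script. A quick sanity check (illustrative
only; the exact output depends on the tree):

  $ ./scripts/get_maintainer.pl -f drivers/soc/marvell/octeontx2/rvu.c

This is expected to report the four maintainers named in the entry together
with linux-kernel@vger.kernel.org and linux-arm-kernel@lists.infradead.org.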

* Re: [PATCH v2 00/15] soc: octeontx2: Add RVU admin function driver
  2018-09-04 11:54 ` sunil.kovvuri at gmail.com
@ 2018-09-04 12:46   ` Andrew Lunn
  -1 siblings, 0 replies; 36+ messages in thread
From: Andrew Lunn @ 2018-09-04 12:46 UTC (permalink / raw)
  To: sunil.kovvuri
  Cc: linux-kernel, arnd, olof, linux-arm-kernel, linux-soc, davem,
	Sunil Goutham

On Tue, Sep 04, 2018 at 05:24:35PM +0530, sunil.kovvuri@gmail.com wrote:
> From: Sunil Goutham <sgoutham@marvell.com>
> 
> Resource virtualization unit (RVU) on Marvell's OcteonTX2 SOC supports
> multiple PCIe SRIOV physical functions (PFs) and virtual functions (VFs).
> PF0 is called administrative / admin function (AF) and has privilege access
> to registers to provision different RVU functional blocks to each of
> PF/VF.

Hi Sunil

Please keep netdev in the loop. The people with the most experience
with PF/VF tend to hang out there, not on arm-soc.

      Andrew

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 00/15] soc: octeontx2: Add RVU admin function driver
  2018-09-04 12:46   ` Andrew Lunn
@ 2018-09-04 16:14     ` Sunil Kovvuri
  -1 siblings, 0 replies; 36+ messages in thread
From: Sunil Kovvuri @ 2018-09-04 16:14 UTC (permalink / raw)
  To: Andrew Lunn
  Cc: LKML, Arnd Bergmann, olof, LAKML, linux-soc, David S. Miller,
	Sunil Goutham

On Tue, Sep 4, 2018 at 6:16 PM Andrew Lunn <andrew@lunn.ch> wrote:
>
> On Tue, Sep 04, 2018 at 05:24:35PM +0530, sunil.kovvuri@gmail.com wrote:
> > From: Sunil Goutham <sgoutham@marvell.com>
> >
> > Resource virtualization unit (RVU) on Marvell's OcteonTX2 SOC supports
> > multiple PCIe SRIOV physical functions (PFs) and virtual functions (VFs).
> > PF0 is called administrative / admin function (AF) and has privilege access
> > to registers to provision different RVU functional blocks to each of
> > PF/VF.
>
> Hi Sunil
>
> Please keep netdev in the loop. The people with the most experience
> with PF/VF tend to hang out there, not on arm-soc.
>
>       Andrew

Sure, will submit again with netdev in the loop.

Sunil.

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread

Thread overview: 36+ messages
2018-09-04 11:54 [PATCH v2 00/15] soc: octeontx2: Add RVU admin function driver sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 01/15] soc: octeontx2: Add Marvell OcteonTX2 RVU AF driver sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 02/15] soc: octeontx2: Reset all RVU blocks sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 03/15] soc: octeontx2: Gather RVU blocks HW info sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 04/15] soc: octeontx2: Add mailbox support infra sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 05/15] soc: octeontx2: Add mailbox IRQ and msg handlers sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 06/15] soc: octeontx2: Convert mbox msg id check to a macro sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 07/15] soc: octeontx2: Scan blocks for LFs provisioned to PF/VF sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 08/15] soc: octeontx2: Add RVU block LF provisioning support sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 09/15] soc: octeontx2: Configure block LF's MSIX vector offset sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 10/15] soc: octeontx2: Reconfig MSIX base with IOVA sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 11/15] soc: octeontx2: Add Marvell OcteonTX2 CGX driver sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 12/15] soc: octeontx2: Set RVU PFs to CGX LMACs mapping sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 13/15] soc: octeontx2: Add support for CGX link management sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 14/15] soc: octeontx2: Register for CGX lmac events sunil.kovvuri
2018-09-04 11:54 ` [PATCH v2 15/15] MAINTAINERS: Add entry for Marvell OcteonTX2 Admin Function driver sunil.kovvuri
2018-09-04 12:46 ` [PATCH v2 00/15] soc: octeontx2: Add RVU admin function driver Andrew Lunn
2018-09-04 16:14   ` Sunil Kovvuri
