* [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support.
@ 2018-06-27 21:25 Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface Steven J. Hill
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Chandrakala Chavva

Add the Cavium OCTEON III network driver. There are some corresponding
MIPS architecture support changes which will be upstreamed separately.

Changes in v12:

o Completely reorganized the driver files and defined all bitfields
  used in the driver.
o Implemented changes suggested by Andrew Lunn.
o Ran checkpatch and did whitespace cleanups.

Carlos Munoz (9):
  dt-bindings: Add Cavium Octeon Common Ethernet Interface.
  netdev: cavium: octeon: Header for Octeon III BGX Ethernet
  netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  netdev: cavium: octeon: Add Octeon III BGX Ports
  netdev: cavium: octeon: Add Octeon III PKI Support
  netdev: cavium: octeon: Add Octeon III PKO Support
  netdev: cavium: octeon: Add Octeon III SSO Support
  netdev: cavium: octeon: Add Octeon III BGX Ethernet core
  netdev: cavium: octeon: Add Octeon III BGX Ethernet building

David Daney (1):
  MAINTAINERS: Add entry for
    drivers/net/ethernet/cavium/octeon/octeon3-*

 .../devicetree/bindings/net/cavium-bgx.txt         |   59 +
 MAINTAINERS                                        |    6 +
 drivers/net/ethernet/cavium/Kconfig                |   22 +-
 drivers/net/ethernet/cavium/octeon/Makefile        |    8 +-
 .../net/ethernet/cavium/octeon/octeon3-bgx-nexus.c |  670 ++++++
 .../net/ethernet/cavium/octeon/octeon3-bgx-port.c  | 2192 ++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-bgx.h   |  191 ++
 drivers/net/ethernet/cavium/octeon/octeon3-core.c  | 2363 ++++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-pki.c   |  789 +++++++
 drivers/net/ethernet/cavium/octeon/octeon3-pki.h   |  113 +
 drivers/net/ethernet/cavium/octeon/octeon3-pko.c   | 1638 ++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-pko.h   |  159 ++
 drivers/net/ethernet/cavium/octeon/octeon3-sso.c   |  221 ++
 drivers/net/ethernet/cavium/octeon/octeon3-sso.h   |   89 +
 drivers/net/ethernet/cavium/octeon/octeon3.h       |  330 +++
 15 files changed, 8848 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/net/cavium-bgx.txt
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx-nexus.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx-port.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-core.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pki.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pki.h
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pko.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pko.h
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-sso.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-sso.h
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3.h

-- 
2.1.4


* [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface.
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-28  8:35   ` Andrew Lunn
  2018-06-27 21:25 ` [PATCH v12 02/10] netdev: cavium: octeon: Header for Octeon III BGX Ethernet Steven J. Hill
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add bindings for the Common Ethernet Interface (BGX) block.

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 .../devicetree/bindings/net/cavium-bgx.txt         | 59 ++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/cavium-bgx.txt

diff --git a/Documentation/devicetree/bindings/net/cavium-bgx.txt b/Documentation/devicetree/bindings/net/cavium-bgx.txt
new file mode 100644
index 0000000..21c9606
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/cavium-bgx.txt
@@ -0,0 +1,59 @@
+* Common Ethernet Interface (BGX) block
+
+Properties:
+
+- compatible: "cavium,octeon-7890-bgx": Compatibility with all cn7xxx SoCs.
+
+- reg: The base address of the BGX block.
+
+- #address-cells: Must be <1>.
+
+- #size-cells: Must be <0>.  BGX addresses have no size component.
+
+Typically a BGX block has several children, each representing an Ethernet
+interface.
+
+Example:
+
+	ethernet-mac-nexus@11800e0000000 {
+		compatible = "cavium,octeon-7890-bgx";
+		reg = <0x00011800 0xe0000000 0x00000000 0x01000000>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		ethernet-mac@0 {
+			...
+			reg = <0>;
+		};
+	};
+
+
+* Ethernet Interface (BGX port) connects to PKI/PKO
+
+Properties:
+
+- compatible: "cavium,octeon-7890-bgx-port": Compatibility with all cn7xxx
+  SoCs.
+
+- reg: The index of the interface within the BGX block.
+
+- local-mac-address: MAC address for the interface.
+
+- phy-handle: phandle to the phy node connected to the interface.
+
+
+* Ethernet Interface (BGX port) connects to XCV
+
+
+Properties:
+
+- compatible: "cavium,octeon-7360-xcv": Compatibility with cn73xx SoCs.
+
+- reg: The index of the interface within the BGX block.
+
+- local-mac-address: MAC address for the interface.
+
+- phy-handle: phandle to the phy node connected to the interface.
+
+- cavium,rx-clk-delay-bypass: Set to <1> to bypass the rx clock delay setting.
+  Needed by the Micrel PHY.
-- 
2.1.4


* [PATCH v12 02/10] netdev: cavium: octeon: Header for Octeon III BGX Ethernet
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus Steven J. Hill
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add the common header file used by the Octeon III BGX Ethernet
driver.
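
As a quick orientation for reviewers: the header pairs xkphys register
address macros with simple CSR accessors. A hypothetical read-modify-write
using only names defined in this patch would look like the sketch below
(the wrapper function itself is illustrative and not part of the series):

	/* Illustrative sketch only: set the enable bit of the XCV reset
	 * CSR on the given node via the octeon3.h accessors.
	 */
	static void example_xcv_enable(int node)
	{
		u64 data;

		data = oct_csr_read(XCV_RESET(node));
		data |= XCV_RESET_ENABLE;
		oct_csr_write(data, XCV_RESET(node));
	}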

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/octeon/octeon3-bgx.h | 150 +++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3.h     | 330 +++++++++++++++++++++++
 2 files changed, 480 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3.h

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h b/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
new file mode 100644
index 0000000..df794f5
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Octeon III BGX Nexus Ethernet driver
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#ifndef _OCTEON3_BGX_H_
+#define _OCTEON3_BGX_H_
+
+#include <linux/bitops.h>
+
+#define BGX_CMR_CHAN_MSK_MASK		GENMASK_ULL(15, 0)
+#define BGX_CMR_CHAN_MSK_SHIFT		16
+#define BGX_CMR_CONFIG_DATA_PKT_RX_EN	BIT(14)
+#define BGX_CMR_CONFIG_DATA_PKT_TX_EN	BIT(13)
+#define BGX_CMR_CONFIG_ENABLE	BIT(15)
+#define BGX_CMR_CONFIG_LMAC_TYPE_MASK	GENMASK_ULL(10, 8)
+#define BGX_CMR_CONFIG_LMAC_TYPE_SHIFT	8
+#define BGX_CMR_CONFIG_MIX_EN		BIT(11)
+#define BGX_CMR_GLOBAL_CONFIG_CMR_MIX0_RST	BIT(3)
+#define BGX_CMR_GLOBAL_CONFIG_CMR_MIX1_RST	BIT(4)
+#define BGX_CMR_RX_ADR_CTL_ACCEPT_ALL_MCST	BIT(1)
+#define BGX_CMR_RX_ADR_CTL_BCST_ACCEPT		BIT(0)
+#define BGX_CMR_RX_ADR_CTL_CAM_ACCEPT		BIT(3)
+#define BGX_CMR_RX_ADR_CTL_MCST_MODE_MASK	GENMASK_ULL(2, 1)
+#define BGX_CMR_RX_ADR_CTL_USE_CAM_FILTER	BIT(2)
+#define BGX_CMR_RX_ADRX_CAM_EN		BIT(48)
+#define BGX_CMR_RX_ADRX_CAM_ID_SHIFT	52
+#define BGX_CMR_RX_FIFO_LEN_MASK	GENMASK_ULL(12, 0)
+#define BGX_CMR_RX_ID_MAP_PKND_MASK	GENMASK_ULL(7, 0)
+#define BGX_CMR_RX_ID_MAP_RID_MASK	GENMASK_ULL(14, 8)
+#define BGX_CMR_RX_ID_MAP_RID_SHIFT	8
+#define BGX_CMR_TX_FIFO_LEN_LMAC_IDLE	BIT(13)
+#define BGX_CMR_TX_LMACS_MASK		GENMASK_ULL(2, 0)
+#define BGX_GMP_GMI_PRT_CFG_DUPLEX	BIT(2)
+#define BGX_GMP_GMI_PRT_CFG_RX_IDLE	BIT(12)
+#define BGX_GMP_GMI_PRT_CFG_SLOTTIME	BIT(3)
+#define BGX_GMP_GMI_PRT_CFG_SPEED	BIT(1)
+#define BGX_GMP_GMI_PRT_CFG_SPEED_MSB	BIT(8)
+#define BGX_GMP_GMI_PRT_CFG_TX_IDLE	BIT(13)
+#define BGX_GMP_GMI_TX_APPEND_FCS		BIT(2)
+#define BGX_GMP_GMI_TX_APPEND_PAD		BIT(1)
+#define BGX_GMP_GMI_TX_APPEND_PREAMBLE		BIT(0)
+#define BGX_GMP_GMI_TX_THRESH_DEFAULT		0x20
+#define BGX_GMP_PCS_AN_ADV_FULL_DUPLEX		BIT(5)
+#define BGX_GMP_PCS_AN_ADV_HALF_DUPLEX		BIT(6)
+#define BGX_GMP_PCS_AN_ADV_PAUSE_ASYMMETRIC	2
+#define BGX_GMP_PCS_AN_ADV_PAUSE_BOTH		3
+#define BGX_GMP_PCS_AN_ADV_PAUSE_MASK	GENMASK_ULL(8, 7)
+#define BGX_GMP_PCS_AN_ADV_PAUSE_NONE		0
+#define BGX_GMP_PCS_AN_ADV_PAUSE_SHIFT	7
+#define BGX_GMP_PCS_AN_ADV_PAUSE_SYMMETRIC	1
+#define BGX_GMP_PCS_AN_ADV_REM_FLT_MASK	GENMASK_ULL(13, 12)
+#define BGX_GMP_PCS_LINK_TIMER_COUNT_SHIFT	10
+#define BGX_GMP_PCS_MISC_CTL_GMXENO	BIT(11)
+#define BGX_GMP_PCS_MISC_CTL_MAC_PHY	BIT(9)
+#define BGX_GMP_PCS_MISC_CTL_MODE	BIT(8)
+#define BGX_GMP_PCS_MISC_CTL_SAMP_PT_MASK	GENMASK_ULL(6, 0)
+#define BGX_GMP_PCS_MR_CONTROL_AN_EN	BIT(12)
+#define BGX_GMP_PCS_MR_CONTROL_PWR_DN	BIT(11)
+#define BGX_GMP_PCS_MR_CONTROL_RESET	BIT(15)
+#define BGX_GMP_PCS_MR_CONTROL_RST_AN	BIT(9)
+#define BGX_GMP_PCS_MR_CONTROL_SPDLSB	BIT(13)
+#define BGX_GMP_PCS_MR_CONTROL_SPDMSB	BIT(6)
+#define BGX_GMP_PCS_MR_STATUS_AN_CPT	BIT(5)
+#define BGX_GMP_PCS_SGM_AN_ADV_DUPLEX_FULL	BIT(12)
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10		0
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10000	3
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_1000	2
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_100	1
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_MASK	GENMASK_ULL(11, 10)
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_SHIFT	10
+#define BGX_SMU_CTRL_RX_IDLE		BIT(0)
+#define BGX_SMU_CTRL_TX_IDLE		BIT(1)
+#define BGX_SMU_RX_CTL_STATUS_MASK	GENMASK_ULL(1, 0)
+#define BGX_SMU_TX_APPEND_FCS_C		BIT(3)
+#define BGX_SMU_TX_APPEND_FCS_D			BIT(2)
+#define BGX_SMU_TX_APPEND_PAD			BIT(1)
+#define BGX_SMU_TX_CTL_DIC_EN	BIT(0)
+#define BGX_SMU_TX_CTL_LS_MASK		GENMASK_ULL(5, 4)
+#define BGX_SMU_TX_CTL_UNI_EN	BIT(1)
+#define BGX_SPU_AN_ADV_A100G_CR10	BIT(26)
+#define BGX_SPU_AN_ADV_A10G_KR		BIT(23)
+#define BGX_SPU_AN_ADV_A10G_KX4		BIT(22)
+#define BGX_SPU_AN_ADV_A1G_KX		BIT(21)
+#define BGX_SPU_AN_ADV_A40G_CR4		BIT(25)
+#define BGX_SPU_AN_ADV_A40G_KR4		BIT(24)
+#define BGX_SPU_AN_ADV_FEC_ABLE		BIT(46)
+#define BGX_SPU_AN_ADV_FEC_REQ		BIT(47)
+#define BGX_SPU_AN_ADV_RF		BIT(13)
+#define BGX_SPU_AN_ADV_XNP_ABLE		BIT(12)
+#define BGX_SPU_AN_CONTROL_AN_EN	BIT(12)
+#define BGX_SPU_AN_CONTROL_AN_RESTART	BIT(9)
+#define BGX_SPU_AN_CONTROL_XNP_EN	BIT(13)
+#define BGX_SPU_AN_STATUS_AN_COMPLETE	BIT(5)
+#define BGX_SPU_BR_PMD_CONTROL_TRAIN_EN		BIT(1)
+#define BGX_SPU_BR_PMD_CONTROL_TRAIN_RESTART	BIT(0)
+#define BGX_SPU_BR_STATUS1_BLK_LOCK		BIT(0)
+#define BGX_SPU_BR_STATUS2_LATCHED_BER		BIT(14)
+#define BGX_SPU_BR_STATUS2_LATCHED_LOCK		BIT(15)
+#define BGX_SPU_BX_STATUS_ALIGND	BIT(12)
+#define BGX_SPU_CONTROL1_LOOPBACK	BIT(14)
+#define BGX_SPU_CONTROL1_LO_PWR	BIT(11)
+#define BGX_SPU_CONTROL1_RESET	BIT(15)
+#define BGX_SPU_DBG_CONTROL_AN_ARB_LINK_CHK_EN	BIT(18)
+#define BGX_SPU_DBG_CONTROL_AN_NONCE_MATCH_DIS	BIT(29)
+#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_MASK	GENMASK_ULL(43, 32)
+#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_SHIFT	32
+#define BGX_SPU_FEC_CONTROL_FEC_EN	BIT(0)
+#define BGX_SPU_INT_AN_COMPLETE		BIT(12)
+#define BGX_SPU_INT_AN_LINK_GOOD	BIT(11)
+#define BGX_SPU_INT_AN_PAGE_RX		BIT(10)
+#define BGX_SPU_INT_TRAINING_DONE	BIT(13)
+#define BGX_SPU_INT_TRAINING_FAILURE	BIT(14)
+#define BGX_SPU_MISC_CONTROL_INTLV_RDISP	BIT(10)
+#define BGX_SPU_MISC_CONTROL_RX_PACKET_DIS	BIT(12)
+#define BGX_SPU_STATUS1_RCV_LINK	BIT(2)
+#define BGX_SPU_STATUS2_RCVFLT		BIT(10)
+#define GSER_BR_RX_CTL_RXT_EER		BIT(15)
+#define GSER_BR_RX_CTL_RXT_ESV		BIT(14)
+#define GSER_BR_RX_CTL_RXT_SWM		BIT(2)
+#define GSER_BR_RX_EER_RXT_EER         BIT(15)
+#define GSER_BR_RX_EER_RXT_ESV         BIT(14)
+#define GSER_LANE_LBERT_CFG_LBERT_PM_EN		BIT(6)
+#define GSER_LANE_MODE_LMODE_MASK	GENMASK_ULL(3, 0)
+#define GSER_LANE_MODE_MASK	GENMASK_ULL(3, 0)
+#define GSER_LANE_PCS_CTLIFC_0_CFG_TX_COEFF_REQ_OVRRD_VAL	BIT(12)
+#define GSER_LANE_PCS_CTLIFC_2_CFG_TX_COEFF_REQ_OVRRD_EN	BIT(7)
+#define GSER_LANE_PCS_CTLIFC_2_CTLIFC_OVRRD_REQ			BIT(15)
+#define GSER_LANE_P_MODE_1_VMA_MM	BIT(14)
+#define GSER_PHY_CTL_PHY_PD	BIT(0)
+#define GSER_PHY_CTL_PHY_RESET	BIT(1)
+#define GSER_RX_EIE_DETSTS_CDRLOCK_SHIFT	8
+#define XCV_BATCH_CRD_RET_CRD_RET	BIT(0)
+#define XCV_COMP_CTL_DRV_BYP		BIT(63)
+#define XCV_CTL_LPBK_INT	BIT(2)
+#define XCV_CTL_SPEED_MASK	GENMASK_ULL(1, 0)
+#define XCV_DLL_CTL_CLKRX_BYP		BIT(23)
+#define XCV_DLL_CTL_CLKRX_SET_MASK	GENMASK_ULL(22, 16)
+#define XCV_DLL_CTL_CLKTX_BYP		BIT(15)
+#define XCV_DLL_CTL_REFCLK_SEL_MASK	GENMASK_ULL(1, 0)
+#define XCV_RESET_CLKRST	BIT(15)
+#define XCV_RESET_COMP		BIT(7)
+#define XCV_RESET_DLLRST	BIT(11)
+#define XCV_RESET_ENABLE	BIT(63)
+#define XCV_RESET_RX_DAT_RST_N	BIT(0)
+#define XCV_RESET_RX_PKT_RST_N	BIT(1)
+#define XCV_RESET_TX_DAT_RST_N	BIT(2)
+#define XCV_RESET_TX_PKT_RST_N	BIT(3)
+
+#endif /* _OCTEON3_BGX_H_ */
diff --git a/drivers/net/ethernet/cavium/octeon/octeon3.h b/drivers/net/ethernet/cavium/octeon/octeon3.h
new file mode 100644
index 0000000..99c092d
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3.h
@@ -0,0 +1,330 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Octeon III BGX Ethernet Driver
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#ifndef _OCTEON3_H_
+#define _OCTEON3_H_
+
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/platform_device.h>
+
+#include <asm/octeon/octeon.h>
+
+#include "octeon3-bgx.h"
+#include "octeon3-pki.h"
+#include "octeon3-pko.h"
+#include "octeon3-sso.h"
+
+#define MAX_CORES			48
+#define MAX_NODES			2
+#define NODE_MASK			(MAX_NODES - 1)
+#define MAX_BGX_PER_NODE		6
+#define MAX_LMAC_PER_BGX		4
+
+#define IOBDMA_ORDERED_IO_ADDR		0xffffffffffffa200ull
+#define LMTDMA_ORDERED_IO_ADDR		0xffffffffffffa400ull
+#define SCRATCH_BASE_ADDR		0xffffffffffff8000ull
+
+#define PKO_LMTLINE			2ull
+#define LMTDMA_SCR_OFFSET		(PKO_LMTLINE * CVMX_CACHE_LINE_SIZE)
+#define PKO_LMTDMA_SCRADDR_SHIFT	3
+
+/* Registers are accessed via xkphys */
+#define SET_XKPHYS			BIT_ULL(63)
+#define NODE_OFFSET(node)		((node) * 0x1000000000ull)
+
+/* DPI registers */
+#define DPI_BASE			0x1df0000000000ull
+#define DPI_ADDR(n)			(DPI_BASE + SET_XKPHYS + NODE_OFFSET(n))
+#define DPI_CTL(n)			(DPI_ADDR(n) + 0x00040)
+
+/* Gser register definitions */
+#define GSER_BASE			0x1180090000000ull
+#define GSER_ADDR(n, gser)		(GSER_BASE + SET_XKPHYS +	       \
+					 NODE_OFFSET(n) + ((gser) << 24))
+#define GSER_LANE_OFFSET(lane)		((lane) << 20)
+#define GSER_LANE_OFFSET1(lane)		((lane) << 7)
+#define GSER_LANE_OFFSET2(lane)		((lane) << 5)
+
+#define GSER_LANE_ADDR(n, g, l)		(GSER_ADDR(n, g) + GSER_LANE_OFFSET(l))
+#define GSER_LANE_ADDR1(n, g, l)	(GSER_ADDR(n, g) + GSER_LANE_OFFSET1(l))
+#define GSER_LANE_ADDR2(n, g, l)	(GSER_ADDR(n, g) + GSER_LANE_OFFSET2(l))
+
+#define GSER_PHY_CTL(n, g)		(GSER_LANE_ADDR(n, g, 0) + 0x000000)
+#define GSER_CFG(n, g)			(GSER_LANE_ADDR(n, g, 0) + 0x000080)
+#define GSER_LANE_MODE(n, g)		(GSER_LANE_ADDR(n, g, 0) + 0x000118)
+#define GSER_RX_EIE_DETSTS(n, g)	(GSER_LANE_ADDR(n, g, 0) + 0x000150)
+#define GSER_LANE_LBERT_CFG(n, g, l)	(GSER_LANE_ADDR(n, g, l) + 0x4c0020)
+#define GSER_LANE_PCS_CTLIFC_0(n, g, l)	(GSER_LANE_ADDR(n, g, l) + 0x4c0060)
+#define GSER_LANE_PCS_CTLIFC_2(n, g, l)	(GSER_LANE_ADDR(n, g, l) + 0x4c0070)
+#define GSER_BR_RX_CTL(n, g, l)		(GSER_LANE_ADDR1(n, g, l) + 0x000400)
+#define GSER_BR_RX_EER(n, g, l)		(GSER_LANE_ADDR1(n, g, l) + 0x000418)
+#define GSER_LANE_P_MODE_1(n, g, m)	(GSER_LANE_ADDR2(n, g, m) + 0x4e0048)
+
+/* Gser register bitfields */
+#define GSER_PHY_CTL_PHY_RESET		BIT(1)
+#define GSER_PHY_CTL_PHY_PD		BIT(0)
+#define GSER_CFG_BGX			BIT(2)
+#define GSER_LANE_MODE_LMODE_MASK	GENMASK_ULL(3, 0)
+#define GSER_RX_EIE_DETSTS_CDRLCK_SHIFT	8
+#define GSER_LANE_LBERT_CFG_LBERT_PM_EN	BIT(6)
+#define GSER_LANE_PCS_CTLIFC_0_CFG_TX_COEFF_REQ_OVRRD_VAL	BIT(12)
+#define GSER_LANE_PCS_CTLIFC_2_CTLIFC_OVRRD_REQ			BIT(15)
+#define GSER_LANE_PCS_CTLIFC_2_CFG_TX_COEFF_REQ_OVRRD_EN	BIT(7)
+#define GSER_BR_RX_CTL_RXT_EER		BIT(15)
+#define GSER_BR_RX_CTL_RXT_ESV		BIT(14)
+#define GSER_BR_RX_CTL_RXT_SWM		BIT(2)
+#define GSER_BR_RX_EER_RXT_EER		BIT(15)
+#define GSER_BR_RX_EER_RXT_ESV		BIT(14)
+#define GSER_LANE_P_MODE_1_VMA_MM	BIT(14)
+
+/* XCV register definitions */
+#define XCV_BASE			0x11800db000000ull
+#define XCV_ADDR(n)			(XCV_BASE + SET_XKPHYS + NODE_OFFSET(n))
+
+#define XCV_RESET(n)			(XCV_ADDR(n) + 0x0000)
+#define XCV_DLL_CTL(n)			(XCV_ADDR(n) + 0x0010)
+#define XCV_COMP_CTL(n)			(XCV_ADDR(n) + 0x0020)
+#define XCV_CTL(n)			(XCV_ADDR(n) + 0x0030)
+#define XCV_INT(n)			(XCV_ADDR(n) + 0x0040)
+#define XCV_INBND_STATUS(n)		(XCV_ADDR(n) + 0x0080)
+#define XCV_BATCH_CRD_RET(n)		(XCV_ADDR(n) + 0x0100)
+
+/* XCV register bitfields */
+#define XCV_RESET_ENABLE		BIT(63)
+#define XCV_RESET_CLKRST		BIT(15)
+#define XCV_RESET_DLLRST		BIT(11)
+#define XCV_RESET_COMP			BIT(7)
+#define XCV_RESET_TX_PKT_RST_N		BIT(3)
+#define XCV_RESET_TX_DAT_RST_N		BIT(2)
+#define XCV_RESET_RX_PKT_RST_N		BIT(1)
+#define XCV_RESET_RX_DAT_RST_N		BIT(0)
+#define XCV_DLL_CTL_CLKRX_BYP		BIT(23)
+#define XCV_DLL_CTL_CLKRX_SET_MASK	GENMASK_ULL(22, 16)
+#define XCV_DLL_CTL_CLKTX_BYP		BIT(15)
+#define XCV_DLL_CTL_REFCLK_SEL_MASK	GENMASK_ULL(1, 0)
+#define XCV_COMP_CTL_DRV_BYP		BIT(63)
+#define XCV_CTL_LPBK_INT		BIT(2)
+#define XCV_CTL_SPEED_MASK		GENMASK_ULL(1, 0)
+#define XCV_BATCH_CRD_RET_CRD_RET	BIT(0)
+
+enum octeon3_mac_type {
+	BGX_MAC,
+	SRIO_MAC
+};
+
+enum octeon3_src_type {
+	QLM,
+	XCV
+};
+
+struct mac_platform_data {
+	enum octeon3_mac_type mac_type;
+	enum octeon3_src_type src_type;
+	int interface;
+	int numa_node;
+	int port;
+};
+
+struct bgx_port_netdev_priv {
+	struct bgx_port_priv *bgx_priv;
+};
+
+union wqe_word0 {
+	u64 u64;
+	struct {
+		__BITFIELD_FIELD(u64 rsvd_0:4,
+		__BITFIELD_FIELD(u64 aura:12,
+		__BITFIELD_FIELD(u64 rsvd_1:1,
+		__BITFIELD_FIELD(u64 apad:3,
+		__BITFIELD_FIELD(u64 channel:12,
+		__BITFIELD_FIELD(u64 bufs:8,
+		__BITFIELD_FIELD(u64 style:8,
+		__BITFIELD_FIELD(u64 rsvd_2:10,
+		__BITFIELD_FIELD(u64 pknd:6,
+		;)))))))))
+	};
+};
+
+union wqe_word1 {
+	u64 u64;
+	struct {
+		__BITFIELD_FIELD(u64 len:16,
+		__BITFIELD_FIELD(u64 rsvd_0:2,
+		__BITFIELD_FIELD(u64 rsvd_1:2,
+		__BITFIELD_FIELD(u64 grp:10,
+		__BITFIELD_FIELD(u64 tag_type:2,
+		__BITFIELD_FIELD(u64 tag:32,
+		;))))))
+	};
+};
+
+union wqe_word2 {
+	u64 u64;
+	struct {
+		__BITFIELD_FIELD(u64 software:1,
+		__BITFIELD_FIELD(u64 lg_hdr_type:5,
+		__BITFIELD_FIELD(u64 lf_hdr_type:5,
+		__BITFIELD_FIELD(u64 le_hdr_type:5,
+		__BITFIELD_FIELD(u64 ld_hdr_type:5,
+		__BITFIELD_FIELD(u64 lc_hdr_type:5,
+		__BITFIELD_FIELD(u64 lb_hdr_type:5,
+		__BITFIELD_FIELD(u64 is_la_ether:1,
+		__BITFIELD_FIELD(u64 rsvd_0:8,
+		__BITFIELD_FIELD(u64 vlan_valid:1,
+		__BITFIELD_FIELD(u64 vlan_stacked:1,
+		__BITFIELD_FIELD(u64 stat_inc:1,
+		__BITFIELD_FIELD(u64 pcam_flag4:1,
+		__BITFIELD_FIELD(u64 pcam_flag3:1,
+		__BITFIELD_FIELD(u64 pcam_flag2:1,
+		__BITFIELD_FIELD(u64 pcam_flag1:1,
+		__BITFIELD_FIELD(u64 is_frag:1,
+		__BITFIELD_FIELD(u64 is_l3_bcast:1,
+		__BITFIELD_FIELD(u64 is_l3_mcast:1,
+		__BITFIELD_FIELD(u64 is_l2_bcast:1,
+		__BITFIELD_FIELD(u64 is_l2_mcast:1,
+		__BITFIELD_FIELD(u64 is_raw:1,
+		__BITFIELD_FIELD(u64 err_level:3,
+		__BITFIELD_FIELD(u64 err_code:8,
+		;))))))))))))))))))))))))
+	};
+};
+
+union buf_ptr {
+	u64 u64;
+	struct {
+		__BITFIELD_FIELD(u64 size:16,
+		__BITFIELD_FIELD(u64 packet_outside_wqe:1,
+		__BITFIELD_FIELD(u64 rsvd0:5,
+		__BITFIELD_FIELD(u64 addr:42,
+		;))))
+	};
+};
+
+union wqe_word4 {
+	u64 u64;
+	struct {
+		__BITFIELD_FIELD(u64 ptr_vlan:8,
+		__BITFIELD_FIELD(u64 ptr_layer_g:8,
+		__BITFIELD_FIELD(u64 ptr_layer_f:8,
+		__BITFIELD_FIELD(u64 ptr_layer_e:8,
+		__BITFIELD_FIELD(u64 ptr_layer_d:8,
+		__BITFIELD_FIELD(u64 ptr_layer_c:8,
+		__BITFIELD_FIELD(u64 ptr_layer_b:8,
+		__BITFIELD_FIELD(u64 ptr_layer_a:8,
+		;))))))))
+	};
+};
+
+struct wqe {
+	union wqe_word0	word0;
+	union wqe_word1	word1;
+	union wqe_word2	word2;
+	union buf_ptr packet_ptr;
+	union wqe_word4	word4;
+	u64 wqe_data[11];
+};
+
+enum port_mode {
+	PORT_MODE_DISABLED,
+	PORT_MODE_SGMII,
+	PORT_MODE_RGMII,
+	PORT_MODE_XAUI,
+	PORT_MODE_RXAUI,
+	PORT_MODE_XLAUI,
+	PORT_MODE_XFI,
+	PORT_MODE_10G_KR,
+	PORT_MODE_40G_KR4
+};
+
+enum lane_mode {
+	R_25G_REFCLK100,
+	R_5G_REFCLK100,
+	R_8G_REFCLK100,
+	R_125G_REFCLK15625_KX,
+	R_3125G_REFCLK15625_XAUI,
+	R_103125G_REFCLK15625_KR,
+	R_125G_REFCLK15625_SGMII,
+	R_5G_REFCLK15625_QSGMII,
+	R_625G_REFCLK15625_RXAUI,
+	R_25G_REFCLK125,
+	R_5G_REFCLK125,
+	R_8G_REFCLK125
+};
+
+struct port_status {
+	int link;
+	int duplex;
+	int speed;
+};
+
+static inline u64 oct_csr_read(u64 addr)
+{
+	return __raw_readq((void __iomem *)addr);
+}
+
+static inline void oct_csr_write(u64 data, u64 addr)
+{
+	__raw_writeq(data, (void __iomem *)addr);
+}
+
+extern int ilk0_lanes;
+extern int ilk1_lanes;
+
+void bgx_nexus_load(void);
+
+int bgx_port_allocate_pknd(int node);
+int bgx_port_get_pknd(int node, int bgx, int index);
+enum port_mode bgx_port_get_mode(int node, int bgx, int index);
+int bgx_port_get_qlm(int node, int bgx, int index);
+void bgx_port_set_netdev(struct device *dev, struct net_device *netdev);
+int bgx_port_enable(struct net_device *netdev);
+int bgx_port_disable(struct net_device *netdev);
+const u8 *bgx_port_get_mac(struct net_device *netdev);
+void bgx_port_set_rx_filtering(struct net_device *netdev);
+int bgx_port_change_mtu(struct net_device *netdev, int new_mtu);
+int bgx_port_ethtool_get_link_ksettings(struct net_device *netdev,
+					struct ethtool_link_ksettings *cmd);
+int bgx_port_ethtool_get_settings(struct net_device *netdev,
+				  struct ethtool_cmd *cmd);
+int bgx_port_ethtool_set_settings(struct net_device *netdev,
+				  struct ethtool_cmd *cmd);
+int bgx_port_ethtool_nway_reset(struct net_device *netdev);
+int bgx_port_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd);
+
+void bgx_port_mix_assert_reset(struct net_device *netdev, int mix, bool v);
+
+int octeon3_pki_vlan_init(int node);
+int octeon3_pki_cluster_init(int node, struct platform_device *pdev);
+int octeon3_pki_ltype_init(int node);
+int octeon3_pki_enable(int node);
+int octeon3_pki_port_init(int node, int aura, int grp, int skip, int mb_size,
+			  int pknd, int num_rx_cxt);
+int octeon3_pki_get_stats(int node, int pknd, u64 *packets, u64 *octets,
+			  u64 *dropped);
+int octeon3_pki_set_ptp_skip(int node, int pknd, int skip);
+int octeon3_pki_port_shutdown(int node, int pknd);
+void octeon3_pki_shutdown(int node);
+
+void octeon3_sso_pass1_limit(int node, int grp);
+int octeon3_sso_init(int node, int aura);
+void octeon3_sso_shutdown(int node, int aura);
+int octeon3_sso_alloc_groups(int node, int *groups, int cnt, int start);
+void octeon3_sso_free_groups(int node, int *groups, int cnt);
+void octeon3_sso_irq_set(int node, int grp, bool en);
+
+int octeon3_pko_interface_init(int node, int interface, int index,
+			       enum octeon3_mac_type mac_type, int ipd_port);
+int octeon3_pko_activate_dq(int node, int dq, int cnt);
+int octeon3_pko_get_fifo_size(int node, int interface, int index,
+			      enum octeon3_mac_type mac_type);
+int octeon3_pko_set_mac_options(int node, int interface, int index,
+				enum octeon3_mac_type mac_type, bool fcs_en,
+				bool pad_en, int fcs_sop_off);
+int octeon3_pko_init_global(int node, int aura);
+int octeon3_pko_interface_uninit(int node, const int *dq, int num_dq);
+int octeon3_pko_exit_global(int node);
+
+#endif /* _OCTEON3_H_ */
-- 
2.1.4


* [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 02/10] netdev: cavium: octeon: Header for Octeon III BGX Ethernet Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-28  8:41   ` Andrew Lunn
  2018-06-27 21:25 ` [PATCH v12 04/10] netdev: cavium: octeon: Add Octeon III BGX Ports Steven J. Hill
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add the BGX nexus architecture for Octeon III BGX Ethernet.
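
The nexus does not implement a MAC itself; for each LMAC it creates a
child platform device and attaches a struct mac_platform_data describing
the node, BGX and port. A child driver would pick that data up roughly as
sketched below; the probe function is hypothetical and only illustrates
the hand-off, it is not code from this series:

	static int example_mac_probe(struct platform_device *pdev)
	{
		/* Platform data attached by the BGX nexus when it
		 * registered this child device.
		 */
		struct mac_platform_data *pd = dev_get_platdata(&pdev->dev);

		if (!pd || pd->mac_type != BGX_MAC)
			return -ENODEV;

		dev_info(&pdev->dev, "node %d bgx %d lmac %d\n",
			 pd->numa_node, pd->interface, pd->port);
		return 0;
	}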

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 .../net/ethernet/cavium/octeon/octeon3-bgx-nexus.c | 670 +++++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-bgx.h   | 281 +++++----
 2 files changed, 831 insertions(+), 120 deletions(-)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx-nexus.c

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-bgx-nexus.c b/drivers/net/ethernet/cavium/octeon/octeon3-bgx-nexus.c
new file mode 100644
index 0000000..fced298
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-bgx-nexus.c
@@ -0,0 +1,670 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III BGX Nexus Ethernet driver
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+
+#include <linux/ctype.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+
+#include "octeon3.h"
+
+static atomic_t request_mgmt_once;
+static atomic_t load_driver_once;
+static atomic_t pki_id;
+
+static char *mix_port;
+module_param(mix_port, charp, 0444);
+MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
+
+static char *pki_port;
+module_param(pki_port, charp, 0444);
+MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");
+
+#define MAX_MIX_PER_NODE	2
+#define MAX_MIX			(MAX_NODES * MAX_MIX_PER_NODE)
+
+/* struct mix_port_lmac - Describes a lmac that connects to a mix port. The lmac
+ *			  must be on the same node as the mix.
+ * @node: Node of the lmac.
+ * @bgx: Bgx of the lmac.
+ * @lmac: Lmac index.
+ */
+struct mix_port_lmac {
+	int node;
+	int bgx;
+	int lmac;
+};
+
+/* mix_port_lmacs contains all the lmacs connected to mix ports */
+static struct mix_port_lmac mix_port_lmacs[MAX_MIX];
+
+/* pki_ports keeps track of the lmacs connected to the pki */
+static bool pki_ports[MAX_NODES][MAX_BGX_PER_NODE][MAX_LMAC_PER_BGX];
+
+/* Created platform devices get added to this list */
+static struct list_head pdev_list;
+static struct mutex pdev_list_lock;
+
+/* Created platform devices use this structure to add themselves to the list */
+struct pdev_list_item {
+	struct list_head list;
+	struct platform_device *pdev;
+};
+
+/* is_lmac_to_mix - Search the list of lmacs connected to mixes for a match.
+ * @node: Numa node of lmac to search for.
+ * @bgx: Bgx of lmac to search for.
+ * @lmac: Lmac index to search for.
+ *
+ * Returns true if the lmac is connected to a mix.
+ * Returns false if the lmac is not connected to a mix.
+ */
+static bool is_lmac_to_mix(int node, int bgx, int lmac)
+{
+	int i;
+
+	for (i = 0; i < MAX_MIX; i++) {
+		if (mix_port_lmacs[i].node == node &&
+		    mix_port_lmacs[i].bgx == bgx &&
+		    mix_port_lmacs[i].lmac == lmac)
+			return true;
+	}
+
+	return false;
+}
+
+/* is_lmac_to_pki - Search the list of lmacs connected to the pki for a match.
+ * @node: Numa node of lmac to search for.
+ * @bgx: Bgx of lmac to search for.
+ * @lmac: Lmac index to search for.
+ *
+ * Returns true if the lmac is connected to the pki.
+ * Returns false if the lmac is not connected to the pki.
+ */
+static bool is_lmac_to_pki(int node, int bgx, int lmac)
+{
+	return pki_ports[node][bgx][lmac];
+}
+
+/* is_lmac_to_xcv - Check if this lmac is connected to the xcv block (rgmii).
+ * @of_node: Device node to check.
+ *
+ * Returns true if the lmac is connected to the xcv port.
+ * Returns false if the lmac is not connected to the xcv port.
+ */
+static bool is_lmac_to_xcv(struct device_node *of_node)
+{
+	return of_device_is_compatible(of_node, "cavium,octeon-7360-xcv");
+}
+
+static int bgx_probe(struct platform_device *pdev)
+{
+	struct platform_device *new_dev, *pki_dev;
+	struct mac_platform_data platform_data;
+	int i, interface, numa_node, r = 0;
+	struct device_node *child;
+	const __be32 *reg;
+	u64 addr, data;
+	char id[64];
+	u32 port;
+
+	reg = of_get_property(pdev->dev.of_node, "reg", NULL);
+	addr = of_translate_address(pdev->dev.of_node, reg);
+	interface = (addr >> 24) & 0xf;
+	numa_node = (addr >> 36) & 0x7;
+
+	/* Assign 8 CAM entries per LMAC */
+	for (i = 0; i < 32; i++) {
+		data = i >> 3;
+		oct_csr_write(data,
+			      BGX_CMR_RX_ADRX_CAM(numa_node, interface, i));
+	}
+
+	for_each_available_child_of_node(pdev->dev.of_node, child) {
+		struct pdev_list_item *pdev_item;
+		bool is_mix = false;
+		bool is_pki = false;
+		bool is_xcv = false;
+
+		if (!of_device_is_compatible(child, "cavium,octeon-7890-bgx-port") &&
+		    !of_device_is_compatible(child, "cavium,octeon-7360-xcv"))
+			continue;
+		r = of_property_read_u32(child, "reg", &port);
+		if (r)
+			return -ENODEV;
+
+		is_mix = is_lmac_to_mix(numa_node, interface, port);
+		is_pki = is_lmac_to_pki(numa_node, interface, port);
+		is_xcv = is_lmac_to_xcv(child);
+
+		/* Check if this port should be configured */
+		if (!is_mix && !is_pki)
+			continue;
+
+		/* Connect to PKI/PKO */
+		data = oct_csr_read(BGX_CMR_CONFIG(numa_node, interface, port));
+		if (is_mix)
+			data |= BGX_CMR_CONFIG_MIX_EN;
+		else
+			data &= ~BGX_CMR_CONFIG_MIX_EN;
+		oct_csr_write(data, BGX_CMR_CONFIG(numa_node, interface, port));
+
+		/* Unreset the mix bgx interface or it will interfere with the
+		 * other ports.
+		 */
+		if (is_mix) {
+			data = oct_csr_read(BGX_CMR_GLOBAL_CONFIG(numa_node,
+								  interface));
+			if (!port)
+				data &= ~BGX_CMR_GLOBAL_CONFIG_CMR_MIX0_RST;
+			else if (port == 1)
+				data &= ~BGX_CMR_GLOBAL_CONFIG_CMR_MIX1_RST;
+			oct_csr_write(data, BGX_CMR_GLOBAL_CONFIG(numa_node,
+								  interface));
+		}
+
+		snprintf(id, sizeof(id), "%llx.%u.ethernet-mac",
+			 (unsigned long long)addr, port);
+		new_dev = of_platform_device_create(child, id, &pdev->dev);
+		if (!new_dev) {
+			dev_err(&pdev->dev, "Error creating %s\n", id);
+			continue;
+		}
+		platform_data.mac_type = BGX_MAC;
+		platform_data.numa_node = numa_node;
+		platform_data.interface = interface;
+		platform_data.port = port;
+		if (is_xcv)
+			platform_data.src_type = XCV;
+		else
+			platform_data.src_type = QLM;
+
+		/* Add device to the list of created devices so we can remove it
+		 * on exit.
+		 */
+		pdev_item = kmalloc(sizeof(*pdev_item), GFP_KERNEL);
+		pdev_item->pdev = new_dev;
+		mutex_lock(&pdev_list_lock);
+		list_add(&pdev_item->list, &pdev_list);
+		mutex_unlock(&pdev_list_lock);
+
+		atomic_inc(&pki_id);
+		pki_dev = platform_device_register_data(&new_dev->dev,
+				is_mix ? "octeon_mgmt" : "ethernet-mac-pki",
+				(int)atomic_read(&pki_id),
+				&platform_data, sizeof(platform_data));
+		dev_info(&pdev->dev, "Created %s %u: %p\n",
+			 is_mix ? "MIX" : "PKI", pki_dev->id, pki_dev);
+
+		/* Add device to the list of created devices so we can remove it
+		 * on exit.
+		 */
+		pdev_item = kmalloc(sizeof(*pdev_item), GFP_KERNEL);
+		pdev_item->pdev = pki_dev;
+		mutex_lock(&pdev_list_lock);
+		list_add(&pdev_item->list, &pdev_list);
+		mutex_unlock(&pdev_list_lock);
+
+#ifdef CONFIG_NUMA
+		new_dev->dev.numa_node = pdev->dev.numa_node;
+		pki_dev->dev.numa_node = pdev->dev.numa_node;
+#endif
+		/* One time request driver module */
+		if (is_mix) {
+			if (atomic_cmpxchg(&request_mgmt_once, 0, 1) == 0)
+				request_module_nowait("octeon_mgmt");
+		}
+		if (is_pki) {
+			if (atomic_cmpxchg(&load_driver_once, 0, 1) == 0)
+				request_module_nowait("octeon3-ethernet");
+		}
+	}
+
+	dev_info(&pdev->dev, "Probed\n");
+	return 0;
+}
+
+/* bgx_mix_init_from_fdt - Initialize the list of lmacs that connect to mix
+ *			   ports from information in the device tree.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_mix_init_from_fdt(void)
+{
+	struct device_node *node, *parent = NULL;
+	int mix = 0;
+
+	for_each_compatible_node(node, NULL, "cavium,octeon-7890-mix") {
+		struct device_node *lmac_fdt_node;
+		const __be32 *reg;
+		u64 addr;
+
+		/* Get the fdt node of the lmac connected to this mix */
+		lmac_fdt_node = of_parse_phandle(node, "cavium,mac-handle", 0);
+		if (!lmac_fdt_node)
+			goto err;
+
+		/* Get the numa node and bgx of the lmac */
+		parent = of_get_parent(lmac_fdt_node);
+		if (!parent)
+			goto err;
+		reg = of_get_property(parent, "reg", NULL);
+		if (!reg)
+			goto err;
+		addr = of_translate_address(parent, reg);
+		of_node_put(parent);
+		parent = NULL;
+
+		mix_port_lmacs[mix].node = (addr >> 36) & 0x7;
+		mix_port_lmacs[mix].bgx = (addr >> 24) & 0xf;
+
+		/* Get the lmac index */
+		reg = of_get_property(lmac_fdt_node, "reg", NULL);
+		if (!reg)
+			goto err;
+
+		mix_port_lmacs[mix].lmac = *reg;
+
+		mix++;
+		if (mix >= MAX_MIX)
+			break;
+	}
+
+	return 0;
+err:
+	pr_warn("Invalid device tree mix port information\n");
+	for (mix = 0; mix < MAX_MIX; mix++) {
+		mix_port_lmacs[mix].node = -1;
+		mix_port_lmacs[mix].bgx = -1;
+		mix_port_lmacs[mix].lmac = -1;
+	}
+	if (parent)
+		of_node_put(parent);
+
+	return -EINVAL;
+}
+
+/* bgx_mix_init_from_param - Initialize the list of lmacs that connect to mix
+ *			     ports from information in the "mix_port" parameter.
+ *			     The mix_port parameter format is as follows:
+ *			     mix_port=nbl
+ *			     where:
+ *				n = node
+ *				b = bgx
+ *				l = lmac
+ *			     There can be up to 4 lmacs defined separated by
+ *			     commas. For example to select node0, bgx0, lmac0
+ *			     and node0, bgx4, lmac0, the mix_port parameter
+ *			     would be: mix_port=000,040
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_mix_init_from_param(void)
+{
+	char *p = mix_port;
+	int i, mix = 0;
+
+	while (*p) {
+		int node = -1;
+		int bgx = -1;
+		int lmac = -1;
+
+		if (strlen(p) < 3)
+			goto err;
+
+		/* Get the numa node */
+		if (!isdigit(*p))
+			goto err;
+		node = *p - '0';
+		if (node >= MAX_NODES)
+			goto err;
+
+		/* Get the bgx */
+		p++;
+		if (!isdigit(*p))
+			goto err;
+		bgx = *p - '0';
+		if (bgx >= MAX_BGX_PER_NODE)
+			goto err;
+
+		/* Get the lmac index */
+		p++;
+		if (!isdigit(*p))
+			goto err;
+		lmac = *p - '0';
+		if (lmac >= 2)
+			goto err;
+
+		/* Only one lmac0 and one lmac1 per node is supported */
+		for (i = 0; i < MAX_MIX; i++) {
+			if (mix_port_lmacs[i].node == node &&
+			    mix_port_lmacs[i].lmac == lmac)
+				goto err;
+		}
+
+		mix_port_lmacs[mix].node = node;
+		mix_port_lmacs[mix].bgx = bgx;
+		mix_port_lmacs[mix].lmac = lmac;
+
+		p++;
+		if (*p == ',')
+			p++;
+
+		mix++;
+		if (mix >= MAX_MIX)
+			break;
+	}
+
+	return 0;
+err:
+	pr_warn("Invalid parameter mix_port=%s\n", mix_port);
+	for (mix = 0; mix < MAX_MIX; mix++) {
+		mix_port_lmacs[mix].node = -1;
+		mix_port_lmacs[mix].bgx = -1;
+		mix_port_lmacs[mix].lmac = -1;
+	}
+	return -EINVAL;
+}
+
+/* bgx_mix_port_lmacs_init - Initialize the mix_port_lmacs variable with the
+ *			     lmacs that connect to mix ports.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_mix_port_lmacs_init(void)
+{
+	int mix;
+
+	/* Start with no mix ports configured */
+	for (mix = 0; mix < MAX_MIX; mix++) {
+		mix_port_lmacs[mix].node = -1;
+		mix_port_lmacs[mix].bgx = -1;
+		mix_port_lmacs[mix].lmac = -1;
+	}
+
+	/* Check if no mix port should be configured */
+	if (mix_port && !strcmp(mix_port, "none"))
+		return 0;
+
+	/* Configure the mix ports using information from the device tree if no
+	 * parameter was passed. Otherwise, use the information in the module
+	 * parameter.
+	 */
+	if (!mix_port)
+		bgx_mix_init_from_fdt();
+	else
+		bgx_mix_init_from_param();
+
+	return 0;
+}
+
+/* bgx_parse_pki_elem - Parse a single element (node, bgx, or lmac) out of a
+ *			pki lmac string and set its bitmap accordingly.
+ * @str: Pki lmac string to parse.
+ * @bitmap: Updated with the bits selected by str.
+ * @size: Maximum size of the bitmap.
+ *
+ * Returns number of characters processed from str.
+ * Returns <0 for error codes.
+ */
+static int bgx_parse_pki_elem(const char *str, unsigned long *bitmap, int size)
+{
+	const char *p = str;
+	int bit, len = -1;
+
+	if (*p == 0) {
+		/* If identifier is missing, the whole subset is allowed */
+		bitmap_set(bitmap, 0, size);
+		len = 0;
+	} else if (*p == '*') {
+		/* If identifier is an asterisk, the whole subset is allowed */
+		bitmap_set(bitmap, 0, size);
+		len = 1;
+	} else if (isdigit(*p)) {
+		/* If identifier is a digit, only the bit corresponding to the
+		 * digit is set.
+		 */
+		bit = *p - '0';
+		if (bit < size) {
+			bitmap_set(bitmap, bit, 1);
+			len = 1;
+		}
+	} else if (*p == '[') {
+		/* If identifier is a bracket, all the bits corresponding to
+		 * the digits inside the bracket are set.
+		 */
+		p++;
+		len = 1;
+		do {
+			if (isdigit(*p)) {
+				bit = *p - '0';
+				if (bit < size)
+					bitmap_set(bitmap, bit, 1);
+				else
+					return -1;
+			} else {
+				return -1;
+			}
+			p++;
+			len++;
+		} while (*p != ']');
+		len++;
+	} else {
+		len = -1;
+	}
+
+	return len;
+}
+
+/* bgx_pki_bitmap_set - Set the bitmap bits for all elements (node, bgx, and
+ *			lmac) selected by a pki lmac string.
+ * @str: Pki lmac string to process.
+ * @node: Updated with the nodes specified in the pki lmac string.
+ * @bgx: Updated with the bgx's specified in the pki lmac string.
+ * @lmac: Updated with the lmacs specified in the pki lmac string.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_pki_bitmap_set(const char *str, unsigned long *node,
+			      unsigned long *bgx, unsigned long *lmac)
+{
+	const char *p = str;
+	int len;
+
+	/* Parse the node */
+	len = bgx_parse_pki_elem(p, node, MAX_NODES);
+	if (len < 0)
+		goto err;
+
+	/* Parse the bgx */
+	p += len;
+	len = bgx_parse_pki_elem(p, bgx, MAX_BGX_PER_NODE);
+	if (len < 0)
+		goto err;
+
+	/* Parse the lmac */
+	p += len;
+	len = bgx_parse_pki_elem(p, lmac, MAX_LMAC_PER_BGX);
+	if (len < 0)
+		goto err;
+
+	return 0;
+err:
+	bitmap_zero(node, MAX_NODES);
+	bitmap_zero(bgx, MAX_BGX_PER_NODE);
+	bitmap_zero(lmac, MAX_LMAC_PER_BGX);
+	return len;
+}
+
+/* bgx_pki_init_from_param - Initialize the list of lmacs that connect to the
+ *			     pki from information in the "pki_port" parameter.
+ *
+ *			     The pki_port parameter format is as follows:
+ *			     pki_port=nbl
+ *			     where:
+ *				n = node
+ *				b = bgx
+ *				l = lmac
+ *
+ *			     Commas must be used to separate multiple lmacs:
+ *			     pki_port=000,100,110
+ *
+ *			     Asterisks (*) specify all possible characters in
+ *			     the subset:
+ *			     pki_port=00* (all lmacs of node0 bgx0).
+ *
+ *			     Missing lmac identifiers default to all
+ *			     possible characters in the subset:
+ *			     pki_port=00 (all lmacs on node0 bgx0)
+ *
+ *			     Brackets ('[' and ']') specify the valid
+ *			     characters in the subset:
+ *			     pki_port=00[01] (lmac0 and lmac1 of node0 bgx0).
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_pki_init_from_param(void)
+{
+	DECLARE_BITMAP(lmac_bitmap, MAX_LMAC_PER_BGX);
+	DECLARE_BITMAP(bgx_bitmap, MAX_BGX_PER_NODE);
+	DECLARE_BITMAP(node_bitmap, MAX_NODES);
+	char *cur, *next;
+
+	/* Parse each comma separated lmac specifier */
+	cur = pki_port;
+	while (cur) {
+		unsigned long bgx, lmac, node;
+
+		bitmap_zero(node_bitmap, MAX_NODES);
+		bitmap_zero(bgx_bitmap, MAX_BGX_PER_NODE);
+		bitmap_zero(lmac_bitmap, MAX_LMAC_PER_BGX);
+
+		next = strchr(cur, ',');
+		if (next)
+			*next++ = '\0';
+
+		/* Convert the specifier into a bitmap */
+		bgx_pki_bitmap_set(cur, node_bitmap, bgx_bitmap, lmac_bitmap);
+
+		/* Mark the lmacs to be connected to the pki */
+		for_each_set_bit(node, node_bitmap, MAX_NODES) {
+			for_each_set_bit(bgx, bgx_bitmap, MAX_BGX_PER_NODE) {
+				for_each_set_bit(lmac, lmac_bitmap,
+						 MAX_LMAC_PER_BGX)
+					pki_ports[node][bgx][lmac] = true;
+			}
+		}
+
+		cur = next;
+	}
+
+	return 0;
+}
+
+/* bgx_pki_ports_init - Initialize the pki_ports variable with the lmacs that
+ *			connect to the pki.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int bgx_pki_ports_init(void)
+{
+	bool def_val;
+	int i, j, k;
+
+	/* Whether all ports default to connect to the pki or not depends on the
+	 * passed module parameter (if any).
+	 */
+	if (pki_port)
+		def_val = false;
+	else
+		def_val = true;
+
+	for (i = 0; i < MAX_NODES; i++) {
+		for (j = 0; j < MAX_BGX_PER_NODE; j++) {
+			for (k = 0; k < MAX_LMAC_PER_BGX; k++)
+				pki_ports[i][j][k] = def_val;
+		}
+	}
+
+	/* Check if ports have to be individually configured */
+	if (pki_port && strcmp(pki_port, "none"))
+		bgx_pki_init_from_param();
+
+	return 0;
+}
+
+static int bgx_remove(struct platform_device *pdev)
+{
+	return 0;
+}
+
+static void bgx_shutdown(struct platform_device *pdev)
+{
+}
+
+static const struct of_device_id bgx_match[] = {
+	{
+		.compatible = "cavium,octeon-7890-bgx",
+	},
+	{},
+};
+MODULE_DEVICE_TABLE(of, bgx_match);
+
+static struct platform_driver bgx_driver = {
+	.probe		= bgx_probe,
+	.remove		= bgx_remove,
+	.shutdown       = bgx_shutdown,
+	.driver		= {
+		.owner	= THIS_MODULE,
+		.name	= KBUILD_MODNAME,
+		.of_match_table = bgx_match,
+	},
+};
+
+/* Allow bgx_port driver to force this driver to load */
+void bgx_nexus_load(void)
+{
+}
+EXPORT_SYMBOL(bgx_nexus_load);
+
+static int __init bgx_driver_init(void)
+{
+	INIT_LIST_HEAD(&pdev_list);
+	mutex_init(&pdev_list_lock);
+	bgx_mix_port_lmacs_init();
+	bgx_pki_ports_init();
+
+	return platform_driver_register(&bgx_driver);
+}
+
+static void __exit bgx_driver_exit(void)
+{
+	struct pdev_list_item *pdev_item;
+
+	mutex_lock(&pdev_list_lock);
+	while (!list_empty(&pdev_list)) {
+		pdev_item = list_first_entry(&pdev_list,
+					     struct pdev_list_item, list);
+		list_del(&pdev_item->list);
+		platform_device_unregister(pdev_item->pdev);
+		kfree(pdev_item);
+	}
+	mutex_unlock(&pdev_list_lock);
+
+	platform_driver_unregister(&bgx_driver);
+}
+
+module_init(bgx_driver_init);
+module_exit(bgx_driver_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Cavium, Inc. <support@caviumnetworks.com>");
+MODULE_DESCRIPTION("Cavium, Inc. BGX MAC Nexus driver.");
diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h b/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
index df794f5..58ed870 100644
--- a/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-bgx.h
@@ -8,14 +8,83 @@
 
 #include <linux/bitops.h>
 
-#define BGX_CMR_CHAN_MSK_MASK		GENMASK_ULL(15, 0)
-#define BGX_CMR_CHAN_MSK_SHIFT		16
-#define BGX_CMR_CONFIG_DATA_PKT_RX_EN	BIT(14)
-#define BGX_CMR_CONFIG_DATA_PKT_TX_EN	BIT(13)
-#define BGX_CMR_CONFIG_ENABLE	BIT(15)
-#define BGX_CMR_CONFIG_LMAC_TYPE_MASK	GENMASK_ULL(10, 8)
-#define BGX_CMR_CONFIG_LMAC_TYPE_SHIFT	8
-#define BGX_CMR_CONFIG_MIX_EN		BIT(11)
+#define BGX_RX_FIFO_SIZE		(64 * 1024)
+#define BGX_TX_FIFO_SIZE		(32 * 1024)
+
+/* Bgx register definitions */
+#define BGX_BASE			0x11800e0000000ull
+#define BGX_ADDR(node, bgx, index)	(BGX_BASE + SET_XKPHYS +	       \
+					 NODE_OFFSET(node) + ((bgx) << 24) +   \
+					 ((index) << 20))
+#define BGX_ADDR_CAM(n, b, mac)		(BGX_ADDR(n, b, 0) + ((mac) << 3))
+
+#define BGX_CMR_CONFIG(n, b, i)		(BGX_ADDR(n, b, i) + 0x00000)
+#define BGX_CMR_GLOBAL_CONFIG(n, b)	(BGX_ADDR(n, b, 0) + 0x00008)
+#define BGX_CMR_RX_ID_MAP(n, b, i)	(BGX_ADDR(n, b, i) + 0x00028)
+#define BGX_CMR_RX_BP_ON(n, b, i)	(BGX_ADDR(n, b, i) + 0x00088)
+#define BGX_CMR_RX_ADR_CTL(n, b, i)	(BGX_ADDR(n, b, i) + 0x000a0)
+#define BGX_CMR_RX_FIFO_LEN(n, b, i)	(BGX_ADDR(n, b, i) + 0x000c0)
+#define BGX_CMR_RX_ADRX_CAM(n, b, m)	(BGX_ADDR_CAM(n, b, m) + 0x00100)
+#define BGX_CMR_CHAN_MSK_AND(n, b)	(BGX_ADDR(n, b, 0) + 0x00200)
+#define BGX_CMR_CHAN_MSK_OR(n, b)	(BGX_ADDR(n, b, 0) + 0x00208)
+#define BGX_CMR_TX_FIFO_LEN(n, b, i)	(BGX_ADDR(n, b, i) + 0x00418)
+#define BGX_CMR_TX_LMACS(n, b)		(BGX_ADDR(n, b, 0) + 0x01000)
+
+#define BGX_SPU_CONTROL1(n, b, i)	(BGX_ADDR(n, b, i) + 0x10000)
+#define BGX_SPU_STATUS1(n, b, i)	(BGX_ADDR(n, b, i) + 0x10008)
+#define BGX_SPU_STATUS2(n, b, i)	(BGX_ADDR(n, b, i) + 0x10020)
+#define BGX_SPU_BX_STATUS(n, b, i)	(BGX_ADDR(n, b, i) + 0x10028)
+#define BGX_SPU_BR_STATUS1(n, b, i)	(BGX_ADDR(n, b, i) + 0x10030)
+#define BGX_SPU_BR_STATUS2(n, b, i)	(BGX_ADDR(n, b, i) + 0x10038)
+#define BGX_SPU_BR_BIP_ERR_CNT(n, b, i)	(BGX_ADDR(n, b, i) + 0x10058)
+#define BGX_SPU_BR_PMD_CONTROL(n, b, i)	(BGX_ADDR(n, b, i) + 0x10068)
+#define BGX_SPU_BR_PMD_LP_CUP(n, b, i)	(BGX_ADDR(n, b, i) + 0x10078)
+#define BGX_SPU_BR_PMD_LD_CUP(n, b, i)	(BGX_ADDR(n, b, i) + 0x10088)
+#define BGX_SPU_BR_PMD_LD_REP(n, b, i)	(BGX_ADDR(n, b, i) + 0x10090)
+#define BGX_SPU_FEC_CONTROL(n, b, i)	(BGX_ADDR(n, b, i) + 0x100a0)
+#define BGX_SPU_AN_CONTROL(n, b, i)	(BGX_ADDR(n, b, i) + 0x100c8)
+#define BGX_SPU_AN_STATUS(n, b, i)	(BGX_ADDR(n, b, i) + 0x100d0)
+#define BGX_SPU_AN_ADV(n, b, i)		(BGX_ADDR(n, b, i) + 0x100d8)
+#define BGX_SPU_MISC_CONTROL(n, b, i)	(BGX_ADDR(n, b, i) + 0x10218)
+#define BGX_SPU_INT(n, b, i)		(BGX_ADDR(n, b, i) + 0x10220)
+#define BGX_SPU_DBG_CONTROL(n, b)	(BGX_ADDR(n, b, 0) + 0x10300)
+
+#define BGX_SMU_RX_INT(n, b, i)		(BGX_ADDR(n, b, i) + 0x20000)
+#define BGX_SMU_RX_FRM_CTL(n, b, i)	(BGX_ADDR(n, b, i) + 0x20008)
+#define BGX_SMU_RX_JABBER(n, b, i)	(BGX_ADDR(n, b, i) + 0x20018)
+#define BGX_SMU_RX_CTL(n, b, i)		(BGX_ADDR(n, b, i) + 0x20030)
+#define BGX_SMU_TX_APPEND(n, b, i)	(BGX_ADDR(n, b, i) + 0x20100)
+#define BGX_SMU_TX_MIN_PKT(n, b, i)	(BGX_ADDR(n, b, i) + 0x20118)
+#define BGX_SMU_TX_INT(n, b, i)		(BGX_ADDR(n, b, i) + 0x20140)
+#define BGX_SMU_TX_CTL(n, b, i)		(BGX_ADDR(n, b, i) + 0x20160)
+#define BGX_SMU_TX_THRESH(n, b, i)	(BGX_ADDR(n, b, i) + 0x20168)
+#define BGX_SMU_CTRL(n, b, i)		(BGX_ADDR(n, b, i) + 0x20200)
+
+#define BGX_GMP_PCS_MR_CONTROL(n, b, i)	(BGX_ADDR(n, b, i) + 0x30000)
+#define BGX_GMP_PCS_MR_STATUS(n, b, i)	(BGX_ADDR(n, b, i) + 0x30008)
+#define BGX_GMP_PCS_AN_ADV(n, b, i)	(BGX_ADDR(n, b, i) + 0x30010)
+#define BGX_GMP_PCS_LINK_TIMER(n, b, i)	(BGX_ADDR(n, b, i) + 0x30040)
+#define BGX_GMP_PCS_SGM_AN_ADV(n, b, i)	(BGX_ADDR(n, b, i) + 0x30068)
+#define BGX_GMP_PCS_MISC_CTL(n, b, i)	(BGX_ADDR(n, b, i) + 0x30078)
+#define BGX_GMP_GMI_PRT_CFG(n, b, i)	(BGX_ADDR(n, b, i) + 0x38010)
+#define BGX_GMP_GMI_RX_FRM_CTL(n, b, i)	(BGX_ADDR(n, b, i) + 0x38018)
+#define BGX_GMP_GMI_RX_JABBER(n, b, i)	(BGX_ADDR(n, b, i) + 0x38038)
+#define BGX_GMP_GMI_TX_THRESH(n, b, i)	(BGX_ADDR(n, b, i) + 0x38210)
+#define BGX_GMP_GMI_TX_APPEND(n, b, i)	(BGX_ADDR(n, b, i) + 0x38218)
+#define BGX_GMP_GMI_TX_SLOT(n, b, i)	(BGX_ADDR(n, b, i) + 0x38220)
+#define BGX_GMP_GMI_TX_BURST(n, b, i)	(BGX_ADDR(n, b, i) + 0x38228)
+#define BGX_GMP_GMI_TX_MIN_PKT(n, b, i)	(BGX_ADDR(n, b, i) + 0x38240)
+#define BGX_GMP_GMI_TX_SGMII_CTL(n, b, i) (BGX_ADDR(n, b, i) + 0x38300)
+
+/* Bgx register bitfields */
+#define BGX_CMR_CHAN_MSK_MASK			GENMASK_ULL(15, 0)
+#define BGX_CMR_CHAN_MSK_SHIFT			16
+#define BGX_CMR_CONFIG_DATA_PKT_RX_EN		BIT(14)
+#define BGX_CMR_CONFIG_DATA_PKT_TX_EN		BIT(13)
+#define BGX_CMR_CONFIG_ENABLE			BIT(15)
+#define BGX_CMR_CONFIG_LMAC_TYPE_MASK		GENMASK_ULL(10, 8)
+#define BGX_CMR_CONFIG_LMAC_TYPE_SHIFT		8
+#define BGX_CMR_CONFIG_MIX_EN			BIT(11)
 #define BGX_CMR_GLOBAL_CONFIG_CMR_MIX0_RST	BIT(3)
 #define BGX_CMR_GLOBAL_CONFIG_CMR_MIX1_RST	BIT(4)
 #define BGX_CMR_RX_ADR_CTL_ACCEPT_ALL_MCST	BIT(1)
@@ -23,128 +92,100 @@
 #define BGX_CMR_RX_ADR_CTL_CAM_ACCEPT		BIT(3)
 #define BGX_CMR_RX_ADR_CTL_MCST_MODE_MASK	GENMASK_ULL(2, 1)
 #define BGX_CMR_RX_ADR_CTL_USE_CAM_FILTER	BIT(2)
-#define BGX_CMR_RX_ADRX_CAM_EN		BIT(48)
-#define BGX_CMR_RX_ADRX_CAM_ID_SHIFT	52
-#define BGX_CMR_RX_FIFO_LEN_MASK	GENMASK_ULL(12, 0)
-#define BGX_CMR_RX_ID_MAP_PKND_MASK	GENMASK_ULL(7, 0)
-#define BGX_CMR_RX_ID_MAP_RID_MASK	GENMASK_ULL(14, 8)
-#define BGX_CMR_RX_ID_MAP_RID_SHIFT	8
-#define BGX_CMR_TX_FIFO_LEN_LMAC_IDLE	BIT(13)
-#define BGX_CMR_TX_LMACS_MASK		GENMASK_ULL(2, 0)
-#define BGX_GMP_GMI_PRT_CFG_DUPLEX	BIT(2)
-#define BGX_GMP_GMI_PRT_CFG_RX_IDLE	BIT(12)
-#define BGX_GMP_GMI_PRT_CFG_SLOTTIME	BIT(3)
-#define BGX_GMP_GMI_PRT_CFG_SPEED	BIT(1)
-#define BGX_GMP_GMI_PRT_CFG_SPEED_MSB	BIT(8)
-#define BGX_GMP_GMI_PRT_CFG_TX_IDLE	BIT(13)
+#define BGX_CMR_RX_ADRX_CAM_EN			BIT(48)
+#define BGX_CMR_RX_ADRX_CAM_ID_SHIFT		52
+#define BGX_CMR_RX_FIFO_LEN_MASK		GENMASK_ULL(12, 0)
+#define BGX_CMR_RX_ID_MAP_PKND_MASK		GENMASK_ULL(7, 0)
+#define BGX_CMR_RX_ID_MAP_RID_MASK		GENMASK_ULL(14, 8)
+#define BGX_CMR_RX_ID_MAP_RID_SHIFT		8
+#define BGX_CMR_TX_FIFO_LEN_LMAC_IDLE		BIT(13)
+#define BGX_CMR_TX_LMACS_MASK			GENMASK_ULL(2, 0)
+
+#define BGX_SPU_AN_ADV_FEC_ABLE			BIT(46)
+#define BGX_SPU_AN_ADV_FEC_REQ			BIT(47)
+#define BGX_SPU_AN_ADV_A100G_CR10		BIT(26)
+#define BGX_SPU_AN_ADV_A40G_CR4			BIT(25)
+#define BGX_SPU_AN_ADV_A40G_KR4			BIT(24)
+#define BGX_SPU_AN_ADV_A10G_KR			BIT(23)
+#define BGX_SPU_AN_ADV_A10G_KX4			BIT(22)
+#define BGX_SPU_AN_ADV_A1G_KX			BIT(21)
+#define BGX_SPU_AN_ADV_RF			BIT(13)
+#define BGX_SPU_AN_ADV_XNP_ABLE			BIT(12)
+#define BGX_SPU_AN_CONTROL_XNP_EN		BIT(13)
+#define BGX_SPU_AN_CONTROL_AN_EN		BIT(12)
+#define BGX_SPU_AN_CONTROL_AN_RESTART		BIT(9)
+#define BGX_SPU_AN_STATUS_AN_COMPLETE		BIT(5)
+#define BGX_SPU_BR_PMD_CONTROL_TRAIN_EN		BIT(1)
+#define BGX_SPU_BR_PMD_CONTROL_TRAIN_RESTART	BIT(0)
+#define BGX_SPU_BR_STATUS1_BLK_LOCK		BIT(0)
+#define BGX_SPU_BR_STATUS2_LATCHED_LOCK		BIT(15)
+#define BGX_SPU_BR_STATUS2_LATCHED_BER		BIT(14)
+#define BGX_SPU_BX_STATUS_ALIGND		BIT(12)
+#define BGX_SPU_CONTROL1_RESET			BIT(15)
+#define BGX_SPU_CONTROL1_LOOPBACK		BIT(14)
+#define BGX_SPU_CONTROL1_LO_PWR			BIT(11)
+#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_MASK	GENMASK_ULL(43, 32)
+#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_SHIFT	32
+#define BGX_SPU_DBG_CONTROL_AN_NONCE_MATCH_DIS	BIT(29)
+#define BGX_SPU_DBG_CONTROL_AN_ARB_LINK_CHK_EN	BIT(18)
+#define BGX_SPU_FEC_CONTROL_FEC_EN		BIT(0)
+#define BGX_SPU_INT_AN_COMPLETE			BIT(12)
+#define BGX_SPU_INT_AN_LINK_GOOD		BIT(11)
+#define BGX_SPU_INT_AN_PAGE_RX			BIT(10)
+#define BGX_SPU_INT_TRAINING_FAILURE		BIT(14)
+#define BGX_SPU_INT_TRAINING_DONE		BIT(13)
+#define BGX_SPU_MISC_CONTROL_RX_PACKET_DIS	BIT(12)
+#define BGX_SPU_MISC_CONTROL_INTLV_RDISP	BIT(10)
+#define BGX_SPU_STATUS1_RCV_LINK		BIT(2)
+#define BGX_SPU_STATUS2_RCVFLT			BIT(10)
+
+#define BGX_SMU_CTRL_TX_IDLE			BIT(1)
+#define BGX_SMU_CTRL_RX_IDLE			BIT(0)
+#define BGX_SMU_RX_CTL_STATUS_MASK		GENMASK_ULL(1, 0)
+#define BGX_SMU_TX_APPEND_FCS_C			BIT(3)
+#define BGX_SMU_TX_APPEND_FCS_D			BIT(2)
+#define BGX_SMU_TX_APPEND_PAD			BIT(1)
+#define BGX_SMU_TX_CTL_LS_MASK			GENMASK_ULL(5, 4)
+#define BGX_SMU_TX_CTL_UNI_EN			BIT(1)
+#define BGX_SMU_TX_CTL_DIC_EN			BIT(0)
+
+#define BGX_GMP_GMI_PRT_CFG_TX_IDLE		BIT(13)
+#define BGX_GMP_GMI_PRT_CFG_RX_IDLE		BIT(12)
+#define BGX_GMP_GMI_PRT_CFG_SPEED_MSB		BIT(8)
+#define BGX_GMP_GMI_PRT_CFG_SLOTTIME		BIT(3)
+#define BGX_GMP_GMI_PRT_CFG_DUPLEX		BIT(2)
+#define BGX_GMP_GMI_PRT_CFG_SPEED		BIT(1)
 #define BGX_GMP_GMI_TX_APPEND_FCS		BIT(2)
 #define BGX_GMP_GMI_TX_APPEND_PAD		BIT(1)
 #define BGX_GMP_GMI_TX_APPEND_PREAMBLE		BIT(0)
 #define BGX_GMP_GMI_TX_THRESH_DEFAULT		0x20
-#define BGX_GMP_PCS_AN_ADV_FULL_DUPLEX		BIT(5)
-#define BGX_GMP_PCS_AN_ADV_HALF_DUPLEX		BIT(6)
-#define BGX_GMP_PCS_AN_ADV_PAUSE_ASYMMETRIC	2
-#define BGX_GMP_PCS_AN_ADV_PAUSE_BOTH		3
-#define BGX_GMP_PCS_AN_ADV_PAUSE_MASK	GENMASK_ULL(8, 7)
+#define BGX_GMP_PCS_AN_ADV_REM_FLT_MASK		GENMASK_ULL(13, 12)
+#define BGX_GMP_PCS_AN_ADV_PAUSE_MASK		GENMASK_ULL(8, 7)
+#define BGX_GMP_PCS_AN_ADV_PAUSE_SHIFT		7
 #define BGX_GMP_PCS_AN_ADV_PAUSE_NONE		0
-#define BGX_GMP_PCS_AN_ADV_PAUSE_SHIFT	7
 #define BGX_GMP_PCS_AN_ADV_PAUSE_SYMMETRIC	1
-#define BGX_GMP_PCS_AN_ADV_REM_FLT_MASK	GENMASK_ULL(13, 12)
+#define BGX_GMP_PCS_AN_ADV_PAUSE_ASYMMETRIC	2
+#define BGX_GMP_PCS_AN_ADV_PAUSE_BOTH		3
+#define BGX_GMP_PCS_AN_ADV_HALF_DUPLEX		BIT(6)
+#define BGX_GMP_PCS_AN_ADV_FULL_DUPLEX		BIT(5)
 #define BGX_GMP_PCS_LINK_TIMER_COUNT_SHIFT	10
-#define BGX_GMP_PCS_MISC_CTL_GMXENO	BIT(11)
-#define BGX_GMP_PCS_MISC_CTL_MAC_PHY	BIT(9)
-#define BGX_GMP_PCS_MISC_CTL_MODE	BIT(8)
+#define BGX_GMP_PCS_MISC_CTL_GMXENO		BIT(11)
+#define BGX_GMP_PCS_MISC_CTL_MAC_PHY		BIT(9)
+#define BGX_GMP_PCS_MISC_CTL_MODE		BIT(8)
 #define BGX_GMP_PCS_MISC_CTL_SAMP_PT_MASK	GENMASK_ULL(6, 0)
-#define BGX_GMP_PCS_MR_CONTROL_AN_EN	BIT(12)
-#define BGX_GMP_PCS_MR_CONTROL_PWR_DN	BIT(11)
-#define BGX_GMP_PCS_MR_CONTROL_RESET	BIT(15)
-#define BGX_GMP_PCS_MR_CONTROL_RST_AN	BIT(9)
-#define BGX_GMP_PCS_MR_CONTROL_SPDLSB	BIT(13)
-#define BGX_GMP_PCS_MR_CONTROL_SPDMSB	BIT(6)
-#define BGX_GMP_PCS_MR_STATUS_AN_CPT	BIT(5)
+#define BGX_GMP_PCS_MR_CONTROL_RESET		BIT(15)
+#define BGX_GMP_PCS_MR_CONTROL_SPDLSB		BIT(13)
+#define BGX_GMP_PCS_MR_CONTROL_AN_EN		BIT(12)
+#define BGX_GMP_PCS_MR_CONTROL_PWR_DN		BIT(11)
+#define BGX_GMP_PCS_MR_CONTROL_RST_AN		BIT(9)
+#define BGX_GMP_PCS_MR_CONTROL_SPDMSB		BIT(6)
+#define BGX_GMP_PCS_MR_STATUS_AN_CPT		BIT(5)
 #define BGX_GMP_PCS_SGM_AN_ADV_DUPLEX_FULL	BIT(12)
-#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10		0
-#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10000	3
-#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_1000	2
-#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_100	1
 #define BGX_GMP_PCS_SGM_AN_ADV_SPEED_MASK	GENMASK_ULL(11, 10)
 #define BGX_GMP_PCS_SGM_AN_ADV_SPEED_SHIFT	10
-#define BGX_SMU_CTRL_RX_IDLE		BIT(0)
-#define BGX_SMU_CTRL_TX_IDLE		BIT(1)
-#define BGX_SMU_RX_CTL_STATUS_MASK	GENMASK_ULL(1, 0)
-#define BGX_SMU_TX_APPEND_FCS_C		BIT(3)
-#define BGX_SMU_TX_APPEND_FCS_D			BIT(2)
-#define BGX_SMU_TX_APPEND_PAD			BIT(1)
-#define BGX_SMU_TX_CTL_DIC_EN	BIT(0)
-#define BGX_SMU_TX_CTL_LS_MASK		GENMASK_ULL(5, 4)
-#define BGX_SMU_TX_CTL_UNI_EN	BIT(1)
-#define BGX_SPU_AN_ADV_A100G_CR10	BIT(26)
-#define BGX_SPU_AN_ADV_A10G_KR		BIT(23)
-#define BGX_SPU_AN_ADV_A10G_KX4		BIT(22)
-#define BGX_SPU_AN_ADV_A1G_KX		BIT(21)
-#define BGX_SPU_AN_ADV_A40G_CR4		BIT(25)
-#define BGX_SPU_AN_ADV_A40G_KR4		BIT(24)
-#define BGX_SPU_AN_ADV_FEC_ABLE		BIT(46)
-#define BGX_SPU_AN_ADV_FEC_REQ		BIT(47)
-#define BGX_SPU_AN_ADV_RF		BIT(13)
-#define BGX_SPU_AN_ADV_XNP_ABLE		BIT(12)
-#define BGX_SPU_AN_CONTROL_AN_EN	BIT(12)
-#define BGX_SPU_AN_CONTROL_AN_RESTART	BIT(9)
-#define BGX_SPU_AN_CONTROL_XNP_EN	BIT(13)
-#define BGX_SPU_AN_STATUS_AN_COMPLETE	BIT(5)
-#define BGX_SPU_BR_PMD_CONTROL_TRAIN_EN		BIT(1)
-#define BGX_SPU_BR_PMD_CONTROL_TRAIN_RESTART	BIT(0)
-#define BGX_SPU_BR_STATUS1_BLK_LOCK		BIT(0)
-#define BGX_SPU_BR_STATUS2_LATCHED_BER		BIT(14)
-#define BGX_SPU_BR_STATUS2_LATCHED_LOCK		BIT(15)
-#define BGX_SPU_BX_STATUS_ALIGND	BIT(12)
-#define BGX_SPU_CONTROL1_LOOPBACK	BIT(14)
-#define BGX_SPU_CONTROL1_LO_PWR	BIT(11)
-#define BGX_SPU_CONTROL1_RESET	BIT(15)
-#define BGX_SPU_DBG_CONTROL_AN_ARB_LINK_CHK_EN	BIT(18)
-#define BGX_SPU_DBG_CONTROL_AN_NONCE_MATCH_DIS	BIT(29)
-#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_MASK	GENMASK_ULL(43, 32)
-#define BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_SHIFT	32
-#define BGX_SPU_FEC_CONTROL_FEC_EN	BIT(0)
-#define BGX_SPU_INT_AN_COMPLETE		BIT(12)
-#define BGX_SPU_INT_AN_LINK_GOOD	BIT(11)
-#define BGX_SPU_INT_AN_PAGE_RX		BIT(10)
-#define BGX_SPU_INT_TRAINING_DONE	BIT(13)
-#define BGX_SPU_INT_TRAINING_FAILURE	BIT(14)
-#define BGX_SPU_MISC_CONTROL_INTLV_RDISP	BIT(10)
-#define BGX_SPU_MISC_CONTROL_RX_PACKET_DIS	BIT(12)
-#define BGX_SPU_STATUS1_RCV_LINK	BIT(2)
-#define BGX_SPU_STATUS2_RCVFLT		BIT(10)
-#define GSER_BR_RX_CTL_RXT_EER		BIT(15)
-#define GSER_BR_RX_CTL_RXT_ESV		BIT(14)
-#define GSER_BR_RX_CTL_RXT_SWM		BIT(2)
-#define GSER_BR_RX_EER_RXT_EER         BIT(15)
-#define GSER_BR_RX_EER_RXT_ESV         BIT(14)
-#define GSER_LANE_LBERT_CFG_LBERT_PM_EN		BIT(6)
-#define GSER_LANE_MODE_LMODE_MASK	GENMASK_ULL(3, 0)
-#define GSER_LANE_MODE_MASK	GENMASK_ULL(3, 0)
-#define GSER_LANE_PCS_CTLIFC_0_CFG_TX_COEFF_REQ_OVRRD_VAL	BIT(12)
-#define GSER_LANE_PCS_CTLIFC_2_CFG_TX_COEFF_REQ_OVRRD_EN	BIT(7)
-#define GSER_LANE_PCS_CTLIFC_2_CTLIFC_OVRRD_REQ			BIT(15)
-#define GSER_LANE_P_MODE_1_VMA_MM	BIT(14)
-#define GSER_PHY_CTL_PHY_PD	BIT(0)
-#define GSER_PHY_CTL_PHY_RESET	BIT(1)
-#define GSER_RX_EIE_DETSTS_CDRLOCK_SHIFT	8
-#define XCV_BATCH_CRD_RET_CRD_RET	BIT(0)
-#define XCV_COMP_CTL_DRV_BYP		BIT(63)
-#define XCV_CTL_LPBK_INT	BIT(2)
-#define XCV_CTL_SPEED_MASK	GENMASK_ULL(1, 0)
-#define XCV_DLL_CTL_CLKRX_BYP		BIT(23)
-#define XCV_DLL_CTL_CLKRX_SET_MASK	GENMASK_ULL(22, 16)
-#define XCV_DLL_CTL_CLKTX_BYP		BIT(15)
-#define XCV_DLL_CTL_REFCLK_SEL_MASK	GENMASK_ULL(1, 0)
-#define XCV_RESET_CLKRST	BIT(15)
-#define XCV_RESET_COMP		BIT(7)
-#define XCV_RESET_DLLRST	BIT(11)
-#define XCV_RESET_ENABLE	BIT(63)
-#define XCV_RESET_RX_DAT_RST_N	BIT(0)
-#define XCV_RESET_RX_PKT_RST_N	BIT(1)
-#define XCV_RESET_TX_DAT_RST_N	BIT(2)
-#define XCV_RESET_TX_PKT_RST_N	BIT(3)
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10		0
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_100	1
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_1000	2
+#define BGX_GMP_PCS_SGM_AN_ADV_SPEED_10000	3
 
 #endif /* _OCTEON3_BGX_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 04/10] netdev: cavium: octeon: Add Octeon III BGX Ports
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (2 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 05/10] netdev: cavium: octeon: Add Octeon III PKI Support Steven J. Hill
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add individual BGX nexus port support for Octeon III BGX Ethernet.

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 .../net/ethernet/cavium/octeon/octeon3-bgx-port.c  | 2192 ++++++++++++++++++++
 1 file changed, 2192 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-bgx-port.c

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-bgx-port.c b/drivers/net/ethernet/cavium/octeon/octeon3-bgx-port.c
new file mode 100644
index 0000000..eb5921b
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-bgx-port.c
@@ -0,0 +1,2192 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III BGX Nexus Ethernet driver
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/of_address.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+
+#include "octeon3.h"
+
+struct bgx_port_priv {
+	int node;
+	int bgx;
+	int index; /* Port index on BGX block */
+	enum port_mode mode;
+	int pknd;
+	int qlm;
+	const u8 *mac_addr;
+	struct phy_device *phydev;
+	struct device_node *phy_np;
+	bool mode_1000basex;
+	bool bgx_as_phy;
+	struct net_device *netdev;
+	struct mutex lock;	/* Serializes delayed work */
+	struct port_status (*get_link)(struct bgx_port_priv *priv);
+	int (*set_link)(struct bgx_port_priv *priv, struct port_status status);
+	struct port_status last_status;
+	struct delayed_work dwork;
+	bool work_queued;
+};
+
+/* lmac_pknd keeps track of the port kinds assigned to the lmacs */
+static int lmac_pknd[MAX_NODES][MAX_BGX_PER_NODE][MAX_LMAC_PER_BGX];
+
+static struct workqueue_struct *check_state_wq;
+static DEFINE_MUTEX(check_state_wq_mutex);
+
+int bgx_port_get_qlm(int node, int bgx, int index)
+{
+	int qlm = -1;
+	u64 data;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		if (bgx < 2) {
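+			/* Bit 0 selects the alternate QLM mapping */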
+			data = oct_csr_read(BGX_CMR_GLOBAL_CONFIG(node, bgx));
+			if (data & 1)
+				qlm = bgx + 2;
+			else
+				qlm = bgx;
+		} else {
+			qlm = bgx + 2;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		if (bgx < 2) {
+			qlm = bgx + 2;
+		} else {
+			/* Ports on bgx2 can be connected to qlm5 or qlm6 */
+			if (index < 2)
+				qlm = 5;
+			else
+				qlm = 6;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		/* Ports on bgx0 can be connected to qlm4 or qlm5 */
+		if (index < 2)
+			qlm = 4;
+		else
+			qlm = 5;
+	}
+
+	return qlm;
+}
+EXPORT_SYMBOL(bgx_port_get_qlm);
+
+/* Returns the mode of the bgx port */
+enum port_mode bgx_port_get_mode(int node, int bgx, int index)
+{
+	enum port_mode mode;
+	u64 data;
+
+	data = oct_csr_read(BGX_CMR_CONFIG(node, bgx, index)) &
+			    BGX_CMR_CONFIG_LMAC_TYPE_MASK;
+
+	switch (data  >> BGX_CMR_CONFIG_LMAC_TYPE_SHIFT) {
+	case 0:
+		mode = PORT_MODE_SGMII;
+		break;
+	case 1:
+		mode = PORT_MODE_XAUI;
+		break;
+	case 2:
+		mode = PORT_MODE_RXAUI;
+		break;
+	case 3:
+		data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(node, bgx, index));
+		/* The use of training differentiates 10G_KR from xfi */
+		if (data & BGX_SPU_BR_PMD_CONTROL_TRAIN_EN)
+			mode = PORT_MODE_10G_KR;
+		else
+			mode = PORT_MODE_XFI;
+		break;
+	case 4:
+		data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(node, bgx, index));
+		/* The use of training differentiates 40G_KR4 from xlaui */
+		if (data & BGX_SPU_BR_PMD_CONTROL_TRAIN_EN)
+			mode = PORT_MODE_40G_KR4;
+		else
+			mode = PORT_MODE_XLAUI;
+		break;
+	default:
+		mode = PORT_MODE_DISABLED;
+		break;
+	}
+
+	return mode;
+}
+EXPORT_SYMBOL(bgx_port_get_mode);
+
+int bgx_port_allocate_pknd(int node)
+{
+	struct global_resource_tag tag;
+	char buf[16];
+	int pknd;
+
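+	/* Build the resource tag "cvm_pknd_<node>" for the resource manager */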
+	strncpy((char *)&tag.lo, "cvm_pknd", 8);
+	snprintf(buf, 16, "_%d......", node);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_create_resource(tag, 64);
+	pknd = res_mgr_alloc(tag, -1, false);
+	if (pknd < 0) {
+		pr_err("bgx-port: Failed to allocate pknd\n");
+		return -ENODEV;
+	}
+
+	return pknd;
+}
+EXPORT_SYMBOL(bgx_port_allocate_pknd);
+
+int bgx_port_get_pknd(int node, int bgx, int index)
+{
+	return lmac_pknd[node][bgx][index];
+}
+EXPORT_SYMBOL(bgx_port_get_pknd);
+
+/* Work around erratum GSER-20075 (CN78XX pass 1.x, QLM lane 3) */
+static void bgx_port_gser_20075(struct bgx_port_priv *priv, int qlm, int lane)
+{
+	u64 addr, data;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+	    (lane == -1 || lane == 3)) {
+		/* Enable software control */
+		addr = GSER_BR_RX_CTL(priv->node, qlm, 3);
+		data = oct_csr_read(addr);
+		data |= GSER_BR_RX_CTL_RXT_SWM;
+		oct_csr_write(data, addr);
+
+		/* Clear the completion flag */
+		addr = GSER_BR_RX_EER(priv->node, qlm, 3);
+		data = oct_csr_read(addr);
+		data &= ~GSER_BR_RX_CTL_RXT_ESV;
+		oct_csr_write(data, addr);
+
+		/* Initiate a new request on lane 2 */
+		if (lane == 3) {
+			addr = GSER_BR_RX_EER(priv->node, qlm, 2);
+			data = oct_csr_read(addr);
+			data |= GSER_BR_RX_CTL_RXT_EER;
+			oct_csr_write(data, addr);
+		}
+	}
+}
+
+static void bgx_common_init_pknd(struct bgx_port_priv *priv)
+{
+	int num_ports;
+	u64 data;
+
+	/* Setup pkind */
+	priv->pknd = bgx_port_allocate_pknd(priv->node);
+	lmac_pknd[priv->node][priv->bgx][priv->index] = priv->pknd;
+	data = oct_csr_read(BGX_CMR_RX_ID_MAP(priv->node, priv->bgx,
+					      priv->index));
+	data &= ~BGX_CMR_RX_ID_MAP_PKND_MASK;
+	data |= priv->pknd;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		/* Change the default reassembly id (max allowed is 14) */
+		data &= ~BGX_CMR_RX_ID_MAP_RID_MASK;
+		data |= ((4 * priv->bgx) + 2 + priv->index) <<
+			BGX_CMR_RX_ID_MAP_RID_SHIFT;
+	}
+	oct_csr_write(data, BGX_CMR_RX_ID_MAP(priv->node, priv->bgx,
+					      priv->index));
+
+	/* Set backpressure channel mask AND/OR registers */
+	data = oct_csr_read(BGX_CMR_CHAN_MSK_AND(priv->node, priv->bgx));
+	data |= BGX_CMR_CHAN_MSK_MASK << (BGX_CMR_CHAN_MSK_SHIFT * priv->index);
+	oct_csr_write(data, BGX_CMR_CHAN_MSK_AND(priv->node, priv->bgx));
+
+	data = oct_csr_read(BGX_CMR_CHAN_MSK_OR(priv->node, priv->bgx));
+	data |= BGX_CMR_CHAN_MSK_MASK << (BGX_CMR_CHAN_MSK_SHIFT * priv->index);
+	oct_csr_write(data, BGX_CMR_CHAN_MSK_OR(priv->node, priv->bgx));
+
+	/* Rx backpressure watermark: set to 1/4 of the LMAC's share of the
+	 * RX FIFO, in multiples of 16 bytes.
+	 */
+	data = oct_csr_read(BGX_CMR_TX_LMACS(priv->node, priv->bgx));
+	num_ports = data & BGX_CMR_TX_LMACS_MASK;
+	data = BGX_RX_FIFO_SIZE / (num_ports * 4 * 16);
+	oct_csr_write(data, BGX_CMR_RX_BP_ON(priv->node, priv->bgx,
+					     priv->index));
+}
+
+static int bgx_xgmii_hardware_init(struct bgx_port_priv *priv)
+{
+	u64 ctl, clock_mhz, data;
+
+	/* Set TX Threshold */
+	data = BGX_GMP_GMI_TX_THRESH_DEFAULT;
+	oct_csr_write(data, BGX_GMP_GMI_TX_THRESH(priv->node, priv->bgx,
+						  priv->index));
+
+	data = oct_csr_read(BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+	data &= ~(BGX_GMP_PCS_MISC_CTL_MAC_PHY | BGX_GMP_PCS_MISC_CTL_MODE);
+	if (priv->mode_1000basex)
+		data |= BGX_GMP_PCS_MISC_CTL_MODE;
+	if (priv->bgx_as_phy)
+		data |= BGX_GMP_PCS_MISC_CTL_MAC_PHY;
+	oct_csr_write(data, BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+
+	data = oct_csr_read(BGX_GMP_PCS_LINK_TIMER(priv->node, priv->bgx,
+						   priv->index));
+	clock_mhz = octeon_get_io_clock_rate() / 1000000;
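+	/* Standard link timer values: 10 ms for 1000Base-X, 1.6 ms for SGMII */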
+	if (priv->mode_1000basex)
+		data = (10000ull * clock_mhz) >>
+			BGX_GMP_PCS_LINK_TIMER_COUNT_SHIFT;
+	else
+		data = (1600ull * clock_mhz) >>
+			BGX_GMP_PCS_LINK_TIMER_COUNT_SHIFT;
+	oct_csr_write(data, BGX_GMP_PCS_LINK_TIMER(priv->node, priv->bgx,
+						   priv->index));
+
+	if (priv->mode_1000basex) {
+		data = oct_csr_read(BGX_GMP_PCS_AN_ADV(priv->node, priv->bgx,
+						       priv->index));
+		data &= ~(BGX_GMP_PCS_AN_ADV_REM_FLT_MASK |
+			  BGX_GMP_PCS_AN_ADV_PAUSE_MASK);
+		data |= BGX_GMP_PCS_AN_ADV_PAUSE_BOTH <<
+			BGX_GMP_PCS_AN_ADV_PAUSE_SHIFT;
+		data |= BGX_GMP_PCS_AN_ADV_HALF_DUPLEX |
+			BGX_GMP_PCS_AN_ADV_FULL_DUPLEX;
+		oct_csr_write(data, BGX_GMP_PCS_AN_ADV(priv->node, priv->bgx,
+						       priv->index));
+	} else if (priv->bgx_as_phy) {
+		data = oct_csr_read(BGX_GMP_PCS_SGM_AN_ADV(priv->node,
+							   priv->bgx,
+							   priv->index));
+		data |= BGX_GMP_PCS_SGM_AN_ADV_DUPLEX_FULL;
+		data &= ~BGX_GMP_PCS_SGM_AN_ADV_SPEED_MASK;
+		data |= BGX_GMP_PCS_SGM_AN_ADV_SPEED_1000 <<
+			BGX_GMP_PCS_SGM_AN_ADV_SPEED_SHIFT;
+		oct_csr_write(data, BGX_GMP_PCS_SGM_AN_ADV(priv->node,
+							   priv->bgx,
+							   priv->index));
+	}
+
+	data = oct_csr_read(BGX_GMP_GMI_TX_APPEND(priv->node, priv->bgx,
+						  priv->index));
+	ctl = oct_csr_read(BGX_GMP_GMI_TX_SGMII_CTL(priv->node, priv->bgx,
+						    priv->index));
+	ctl &= ~BGX_GMP_GMI_TX_APPEND_PREAMBLE;
+	ctl |= (data & BGX_GMP_GMI_TX_APPEND_PREAMBLE) ? 0 : 1;
+	oct_csr_write(ctl, BGX_GMP_GMI_TX_SGMII_CTL(priv->node, priv->bgx,
+						    priv->index));
+
+	if (priv->mode == PORT_MODE_RGMII) {
+		/* Disable XCV interface when initialized */
+		data = oct_csr_read(XCV_RESET(priv->node));
+		data &= ~(XCV_RESET_ENABLE | XCV_RESET_TX_PKT_RST_N |
+			  XCV_RESET_RX_PKT_RST_N);
+		oct_csr_write(data, XCV_RESET(priv->node));
+	}
+
+	return 0;
+}
+
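+/* Return the TX FIFO size available to this LMAC. The BGX TX FIFO is
+ * split among the active LMACs.
+ */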
+int bgx_get_tx_fifo_size(struct bgx_port_priv *priv)
+{
+	int num_ports;
+	u64 data;
+
+	data = oct_csr_read(BGX_CMR_TX_LMACS(priv->node, priv->bgx));
+	num_ports = (data & BGX_CMR_TX_LMACS_MASK);
+
+	switch (num_ports) {
+	case 1:
+		return BGX_TX_FIFO_SIZE;
+	case 2:
+		return BGX_TX_FIFO_SIZE / 2;
+	case 3:
+	case 4:
+		return BGX_TX_FIFO_SIZE / 4;
+	default:
+		return 0;
+	}
+}
+
+static int bgx_xaui_hardware_init(struct bgx_port_priv *priv)
+{
+	u64 clock_mhz, data, tx_fifo_size;
+
+	if (octeon_is_simulation()) {
+		/* Enable the port */
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+		data |= BGX_CMR_CONFIG_ENABLE;
+		oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+	} else {
+		/* Reset the port */
+		data = oct_csr_read(BGX_SPU_CONTROL1(priv->node, priv->bgx,
+						     priv->index));
+		data |= BGX_SPU_CONTROL1_RESET;
+		oct_csr_write(data, BGX_SPU_CONTROL1(priv->node, priv->bgx,
+						     priv->index));
+
+		/* Wait for reset to complete */
+		udelay(1);
+		data = oct_csr_read(BGX_SPU_CONTROL1(priv->node, priv->bgx,
+						     priv->index));
+		if (data & BGX_SPU_CONTROL1_RESET) {
+			netdev_err(priv->netdev,
+				   "BGX%d:%d: SPU stuck in reset\n", priv->bgx,
+				   priv->node);
+			return -1;
+		}
+
+		/* Reset the SerDes lanes */
+		data = oct_csr_read(BGX_SPU_CONTROL1(priv->node, priv->bgx,
+						     priv->index));
+		data |= BGX_SPU_CONTROL1_LO_PWR;
+		oct_csr_write(data, BGX_SPU_CONTROL1(priv->node, priv->bgx,
+						     priv->index));
+
+		/* Disable packet reception */
+		data = oct_csr_read(BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+							 priv->index));
+		data |= BGX_SPU_MISC_CONTROL_RX_PACKET_DIS;
+		oct_csr_write(data, BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+							 priv->index));
+
+		/* Clear/disable interrupts */
+		data = oct_csr_read(BGX_SMU_RX_INT(priv->node, priv->bgx,
+						   priv->index));
+		oct_csr_write(data, BGX_SMU_RX_INT(priv->node, priv->bgx,
+						   priv->index));
+		data = oct_csr_read(BGX_SMU_TX_INT(priv->node, priv->bgx,
+						   priv->index));
+		oct_csr_write(data, BGX_SMU_TX_INT(priv->node, priv->bgx,
+						   priv->index));
+		data = oct_csr_read(BGX_SPU_INT(priv->node, priv->bgx,
+						priv->index));
+		oct_csr_write(data, BGX_SPU_INT(priv->node, priv->bgx,
+						priv->index));
+
+		if ((priv->mode == PORT_MODE_10G_KR ||
+		     priv->mode == PORT_MODE_40G_KR4) &&
+		    !OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+			oct_csr_write(0, BGX_SPU_BR_PMD_LP_CUP(priv->node,
+							       priv->bgx,
+							       priv->index));
+			oct_csr_write(0, BGX_SPU_BR_PMD_LD_CUP(priv->node,
+							       priv->bgx,
+							       priv->index));
+			oct_csr_write(0, BGX_SPU_BR_PMD_LD_REP(priv->node,
+							       priv->bgx,
+							       priv->index));
+			data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node,
+								   priv->bgx,
+								   priv->index));
+			data |= BGX_SPU_BR_PMD_CONTROL_TRAIN_EN;
+			oct_csr_write(data, BGX_SPU_BR_PMD_CONTROL(priv->node,
+								   priv->bgx,
+								   priv->index));
+		}
+	}
+
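+	/* Append FCS on transmitted control (PAUSE) frames */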
+	data = oct_csr_read(BGX_SMU_TX_APPEND(priv->node, priv->bgx,
+					      priv->index));
+	data |= BGX_SMU_TX_APPEND_FCS_C;
+	oct_csr_write(data, BGX_SMU_TX_APPEND(priv->node, priv->bgx,
+					      priv->index));
+
+	if (!octeon_is_simulation()) {
+		/* Disable FEC */
+		data = oct_csr_read(BGX_SPU_FEC_CONTROL(priv->node, priv->bgx,
+							priv->index));
+		data &= ~BGX_SPU_FEC_CONTROL_FEC_EN;
+		oct_csr_write(data, BGX_SPU_FEC_CONTROL(priv->node, priv->bgx,
+							priv->index));
+
+		/* Disable/configure auto negotiation */
+		data = oct_csr_read(BGX_SPU_AN_CONTROL(priv->node, priv->bgx,
+						       priv->index));
+		data &= ~(BGX_SPU_AN_CONTROL_XNP_EN | BGX_SPU_AN_CONTROL_AN_EN);
+		oct_csr_write(data, BGX_SPU_AN_CONTROL(priv->node, priv->bgx,
+						       priv->index));
+
+		data = oct_csr_read(BGX_SPU_AN_ADV(priv->node, priv->bgx,
+						   priv->index));
+		data &= ~(BGX_SPU_AN_ADV_FEC_REQ | BGX_SPU_AN_ADV_A100G_CR10 |
+			  BGX_SPU_AN_ADV_A40G_CR4 | BGX_SPU_AN_ADV_A10G_KX4 |
+			  BGX_SPU_AN_ADV_A1G_KX | BGX_SPU_AN_ADV_RF |
+			  BGX_SPU_AN_ADV_XNP_ABLE);
+		data |= BGX_SPU_AN_ADV_FEC_REQ;
+		if (priv->mode == PORT_MODE_40G_KR4)
+			data |= BGX_SPU_AN_ADV_A40G_KR4;
+		else
+			data &= ~BGX_SPU_AN_ADV_A40G_KR4;
+		if (priv->mode == PORT_MODE_10G_KR)
+			data |= BGX_SPU_AN_ADV_A10G_KR;
+		else
+			data &= ~BGX_SPU_AN_ADV_A10G_KR;
+		oct_csr_write(data, BGX_SPU_AN_ADV(priv->node, priv->bgx,
+						   priv->index));
+
+		data = oct_csr_read(BGX_SPU_DBG_CONTROL(priv->node, priv->bgx));
+		data |= BGX_SPU_DBG_CONTROL_AN_NONCE_MATCH_DIS;
+		if (priv->mode == PORT_MODE_10G_KR ||
+		    priv->mode == PORT_MODE_40G_KR4)
+			data |= BGX_SPU_DBG_CONTROL_AN_ARB_LINK_CHK_EN;
+		else
+			data &= ~BGX_SPU_DBG_CONTROL_AN_ARB_LINK_CHK_EN;
+		oct_csr_write(data, BGX_SPU_DBG_CONTROL(priv->node, priv->bgx));
+
+		/* Enable the port */
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+		data |= BGX_CMR_CONFIG_ENABLE;
+		oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && priv->index) {
+			/* BGX-22429 */
+			data = oct_csr_read(BGX_CMR_CONFIG(priv->node,
+							   priv->bgx, 0));
+			data |= BGX_CMR_CONFIG_ENABLE;
+			oct_csr_write(data, BGX_CMR_CONFIG(priv->node,
+							   priv->bgx, 0));
+		}
+	}
+
+	data = oct_csr_read(BGX_SPU_CONTROL1(priv->node, priv->bgx,
+					     priv->index));
+	data &= ~BGX_SPU_CONTROL1_LO_PWR;
+	oct_csr_write(data, BGX_SPU_CONTROL1(priv->node, priv->bgx,
+					     priv->index));
+
+	data = oct_csr_read(BGX_SMU_TX_CTL(priv->node, priv->bgx, priv->index));
+	data |= BGX_SMU_TX_CTL_DIC_EN;
+	data &= ~BGX_SMU_TX_CTL_UNI_EN;
+	oct_csr_write(data, BGX_SMU_TX_CTL(priv->node, priv->bgx, priv->index));
+
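+	/* Program the I/O clock cycles per microsecond (minus one) */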
+	clock_mhz = octeon_get_io_clock_rate() / 1000000;
+	data = oct_csr_read(BGX_SPU_DBG_CONTROL(priv->node, priv->bgx));
+	data &= ~BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_MASK;
+	data |= (clock_mhz - 1) << BGX_SPU_DBG_CONTROL_US_CLK_PERIOD_SHIFT;
+	oct_csr_write(data, BGX_SPU_DBG_CONTROL(priv->node, priv->bgx));
+
+	/* Fifo in 16-byte words */
+	tx_fifo_size = bgx_get_tx_fifo_size(priv);
+	tx_fifo_size >>= 4;
+	oct_csr_write(tx_fifo_size - 10, BGX_SMU_TX_THRESH(priv->node,
+							   priv->bgx,
+							   priv->index));
+
+	if (priv->mode == PORT_MODE_RXAUI && priv->phy_np) {
+		data = oct_csr_read(BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+							 priv->index));
+		data |= BGX_SPU_MISC_CONTROL_INTLV_RDISP;
+		oct_csr_write(data, BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+							 priv->index));
+	}
+
+	/* Some PHYs take up to 250ms to stabilize */
+	if (!octeon_is_simulation())
+		usleep_range(250000, 300000);
+
+	return 0;
+}
+
+/* Configure/initialize a BGX port */
+static int bgx_port_init(struct bgx_port_priv *priv)
+{
+	int rc = 0;
+	u64 data;
+
+	/* GSER-20956 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+	    (priv->mode == PORT_MODE_10G_KR ||
+	     priv->mode == PORT_MODE_XFI ||
+	     priv->mode == PORT_MODE_40G_KR4 ||
+	     priv->mode == PORT_MODE_XLAUI)) {
+		/* Disable link training */
+		data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+
+		data &= ~BGX_SPU_BR_PMD_CONTROL_TRAIN_EN;
+		oct_csr_write(data, BGX_SPU_BR_PMD_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+	}
+
+	bgx_common_init_pknd(priv);
+
+	if (priv->mode == PORT_MODE_SGMII ||
+	    priv->mode == PORT_MODE_RGMII)
+		rc = bgx_xgmii_hardware_init(priv);
+	else
+		rc = bgx_xaui_hardware_init(priv);
+
+	return rc;
+}
+
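+/* Return the SerDes lane rate in Mbaud for the QLM's configured lane mode */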
+static int bgx_port_get_qlm_speed(struct bgx_port_priv *priv, int qlm)
+{
+	enum lane_mode lmode;
+	u64 data;
+
+	data = oct_csr_read(GSER_LANE_MODE(priv->node, qlm));
+	lmode = data & GSER_LANE_MODE_LMODE_MASK;
+
+	switch (lmode) {
+	case R_125G_REFCLK15625_KX:
+	case R_125G_REFCLK15625_SGMII:
+		return 1250;
+	case R_25G_REFCLK100:
+	case R_25G_REFCLK125:
+		return 2500;
+	case R_3125G_REFCLK15625_XAUI:
+		return 3125;
+	case R_5G_REFCLK100:
+	case R_5G_REFCLK15625_QSGMII:
+	case R_5G_REFCLK125:
+		return 5000;
+	case R_8G_REFCLK100:
+	case R_8G_REFCLK125:
+		return 8000;
+	case R_625G_REFCLK15625_RXAUI:
+		return 6250;
+	case R_103125G_REFCLK15625_KR:
+		return 10312;
+	default:
+		return 0;
+	}
+}
+
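+/* Assume the SGMII link is up at 1 Gb/s full duplex */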
+static struct port_status bgx_port_get_sgmii_link(struct bgx_port_priv *priv)
+{
+	struct port_status status;
+
+	status.link = 1;
+	status.duplex = DUPLEX_FULL;
+	status.speed = 1000;
+
+	return status;
+}
+
+static int bgx_port_xgmii_set_link_up(struct bgx_port_priv *priv)
+{
+	int timeout;
+	u64 data;
+
+	if (!octeon_is_simulation()) {
+		/* PCS reset sequence */
+		data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+		data |= BGX_GMP_PCS_MR_CONTROL_RESET;
+		oct_csr_write(data, BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+
+		/* Wait for reset to complete */
+		udelay(1);
+		data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+		if (data & BGX_GMP_PCS_MR_CONTROL_RESET) {
+			netdev_err(priv->netdev,
+				   "BGX%d:%d: PCS stuck in reset\n", priv->bgx,
+				   priv->node);
+			return -1;
+		}
+	}
+
+	/* Autonegotiation */
+	if (priv->phy_np) {
+		data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+		data |= BGX_GMP_PCS_MR_CONTROL_RST_AN;
+		if (priv->mode != PORT_MODE_RGMII)
+			data |= BGX_GMP_PCS_MR_CONTROL_AN_EN;
+		else
+			data &= ~BGX_GMP_PCS_MR_CONTROL_AN_EN;
+		data &= ~BGX_GMP_PCS_MR_CONTROL_PWR_DN;
+		oct_csr_write(data, BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+	} else {
+		data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+		data |= BGX_GMP_PCS_MR_CONTROL_SPDMSB;
+		data &= ~(BGX_GMP_PCS_MR_CONTROL_SPDLSB |
+			  BGX_GMP_PCS_MR_CONTROL_AN_EN |
+			  BGX_GMP_PCS_MR_CONTROL_PWR_DN);
+		oct_csr_write(data, BGX_GMP_PCS_MR_CONTROL(priv->node,
+							   priv->bgx,
+							   priv->index));
+	}
+
+	data = oct_csr_read(BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+	data &= ~(BGX_GMP_PCS_MISC_CTL_MAC_PHY | BGX_GMP_PCS_MISC_CTL_MODE);
+	if (priv->mode_1000basex)
+		data |= BGX_GMP_PCS_MISC_CTL_MODE;
+	if (priv->bgx_as_phy)
+		data |= BGX_GMP_PCS_MISC_CTL_MAC_PHY;
+	oct_csr_write(data, BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+
+	/* Wait for autonegotiation to complete */
+	if (!octeon_is_simulation() && !priv->bgx_as_phy &&
+	    priv->mode != PORT_MODE_RGMII) {
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_GMP_PCS_MR_STATUS(priv->node,
+								  priv->bgx,
+								  priv->index));
+			if (data & BGX_GMP_PCS_MR_STATUS_AN_CPT)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			netdev_err(priv->netdev, "BGX%d:%d: AN timeout\n",
+				   priv->bgx, priv->node);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static void bgx_port_rgmii_set_link_down(struct bgx_port_priv *priv)
+{
+	int rx_fifo_len;
+	u64 data;
+
+	data = oct_csr_read(XCV_RESET(priv->node));
+	data &= ~XCV_RESET_RX_PKT_RST_N;
+	oct_csr_write(data, XCV_RESET(priv->node));
+	/* Is this read really needed? TODO */
+	data = oct_csr_read(XCV_RESET(priv->node));
+
+	/* Wait for 2 MTUs */
+	mdelay(10);
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data &= ~BGX_CMR_CONFIG_DATA_PKT_RX_EN;
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	/* Wait for the rx and tx fifos to drain */
+	do {
+		data = oct_csr_read(BGX_CMR_RX_FIFO_LEN(priv->node, priv->bgx,
+							priv->index));
+		rx_fifo_len = data & BGX_CMR_RX_FIFO_LEN_MASK;
+		data = oct_csr_read(BGX_CMR_TX_FIFO_LEN(priv->node, priv->bgx,
+							priv->index));
+	} while (rx_fifo_len > 0 || !(data & BGX_CMR_TX_FIFO_LEN_LMAC_IDLE));
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data &= ~BGX_CMR_CONFIG_DATA_PKT_TX_EN;
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	data = oct_csr_read(XCV_RESET(priv->node));
+	data &= ~XCV_RESET_TX_PKT_RST_N;
+	oct_csr_write(data, XCV_RESET(priv->node));
+
+	data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	data |= BGX_GMP_PCS_MR_CONTROL_PWR_DN;
+	oct_csr_write(data, BGX_GMP_PCS_MR_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+}
+
+static void bgx_port_sgmii_set_link_down(struct bgx_port_priv *priv)
+{
+	u64 data;
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data &= ~(BGX_CMR_CONFIG_DATA_PKT_RX_EN |
+		  BGX_CMR_CONFIG_DATA_PKT_TX_EN);
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	data = oct_csr_read(BGX_GMP_PCS_MR_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	data &= ~BGX_GMP_PCS_MR_CONTROL_AN_EN;
+	oct_csr_write(data, BGX_GMP_PCS_MR_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+
+	data = oct_csr_read(BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+	data |= BGX_GMP_PCS_MISC_CTL_GMXENO;
+	oct_csr_write(data, BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+	data = oct_csr_read(BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						 priv->index));
+}
+
+static int bgx_port_sgmii_set_link_speed(struct bgx_port_priv *priv,
+					 struct port_status status)
+{
+	u64 data, miscx, prtx;
+	int timeout;
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data &= ~(BGX_CMR_CONFIG_DATA_PKT_RX_EN |
+		  BGX_CMR_CONFIG_DATA_PKT_TX_EN);
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	timeout = 10000;
+	do {
+		prtx = oct_csr_read(BGX_GMP_GMI_PRT_CFG(priv->node, priv->bgx,
+							priv->index));
+		if ((prtx & BGX_GMP_GMI_PRT_CFG_TX_IDLE) &&
+		    (prtx & BGX_GMP_GMI_PRT_CFG_RX_IDLE))
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout) {
+		netdev_err(priv->netdev, "BGX%d:%d: GMP idle timeout\n",
+			   priv->bgx, priv->node);
+		return -1;
+	}
+
+	prtx = oct_csr_read(BGX_GMP_GMI_PRT_CFG(priv->node, priv->bgx,
+						priv->index));
+	miscx = oct_csr_read(BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						  priv->index));
+	if (status.link) {
+		miscx &= ~BGX_GMP_PCS_MISC_CTL_GMXENO;
+		if (status.duplex == DUPLEX_FULL)
+			prtx |= BGX_GMP_GMI_PRT_CFG_DUPLEX;
+		else
+			prtx &= ~BGX_GMP_GMI_PRT_CFG_DUPLEX;
+	} else {
+		miscx |= BGX_GMP_PCS_MISC_CTL_GMXENO;
+	}
+
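+	/* Apply speed, slot time, sampling point, and TX burst settings */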
+	switch (status.speed) {
+	case 10:
+		prtx &= ~(BGX_GMP_GMI_PRT_CFG_SLOTTIME |
+			  BGX_GMP_GMI_PRT_CFG_SPEED);
+		prtx |= BGX_GMP_GMI_PRT_CFG_SPEED_MSB;
+		miscx &= ~BGX_GMP_PCS_MISC_CTL_SAMP_PT_MASK;
+		miscx |= 25;
+		oct_csr_write(64, BGX_GMP_GMI_TX_SLOT(priv->node, priv->bgx,
+						      priv->index));
+		oct_csr_write(0, BGX_GMP_GMI_TX_BURST(priv->node, priv->bgx,
+						      priv->index));
+		break;
+	case 100:
+		prtx &= ~(BGX_GMP_GMI_PRT_CFG_SPEED_MSB |
+			  BGX_GMP_GMI_PRT_CFG_SLOTTIME |
+			  BGX_GMP_GMI_PRT_CFG_SPEED);
+		miscx &= ~BGX_GMP_PCS_MISC_CTL_SAMP_PT_MASK;
+		miscx |= 5;
+		oct_csr_write(64, BGX_GMP_GMI_TX_SLOT(priv->node, priv->bgx,
+						      priv->index));
+		oct_csr_write(0, BGX_GMP_GMI_TX_BURST(priv->node, priv->bgx,
+						      priv->index));
+		break;
+	case 1000:
+		prtx |= (BGX_GMP_GMI_PRT_CFG_SLOTTIME |
+			 BGX_GMP_GMI_PRT_CFG_SPEED);
+		prtx &= ~BGX_GMP_GMI_PRT_CFG_SPEED_MSB;
+		miscx &= ~BGX_GMP_PCS_MISC_CTL_SAMP_PT_MASK;
+		miscx |= 1;
+		oct_csr_write(512, BGX_GMP_GMI_TX_SLOT(priv->node, priv->bgx,
+						       priv->index));
+		if (status.duplex == DUPLEX_FULL)
+			oct_csr_write(0, BGX_GMP_GMI_TX_BURST(priv->node,
+							      priv->bgx,
+							      priv->index));
+		else
+			oct_csr_write(8192, BGX_GMP_GMI_TX_BURST(priv->node,
+								 priv->bgx,
+								 priv->index));
+		break;
+	default:
+		break;
+	}
+
+	oct_csr_write(miscx, BGX_GMP_PCS_MISC_CTL(priv->node, priv->bgx,
+						  priv->index));
+	oct_csr_write(prtx, BGX_GMP_GMI_PRT_CFG(priv->node, priv->bgx,
+						priv->index));
+	/* This read verifies the write completed */
+	prtx = oct_csr_read(BGX_GMP_GMI_PRT_CFG(priv->node, priv->bgx,
+						priv->index));
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data |= BGX_CMR_CONFIG_DATA_PKT_RX_EN | BGX_CMR_CONFIG_DATA_PKT_TX_EN;
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	return 0;
+}
+
+static int bgx_port_rgmii_set_link_speed(struct bgx_port_priv *priv,
+					 struct port_status status)
+{
+	bool do_credits, int_lpbk = false, speed_changed = false;
+	int speed;
+	u64 data;
+
+	switch (status.speed) {
+	case 10:
+		speed = 0;
+		break;
+	case 100:
+		speed = 1;
+		break;
+	case 1000:
+	default:
+		speed = 2;
+		break;
+	}
+
+	/* Do credits if link came up */
+	data = oct_csr_read(XCV_RESET(priv->node));
+	do_credits = status.link && !(data & XCV_RESET_ENABLE);
+
+	/* Was there a speed change */
+	data = oct_csr_read(XCV_CTL(priv->node));
+	if ((data & XCV_CTL_SPEED_MASK) != speed)
+		speed_changed = true;
+
+	/* Clear clkrst when in internal loopback */
+	if (data & XCV_CTL_LPBK_INT) {
+		int_lpbk = true;
+		data = oct_csr_read(XCV_RESET(priv->node));
+		data &= ~XCV_RESET_CLKRST;
+		oct_csr_write(data, XCV_RESET(priv->node));
+	}
+
+	/* Link came up or there was a speed change */
+	data = oct_csr_read(XCV_RESET(priv->node));
+	if (status.link && (!(data & XCV_RESET_ENABLE) || speed_changed)) {
+		data |= XCV_RESET_ENABLE;
+		oct_csr_write(data, XCV_RESET(priv->node));
+
+		data = oct_csr_read(XCV_CTL(priv->node));
+		data &= ~XCV_CTL_SPEED_MASK;
+		data |= speed;
+		oct_csr_write(data, XCV_CTL(priv->node));
+
+		data = oct_csr_read(XCV_DLL_CTL(priv->node));
+		data |= XCV_DLL_CTL_CLKRX_BYP;
+		data &= ~XCV_DLL_CTL_CLKRX_SET_MASK;
+		data &= ~XCV_DLL_CTL_CLKTX_BYP;
+		oct_csr_write(data, XCV_DLL_CTL(priv->node));
+
+		data = oct_csr_read(XCV_DLL_CTL(priv->node));
+		data &= ~XCV_DLL_CTL_REFCLK_SEL_MASK;
+		oct_csr_write(data, XCV_DLL_CTL(priv->node));
+
+		data = oct_csr_read(XCV_RESET(priv->node));
+		data &= ~XCV_RESET_DLLRST;
+		oct_csr_write(data, XCV_RESET(priv->node));
+
+		usleep_range(10, 100);
+
+		data = oct_csr_read(XCV_COMP_CTL(priv->node));
+		data &= ~XCV_COMP_CTL_DRV_BYP;
+		oct_csr_write(data, XCV_COMP_CTL(priv->node));
+
+		data = oct_csr_read(XCV_RESET(priv->node));
+		data |= XCV_RESET_COMP;
+		oct_csr_write(data, XCV_RESET(priv->node));
+
+		data = oct_csr_read(XCV_RESET(priv->node));
+		if (int_lpbk)
+			data &= ~XCV_RESET_CLKRST;
+		else
+			data |= XCV_RESET_CLKRST;
+		oct_csr_write(data, XCV_RESET(priv->node));
+
+		data = oct_csr_read(XCV_RESET(priv->node));
+		data |= XCV_RESET_TX_DAT_RST_N | XCV_RESET_RX_DAT_RST_N;
+		oct_csr_write(data, XCV_RESET(priv->node));
+	}
+
+	data = oct_csr_read(XCV_RESET(priv->node));
+	if (status.link)
+		data |= XCV_RESET_TX_PKT_RST_N | XCV_RESET_RX_PKT_RST_N;
+	else
+		data &= ~(XCV_RESET_TX_PKT_RST_N | XCV_RESET_RX_PKT_RST_N);
+	oct_csr_write(data, XCV_RESET(priv->node));
+
+	if (!status.link) {
+		mdelay(10);
+		oct_csr_write(0, XCV_RESET(priv->node));
+	}
+
+	/* Grant pko tx credits */
+	if (do_credits) {
+		data = oct_csr_read(XCV_BATCH_CRD_RET(priv->node));
+		data |= XCV_BATCH_CRD_RET_CRD_RET;
+		oct_csr_write(data, XCV_BATCH_CRD_RET(priv->node));
+	}
+
+	return 0;
+}
+
+static int bgx_port_set_xgmii_link(struct bgx_port_priv *priv,
+				   struct port_status status)
+{
+	int rc = 0;
+	u64 data;
+
+	if (status.link) {
+		/* Link up */
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+		data |= BGX_CMR_CONFIG_ENABLE;
+		oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+
+		/* BGX-22429 */
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && priv->index) {
+			data = oct_csr_read(BGX_CMR_CONFIG(priv->node,
+							   priv->bgx, 0));
+			data |= BGX_CMR_CONFIG_ENABLE;
+			oct_csr_write(data, BGX_CMR_CONFIG(priv->node,
+							   priv->bgx, 0));
+		}
+
+		rc = bgx_port_xgmii_set_link_up(priv);
+		if (rc)
+			return rc;
+		rc = bgx_port_sgmii_set_link_speed(priv, status);
+		if (rc)
+			return rc;
+		if (priv->mode == PORT_MODE_RGMII)
+			rc = bgx_port_rgmii_set_link_speed(priv, status);
+	} else {
+		/* Link down */
+		if (priv->mode == PORT_MODE_RGMII) {
+			bgx_port_rgmii_set_link_down(priv);
+			rc = bgx_port_sgmii_set_link_speed(priv, status);
+			if (rc)
+				return rc;
+			rc = bgx_port_rgmii_set_link_speed(priv, status);
+		} else {
+			bgx_port_sgmii_set_link_down(priv);
+		}
+	}
+
+	return rc;
+}
+
+static struct port_status bgx_port_get_xaui_link(struct bgx_port_priv *priv)
+{
+	struct port_status status;
+	int lanes, speed;
+	u64 data;
+
+	status.link = 0;
+	status.duplex = DUPLEX_HALF;
+	status.speed = 0;
+
+	/* Get the link state */
+	data = oct_csr_read(BGX_SMU_TX_CTL(priv->node, priv->bgx, priv->index));
+	data &= BGX_SMU_TX_CTL_LS_MASK;
+	if (!data) {
+		data = oct_csr_read(BGX_SMU_RX_CTL(priv->node, priv->bgx,
+						   priv->index));
+		data &= BGX_SMU_RX_CTL_STATUS_MASK;
+		if (!data) {
+			data = oct_csr_read(BGX_SPU_STATUS1(priv->node,
+							    priv->bgx,
+							    priv->index));
+			if (data & BGX_SPU_STATUS1_RCV_LINK)
+				status.link = 1;
+		}
+	}
+
+	if (status.link) {
+		/* Always full duplex */
+		status.duplex = DUPLEX_FULL;
+
+		/* Speed */
+		speed = bgx_port_get_qlm_speed(priv, priv->qlm);
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
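+		/* Derive data rate from lane rate: 8b/10b modes scale by
+		 * 8/10, 64b/66b modes by 64/66.
+		 */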
+		switch ((data & BGX_CMR_CONFIG_LMAC_TYPE_MASK) >>
+			BGX_CMR_CONFIG_LMAC_TYPE_SHIFT) {
+		default:
+		case 1:
+			speed = (speed * 8 + 5) / 10;
+			lanes = 4;
+			break;
+		case 2:
+			speed = (speed * 8 + 5) / 10;
+			lanes = 2;
+			break;
+		case 3:
+			speed = (speed * 64 + 33) / 66;
+			lanes = 1;
+			break;
+		case 4:
+			if (speed == 6250)
+				speed = 6445;
+			speed = (speed * 64 + 33) / 66;
+			lanes = 4;
+			break;
+		}
+
+		speed *= lanes;
+		status.speed = speed;
+	}
+
+	return status;
+}
+
+static int bgx_port_init_xaui_an(struct bgx_port_priv *priv)
+{
+	u64 data;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+		data = oct_csr_read(BGX_SPU_INT(priv->node, priv->bgx,
+						priv->index));
+		/* AN has not reached link-good, so restart it */
+		if (!(data & BGX_SPU_INT_AN_LINK_GOOD)) {
+			data = BGX_SPU_INT_AN_COMPLETE |
+			       BGX_SPU_INT_AN_LINK_GOOD |
+			       BGX_SPU_INT_AN_PAGE_RX;
+			oct_csr_write(data, BGX_SPU_INT(priv->node, priv->bgx,
+							priv->index));
+
+			data = oct_csr_read(BGX_SPU_AN_CONTROL(priv->node,
+							       priv->bgx,
+							       priv->index));
+			data |= BGX_SPU_AN_CONTROL_AN_RESTART;
+			oct_csr_write(data, BGX_SPU_AN_CONTROL(priv->node,
+							       priv->bgx,
+							       priv->index));
+			return -1;
+		}
+	} else {
+		data = oct_csr_read(BGX_SPU_AN_STATUS(priv->node, priv->bgx,
+						      priv->index));
+		/* If autonegotiation hasn't completed */
+		if (!(data & BGX_SPU_AN_STATUS_AN_COMPLETE)) {
+			data = oct_csr_read(BGX_SPU_AN_CONTROL(priv->node,
+							       priv->bgx,
+							       priv->index));
+			data |= BGX_SPU_AN_CONTROL_AN_RESTART;
+			oct_csr_write(data, BGX_SPU_AN_CONTROL(priv->node,
+							       priv->bgx,
+							       priv->index));
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static void bgx_port_xaui_start_training(struct bgx_port_priv *priv)
+{
+	u64 data;
+
+	data = BGX_SPU_INT_TRAINING_FAILURE | BGX_SPU_INT_TRAINING_DONE;
+	oct_csr_write(data, BGX_SPU_INT(priv->node, priv->bgx, priv->index));
+
+	/* BGX-20968 */
+	oct_csr_write(0, BGX_SPU_BR_PMD_LP_CUP(priv->node, priv->bgx,
+					       priv->index));
+	oct_csr_write(0, BGX_SPU_BR_PMD_LD_CUP(priv->node, priv->bgx,
+					       priv->index));
+	oct_csr_write(0, BGX_SPU_BR_PMD_LD_REP(priv->node, priv->bgx,
+					       priv->index));
+	data = oct_csr_read(BGX_SPU_AN_CONTROL(priv->node, priv->bgx,
+					       priv->index));
+	data &= ~BGX_SPU_AN_CONTROL_AN_EN;
+	oct_csr_write(data, BGX_SPU_AN_CONTROL(priv->node, priv->bgx,
+					       priv->index));
+	udelay(1);
+
+	data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	data |= BGX_SPU_BR_PMD_CONTROL_TRAIN_EN;
+	oct_csr_write(data, BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	udelay(1);
+
+	data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	data |= BGX_SPU_BR_PMD_CONTROL_TRAIN_RESTART;
+	oct_csr_write(data, BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+}
+
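+/* Work around erratum GSER-27882: once the lane reports CDR lock, force a
+ * TX coefficient update request through the lane PCS control interface.
+ */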
+static int bgx_port_gser_27882(struct bgx_port_priv *priv)
+{
+	u64 addr, data;
+	int timeout;
+
+	timeout = 200;
+	do {
+		data = oct_csr_read(GSER_RX_EIE_DETSTS(priv->node, priv->qlm));
+		if (data &
+		    (1 << (priv->index + GSER_RX_EIE_DETSTS_CDRLCK_SHIFT)))
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout)
+		return -1;
+
+	addr = GSER_LANE_PCS_CTLIFC_0(priv->node, priv->qlm, priv->index);
+	data = oct_csr_read(addr);
+	data |= GSER_LANE_PCS_CTLIFC_0_CFG_TX_COEFF_REQ_OVRRD_VAL;
+	oct_csr_write(data, addr);
+
+	addr = GSER_LANE_PCS_CTLIFC_2(priv->node, priv->qlm, priv->index);
+	data = oct_csr_read(addr);
+	data |= GSER_LANE_PCS_CTLIFC_2_CFG_TX_COEFF_REQ_OVRRD_EN;
+	oct_csr_write(data, addr);
+
+	data = oct_csr_read(addr);
+	data |= GSER_LANE_PCS_CTLIFC_2_CTLIFC_OVRRD_REQ;
+	oct_csr_write(data, addr);
+
+	data = oct_csr_read(addr);
+	data &= ~GSER_LANE_PCS_CTLIFC_2_CFG_TX_COEFF_REQ_OVRRD_EN;
+	oct_csr_write(data, addr);
+
+	data = oct_csr_read(addr);
+	data |= GSER_LANE_PCS_CTLIFC_2_CTLIFC_OVRRD_REQ;
+	oct_csr_write(data, addr);
+
+	return 0;
+}
+
+static void bgx_port_xaui_restart_training(struct bgx_port_priv *priv)
+{
+	u64 data;
+
+	data = BGX_SPU_INT_TRAINING_FAILURE | BGX_SPU_INT_TRAINING_DONE;
+	oct_csr_write(data, BGX_SPU_INT(priv->node, priv->bgx, priv->index));
+	usleep_range(1700, 2000);
+
+	/* BGX-20968 */
+	oct_csr_write(0, BGX_SPU_BR_PMD_LP_CUP(priv->node, priv->bgx,
+					       priv->index));
+	oct_csr_write(0, BGX_SPU_BR_PMD_LD_CUP(priv->node, priv->bgx,
+					       priv->index));
+	oct_csr_write(0, BGX_SPU_BR_PMD_LD_REP(priv->node, priv->bgx,
+					       priv->index));
+
+	/* Restart training */
+	data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+	data |= BGX_SPU_BR_PMD_CONTROL_TRAIN_RESTART;
+	oct_csr_write(data, BGX_SPU_BR_PMD_CONTROL(priv->node, priv->bgx,
+						   priv->index));
+}
+
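+/* Return the number of SerDes lanes in the QLM: two on 73xx QLM4 and
+ * above and on all CNF75XX QLMs, otherwise four.
+ */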
+static int bgx_port_get_max_qlm_lanes(int qlm)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return (qlm < 4) ? 4 : 2;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 2;
+	return 4;
+}
+
+static int bgx_port_qlm_rx_equalization(struct bgx_port_priv *priv, int qlm,
+					int lane)
+{
+	int i, lane_mask, max_lanes, timeout, rc = 0;
+	u64 addr, data, lmode;
+
+	max_lanes = bgx_port_get_max_qlm_lanes(qlm);
+	lane_mask = lane == -1 ? ((1 << max_lanes) - 1) : (1 << lane);
+
+	/* Nothing to do for qlms in reset */
+	data = oct_csr_read(GSER_PHY_CTL(priv->node, qlm));
+	if (data & (GSER_PHY_CTL_PHY_RESET | GSER_PHY_CTL_PHY_PD))
+		return -1;
+
+	for (i = 0; i < max_lanes; i++) {
+		if (!(lane_mask & (1 << i)))
+			continue;
+
+		addr = GSER_LANE_LBERT_CFG(priv->node, qlm, i);
+		data = oct_csr_read(addr);
+		/* Rx equalization can't be completed while the pattern
+		 * matcher is enabled because it causes errors.
+		 */
+		if (data & GSER_LANE_LBERT_CFG_LBERT_PM_EN)
+			return -1;
+	}
+
+	lmode = oct_csr_read(GSER_LANE_MODE(priv->node, qlm));
+	lmode &= GSER_LANE_MODE_LMODE_MASK;
+	addr = GSER_LANE_P_MODE_1(priv->node, qlm, lmode);
+	data = oct_csr_read(addr);
+	/* Don't complete rx equalization if in VMA manual mode */
+	if (data & GSER_LANE_P_MODE_1_VMA_MM)
+		return 0;
+
+	/* Apply rx equalization only for speeds of 6250 Mbaud and up */
+	if (bgx_port_get_qlm_speed(priv, qlm) < 6250)
+		return 0;
+
+	/* Wait until rx data is valid (CDRLOCK) */
+	timeout = 500;
+	addr = GSER_RX_EIE_DETSTS(priv->node, qlm);
+	do {
+		data = oct_csr_read(addr);
+		data >>= GSER_RX_EIE_DETSTS_CDRLCK_SHIFT;
+		data &= lane_mask;
+		if (data == lane_mask)
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout) {
+		pr_debug("QLM%d:%d: CDRLOCK timeout\n", qlm, priv->node);
+		return -1;
+	}
+
+	bgx_port_gser_20075(priv, qlm, lane);
+
+	for (i = 0; i < max_lanes; i++) {
+		if (!(lane_mask & (1 << i)))
+			continue;
+		/* Skip lane 3 on 78p1.x due to gser-20075. Handled above */
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && i == 3)
+			continue;
+
+		/* Enable software control */
+		addr = GSER_BR_RX_CTL(priv->node, qlm, i);
+		data = oct_csr_read(addr);
+		data |= GSER_BR_RX_CTL_RXT_SWM;
+		oct_csr_write(data, addr);
+
+		/* Clear the completion flag */
+		addr = GSER_BR_RX_EER(priv->node, qlm, i);
+		data = oct_csr_read(addr);
+		data &= ~GSER_BR_RX_EER_RXT_ESV;
+		data |= GSER_BR_RX_EER_RXT_EER;
+		oct_csr_write(data, addr);
+	}
+
+	/* Wait for rx equalization to complete */
+	for (i = 0; i < max_lanes; i++) {
+		if (!(lane_mask & (1 << i)))
+			continue;
+
+		timeout = 250000;
+		addr = GSER_BR_RX_EER(priv->node, qlm, i);
+		do {
+			data = oct_csr_read(addr);
+			if (data & GSER_BR_RX_EER_RXT_ESV)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("QLM%d:%d: RXT_ESV timeout\n",
+				 qlm, priv->node);
+			rc = -1;
+		}
+
+		/* Switch back to hardware control */
+		addr = GSER_BR_RX_CTL(priv->node, qlm, i);
+		data = oct_csr_read(addr);
+		data &= ~GSER_BR_RX_CTL_RXT_SWM;
+		oct_csr_write(data, addr);
+	}
+
+	return rc;
+}
+
+static int bgx_port_xaui_equalization(struct bgx_port_priv *priv)
+{
+	u64 data;
+	int lane;
+
+	/* Nothing to do for loopback mode */
+	data = oct_csr_read(BGX_SPU_CONTROL1(priv->node, priv->bgx,
+					     priv->index));
+	if (data & BGX_SPU_CONTROL1_LOOPBACK)
+		return 0;
+
+	if (priv->mode == PORT_MODE_XAUI || priv->mode == PORT_MODE_XLAUI) {
+		if (bgx_port_qlm_rx_equalization(priv, priv->qlm, -1))
+			return -1;
+
+		/* BGX2 of 73xx uses 2 dlms */
+		if (OCTEON_IS_MODEL(OCTEON_CN73XX) && priv->bgx == 2) {
+			if (bgx_port_qlm_rx_equalization(priv, priv->qlm + 1,
+							 -1))
+				return -1;
+		}
+	} else if (priv->mode == PORT_MODE_RXAUI) {
+		/* Rxaui always uses 2 lanes */
+		if (bgx_port_qlm_rx_equalization(priv, priv->qlm, -1))
+			return -1;
+	} else if (priv->mode == PORT_MODE_XFI) {
+		lane = priv->index;
+		if ((OCTEON_IS_MODEL(OCTEON_CN73XX) && priv->qlm == 6) ||
+		    (OCTEON_IS_MODEL(OCTEON_CNF75XX) && priv->qlm == 5))
+			lane -= 2;
+
+		if (bgx_port_qlm_rx_equalization(priv, priv->qlm, lane))
+			return -1;
+	}
+
+	return 0;
+}
+
+static int bgx_port_init_xaui_link(struct bgx_port_priv *priv)
+{
+	int use_ber = 0, use_training = 0;
+	int rc = 0, timeout;
+	u64 data;
+
+	if (priv->mode == PORT_MODE_10G_KR || priv->mode == PORT_MODE_40G_KR4)
+		use_training = 1;
+
+	if (!octeon_is_simulation() &&
+	    (priv->mode == PORT_MODE_XFI || priv->mode == PORT_MODE_XLAUI ||
+	     priv->mode == PORT_MODE_10G_KR || priv->mode == PORT_MODE_40G_KR4))
+		use_ber = 1;
+
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data &= ~(BGX_CMR_CONFIG_DATA_PKT_RX_EN |
+		  BGX_CMR_CONFIG_DATA_PKT_TX_EN);
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	data = oct_csr_read(BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+						 priv->index));
+	data |= BGX_SPU_MISC_CONTROL_RX_PACKET_DIS;
+	oct_csr_write(data, BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+						 priv->index));
+
+	if (!octeon_is_simulation()) {
+		data = oct_csr_read(BGX_SPU_AN_CONTROL(priv->node, priv->bgx,
+						       priv->index));
+		/* Restart autonegotiation */
+		if (data & BGX_SPU_AN_CONTROL_AN_EN) {
+			rc = bgx_port_init_xaui_an(priv);
+			if (rc)
+				return rc;
+		}
+
+		if (use_training) {
+			data = oct_csr_read(BGX_SPU_BR_PMD_CONTROL(priv->node,
+								   priv->bgx,
+								   priv->index));
+			/* Check if training is enabled */
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+			    !(data & BGX_SPU_BR_PMD_CONTROL_TRAIN_EN)) {
+				bgx_port_xaui_start_training(priv);
+				return -1;
+			}
+
+			if (OCTEON_IS_MODEL(OCTEON_CN73XX) ||
+			    OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
+			    OCTEON_IS_MODEL(OCTEON_CN78XX))
+				bgx_port_gser_27882(priv);
+
+			data = oct_csr_read(BGX_SPU_INT(priv->node, priv->bgx,
+							priv->index));
+
+			/* Restart training if it failed */
+			if ((data & BGX_SPU_INT_TRAINING_FAILURE) &&
+			    !OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+				bgx_port_xaui_restart_training(priv);
+				return -1;
+			}
+
+			if (!(data & BGX_SPU_INT_TRAINING_DONE)) {
+				pr_debug("Waiting for link training\n");
+				return -1;
+			}
+		} else {
+			bgx_port_xaui_equalization(priv);
+		}
+
+		/* Wait until the reset is complete */
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_SPU_CONTROL1(priv->node,
+							     priv->bgx,
+							     priv->index));
+			if (!(data & BGX_SPU_CONTROL1_RESET))
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("BGX%d:%d:%d: Reset timeout\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+
+		if (use_ber) {
+			timeout = 10000;
+			do {
+				data =
+				oct_csr_read(BGX_SPU_BR_STATUS1(priv->node,
+								priv->bgx,
+								priv->index));
+				if (data & BGX_SPU_BR_STATUS1_BLK_LOCK)
+					break;
+				timeout--;
+				udelay(1);
+			} while (timeout);
+			if (!timeout) {
+				pr_debug("BGX%d:%d:%d: BLK_LOCK timeout\n",
+					 priv->bgx, priv->index, priv->node);
+				return -1;
+			}
+		} else {
+			timeout = 10000;
+			do {
+				data =
+				oct_csr_read(BGX_SPU_BX_STATUS(priv->node,
+							       priv->bgx,
+							       priv->index));
+				if (data & BGX_SPU_BX_STATUS_ALIGND)
+					break;
+				timeout--;
+				udelay(1);
+			} while (timeout);
+			if (!timeout) {
+				pr_debug("BGX%d:%d:%d: Lanes align timeout\n",
+					 priv->bgx, priv->index, priv->node);
+				return -1;
+			}
+		}
+
+		if (use_ber) {
+			data = oct_csr_read(BGX_SPU_BR_STATUS2(priv->node,
+							       priv->bgx,
+							       priv->index));
+			data |= BGX_SPU_BR_STATUS2_LATCHED_LOCK;
+			oct_csr_write(data, BGX_SPU_BR_STATUS2(priv->node,
+							       priv->bgx,
+							       priv->index));
+		}
+
+		data = oct_csr_read(BGX_SPU_STATUS2(priv->node, priv->bgx,
+						    priv->index));
+		data |= BGX_SPU_STATUS2_RCVFLT;
+		oct_csr_write(data, BGX_SPU_STATUS2(priv->node, priv->bgx,
+						    priv->index));
+
+		data = oct_csr_read(BGX_SPU_STATUS2(priv->node, priv->bgx,
+						    priv->index));
+		if (data & BGX_SPU_STATUS2_RCVFLT) {
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+			    use_training)
+				bgx_port_xaui_restart_training(priv);
+			return -1;
+		}
+
+		/* Wait for mac rx to be ready */
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_SMU_RX_CTL(priv->node,
+							   priv->bgx,
+							   priv->index));
+			data &= BGX_SMU_RX_CTL_STATUS_MASK;
+			if (!data)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("BGX%d:%d:%d: mac ready timeout\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+
+		/* Wait for bgx rx to be idle */
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_SMU_CTRL(priv->node, priv->bgx,
+							 priv->index));
+			if (data & BGX_SMU_CTRL_RX_IDLE)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("BGX%d:%d:%d: rx idle timeout\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+
+		/* Wait for bgx tx to be idle */
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_SMU_CTRL(priv->node, priv->bgx,
+							 priv->index));
+			if (data & BGX_SMU_CTRL_TX_IDLE)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("BGX%d:%d:%d: tx idle timeout\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+
+		/* Check that rcvflt is still 0 */
+		data = oct_csr_read(BGX_SPU_STATUS2(priv->node, priv->bgx,
+						    priv->index));
+		if (data & BGX_SPU_STATUS2_RCVFLT) {
+			pr_debug("BGX%d:%d:%d: receive fault\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+
+		/* Receive link is latching low. Force it high and verify it */
+		data = oct_csr_read(BGX_SPU_STATUS1(priv->node, priv->bgx,
+						    priv->index));
+		data |= BGX_SPU_STATUS1_RCV_LINK;
+		oct_csr_write(data, BGX_SPU_STATUS1(priv->node, priv->bgx,
+						    priv->index));
+		timeout = 10000;
+		do {
+			data = oct_csr_read(BGX_SPU_STATUS1(priv->node,
+							    priv->bgx,
+							    priv->index));
+			if (data & BGX_SPU_STATUS1_RCV_LINK)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout) {
+			pr_debug("BGX%d:%d:%d: rx link down\n", priv->bgx,
+				 priv->index, priv->node);
+			return -1;
+		}
+	}
+
+	if (use_ber) {
+		/* Read error counters to clear */
+		data = oct_csr_read(BGX_SPU_BR_BIP_ERR_CNT(priv->node,
+							   priv->bgx,
+							   priv->index));
+		data = oct_csr_read(BGX_SPU_BR_STATUS2(priv->node, priv->bgx,
+						       priv->index));
+
+		/* Verify latch lock is set */
+		if (!(data & BGX_SPU_BR_STATUS2_LATCHED_LOCK)) {
+			pr_debug("BGX%d:%d:%d: latch lock lost\n",
+				 priv->bgx, priv->index, priv->node);
+			return -1;
+		}
+
+		/* LATCHED_BER is cleared by writing 1 to it */
+		if (data & BGX_SPU_BR_STATUS2_LATCHED_BER)
+			oct_csr_write(data, BGX_SPU_BR_STATUS2(priv->node,
+							       priv->bgx,
+							       priv->index));
+
+		usleep_range(1500, 2000);
+		data = oct_csr_read(BGX_SPU_BR_STATUS2(priv->node, priv->bgx,
+						       priv->index));
+		if (data & BGX_SPU_BR_STATUS2_LATCHED_BER) {
+			pr_debug("BGX%d:%d:%d: BER test failed\n",
+				 priv->bgx, priv->index, priv->node);
+			return -1;
+		}
+	}
+
+	/* Enable packet transmit and receive */
+	data = oct_csr_read(BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+						 priv->index));
+	data &= ~BGX_SPU_MISC_CONTROL_RX_PACKET_DIS;
+	oct_csr_write(data, BGX_SPU_MISC_CONTROL(priv->node, priv->bgx,
+						 priv->index));
+	data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+	data |= BGX_CMR_CONFIG_DATA_PKT_RX_EN | BGX_CMR_CONFIG_DATA_PKT_TX_EN;
+	oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx, priv->index));
+
+	return 0;
+}
+
+static int bgx_port_set_xaui_link(struct bgx_port_priv *priv,
+				  struct port_status status)
+{
+	bool smu_rx_ok = false, smu_tx_ok = false, spu_link_ok = false;
+	int rc = 0;
+	u64 data;
+
+	/* Initialize hardware if link is up but hardware is not happy */
+	if (status.link) {
+		data = oct_csr_read(BGX_SMU_TX_CTL(priv->node, priv->bgx,
+						   priv->index));
+		data &= BGX_SMU_TX_CTL_LS_MASK;
+		smu_tx_ok = data == 0;
+
+		data = oct_csr_read(BGX_SMU_RX_CTL(priv->node, priv->bgx,
+						   priv->index));
+		data &= BGX_SMU_RX_CTL_STATUS_MASK;
+		smu_rx_ok = data == 0;
+
+		data = oct_csr_read(BGX_SPU_STATUS1(priv->node, priv->bgx,
+						    priv->index));
+		data &= BGX_SPU_STATUS1_RCV_LINK;
+		spu_link_ok = data == BGX_SPU_STATUS1_RCV_LINK;
+
+		if (!smu_tx_ok || !smu_rx_ok || !spu_link_ok)
+			rc = bgx_port_init_xaui_link(priv);
+	}
+
+	return rc;
+}
+
+static struct bgx_port_priv *bgx_port_netdev2priv(struct net_device *netdev)
+{
+	struct bgx_port_netdev_priv *nd_priv = netdev_priv(netdev);
+
+	return nd_priv->bgx_priv;
+}
+
+void bgx_port_set_netdev(struct device *dev, struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = dev_get_drvdata(dev);
+
+	if (netdev) {
+		struct bgx_port_netdev_priv *nd_priv = netdev_priv(netdev);
+
+		nd_priv->bgx_priv = priv;
+	}
+
+	priv->netdev = netdev;
+}
+EXPORT_SYMBOL(bgx_port_set_netdev);
+
+int bgx_port_ethtool_get_link_ksettings(struct net_device *netdev,
+					struct ethtool_link_ksettings *cmd)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+
+	if (priv->phydev) {
+		phy_ethtool_ksettings_get(priv->phydev, cmd);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(bgx_port_ethtool_get_link_ksettings);
+
+int bgx_port_ethtool_set_settings(struct net_device *netdev,
+				  struct ethtool_cmd *cmd)
+{
+	struct bgx_port_priv *p = bgx_port_netdev2priv(netdev);
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	if (p->phydev)
+		return phy_ethtool_sset(p->phydev, cmd);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL(bgx_port_ethtool_set_settings);
+
+int bgx_port_ethtool_nway_reset(struct net_device *netdev)
+{
+	struct bgx_port_priv *p = bgx_port_netdev2priv(netdev);
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	if (p->phydev)
+		return phy_start_aneg(p->phydev);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL(bgx_port_ethtool_nway_reset);
+
+const u8 *bgx_port_get_mac(struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+
+	return priv->mac_addr;
+}
+EXPORT_SYMBOL(bgx_port_get_mac);
+
+int bgx_port_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+	struct bgx_port_priv *p = bgx_port_netdev2priv(netdev);
+
+	if (p->phydev)
+		return phy_mii_ioctl(p->phydev, ifr, cmd);
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL(bgx_port_do_ioctl);
+
+static void bgx_port_write_cam(struct bgx_port_priv *priv, int cam,
+			       const u8 *mac)
+{
+	u64 m = 0;
+	int i;
+
+	if (mac) {
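+		/* Pack the MAC big-endian into the CAM and mark it valid */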
+		for (i = 0; i < 6; i++)
+			m |= (((u64)mac[i]) << ((5 - i) * 8));
+		m |= BGX_CMR_RX_ADRX_CAM_EN;
+	}
+
+	m |= (u64)priv->index << BGX_CMR_RX_ADRX_CAM_ID_SHIFT;
+	oct_csr_write(m, BGX_CMR_RX_ADRX_CAM(priv->node, priv->bgx,
+					     priv->index * 8 + cam));
+}
+
+/* Program MAC address filtering (CAM entries and multicast mode) for
+ * the attached net_device.
+ */
+void bgx_port_set_rx_filtering(struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	int available_cam_entries, current_cam_entry;
+	struct netdev_hw_addr *ha;
+	u64 data;
+
+	available_cam_entries = 8;
+	data = BGX_CMR_RX_ADR_CTL_BCST_ACCEPT;
+
+	if ((netdev->flags & IFF_PROMISC) || netdev->uc.count > 7) {
+		data &= ~BGX_CMR_RX_ADR_CTL_CAM_ACCEPT;
+		available_cam_entries = 0;
+	} else {
+		/* One CAM entry for the primary address leaves seven
+		 * for the secondary addresses.
+		 */
+		data |= BGX_CMR_RX_ADR_CTL_CAM_ACCEPT;
+		available_cam_entries = 7 - netdev->uc.count;
+	}
+
+	if (netdev->flags & IFF_PROMISC) {
+		data |= BGX_CMR_RX_ADR_CTL_ACCEPT_ALL_MCST;
+	} else {
+		if (netdev->flags & IFF_MULTICAST) {
+			if ((netdev->flags & IFF_ALLMULTI) ||
+			    netdev_mc_count(netdev) > available_cam_entries)
+				data |= BGX_CMR_RX_ADR_CTL_ACCEPT_ALL_MCST;
+			else
+				data |= BGX_CMR_RX_ADR_CTL_USE_CAM_FILTER;
+		}
+	}
+	current_cam_entry = 0;
+	if (data & BGX_CMR_RX_ADR_CTL_CAM_ACCEPT) {
+		bgx_port_write_cam(priv, current_cam_entry, netdev->dev_addr);
+		current_cam_entry++;
+		netdev_for_each_uc_addr(ha, netdev) {
+			bgx_port_write_cam(priv, current_cam_entry, ha->addr);
+			current_cam_entry++;
+		}
+	}
+	if ((data & BGX_CMR_RX_ADR_CTL_MCST_MODE_MASK) ==
+	    BGX_CMR_RX_ADR_CTL_USE_CAM_FILTER) {
+		/* Add the multicast addresses to the CAM filter */
+		netdev_for_each_mc_addr(ha, netdev) {
+			bgx_port_write_cam(priv, current_cam_entry, ha->addr);
+			current_cam_entry++;
+		}
+	}
+	while (current_cam_entry < 8) {
+		bgx_port_write_cam(priv, current_cam_entry, NULL);
+		current_cam_entry++;
+	}
+	oct_csr_write(data, BGX_CMR_RX_ADR_CTL(priv->node, priv->bgx,
+					       priv->index));
+}
+EXPORT_SYMBOL(bgx_port_set_rx_filtering);
+
+static void bgx_port_adjust_link(struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	unsigned int duplex, link, speed;
+	int link_changed = 0;
+
+	mutex_lock(&priv->lock);
+
+	if (!priv->phydev->link && priv->last_status.link)
+		link_changed = -1;
+
+	if (priv->phydev->link &&
+	    (priv->last_status.link != priv->phydev->link ||
+	     priv->last_status.duplex != priv->phydev->duplex ||
+	     priv->last_status.speed != priv->phydev->speed))
+		link_changed = 1;
+
+	link = priv->phydev->link;
+	priv->last_status.link = priv->phydev->link;
+
+	speed = priv->phydev->speed;
+	priv->last_status.speed = priv->phydev->speed;
+
+	duplex = priv->phydev->duplex;
+	priv->last_status.duplex = priv->phydev->duplex;
+
+	mutex_unlock(&priv->lock);
+
+	if (link_changed != 0) {
+		struct port_status status;
+
+		if (link_changed > 0) {
+			netdev_info(netdev, "Link is up - %d/%s\n",
+				    priv->phydev->speed,
+				    priv->phydev->duplex == DUPLEX_FULL ?
+				    "Full" : "Half");
+		} else {
+			netdev_info(netdev, "Link is down\n");
+		}
+		status.link = link ? 1 : 0;
+		status.duplex = duplex;
+		status.speed = speed;
+		priv->set_link(priv, status);
+	}
+}
+
+static void bgx_port_check_state(struct work_struct *work)
+{
+	struct bgx_port_priv *priv;
+	struct port_status status;
+
+	priv = container_of(work, struct bgx_port_priv, dwork.work);
+
+	status = priv->get_link(priv);
+
+	if (!status.link &&
+	    priv->mode != PORT_MODE_SGMII && priv->mode != PORT_MODE_RGMII)
+		bgx_port_init_xaui_link(priv);
+
+	if (priv->last_status.link != status.link) {
+		priv->last_status.link = status.link;
+		if (status.link)
+			netdev_info(priv->netdev, "Link is up - %d/%s\n",
+				    status.speed,
+				    status.duplex == DUPLEX_FULL ?
+				    "Full" : "Half");
+		else
+			netdev_info(priv->netdev, "Link is down\n");
+	}
+
+	mutex_lock(&priv->lock);
+	if (priv->work_queued)
+		queue_delayed_work(check_state_wq, &priv->dwork, HZ);
+	mutex_unlock(&priv->lock);
+}
+
+int bgx_port_enable(struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	struct port_status status;
+	bool dont_use_phy;
+	u64 data;
+
+	if (priv->mode == PORT_MODE_SGMII || priv->mode == PORT_MODE_RGMII) {
+		/* 1G */
+		data = oct_csr_read(BGX_GMP_GMI_TX_APPEND(priv->node,
+							  priv->bgx,
+							  priv->index));
+		data |= BGX_GMP_GMI_TX_APPEND_FCS | BGX_GMP_GMI_TX_APPEND_PAD;
+		oct_csr_write(data, BGX_GMP_GMI_TX_APPEND(priv->node,
+							  priv->bgx,
+							  priv->index));
+
+		/* Packets are padded (without FCS) to MIN_SIZE + 1 in SGMII */
+		data = 60 - 1;
+		oct_csr_write(data, BGX_GMP_GMI_TX_MIN_PKT(priv->node,
+							   priv->bgx,
+							   priv->index));
+	} else {
+		/* 10G or higher */
+		data = oct_csr_read(BGX_SMU_TX_APPEND(priv->node,
+						      priv->bgx,
+						      priv->index));
+		data |= BGX_SMU_TX_APPEND_FCS_D | BGX_SMU_TX_APPEND_PAD;
+		oct_csr_write(data, BGX_SMU_TX_APPEND(priv->node,
+						      priv->bgx,
+						      priv->index));
+
+		/* Packets are padded (with FCS) to MIN_SIZE in non-SGMII */
+		data = 60 + 4;
+		oct_csr_write(data, BGX_SMU_TX_MIN_PKT(priv->node,
+						       priv->bgx,
+						       priv->index));
+	}
+
+	switch (priv->mode) {
+	case PORT_MODE_XLAUI:
+	case PORT_MODE_XFI:
+	case PORT_MODE_10G_KR:
+	case PORT_MODE_40G_KR4:
+		dont_use_phy = true;
+		break;
+	default:
+		dont_use_phy = false;
+		break;
+	}
+
+	if (!priv->phy_np || dont_use_phy) {
+		status = priv->get_link(priv);
+		priv->set_link(priv, status);
+
+		mutex_lock(&check_state_wq_mutex);
+		if (!check_state_wq) {
+			check_state_wq =
+				alloc_workqueue("check_state_wq",
+						WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
+		}
+		mutex_unlock(&check_state_wq_mutex);
+		if (!check_state_wq)
+			return -ENOMEM;
+
+		mutex_lock(&priv->lock);
+		INIT_DELAYED_WORK(&priv->dwork, bgx_port_check_state);
+		queue_delayed_work(check_state_wq, &priv->dwork, 0);
+		priv->work_queued = true;
+		mutex_unlock(&priv->lock);
+
+		netdev_info(priv->netdev, "Link is not ready\n");
+
+	} else {
+		priv->phydev = of_phy_connect(netdev, priv->phy_np,
+					      bgx_port_adjust_link, 0,
+					      of_get_phy_mode(priv->phy_np));
+		if (!priv->phydev)
+			return -ENODEV;
+
+		phy_start_aneg(priv->phydev);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(bgx_port_enable);
+
+int bgx_port_disable(struct net_device *netdev)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	struct port_status status;
+
+	if (priv->phydev)
+		phy_disconnect(priv->phydev);
+	priv->phydev = NULL;
+
+	memset(&status, 0, sizeof(status));
+	priv->last_status.link = 0;
+	priv->set_link(priv, status);
+
+	mutex_lock(&priv->lock);
+	if (priv->work_queued) {
+		cancel_delayed_work_sync(&priv->dwork);
+		priv->work_queued = false;
+	}
+	mutex_unlock(&priv->lock);
+
+	return 0;
+}
+EXPORT_SYMBOL(bgx_port_disable);
+
+int bgx_port_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	int max_frame;
+
+	if (new_mtu < 60 || new_mtu > 65392) {
+		netdev_warn(netdev, "MTU must be between 60 and 65392\n");
+		return -EINVAL;
+	}
+
+	netdev->mtu = new_mtu;
+
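+	/* Worst-case frame size: MTU plus Ethernet header and FCS, rounded
+	 * up to a multiple of 8 (e.g. 1500 + 14 + 4 = 1518 -> 1520).
+	 */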
+	max_frame = round_up(new_mtu + ETH_HLEN + ETH_FCS_LEN, 8);
+
+	if (priv->mode == PORT_MODE_SGMII || priv->mode == PORT_MODE_RGMII) {
+		/* 1G */
+		oct_csr_write(max_frame, BGX_GMP_GMI_RX_JABBER(priv->node,
+							       priv->bgx,
+							       priv->index));
+	} else {
+		/* 10G or higher */
+		oct_csr_write(max_frame, BGX_SMU_RX_JABBER(priv->node,
+							   priv->bgx,
+							   priv->index));
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(bgx_port_change_mtu);
+
+void bgx_port_mix_assert_reset(struct net_device *netdev, int mix, bool v)
+{
+	struct bgx_port_priv *priv = bgx_port_netdev2priv(netdev);
+	u64 data, mask = 1ull << (3 + (mix & 1));
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && v) {
+		/* The mix must be disabled before resetting the bgx-mix
+		 * interface; not doing so confuses the other lmacs that are
+		 * already up.
+		 */
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+		data &= ~BGX_CMR_CONFIG_MIX_EN;
+		oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+	}
+
+	data = oct_csr_read(BGX_CMR_GLOBAL_CONFIG(priv->node, priv->bgx));
+	if (v)
+		data |= mask;
+	else
+		data &= ~mask;
+	oct_csr_write(data, BGX_CMR_GLOBAL_CONFIG(priv->node, priv->bgx));
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && !v) {
+		data = oct_csr_read(BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+		data |= BGX_CMR_CONFIG_MIX_EN;
+		oct_csr_write(data, BGX_CMR_CONFIG(priv->node, priv->bgx,
+						   priv->index));
+	}
+}
+EXPORT_SYMBOL(bgx_port_mix_assert_reset);
+
+static int bgx_port_probe(struct platform_device *pdev)
+{
+	struct bgx_port_priv *priv;
+	const __be32 *reg;
+	int numa_node, rc;
+	const u8 *mac;
+	u32 index;
+	u64 addr;
+
+	reg = of_get_property(pdev->dev.parent->of_node, "reg", NULL);
+	addr = of_translate_address(pdev->dev.parent->of_node, reg);
+	mac = of_get_mac_address(pdev->dev.of_node);
+
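+	/* The NUMA node and BGX index are decoded from the translated CSR
+	 * base address below.
+	 */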
+	numa_node = (addr >> 36) & 0x7;
+
+	rc = of_property_read_u32(pdev->dev.of_node, "reg", &index);
+	if (rc)
+		return -ENODEV;
+	priv = kzalloc_node(sizeof(*priv), GFP_KERNEL, numa_node);
+	if (!priv)
+		return -ENOMEM;
+	priv->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+	if (of_get_property(pdev->dev.of_node, "cavium,sgmii-mac-1000x-mode",
+			    NULL))
+		priv->mode_1000basex = true;
+	if (of_get_property(pdev->dev.of_node, "cavium,sgmii-mac-phy-mode",
+			    NULL))
+		priv->bgx_as_phy = true;
+
+	mutex_init(&priv->lock);
+	priv->node = numa_node;
+	priv->bgx = (addr >> 24) & 0xf;
+	priv->index = index;
+	if (mac)
+		priv->mac_addr = mac;
+
+	priv->qlm = bgx_port_get_qlm(priv->node, priv->bgx, priv->index);
+	priv->mode = bgx_port_get_mode(priv->node, priv->bgx, priv->index);
+
+	switch (priv->mode) {
+	case PORT_MODE_SGMII:
+	case PORT_MODE_RGMII:
+		priv->get_link = bgx_port_get_sgmii_link;
+		priv->set_link = bgx_port_set_xgmii_link;
+		break;
+	case PORT_MODE_XAUI:
+	case PORT_MODE_RXAUI:
+	case PORT_MODE_XLAUI:
+	case PORT_MODE_XFI:
+	case PORT_MODE_10G_KR:
+	case PORT_MODE_40G_KR4:
+		priv->get_link = bgx_port_get_xaui_link;
+		priv->set_link = bgx_port_set_xaui_link;
+		break;
+	default:
+		rc = -EINVAL;
+		goto err;
+	}
+
+	dev_set_drvdata(&pdev->dev, priv);
+
+	bgx_port_init(priv);
+
+	dev_info(&pdev->dev, "Probed\n");
+	return 0;
+ err:
+	kfree(priv);
+	return rc;
+}
+
+static int bgx_port_remove(struct platform_device *pdev)
+{
+	struct bgx_port_priv *priv = dev_get_drvdata(&pdev->dev);
+
+	kfree(priv);
+	return 0;
+}
+
+static void bgx_port_shutdown(struct platform_device *pdev)
+{
+}
+
+static const struct of_device_id bgx_port_match[] = {
+	{
+		.compatible = "cavium,octeon-7890-bgx-port",
+	},
+	{
+		.compatible = "cavium,octeon-7360-xcv",
+	},
+	{},
+};
+MODULE_DEVICE_TABLE(of, bgx_port_match);
+
+static struct platform_driver bgx_port_driver = {
+	.probe		= bgx_port_probe,
+	.remove		= bgx_port_remove,
+	.shutdown       = bgx_port_shutdown,
+	.driver		= {
+		.owner	= THIS_MODULE,
+		.name	= KBUILD_MODNAME,
+		.of_match_table = bgx_port_match,
+	},
+};
+
+static int __init bgx_port_driver_init(void)
+{
+	int i, j, k;
+
+	for (i = 0; i < MAX_NODES; i++) {
+		for (j = 0; j < MAX_BGX_PER_NODE; j++) {
+			for (k = 0; k < MAX_LMAC_PER_BGX; k++)
+				lmac_pknd[i][j][k] = -1;
+		}
+	}
+
+	bgx_nexus_load();
+	return platform_driver_register(&bgx_port_driver);
+}
+module_init(bgx_port_driver_init);
+
+static void __exit bgx_port_driver_exit(void)
+{
+	platform_driver_unregister(&bgx_port_driver);
+	if (check_state_wq)
+		destroy_workqueue(check_state_wq);
+}
+module_exit(bgx_port_driver_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Cavium, Inc. <support@caviumnetworks.com>");
+MODULE_DESCRIPTION("BGX Nexus Ethernet MAC driver.");
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 05/10] netdev: cavium: octeon: Add Octeon III PKI Support
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (3 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 04/10] netdev: cavium: octeon: Add Octeon III BGX Ports Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 06/10] netdev: cavium: octeon: Add Octeon III PKO Support Steven J. Hill
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add support for Octeon III PKI logic block for BGX Ethernet.
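
The exported entry points are meant to be driven from the BGX core
driver. As a rough illustration only (the real call sites live in
octeon3-core.c and pass their own aura/group/pknd values), bring-up
looks roughly like:

	octeon3_pki_cluster_init(node, pdev);	/* load parser firmware */
	octeon3_pki_ltype_init(node);		/* map layer types */
	octeon3_pki_vlan_init(node);		/* PCAM entries for VLAN ethtypes */
	octeon3_pki_port_init(node, aura, grp, skip, mb_size, pknd, num_rx_cxt);
	octeon3_pki_enable(node);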

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/octeon/octeon3-pki.c | 789 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-pki.h | 113 ++++
 2 files changed, 902 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pki.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pki.h

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-pki.c b/drivers/net/ethernet/cavium/octeon/octeon3-pki.c
new file mode 100644
index 0000000..9782ba1
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-pki.c
@@ -0,0 +1,789 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III Packet Input Unit (PKI)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+
+#include <linux/firmware.h>
+
+#include "octeon3.h"
+#include "octeon3-pki.h"
+
+#define PKI_CLUSTER_FIRMWARE		"cavium/pki-cluster.bin"
+#define VERSION_LEN			8
+
+#define PKI_MAX_PCAM_BANKS		2
+#define PKI_MAX_PCAM_BANK_ENTRIES	192
+#define PKI_MAX_PKNDS			64
+#define PKI_NUM_QPG_ENTRIES		2048
+#define PKI_NUM_STYLES			256
+#define PKI_NUM_FINAL_STYLES		64
+
+#define PKI_BASE			0x1180044000000ull
+#define PKI_ADDR(n)			(PKI_BASE + SET_XKPHYS + NODE_OFFSET(n))
+#define PKI_CL_ADDR(n, c)		(PKI_ADDR(n) + ((c) << 16))
+#define PKI_CL_PCAM(n, c, b, e)		(PKI_CL_ADDR(n, c) + ((b) << 12) +     \
+					 ((e) << 3))
+#define PKI_CL_PKIND(n, c, p)		(PKI_CL_ADDR(n, c) + ((p) << 8))
+#define PKI_CL_STYLE(n, c, s)		(PKI_CL_ADDR(n, c) + ((s) << 3))
+
+#define PKI_AURA_CFG(n, a)		(PKI_ADDR(n) + 0x900000 + ((a) << 3))
+#define PKI_BUF_CTL(n)			(PKI_ADDR(n) + 0x000100)
+#define PKI_ICG_CFG(n)			(PKI_ADDR(n) + 0x00a000)
+#define PKI_IMEM(n, i)			(PKI_ADDR(n) + 0x100000 + ((i) << 3))
+#define PKI_LTYPE_MAP(n, l)		(PKI_ADDR(n) + 0x005000 + ((l) << 3))
+#define PKI_QPG_TBL(n, i)		(PKI_ADDR(n) + 0x800000 + ((i) << 3))
+#define PKI_SFT_RST(n)			(PKI_ADDR(n) + 0x000010)
+#define PKI_STAT_CTL(n)			(PKI_ADDR(n) + 0x000110)
+#define PKI_STAT_STAT0(n, p)		(PKI_ADDR(n) + 0xe00038 + ((p) << 8))
+#define PKI_STAT_STAT1(n, p)		(PKI_ADDR(n) + 0xe00040 + ((p) << 8))
+#define PKI_STAT_STAT3(n, p)		(PKI_ADDR(n) + 0xe00050 + ((p) << 8))
+#define PKI_STYLE_BUF(n, s)		(PKI_ADDR(n) + 0x024000 + ((s) << 3))
+
+#define PKI_CL_ECC_CTL(n, c)		(PKI_CL_ADDR(n, c) + 0x00c020)
+#define PKI_CL_PCAM_TERM(n, c, b, e)	(PKI_CL_PCAM(n, c, b, e) + 0x700000)
+#define PKI_CL_PCAM_MATCH(n, c, b, e)	(PKI_CL_PCAM(n, c, b, e) + 0x704000)
+#define PKI_CL_PCAM_ACTION(n, c, b, e)	(PKI_CL_PCAM(n, c, b, e) + 0x708000)
+#define PKI_CL_PKIND_CFG(n, c, p)	(PKI_CL_PKIND(n, c, p) + 0x300040)
+#define PKI_CL_PKIND_STYLE(n, c, p)	(PKI_CL_PKIND(n, c, p) + 0x300048)
+#define PKI_CL_PKIND_SKIP(n, c, p)	(PKI_CL_PKIND(n, c, p) + 0x300050)
+#define PKI_CL_PKIND_L2_CUSTOM(n, c, p)	(PKI_CL_PKIND(n, c, p) + 0x300058)
+#define PKI_CL_PKIND_LG_CUSTOM(n, c, p)	(PKI_CL_PKIND(n, c, p) + 0x300060)
+#define PKI_CL_STYLE_CFG(n, c, s)	(PKI_CL_STYLE(n, c, s) + 0x500000)
+#define PKI_CL_STYLE_CFG2(n, c, s)	(PKI_CL_STYLE(n, c, s) + 0x500800)
+#define PKI_CL_STYLE_ALG(n, c, s)	(PKI_CL_STYLE(n, c, s) + 0x501000)
+
+enum pcam_term {
+	NONE,
+	L2_CUSTOM = 0x2,
+	HIGIGD = 0x4,
+	HIGIG = 0x5,
+	SMACH = 0x8,
+	SMACL = 0x9,
+	DMACH = 0xa,
+	DMACL = 0xb,
+	GLORT = 0x12,
+	DSA = 0x13,
+	ETHTYPE0 = 0x18,
+	ETHTYPE1 = 0x19,
+	ETHTYPE2 = 0x1a,
+	ETHTYPE3 = 0x1b,
+	MPLS0 = 0x1e,
+	L3_SIPHH = 0x1f,
+	L3_SIPMH = 0x20,
+	L3_SIPML = 0x21,
+	L3_SIPLL = 0x22,
+	L3_FLAGS = 0x23,
+	L3_DIPHH = 0x24,
+	L3_DIPMH = 0x25,
+	L3_DIPML = 0x26,
+	L3_DIPLL = 0x27,
+	LD_VNI = 0x28,
+	IL3_FLAGS = 0x2b,
+	LF_SPI = 0x2e,
+	L4_SPORT = 0x2f,
+	L4_PORT = 0x30,
+	LG_CUSTOM = 0x39
+};
+
+enum pki_ltype {
+	LTYPE_NONE,
+	LTYPE_ENET,
+	LTYPE_VLAN,
+	LTYPE_SNAP_PAYLD = 0x05,
+	LTYPE_ARP = 0x06,
+	LTYPE_RARP = 0x07,
+	LTYPE_IP4 = 0x08,
+	LTYPE_IP4_OPT = 0x09,
+	LTYPE_IP6 = 0x0a,
+	LTYPE_IP6_OPT = 0x0b,
+	LTYPE_IPSEC_ESP = 0x0c,
+	LTYPE_IPFRAG = 0x0d,
+	LTYPE_IPCOMP = 0x0e,
+	LTYPE_TCP = 0x10,
+	LTYPE_UDP = 0x11,
+	LTYPE_SCTP = 0x12,
+	LTYPE_UDP_VXLAN = 0x13,
+	LTYPE_GRE = 0x14,
+	LTYPE_NVGRE = 0x15,
+	LTYPE_GTP = 0x16,
+	LTYPE_UDP_GENEVE = 0x17,
+	LTYPE_SW28 = 0x1c,
+	LTYPE_SW29 = 0x1d,
+	LTYPE_SW30 = 0x1e,
+	LTYPE_SW31 = 0x1f
+};
+
+enum pki_beltype {
+	BELTYPE_NONE,
+	BELTYPE_MISC,
+	BELTYPE_IP4,
+	BELTYPE_IP6,
+	BELTYPE_TCP,
+	BELTYPE_UDP,
+	BELTYPE_SCTP,
+	BELTYPE_SNAP
+};
+
+struct ltype_beltype {
+	enum pki_ltype ltype;
+	enum pki_beltype beltype;
+};
+
+/* struct pcam_term_info - Describes a term to configure in the PCAM.
+ * @term: Identifies the term to configure.
+ * @term_mask: Specifies don't cares in the term.
+ * @style: Style to compare.
+ * @style_mask: Specifies don't cares in the style.
+ * @data: Data to compare.
+ * @data_mask: Specifies don't cares in the data.
+ */
+struct pcam_term_info {
+	u8 term;
+	u8 term_mask;
+	u8 style;
+	u8 style_mask;
+	u32 data;
+	u32 data_mask;
+};
+
+/* struct fw_hdr - Describes the firmware.
+ * @version: Firmware version.
+ * @size: Size of the data in bytes.
+ * @data: Actual firmware data.
+ */
+struct fw_hdr {
+	char version[VERSION_LEN];
+	u64 size;
+	u64 data[];
+};
+
+static struct ltype_beltype dflt_ltype_config[] = {
+	{ LTYPE_NONE, BELTYPE_NONE },
+	{ LTYPE_ENET, BELTYPE_MISC },
+	{ LTYPE_VLAN, BELTYPE_MISC },
+	{ LTYPE_SNAP_PAYLD, BELTYPE_MISC },
+	{ LTYPE_ARP, BELTYPE_MISC },
+	{ LTYPE_RARP, BELTYPE_MISC },
+	{ LTYPE_IP4, BELTYPE_IP4 },
+	{ LTYPE_IP4_OPT, BELTYPE_IP4 },
+	{ LTYPE_IP6, BELTYPE_IP6 },
+	{ LTYPE_IP6_OPT, BELTYPE_IP6 },
+	{ LTYPE_IPSEC_ESP, BELTYPE_MISC },
+	{ LTYPE_IPFRAG, BELTYPE_MISC },
+	{ LTYPE_IPCOMP, BELTYPE_MISC },
+	{ LTYPE_TCP, BELTYPE_TCP },
+	{ LTYPE_UDP, BELTYPE_UDP },
+	{ LTYPE_SCTP, BELTYPE_SCTP },
+	{ LTYPE_UDP_VXLAN, BELTYPE_UDP },
+	{ LTYPE_GRE, BELTYPE_MISC },
+	{ LTYPE_NVGRE, BELTYPE_MISC },
+	{ LTYPE_GTP, BELTYPE_MISC },
+	{ LTYPE_UDP_GENEVE, BELTYPE_UDP },
+	{ LTYPE_SW28, BELTYPE_MISC },
+	{ LTYPE_SW29, BELTYPE_MISC },
+	{ LTYPE_SW30, BELTYPE_MISC },
+	{ LTYPE_SW31, BELTYPE_MISC }
+};
+
+static int get_num_clusters(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 2;
+	return 4;
+}
+
+static int octeon3_pki_pcam_alloc_entry(int node, int entry, int bank)
+{
+	struct global_resource_tag tag;
+	int i, rc, num_clusters;
+	char buf[16];
+
+	/* Allocate a PCAM entry for cluster0. */
+	strncpy((char *)&tag.lo, "cvm_pcam", 8);
+	snprintf(buf, 16, "_%d%d%d....", node, 0, bank);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_create_resource(tag, PKI_MAX_PCAM_BANK_ENTRIES);
+	rc = res_mgr_alloc(tag, entry, false);
+	if (rc < 0)
+		return rc;
+
+	entry = rc;
+
+	/* Allocate entries for all clusters. */
+	num_clusters = get_num_clusters();
+	for (i = 1; i < num_clusters; i++) {
+		strncpy((char *)&tag.lo, "cvm_pcam", 8);
+		snprintf(buf, 16, "_%d%d%d....", node, i, bank);
+		memcpy(&tag.hi, buf, 8);
+
+		res_mgr_create_resource(tag, PKI_MAX_PCAM_BANK_ENTRIES);
+		rc = res_mgr_alloc(tag, entry, false);
+		if (rc < 0) {
+			int j;
+
+			pr_err("%s: Failed to allocate PCAM entry\n", __FILE__);
+			/* Undo the allocations made so far */
+			for (j = 0; j < i; j++) {
+				strncpy((char *)&tag.lo, "cvm_pcam", 8);
+				snprintf(buf, 16, "_%d%d%d....", node, j, bank);
+				memcpy(&tag.hi, buf, 8);
+				res_mgr_free(tag, entry);
+			}
+
+			return -1;
+		}
+	}
+
+	return entry;
+}
+
+static int octeon3_pki_pcam_write_entry(int node,
+					struct pcam_term_info *term_info)
+{
+	int bank, entry, i, num_clusters;
+	u64 action, match, term;
+
+	/* Bit 0 of the PCAM term determines the bank to use. */
+	bank = term_info->term & 1;
+
+	/* Allocate a PCAM entry. */
+	entry = octeon3_pki_pcam_alloc_entry(node, -1, bank);
+	if (entry < 0)
+		return entry;
+
+	term = PKI_CL_PCAM_TERM_VALID;
+	term |= (u64)(term_info->term & term_info->term_mask)
+		<< PKI_CL_PCAM_TERM_TERM1_SHIFT;
+	term |= (~term_info->term & term_info->term_mask)
+		 << PKI_CL_PCAM_TERM_TERM0_SHIFT;
+	term |= (u64)(term_info->style & term_info->style_mask)
+		<< PKI_CL_PCAM_TERM_STYLE1_SHIFT;
+	term |= ~term_info->style & term_info->style_mask;
+
+	match = (u64)(term_info->data & term_info->data_mask)
+		 << PKI_CL_PCAM_MATCH_DATA1_SHIFT;
+	match |= ~term_info->data & term_info->data_mask;
+
+	action = 0;
+	if (term_info->term >= ETHTYPE0 && term_info->term <= ETHTYPE3) {
+		action |= (PKI_CL_PCAM_ACTION_L2_CUSTOM <<
+			   PKI_CL_PCAM_ACTION_SETTY_SHIFT);
+		action |= PKI_CL_PCAM_ACTION_ADVANCE_4B;
+	}
+
+	/* Must write the term to all clusters. */
+	num_clusters = get_num_clusters();
+	for (i = 0; i < num_clusters; i++) {
+		oct_csr_write(0, PKI_CL_PCAM_TERM(node, i, bank, entry));
+		oct_csr_write(match, PKI_CL_PCAM_MATCH(node, i, bank, entry));
+		oct_csr_write(action, PKI_CL_PCAM_ACTION(node, i, bank, entry));
+		oct_csr_write(term, PKI_CL_PCAM_TERM(node, i, bank, entry));
+	}
+
+	return 0;
+}
+
+static int octeon3_pki_alloc_qpg_entry(int node)
+{
+	struct global_resource_tag tag;
+	char buf[16];
+	int entry;
+
+	/* Allocate a Qpg entry. */
+	strncpy((char *)&tag.lo, "cvm_qpge", 8);
+	snprintf(buf, 16, "t_%d.....", node);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_create_resource(tag, PKI_NUM_QPG_ENTRIES);
+	entry = res_mgr_alloc(tag, -1, false);
+	if (entry < 0)
+		pr_err("%s: Failed to allocate qpg entry", __FILE__);
+
+	return entry;
+}
+
+static int octeon3_pki_alloc_style(int node)
+{
+	struct global_resource_tag tag;
+	char buf[16];
+	int entry;
+
+	/* Allocate a style entry. */
+	strncpy((char *)&tag.lo, "cvm_styl", 8);
+	snprintf(buf, 16, "e_%d.....", node);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_create_resource(tag, PKI_NUM_STYLES);
+	entry = res_mgr_alloc(tag, -1, false);
+	if (entry < 0)
+		pr_err("%s: Failed to allocate style", __FILE__);
+
+	return entry;
+}
+
+int octeon3_pki_set_ptp_skip(int node, int pknd, int skip)
+{
+	int i, num_clusters;
+	u64 data;
+
+	num_clusters = get_num_clusters();
+	for (i = 0; i < num_clusters; i++) {
+		data = oct_csr_read(PKI_CL_PKIND_SKIP(node, i, pknd));
+		data &= ~(PKI_CL_PKIND_SKIP_INST_SKIP_MASK |
+			  PKI_CL_PKIND_SKIP_FCS_SKIP_MASK);
+		data |= (skip << PKI_CL_PKIND_SKIP_FCS_SKIP_SHIFT) | skip;
+		oct_csr_write(data, PKI_CL_PKIND_SKIP(node, i, pknd));
+
+		data = oct_csr_read(PKI_CL_PKIND_L2_CUSTOM(node, i, pknd));
+		data &= ~PKI_CL_PKIND_SKIP_INST_SKIP_MASK;
+		data |= skip;
+		oct_csr_write(data, PKI_CL_PKIND_L2_CUSTOM(node, i, pknd));
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_set_ptp_skip);
+
+/* octeon3_pki_get_stats - Get the statistics for a given pknd (port).
+ * @node: Node to get statistics for.
+ * @pknd: Pknd to get statistics for.
+ * @packets: Updated with the number of packets received.
+ * @octets: Updated with the number of octets received.
+ * @dropped: Updated with the number of dropped packets.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_get_stats(int node, int pknd, u64 *packets, u64 *octets,
+			  u64 *dropped)
+{
+	/* PKI-20775, must read until not all ones. */
+	do {
+		*packets = oct_csr_read(PKI_STAT_STAT0(node, pknd));
+	} while (*packets == 0xffffffffffffffffull);
+
+	do {
+		*octets = oct_csr_read(PKI_STAT_STAT1(node, pknd));
+	} while (*octets == 0xffffffffffffffffull);
+
+	do {
+		*dropped = oct_csr_read(PKI_STAT_STAT3(node, pknd));
+	} while (*dropped == 0xffffffffffffffffull);
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_get_stats);
+
+/* octeon3_pki_port_init - Initialize a port.
+ * @node: Node port is using.
+ * @aura: Aura to use for packet buffers.
+ * @grp: SSO group packets will be queued up for.
+ * @skip: Extra bytes to skip before packet data.
+ * @mb_size: Size of packet buffers.
+ * @pknd: Port kind assigned to the port.
+ * @num_rx_cxt: Number of SSO groups used by the port.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_port_init(int node, int aura, int grp, int skip, int mb_size,
+			  int pknd, int num_rx_cxt)
+{
+	int num_clusters, qpg_entry, style;
+	u64 data, i;
+
+	/* Allocate and configure a Qpg table entry for the port's group. */
+	i = 0;
+	while ((num_rx_cxt & (1 << i)) == 0)
+		i++;
+	qpg_entry = octeon3_pki_alloc_qpg_entry(node);
+	data = oct_csr_read(PKI_QPG_TBL(node, qpg_entry));
+	data &= ~(PKI_QPG_TBL_PADD_MASK | PKI_QPG_TBL_GRPTAG_OK_MASK |
+		  PKI_QPG_TBL_GRP_OK_MASK | PKI_QPG_TBL_GRPTAG_BAD_MASK |
+		  PKI_QPG_TBL_GRP_BAD_MASK | PKI_QPG_TBL_LAURA_MASK);
+	data |= i << PKI_QPG_TBL_GRPTAG_OK_SHIFT;
+	data |= ((u64)((node << PKI_QPG_TBL_GRP_NUM_BITS_SHIFT) | grp)
+		 << PKI_QPG_TBL_GRP_OK_SHIFT);
+	data |= i << PKI_QPG_TBL_GRPTAG_BAD_SHIFT;
+	data |= (((node << PKI_QPG_TBL_GRP_NUM_BITS_SHIFT) | grp)
+		 << PKI_QPG_TBL_GRP_BAD_SHIFT);
+	data |= aura;
+	oct_csr_write(data, PKI_QPG_TBL(node, qpg_entry));
+
+	/* Allocate a style for the port */
+	style = octeon3_pki_alloc_style(node);
+
+	/* Map the Qpg table entry to the style. */
+	num_clusters = get_num_clusters();
+	for (i = 0; i < num_clusters; i++) {
+		data = PKI_CL_STYLE_CFG_LENERR_EN | qpg_entry |
+			PKI_CL_STYLE_CFG_FCS_CHK;
+		oct_csr_write(data, PKI_CL_STYLE_CFG(node, i, style));
+
+		/* Specify the tag generation rules and checksum to use. */
+		oct_csr_write((PKI_CL_STYLE_CFG2_CSUM_ALL |
+			       PKI_CL_STYLE_CFG2_LEN_TAG_ALL |
+			       PKI_CL_STYLE_CFG2_LEN_LF |
+			       PKI_CL_STYLE_CFG2_LEN_LC),
+			      PKI_CL_STYLE_CFG2(node, i, style));
+
+		data = SSO_TAG_TYPE_UNTAGGED << PKI_CL_STYLE_ALG_TT_SHIFT;
+		oct_csr_write(data, PKI_CL_STYLE_ALG(node, i, style));
+	}
+
+	/* Set the style's buffer size and skips:
+	 *	Every buffer has 128 bytes reserved for Linux.
+	 *	The first buffer must also skip the WQE (40 bytes).
+	 *	SRIO also requires skipping its header (skip).
+	 */
+	data = 1 << PKI_STYLE_BUF_WQE_SKIP_SHIFT;
+	data |= ((128 + 40 + skip) / 8) << PKI_STYLE_BUF_FIRST_SKIP_SHIFT;
+	data |= (128 / 8) << PKI_STYLE_BUF_LATER_SKIP_SHIFT;
+	data |= (mb_size & ~0xf) / 8;
+	oct_csr_write(data, PKI_STYLE_BUF(node, style));
+
+	/* Assign the initial style to the port via the pknd. */
+	for (i = 0; i < num_clusters; i++) {
+		data = oct_csr_read(PKI_CL_PKIND_STYLE(node, i, pknd));
+		data &= ~PKI_CL_PKIND_STYLE_STYLE_MASK;
+		data |= style;
+		oct_csr_write(data, PKI_CL_PKIND_STYLE(node, i, pknd));
+	}
+
+	/* Enable red. */
+	data = PKI_AURA_CFG_ENA_RED;
+	oct_csr_write(data, PKI_AURA_CFG(node, aura));
+
+	/* Clear statistic counters. */
+	oct_csr_write(0, PKI_STAT_STAT0(node, pknd));
+	oct_csr_write(0, PKI_STAT_STAT1(node, pknd));
+	oct_csr_write(0, PKI_STAT_STAT3(node, pknd));
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_port_init);
+
+/* octeon3_pki_port_shutdown - Release all the resources used by a port.
+ * @node: Node the port is on.
+ * @pknd: Pknd assigned to the port.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_port_shutdown(int node, int pknd)
+{
+	/* Nothing at the moment. */
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_port_shutdown);
+
+/* octeon3_pki_cluster_init - Loads cluster firmware into the PKI clusters.
+ * @node: Node to configure.
+ * @pdev: Device requesting the firmware.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_cluster_init(int node, struct platform_device *pdev)
+{
+	const struct firmware *pki_fw;
+	const struct fw_hdr *hdr;
+	const u64 *data;
+	int i, rc;
+
+	rc = request_firmware(&pki_fw, PKI_CLUSTER_FIRMWARE, &pdev->dev);
+	if (rc) {
+		dev_err(&pdev->dev, "%s: Failed to load %s error=%d\n",
+			__FILE__, PKI_CLUSTER_FIRMWARE, rc);
+		return rc;
+	}
+
+	/* Verify the firmware is valid. */
+	hdr = (const struct fw_hdr *)pki_fw->data;
+	if ((pki_fw->size - sizeof(const struct fw_hdr) != hdr->size) ||
+	    hdr->size % 8) {
+		dev_err(&pdev->dev, "%s: Corrupted PKI firmware\n", __FILE__);
+		goto err;
+	}
+
+	dev_info(&pdev->dev, "%s: Loading PKI firmware %s\n", __FILE__,
+		 hdr->version);
+	data = hdr->data;
+	for (i = 0; i < hdr->size / 8; i++) {
+		oct_csr_write(cpu_to_be64(*data), PKI_IMEM(node, i));
+		data++;
+	}
+err:
+	release_firmware(pki_fw);
+
+	return rc;
+}
+EXPORT_SYMBOL(octeon3_pki_cluster_init);
+
+/* octeon3_pki_vlan_init - Configure PCAM to recognize the VLAN ethtypes.
+ * @node: Node to configure.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_vlan_init(int node)
+{
+	int i, rc;
+	u64 data;
+
+	/* PKI-20858 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+		for (i = 0; i < 4; i++) {
+			data = oct_csr_read(PKI_CL_ECC_CTL(node, i));
+			data &= ~PKI_CL_ECC_CTL_PCAM_EN;
+			data |= PKI_CL_ECC_CTL_PCAM1_CDIS |
+				PKI_CL_ECC_CTL_PCAM0_CDIS;
+			oct_csr_write(data, PKI_CL_ECC_CTL(node, i));
+		}
+	}
+
+	/* Configure the pcam ethtype0 and ethtype1 terms */
+	for (i = ETHTYPE0; i <= ETHTYPE1; i++) {
+		struct pcam_term_info	term_info;
+
+		/* Term for 0x8100 ethtype */
+		term_info.term = i;
+		term_info.term_mask = 0xfd;
+		term_info.style = 0;
+		term_info.style_mask = 0;
+		term_info.data = 0x81000000;
+		term_info.data_mask = 0xffff0000;
+		rc = octeon3_pki_pcam_write_entry(node, &term_info);
+		if (rc)
+			return rc;
+
+		/* Term for 0x88a8 ethtype */
+		term_info.data = 0x88a80000;
+		rc = octeon3_pki_pcam_write_entry(node, &term_info);
+		if (rc)
+			return rc;
+
+		/* Term for 0x9200 ethtype */
+		term_info.data = 0x92000000;
+		rc = octeon3_pki_pcam_write_entry(node, &term_info);
+		if (rc)
+			return rc;
+
+		/* Term for 0x9100 ethtype */
+		term_info.data = 0x91000000;
+		rc = octeon3_pki_pcam_write_entry(node, &term_info);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_vlan_init);
+
+/* octeon3_pki_ltype_init - Configures the PKI layer types.
+ * @node: Node to configure.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+int octeon3_pki_ltype_init(int node)
+{
+	enum pki_ltype ltype;
+	u64 data;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dflt_ltype_config); i++) {
+		ltype = dflt_ltype_config[i].ltype;
+		data = oct_csr_read(PKI_LTYPE_MAP(node, ltype));
+		data &= ~PKI_LTYPE_MAP_BELTYPE_MASK;
+		data |= dflt_ltype_config[i].beltype;
+		oct_csr_write(data, PKI_LTYPE_MAP(node, ltype));
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_ltype_init);
+
+int octeon3_pki_srio_init(int node, int pknd)
+{
+	int i, num_clusters, style;
+	u64 data;
+
+	num_clusters = get_num_clusters();
+	for (i = 0; i < num_clusters; i++) {
+		data = oct_csr_read(PKI_CL_PKIND_STYLE(node, i, pknd));
+		style = data & PKI_CL_PKIND_STYLE_STYLE_MASK;
+		data &= ~PKI_CL_PKIND_STYLE_PARSE_MODE_MASK;
+		oct_csr_write(data, PKI_CL_PKIND_STYLE(node, i, pknd));
+
+		/* Disable packet length errors and FCS. */
+		data = oct_csr_read(PKI_CL_STYLE_CFG(node, i, style));
+		data &= ~(PKI_CL_STYLE_CFG_LENERR_EN |
+			  PKI_CL_STYLE_CFG_MAXERR_EN |
+			  PKI_CL_STYLE_CFG_MINERR_EN |
+			  PKI_CL_STYLE_CFG_FCS_STRIP |
+			  PKI_CL_STYLE_CFG_FCS_CHK);
+		oct_csr_write(data, PKI_CL_STYLE_CFG(node, i, style));
+
+		/* Packets have no FCS. */
+		data = oct_csr_read(PKI_CL_PKIND_CFG(node, i, pknd));
+		data &= ~PKI_CL_PKIND_CFG_FCS_PRES;
+		oct_csr_write(data, PKI_CL_PKIND_CFG(node, i, pknd));
+
+		/* Skip the SRIO header and the INST_HDR_S data. */
+		data = oct_csr_read(PKI_CL_PKIND_SKIP(node, i, pknd));
+		data &= ~(PKI_CL_PKIND_SKIP_FCS_SKIP_MASK |
+			  PKI_CL_PKIND_SKIP_INST_SKIP_MASK);
+		data |= (16 << PKI_CL_PKIND_SKIP_FCS_SKIP_SHIFT) | 16;
+		oct_csr_write(data, PKI_CL_PKIND_SKIP(node, i, pknd));
+
+		/* Exclude port number from Qpg. */
+		data = oct_csr_read(PKI_CL_STYLE_ALG(node, i, style));
+		data &= ~PKI_CL_STYLE_ALG_QPG_PORT_MSB_MASK;
+		oct_csr_write(data, PKI_CL_STYLE_ALG(node, i, style));
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_srio_init);
+
+int octeon3_pki_enable(int node)
+{
+	int timeout;
+	u64 data;
+
+	/* Enable backpressure. */
+	data = oct_csr_read(PKI_BUF_CTL(node));
+	data |= PKI_BUF_CTL_PBP_EN;
+	oct_csr_write(data, PKI_BUF_CTL(node));
+
+	/* Enable cluster parsing. */
+	data = oct_csr_read(PKI_ICG_CFG(node));
+	data |= PKI_ICG_CFG_PENA;
+	oct_csr_write(data, PKI_ICG_CFG(node));
+
+	/* Wait until the PKI is out of reset. */
+	timeout = 10000;
+	do {
+		data = oct_csr_read(PKI_SFT_RST(node));
+		if (!(data & PKI_SFT_RST_BUSY))
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout) {
+		pr_err("%s: timeout waiting for reset\n", __FILE__);
+		return -1;
+	}
+
+	/* Enable the PKI. */
+	data = oct_csr_read(PKI_BUF_CTL(node));
+	data |= PKI_BUF_CTL_PKI_EN;
+	oct_csr_write(data, PKI_BUF_CTL(node));
+
+	/* Statistics are kept per pknd. */
+	oct_csr_write(0, PKI_STAT_CTL(node));
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pki_enable);
+
+void octeon3_pki_shutdown(int node)
+{
+	struct global_resource_tag tag;
+	int i, j, k, timeout;
+	char buf[16];
+	u64 data;
+
+	/* Disable the PKI. */
+	data = oct_csr_read(PKI_BUF_CTL(node));
+	if (data & PKI_BUF_CTL_PKI_EN) {
+		data &= ~PKI_BUF_CTL_PKI_EN;
+		oct_csr_write(data, PKI_BUF_CTL(node));
+
+		/* Wait until the PKI has finished processing packets. */
+		timeout = 10000;
+		do {
+			data = oct_csr_read(PKI_SFT_RST(node));
+			if (data & PKI_SFT_RST_ACTIVE)
+				break;
+			timeout--;
+			udelay(1);
+		} while (timeout);
+		if (!timeout)
+			pr_warn("%s: disable timeout\n", __FILE__);
+	}
+
+	/* Give all prefetched buffers back to the FPA. */
+	data = oct_csr_read(PKI_BUF_CTL(node));
+	data |= PKI_BUF_CTL_FPA_WAIT | PKI_BUF_CTL_PKT_OFF;
+	oct_csr_write(data, PKI_BUF_CTL(node));
+
+	/* Dummy read to get the register write to take effect. */
+	data = oct_csr_read(PKI_BUF_CTL(node));
+
+	/* Now we can reset the PKI. */
+	data = oct_csr_read(PKI_SFT_RST(node));
+	data |= PKI_SFT_RST_RST;
+	oct_csr_write(data, PKI_SFT_RST(node));
+	timeout = 10000;
+	do {
+		data = oct_csr_read(PKI_SFT_RST(node));
+		if ((data & PKI_SFT_RST_BUSY) == 0)
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout)
+		pr_warn("%s: reset timeout\n", __FILE__);
+
+	/* Free all the allocated resources. */
+	for (i = 0; i < PKI_NUM_STYLES; i++) {
+		strncpy((char *)&tag.lo, "cvm_styl", 8);
+		snprintf(buf, 16, "e_%d.....", node);
+		memcpy(&tag.hi, buf, 8);
+		res_mgr_free(tag, i);
+	}
+	for (i = 0; i < PKI_NUM_QPG_ENTRIES; i++) {
+		strncpy((char *)&tag.lo, "cvm_qpge", 8);
+		snprintf(buf, 16, "t_%d.....", node);
+		memcpy(&tag.hi, buf, 8);
+		res_mgr_free(tag, i);
+	}
+	for (i = 0; i < get_num_clusters(); i++) {
+		for (j = 0; j < PKI_MAX_PCAM_BANKS; j++) {
+			strncpy((char *)&tag.lo, "cvm_pcam", 8);
+			snprintf(buf, 16, "_%d%d%d....", node, i, j);
+			memcpy(&tag.hi, buf, 8);
+			for (k = 0; k < PKI_MAX_PCAM_BANK_ENTRIES; k++)
+				res_mgr_free(tag, k);
+		}
+	}
+
+	/* Restore the registers back to their reset state. */
+	for (i = 0; i < get_num_clusters(); i++) {
+		for (j = 0; j < PKI_MAX_PKNDS; j++) {
+			oct_csr_write(0, PKI_CL_PKIND_CFG(node, i, j));
+			oct_csr_write(0, PKI_CL_PKIND_STYLE(node, i, j));
+			oct_csr_write(0, PKI_CL_PKIND_SKIP(node, i, j));
+			oct_csr_write(0, PKI_CL_PKIND_L2_CUSTOM(node, i, j));
+			oct_csr_write(0, PKI_CL_PKIND_LG_CUSTOM(node, i, j));
+		}
+		for (j = 0; j < PKI_NUM_FINAL_STYLES; j++) {
+			oct_csr_write(0, PKI_CL_STYLE_CFG(node, i, j));
+			oct_csr_write(0, PKI_CL_STYLE_CFG2(node, i, j));
+			oct_csr_write(0, PKI_CL_STYLE_ALG(node, i, j));
+		}
+	}
+	for (i = 0; i < PKI_NUM_FINAL_STYLES; i++)
+		oct_csr_write((5 << PKI_STYLE_BUF_FIRST_SKIP_SHIFT) | 32,
+			      PKI_STYLE_BUF(node, i));
+}
+EXPORT_SYMBOL(octeon3_pki_shutdown);
+
+MODULE_LICENSE("GPL");
+MODULE_FIRMWARE(PKI_CLUSTER_FIRMWARE);
+MODULE_AUTHOR("Carlos Munoz <cmunoz@cavium.com>");
+MODULE_DESCRIPTION("Octeon III PKI management.");
diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-pki.h b/drivers/net/ethernet/cavium/octeon/octeon3-pki.h
new file mode 100644
index 0000000..6c3899e
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-pki.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Octeon III Packet Input Unit (PKI)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#ifndef _OCTEON3_PKI_H_
+#define _OCTEON3_PKI_H_
+
+#include <linux/bitops.h>
+
+#define PKI_AURA_CFG_ENA_RED			BIT(18)
+
+#define PKI_BUF_CTL_FPA_WAIT			BIT(10)
+#define PKI_BUF_CTL_PKT_OFF			BIT(5)
+#define PKI_BUF_CTL_PBP_EN			BIT(2)
+#define PKI_BUF_CTL_PKI_EN			BIT(0)
+
+#define PKI_CL_ECC_CTL_PCAM_EN			BIT(63)
+#define PKI_CL_ECC_CTL_PCAM1_CDIS		BIT(4)
+#define PKI_CL_ECC_CTL_PCAM0_CDIS		BIT(3)
+
+#define PKI_CL_PCAM_ACTION_ADVANCE_4B		BIT(2)
+#define PKI_CL_PCAM_ACTION_L2_CUSTOM		BIT(1)
+#define PKI_CL_PCAM_ACTION_SETTY_SHIFT		8
+#define PKI_CL_PCAM_MATCH_DATA1_SHIFT		32
+#define PKI_CL_PCAM_TERM_VALID			BIT(63)
+#define PKI_CL_PCAM_TERM_TERM1_SHIFT		40
+#define PKI_CL_PCAM_TERM_STYLE1_SHIFT		32
+#define PKI_CL_PCAM_TERM_TERM0_SHIFT		8
+
+#define PKI_CL_PKIND_CFG_FCS_PRES		BIT(7)
+
+#define PKI_CL_PKIND_SKIP_FCS_SKIP_MASK		GENMASK_ULL(15, 8)
+#define PKI_CL_PKIND_SKIP_FCS_SKIP_SHIFT	8
+#define PKI_CL_PKIND_SKIP_INST_SKIP_MASK	GENMASK_ULL(7, 0)
+
+#define PKI_CL_PKIND_STYLE_PARSE_MODE_MASK	GENMASK_ULL(14, 8)
+#define PKI_CL_PKIND_STYLE_STYLE_MASK		GENMASK_ULL(7, 0)
+
+#define PKI_CL_STYLE_ALG_QPG_PORT_MSB_MASK	GENMASK_ULL(20, 17)
+#define PKI_CL_STYLE_ALG_TT_SHIFT		30
+
+#define PKI_CL_STYLE_CFG_LENERR_EN		BIT(29)
+#define PKI_CL_STYLE_CFG_MAXERR_EN		BIT(26)
+#define PKI_CL_STYLE_CFG_MINERR_EN		BIT(25)
+#define PKI_CL_STYLE_CFG_FCS_STRIP		BIT(23)
+#define PKI_CL_STYLE_CFG_FCS_CHK		BIT(22)
+
+#define PKI_CL_STYLE_CFG2_LEN_TAG_SRC_LF	BIT(22)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_SRC_LE	BIT(21)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_SRC_LD	BIT(20)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_SRC_LC	BIT(19)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_DST_LF	BIT(16)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_DST_LE	BIT(15)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_DST_LD	BIT(14)
+#define PKI_CL_STYLE_CFG2_LEN_TAG_DST_LC	BIT(13)
+#define PKI_CL_STYLE_CFG2_LEN_LF		BIT(10)
+#define PKI_CL_STYLE_CFG2_LEN_LC		BIT(7)
+#define PKI_CL_STYLE_CFG2_CSUM_LF		BIT(4)
+#define PKI_CL_STYLE_CFG2_CSUM_LE		BIT(3)
+#define PKI_CL_STYLE_CFG2_CSUM_LD		BIT(2)
+#define PKI_CL_STYLE_CFG2_CSUM_LC		BIT(1)
+
+#define PKI_CL_STYLE_CFG2_CSUM_ALL	(PKI_CL_STYLE_CFG2_CSUM_LF |	       \
+					 PKI_CL_STYLE_CFG2_CSUM_LE |	       \
+					 PKI_CL_STYLE_CFG2_CSUM_LD |	       \
+					 PKI_CL_STYLE_CFG2_CSUM_LC)
+
+#define PKI_CL_STYLE_CFG2_LEN_TAG_ALL	(PKI_CL_STYLE_CFG2_LEN_TAG_DST_LF |    \
+					 PKI_CL_STYLE_CFG2_LEN_TAG_DST_LE |    \
+					 PKI_CL_STYLE_CFG2_LEN_TAG_DST_LD |    \
+					 PKI_CL_STYLE_CFG2_LEN_TAG_DST_LC)
+
+#define PKI_ICG_CFG_PENA			BIT(24)
+
+#define PKI_LTYPE_MAP_BELTYPE_MASK		GENMASK_ULL(2, 0)
+
+#define PKI_QPG_TBL_PADD_MASK			GENMASK_ULL(59, 48)
+#define PKI_QPG_TBL_GRPTAG_OK_MASK		GENMASK_ULL(47, 45)
+#define PKI_QPG_TBL_GRPTAG_OK_SHIFT		45
+#define PKI_QPG_TBL_GRP_OK_MASK			GENMASK_ULL(41, 32)
+#define PKI_QPG_TBL_GRP_OK_SHIFT		32
+#define PKI_QPG_TBL_GRPTAG_BAD_MASK		GENMASK_ULL(31, 29)
+#define PKI_QPG_TBL_GRPTAG_BAD_SHIFT		29
+#define PKI_QPG_TBL_GRP_BAD_MASK		GENMASK_ULL(25, 16)
+#define PKI_QPG_TBL_GRP_BAD_SHIFT		16
+#define PKI_QPG_TBL_LAURA_MASK			GENMASK_ULL(9, 0)
+#define PKI_QPG_TBL_GRP_NUM_BITS_SHIFT		8
+
+#define PKI_SFT_RST_BUSY			BIT(63)
+#define PKI_SFT_RST_ACTIVE			BIT(32)
+#define PKI_SFT_RST_RST				BIT(0)
+
+#define PKI_STYLE_BUF_WQE_SKIP_SHIFT	28
+#define PKI_STYLE_BUF_FIRST_SKIP_SHIFT	22
+#define PKI_STYLE_BUF_LATER_SKIP_SHIFT	16
+
+/* Values for wqe word2 [ERRLEV] */
+#define PKI_ERRLEV_LA		0x01
+
+/* Values for wqe word2 [OPCODE] */
+#define PKI_OPCODE_NONE		0x00
+#define PKI_OPCODE_JABBER	0x02
+#define PKI_OPCODE_FCS		0x07
+
+/* Values for layer type in wqe */
+#define PKI_LTYPE_IP4		0x08
+#define PKI_LTYPE_IP6		0x0a
+#define PKI_LTYPE_TCP		0x10
+#define PKI_LTYPE_UDP		0x11
+#define PKI_LTYPE_SCTP		0x12
+
+#endif /* _OCTEON3_PKI_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 06/10] netdev: cavium: octeon: Add Octeon III PKO Support
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (4 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 05/10] netdev: cavium: octeon: Add Octeon III PKI Support Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 07/10] netdev: cavium: octeon: Add Octeon III SSO Support Steven J. Hill
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add support for Octeon III PKO logic block for BGX Ethernet.
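
The PKO schedules transmit traffic through a tree of queues: a port
queue (PQ) per output MAC, scheduler queues (L2_SQ..L5_SQ on cn78xx,
L2_SQ/L3_SQ on cn73xx/cnf75xx), and descriptor queues (DQ) at the
leaves. As a sketch only (the file's internal helpers are model
dependent and walk max_sq_level()), wiring one port on a three-level
part looks roughly like:

	port_queue_init(node, pq, mac);
	scheduler_queue_l2_init(node, l2_sq, pq);
	scheduler_queue_l3_init(node, l3_sq, l2_sq);
	descriptor_queue_init(node, dqs, l3_sq, num_dq);
	map_channel(node, pq, sq, ipd_port); /* sq is the L2/L3 queue carrying the channel */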

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/octeon/octeon3-pko.c | 1638 ++++++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-pko.h |  159 +++
 2 files changed, 1797 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pko.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-pko.h

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-pko.c b/drivers/net/ethernet/cavium/octeon/octeon3-pko.c
new file mode 100644
index 0000000..238bf51
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-pko.c
@@ -0,0 +1,1638 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III Packet-Output Processing Unit (PKO)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+
+#include "octeon3.h"
+
+#define PKO_MAX_FIFO_GROUPS		8
+#define PKO_MAX_OUTPUT_MACS		28
+#define PKO_FIFO_SIZE			2560
+
+#define PKO_BASE			0x1540000000000ull
+#define PKO_ADDR(n)			(PKO_BASE + SET_XKPHYS + NODE_OFFSET(n))
+#define PKO_Q_ADDR(n, q)		(PKO_ADDR(n) + ((q) << 9))
+
+#define PKO_CHANNEL_LEVEL(n)		(PKO_ADDR(n) + 0x0800f0)
+#define PKO_DPFI_ENA(n)			(PKO_ADDR(n) + 0xc00018)
+#define PKO_DPFI_FLUSH(n)		(PKO_ADDR(n) + 0xc00008)
+#define PKO_DPFI_FPA_AURA(n)		(PKO_ADDR(n) + 0xc00010)
+#define PKO_DPFI_STATUS(n)		(PKO_ADDR(n) + 0xc00000)
+#define PKO_DQ_SCHEDULE(n, q)		(PKO_Q_ADDR(n, q) + 0x280008)
+#define PKO_DQ_SW_XOFF(n, q)		(PKO_Q_ADDR(n, q) + 0x2800e0)
+#define PKO_DQ_TOPOLOGY(n, q)		(PKO_Q_ADDR(n, q) + 0x300000)
+#define PKO_DQ_WM_CTL(n, q)		(PKO_Q_ADDR(n, q) + 0x000040)
+#define PKO_ENABLE(n)			(PKO_ADDR(n) + 0xd00008)
+#define PKO_L1_SQ_LINK(n, q)		(PKO_Q_ADDR(n, q) + 0x000038)
+#define PKO_L1_SQ_SHAPE(n, q)		(PKO_Q_ADDR(n, q) + 0x000010)
+#define PKO_L1_SQ_TOPOLOGY(n, q)	(PKO_Q_ADDR(n, q) + 0x080000)
+#define PKO_L2_SQ_SCHEDULE(n, q)	(PKO_Q_ADDR(n, q) + 0x080008)
+#define PKO_L2_SQ_TOPOLOGY(n, q)	(PKO_Q_ADDR(n, q) + 0x100000)
+#define PKO_L3_L2_SQ_CHANNEL(n, q)	(PKO_Q_ADDR(n, q) + 0x080038)
+#define PKO_L3_SQ_SCHEDULE(n, q)	(PKO_Q_ADDR(n, q) + 0x100008)
+#define PKO_L3_SQ_TOPOLOGY(n, q)	(PKO_Q_ADDR(n, q) + 0x180000)
+#define PKO_L4_SQ_SCHEDULE(n, q)	(PKO_Q_ADDR(n, q) + 0x180008)
+#define PKO_L4_SQ_TOPOLOGY(n, q)	(PKO_Q_ADDR(n, q) + 0x200000)
+#define PKO_L5_SQ_SCHEDULE(n, q)	(PKO_Q_ADDR(n, q) + 0x200008)
+#define PKO_L5_SQ_TOPOLOGY(n, q)	(PKO_Q_ADDR(n, q) + 0x280000)
+#define PKO_LUT(n, c)			(PKO_ADDR(n) + ((c) << 3) + 0xb00000)
+#define PKO_MAC_CFG(n, m)		(PKO_ADDR(n) + ((m) << 3) + 0x900000)
+#define PKO_MCI0_MAX_CRED(n, m)		(PKO_ADDR(n) + ((m) << 3) + 0xa00000)
+#define PKO_MCI1_MAX_CRED(n, m)		(PKO_ADDR(n) + ((m) << 3) + 0xa80000)
+#define PKO_PDM_CFG(n)			(PKO_ADDR(n) + 0x800000)
+#define PKO_PDM_DQ_MINPAD(n, q)		(PKO_ADDR(n) + ((q) << 3) + 0x8f0000)
+#define PKO_PTF_IOBP_CFG(n)		(PKO_ADDR(n) + 0x900300)
+#define PKO_PTF_STATUS(n, f)		(PKO_ADDR(n) + ((f) << 3) + 0x900100)
+#define PKO_PTGF_CFG(n, g)		(PKO_ADDR(n) + ((g) << 3) + 0x900200)
+#define PKO_SHAPER_CFG(n)		(PKO_ADDR(n) + 0x0800f8)
+#define PKO_STATUS(n)			(PKO_ADDR(n) + 0xd00000)
+
+/* These levels mimic the PKO internal linked queue structure */
+enum queue_level {
+	PQ = 1,
+	L2_SQ = 2,
+	L3_SQ = 3,
+	L4_SQ = 4,
+	L5_SQ = 5,
+	DQ = 6
+};
+
+enum pko_dqop_e {
+	DQOP_SEND,
+	DQOP_OPEN,
+	DQOP_CLOSE,
+	DQOP_QUERY
+};
+
+enum pko_dqstatus_e {
+	PASS = 0,
+	BADSTATE = 0x8,
+	NOFPABUF = 0x9,
+	NOPKOBUF = 0xa,
+	FAILRTNPTR = 0xb,
+	ALREADY = 0xc,
+	NOTCREATED = 0xd,
+	NOTEMPTY = 0xe,
+	SENDPKTDROP = 0xf
+};
+
+struct mac_info {
+	int fifo_cnt;
+	int prio;
+	int speed;
+	int fifo;
+	int num_lmacs;
+};
+
+struct fifo_grp_info {
+	int speed;
+	int size;
+};
+
+static const int lut_index_78xx[] = {
+	0x200,
+	0x240,
+	0x280,
+	0x2c0,
+	0x300,
+	0x340
+};
+
+static const int lut_index_73xx[] = {
+	0x000,
+	0x040,
+	0x080
+};
+
+static enum queue_level max_sq_level(void)
+{
+	/* 73xx and 75xx only have 3 scheduler queue levels */
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return L3_SQ;
+	return L5_SQ;
+}
+
+static int get_num_fifos(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 16;
+	return 28;
+}
+
+static int get_num_fifo_groups(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 5;
+	return 8;
+}
+
+static int get_num_output_macs(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 28;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 10;
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 14;
+	return 0;
+}
+
+static int get_output_mac(int interface, int index,
+			  enum octeon3_mac_type mac_type)
+{
+	int mac;
+
+	/* Output macs are hardcoded in the hardware. See PKO Output MACs
+	 * section in the HRM.
+	 */
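+	/* For example, on cn78xx interface 1 index 2 maps to output
+	 * mac 4 + 4 * 1 + 2 = 10.
+	 */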
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		if (mac_type == SRIO_MAC)
+			mac = 4 + 2 * interface + index;
+		else
+			mac = 2 + 4 * interface + index;
+	} else {
+		mac = 4 + 4 * interface + index;
+	}
+	return mac;
+}
+
+static int get_num_port_queues(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 16;
+	return 32;
+}
+
+static int allocate_queues(int node, enum queue_level level, int num_queues,
+			   int *queues)
+{
+	struct global_resource_tag tag;
+	int rc, max_queues = 0;
+	char buf[16];
+
+	if (level == PQ) {
+		strncpy((char *)&tag.lo, "cvm_pkop", 8);
+		snprintf(buf, 16, "oq_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 32;
+		else
+			max_queues = 16;
+	} else if (level == L2_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "2q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 512;
+		else
+			max_queues = 256;
+	} else if (level == L3_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "3q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 512;
+		else
+			max_queues = 256;
+	} else if (level == L4_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "4q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 1024;
+		else
+			max_queues = 0;
+	} else if (level == L5_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "5q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 1024;
+		else
+			max_queues = 0;
+	} else if (level == DQ) {
+		strncpy((char *)&tag.lo, "cvm_pkod", 8);
+		snprintf(buf, 16, "eq_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			max_queues = 1024;
+		else
+			max_queues = 256;
+	}
+
+	res_mgr_create_resource(tag, max_queues);
+	rc = res_mgr_alloc_range(tag, -1, num_queues, false, queues);
+	if (rc < 0)
+		return rc;
+
+	return 0;
+}
+
+static void free_queues(int node, enum queue_level level, int num_queues,
+			const int *queues)
+{
+	struct global_resource_tag tag;
+	char buf[16];
+
+	if (level == PQ) {
+		strncpy((char *)&tag.lo, "cvm_pkop", 8);
+		snprintf(buf, 16, "oq_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	} else if (level == L2_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "2q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	} else if (level == L3_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "3q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	} else if (level == L4_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "4q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	} else if (level == L5_SQ) {
+		strncpy((char *)&tag.lo, "cvm_pkol", 8);
+		snprintf(buf, 16, "5q_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	} else if (level == DQ) {
+		strncpy((char *)&tag.lo, "cvm_pkod", 8);
+		snprintf(buf, 16, "eq_%d....", node);
+		memcpy(&tag.hi, buf, 8);
+	}
+
+	res_mgr_free_range(tag, queues, num_queues);
+}
+
+static int port_queue_init(int node, int pq, int mac)
+{
+	u64 data;
+
+	data = mac << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+	oct_csr_write(data, PKO_L1_SQ_TOPOLOGY(node, pq));
+
+	data = mac << PKO_L1_SQ_SHAPE_LINK_SHIFT;
+	oct_csr_write(data, PKO_L1_SQ_SHAPE(node, pq));
+
+	data = mac;
+	data <<= PKO_L1_SQ_LINK_LINK_SHIFT;
+	oct_csr_write(data, PKO_L1_SQ_LINK(node, pq));
+
+	return 0;
+}
+
+static int scheduler_queue_l2_init(int node, int queue, int parent_q)
+{
+	u64 data;
+
+	data = oct_csr_read(PKO_L1_SQ_TOPOLOGY(node, parent_q));
+	data &= ~(PKO_L12_SQ_TOPOLOGY_PRIO_ANCHOR_MASK |
+		  PKO_SQ_TOPOLOGY_RR_PRIO_MASK);
+	data |= (u64)queue << PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT;
+	data |= PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER << PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT;
+	oct_csr_write(data, PKO_L1_SQ_TOPOLOGY(node, parent_q));
+
+	oct_csr_write(0, PKO_L2_SQ_SCHEDULE(node, queue));
+
+	data = parent_q << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+	oct_csr_write(data, PKO_L2_SQ_TOPOLOGY(node, queue));
+
+	return 0;
+}
+
+static int scheduler_queue_l3_init(int node, int queue, int parent_q)
+{
+	u64 data;
+
+	data = oct_csr_read(PKO_L2_SQ_TOPOLOGY(node, parent_q));
+	data &= ~(PKO_L345_SQ_TOPOLOGY_PRIO_ANCHOR_MASK |
+		  PKO_SQ_TOPOLOGY_RR_PRIO_MASK);
+	data |= (u64)queue << PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT;
+	data |= PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER << PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT;
+	oct_csr_write(data, PKO_L2_SQ_TOPOLOGY(node, parent_q));
+
+	oct_csr_write(0, PKO_L3_SQ_SCHEDULE(node, queue));
+
+	data = parent_q << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+	oct_csr_write(data, PKO_L3_SQ_TOPOLOGY(node, queue));
+
+	return 0;
+}
+
+static int scheduler_queue_l4_init(int node, int queue, int parent_q)
+{
+	u64 data;
+
+	data = oct_csr_read(PKO_L3_SQ_TOPOLOGY(node, parent_q));
+	data &= ~(PKO_L345_SQ_TOPOLOGY_PRIO_ANCHOR_MASK |
+		  PKO_SQ_TOPOLOGY_RR_PRIO_MASK);
+	data |= (u64)queue << PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT;
+	data |= PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER << PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT;
+	oct_csr_write(data, PKO_L3_SQ_TOPOLOGY(node, parent_q));
+
+	oct_csr_write(0, PKO_L4_SQ_SCHEDULE(node, queue));
+
+	data = parent_q << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+	oct_csr_write(data, PKO_L4_SQ_TOPOLOGY(node, queue));
+
+	return 0;
+}
+
+static int scheduler_queue_l5_init(int node, int queue, int parent_q)
+{
+	u64 data;
+
+	data = oct_csr_read(PKO_L4_SQ_TOPOLOGY(node, parent_q));
+	data &= ~(PKO_L345_SQ_TOPOLOGY_PRIO_ANCHOR_MASK |
+		  PKO_SQ_TOPOLOGY_RR_PRIO_MASK);
+	data |= (u64)queue << PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT;
+	data |= PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER << PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT;
+	oct_csr_write(data, PKO_L4_SQ_TOPOLOGY(node, parent_q));
+
+	oct_csr_write(0, PKO_L5_SQ_SCHEDULE(node, queue));
+
+	data = parent_q << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+	oct_csr_write(data, PKO_L5_SQ_TOPOLOGY(node, queue));
+
+	return 0;
+}
+
+static int descriptor_queue_init(int node, const int *queue, int parent_q,
+				 int num_dq)
+{
+	int i, prio, rr_prio, rr_quantum;
+	u64 addr, data;
+
+	/* Limit static priorities to the available prio field bits */
+	if (num_dq > 9) {
+		pr_err("%s: Invalid number of dqs\n", __FILE__);
+		return -1;
+	}
+
+	prio = 0;
+
+	if (num_dq == 1) {
+		/* Single dq */
+		rr_prio = PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER;
+		rr_quantum = 0x10;
+	} else {
+		/* Multiple dqs */
+		rr_prio = num_dq;
+		rr_quantum = 0;
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		addr = PKO_L5_SQ_TOPOLOGY(node, parent_q);
+	else
+		addr = PKO_L3_SQ_TOPOLOGY(node, parent_q);
+
+	data = oct_csr_read(addr);
+	data &= ~(PKO_L345_SQ_TOPOLOGY_PRIO_ANCHOR_MASK |
+		  PKO_SQ_TOPOLOGY_RR_PRIO_MASK);
+	data |= (u64)queue[0] << PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT;
+	data |= rr_prio << PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT;
+	oct_csr_write(data, addr);
+
+	for (i = 0; i < num_dq; i++) {
+		data = (prio << PKO_DQ_SCHEDULE_PRIO_SHIFT) | rr_quantum;
+		oct_csr_write(data, PKO_DQ_SCHEDULE(node, queue[i]));
+
+		data = parent_q << PKO_SQ_TOPOLOGY_LINK_SHIFT;
+		oct_csr_write(data, PKO_DQ_TOPOLOGY(node, queue[i]));
+
+		data = PKO_DQ_WM_CTL_KIND;
+		oct_csr_write(data, PKO_DQ_WM_CTL(node, queue[i]));
+
+		if (prio < rr_prio)
+			prio++;
+	}
+
+	return 0;
+}
+
+static int map_channel(int node, int pq, int queue, int ipd_port)
+{
+	int table_index, lut_index = 0;
+	u64 data;
+
+	data = oct_csr_read(PKO_L3_L2_SQ_CHANNEL(node, queue));
+	data &= ~PKO_L3_L2_CHANNEL_CC_CHANNEL_MASK;
+	data |= (u64)ipd_port << PKO_L3_L2_CHANNEL_CC_CHANNEL_SHIFT;
+	oct_csr_write(data, PKO_L3_L2_SQ_CHANNEL(node, queue));
+
+	/* See PKO_LUT register description in the HRM for how to compose the
+	 * lut_index.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		table_index = ((ipd_port & PKO_LUT_PORT_MASK) -
+			PKO_LUT_PORT_TO_INDEX) >> PKO_LUT_QUEUE_NUMBER_SHIFT;
+		lut_index = lut_index_78xx[table_index];
+		lut_index += ipd_port & PKO_LUT_QUEUE_NUMBER_MASK;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		table_index = ((ipd_port & PKO_LUT_PORT_MASK) -
+			PKO_LUT_PORT_TO_INDEX) >> PKO_LUT_QUEUE_NUMBER_SHIFT;
+		lut_index = lut_index_73xx[table_index];
+		lut_index += ipd_port & PKO_LUT_QUEUE_NUMBER_MASK;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		if ((ipd_port & PKO_LUT_PORT_MASK) != PKO_LUT_PORT_TO_INDEX)
+			return -1;
+		lut_index = ipd_port & PKO_LUT_QUEUE_NUMBER_MASK;
+	}
+
+	data = PKO_LUT_VALID;
+	data |= pq << PKO_LUT_PQ_IDX_SHIFT;
+	data |= queue;
+	oct_csr_write(data, PKO_LUT(node, lut_index));
+
+	return 0;
+}
+
+static int open_dq(int node, int dq)
+{
+	u64 data, *iobdma_addr, *scratch_addr;
+	enum pko_dqstatus_e status;
+
+	/* Build the dq open query. See PKO_QUERY_DMA_S in the HRM for the
+	 * query format.
+	 */
+	data = (LMTDMA_SCR_OFFSET >> PKO_LMTDMA_SCRADDR_SHIFT)
+	       << PKO_QUERY_DMA_SCRADDR_SHIFT;
+	data |= 1ull << PKO_QUERY_DMA_RTNLEN_SHIFT;	/* Must always be 1 */
+	data |= 0x51ull << PKO_QUERY_DMA_DID_SHIFT;	/* Must always be 51h */
+	data |= (u64)node << PKO_QUERY_DMA_NODE_SHIFT;
+	data |= (u64)DQOP_OPEN << PKO_QUERY_DMA_DQOP_SHIFT;
+	data |= dq << PKO_QUERY_DMA_DQ_SHIFT;
+
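+	/* The response is returned through the per-core scratchpad, so stay
+	 * on this cpu until the result has been read back.
+	 */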
+	CVMX_SYNCWS;
+	preempt_disable();
+
+	/* Clear return location */
+	scratch_addr = (u64 *)(SCRATCH_BASE_ADDR + LMTDMA_SCR_OFFSET);
+	*scratch_addr = ~0ull;
+
+	/* Issue pko lmtdma command */
+	iobdma_addr = (u64 *)(IOBDMA_ORDERED_IO_ADDR);
+	*iobdma_addr = data;
+
+	/* Wait for lmtdma command to complete and get response */
+	CVMX_SYNCIOBDMA;
+	data = *scratch_addr;
+
+	preempt_enable();
+
+	/* See PKO_QUERY_RTN_S in the HRM for response format */
+	status = (data & PKO_QUERY_RTN_DQSTATUS_MASK)
+		 >> PKO_QUERY_RTN_DQSTATUS_SHIFT;
+	if (status != PASS && status != ALREADY) {
+		pr_err("%s: Failed to open dq\n", __FILE__);
+		return -1;
+	}
+
+	return 0;
+}
+
+static s64 query_dq(int node, int dq)
+{
+	u64 data, *iobdma_addr, *scratch_addr;
+	enum pko_dqstatus_e status;
+	s64 depth;
+
+	/* Build the dq query command. See PKO_QUERY_DMA_S in the HRM for the
+	 * query format.
+	 */
+	data = (LMTDMA_SCR_OFFSET >> PKO_LMTDMA_SCRADDR_SHIFT) <<
+		PKO_QUERY_DMA_SCRADDR_SHIFT;
+	data |= 1ull << PKO_QUERY_DMA_RTNLEN_SHIFT;	/* Must always be 1 */
+	data |= 0x51ull << PKO_QUERY_DMA_DID_SHIFT;	/* Must always be 51h */
+	data |= (u64)node << PKO_QUERY_DMA_NODE_SHIFT;
+	data |= (u64)DQOP_QUERY << PKO_QUERY_DMA_DQOP_SHIFT;
+	data |= dq << PKO_QUERY_DMA_DQ_SHIFT;
+
+	CVMX_SYNCWS;
+	preempt_disable();
+
+	/* Clear return location */
+	scratch_addr = (u64 *)(SCRATCH_BASE_ADDR + LMTDMA_SCR_OFFSET);
+	*scratch_addr = ~0ull;
+
+	/* Issue pko lmtdma command */
+	iobdma_addr = (u64 *)(IOBDMA_ORDERED_IO_ADDR);
+	*iobdma_addr = data;
+
+	/* Wait for lmtdma command to complete and get response */
+	CVMX_SYNCIOBDMA;
+	data = *scratch_addr;
+
+	preempt_enable();
+
+	/* See PKO_QUERY_RTN_S in the HRM for response format */
+	status = (data & PKO_QUERY_RTN_DQSTATUS_MASK)
+		 >> PKO_QUERY_RTN_DQSTATUS_SHIFT;
+	if (status != PASS) {
+		pr_err("%s: Failed to query dq=%d\n", __FILE__, dq);
+		return -1;
+	}
+
+	depth = data & PKO_QUERY_RTN_DEPTH_MASK;
+
+	return depth;
+}
+
+static u64 close_dq(int node, int dq)
+{
+	u64 data, *iobdma_addr, *scratch_addr;
+	enum pko_dqstatus_e status;
+
+	/* Build the dq close command. See PKO_QUERY_DMA_S in the HRM for the
+	 * query format.
+	 */
+	data = (LMTDMA_SCR_OFFSET >> PKO_LMTDMA_SCRADDR_SHIFT) <<
+		PKO_QUERY_DMA_SCRADDR_SHIFT;
+	data |= 1ull << PKO_QUERY_DMA_RTNLEN_SHIFT;	/* Must always be 1 */
+	data |= 0x51ull << PKO_QUERY_DMA_DID_SHIFT;	/* Must always be 51h */
+	data |= (u64)node << PKO_QUERY_DMA_NODE_SHIFT;
+	data |= (u64)DQOP_CLOSE << PKO_QUERY_DMA_DQOP_SHIFT;
+	data |= dq << PKO_QUERY_DMA_DQ_SHIFT;
+
+	CVMX_SYNCWS;
+	preempt_disable();
+
+	/* Clear return location */
+	scratch_addr = (u64 *)(SCRATCH_BASE_ADDR + LMTDMA_SCR_OFFSET);
+	*scratch_addr = ~0ull;
+
+	/* Issue pko lmtdma command */
+	iobdma_addr = (u64 *)(IOBDMA_ORDERED_IO_ADDR);
+	*iobdma_addr = data;
+
+	/* Wait for lmtdma command to complete and get response */
+	CVMX_SYNCIOBDMA;
+	data = *scratch_addr;
+
+	preempt_enable();
+
+	/* See PKO_QUERY_RTN_S in the HRM for response format */
+	status = (data & PKO_QUERY_RTN_DQSTATUS_MASK)
+		 >> PKO_QUERY_RTN_DQSTATUS_SHIFT;
+	if (status != PASS) {
+		pr_err("%s: Failed to close dq\n", __FILE__);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int get_78xx_fifos_required(int node, struct mac_info *macs)
+{
+	int bgx, cnt, i, index, num_lmacs, prio, qlm, fifo_cnt = 0;
+	enum port_mode mode;
+	u64 data;
+
+	/* The loopback mac gets 1 fifo by default */
+	macs[0].fifo_cnt = 1;
+	macs[0].speed = 1;
+	fifo_cnt += 1;
+
+	/* The dpi mac gets 1 fifo by default */
+	macs[1].fifo_cnt = 1;
+	macs[1].speed = 50;
+	fifo_cnt += 1;
+
+	/* The ilk macs get default number of fifos (module param) */
+	macs[2].fifo_cnt = ilk0_lanes <= 4 ? ilk0_lanes : 4;
+	macs[2].speed = 40;
+	fifo_cnt += macs[2].fifo_cnt;
+	macs[3].fifo_cnt = ilk1_lanes <= 4 ? ilk1_lanes : 4;
+	macs[3].speed = 40;
+	fifo_cnt += macs[3].fifo_cnt;
+
+	/* Assign fifos to the active bgx macs */
+	for (i = 4; i < get_num_output_macs(); i += 4) {
+		bgx = (i - 4) / 4;
+		qlm = bgx_port_get_qlm(node, bgx, 0);
+
+		data = oct_csr_read(GSER_CFG(node, qlm));
+		if (data & GSER_CFG_BGX) {
+			data = oct_csr_read(BGX_CMR_TX_LMACS(node, bgx));
+			num_lmacs = data & 7;
+
+			for (index = 0; index < num_lmacs; index++) {
+				switch (num_lmacs) {
+				case 1:
+					macs[i + index].num_lmacs = 4;
+					break;
+				case 2:
+					macs[i + index].num_lmacs = 2;
+					break;
+				case 4:
+				default:
+					macs[i + index].num_lmacs = 1;
+					break;
+				}
+
+				mode = bgx_port_get_mode(node, bgx, 0);
+				switch (mode) {
+				case PORT_MODE_SGMII:
+				case PORT_MODE_RGMII:
+					macs[i + index].fifo_cnt = 1;
+					macs[i + index].prio = 1;
+					macs[i + index].speed = 1;
+					break;
+
+				case PORT_MODE_XAUI:
+				case PORT_MODE_RXAUI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 2;
+					macs[i + index].speed = 20;
+					break;
+
+				case PORT_MODE_10G_KR:
+				case PORT_MODE_XFI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 2;
+					macs[i + index].speed = 10;
+					break;
+
+				case PORT_MODE_40G_KR4:
+				case PORT_MODE_XLAUI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 3;
+					macs[i + index].speed = 40;
+					break;
+
+				default:
+					macs[i + index].fifo_cnt = 0;
+					macs[i + index].prio = 0;
+					macs[i + index].speed = 0;
+					macs[i + index].num_lmacs = 0;
+					break;
+				}
+
+				fifo_cnt += macs[i + index].fifo_cnt;
+			}
+		}
+	}
+
+	/* If more fifos than available were assigned, reduce the number of
+	 * fifos until within limit. Start with the lowest priority macs with 4
+	 * fifos.
+	 */
+	prio = 1;
+	cnt = 4;
+	while (fifo_cnt > get_num_fifos()) {
+		for (i = 0; i < get_num_output_macs(); i++) {
+			if (macs[i].prio == prio && macs[i].fifo_cnt == cnt) {
+				macs[i].fifo_cnt >>= 1;
+				fifo_cnt -= macs[i].fifo_cnt;
+			}
+
+			if (fifo_cnt <= get_num_fifos())
+				break;
+		}
+
+		if (prio >= 3) {
+			prio = 1;
+			cnt >>= 1;
+		} else {
+			prio++;
+		}
+
+		if (cnt == 0)
+			break;
+	}
+
+	/* Assign left over fifos to dpi */
+	if (get_num_fifos() - fifo_cnt > 0) {
+		if (get_num_fifos() - fifo_cnt >= 3) {
+			macs[1].fifo_cnt += 3;
+			fifo_cnt -= 3;
+		} else {
+			macs[1].fifo_cnt += 1;
+			fifo_cnt -= 1;
+		}
+	}
+
+	return 0;
+}
+
+static int get_75xx_fifos_required(int node, struct mac_info *macs)
+{
+	int bgx, cnt, i, index, prio, qlm, fifo_cnt = 0;
+	enum port_mode mode;
+	u64 data;
+
+	/* The loopback mac gets 1 fifo by default */
+	macs[0].fifo_cnt = 1;
+	macs[0].speed = 1;
+	fifo_cnt += 1;
+
+	/* The dpi mac gets 1 fifo by default */
+	macs[1].fifo_cnt = 1;
+	macs[1].speed = 50;
+	fifo_cnt += 1;
+
+	/* Assign fifos to the active bgx macs */
+	bgx = 0;
+	for (i = 2; i < 6; i++) {
+		index = i - 2;
+		qlm = bgx_port_get_qlm(node, bgx, index);
+		data = oct_csr_read(GSER_CFG(node, qlm));
+		if (data & GSER_CFG_BGX) {
+			macs[i].num_lmacs = 1;
+
+			mode = bgx_port_get_mode(node, bgx, index);
+			switch (mode) {
+			case PORT_MODE_SGMII:
+			case PORT_MODE_RGMII:
+				macs[i].fifo_cnt = 1;
+				macs[i].prio = 1;
+				macs[i].speed = 1;
+				break;
+
+			case PORT_MODE_10G_KR:
+			case PORT_MODE_XFI:
+				macs[i].fifo_cnt = 4;
+				macs[i].prio = 2;
+				macs[i].speed = 10;
+				break;
+
+			default:
+				macs[i].fifo_cnt = 0;
+				macs[i].prio = 0;
+				macs[i].speed = 0;
+				macs[i].num_lmacs = 0;
+				break;
+			}
+
+			fifo_cnt += macs[i].fifo_cnt;
+		}
+	}
+
+	/* If more fifos than available were assigned, reduce the number of
+	 * fifos until within limit. Start with the lowest priority macs with 4
+	 * fifos.
+	 */
+	prio = 1;
+	cnt = 4;
+	while (fifo_cnt > get_num_fifos()) {
+		for (i = 0; i < get_num_output_macs(); i++) {
+			if (macs[i].prio == prio && macs[i].fifo_cnt == cnt) {
+				macs[i].fifo_cnt >>= 1;
+				fifo_cnt -= macs[i].fifo_cnt;
+			}
+
+			if (fifo_cnt <= get_num_fifos())
+				break;
+		}
+
+		if (prio >= 3) {
+			prio = 1;
+			cnt >>= 1;
+		} else {
+			prio++;
+		}
+
+		if (cnt == 0)
+			break;
+	}
+
+	/* Assign left over fifos to dpi */
+	if (get_num_fifos() - fifo_cnt > 0) {
+		if (get_num_fifos() - fifo_cnt >= 3) {
+			macs[1].fifo_cnt += 3;
+			fifo_cnt += 3;
+		} else {
+			macs[1].fifo_cnt += 1;
+			fifo_cnt += 1;
+		}
+	}
+
+	return 0;
+}
+
+static int get_73xx_fifos_required(int node, struct mac_info *macs)
+{
+	int bgx, cnt, i, index, num_lmacs, prio, qlm, fifo_cnt = 0;
+	enum port_mode mode;
+	u64 data;
+
+	/* The loopback mac gets 1 fifo by default */
+	macs[0].fifo_cnt = 1;
+	macs[0].speed = 1;
+	fifo_cnt += 1;
+
+	/* The dpi mac gets 1 fifo by default */
+	macs[1].fifo_cnt = 1;
+	macs[1].speed = 50;
+	fifo_cnt += 1;
+
+	/* Assign fifos to the active bgx macs */
+	for (i = 2; i < get_num_output_macs(); i += 4) {
+		bgx = (i - 2) / 4;
+		qlm = bgx_port_get_qlm(node, bgx, 0);
+		data = oct_csr_read(GSER_CFG(node, qlm));
+
+		/* Bgx2 can be connected to dlm 5, 6, or both */
+		if (bgx == 2) {
+			if (!(data & GSER_CFG_BGX)) {
+				qlm = bgx_port_get_qlm(node, bgx, 2);
+				data = oct_csr_read(GSER_CFG(node, qlm));
+			}
+		}
+
+		if (data & GSER_CFG_BGX) {
+			data = oct_csr_read(BGX_CMR_TX_LMACS(node, bgx));
+			num_lmacs = data & 7;
+
+			for (index = 0; index < num_lmacs; index++) {
+				switch (num_lmacs) {
+				case 1:
+					macs[i + index].num_lmacs = 4;
+					break;
+				case 2:
+					macs[i + index].num_lmacs = 2;
+					break;
+				case 4:
+				default:
+					macs[i + index].num_lmacs = 1;
+					break;
+				}
+
+				mode = bgx_port_get_mode(node, bgx, index);
+				switch (mode) {
+				case PORT_MODE_SGMII:
+				case PORT_MODE_RGMII:
+					macs[i + index].fifo_cnt = 1;
+					macs[i + index].prio = 1;
+					macs[i + index].speed = 1;
+					break;
+
+				case PORT_MODE_XAUI:
+				case PORT_MODE_RXAUI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 2;
+					macs[i + index].speed = 20;
+					break;
+
+				case PORT_MODE_10G_KR:
+				case PORT_MODE_XFI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 2;
+					macs[i + index].speed = 10;
+					break;
+
+				case PORT_MODE_40G_KR4:
+				case PORT_MODE_XLAUI:
+					macs[i + index].fifo_cnt = 4;
+					macs[i + index].prio = 3;
+					macs[i + index].speed = 40;
+					break;
+
+				default:
+					macs[i + index].fifo_cnt = 0;
+					macs[i + index].prio = 0;
+					macs[i + index].speed = 0;
+					break;
+				}
+
+				fifo_cnt += macs[i + index].fifo_cnt;
+			}
+		}
+	}
+
+	/* If more fifos than available were assigned, reduce the number of
+	 * fifos until within limit. Start with the lowest priority macs with 4
+	 * fifos.
+	 */
+	prio = 1;
+	cnt = 4;
+	while (fifo_cnt > get_num_fifos()) {
+		for (i = 0; i < get_num_output_macs(); i++) {
+			if (macs[i].prio == prio && macs[i].fifo_cnt == cnt) {
+				macs[i].fifo_cnt >>= 1;
+				fifo_cnt -= macs[i].fifo_cnt;
+			}
+
+			if (fifo_cnt <= get_num_fifos())
+				break;
+		}
+
+		if (prio >= 3) {
+			prio = 1;
+			cnt >>= 1;
+		} else {
+			prio++;
+		}
+
+		if (cnt == 0)
+			break;
+	}
+
+	/* Assign left over fifos to dpi */
+	if (get_num_fifos() - fifo_cnt > 0) {
+		if (get_num_fifos() - fifo_cnt >= 3) {
+			macs[1].fifo_cnt += 3;
+			fifo_cnt += 3;
+		} else {
+			macs[1].fifo_cnt += 1;
+			fifo_cnt += 1;
+		}
+	}
+
+	return 0;
+}
+
+static int setup_macs(int node)
+{
+	struct fifo_grp_info fifo_grp[PKO_MAX_FIFO_GROUPS];
+	struct mac_info macs[PKO_MAX_OUTPUT_MACS];
+	int cnt, fifo, grp, i, size;
+	u64 data;
+
+	memset(macs, 0, sizeof(macs));
+	memset(fifo_grp, 0, sizeof(fifo_grp));
+
+	/* Get the number of fifos required by each mac */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		get_78xx_fifos_required(node, macs);
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		get_75xx_fifos_required(node, macs);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		get_73xx_fifos_required(node, macs);
+	} else {
+		pr_err("%s: Unsupported board type\n", __FILE__);
+		return -1;
+	}
+
+	/* Assign fifos to each mac. Start with macs requiring 4 fifos */
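+	/* PKO_PTGF_CFG[SIZE] encodes how a fifo group's 10 KB is split among
+	 * its four fifos (see octeon3_pko_get_fifo_size()):
+	 *   0: 2.5k/2.5k/2.5k/2.5k   1: 5k/0/2.5k/2.5k   2: 2.5k/2.5k/5k/0
+	 *   3: 5k/0/5k/0             4: 10k/0/0/0
+	 */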
+	fifo = 0;
+	for (cnt = 4; cnt > 0; cnt >>= 1) {
+		for (i = 0; i < get_num_output_macs(); i++) {
+			if (macs[i].fifo_cnt != cnt)
+				continue;
+
+			macs[i].fifo = fifo;
+			grp = fifo / 4;
+
+			fifo_grp[grp].speed += macs[i].speed;
+
+			if (cnt == 4) {
+				/* 10, 0, 0, 0 */
+				fifo_grp[grp].size = 4;
+			} else if (cnt == 2) {
+				/* 5, 0, 5, 0 */
+				fifo_grp[grp].size = 3;
+			} else if (cnt == 1) {
+				if ((fifo & 0x2) && fifo_grp[grp].size == 3) {
+					/* 5, 0, 2.5, 2.5 */
+					fifo_grp[grp].size = 1;
+				} else {
+					/* 2.5, 2.5, 2.5, 2.5 */
+					fifo_grp[grp].size = 0;
+				}
+			}
+
+			fifo += cnt;
+		}
+	}
+
+	/* Configure the fifo groups */
+	for (i = 0; i < get_num_fifo_groups(); i++) {
+		data = oct_csr_read(PKO_PTGF_CFG(node, i));
+		size = data & PKO_PTGF_CFG_SIZE_MASK;
+		if (size != fifo_grp[i].size)
+			data |= PKO_PTGF_CFG_RESET;
+		data &= ~PKO_PTGF_CFG_SIZE_MASK;
+		data |= fifo_grp[i].size;
+
+		data &= ~PKO_PTGF_CFG_RATE_MASK;
+		if (fifo_grp[i].speed >= 40) {
+			if (fifo_grp[i].size >= 3) {
+				/* 50 Gbps */
+				data |= 0x3 << PKO_PTGF_CFG_RATE_SHIFT;
+			} else {
+				/* 25 Gbps */
+				data |= 0x2 << PKO_PTGF_CFG_RATE_SHIFT;
+			}
+		} else if (fifo_grp[i].speed >= 20) {
+			/* 25 Gbps */
+			data |= 0x2 << PKO_PTGF_CFG_RATE_SHIFT;
+		} else if (fifo_grp[i].speed >= 10) {
+			/* 12.5 Gbps */
+			data |= 0x1 << PKO_PTGF_CFG_RATE_SHIFT;
+		}
+		oct_csr_write(data, PKO_PTGF_CFG(node, i));
+		data &= ~PKO_PTGF_CFG_RESET;
+		oct_csr_write(data, PKO_PTGF_CFG(node, i));
+	}
+
+	/* Configure the macs with their assigned fifo */
+	for (i = 0; i < get_num_output_macs(); i++) {
+		data = oct_csr_read(PKO_MAC_CFG(node, i));
+		data &= ~PKO_MAC_CFG_FIFO_NUM_MASK;
+		if (!macs[i].fifo_cnt)
+			data |= PKO_MAC_CFG_FIFO_UNDEFINED;
+		else
+			data |= macs[i].fifo;
+		oct_csr_write(data, PKO_MAC_CFG(node, i));
+	}
+
+	/* Setup mci0/mci1/skid credits */
+	for (i = 0; i < get_num_output_macs(); i++) {
+		int fifo_credit, mac_credit, skid_credit;
+
+		if (!macs[i].fifo_cnt)
+			continue;
+
+		if (i == 0) {
+			/* Loopback */
+			mac_credit = 4 * 1024;
+			skid_credit = 0;
+		} else if (i == 1) {
+			/* Dpi */
+			mac_credit = 2 * 1024;
+			skid_credit = 0;
+		} else if (OCTEON_IS_MODEL(OCTEON_CN78XX) &&
+			   ((i == 2 || i == 3))) {
+			/* ILK */
+			mac_credit = 4 * 1024;
+			skid_credit = 0;
+		} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX) &&
+			   ((i >= 6 && i <= 9))) {
+			/* Srio */
+			mac_credit = 1024 / 2;
+			skid_credit = 0;
+		} else {
+			/* Bgx */
+			mac_credit = macs[i].num_lmacs * 8 * 1024;
+			skid_credit = macs[i].num_lmacs * 256;
+		}
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+			fifo_credit = macs[i].fifo_cnt * PKO_FIFO_SIZE;
+			data = (fifo_credit + mac_credit) / 16;
+			oct_csr_write(data, PKO_MCI0_MAX_CRED(node, i));
+		}
+
+		data = mac_credit / 16;
+		oct_csr_write(data, PKO_MCI1_MAX_CRED(node, i));
+
+		data = oct_csr_read(PKO_MAC_CFG(node, i));
+		data &= ~PKO_MAC_CFG_SKID_MAX_CNT_MASK;
+		data |= ((skid_credit / 256) >> 1) << 5;
+		oct_csr_write(data, PKO_MAC_CFG(node, i));
+	}
+
+	return 0;
+}
+
+static int hw_init_global(int node, int aura)
+{
+	int timeout;
+	u64 data;
+
+	data = oct_csr_read(PKO_ENABLE(node));
+	if (data & PKO_ENABLE_ENABLE) {
+		pr_info("%s: Pko already enabled on node %d\n", __FILE__, node);
+		return 0;
+	}
+
+	/* Enable color awareness */
+	data = oct_csr_read(PKO_SHAPER_CFG(node));
+	data |= PKO_SHAPER_CFG_COLOR_AWARE;
+	oct_csr_write(data, PKO_SHAPER_CFG(node));
+
+	/* Clear flush command */
+	oct_csr_write(0, PKO_DPFI_FLUSH(node));
+
+	/* Set the aura number */
+	data = (node << PKO_DPFI_FPA_AURA_NODE_SHIFT) | aura;
+	oct_csr_write(data, PKO_DPFI_FPA_AURA(node));
+
+	data = PKO_DPFI_ENA_ENABLE;
+	oct_csr_write(data, PKO_DPFI_ENA(node));
+
+	/* Wait until all pointers have been returned */
+	timeout = 100000;
+	do {
+		data = oct_csr_read(PKO_STATUS(node));
+		if (data & PKO_STATUS_PKO_RDY)
+			break;
+		udelay(1);
+		timeout--;
+	} while (timeout);
+	if (!timeout) {
+		pr_err("%s: Pko dfpi failed on node %d\n", __FILE__, node);
+		return -1;
+	}
+
+	/* Set max outstanding requests in IOBP for any FIFO. */
+	data = oct_csr_read(PKO_PTF_IOBP_CFG(node));
+	data &= ~PKO_PTF_IOBP_CFG_MAX_READ_SIZE_MASK;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		data |= 16;
+	else
+		data |= 3;
+	oct_csr_write(data, PKO_PTF_IOBP_CFG(node));
+
+	/* Set minimum packet size per Ethernet standard */
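+	/* (PKO_PDM_PAD_MINLEN_TYPICAL of 60 bytes plus the 4-byte FCS gives
+	 * the standard 64-byte Ethernet minimum frame.)
+	 */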
+	data = (PKO_PDM_PAD_MINLEN_TYPICAL << PKO_PDM_PAD_MINLEN_SHIFT);
+	oct_csr_write(data, PKO_PDM_CFG(node));
+
+	/* Initialize macs and fifos */
+	setup_macs(node);
+
+	/* Enable pko */
+	data = PKO_ENABLE_ENABLE;
+	oct_csr_write(data, PKO_ENABLE(node));
+
+	/* Verify pko is ready */
+	data = oct_csr_read(PKO_STATUS(node));
+	if (!(data & PKO_STATUS_PKO_RDY)) {
+		pr_err("%s: pko is not ready\n", __FILE__);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int hw_exit_global(int node)
+{
+	int i, timeout;
+	u64 data;
+
+	/* Wait until there are no in-flight packets */
+	for (i = 0; i < get_num_fifos(); i++) {
+		data = oct_csr_read(PKO_PTF_STATUS(node, i));
+		if ((data & PKO_PTF_STATUS_MAC_NUM_MASK) ==
+		    PKO_MAC_CFG_FIFO_UNDEFINED)
+			continue;
+
+		timeout = 10000;
+		do {
+			if (!(data & PKO_PTF_STATUS_IN_FLIGHT_CNT_MASK))
+				break;
+			udelay(1);
+			timeout--;
+			data = oct_csr_read(PKO_PTF_STATUS(node, i));
+		} while (timeout);
+		if (!timeout) {
+			pr_err("%s: Timeout in-flight fifo\n", __FILE__);
+			return -1;
+		}
+	}
+
+	/* Disable pko */
+	oct_csr_write(0, PKO_ENABLE(node));
+
+	/* Reset all port queues to the virtual mac */
+	for (i = 0; i < get_num_port_queues(); i++) {
+		data = get_num_output_macs() << PKO_L1_SQ_TOPOLOGY_LINK_SHIFT;
+		oct_csr_write(data, PKO_L1_SQ_TOPOLOGY(node, i));
+
+		data = get_num_output_macs() << PKO_L1_SQ_SHAPE_LINK_SHIFT;
+		oct_csr_write(data, PKO_L1_SQ_SHAPE(node, i));
+
+		data = (u64)get_num_output_macs() << PKO_L1_SQ_LINK_LINK_SHIFT;
+		oct_csr_write(data, PKO_L1_SQ_LINK(node, i));
+	}
+
+	/* Reset all output macs */
+	for (i = 0; i < get_num_output_macs(); i++) {
+		data = PKO_MAC_CFG_FIFO_NUM_UNDEFINED;
+		oct_csr_write(data, PKO_MAC_CFG(node, i));
+	}
+
+	/* Reset all fifo groups */
+	for (i = 0; i < get_num_fifo_groups(); i++) {
+		data = oct_csr_read(PKO_PTGF_CFG(node, i));
+		/* Simulator asserts if an unused group is reset */
+		if (data == 0)
+			continue;
+		data = PKO_PTGF_CFG_RESET;
+		oct_csr_write(data, PKO_PTGF_CFG(node, i));
+	}
+
+	/* Return cache pointers to fpa */
+	data = PKO_DPFI_FLUSH_FLUSH_EN;
+	oct_csr_write(data, PKO_DPFI_FLUSH(node));
+	timeout = 10000;
+	do {
+		data = oct_csr_read(PKO_DPFI_STATUS(node));
+		if (data & PKO_DPFI_STATUS_CACHE_FLUSHED)
+			break;
+		udelay(1);
+		timeout--;
+	} while (timeout);
+	if (!timeout) {
+		pr_err("%s: Timeout flushing cache\n", __FILE__);
+		return -1;
+	}
+	oct_csr_write(0, PKO_DPFI_ENA(node));
+	oct_csr_write(0, PKO_DPFI_FLUSH(node));
+
+	return 0;
+}
+
+static int virtual_mac_config(int node)
+{
+	int dq[8], i, num_dq, parent_q, pq, queue, rc, vmac;
+	enum queue_level level;
+
+	/* The virtual mac is after the last output mac. Note: for the 73xx it
+	 * might be 2 after the last output mac (15).
+	 */
+	vmac = get_num_output_macs();
+
+	/* Allocate a port queue */
+	rc = allocate_queues(node, PQ, 1, &pq);
+	if (rc < 0) {
+		pr_err("%s: Failed to allocate port queue\n", __FILE__);
+		return rc;
+	}
+
+	/* Connect the port queue to the output mac */
+	port_queue_init(node, pq, vmac);
+
+	parent_q = pq;
+	for (level = L2_SQ; level <= max_sq_level(); level++) {
+		rc = allocate_queues(node, level, 1, &queue);
+		if (rc < 0) {
+			pr_err("%s: Failed to allocate queue\n", __FILE__);
+			return rc;
+		}
+
+		switch (level) {
+		case L2_SQ:
+			scheduler_queue_l2_init(node, queue, parent_q);
+			break;
+		case L3_SQ:
+			scheduler_queue_l3_init(node, queue, parent_q);
+			break;
+		case L4_SQ:
+			scheduler_queue_l4_init(node, queue, parent_q);
+			break;
+		case L5_SQ:
+			scheduler_queue_l5_init(node, queue, parent_q);
+			break;
+		default:
+			break;
+		}
+
+		parent_q = queue;
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+		num_dq = 8;
+	else
+		num_dq = 1;
+
+	rc = allocate_queues(node, DQ, num_dq, dq);
+	if (rc < 0) {
+		pr_err("%s: Failed to allocate description queues\n", __FILE__);
+		return rc;
+	}
+
+	/* By convention the dq must be zero */
+	if (dq[0] != 0) {
+		pr_err("%s: Failed to reserve description queues\n", __FILE__);
+		return -1;
+	}
+	descriptor_queue_init(node, dq, parent_q, num_dq);
+
+	/* Open the dqs */
+	for (i = 0; i < num_dq; i++)
+		open_dq(node, dq[i]);
+
+	return 0;
+}
+
+static int drain_dq(int node, int dq)
+{
+	int timeout;
+	u64 data;
+	s64 rc;
+
+	data = PKO_DQ_SW_XOFF_SC_RAM_FLIP | PKO_DQ_SW_XOFF_SC_RAM_CDIS;
+	oct_csr_write(data, PKO_DQ_SW_XOFF(node, dq));
+
+	usleep_range(1000, 2000);
+
+	data = 0;
+	oct_csr_write(data, PKO_DQ_SW_XOFF(node, dq));
+
+	/* Wait for the dq to drain */
+	timeout = 10000;
+	do {
+		rc = query_dq(node, dq);
+		if (!rc)
+			break;
+		else if (rc < 0)
+			return rc;
+		udelay(1);
+		timeout--;
+	} while (timeout);
+	if (!timeout) {
+		pr_err("%s: Timeout waiting for dq to drain\n", __FILE__);
+		return -1;
+	}
+
+	/* Close the queue and free internal buffers */
+	close_dq(node, dq);
+
+	return 0;
+}
+
+int octeon3_pko_exit_global(int node)
+{
+	int dq[8], i, num_dq;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+		num_dq = 8;
+	else
+		num_dq = 1;
+
+	/* Shutdown the virtual/null interface */
+	for (i = 0; i < ARRAY_SIZE(dq); i++)
+		dq[i] = i;
+	octeon3_pko_interface_uninit(node, dq, num_dq);
+
+	/* Shutdown pko */
+	hw_exit_global(node);
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pko_exit_global);
+
+int octeon3_pko_init_global(int node, int aura)
+{
+	int rc;
+
+	rc = hw_init_global(node, aura);
+	if (rc)
+		return rc;
+
+	/* Channel credit level at level 2 */
+	oct_csr_write(0, PKO_CHANNEL_LEVEL(node));
+
+	/* Configure the null mac */
+	rc = virtual_mac_config(node);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pko_init_global);
+
+int octeon3_pko_set_mac_options(int node, int interface, int index,
+				enum octeon3_mac_type mac_type, bool fcs_en,
+				bool pad_en, int fcs_sop_off)
+{
+	int fifo_num, mac;
+	u64 data;
+
+	mac = get_output_mac(interface, index, mac_type);
+
+	data = oct_csr_read(PKO_MAC_CFG(node, mac));
+	fifo_num = data & PKO_MAC_CFG_FIFO_NUM_MASK;
+	if (fifo_num == PKO_MAC_CFG_FIFO_UNDEFINED) {
+		pr_err("%s: mac not configured %d:%d:%d\n", __FILE__, node,
+		       interface, index);
+		return -ENODEV;
+	}
+
+	/* Some silicon requires fifo_num=0x1f to change padding, fcs */
+	data &= ~PKO_MAC_CFG_FIFO_NUM_MASK;
+	data |= PKO_MAC_CFG_FIFO_UNDEFINED;
+
+	data &= ~(PKO_MAC_CFG_MIN_PAD_ENA | PKO_MAC_CFG_FCS_ENA |
+		  PKO_MAC_CFG_FCS_SOP_OFF_MASK);
+	if (pad_en)
+		data |= PKO_MAC_CFG_MIN_PAD_ENA;
+	if (fcs_en)
+		data |= PKO_MAC_CFG_FCS_ENA;
+	if (fcs_sop_off)
+		data |= fcs_sop_off << PKO_MAC_CFG_FCS_SOP_OFF_SHIFT;
+
+	oct_csr_write(data, PKO_MAC_CFG(node, mac));
+
+	data &= ~PKO_MAC_CFG_FIFO_NUM_MASK;
+	data |= fifo_num;
+	oct_csr_write(data, PKO_MAC_CFG(node, mac));
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pko_set_mac_options);
+
+int octeon3_pko_get_fifo_size(int node, int interface, int index,
+			      enum octeon3_mac_type mac_type)
+{
+	int fifo_grp, fifo_off, mac, size;
+	u64 data;
+
+	/* Set fifo size to 2.5 KB */
+	size = PKO_FIFO_SIZE;
+
+	mac = get_output_mac(interface, index, mac_type);
+
+	data = oct_csr_read(PKO_MAC_CFG(node, mac));
+	if ((data & PKO_MAC_CFG_FIFO_NUM_MASK) == PKO_MAC_CFG_FIFO_UNDEFINED) {
+		pr_err("%s: mac not configured %d:%d:%d\n", __FILE__, node,
+		       interface, index);
+		return -ENODEV;
+	}
+	fifo_grp = (data & PKO_MAC_CFG_FIFO_NUM_MASK)
+		   >> PKO_MAC_CFG_FIFO_GRP_SHIFT;
+	fifo_off = data & PKO_MAC_CFG_FIFO_OFF_MASK;
+
+	data = oct_csr_read(PKO_PTGF_CFG(node, fifo_grp));
+	data &= PKO_PTGF_CFG_SIZE_MASK;
+	switch (data) {
+	case 0:
+		/* 2.5k, 2.5k, 2.5k, 2.5k */
+		break;
+	case 1:
+		/* 5.0k, 0.0k, 2.5k, 2.5k */
+		if (fifo_off == 0)
+			size *= 2;
+		if (fifo_off == 1)
+			size = 0;
+		break;
+	case 2:
+		/* 2.5k, 2.5k, 5.0k, 0.0k */
+		if (fifo_off == 2)
+			size *= 2;
+		if (fifo_off == 3)
+			size = 0;
+		break;
+	case 3:
+		/* 5k, 0, 5k, 0 */
+		if ((fifo_off & 1) != 0)
+			size = 0;
+		size *= 2;
+		break;
+	case 4:
+		/* 10k, 0, 0, 0 */
+		if (fifo_off != 0)
+			size = 0;
+		size *= 4;
+		break;
+	default:
+		size = -1;
+	}
+
+	return size;
+}
+EXPORT_SYMBOL(octeon3_pko_get_fifo_size);
+
+int octeon3_pko_activate_dq(int node, int dq, int cnt)
+{
+	int i, rc = 0;
+	u64 data;
+
+	for (i = 0; i < cnt; i++) {
+		rc = open_dq(node, dq + i);
+		if (rc)
+			break;
+
+		data = oct_csr_read(PKO_PDM_DQ_MINPAD(node, dq + i));
+		data &= ~PKO_PDM_DQ_MINPAD_MINPAD;
+		oct_csr_write(data, PKO_PDM_DQ_MINPAD(node, dq + i));
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL(octeon3_pko_activate_dq);
+
+int octeon3_pko_interface_init(int node, int interface, int index,
+			       enum octeon3_mac_type mac_type, int ipd_port)
+{
+	int mac, pq, parent_q, queue, rc;
+	enum queue_level level;
+
+	mac = get_output_mac(interface, index, mac_type);
+
+	/* Allocate a port queue for this interface */
+	rc = allocate_queues(node, PQ, 1, &pq);
+	if (rc < 0) {
+		pr_err("%s: Failed to allocate port queue\n", __FILE__);
+		return rc;
+	}
+
+	/* Connect the port queue to the output mac */
+	port_queue_init(node, pq, mac);
+
+	/* Link scheduler queues to the port queue */
+	parent_q = pq;
+	for (level = L2_SQ; level <= max_sq_level(); level++) {
+		rc = allocate_queues(node, level, 1, &queue);
+		if (rc < 0) {
+			pr_err("%s: Failed to allocate queue\n", __FILE__);
+			return rc;
+		}
+
+		switch (level) {
+		case L2_SQ:
+			scheduler_queue_l2_init(node, queue, parent_q);
+			map_channel(node, pq, queue, ipd_port);
+			break;
+		case L3_SQ:
+			scheduler_queue_l3_init(node, queue, parent_q);
+			break;
+		case L4_SQ:
+			scheduler_queue_l4_init(node, queue, parent_q);
+			break;
+		case L5_SQ:
+			scheduler_queue_l5_init(node, queue, parent_q);
+			break;
+		default:
+			break;
+		}
+
+		parent_q = queue;
+	}
+
+	/* Link the descriptor queue */
+	rc = allocate_queues(node, DQ, 1, &queue);
+	if (rc < 0) {
+		pr_err("%s: Failed to allocate descriptor queue\n", __FILE__);
+		return rc;
+	}
+	descriptor_queue_init(node, &queue, parent_q, 1);
+
+	return queue;
+}
+EXPORT_SYMBOL(octeon3_pko_interface_init);
+
+int octeon3_pko_interface_uninit(int node, const int *dq, int num_dq)
+{
+	int i, parent_q, queue, rc;
+	enum queue_level level;
+	u64 addr, data, mask, shift = PKO_DQ_TOPOLOGY_PARENT_SHIFT;
+
+	/* Drain all dqs */
+	for (i = 0; i < num_dq; i++) {
+		rc = drain_dq(node, dq[i]);
+		if (rc)
+			return rc;
+
+		/* Free the dq */
+		data = oct_csr_read(PKO_DQ_TOPOLOGY(node, dq[i]));
+
+		parent_q = (data & PKO_DQ_TOPOLOGY_PARENT_MASK) >> shift;
+		free_queues(node, DQ, 1, &dq[i]);
+
+		/* Free all the scheduler queues */
+		queue = parent_q;
+		for (level = max_sq_level(); (signed int)level >= PQ; level--) {
+			switch (level) {
+			case L5_SQ:
+				addr = PKO_L5_SQ_TOPOLOGY(node, queue);
+				data = oct_csr_read(addr);
+				mask = PKO_DQ_TOPOLOGY_PARENT_MASK;
+				parent_q = (data & mask) >> shift;
+				break;
+
+			case L4_SQ:
+				addr = PKO_L4_SQ_TOPOLOGY(node, queue);
+				data = oct_csr_read(addr);
+				mask = PKO_L34_DQ_TOPOLOGY_PARENT_MASK;
+				parent_q = (data & mask) >> shift;
+				break;
+
+			case L3_SQ:
+				addr = PKO_L3_SQ_TOPOLOGY(node, queue);
+				data = oct_csr_read(addr);
+				mask = PKO_L34_DQ_TOPOLOGY_PARENT_MASK;
+				parent_q = (data & mask) >> shift;
+				break;
+
+			case L2_SQ:
+				addr = PKO_L2_SQ_TOPOLOGY(node, queue);
+				data = oct_csr_read(addr);
+				mask = PKO_L2_DQ_TOPOLOGY_PARENT_MASK;
+				parent_q = (data & mask) >> shift;
+				break;
+
+			case PQ:
+				break;
+
+			default:
+				pr_err("%s: Invalid lvl=%d\n", __FILE__, level);
+				return -1;
+			}
+
+			free_queues(node, level, 1, &queue);
+			queue = parent_q;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(octeon3_pko_interface_uninit);
diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-pko.h b/drivers/net/ethernet/cavium/octeon/octeon3-pko.h
new file mode 100644
index 0000000..f1053a8
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-pko.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Octeon III Packet-Output Processing Unit (PKO)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#ifndef _OCTEON3_PKO_H_
+#define _OCTEON3_PKO_H_
+
+#include <linux/bitops.h>
+
+#define PKO_DPFI_ENA_ENABLE			BIT(0)
+#define PKO_DPFI_FLUSH_FLUSH_EN			BIT(0)
+#define PKO_DPFI_FPA_AURA_NODE_SHIFT		10
+#define PKO_DPFI_STATUS_CACHE_FLUSHED		BIT(0)
+#define PKO_DQ_SCHEDULE_PRIO_SHIFT		24
+#define PKO_DQ_SW_XOFF_SC_RAM_FLIP		BIT(2)
+#define PKO_DQ_SW_XOFF_SC_RAM_CDIS		BIT(1)
+#define PKO_DQ_TOPOLOGY_PARENT_MASK		GENMASK_ULL(25, 16)
+#define PKO_DQ_TOPOLOGY_PARENT_SHIFT		16
+#define PKO_DQ_WM_CTL_KIND			BIT(49)
+#define PKO_ENABLE_ENABLE			BIT(0)
+#define PKO_L12_SQ_TOPOLOGY_PRIO_ANCHOR_MASK	GENMASK_ULL(40, 32)
+#define PKO_L1_SQ_LINK_LINK_SHIFT		44
+#define PKO_L1_SQ_SHAPE_LINK_SHIFT		13
+#define PKO_L2_DQ_TOPOLOGY_PARENT_MASK		GENMASK_ULL(20, 16)
+#define PKO_L1_SQ_TOPOLOGY_LINK_SHIFT		16
+#define PKO_L345_SQ_TOPOLOGY_PRIO_ANCHOR_MASK	GENMASK_ULL(41, 32)
+#define PKO_L34_DQ_TOPOLOGY_PARENT_MASK		GENMASK_ULL(24, 16)
+#define PKO_L3_L2_CHANNEL_CC_CHANNEL_MASK	GENMASK_ULL(43, 32)
+#define PKO_L3_L2_CHANNEL_CC_CHANNEL_SHIFT	32
+#define PKO_LUT_PORT_MASK			0xf00
+#define PKO_LUT_PORT_TO_INDEX			0x800
+#define PKO_LUT_PQ_IDX_SHIFT			9
+#define PKO_LUT_QUEUE_NUMBER_MASK		GENMASK_ULL(7, 0)
+#define PKO_LUT_QUEUE_NUMBER_SHIFT		8
+#define PKO_LUT_VALID				BIT(15)
+#define PKO_MAC_CFG_FCS_ENA			BIT(15)
+#define PKO_MAC_CFG_FCS_SOP_OFF_MASK		GENMASK_ULL(14, 7)
+#define PKO_MAC_CFG_FCS_SOP_OFF_SHIFT		7
+#define PKO_MAC_CFG_FIFO_GRP_SHIFT		2
+#define PKO_MAC_CFG_FIFO_NUM_MASK		GENMASK_ULL(4, 0)
+#define PKO_MAC_CFG_FIFO_NUM_UNDEFINED		0x1f
+#define PKO_MAC_CFG_FIFO_OFF_MASK		GENMASK_ULL(1, 0)
+#define PKO_MAC_CFG_FIFO_UNDEFINED		0x1f
+#define PKO_MAC_CFG_MIN_PAD_ENA			BIT(16)
+#define PKO_MAC_CFG_SKID_MAX_CNT_MASK		GENMASK_ULL(6, 5)
+#define PKO_PDM_DQ_MINPAD_MINPAD		BIT(0)
+#define PKO_PDM_PAD_MINLEN_SHIFT		3
+#define PKO_PDM_PAD_MINLEN_TYPICAL		60
+#define PKO_PTF_IOBP_CFG_MAX_READ_SIZE_MASK	GENMASK_ULL(6, 0)
+#define PKO_PTF_STATUS_IN_FLIGHT_CNT_MASK	GENMASK_ULL(11, 5)
+#define PKO_PTF_STATUS_MAC_NUM_MASK		GENMASK_ULL(4, 0)
+#define PKO_PTGF_CFG_RATE_MASK			GENMASK_ULL(5, 3)
+#define PKO_PTGF_CFG_RATE_SHIFT			3
+#define PKO_PTGF_CFG_RESET			BIT(6)
+#define PKO_PTGF_CFG_SIZE_MASK			GENMASK_ULL(2, 0)
+#define PKO_QUERY_DMA_SCRADDR_SHIFT		56
+#define PKO_QUERY_DMA_RTNLEN_SHIFT		48
+#define PKO_QUERY_DMA_DID_SHIFT			40
+#define PKO_QUERY_DMA_NODE_SHIFT		36
+#define PKO_QUERY_DMA_DQOP_SHIFT		32
+#define PKO_QUERY_DMA_DQ_SHIFT			16
+#define PKO_QUERY_RTN_DEPTH_MASK		GENMASK_ULL(47, 0)
+#define PKO_QUERY_RTN_DQSTATUS_MASK		GENMASK_ULL(63, 60)
+#define PKO_QUERY_RTN_DQSTATUS_SHIFT		60
+
+/* PKO_DQ_STATUS_E */
+enum pko_dq_status_e {
+	PKO_DQSTATUS_PASS = 0,
+	PKO_DQSTATUS_BADSTATE = 8,
+	PKO_DQSTATUS_NOFPABUF,
+	PKO_DQSTATUS_NOPKOBUF,
+	PKO_DQSTATUS_FAILRTNPTR,
+	PKO_DQSTATUS_ALREADYCREATED,
+	PKO_DQSTATUS_NOTCREATED,
+	PKO_DQSTATUS_NOTEMPTY,
+	PKO_DQSTATUS_SENDPKTDROP
+};
+
+#define PKO_SHAPER_CFG_COLOR_AWARE		BIT(1)
+#define PKO_SQ_TOPOLOGY_LINK_SHIFT		16
+#define PKO_SQ_TOPOLOGY_PRIO_ANCHOR_SHIFT	32
+#define PKO_SQ_TOPOLOGY_RR_PRIO_MASK		GENMASK_ULL(4, 1)
+#define PKO_SQ_TOPOLOGY_RR_PRIO_SHAPER		0xf
+#define PKO_SQ_TOPOLOGY_RR_PRIO_SHIFT		1
+#define PKO_STATUS_PKO_RDY			BIT(63)
+
+#define PKO_SEND_HDR_AURA_SHIFT			48
+#define PKO_SEND_HDR_CKL4_SHIFT			46
+#define PKO_SEND_HDR_CKL3			BIT(45)
+#define PKO_SEND_HDR_LE				BIT(43)
+#define PKO_SEND_HDR_N2				BIT(42)
+#define PKO_SEND_HDR_DF				BIT(40)
+#define PKO_SEND_HDR_L4PTR_MASK			GENMASK_ULL(31, 24)
+#define PKO_SEND_HDR_L4PTR_SHIFT		24
+#define PKO_SEND_HDR_L3PTR_SHIFT		16
+
+#define PKO_SEND_SUBDC4_SHIFT			44
+#define PKO_SEND_EXT_RA_SHIFT			40
+#define PKO_SEND_EXT_TSTMP			BIT(39)
+#define PKO_SEND_EXT_MARKPTR_SHIFT		16
+
+/* PKO_REDALG_E */
+enum pko_redalg_e {
+	PKO_REDALG_E_STD,
+	PKO_REDALG_E_SEND,
+	PKO_REDALG_E_STALL,
+	PKO_REDALG_E_DISCARD
+};
+
+#define PKO_SEND_TSO_L2LEN_SHIFT		56
+#define PKO_SEND_TSO_SB_SHIFT			24
+#define PKO_SEND_TSO_MSS_SHIFT			8
+
+#define PKO_SEND_MEM_WMEM_SHIFT			62
+#define PKO_SEND_MEM_DSZ_SHIFT			60
+#define PKO_SEND_MEM_ALG_SHIFT			56
+#define PKO_SEND_MEM_OFFSET_SHIFT		48
+
+/* PKO_MEMALG_E */
+enum pko_memalg_e {
+	PKO_MEMALG_SET = 0,
+	PKO_MEMALG_SETTSTMP,
+	PKO_MEMALG_SETRSLT,
+	PKO_MEMALG_ADD = 8,
+	PKO_MEMALG_SUB,
+	PKO_MEMALG_ADDLEN,
+	PKO_MEMALG_SUBLEN,
+	PKO_MEMALG_ADDMBUF,
+	PKO_MEMALG_SUBMBUF
+};
+
+/* PKO_MEMDSZ_E */
+enum pko_memdsz_e {
+	PKO_MEMDSZ_B64,
+	PKO_MEMDSZ_B32,
+	PKO_MEMDSZ_B16,
+	PKO_MEMDSZ_B8
+};
+
+#define PKO_SEND_WORK_GRP_SHIFT			52
+#define PKO_SEND_WORK_TT_SHIFT			50
+
+#define PKO_SEND_GATHER_SIZE_MASK		GENMASK_ULL(63, 48)
+#define PKO_SEND_GATHER_SIZE_SHIFT		48
+#define PKO_SEND_GATHER_SUBDC_SHIFT		45
+#define PKO_SEND_GATHER_ADDR_MASK		GENMASK_ULL(41, 0)
+
+/* Pko sub-command three bit codes (SUBDC3) */
+#define PKO_SENDSUBDC_GATHER			0x1
+
+/* Pko sub-command four bit codes (SUBDC4) */
+#define PKO_SENDSUBDC_TSO			0x8
+#define PKO_SENDSUBDC_FREE			0x9
+#define PKO_SENDSUBDC_WORK			0xa
+#define PKO_SENDSUBDC_MEM			0xc
+#define PKO_SENDSUBDC_EXT			0xd
+
+#endif /* _OCTEON3_PKO_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 07/10] netdev: cavium: octeon: Add Octeon III SSO Support
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (5 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 06/10] netdev: cavium: octeon: Add Octeon III PKO Support Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 08/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet core Steven J. Hill
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add support for Octeon III SSO logic block for BGX Ethernet.

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/octeon/octeon3-sso.c | 221 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/octeon/octeon3-sso.h |  89 +++++++++
 2 files changed, 310 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-sso.c
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-sso.h

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-sso.c b/drivers/net/ethernet/cavium/octeon/octeon3-sso.c
new file mode 100644
index 0000000..73afad0
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-sso.c
@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III Schedule/Synchronize/Order Unit (SSO)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+
+#include "octeon3.h"
+
+static int octeon3_sso_get_num_groups(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 256;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX) || OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 64;
+	return 0;
+}
+
+void octeon3_sso_irq_set(int node, int group, bool enable)
+{
+	if (enable)
+		oct_csr_write(1, SSO_GRP_INT_THR(node, group));
+	else
+		oct_csr_write(0, SSO_GRP_INT_THR(node, group));
+
+	oct_csr_write(SSO_GRP_INT_EXE_INT, SSO_GRP_INT(node, group));
+}
+EXPORT_SYMBOL(octeon3_sso_irq_set);
+
+/* octeon3_sso_alloc_groups - Allocate a range of SSO groups.
+ * @node: Node where SSO resides.
+ * @groups: Pointer to allocated groups.
+ * @cnt: Number of groups to allocate.
+ * @start: Group number to start sequential allocation from. -1 for don't care.
+ *
+ * Returns 0 (or the allocated group number when @groups is NULL) if
+ * successful, error code otherwise.
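+ *
+ * For example, octeon3_sso_alloc_groups(node, NULL, 1, -1) allocates a single
+ * group anywhere in the range and returns its number.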
+ */
+int octeon3_sso_alloc_groups(int node, int *groups, int cnt, int start)
+{
+	struct global_resource_tag tag;
+	int group, ret;
+	char buf[16];
+
+	strncpy((char *)&tag.lo, "cvm_sso_", 8);
+	snprintf(buf, 16, "0%d......", node);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_create_resource(tag, octeon3_sso_get_num_groups());
+
+	if (!groups) {
+		ret = res_mgr_alloc_range(tag, start, cnt, false, &group);
+		if (!ret)
+			ret = group;
+	} else {
+		ret = res_mgr_alloc_range(tag, start, cnt, false, groups);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(octeon3_sso_alloc_groups);
+
+/* octeon3_sso_free_groups - Free SSO groups.
+ * @node: Node where SSO resides.
+ * @groups: Array of groups to free.
+ * @cnt: Number of groups to free.
+ */
+void octeon3_sso_free_groups(int node, int *groups, int cnt)
+{
+	struct global_resource_tag tag;
+	char buf[16];
+
+	/* Build the tag used to track this node's SSO groups. */
+	strncpy((char *)&tag.lo, "cvm_sso_", 8);
+	snprintf(buf, 16, "0%d......", node);
+	memcpy(&tag.hi, buf, 8);
+
+	res_mgr_free_range(tag, groups, cnt);
+}
+EXPORT_SYMBOL(octeon3_sso_free_groups);
+
+/* octeon3_sso_pass1_limit - When the Transitory Admission Queue (TAQ) is
+ *   almost full, it is possible for the SSO to hang. We work around this
+ *   by ensuring that the sum of SSO_GRP(0..255)_TAQ_THR[MAX_THR] of all
+ *   used groups is <= 1264. This may reduce single group performance when
+ *   many groups are in use.
+ * @node: Node to update.
+ * @group: SSO group to update.
+ */
+void octeon3_sso_pass1_limit(int node, int group)
+{
+	u64 max_thr, rsvd_thr, taq_add, taq_thr;
+
+	/* Ideally we would like to divide the maximum number of TAQ buffers
+	 * (1264) among the SSO groups in use. However, since we do not know
+	 * how many SSO groups are used by code outside this driver, we take
+	 * the worst case approach.
+	 */
+	max_thr = 1264 / octeon3_sso_get_num_groups();
+	if (max_thr < 4)
+		max_thr = 4;
+	rsvd_thr = max_thr - 1;
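+	/* With 256 groups this works out to max_thr = 4 and rsvd_thr = 3;
+	 * with 64 groups, max_thr = 19 and rsvd_thr = 18.
+	 */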
+
+	/* Changes to SSO_GRP_TAQ_THR[rsvd_thr] must also update
+	 * SSO_TAQ_ADD[RSVD_FREE].
+	 */
+	taq_thr = oct_csr_read(SSO_GRP_TAQ_THR(node, group));
+	taq_add = (rsvd_thr - (taq_thr & SSO_GRP_TAQ_THR_RSVD_THR_MASK)) <<
+		  SSO_TAQ_ADD_RSVD_FREE_SHIFT;
+
+	taq_thr &= ~(SSO_GRP_TAQ_THR_MAX_THR_MASK |
+		     SSO_GRP_TAQ_THR_RSVD_THR_MASK);
+	taq_thr |= max_thr << SSO_GRP_TAQ_THR_RSVD_THR_SHIFT;
+	taq_thr |= rsvd_thr;
+
+	oct_csr_write(taq_thr, SSO_GRP_TAQ_THR(node, group));
+	oct_csr_write(taq_add, SSO_TAQ_ADD(node));
+}
+EXPORT_SYMBOL(octeon3_sso_pass1_limit);
+
+/* octeon3_sso_shutdown - Shutdown the SSO.
+ * @node: Node where SSO to disable is.
+ * @aura: Aura used for the SSO buffers.
+ */
+void octeon3_sso_shutdown(int node, int aura)
+{
+	int i, max_grps, timeout;
+	u64 data, head, tail;
+	void *ptr;
+
+	/* Disable SSO. */
+	data = oct_csr_read(SSO_AW_CFG(node));
+	data |= SSO_AW_CFG_XAQ_ALOC_DIS | SSO_AW_CFG_XAQ_BYP_DIS;
+	data &= ~SSO_AW_CFG_RWEN;
+	oct_csr_write(data, SSO_AW_CFG(node));
+
+	/* Extract the FPA buffers. */
+	max_grps = octeon3_sso_get_num_groups();
+	for (i = 0; i < max_grps; i++) {
+		head = oct_csr_read(SSO_XAQ_HEAD_PTR(node, i));
+		tail = oct_csr_read(SSO_XAQ_TAIL_PTR(node, i));
+		data = oct_csr_read(SSO_GRP_AQ_CNT(node, i));
+
+		/* Verify pointers. */
+		head &= SSO_XAQ_PTR_MASK;
+		tail &= SSO_XAQ_PTR_MASK;
+		if (head != tail) {
+			pr_err("%s: Bad pointer\n", __func__);
+			continue;
+		}
+
+		/* This SSO group should have no pending entries. */
+		if (data & SSO_GRP_AQ_CNT_AQ_CNT_MASK)
+			pr_err("%s: Group not empty\n", __func__);
+
+		ptr = phys_to_virt(head);
+		octeon_fpa3_free(node, aura, ptr);
+
+		/* Clear pointers. */
+		oct_csr_write(0, SSO_XAQ_HEAD_PTR(node, i));
+		oct_csr_write(0, SSO_XAQ_HEAD_NEXT(node, i));
+		oct_csr_write(0, SSO_XAQ_TAIL_PTR(node, i));
+		oct_csr_write(0, SSO_XAQ_TAIL_NEXT(node, i));
+	}
+
+	/* Make sure all buffers are drained. */
+	timeout = 10000;
+	do {
+		data = oct_csr_read(SSO_AW_STATUS(node));
+		if ((data & SSO_AW_STATUS_XAQ_BU_CACHED_MASK) == 0)
+			break;
+		timeout--;
+		udelay(1);
+	} while (timeout);
+	if (!timeout)
+		pr_err("%s: Timed out draining buffers\n", __func__);
+}
+EXPORT_SYMBOL(octeon3_sso_shutdown);
+
+/* octeon3_sso_init - Initialize the SSO.
+ * @node: Node where SSO resides.
+ * @aura: Aura used for the SSO buffers.
+ */
+int octeon3_sso_init(int node, int aura)
+{
+	int i, max_grps, err = 0;
+	u64 data, phys;
+	void *mem;
+
+	data = SSO_AW_CFG_STT | SSO_AW_CFG_LDT | SSO_AW_CFG_LDWB;
+	oct_csr_write(data, SSO_AW_CFG(node));
+
+	data = (node << SSO_XAQ_AURA_NODE_SHIFT) | aura;
+	oct_csr_write(data, SSO_XAQ_AURA(node));
+
+	max_grps = octeon3_sso_get_num_groups();
+	for (i = 0; i < max_grps; i++) {
+		mem = octeon_fpa3_alloc(node, aura);
+		if (!mem) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		phys = virt_to_phys(mem);
+		oct_csr_write(phys, SSO_XAQ_HEAD_PTR(node, i));
+		oct_csr_write(phys, SSO_XAQ_HEAD_NEXT(node, i));
+		oct_csr_write(phys, SSO_XAQ_TAIL_PTR(node, i));
+		oct_csr_write(phys, SSO_XAQ_TAIL_NEXT(node, i));
+
+		/* SSO-18678 */
+		data = SSO_GRP_PRI_WEIGHT_MAXIMUM << SSO_GRP_PRI_WEIGHT_SHIFT;
+		oct_csr_write(data, SSO_GRP_PRI(node, i));
+	}
+
+	data = SSO_ERR0_FPE;
+	oct_csr_write(data, SSO_ERR0(node));
+
+	data = SSO_AW_CFG_STT | SSO_AW_CFG_LDT | SSO_AW_CFG_LDWB |
+	       SSO_AW_CFG_RWEN;
+	oct_csr_write(data, SSO_AW_CFG(node));
+out:
+	return err;
+}
+EXPORT_SYMBOL(octeon3_sso_init);
diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-sso.h b/drivers/net/ethernet/cavium/octeon/octeon3-sso.h
new file mode 100644
index 0000000..dc68c4b
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-sso.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Octeon III Schedule/Synchronize/Order Unit (SSO)
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#ifndef _OCTEON3_SSO_H_
+#define _OCTEON3_SSO_H_
+
+#include <linux/bitops.h>
+
+#define SSO_BASE		0x1670000000000ull
+#define SSO_ADDR(n)		(SSO_BASE + SET_XKPHYS + NODE_OFFSET(n))
+#define SSO_AQ_ADDR(n, a)	(SSO_ADDR(n) + ((a) << 3))
+#define SSO_GRP_ADDR(n, g)	(SSO_ADDR(n) + ((g) << 16))
+
+#define SSO_AW_STATUS(n)	(SSO_ADDR(n) + 0x000010e0)
+#define SSO_AW_CFG(n)		(SSO_ADDR(n) + 0x000010f0)
+#define SSO_ERR0(n)		(SSO_ADDR(n) + 0x00001240)
+#define SSO_TAQ_ADD(n)		(SSO_ADDR(n) + 0x000020e0)
+#define SSO_XAQ_AURA(n)		(SSO_ADDR(n) + 0x00002100)
+
+#define SSO_XAQ_HEAD_PTR(n, a)	(SSO_AQ_ADDR(n, a) + 0x00080000)
+#define SSO_XAQ_TAIL_PTR(n, a)	(SSO_AQ_ADDR(n, a) + 0x00090000)
+#define SSO_XAQ_HEAD_NEXT(n, a)	(SSO_AQ_ADDR(n, a) + 0x000a0000)
+#define SSO_XAQ_TAIL_NEXT(n, a)	(SSO_AQ_ADDR(n, a) + 0x000b0000)
+
+#define SSO_GRP_TAQ_THR(n, g)	(SSO_GRP_ADDR(n, g) + 0x20000100)
+#define SSO_GRP_PRI(n, g)	(SSO_GRP_ADDR(n, g) + 0x20000200)
+#define SSO_GRP_INT(n, g)	(SSO_GRP_ADDR(n, g) + 0x20000400)
+#define SSO_GRP_INT_THR(n, g)	(SSO_GRP_ADDR(n, g) + 0x20000500)
+#define SSO_GRP_AQ_CNT(n, g)	(SSO_GRP_ADDR(n, g) + 0x20000700)
+
+/* SSO interrupt numbers start here */
+#define SSO_IRQ_START		0x61000
+
+#define SSO_AW_STATUS_XAQ_BU_CACHED_MASK	GENMASK_ULL(5, 0)
+
+#define SSO_AW_CFG_XAQ_ALOC_DIS		BIT(6)
+#define SSO_AW_CFG_XAQ_BYP_DIS		BIT(4)
+#define SSO_AW_CFG_STT			BIT(3)
+#define SSO_AW_CFG_LDT			BIT(2)
+#define SSO_AW_CFG_LDWB			BIT(1)
+#define SSO_AW_CFG_RWEN			BIT(0)
+
+#define SSO_ERR0_FPE			BIT(0)
+
+#define SSO_TAQ_ADD_RSVD_FREE_SHIFT	16
+
+#define SSO_XAQ_AURA_NODE_SHIFT		10
+
+#define SSO_XAQ_PTR_MASK		GENMASK_ULL(41, 7)
+
+#define SSO_GRP_TAQ_THR_MAX_THR_MASK	GENMASK_ULL(42, 32)
+#define SSO_GRP_TAQ_THR_RSVD_THR_MASK	GENMASK_ULL(10, 0)
+#define SSO_GRP_TAQ_THR_RSVD_THR_SHIFT	32
+
+#define SSO_GRP_PRI_WEIGHT_MAXIMUM	63
+#define SSO_GRP_PRI_WEIGHT_SHIFT	16
+
+#define SSO_GRP_INT_EXE_INT		BIT(1)
+
+#define SSO_GRP_AQ_CNT_AQ_CNT_MASK	GENMASK_ULL(32, 0)
+
+/* SSO tag types */
+#define SSO_TAG_TYPE_ORDERED            0ull
+#define SSO_TAG_TYPE_ATOMIC             1ull
+#define SSO_TAG_TYPE_UNTAGGED           2ull
+#define SSO_TAG_TYPE_EMPTY              3ull
+#define SSO_TAG_SWDID			0x60ull
+
+/* SSO work queue bitfields */
+#define SSO_GET_WORK_DID_SHIFT		40
+#define SSO_GET_WORK_NODE_SHIFT		36
+#define SSO_GET_WORK_GROUPED		BIT(30)
+#define SSO_GET_WORK_RTNGRP		BIT(29)
+#define SSO_GET_WORK_IDX_GRP_MASK_SHIFT	4
+#define SSO_GET_WORK_WAITW_WAIT		BIT(3)
+#define SSO_GET_WORK_WAITW_NO_WAIT	0ull
+
+#define SSO_GET_WORK_DMA_S_SCRADDR	BIT(63)
+#define SSO_GET_WORK_DMA_S_LEN_SHIFT	48
+#define SSO_GET_WORK_LD_S_IO		BIT(48)
+#define SSO_GET_WORK_RTN_S_NO_WORK	BIT(63)
+#define SSO_GET_WORK_RTN_S_GRP_MASK	GENMASK_ULL(57, 48)
+#define SSO_GET_WORK_RTN_S_GRP_SHIFT	48
+#define SSO_GET_WORK_RTN_S_WQP_MASK	GENMASK_ULL(41, 0)
+
+#endif /* _OCTEON3_SSO_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 08/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet core
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (6 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 07/10] netdev: cavium: octeon: Add Octeon III SSO Support Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 09/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet building Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 10/10] MAINTAINERS: Add entry for drivers/net/ethernet/cavium/octeon/octeon3-* Steven J. Hill
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

This is the main core of the BGX Ethernet driver.

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/octeon/octeon3-core.c | 2363 +++++++++++++++++++++
 1 file changed, 2363 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/octeon/octeon3-core.c

diff --git a/drivers/net/ethernet/cavium/octeon/octeon3-core.c b/drivers/net/ethernet/cavium/octeon/octeon3-core.c
new file mode 100644
index 0000000..b0dfacb
--- /dev/null
+++ b/drivers/net/ethernet/cavium/octeon/octeon3-core.c
@@ -0,0 +1,2363 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Octeon III BGX Nexus Ethernet driver core
+ *
+ * Copyright (C) 2018 Cavium, Inc.
+ */
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/kthread.h>
+#include <linux/net_tstamp.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/timecounter.h>
+
+#include <asm/octeon/cvmx-mio-defs.h>
+
+#include "octeon3.h"
+
+/*  First buffer:
+ *
+ *                            +---SKB---------+
+ *                            |               |
+ *                            |               |
+ *                         +--+--*data        |
+ *                         |  |               |
+ *                         |  |               |
+ *                         |  +---------------+
+ *                         |       /|\
+ *                         |        |
+ *                         |        |
+ *                        \|/       |
+ * WQE - 128 -+-----> +-------------+-------+     -+-
+ *            |       |    *skb ----+       |      |
+ *            |       |                     |      |
+ *            |       |                     |      |
+ *  WQE_SKIP = 128    |                     |      |
+ *            |       |                     |      |
+ *            |       |                     |      |
+ *            |       |                     |      |
+ *            |       |                     |      First Skip
+ * WQE   -----+-----> +---------------------+      |
+ *                    |   word 0            |      |
+ *                    |   word 1            |      |
+ *                    |   word 2            |      |
+ *                    |   word 3            |      |
+ *                    |   word 4            |      |
+ *                    +---------------------+     -+-
+ *               +----+- packet link        |
+ *               |    |  packet data        |
+ *               |    |                     |
+ *               |    |                     |
+ *               |    |         .           |
+ *               |    |         .           |
+ *               |    |         .           |
+ *               |    +---------------------+
+ *               |
+ *               |
+ * Later buffers:|
+ *               |
+ *               |
+ *               |
+ *               |
+ *               |
+ *               |            +---SKB---------+
+ *               |            |               |
+ *               |            |               |
+ *               |         +--+--*data        |
+ *               |         |  |               |
+ *               |         |  |               |
+ *               |         |  +---------------+
+ *               |         |       /|\
+ *               |         |        |
+ *               |         |        |
+ *               |        \|/       |
+ * WQE - 128 ----+--> +-------------+-------+     -+-
+ *               |    |    *skb ----+       |      |
+ *               |    |                     |      |
+ *               |    |                     |      |
+ *               |    |                     |      |
+ *               |    |                     |      LATER_SKIP = 128
+ *               |    |                     |      |
+ *               |    |                     |      |
+ *               |    |                     |      |
+ *               |    +---------------------+     -+-
+ *               |    |  packet link        |
+ *               +--> |  packet data        |
+ *                    |                     |
+ *                    |                     |
+ *                    |         .           |
+ *                    |         .           |
+ *                    |         .           |
+ *                    +---------------------+
+ */
+
+#define FPA3_NUM_AURAS		1024
+#define MAX_TX_QUEUE_DEPTH	512
+#define MAX_RX_CONTEXTS		32
+#define USE_ASYNC_IOBDMA	1	/* Always 1 */
+
+#define SKB_AURA_MAGIC		0xbadc0ffee4dad000ULL
+#define SKB_AURA_OFFSET		1
+#define SKB_PTR_OFFSET		0
+
+/* PTP registers and bits */
+#define MIO_PTP_CLOCK_HI(n)	(CVMX_MIO_PTP_CLOCK_HI + NODE_OFFSET(n))
+#define MIO_PTP_CLOCK_CFG(n)	(CVMX_MIO_PTP_CLOCK_CFG + NODE_OFFSET(n))
+#define MIO_PTP_CLOCK_COMP(n)	(CVMX_MIO_PTP_CLOCK_COMP + NODE_OFFSET(n))
+
+/* Misc. bitfields */
+#define MIO_PTP_CLOCK_CFG_PTP_EN		BIT(0)
+#define BGX_GMP_GMI_RX_FRM_CTL_PTP_MODE		BIT(12)
+
+/* Up to 2 napis per core are supported */
+#define MAX_NAPI_PER_CPU	2
+#define MAX_NAPIS_PER_NODE	(MAX_CORES * MAX_NAPI_PER_CPU)
+
+struct octeon3_napi_wrapper {
+	struct napi_struct napi;
+	int available;
+	int idx;
+	int cpu;
+	struct octeon3_rx *cxt;
+} ____cacheline_aligned_in_smp;
+
+static struct octeon3_napi_wrapper
+napi_wrapper[MAX_NODES][MAX_NAPIS_PER_NODE]
+__cacheline_aligned_in_smp;
+
+struct octeon3_ethernet;
+
+struct octeon3_rx {
+	struct octeon3_napi_wrapper *napiw;
+	DECLARE_BITMAP(napi_idx_bitmap, MAX_CORES);
+	spinlock_t napi_idx_lock;	/* Protect the napi index bitmap */
+	struct octeon3_ethernet *parent;
+	int rx_grp;
+	int rx_irq;
+	cpumask_t rx_affinity_hint;
+};
+
+struct octeon3_ethernet {
+	struct bgx_port_netdev_priv bgx_priv; /* Must be first element. */
+	struct list_head list;
+	struct net_device *netdev;
+	enum octeon3_mac_type mac_type;
+	struct octeon3_rx rx_cxt[MAX_RX_CONTEXTS];
+	struct ptp_clock_info ptp_info;
+	struct ptp_clock *ptp_clock;
+	struct cyclecounter cc;
+	struct timecounter tc;
+	spinlock_t ptp_lock;		/* Serialize ptp clock adjustments */
+	int num_rx_cxt;
+	int pki_aura;
+	int pknd;
+	int pko_queue;
+	int node;
+	int interface;
+	int index;
+	int rx_buf_count;
+	int tx_complete_grp;
+	int rx_timestamp_hw:1;
+	int tx_timestamp_hw:1;
+	spinlock_t stat_lock;		/* Protects stats counters */
+	u64 last_packets;
+	u64 last_octets;
+	u64 last_dropped;
+	atomic64_t rx_packets;
+	atomic64_t rx_octets;
+	atomic64_t rx_dropped;
+	atomic64_t rx_errors;
+	atomic64_t rx_length_errors;
+	atomic64_t rx_crc_errors;
+	atomic64_t tx_packets;
+	atomic64_t tx_octets;
+	atomic64_t tx_dropped;
+	/* The following two fields need to be on a different cache line as
+	 * they are updated by pko which invalidates the cache every time it
+	 * updates them. The idea is to prevent other fields from being
+	 * invalidated unnecessarily.
+	 */
+	char cacheline_pad1[CVMX_CACHE_LINE_SIZE];
+	atomic64_t buffers_needed;
+	atomic64_t tx_backlog;
+	char cacheline_pad2[CVMX_CACHE_LINE_SIZE];
+};
+
+static DEFINE_MUTEX(octeon3_eth_init_mutex);
+
+struct octeon3_ethernet_node;
+
+struct octeon3_ethernet_worker {
+	wait_queue_head_t queue;
+	struct task_struct *task;
+	struct octeon3_ethernet_node *oen;
+	atomic_t kick;
+	int order;
+};
+
+struct octeon3_ethernet_node {
+	bool init_done;
+	bool napi_init_done;
+	int next_cpu_irq_affinity;
+	int node;
+	int pki_packet_pool;
+	int sso_pool;
+	int pko_pool;
+	void *sso_pool_stack;
+	void *pko_pool_stack;
+	void *pki_packet_pool_stack;
+	int sso_aura;
+	int pko_aura;
+	int tx_complete_grp;
+	int tx_irq;
+	cpumask_t tx_affinity_hint;
+	struct octeon3_ethernet_worker workers[8];
+	struct mutex device_list_lock;	/* Protects the device list */
+	struct list_head device_list;
+	spinlock_t napi_alloc_lock;	/* Protects napi allocations */
+};
+
+/* This array keeps track of the number of napis running on each cpu */
+static u8 octeon3_cpu_napi_cnt[NR_CPUS] __cacheline_aligned_in_smp;
+
+static int use_tx_queues;
+module_param(use_tx_queues, int, 0644);
+MODULE_PARM_DESC(use_tx_queues, "Use network layer transmit queues.");
+
+static int wait_pko_response;
+module_param(wait_pko_response, int, 0644);
+MODULE_PARM_DESC(wait_pko_response, "Wait for response after each pko command.");
+
+static int num_packet_buffers = 768;
+module_param(num_packet_buffers, int, 0444);
+MODULE_PARM_DESC(num_packet_buffers, "Number of packet buffers to allocate per port.");
+
+static int packet_buffer_size = 2048;
+module_param(packet_buffer_size, int, 0444);
+MODULE_PARM_DESC(packet_buffer_size, "Size of each RX packet buffer.");
+
+static int rx_contexts = 1;
+module_param(rx_contexts, int, 0444);
+MODULE_PARM_DESC(rx_contexts, "Number of RX threads per port.");
+
+int ilk0_lanes = 1;
+module_param(ilk0_lanes, int, 0444);
+MODULE_PARM_DESC(ilk0_lanes, "Number of SerDes lanes used by ILK link 0.");
+
+int ilk1_lanes = 1;
+module_param(ilk1_lanes, int, 0444);
+MODULE_PARM_DESC(ilk1_lanes, "Number of SerDes lanes used by ILK link 1.");
+
+static struct octeon3_ethernet_node octeon3_eth_node[MAX_NODES];
+static struct kmem_cache *octeon3_eth_sso_pko_cache;
+
+/* Reads a 64 bit value from the processor local scratchpad memory.
+ *
+ * @param offset byte offset into scratch pad to read
+ *
+ * @return value read
+ */
+static inline u64 scratch_read64(u64 offset)
+{
+	return *(u64 *)((long)SCRATCH_BASE_ADDR + offset);
+}
+
+/* Write a 64 bit value to the processor local scratchpad memory.
+ *
+ * @param offset byte offset into scratch pad to write
+ * @param value value to write
+ */
+static inline void scratch_write64(u64 offset, u64 value)
+{
+	*(u64 *)((long)SCRATCH_BASE_ADDR + offset) = value;
+}
+
+static int get_pki_chan(int node, int interface, int index)
+{
+	int pki_chan;
+
+	pki_chan = node << 12;
+
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX) &&
+	    (interface == 1 || interface == 2)) {
+		/* SRIO */
+		pki_chan |= 0x240 + (2 * (interface - 1)) + index;
+	} else {
+		/* BGX */
+		pki_chan |= 0x800 + (0x100 * interface) + (0x10 * index);
+	}
+
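+	/* For example, node 0, bgx interface 1, index 2 yields
+	 * 0x800 + 0x100 * 1 + 0x10 * 2 = 0x920.
+	 */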
+	return pki_chan;
+}
+
+/* Map auras to the field priv->buffers_needed. Used to speed up packet
+ * transmission.
+ */
+static void *aura2bufs_needed[MAX_NODES][FPA3_NUM_AURAS];
+
+static int octeon3_eth_lgrp_to_ggrp(int node, int grp)
+{
+	return (node << 8) | grp;
+}
+
+static void octeon3_eth_gen_affinity(int node, cpumask_t *mask)
+{
+	int cpu;
+
+	do {
+		cpu = cpumask_next(octeon3_eth_node[node].next_cpu_irq_affinity,
+				   cpu_online_mask);
+		octeon3_eth_node[node].next_cpu_irq_affinity++;
+		if (cpu >= nr_cpu_ids) {
+			octeon3_eth_node[node].next_cpu_irq_affinity = -1;
+			continue;
+		}
+	} while (cpu >= nr_cpu_ids);
+	cpumask_clear(mask);
+	cpumask_set_cpu(cpu, mask);
+}
+
+struct wr_ret {
+	void *work;
+	u16 grp;
+};
+
+static inline struct wr_ret octeon3_core_get_work_sync(int grp)
+{
+	u64 node = cvmx_get_node_num();
+	u64 addr, response;
+	struct wr_ret r;
+
+	/* See SSO_GET_WORK_LD_S for the address to read */
+	addr = SSO_GET_WORK_DMA_S_SCRADDR;
+	addr |= SSO_GET_WORK_LD_S_IO;
+	addr |= SSO_TAG_SWDID << SSO_GET_WORK_DID_SHIFT;
+	addr |= node << SSO_GET_WORK_NODE_SHIFT;
+	addr |= SSO_GET_WORK_GROUPED;
+	addr |= SSO_GET_WORK_RTNGRP;
+	addr |= octeon3_eth_lgrp_to_ggrp(node, grp) <<
+		SSO_GET_WORK_IDX_GRP_MASK_SHIFT;
+	addr |= SSO_GET_WORK_WAITW_NO_WAIT;
+	response = __raw_readq((void __iomem *)addr);
+
+	/* See SSO_GET_WORK_RTN_S for the format of the response */
+	r.grp = (response & SSO_GET_WORK_RTN_S_GRP_MASK) >>
+		SSO_GET_WORK_RTN_S_GRP_SHIFT;
+	if (response & SSO_GET_WORK_RTN_S_NO_WORK)
+		r.work = NULL;
+	else
+		r.work = phys_to_virt(response & SSO_GET_WORK_RTN_S_WQP_MASK);
+
+	return r;
+}
+
+/* octeon3_core_get_work_async - Request work via an iobdma command. Doesn't wait
+ *				 for the response.
+ *
+ * @grp: Group to request work for.
+ */
+static inline void octeon3_core_get_work_async(unsigned int grp)
+{
+	u64 data, node = cvmx_get_node_num();
+
+	/* See SSO_GET_WORK_DMA_S for the command structure */
+	data = 1ull << SSO_GET_WORK_DMA_S_LEN_SHIFT;
+	data |= SSO_TAG_SWDID << SSO_GET_WORK_DID_SHIFT;
+	data |= node << SSO_GET_WORK_NODE_SHIFT;
+	data |= SSO_GET_WORK_GROUPED;
+	data |= SSO_GET_WORK_RTNGRP;
+	data |= octeon3_eth_lgrp_to_ggrp(node, grp) <<
+		SSO_GET_WORK_IDX_GRP_MASK_SHIFT;
+	data |= SSO_GET_WORK_WAITW_NO_WAIT;
+
+	__raw_writeq(data, (void __iomem *)IOBDMA_ORDERED_IO_ADDR);
+}
+
+/* octeon3_core_get_response_async - Read the request work response. Must be
+ *				     called after calling
+ *				     octeon3_core_get_work_async().
+ *
+ * Returns work queue entry.
+ */
+static inline struct wr_ret octeon3_core_get_response_async(void)
+{
+	struct wr_ret r;
+	u64 response;
+
+	CVMX_SYNCIOBDMA;
+	response = scratch_read64(0);
+
+	/* See SSO_GET_WORK_RTN_S for the format of the response */
+	r.grp = (response & SSO_GET_WORK_RTN_S_GRP_MASK) >>
+		SSO_GET_WORK_RTN_S_GRP_SHIFT;
+	if (response & SSO_GET_WORK_RTN_S_NO_WORK)
+		r.work = NULL;
+	else
+		r.work = phys_to_virt(response & SSO_GET_WORK_RTN_S_WQP_MASK);
+
+	return r;
+}
+
+static void octeon3_eth_replenish_rx(struct octeon3_ethernet *priv, int count)
+{
+	struct sk_buff *skb;
+	int i;
+
+	for (i = 0; i < count; i++) {
+		void **buf;
+
+		skb = __alloc_skb(packet_buffer_size, GFP_ATOMIC, 0,
+				  priv->node);
+		if (!skb)
+			break;
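+		/* Align to 128 bytes to match the WQE_SKIP area in the buffer
+		 * layout above; word SKB_PTR_OFFSET of that area records the
+		 * skb so it can be recovered when the buffer is processed.
+		 */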
+		buf = (void **)PTR_ALIGN(skb->head, 128);
+		buf[SKB_PTR_OFFSET] = skb;
+		octeon_fpa3_free(priv->node, priv->pki_aura, buf);
+	}
+}
+
+static bool octeon3_eth_tx_done_runnable(struct octeon3_ethernet_worker *worker)
+{
+	return atomic_read(&worker->kick) != 0 || kthread_should_stop();
+}
+
+static int octeon3_eth_replenish_all(struct octeon3_ethernet_node *oen)
+{
+	int batch_size = 32, pending = 0;
+	struct octeon3_ethernet *priv;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(priv, &oen->device_list, list) {
+		int amount = atomic64_sub_if_positive(batch_size,
+						      &priv->buffers_needed);
+
+		if (amount >= 0) {
+			octeon3_eth_replenish_rx(priv, batch_size);
+			pending += amount;
+		}
+	}
+	rcu_read_unlock();
+	return pending;
+}
+
+static int octeon3_eth_tx_complete_hwtstamp(struct octeon3_ethernet *priv,
+					    struct sk_buff *skb)
+{
+	struct skb_shared_hwtstamps shts;
+	u64 hwts, ns;
+
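+	/* The raw timestamp is expected in the second u64 of skb->cb,
+	 * presumably written by the transmit path (e.g. via a PKO SEND_MEM
+	 * SETTSTMP sub-command); convert it to nanoseconds here.
+	 */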
+	hwts = *((u64 *)(skb->cb) + 1);
+	ns = timecounter_cyc2time(&priv->tc, hwts);
+	memset(&shts, 0, sizeof(shts));
+	shts.hwtstamp = ns_to_ktime(ns);
+	skb_tstamp_tx(skb, &shts);
+
+	return 0;
+}
+
+static int octeon3_eth_tx_complete_worker(void *data)
+{
+	int backlog, backlog_stop_thresh, i, order, tx_complete_stop_thresh;
+	struct octeon3_ethernet_worker *worker = data;
+	struct octeon3_ethernet_node *oen = worker->oen;
+	u64 aq_cnt;
+
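+	/* Thresholds scale with the worker's order: worker 0 drains until the
+	 * backlog drops below 31, while workers 1..7 use 80, 160, ... (and
+	 * 100, 200, ... for the tx-complete queue) so they only stay busy
+	 * while the lower-order workers cannot keep up.
+	 */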
+	order = worker->order;
+	backlog_stop_thresh = (order == 0 ? 31 : order * 80);
+	tx_complete_stop_thresh = (order * 100);
+
+	while (!kthread_should_stop()) {
+		/* Replaced by wait_event to avoid warnings like
+		 * "task oct3_eth/0:2:1250 blocked for more than 120 seconds."
+		 */
+		wait_event_interruptible(worker->queue,
+					 octeon3_eth_tx_done_runnable(worker));
+		atomic_dec_if_positive(&worker->kick); /* clear the flag */
+
+		do {
+			backlog = octeon3_eth_replenish_all(oen);
+			for (i = 0; i < 100; i++) {
+				void **work;
+				struct net_device *tx_netdev;
+				struct octeon3_ethernet *tx_priv;
+				struct sk_buff *skb;
+				struct wr_ret r;
+
+				r = octeon3_core_get_work_sync(oen->tx_complete_grp);
+				work = r.work;
+				if (!work)
+					break;
+				tx_netdev = work[0];
+				tx_priv = netdev_priv(tx_netdev);
+				if (unlikely(netif_queue_stopped(tx_netdev)) &&
+				    atomic64_read(&tx_priv->tx_backlog) <
+				    MAX_TX_QUEUE_DEPTH)
+					netif_wake_queue(tx_netdev);
+				skb = container_of((void *)work,
+						   struct sk_buff, cb);
+				if (unlikely(tx_priv->tx_timestamp_hw) &&
+				    unlikely(skb_shinfo(skb)->tx_flags &
+					     SKBTX_IN_PROGRESS))
+					octeon3_eth_tx_complete_hwtstamp(tx_priv,
+									 skb);
+				dev_kfree_skb(skb);
+			}
+
+			aq_cnt = oct_csr_read(SSO_GRP_AQ_CNT(oen->node,
+							     oen->tx_complete_grp));
+			aq_cnt &= SSO_GRP_AQ_CNT_AQ_CNT_MASK;
+			if ((backlog > backlog_stop_thresh ||
+			     aq_cnt > tx_complete_stop_thresh) &&
+			     order < ARRAY_SIZE(oen->workers) - 1) {
+				atomic_set(&oen->workers[order + 1].kick, 1);
+				wake_up(&oen->workers[order + 1].queue);
+			}
+		} while (!need_resched() && (backlog > backlog_stop_thresh ||
+			 aq_cnt > tx_complete_stop_thresh));
+
+		cond_resched();
+
+		if (!octeon3_eth_tx_done_runnable(worker))
+			octeon3_sso_irq_set(oen->node, oen->tx_complete_grp,
+					    true);
+	}
+
+	return 0;
+}
+
+static irqreturn_t octeon3_eth_tx_handler(int irq, void *info)
+{
+	struct octeon3_ethernet_node *oen = info;
+
+	/* Disarm the irq. */
+	octeon3_sso_irq_set(oen->node, oen->tx_complete_grp, false);
+	atomic_set(&oen->workers[0].kick, 1);
+	wake_up(&oen->workers[0].queue);
+	return IRQ_HANDLED;
+}
+
+static int octeon3_eth_global_init(unsigned int node,
+				   struct platform_device *pdev)
+{
+	struct octeon3_ethernet_node *oen;
+	unsigned int sso_intsn;
+	int i, rv = 0;
+
+	mutex_lock(&octeon3_eth_init_mutex);
+
+	oen = octeon3_eth_node + node;
+
+	if (oen->init_done)
+		goto done;
+
+	/* CN78XX-P1.0 cannot un-initialize PKO, so get a module
+	 * reference to prevent it from being unloaded.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+		if (!try_module_get(THIS_MODULE))
+			dev_err(&pdev->dev, "ERROR: Could not obtain module reference for CN78XX-P1.0\n");
+
+	INIT_LIST_HEAD(&oen->device_list);
+	mutex_init(&oen->device_list_lock);
+	spin_lock_init(&oen->napi_alloc_lock);
+
+	oen->node = node;
+
+	octeon_fpa3_init(node);
+	rv = octeon_fpa3_pool_init(node, -1, &oen->sso_pool,
+				   &oen->sso_pool_stack, 40960);
+	if (rv)
+		goto done;
+
+	rv = octeon_fpa3_pool_init(node, -1, &oen->pko_pool,
+				   &oen->pko_pool_stack, 40960);
+	if (rv)
+		goto done;
+
+	rv = octeon_fpa3_pool_init(node, -1, &oen->pki_packet_pool,
+				   &oen->pki_packet_pool_stack,
+				   64 * num_packet_buffers);
+	if (rv)
+		goto done;
+
+	rv = octeon_fpa3_aura_init(node, oen->sso_pool, -1,
+				   &oen->sso_aura, num_packet_buffers, 20480);
+	if (rv)
+		goto done;
+
+	rv = octeon_fpa3_aura_init(node, oen->pko_pool, -1,
+				   &oen->pko_aura, num_packet_buffers, 20480);
+	if (rv)
+		goto done;
+
+	dev_info(&pdev->dev, "SSO:%d:%d, PKO:%d:%d\n", oen->sso_pool,
+		 oen->sso_aura, oen->pko_pool, oen->pko_aura);
+
+	if (!octeon3_eth_sso_pko_cache) {
+		octeon3_eth_sso_pko_cache = kmem_cache_create("sso_pko", 4096,
+							      128, 0, NULL);
+		if (!octeon3_eth_sso_pko_cache) {
+			rv = -ENOMEM;
+			goto done;
+		}
+	}
+
+	rv = octeon_fpa3_mem_fill(node, octeon3_eth_sso_pko_cache,
+				  oen->sso_aura, 1024);
+	if (rv)
+		goto done;
+
+	rv = octeon_fpa3_mem_fill(node, octeon3_eth_sso_pko_cache,
+				  oen->pko_aura, 1024);
+	if (rv)
+		goto done;
+
+	rv = octeon3_sso_init(node, oen->sso_aura);
+	if (rv)
+		goto done;
+
+	oen->tx_complete_grp = octeon3_sso_alloc_groups(node, NULL, 1, -1);
+	if (oen->tx_complete_grp < 0) {
+		rv = -ENODEV;
+		goto done;
+	}
+
+	sso_intsn = SSO_IRQ_START | oen->tx_complete_grp;
+	oen->tx_irq = irq_create_mapping(NULL, sso_intsn);
+	if (!oen->tx_irq) {
+		rv = -ENODEV;
+		goto done;
+	}
+
+	rv = octeon3_pko_init_global(node, oen->pko_aura);
+	if (rv) {
+		rv = -ENODEV;
+		goto done;
+	}
+
+	octeon3_pki_vlan_init(node);
+	octeon3_pki_cluster_init(node, pdev);
+	octeon3_pki_ltype_init(node);
+	octeon3_pki_enable(node);
+
+	for (i = 0; i < ARRAY_SIZE(oen->workers); i++) {
+		oen->workers[i].oen = oen;
+		init_waitqueue_head(&oen->workers[i].queue);
+		oen->workers[i].order = i;
+	}
+	for (i = 0; i < ARRAY_SIZE(oen->workers); i++) {
+		oen->workers[i].task =
+			kthread_create_on_node(octeon3_eth_tx_complete_worker,
+					       oen->workers + i, node,
+					       "oct3_eth/%d:%d", node, i);
+		if (IS_ERR(oen->workers[i].task)) {
+			rv = PTR_ERR(oen->workers[i].task);
+			goto done;
+		} else {
+#ifdef CONFIG_NUMA
+			set_cpus_allowed_ptr(oen->workers[i].task,
+					     cpumask_of_node(node));
+#endif
+			wake_up_process(oen->workers[i].task);
+		}
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+		octeon3_sso_pass1_limit(node, oen->tx_complete_grp);
+
+	rv = request_irq(oen->tx_irq, octeon3_eth_tx_handler,
+			 IRQ_TYPE_EDGE_RISING, "oct3_eth_tx_done", oen);
+	if (rv)
+		goto done;
+	octeon3_eth_gen_affinity(node, &oen->tx_affinity_hint);
+	irq_set_affinity_hint(oen->tx_irq, &oen->tx_affinity_hint);
+
+	octeon3_sso_irq_set(node, oen->tx_complete_grp, true);
+
+	oen->init_done = true;
+done:
+	mutex_unlock(&octeon3_eth_init_mutex);
+	return rv;
+}
+
+static struct sk_buff *octeon3_eth_work_to_skb(void *w)
+{
+	struct sk_buff *skb;
+	void **f = w;
+
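+	/* The skb pointer is stashed at the start of the 128-byte-aligned
+	 * receive buffer by octeon3_eth_replenish_rx(); the work pointer
+	 * handed back by the SSO appears to sit 16 u64 slots (128 bytes)
+	 * past it, hence the f[-16] back-reference.
+	 */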
+	skb = f[-16];
+	return skb;
+}
+
+/* octeon3_napi_alloc_cpu - Find an available cpu. This function must be called
+ *			    with the napi_alloc_lock lock held.
+ * @node:		    Node to allocate cpu from.
+ * @cpu:		    Cpu to bind the napi to:
+ *				<  0: use any cpu.
+ *				>= 0: use requested cpu.
+ *
+ * Returns cpu number.
+ * Returns <0 for error codes.
+ */
+static int octeon3_napi_alloc_cpu(int node, int cpu)
+{
+	int min_cnt = MAX_NAPIS_PER_NODE;
+	int min_cpu = -EBUSY;
+
+	if (cpu >= 0) {
+		min_cpu = cpu;
+	} else {
+		for_each_cpu(cpu, cpumask_of_node(node)) {
+			if (octeon3_cpu_napi_cnt[cpu] == 0) {
+				min_cpu = cpu;
+				break;
+			} else if (octeon3_cpu_napi_cnt[cpu] < min_cnt) {
+				min_cnt = octeon3_cpu_napi_cnt[cpu];
+				min_cpu = cpu;
+			}
+		}
+	}
+
+	if (min_cpu < 0)
+		return min_cpu;
+
+	octeon3_cpu_napi_cnt[min_cpu]++;
+
+	return min_cpu;
+}
+
+/* octeon3_napi_alloc - Allocate a napi.
+ * @cxt: Receive context the napi will be added to.
+ * @idx: Napi index within the receive context.
+ * @cpu: Cpu to bind the napi to:
+ *		<  0: use any cpu.
+ *		>= 0: use requested cpu.
+ *
+ * Returns pointer to napi wrapper.
+ * Returns NULL on error.
+ */
+static struct octeon3_napi_wrapper *octeon3_napi_alloc(struct octeon3_rx *cxt,
+						       int idx, int cpu)
+{
+	struct octeon3_ethernet *priv = cxt->parent;
+	struct octeon3_ethernet_node *oen;
+	int i, node = priv->node;
+	unsigned long flags;
+
+	oen = octeon3_eth_node + node;
+	spin_lock_irqsave(&oen->napi_alloc_lock, flags);
+
+	/* Find a free napi wrapper */
+	for (i = 0; i < MAX_NAPIS_PER_NODE; i++) {
+		if (napi_wrapper[node][i].available) {
+			/* Allocate a cpu to use */
+			cpu = octeon3_napi_alloc_cpu(node, cpu);
+			if (cpu < 0)
+				break;
+
+			napi_wrapper[node][i].available = 0;
+			napi_wrapper[node][i].idx = idx;
+			napi_wrapper[node][i].cpu = cpu;
+			napi_wrapper[node][i].cxt = cxt;
+			spin_unlock_irqrestore(&oen->napi_alloc_lock, flags);
+			return &napi_wrapper[node][i];
+		}
+	}
+
+	spin_unlock_irqrestore(&oen->napi_alloc_lock, flags);
+	return NULL;
+}
+
+/* octeon_cpu_napi_sched - Schedule a napi for execution. The napi will start
+ *			   executing on the cpu calling this function.
+ * @info: Pointer to the napi to schedule for execution.
+ */
+static void octeon_cpu_napi_sched(void *info)
+{
+	struct napi_struct *napi = info;
+
+	napi_schedule(napi);
+}
+
+/* octeon3_rm_napi_from_cxt - Remove a napi from a receive context.
+ * @node: Node napi belongs to.
+ * @napiw: Pointer to napi to remove.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int octeon3_rm_napi_from_cxt(int node,
+				    struct octeon3_napi_wrapper *napiw)
+{
+	struct octeon3_ethernet_node *oen;
+	struct octeon3_rx *cxt;
+	unsigned long flags;
+	int idx;
+
+	oen = octeon3_eth_node + node;
+	cxt = napiw->cxt;
+	idx = napiw->idx;
+
+	/* Free the napi block */
+	spin_lock_irqsave(&oen->napi_alloc_lock, flags);
+	octeon3_cpu_napi_cnt[napiw->cpu]--;
+	napiw->available = 1;
+	napiw->idx = -1;
+	napiw->cpu = -1;
+	napiw->cxt = NULL;
+	spin_unlock_irqrestore(&oen->napi_alloc_lock, flags);
+
+	/* Free the napi idx */
+	spin_lock_irqsave(&cxt->napi_idx_lock, flags);
+	bitmap_clear(cxt->napi_idx_bitmap, idx, 1);
+	spin_unlock_irqrestore(&cxt->napi_idx_lock, flags);
+
+	return 0;
+}
+
+/* octeon3_add_napi_to_cxt - Add a napi to a receive context.
+ * @cxt: Pointer to receive context.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int octeon3_add_napi_to_cxt(struct octeon3_rx *cxt)
+{
+	struct octeon3_ethernet *priv = cxt->parent;
+	struct octeon3_napi_wrapper *napiw;
+	unsigned long flags;
+	int idx, rc;
+
+	/* Get a free napi idx */
+	spin_lock_irqsave(&cxt->napi_idx_lock, flags);
+	idx = find_first_zero_bit(cxt->napi_idx_bitmap, MAX_CORES);
+	if (unlikely(idx >= MAX_CORES)) {
+		spin_unlock_irqrestore(&cxt->napi_idx_lock, flags);
+		return -ENOMEM;
+	}
+	bitmap_set(cxt->napi_idx_bitmap, idx, 1);
+	spin_unlock_irqrestore(&cxt->napi_idx_lock, flags);
+
+	/* Get a free napi block */
+	napiw = octeon3_napi_alloc(cxt, idx, -1);
+	if (unlikely(!napiw)) {
+		spin_lock_irqsave(&cxt->napi_idx_lock, flags);
+		bitmap_clear(cxt->napi_idx_bitmap, idx, 1);
+		spin_unlock_irqrestore(&cxt->napi_idx_lock, flags);
+		return -ENOMEM;
+	}
+
+	rc = smp_call_function_single(napiw->cpu, octeon_cpu_napi_sched,
+				      &napiw->napi, 0);
+	if (unlikely(rc))
+		octeon3_rm_napi_from_cxt(priv->node, napiw);
+
+	return rc;
+}
+
+/* Receive one packet.
+ * returns the number of RX buffers consumed.
+ */
+static int octeon3_eth_rx_one(struct octeon3_rx *rx, bool is_async,
+			      bool req_next)
+{
+	struct octeon3_ethernet *priv = rx->parent;
+	int len_remaining, ret, segments;
+	union buf_ptr packet_ptr;
+	unsigned int packet_len;
+	struct sk_buff *skb;
+	struct wqe *work;
+	struct wr_ret r;
+	void **buf;
+	u64 gaura;
+	u8 *data;
+
+	if (is_async)
+		r = octeon3_core_get_response_async();
+	else
+		r = octeon3_core_get_work_sync(rx->rx_grp);
+	work = r.work;
+	if (!work)
+		return 0;
+
+	/* Request the next work so it'll be ready when we need it */
+	if (is_async && req_next)
+		octeon3_core_get_work_async(rx->rx_grp);
+
+	skb = octeon3_eth_work_to_skb(work);
+
+	/* Save the aura and node this skb came from to allow the pko to free
+	 * the skb back to the correct aura. A magic number is also added to
+	 * later verify the skb came from the fpa.
+	 *
+	 *  63                                    12 11  10 9                  0
+	 * ---------------------------------------------------------------------
+	 * |                  magic                 | node |        aura       |
+	 * ---------------------------------------------------------------------
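+	 *
+	 * For example (hypothetical values): node 1, aura 0x25 encodes as
+	 * magic | (1 << 10) | 0x25; work->word0.aura is assumed to already
+	 * carry the node bits in this layout.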
+	 */
+	buf = (void **)PTR_ALIGN(skb->head, 128);
+	gaura = SKB_AURA_MAGIC | work->word0.aura;
+	buf[SKB_AURA_OFFSET] = (void *)gaura;
+
+	segments = work->word0.bufs;
+	ret = segments;
+	packet_ptr = work->packet_ptr;
+	if (unlikely(work->word2.err_level <= PKI_ERRLEV_LA &&
+		     work->word2.err_code != PKI_OPCODE_NONE)) {
+		atomic64_inc(&priv->rx_errors);
+		switch (work->word2.err_code) {
+		case PKI_OPCODE_JABBER:
+			atomic64_inc(&priv->rx_length_errors);
+			break;
+		case PKI_OPCODE_FCS:
+			atomic64_inc(&priv->rx_crc_errors);
+			break;
+		}
+		data = phys_to_virt(packet_ptr.addr);
+		for (;;) {
+			dev_kfree_skb_any(skb);
+			segments--;
+			if (segments <= 0)
+				break;
+			packet_ptr.u64 = *(u64 *)(data - 8);
+#ifndef __LITTLE_ENDIAN
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+				/* PKI_BUFLINK_S's are endian-swapped */
+				packet_ptr.u64 = swab64(packet_ptr.u64);
+			}
+#endif
+			data = phys_to_virt(packet_ptr.addr);
+			skb = octeon3_eth_work_to_skb((void *)round_down((unsigned long)data, 128ull));
+		}
+		goto out;
+	}
+
+	packet_len = work->word1.len;
+	data = phys_to_virt(packet_ptr.addr);
+	skb->data = data;
+	skb->len = packet_len;
+	len_remaining = packet_len;
+	if (segments == 1) {
+		/* Strip the ethernet fcs */
+		skb->len -= 4;
+		skb_set_tail_pointer(skb, skb->len);
+	} else {
+		bool first_frag = true;
+		struct sk_buff *current_skb = skb;
+		struct sk_buff *next_skb = NULL;
+		unsigned int segment_size;
+
+		skb_frag_list_init(skb);
+		for (;;) {
+			segment_size = (segments == 1) ?
+				len_remaining : packet_ptr.size;
+			len_remaining -= segment_size;
+			if (!first_frag) {
+				current_skb->len = segment_size;
+				skb->data_len += segment_size;
+				skb->truesize += current_skb->truesize;
+			}
+			skb_set_tail_pointer(current_skb, segment_size);
+			segments--;
+			if (segments == 0)
+				break;
+			packet_ptr.u64 = *(u64 *)(data - 8);
+#ifndef __LITTLE_ENDIAN
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+				/* PKI_BUFLINK_S's are endian-swapped */
+				packet_ptr.u64 = swab64(packet_ptr.u64);
+			}
+#endif
+			data = phys_to_virt(packet_ptr.addr);
+			next_skb = octeon3_eth_work_to_skb((void *)round_down((unsigned long)data, 128ull));
+			if (first_frag) {
+				next_skb->next =
+					skb_shinfo(current_skb)->frag_list;
+				skb_shinfo(current_skb)->frag_list = next_skb;
+			} else {
+				current_skb->next = next_skb;
+				next_skb->next = NULL;
+			}
+			current_skb = next_skb;
+			first_frag = false;
+			current_skb->data = data;
+		}
+
+		/* Strip the ethernet fcs */
+		pskb_trim(skb, skb->len - 4);
+	}
+
+	if (likely(priv->netdev->flags & IFF_UP)) {
+		skb_checksum_none_assert(skb);
+		if (unlikely(priv->rx_timestamp_hw)) {
+			/* The first 8 bytes are the timestamp */
+			u64 hwts = *(u64 *)skb->data;
+			u64 ns;
+			struct skb_shared_hwtstamps *shts;
+
+			ns = timecounter_cyc2time(&priv->tc, hwts);
+			shts = skb_hwtstamps(skb);
+			memset(shts, 0, sizeof(*shts));
+			shts->hwtstamp = ns_to_ktime(ns);
+			__skb_pull(skb, 8);
+		}
+
+		skb->protocol = eth_type_trans(skb, priv->netdev);
+		skb->dev = priv->netdev;
+		if (priv->netdev->features & NETIF_F_RXCSUM) {
+			if ((work->word2.lc_hdr_type == PKI_LTYPE_IP4 ||
+			     work->word2.lc_hdr_type == PKI_LTYPE_IP6) &&
+			    (work->word2.lf_hdr_type == PKI_LTYPE_TCP ||
+			     work->word2.lf_hdr_type == PKI_LTYPE_UDP ||
+			     work->word2.lf_hdr_type == PKI_LTYPE_SCTP))
+				if (work->word2.err_code == 0)
+					skb->ip_summed = CHECKSUM_UNNECESSARY;
+		}
+
+		netif_receive_skb(skb);
+	} else {
+		/* Drop any packet received for a device that isn't up */
+		atomic64_inc(&priv->rx_dropped);
+		dev_kfree_skb_any(skb);
+	}
+out:
+	return ret;
+}
+
+static int octeon3_eth_napi(struct napi_struct *napi, int budget)
+{
+	int idx, napis_inuse, n = 0, n_bufs = 0, rx_count = 0;
+	struct octeon3_napi_wrapper *napiw;
+	struct octeon3_ethernet *priv;
+	u64 aq_cnt, old_scratch;
+	struct octeon3_rx *cxt;
+
+	napiw = container_of(napi, struct octeon3_napi_wrapper, napi);
+	cxt = napiw->cxt;
+	priv = cxt->parent;
+
+	/* Get the amount of work pending */
+	aq_cnt = oct_csr_read(SSO_GRP_AQ_CNT(priv->node, cxt->rx_grp));
+	aq_cnt &= SSO_GRP_AQ_CNT_AQ_CNT_MASK;
+	/* Allow the last thread to add/remove threads if the amount of
+	 * pending work has grown/shrunk beyond what the current number
+	 * of threads can support.
+	 */
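+	/* For example, with two napis in use a third is scheduled once more
+	 * than 256 entries are queued, and the highest-index napi retires
+	 * itself once the count drops below 128.
+	 */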
+	idx = find_last_bit(cxt->napi_idx_bitmap, MAX_CORES);
+	napis_inuse = bitmap_weight(cxt->napi_idx_bitmap, MAX_CORES);
+
+	if (napiw->idx == idx) {
+		if (aq_cnt > napis_inuse * 128) {
+			octeon3_add_napi_to_cxt(cxt);
+		} else if (napiw->idx > 0 && aq_cnt < (napis_inuse - 1) * 128) {
+			napi_complete(napi);
+			octeon3_rm_napi_from_cxt(priv->node, napiw);
+			return 0;
+		}
+	}
+
+	if (likely(USE_ASYNC_IOBDMA)) {
+		/* Save scratch in case userspace is using it */
+		CVMX_SYNCIOBDMA;
+		old_scratch = scratch_read64(0);
+
+		octeon3_core_get_work_async(cxt->rx_grp);
+	}
+
+	while (rx_count < budget) {
+		n = 0;
+
+		if (likely(USE_ASYNC_IOBDMA)) {
+			bool req_next = rx_count < (budget - 1) ? true : false;
+
+			n = octeon3_eth_rx_one(cxt, true, req_next);
+		} else {
+			n = octeon3_eth_rx_one(cxt, false, false);
+		}
+
+		if (n == 0)
+			break;
+
+		n_bufs += n;
+		rx_count++;
+	}
+
+	/* Wake up worker threads */
+	n_bufs = atomic64_add_return(n_bufs, &priv->buffers_needed);
+	if (n_bufs >= 32) {
+		struct octeon3_ethernet_node *oen;
+
+		oen = octeon3_eth_node + priv->node;
+		atomic_set(&oen->workers[0].kick, 1);
+		wake_up(&oen->workers[0].queue);
+	}
+
+	/* Stop the thread when no work is pending */
+	if (rx_count < budget) {
+		napi_complete(napi);
+
+		if (napiw->idx > 0)
+			octeon3_rm_napi_from_cxt(priv->node, napiw);
+		else
+			octeon3_sso_irq_set(cxt->parent->node, cxt->rx_grp,
+					    true);
+	}
+
+	if (likely(USE_ASYNC_IOBDMA)) {
+		/* Restore the scratch area */
+		scratch_write64(0, old_scratch);
+	}
+
+	return rx_count;
+}
+
+/* octeon3_napi_init_node - Initialize the node napis.
+ * @node: Node napis belong to.
+ * @netdev: Default network device used to initialize the napis.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int octeon3_napi_init_node(int node, struct net_device *netdev)
+{
+	struct octeon3_ethernet_node *oen;
+	unsigned long flags;
+	int i;
+
+	oen = octeon3_eth_node + node;
+	spin_lock_irqsave(&oen->napi_alloc_lock, flags);
+
+	if (oen->napi_init_done)
+		goto done;
+
+	for (i = 0; i < MAX_NAPIS_PER_NODE; i++) {
+		netif_napi_add(netdev, &napi_wrapper[node][i].napi,
+			       octeon3_eth_napi, 32);
+		napi_enable(&napi_wrapper[node][i].napi);
+		napi_wrapper[node][i].available = 1;
+		napi_wrapper[node][i].idx = -1;
+		napi_wrapper[node][i].cpu = -1;
+		napi_wrapper[node][i].cxt = NULL;
+	}
+
+	oen->napi_init_done = true;
+done:
+	spin_unlock_irqrestore(&oen->napi_alloc_lock, flags);
+	return 0;
+}
+
+#undef BROKEN_SIMULATOR_CSUM
+
+static void ethtool_get_drvinfo(struct net_device *netdev,
+				struct ethtool_drvinfo *info)
+{
+	strcpy(info->driver, "octeon3-ethernet");
+	strcpy(info->version, "1.0");
+	strcpy(info->bus_info, "Builtin");
+}
+
+static int ethtool_get_ts_info(struct net_device *ndev,
+			       struct ethtool_ts_info *info)
+{
+	struct octeon3_ethernet *priv = netdev_priv(ndev);
+
+	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+		SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE;
+
+	if (priv->ptp_clock)
+		info->phc_index = ptp_clock_index(priv->ptp_clock);
+	else
+		info->phc_index = -1;
+
+	info->tx_types = (1 << HWTSTAMP_TX_OFF) | (1 << HWTSTAMP_TX_ON);
+
+	info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+		(1 << HWTSTAMP_FILTER_ALL);
+
+	return 0;
+}
+
+static const struct ethtool_ops octeon3_ethtool_ops = {
+	.get_drvinfo = ethtool_get_drvinfo,
+	.get_link_ksettings = bgx_port_ethtool_get_link_ksettings,
+	.set_settings = bgx_port_ethtool_set_settings,
+	.nway_reset = bgx_port_ethtool_nway_reset,
+	.get_link = ethtool_op_get_link,
+	.get_ts_info = ethtool_get_ts_info,
+};
+
+static int octeon3_eth_ndo_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+		struct octeon3_ethernet *priv = netdev_priv(netdev);
+		int fifo_size, max_mtu = 1500;
+
+		/* On 78XX-Pass1 the mtu must be limited.  The PKO may
+		 * lock up when calculating the L4 checksum for
+		 * large packets. How large the packets can be depends
+		 * on the amount of pko fifo assigned to the port.
+		 *
+		 *   FIFO size                Max frame size
+		 *	2.5 KB				1920
+		 *	5.0 KB				4480
+		 *     10.0 KB				9600
+		 *
+		 * The maximum mtu is set to the largest frame size minus the
+		 * l2 header.
+		 */
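+		/* For example, a 2.5 KB FIFO gives 1920 - ETH_HLEN (14) -
+		 * ETH_FCS_LEN (4) - 2 * VLAN_HLEN (8) = 1894 bytes of MTU.
+		 */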
+		fifo_size = octeon3_pko_get_fifo_size(priv->node,
+						      priv->interface,
+						      priv->index,
+						      priv->mac_type);
+
+		switch (fifo_size) {
+		case 2560:
+			max_mtu = 1920 - ETH_HLEN - ETH_FCS_LEN -
+				(2 * VLAN_HLEN);
+			break;
+
+		case 5120:
+			max_mtu = 4480 - ETH_HLEN - ETH_FCS_LEN -
+				(2 * VLAN_HLEN);
+			break;
+
+		case 10240:
+			max_mtu = 9600 - ETH_HLEN - ETH_FCS_LEN -
+				(2 * VLAN_HLEN);
+			break;
+
+		default:
+			break;
+		}
+		if (new_mtu > max_mtu) {
+			netdev_warn(netdev, "Maximum MTU supported is %d\n",
+				    max_mtu);
+			return -EINVAL;
+		}
+	}
+	return bgx_port_change_mtu(netdev, new_mtu);
+}
+
+static int octeon3_eth_common_ndo_init(struct net_device *netdev,
+				       int extra_skip)
+{
+	int aura, base_rx_grp[MAX_RX_CONTEXTS], dq, i, pki_chan, r;
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	struct octeon3_ethernet_node *oen = octeon3_eth_node + priv->node;
+
+	netif_carrier_off(netdev);
+
+	netdev->features |=
+#ifndef BROKEN_SIMULATOR_CSUM
+		NETIF_F_IP_CSUM |
+		NETIF_F_IPV6_CSUM |
+#endif
+		NETIF_F_SG |
+		NETIF_F_FRAGLIST |
+		NETIF_F_RXCSUM |
+		NETIF_F_LLTX;
+
+	if (!OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+		netdev->features |= NETIF_F_SCTP_CRC;
+
+	netdev->features |= NETIF_F_TSO | NETIF_F_TSO6;
+
+	/* Set user changeable settings */
+	netdev->hw_features = netdev->features;
+
+	priv->rx_buf_count = num_packet_buffers;
+
+	pki_chan = get_pki_chan(priv->node, priv->interface, priv->index);
+
+	dq = octeon3_pko_interface_init(priv->node, priv->interface,
+					priv->index, priv->mac_type, pki_chan);
+	if (dq < 0) {
+		dev_err(netdev->dev.parent, "Failed to initialize pko\n");
+		return -ENODEV;
+	}
+
+	r = octeon3_pko_activate_dq(priv->node, dq, 1);
+	if (r < 0) {
+		dev_err(netdev->dev.parent, "Failed to activate dq\n");
+		return -ENODEV;
+	}
+
+	priv->pko_queue = dq;
+	octeon_fpa3_aura_init(priv->node, oen->pki_packet_pool, -1, &aura,
+			      num_packet_buffers, num_packet_buffers * 2);
+	priv->pki_aura = aura;
+	aura2bufs_needed[priv->node][priv->pki_aura] = &priv->buffers_needed;
+
+	r = octeon3_sso_alloc_groups(priv->node, base_rx_grp, rx_contexts, -1);
+	if (r) {
+		dev_err(netdev->dev.parent, "Failed to allocate SSO groups\n");
+		return -ENODEV;
+	}
+	for (i = 0; i < rx_contexts; i++) {
+		priv->rx_cxt[i].rx_grp = base_rx_grp[i];
+		priv->rx_cxt[i].parent = priv;
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			octeon3_sso_pass1_limit(priv->node,
+						priv->rx_cxt[i].rx_grp);
+	}
+	priv->num_rx_cxt = rx_contexts;
+
+	priv->tx_complete_grp = oen->tx_complete_grp;
+	dev_info(netdev->dev.parent,
+		 "rx sso grp:%d..%d aura:%d pknd:%d pko_queue:%d\n",
+		 *base_rx_grp, *(base_rx_grp + priv->num_rx_cxt - 1),
+		 priv->pki_aura, priv->pknd, priv->pko_queue);
+
+	octeon3_pki_port_init(priv->node, priv->pki_aura, *base_rx_grp,
+			      extra_skip, (packet_buffer_size - 128),
+			      priv->pknd, priv->num_rx_cxt);
+
+	priv->last_packets = 0;
+	priv->last_octets = 0;
+	priv->last_dropped = 0;
+
+	octeon3_napi_init_node(priv->node, netdev);
+
+	/* Register ethtool methods */
+	netdev->ethtool_ops = &octeon3_ethtool_ops;
+
+	return 0;
+}
+
+static int octeon3_eth_bgx_ndo_init(struct net_device *netdev)
+{
+	struct octeon3_ethernet	*priv = netdev_priv(netdev);
+	const u8 *mac;
+	int r;
+
+	priv->pknd = bgx_port_get_pknd(priv->node, priv->interface,
+				       priv->index);
+	octeon3_eth_common_ndo_init(netdev, 0);
+
+	/* Padding and FCS are done in BGX */
+	r = octeon3_pko_set_mac_options(priv->node, priv->interface,
+					priv->index, priv->mac_type, false,
+					false, 0);
+	if (r)
+		return r;
+
+	mac = bgx_port_get_mac(netdev);
+	if (mac && is_valid_ether_addr(mac)) {
+		memcpy(netdev->dev_addr, mac, ETH_ALEN);
+		netdev->addr_assign_type &= ~NET_ADDR_RANDOM;
+	} else {
+		eth_hw_addr_random(netdev);
+	}
+
+	bgx_port_set_rx_filtering(netdev);
+	octeon3_eth_ndo_change_mtu(netdev, netdev->mtu);
+
+	return 0;
+}
+
+static void octeon3_eth_ndo_uninit(struct net_device *netdev)
+{
+	struct octeon3_ethernet	*priv = netdev_priv(netdev);
+	int grp[MAX_RX_CONTEXTS], i;
+
+	/* Shutdown pki for this interface */
+	octeon3_pki_port_shutdown(priv->node, priv->pknd);
+	octeon_fpa3_release_aura(priv->node, priv->pki_aura);
+	aura2bufs_needed[priv->node][priv->pki_aura] = NULL;
+
+	/* Shutdown pko for this interface */
+	octeon3_pko_interface_uninit(priv->node, &priv->pko_queue, 1);
+
+	/* Free the receive contexts sso groups */
+	for (i = 0; i < rx_contexts; i++)
+		grp[i] = priv->rx_cxt[i].rx_grp;
+	octeon3_sso_free_groups(priv->node, grp, rx_contexts);
+}
+
+static irqreturn_t octeon3_eth_rx_handler(int irq, void *info)
+{
+	struct octeon3_rx *rx = info;
+
+	/* Disarm the irq. */
+	octeon3_sso_irq_set(rx->parent->node, rx->rx_grp, false);
+
+	napi_schedule(&rx->napiw->napi);
+	return IRQ_HANDLED;
+}
+
+static int octeon3_eth_common_ndo_open(struct net_device *netdev)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	struct octeon3_rx *rx;
+	int i, idx, r;
+
+	for (i = 0; i < priv->num_rx_cxt; i++) {
+		unsigned int sso_intsn;
+		int cpu;
+
+		rx = priv->rx_cxt + i;
+		sso_intsn = SSO_IRQ_START | rx->rx_grp;
+
+		spin_lock_init(&rx->napi_idx_lock);
+
+		rx->rx_irq = irq_create_mapping(NULL, sso_intsn);
+		if (!rx->rx_irq) {
+			netdev_err(netdev, "ERROR: Couldn't map hwirq: %x\n",
+				   sso_intsn);
+			r = -EINVAL;
+			goto err1;
+		}
+		r = request_irq(rx->rx_irq, octeon3_eth_rx_handler,
+				IRQ_TYPE_EDGE_RISING, netdev_name(netdev), rx);
+		if (r) {
+			netdev_err(netdev, "ERROR: Couldn't request irq: %d\n",
+				   rx->rx_irq);
+			r = -ENOMEM;
+			goto err2;
+		}
+
+		octeon3_eth_gen_affinity(priv->node, &rx->rx_affinity_hint);
+		irq_set_affinity_hint(rx->rx_irq, &rx->rx_affinity_hint);
+
+		/* Allocate a napi index for this receive context */
+		bitmap_zero(priv->rx_cxt[i].napi_idx_bitmap, MAX_CORES);
+		idx = find_first_zero_bit(priv->rx_cxt[i].napi_idx_bitmap,
+					  MAX_CORES);
+		if (idx >= MAX_CORES) {
+			netdev_err(netdev, "ERROR: Couldn't get napi index\n");
+			r = -ENOMEM;
+			goto err3;
+		}
+		bitmap_set(priv->rx_cxt[i].napi_idx_bitmap, idx, 1);
+		cpu = cpumask_first(&rx->rx_affinity_hint);
+
+		priv->rx_cxt[i].napiw = octeon3_napi_alloc(&priv->rx_cxt[i],
+							   idx, cpu);
+		if (!priv->rx_cxt[i].napiw) {
+			r = -ENOMEM;
+			goto err4;
+		}
+
+		/* Arm the irq. */
+		octeon3_sso_irq_set(priv->node, rx->rx_grp, true);
+	}
+	octeon3_eth_replenish_rx(priv, priv->rx_buf_count);
+
+	return 0;
+
+err4:
+	bitmap_clear(priv->rx_cxt[i].napi_idx_bitmap, idx, 1);
+err3:
+	irq_set_affinity_hint(rx->rx_irq, NULL);
+	free_irq(rx->rx_irq, rx);
+err2:
+	irq_dispose_mapping(rx->rx_irq);
+err1:
+	for (i--; i >= 0; i--) {
+		rx = priv->rx_cxt + i;
+		irq_set_affinity_hint(rx->rx_irq, NULL);
+		free_irq(rx->rx_irq, rx);
+		irq_dispose_mapping(rx->rx_irq);
+		octeon3_rm_napi_from_cxt(priv->node, priv->rx_cxt[i].napiw);
+		priv->rx_cxt[i].napiw = NULL;
+	}
+
+	return r;
+}
+
+static int octeon3_eth_bgx_ndo_open(struct net_device *netdev)
+{
+	int rc;
+
+	rc = octeon3_eth_common_ndo_open(netdev);
+	if (rc == 0)
+		rc = bgx_port_enable(netdev);
+
+	return rc;
+}
+
+static int octeon3_eth_common_ndo_stop(struct net_device *netdev)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	struct octeon3_rx *rx;
+	struct sk_buff *skb;
+	void **w;
+	int i;
+
+	/* Allow enough time for in-flight ingress packets to be drained */
+	msleep(20);
+
+	/* Wait until sso has no more work for this interface */
+	for (i = 0; i < priv->num_rx_cxt; i++) {
+		rx = priv->rx_cxt + i;
+		while (oct_csr_read(SSO_GRP_AQ_CNT(priv->node, rx->rx_grp)))
+			msleep(20);
+	}
+
+	/* Free the irq and napi context for each rx context */
+	for (i = 0; i < priv->num_rx_cxt; i++) {
+		rx = priv->rx_cxt + i;
+		octeon3_sso_irq_set(priv->node, rx->rx_grp, false);
+		irq_set_affinity_hint(rx->rx_irq, NULL);
+		free_irq(rx->rx_irq, rx);
+		irq_dispose_mapping(rx->rx_irq);
+		rx->rx_irq = 0;
+
+		octeon3_rm_napi_from_cxt(priv->node, rx->napiw);
+		rx->napiw = NULL;
+		WARN_ON(!bitmap_empty(rx->napi_idx_bitmap, MAX_CORES));
+	}
+
+	/* Free the packet buffers */
+	for (;;) {
+		w = octeon_fpa3_alloc(priv->node, priv->pki_aura);
+		if (!w)
+			break;
+		skb = w[0];
+		dev_kfree_skb(skb);
+	}
+
+	return 0;
+}
+
+static int octeon3_eth_bgx_ndo_stop(struct net_device *netdev)
+{
+	int r;
+
+	r = bgx_port_disable(netdev);
+	if (r)
+		return r;
+
+	return octeon3_eth_common_ndo_stop(netdev);
+}
+
+static inline u64 build_pko_send_hdr_desc(struct sk_buff *skb, int gaura)
+{
+	u64 checksum_alg, send_hdr = 0;
+	u8 l4_hdr = 0;
+
+	/* See PKO_SEND_HDR_S in the HRM for the send header descriptor
+	 * format.
+	 */
+#ifdef __LITTLE_ENDIAN
+	send_hdr |= PKO_SEND_HDR_LE;
+#endif
+
+	if (!OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+		/* Don't allocate to L2 */
+		send_hdr |= PKO_SEND_HDR_N2;
+	}
+
+	/* Don't automatically free to FPA */
+	send_hdr |= PKO_SEND_HDR_DF;
+
+	send_hdr |= skb->len;
+	send_hdr |= (u64)gaura << PKO_SEND_HDR_AURA_SHIFT;
+
+	if (skb->ip_summed != CHECKSUM_NONE &&
+	    skb->ip_summed != CHECKSUM_UNNECESSARY) {
+#ifndef BROKEN_SIMULATOR_CSUM
+		switch (skb->protocol) {
+		case htons(ETH_P_IP):
+			send_hdr |= ETH_HLEN << PKO_SEND_HDR_L3PTR_SHIFT;
+			send_hdr |= PKO_SEND_HDR_CKL3;
+			l4_hdr = ip_hdr(skb)->protocol;
+			send_hdr |= (ETH_HLEN + (4 * ip_hdr(skb)->ihl)) <<
+				    PKO_SEND_HDR_L4PTR_SHIFT;
+			break;
+
+		case htons(ETH_P_IPV6):
+			l4_hdr = ipv6_hdr(skb)->nexthdr;
+			send_hdr |= ETH_HLEN << PKO_SEND_HDR_L3PTR_SHIFT;
+			break;
+
+		default:
+			break;
+		}
+#endif
+
+		checksum_alg = 1; /* UDP == 1 */
+		switch (l4_hdr) {
+		case IPPROTO_SCTP:
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+				break;
+			checksum_alg++; /* SCTP == 3 */
+			/* Fall through */
+		case IPPROTO_TCP: /* TCP == 2 */
+			checksum_alg++;
+			/* Fall through */
+		case IPPROTO_UDP:
+			if (skb_transport_header_was_set(skb)) {
+				int l4ptr = skb_transport_header(skb) -
+					skb->data;
+				send_hdr &= ~PKO_SEND_HDR_L4PTR_MASK;
+				send_hdr |= l4ptr << PKO_SEND_HDR_L4PTR_SHIFT;
+				send_hdr |= checksum_alg <<
+					    PKO_SEND_HDR_CKL4_SHIFT;
+			}
+			break;
+
+		default:
+			break;
+		}
+	}
+
+	return send_hdr;
+}
+
+static inline u64 build_pko_send_ext_desc(struct sk_buff *skb)
+{
+	u64 send_ext;
+
+	/* See PKO_SEND_EXT_S in the HRM for the send extended descriptor
+	 * format.
+	 */
+	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+	send_ext = (u64)PKO_SENDSUBDC_EXT << PKO_SEND_SUBDC4_SHIFT;
+	send_ext |= (u64)PKO_REDALG_E_SEND << PKO_SEND_EXT_RA_SHIFT;
+	send_ext |= PKO_SEND_EXT_TSTMP;
+	send_ext |= ETH_HLEN << PKO_SEND_EXT_MARKPTR_SHIFT;
+
+	return send_ext;
+}
+
+static inline u64 build_pko_send_tso(struct sk_buff *skb, uint mtu)
+{
+	u64 send_tso;
+
+	/* See PKO_SEND_TSO_S in the HRM for the send tso descriptor format */
+	send_tso = 12ull << PKO_SEND_TSO_L2LEN_SHIFT;
+	send_tso |= (u64)PKO_SENDSUBDC_TSO << PKO_SEND_SUBDC4_SHIFT;
+	send_tso |= (skb_transport_offset(skb) + tcp_hdrlen(skb)) <<
+		    PKO_SEND_TSO_SB_SHIFT;
+	send_tso |= (mtu + ETH_HLEN) << PKO_SEND_TSO_MSS_SHIFT;
+
+	return send_tso;
+}
+
+static inline u64 build_pko_send_mem_sub(u64 addr)
+{
+	u64 send_mem;
+
+	/* See PKO_SEND_MEM_S in the HRM for the send mem descriptor format */
+	send_mem = (u64)PKO_SENDSUBDC_MEM << PKO_SEND_SUBDC4_SHIFT;
+	send_mem |= (u64)PKO_MEMDSZ_B64 << PKO_SEND_MEM_DSZ_SHIFT;
+	send_mem |= (u64)PKO_MEMALG_SUB << PKO_SEND_MEM_ALG_SHIFT;
+	send_mem |= 1ull << PKO_SEND_MEM_OFFSET_SHIFT;
+	send_mem |= addr;
+
+	return send_mem;
+}
+
+static inline u64 build_pko_send_mem_ts(u64 addr)
+{
+	u64 send_mem;
+
+	/* See PKO_SEND_MEM_S in the HRM for the send mem descriptor format */
+	send_mem = 1ull << PKO_SEND_MEM_WMEM_SHIFT;
+	send_mem |= (u64)PKO_SENDSUBDC_MEM << PKO_SEND_SUBDC4_SHIFT;
+	send_mem |= (u64)PKO_MEMDSZ_B64 << PKO_SEND_MEM_DSZ_SHIFT;
+	send_mem |= (u64)PKO_MEMALG_SETTSTMP << PKO_SEND_MEM_ALG_SHIFT;
+	send_mem |= addr;
+
+	return send_mem;
+}
+
+static inline u64 build_pko_send_free(u64 addr)
+{
+	u64 send_free;
+
+	/* See PKO_SEND_FREE_S in the HRM for the send free descriptor format */
+	send_free = (u64)PKO_SENDSUBDC_FREE << PKO_SEND_SUBDC4_SHIFT;
+	send_free |= addr;
+
+	return send_free;
+}
+
+static inline u64 build_pko_send_work(int grp, u64 addr)
+{
+	u64 send_work;
+
+	/* See PKO_SEND_WORK_S in the HRM for the send work descriptor format */
+	send_work = (u64)PKO_SENDSUBDC_WORK << PKO_SEND_SUBDC4_SHIFT;
+	send_work |= (u64)grp << PKO_SEND_WORK_GRP_SHIFT;
+	send_work |= SSO_TAG_TYPE_UNTAGGED << PKO_SEND_WORK_TT_SHIFT;
+	send_work |= addr;
+
+	return send_work;
+}
+
+static int octeon3_eth_ndo_start_xmit(struct sk_buff *skb,
+				      struct net_device *netdev)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	u64 aq_cnt = 0, *dma_addr, head_len, lmtdma_data;
+	u64 pko_send_desc, scr_off = LMTDMA_SCR_OFFSET;
+	int frag_count, gaura = 0, grp, i;
+	struct octeon3_ethernet_node *oen;
+	struct sk_buff *skb_tmp;
+	unsigned int mss;
+	long backlog;
+	void **work;
+
+	frag_count = 0;
+	if (skb_has_frag_list(skb))
+		skb_walk_frags(skb, skb_tmp)
+			frag_count++;
+
+	/* Drop the packet if pko or sso are not keeping up */
+	oen = octeon3_eth_node + priv->node;
+	aq_cnt = oct_csr_read(SSO_GRP_AQ_CNT(oen->node, oen->tx_complete_grp));
+	aq_cnt &= SSO_GRP_AQ_CNT_AQ_CNT_MASK;
+	backlog = atomic64_inc_return(&priv->tx_backlog);
+	if (unlikely(backlog > MAX_TX_QUEUE_DEPTH || aq_cnt > 100000)) {
+		if (use_tx_queues) {
+			netif_stop_queue(netdev);
+		} else {
+			atomic64_dec(&priv->tx_backlog);
+			goto skip_xmit;
+		}
+	}
+
+	/* We have space for 11 segment pointers. If there would be
+	 * more than that, we must linearize.  The count is: 1 (base
+	 * SKB) + frag_count + nr_frags.
+	 */
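+	/* For example, an skb with 8 page frags and 3 frag-list skbs would
+	 * need 1 + 3 + 8 = 12 pointers, which exceeds 11 and forces the
+	 * linearization below.
+	 */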
+	if (unlikely(skb_shinfo(skb)->nr_frags + frag_count > 10)) {
+		if (unlikely(__skb_linearize(skb)))
+			goto skip_xmit;
+		frag_count = 0;
+	}
+
+	work = (void **)skb->cb;
+	work[0] = netdev;
+	work[1] = NULL;
+
+	/* Adjust the port statistics. */
+	atomic64_inc(&priv->tx_packets);
+	atomic64_add(skb->len, &priv->tx_octets);
+
+	/* Make sure packet data writes are committed before
+	 * submitting the command below
+	 */
+	wmb();
+
+	/* Build the pko command */
+	pko_send_desc = build_pko_send_hdr_desc(skb, gaura);
+	preempt_disable();
+	scratch_write64(scr_off, pko_send_desc);
+	scr_off += sizeof(pko_send_desc);
+
+	/* Request packet to be ptp timestamped */
+	if ((unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) &&
+	    unlikely(priv->tx_timestamp_hw)) {
+		pko_send_desc = build_pko_send_ext_desc(skb);
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+
+	/* Add the tso descriptor if needed */
+	mss = skb_shinfo(skb)->gso_size;
+	if (unlikely(mss)) {
+		pko_send_desc = build_pko_send_tso(skb, netdev->mtu);
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+
+	/* Add a gather descriptor for each segment. See PKO_SEND_GATHER_S for
+	 * the send gather descriptor format.
+	 */
+	pko_send_desc = 0;
+	pko_send_desc |= (u64)PKO_SENDSUBDC_GATHER <<
+			 PKO_SEND_GATHER_SUBDC_SHIFT;
+	head_len = skb_headlen(skb);
+	if (head_len > 0) {
+		pko_send_desc |= head_len << PKO_SEND_GATHER_SIZE_SHIFT;
+		pko_send_desc |= virt_to_phys(skb->data);
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+	for (i = 1; i <= skb_shinfo(skb)->nr_frags; i++) {
+		struct skb_frag_struct *fs = skb_shinfo(skb)->frags + i - 1;
+
+		pko_send_desc &= ~(PKO_SEND_GATHER_SIZE_MASK |
+				   PKO_SEND_GATHER_ADDR_MASK);
+		pko_send_desc |= (u64)fs->size << PKO_SEND_GATHER_SIZE_SHIFT;
+		pko_send_desc |= virt_to_phys((u8 *)page_address(fs->page.p) +
+			fs->page_offset);
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+	skb_walk_frags(skb, skb_tmp) {
+		pko_send_desc &= ~(PKO_SEND_GATHER_SIZE_MASK |
+				   PKO_SEND_GATHER_ADDR_MASK);
+		pko_send_desc |= (u64)skb_tmp->len <<
+				 PKO_SEND_GATHER_SIZE_SHIFT;
+		pko_send_desc |= virt_to_phys(skb_tmp->data);
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+
+	/* Subtract 1 from the tx_backlog. */
+	pko_send_desc = build_pko_send_mem_sub(virt_to_phys(&priv->tx_backlog));
+	scratch_write64(scr_off, pko_send_desc);
+	scr_off += sizeof(pko_send_desc);
+
+	/* Write the ptp timestamp in the skb itself */
+	if ((unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) &&
+	    unlikely(priv->tx_timestamp_hw)) {
+		pko_send_desc = build_pko_send_mem_ts(virt_to_phys(&work[1]));
+		scratch_write64(scr_off, pko_send_desc);
+		scr_off += sizeof(pko_send_desc);
+	}
+
+	/* Send work when finished with the packet. */
+	grp = octeon3_eth_lgrp_to_ggrp(priv->node, priv->tx_complete_grp);
+	pko_send_desc = build_pko_send_work(grp, virt_to_phys(work));
+	scratch_write64(scr_off, pko_send_desc);
+	scr_off += sizeof(pko_send_desc);
+
+	/* See PKO_SEND_DMA_S in the HRM for the lmtdma data format */
+	lmtdma_data = (u64)(LMTDMA_SCR_OFFSET >> PKO_LMTDMA_SCRADDR_SHIFT) <<
+		      PKO_QUERY_DMA_SCRADDR_SHIFT;
+	if (wait_pko_response)
+		lmtdma_data |= 1ull << PKO_QUERY_DMA_RTNLEN_SHIFT;
+	lmtdma_data |= 0x51ull << PKO_QUERY_DMA_DID_SHIFT;
+	lmtdma_data |= (u64)priv->node << PKO_QUERY_DMA_NODE_SHIFT;
+	lmtdma_data |= priv->pko_queue << PKO_QUERY_DMA_DQ_SHIFT;
+
+	dma_addr = (u64 *)(LMTDMA_ORDERED_IO_ADDR | ((scr_off & 0x78) - 8));
+	*dma_addr = lmtdma_data;
+
+	preempt_enable();
+
+	if (wait_pko_response) {
+		u64 query_rtn;
+
+		CVMX_SYNCIOBDMA;
+
+		/* See PKO_QUERY_RTN_S in the HRM for the return format */
+		query_rtn = scratch_read64(LMTDMA_SCR_OFFSET);
+		query_rtn >>= PKO_QUERY_RTN_DQSTATUS_SHIFT;
+		if (unlikely(query_rtn != PKO_DQSTATUS_PASS)) {
+			netdev_err(netdev, "PKO enqueue failed %llx\n",
+				   (unsigned long long)query_rtn);
+			dev_kfree_skb_any(skb);
+		}
+	}
+
+	return NETDEV_TX_OK;
+skip_xmit:
+	atomic64_inc(&priv->tx_dropped);
+	dev_kfree_skb_any(skb);
+	return NETDEV_TX_OK;
+}
+
+static void octeon3_eth_ndo_get_stats64(struct net_device *netdev,
+					struct rtnl_link_stats64 *s)
+{
+	u64 delta_dropped, delta_octets, delta_packets, dropped;
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	u64 octets, packets;
+
+	spin_lock(&priv->stat_lock);
+
+	octeon3_pki_get_stats(priv->node, priv->pknd, &packets, &octets,
+			      &dropped);
+
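+	/* The hardware counters are assumed to be 48 bits wide, so deltas
+	 * are computed modulo 2^48 to survive counter wrap.
+	 */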
+	delta_packets = (packets - priv->last_packets) & GENMASK_ULL(47, 0);
+	delta_octets = (octets - priv->last_octets) & GENMASK_ULL(47, 0);
+	delta_dropped = (dropped - priv->last_dropped) & GENMASK_ULL(47, 0);
+
+	priv->last_packets = packets;
+	priv->last_octets = octets;
+	priv->last_dropped = dropped;
+
+	spin_unlock(&priv->stat_lock);
+
+	atomic64_add(delta_packets, &priv->rx_packets);
+	atomic64_add(delta_octets, &priv->rx_octets);
+	atomic64_add(delta_dropped, &priv->rx_dropped);
+
+	s->rx_packets = atomic64_read(&priv->rx_packets);
+	s->rx_bytes = atomic64_read(&priv->rx_octets);
+	s->rx_dropped = atomic64_read(&priv->rx_dropped);
+	s->rx_errors = atomic64_read(&priv->rx_errors);
+	s->rx_length_errors = atomic64_read(&priv->rx_length_errors);
+	s->rx_crc_errors = atomic64_read(&priv->rx_crc_errors);
+
+	s->tx_packets = atomic64_read(&priv->tx_packets);
+	s->tx_bytes = atomic64_read(&priv->tx_octets);
+	s->tx_dropped = atomic64_read(&priv->tx_dropped);
+}
+
+static int octeon3_eth_set_mac_address(struct net_device *netdev, void *addr)
+{
+	int r = eth_mac_addr(netdev, addr);
+
+	if (r)
+		return r;
+
+	bgx_port_set_rx_filtering(netdev);
+
+	return 0;
+}
+
+static u64 octeon3_cyclecounter_read(const struct cyclecounter *cc)
+{
+	struct octeon3_ethernet *priv;
+	u64 count;
+
+	priv = container_of(cc, struct octeon3_ethernet, cc);
+	count = oct_csr_read(MIO_PTP_CLOCK_HI(priv->node));
+	return count;
+}
+
+static int octeon3_bgx_hwtstamp(struct net_device *netdev, int en)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	u64 data;
+
+	switch (bgx_port_get_mode(priv->node, priv->interface, priv->index)) {
+	case PORT_MODE_RGMII:
+	case PORT_MODE_SGMII:
+		data = oct_csr_read(BGX_GMP_GMI_RX_FRM_CTL(priv->node,
+							   priv->interface,
+							   priv->index));
+		if (en)
+			data |= BGX_GMP_GMI_RX_FRM_CTL_PTP_MODE;
+		else
+			data &= ~BGX_GMP_GMI_RX_FRM_CTL_PTP_MODE;
+		oct_csr_write(data, BGX_GMP_GMI_RX_FRM_CTL(priv->node,
+							   priv->interface,
+							   priv->index));
+		break;
+
+	case PORT_MODE_XAUI:
+	case PORT_MODE_RXAUI:
+	case PORT_MODE_10G_KR:
+	case PORT_MODE_XLAUI:
+	case PORT_MODE_40G_KR4:
+	case PORT_MODE_XFI:
+		data = oct_csr_read(BGX_SMU_RX_FRM_CTL(priv->node,
+						       priv->interface,
+						       priv->index));
+		if (en)
+			data |= BGX_GMP_GMI_RX_FRM_CTL_PTP_MODE;
+		else
+			data &= ~BGX_GMP_GMI_RX_FRM_CTL_PTP_MODE;
+		oct_csr_write(data, BGX_SMU_RX_FRM_CTL(priv->node,
+						       priv->interface,
+						       priv->index));
+		break;
+
+	default:
+		/* No timestamp support */
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static int octeon3_pki_hwtstamp(struct net_device *netdev, int en)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	int skip = en ? 8 : 0;
+
+	octeon3_pki_set_ptp_skip(priv->node, priv->pknd, skip);
+
+	return 0;
+}
+
+static int octeon3_ioctl_hwtstamp(struct net_device *netdev, struct ifreq *rq,
+				  int cmd)
+{
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	struct hwtstamp_config config;
+	u64 data;
+	int en;
+
+	/* The PTP block should be enabled */
+	data = oct_csr_read(MIO_PTP_CLOCK_CFG(priv->node));
+	if (!(data & MIO_PTP_CLOCK_CFG_PTP_EN)) {
+		netdev_err(netdev, "Error: PTP clock not enabled\n");
+		return -EOPNOTSUPP;
+	}
+
+	if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
+		return -EFAULT;
+
+	if (config.flags) /* reserved for future extensions */
+		return -EINVAL;
+
+	switch (config.tx_type) {
+	case HWTSTAMP_TX_OFF:
+		priv->tx_timestamp_hw = 0;
+		break;
+	case HWTSTAMP_TX_ON:
+		priv->tx_timestamp_hw = 1;
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	switch (config.rx_filter) {
+	case HWTSTAMP_FILTER_NONE:
+		priv->rx_timestamp_hw = 0;
+		en = 0;
+		break;
+	case HWTSTAMP_FILTER_ALL:
+	case HWTSTAMP_FILTER_SOME:
+	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+		priv->rx_timestamp_hw = 1;
+		en = 1;
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	octeon3_bgx_hwtstamp(netdev, en);
+	octeon3_pki_hwtstamp(netdev, en);
+
+	priv->cc.read = octeon3_cyclecounter_read;
+	priv->cc.mask = CYCLECOUNTER_MASK(64);
+	/* Ptp counter is always in nsec */
+	priv->cc.mult = 1;
+	priv->cc.shift = 0;
+	timecounter_init(&priv->tc, &priv->cc, ktime_to_ns(ktime_get_real()));
+
+	return 0;
+}
+
+static int octeon3_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+	struct octeon3_ethernet	*priv;
+	int neg_ppb = 0;
+	u64 comp, diff;
+
+	priv = container_of(ptp, struct octeon3_ethernet, ptp_info);
+
+	if (ppb < 0) {
+		ppb = -ppb;
+		neg_ppb = 1;
+	}
+
+	/* The parts per billion (ppb) value is a delta from the base
+	 * frequency.
+	 */
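+	/* For example, a request of +1000 ppb scales the base compensation
+	 * value up by one part per million (comp += comp / 1000000).
+	 */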
+	comp = (NSEC_PER_SEC << 32) / octeon_get_io_clock_rate();
+
+	diff = comp;
+	diff *= ppb;
+	diff = div_u64(diff, 1000000000ULL);
+
+	comp = neg_ppb ? comp - diff : comp + diff;
+
+	oct_csr_write(comp, MIO_PTP_CLOCK_COMP(priv->node));
+
+	return 0;
+}
+
+static int octeon3_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+	struct octeon3_ethernet	*priv;
+	unsigned long flags;
+	s64 now;
+
+	priv = container_of(ptp, struct octeon3_ethernet, ptp_info);
+
+	spin_lock_irqsave(&priv->ptp_lock, flags);
+	now = timecounter_read(&priv->tc);
+	now += delta;
+	timecounter_init(&priv->tc, &priv->cc, now);
+	spin_unlock_irqrestore(&priv->ptp_lock, flags);
+
+	return 0;
+}
+
+static int octeon3_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
+{
+	struct octeon3_ethernet	*priv;
+	unsigned long flags;
+	u32 remainder;
+	u64 ns;
+
+	priv = container_of(ptp, struct octeon3_ethernet, ptp_info);
+
+	spin_lock_irqsave(&priv->ptp_lock, flags);
+	ns = timecounter_read(&priv->tc);
+	spin_unlock_irqrestore(&priv->ptp_lock, flags);
+	ts->tv_sec = div_u64_rem(ns, 1000000000ULL, &remainder);
+	ts->tv_nsec = remainder;
+
+	return 0;
+}
+
+static int octeon3_settime(struct ptp_clock_info *ptp,
+			   const struct timespec64 *ts)
+{
+	struct octeon3_ethernet	*priv;
+	unsigned long flags;
+	u64 ns;
+
+	priv = container_of(ptp, struct octeon3_ethernet, ptp_info);
+	ns = timespec64_to_ns(ts);
+
+	spin_lock_irqsave(&priv->ptp_lock, flags);
+	timecounter_init(&priv->tc, &priv->cc, ns);
+	spin_unlock_irqrestore(&priv->ptp_lock, flags);
+
+	return 0;
+}
+
+static int octeon3_enable(struct ptp_clock_info *ptp,
+			  struct ptp_clock_request *rq, int on)
+{
+	return -EOPNOTSUPP;
+}
+
+static int octeon3_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+	int rc;
+
+	switch (cmd) {
+	case SIOCSHWTSTAMP:
+		rc = octeon3_ioctl_hwtstamp(netdev, ifr, cmd);
+		break;
+
+	default:
+		rc = bgx_port_do_ioctl(netdev, ifr, cmd);
+		break;
+	}
+
+	return rc;
+}
+
+static const struct net_device_ops octeon3_eth_netdev_ops = {
+	.ndo_init		= octeon3_eth_bgx_ndo_init,
+	.ndo_uninit		= octeon3_eth_ndo_uninit,
+	.ndo_open		= octeon3_eth_bgx_ndo_open,
+	.ndo_stop		= octeon3_eth_bgx_ndo_stop,
+	.ndo_start_xmit		= octeon3_eth_ndo_start_xmit,
+	.ndo_get_stats64	= octeon3_eth_ndo_get_stats64,
+	.ndo_set_rx_mode	= bgx_port_set_rx_filtering,
+	.ndo_set_mac_address	= octeon3_eth_set_mac_address,
+	.ndo_change_mtu		= octeon3_eth_ndo_change_mtu,
+	.ndo_do_ioctl		= octeon3_ioctl,
+};
+
+static int octeon3_eth_probe(struct platform_device *pdev)
+{
+	struct octeon3_ethernet *priv;
+	struct net_device *netdev;
+	int r;
+
+	struct mac_platform_data *pd = dev_get_platdata(&pdev->dev);
+
+	r = octeon3_eth_global_init(pd->numa_node, pdev);
+	if (r)
+		return r;
+
+	dev_info(&pdev->dev, "Probing %d-%d:%d\n", pd->numa_node, pd->interface,
+		 pd->port);
+	netdev = alloc_etherdev(sizeof(struct octeon3_ethernet));
+	if (!netdev) {
+		dev_err(&pdev->dev, "Failed to allocate ethernet device\n");
+		return -ENOMEM;
+	}
+
+	/* Using transmit queues degrades performance significantly */
+	if (!use_tx_queues)
+		netdev->tx_queue_len = 0;
+
+	SET_NETDEV_DEV(netdev, &pdev->dev);
+	dev_set_drvdata(&pdev->dev, netdev);
+
+	if (pd->mac_type == BGX_MAC)
+		bgx_port_set_netdev(pdev->dev.parent, netdev);
+	priv = netdev_priv(netdev);
+	priv->netdev = netdev;
+	priv->mac_type = pd->mac_type;
+	INIT_LIST_HEAD(&priv->list);
+	priv->node = pd->numa_node;
+
+	mutex_lock(&octeon3_eth_node[priv->node].device_list_lock);
+	list_add_tail_rcu(&priv->list,
+			  &octeon3_eth_node[priv->node].device_list);
+	mutex_unlock(&octeon3_eth_node[priv->node].device_list_lock);
+
+	priv->index = pd->port;
+	priv->interface = pd->interface;
+	spin_lock_init(&priv->stat_lock);
+
+	if (pd->src_type == XCV)
+		snprintf(netdev->name, IFNAMSIZ, "rgmii%d", pd->port);
+
+	if (priv->mac_type == BGX_MAC)
+		netdev->netdev_ops = &octeon3_eth_netdev_ops;
+
+	if (register_netdev(netdev) < 0) {
+		dev_err(&pdev->dev, "Failed to register ethernet device\n");
+		mutex_lock(&octeon3_eth_node[priv->node].device_list_lock);
+		list_del_rcu(&priv->list);
+		mutex_unlock(&octeon3_eth_node[priv->node].device_list_lock);
+		free_netdev(netdev);
+		return -ENODEV;
+	}
+
+	spin_lock_init(&priv->ptp_lock);
+	priv->ptp_info.owner = THIS_MODULE;
+	snprintf(priv->ptp_info.name, 16, "octeon3 ptp");
+	priv->ptp_info.max_adj = 250000000;
+	priv->ptp_info.n_alarm = 0;
+	priv->ptp_info.n_ext_ts = 0;
+	priv->ptp_info.n_per_out = 0;
+	priv->ptp_info.pps = 0;
+	priv->ptp_info.adjfreq = octeon3_adjfreq;
+	priv->ptp_info.adjtime = octeon3_adjtime;
+	priv->ptp_info.gettime64 = octeon3_gettime;
+	priv->ptp_info.settime64 = octeon3_settime;
+	priv->ptp_info.enable = octeon3_enable;
+	priv->ptp_clock = ptp_clock_register(&priv->ptp_info, &pdev->dev);
+
+	netdev_info(netdev, "Registered\n");
+	return 0;
+}
+
+/* octeon3_eth_global_exit - Free all the used resources and restore the
+ *			     hardware to the default state.
+ * @node: Node to free/reset.
+ *
+ * Returns 0 if successful.
+ * Returns <0 for error codes.
+ */
+static int octeon3_eth_global_exit(int node)
+{
+	struct octeon3_ethernet_node *oen = octeon3_eth_node + node;
+	int i;
+
+	/* Free the tx_complete irq */
+	octeon3_sso_irq_set(node, oen->tx_complete_grp, false);
+	irq_set_affinity_hint(oen->tx_irq, NULL);
+	free_irq(oen->tx_irq, oen);
+	irq_dispose_mapping(oen->tx_irq);
+	oen->tx_irq = 0;
+
+	/* Stop the worker threads */
+	for (i = 0; i < ARRAY_SIZE(oen->workers); i++)
+		kthread_stop(oen->workers[i].task);
+
+	/* Shutdown pki */
+	octeon3_pki_shutdown(node);
+	octeon_fpa3_release_pool(node, oen->pki_packet_pool);
+	kfree(oen->pki_packet_pool_stack);
+
+	/* Shutdown pko */
+	octeon3_pko_exit_global(node);
+	for (;;) {
+		void **w;
+
+		w = octeon_fpa3_alloc(node, oen->pko_aura);
+		if (!w)
+			break;
+		kmem_cache_free(octeon3_eth_sso_pko_cache, w);
+	}
+	octeon_fpa3_release_aura(node, oen->pko_aura);
+	octeon_fpa3_release_pool(node, oen->pko_pool);
+	kfree(oen->pko_pool_stack);
+
+	/* Shutdown sso */
+	octeon3_sso_shutdown(node, oen->sso_aura);
+	octeon3_sso_free_groups(node, &oen->tx_complete_grp, 1);
+	for (;;) {
+		void **w;
+
+		w = octeon_fpa3_alloc(node, oen->sso_aura);
+		if (!w)
+			break;
+		kmem_cache_free(octeon3_eth_sso_pko_cache, w);
+	}
+	octeon_fpa3_release_aura(node, oen->sso_aura);
+	octeon_fpa3_release_pool(node, oen->sso_pool);
+	kfree(oen->sso_pool_stack);
+
+	return 0;
+}
+
+static int octeon3_eth_remove(struct platform_device *pdev)
+{
+	struct mac_platform_data *pd = dev_get_platdata(&pdev->dev);
+	struct net_device *netdev = dev_get_drvdata(&pdev->dev);
+	struct octeon3_ethernet *priv = netdev_priv(netdev);
+	struct octeon3_ethernet_node *oen;
+	int node = priv->node;
+
+	oen = octeon3_eth_node + node;
+
+	ptp_clock_unregister(priv->ptp_clock);
+	unregister_netdev(netdev);
+	if (pd->mac_type == BGX_MAC)
+		bgx_port_set_netdev(pdev->dev.parent, NULL);
+	dev_set_drvdata(&pdev->dev, NULL);
+
+	/* Free all resources when there are no more devices */
+	mutex_lock(&octeon3_eth_init_mutex);
+	mutex_lock(&oen->device_list_lock);
+	list_del_rcu(&priv->list);
+	if (oen->init_done && list_empty(&oen->device_list)) {
+		int	i;
+
+		for (i = 0; i < MAX_NAPIS_PER_NODE; i++) {
+			napi_disable(&napi_wrapper[node][i].napi);
+			netif_napi_del(&napi_wrapper[node][i].napi);
+		}
+
+		oen->init_done = false;
+		oen->napi_init_done = false;
+		octeon3_eth_global_exit(node);
+	}
+
+	mutex_unlock(&oen->device_list_lock);
+	mutex_unlock(&octeon3_eth_init_mutex);
+	free_netdev(netdev);
+
+	return 0;
+}
+
+static void octeon3_eth_shutdown(struct platform_device *pdev)
+{
+	octeon3_eth_remove(pdev);
+}
+
+static struct platform_driver octeon3_eth_driver = {
+	.probe		= octeon3_eth_probe,
+	.remove		= octeon3_eth_remove,
+	.shutdown       = octeon3_eth_shutdown,
+	.driver		= {
+		.owner	= THIS_MODULE,
+		.name	= "ethernet-mac-pki",
+	},
+};
+
+static int __init octeon3_eth_init(void)
+{
+	if (rx_contexts <= 0)
+		rx_contexts = 1;
+	if (rx_contexts > MAX_RX_CONTEXTS)
+		rx_contexts = MAX_RX_CONTEXTS;
+
+	return platform_driver_register(&octeon3_eth_driver);
+}
+module_init(octeon3_eth_init);
+
+static void __exit octeon3_eth_exit(void)
+{
+	platform_driver_unregister(&octeon3_eth_driver);
+
+	/* Destroy the memory cache used by sso and pko */
+	kmem_cache_destroy(octeon3_eth_sso_pko_cache);
+}
+module_exit(octeon3_eth_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Cavium, Inc. <support@caviumnetworks.com>");
+MODULE_DESCRIPTION("Cavium, Inc. PKI/PKO Ethernet driver.");
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 09/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet building
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (7 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 08/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet core Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  2018-06-27 21:25 ` [PATCH v12 10/10] MAINTAINERS: Add entry for drivers/net/ethernet/cavium/octeon/octeon3-* Steven J. Hill
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: Carlos Munoz, Chandrakala Chavva, Steven J. Hill

From: Carlos Munoz <cmunoz@cavium.com>

Add the build and configuration files for the BGX Ethernet.

Signed-off-by: Carlos Munoz <cmunoz@cavium.com>
Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com>
---
 drivers/net/ethernet/cavium/Kconfig         | 22 +++++++++++++++++++++-
 drivers/net/ethernet/cavium/octeon/Makefile |  8 +++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/cavium/Kconfig b/drivers/net/ethernet/cavium/Kconfig
index 043e3c1..3b9709d 100644
--- a/drivers/net/ethernet/cavium/Kconfig
+++ b/drivers/net/ethernet/cavium/Kconfig
@@ -4,7 +4,7 @@
 
 config NET_VENDOR_CAVIUM
 	bool "Cavium ethernet drivers"
-	depends on PCI
+	depends on PCI || CAVIUM_OCTEON_SOC
 	default y
 	---help---
 	  Select this option if you want enable Cavium network support.
@@ -100,4 +100,24 @@ config LIQUIDIO_VF
 	  will be called liquidio_vf. MSI-X interrupt support is required
 	  for this driver to work correctly
 
+config OCTEON3_BGX_PORT
+	tristate "Cavium Octeon III BGX port support"
+	depends on CAVIUM_OCTEON_SOC
+	---help---
+	  This driver adds support for Cavium Octeon III BGX ports. BGX ports
+	  support SGMII, RGMII, XAUI, RXAUI, XLAUI, XFI, 10G-KR and 40G-KR4 modes.
+
+	  Say Y to use the management port on Octeon III boards or to use
+	  any other ethernet port.
+
+config OCTEON3_ETHERNET
+	tristate "Cavium OCTEON III PKI/PKO Ethernet support"
+	depends on CAVIUM_OCTEON_SOC
+	select OCTEON3_BGX_PORT
+	select OCTEON_FPA3
+	select FW_LOADER
+	---help---
+	  Support for 'BGX' Ethernet via the PKI/PKO units. cn70xx chips are
+	  not supported; use OCTEON_ETHERNET for those instead.
+
 endif # NET_VENDOR_CAVIUM
diff --git a/drivers/net/ethernet/cavium/octeon/Makefile b/drivers/net/ethernet/cavium/octeon/Makefile
index efa41c1..1939c84 100644
--- a/drivers/net/ethernet/cavium/octeon/Makefile
+++ b/drivers/net/ethernet/cavium/octeon/Makefile
@@ -1,5 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
 #
 # Makefile for the Cavium network device drivers.
 #
 
-obj-$(CONFIG_OCTEON_MGMT_ETHERNET)	+= octeon_mgmt.o
+obj-$(CONFIG_OCTEON_MGMT_ETHERNET) += octeon_mgmt.o
+obj-$(CONFIG_OCTEON3_BGX_PORT) += octeon3-bgx-nexus.o octeon3-bgx-port.o
+obj-$(CONFIG_OCTEON3_ETHERNET) += octeon3-ethernet.o
+
+octeon3-ethernet-objs += octeon3-core.o octeon3-pki.o octeon3-pko.o	\
+			 octeon3-sso.o
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v12 10/10] MAINTAINERS: Add entry for drivers/net/ethernet/cavium/octeon/octeon3-*
  2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
                   ` (8 preceding siblings ...)
  2018-06-27 21:25 ` [PATCH v12 09/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet building Steven J. Hill
@ 2018-06-27 21:25 ` Steven J. Hill
  9 siblings, 0 replies; 20+ messages in thread
From: Steven J. Hill @ 2018-06-27 21:25 UTC (permalink / raw)
  To: netdev; +Cc: David Daney, Chandrakala Chavva

From: David Daney <david.daney@cavium.com>

Signed-off-by: David Daney <david.daney@cavium.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 99e5cef..378009c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3279,6 +3279,12 @@ W:	http://www.cavium.com
 S:	Supported
 F:	drivers/mmc/host/cavium*
 
+CAVIUM OCTEON-III NETWORK DRIVER
+M:	Steven J. Hill <Steven.Hill@cavium.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ethernet/cavium/octeon/octeon3-*
+
 CAVIUM OCTEON-TX CRYPTO DRIVER
 M:	George Cherian <george.cherian@cavium.com>
 L:	linux-crypto@vger.kernel.org
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface.
  2018-06-27 21:25 ` [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface Steven J. Hill
@ 2018-06-28  8:35   ` Andrew Lunn
  2018-07-06 22:10     ` Steven J. Hill
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Lunn @ 2018-06-28  8:35 UTC (permalink / raw)
  To: Steven J. Hill; +Cc: netdev, Carlos Munoz, Chandrakala Chavva

> +- cavium,rx-clk-delay-bypass: Set to <1> to bypass the rx clock delay setting.
> +  Needed by the Micrel PHY.

Could you explain this some more. Is it anything to do with RGMII delays?

Thanks
      Andrew

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-27 21:25 ` [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus Steven J. Hill
@ 2018-06-28  8:41   ` Andrew Lunn
  2018-06-28 21:20     ` Carlos Munoz
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Lunn @ 2018-06-28  8:41 UTC (permalink / raw)
  To: Steven J. Hill; +Cc: netdev, Carlos Munoz, Chandrakala Chavva

> +static char *mix_port;
> +module_param(mix_port, charp, 0444);
> +MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
> +
> +static char *pki_port;
> +module_param(pki_port, charp, 0444);
> +MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");

Module parameters are generally not liked. Can you do without them?

> +		/* One time request driver module */
> +		if (is_mix) {
> +			if (atomic_cmpxchg(&request_mgmt_once, 0, 1) == 0)
> +				request_module_nowait("octeon_mgmt");

Why is this needed? So long as the driver has the needed properties,
udev should load the module.

     Andrew

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-28  8:41   ` Andrew Lunn
@ 2018-06-28 21:20     ` Carlos Munoz
  2018-06-29  2:19       ` David Miller
  0 siblings, 1 reply; 20+ messages in thread
From: Carlos Munoz @ 2018-06-28 21:20 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: Steven J. Hill, netdev, Chandrakala Chavva



On 06/28/2018 01:41 AM, Andrew Lunn wrote:
> External Email
>
>> +static char *mix_port;
>> +module_param(mix_port, charp, 0444);
>> +MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
>> +
>> +static char *pki_port;
>> +module_param(pki_port, charp, 0444);
>> +MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");
> Module parameters are generally not liked. Can you do without them?

These parameters change the kernel port assignment required by user space applications. We would rather keep them, as they simplify the process.

>
>> +             /* One time request driver module */
>> +             if (is_mix) {
>> +                     if (atomic_cmpxchg(&request_mgmt_once, 0, 1) == 0)
>> +                             request_module_nowait("octeon_mgmt");
> Why is this needed? So long as the driver has the needed properties,
> udev should load the module.
>
>      Andrew

The thing is, the management module is only loaded when a port is assigned to it (as determined by the "mix_port" module parameter above).

Best regards,
Carlos

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-28 21:20     ` Carlos Munoz
@ 2018-06-29  2:19       ` David Miller
  2018-06-29  3:30         ` Chavva, Chandrakala
  2018-06-29  6:13         ` Jiri Pirko
  0 siblings, 2 replies; 20+ messages in thread
From: David Miller @ 2018-06-29  2:19 UTC (permalink / raw)
  To: cmunoz; +Cc: andrew, steven.hill, netdev, cchavva

From: Carlos Munoz <cmunoz@cavium.com>
Date: Thu, 28 Jun 2018 14:20:05 -0700

> 
> 
> On 06/28/2018 01:41 AM, Andrew Lunn wrote:
>> External Email
>>
>>> +static char *mix_port;
>>> +module_param(mix_port, charp, 0444);
>>> +MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
>>> +
>>> +static char *pki_port;
>>> +module_param(pki_port, charp, 0444);
>>> +MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");
>> Module parameters are generally not liked. Can you do without them?
> 
> These parameters change the kernel port assignment required by user
> space applications. We rather keep them as they simplify the
> process.

This is actually a terrible user experience.

Please provide a way to do this by performing operations on a device object
after the driver loads.

Use something like devlink or similar if you have to.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-29  2:19       ` David Miller
@ 2018-06-29  3:30         ` Chavva, Chandrakala
  2018-06-29  6:21           ` David Miller
  2018-06-29  6:13         ` Jiri Pirko
  1 sibling, 1 reply; 20+ messages in thread
From: Chavva, Chandrakala @ 2018-06-29  3:30 UTC (permalink / raw)
  To: David Miller, Munoz, Carlos; +Cc: andrew, Hill, Steven, netdev

David,

How can we support NFS boot if we pass the parameters via devlink? Basically, this determines which PHY to use from the device tree.

Chandra

________________________________________
From: David Miller <davem@davemloft.net>
Sent: Thursday, June 28, 2018 7:19:05 PM
To: Munoz, Carlos
Cc: andrew@lunn.ch; Hill, Steven; netdev@vger.kernel.org; Chavva, Chandrakala
Subject: Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus

External Email

From: Carlos Munoz <cmunoz@cavium.com>
Date: Thu, 28 Jun 2018 14:20:05 -0700

>
>
> On 06/28/2018 01:41 AM, Andrew Lunn wrote:
>> External Email
>>
>>> +static char *mix_port;
>>> +module_param(mix_port, charp, 0444);
>>> +MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
>>> +
>>> +static char *pki_port;
>>> +module_param(pki_port, charp, 0444);
>>> +MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");
>> Module parameters are generally not liked. Can you do without them?
>
> These parameters change the kernel port assignment required by user
> space applications. We rather keep them as they simplify the
> process.

This is actually a terrible user experience.

Please provide a way to do this by performing operations on a device object
after the driver loads.

Use something like devlink or similar if you have to.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-29  2:19       ` David Miller
  2018-06-29  3:30         ` Chavva, Chandrakala
@ 2018-06-29  6:13         ` Jiri Pirko
  1 sibling, 0 replies; 20+ messages in thread
From: Jiri Pirko @ 2018-06-29  6:13 UTC (permalink / raw)
  To: David Miller; +Cc: cmunoz, andrew, steven.hill, netdev, cchavva

Fri, Jun 29, 2018 at 04:19:05AM CEST, davem@davemloft.net wrote:
>From: Carlos Munoz <cmunoz@cavium.com>
>Date: Thu, 28 Jun 2018 14:20:05 -0700
>
>> 
>> 
>> On 06/28/2018 01:41 AM, Andrew Lunn wrote:
>>> External Email
>>>
>>>> +static char *mix_port;
>>>> +module_param(mix_port, charp, 0444);
>>>> +MODULE_PARM_DESC(mix_port, "Specifies which ports connect to MIX interfaces.");
>>>> +
>>>> +static char *pki_port;
>>>> +module_param(pki_port, charp, 0444);
>>>> +MODULE_PARM_DESC(pki_port, "Specifies which ports connect to the PKI.");
>>> Module parameters are generally not liked. Can you do without them?
>> 
>> These parameters change the kernel port assignment required by user
>> space applications. We rather keep them as they simplify the
>> process.
>
>This is actually a terrible user experience.
>
>Please provide a way to do this by performing operations on a device object
>after the driver loads.
>
>Use something like devlink or similar if you have to.

Devlink params should be used for this. They are not upstream yet. We
will push it most likely early next week.
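
[For reference, a rough sketch of what the devlink-params approach could look
like for replacing the mix_port module parameter. The devlink_param API shown
is the one that eventually landed upstream in v4.19 and was not yet merged at
the time of this thread; the parameter id, name, and helper function are
hypothetical.]

#include <linux/kernel.h>
#include <net/devlink.h>

enum octeon3_devlink_param_id {
	/* driver-specific ids must start above the generic ones */
	OCTEON3_DEVLINK_PARAM_ID_MIX_PORT = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static const struct devlink_param octeon3_devlink_params[] = {
	DEVLINK_PARAM_DRIVER(OCTEON3_DEVLINK_PARAM_ID_MIX_PORT,
			     "mix_port", DEVLINK_PARAM_TYPE_STRING,
			     BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
			     NULL, NULL, NULL),
};

/* Called from probe after devlink_register().  User space can then set
 * the value with "devlink dev param set ... cmode driverinit" and the
 * driver reads it back with devlink_param_driverinit_value_get().
 */
static int octeon3_register_devlink_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, octeon3_devlink_params,
				       ARRAY_SIZE(octeon3_devlink_params));
}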

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus
  2018-06-29  3:30         ` Chavva, Chandrakala
@ 2018-06-29  6:21           ` David Miller
  0 siblings, 0 replies; 20+ messages in thread
From: David Miller @ 2018-06-29  6:21 UTC (permalink / raw)
  To: Chandrakala.Chavva; +Cc: Carlos.Munoz, andrew, Steven.Hill, netdev

From: "Chavva, Chandrakala" <Chandrakala.Chavva@cavium.com>
Date: Fri, 29 Jun 2018 03:30:51 +0000

> How can we support NFS boot if pass the parameters via
> devlink. Basically this determines what phy to use from device tree.

initrd.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface.
  2018-06-28  8:35   ` Andrew Lunn
@ 2018-07-06 22:10     ` Steven J. Hill
  2018-07-06 22:41       ` Andrew Lunn
  0 siblings, 1 reply; 20+ messages in thread
From: Steven J. Hill @ 2018-07-06 22:10 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: netdev, Chandrakala Chavva

On 06/28/2018 03:35 AM, Andrew Lunn wrote:
> 
>> +- cavium,rx-clk-delay-bypass: Set to <1> to bypass the rx clock delay setting.
>> +  Needed by the Micrel PHY.
> 
> Could you explain this some more. Is it anything to do with RGMII delays?
> 
Andrew,

One of my colleagues tracked this down for me. This device tree option is in place
because there are several different ways to do the clock and data with respect to
RGMII. This controls the delay introduced for the RX clock with respect to the data.
Without this, RX will not work with Micrel PHYs. Thanks.

Steve

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface.
  2018-07-06 22:10     ` Steven J. Hill
@ 2018-07-06 22:41       ` Andrew Lunn
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Lunn @ 2018-07-06 22:41 UTC (permalink / raw)
  To: Steven J. Hill; +Cc: netdev, Chandrakala Chavva

On Fri, Jul 06, 2018 at 05:10:39PM -0500, Steven J. Hill wrote:
> On 06/28/2018 03:35 AM, Andrew Lunn wrote:
> > 
> >> +- cavium,rx-clk-delay-bypass: Set to <1> to bypass the rx clock delay setting.
> >> +  Needed by the Micrel PHY.
> > 
> > Could you explain this some more. Is it anything to do with RGMII delays?
> > 
> Andrew,
> 
> One of my colleagues tracked this down for me. This device tree option is in place
> because there are several different ways to do the clock and data with respect to
> RGMII. This controls the delay introduced for the RX clock with respect to the data.
> Without this, RX will not work with Micrel PHYs. Thanks.

Hi Steven

So this is about RGMII delays, as I guessed.

Don't add this property; do it the Linux way. Look at the phy-mode values:

phy.h:	  PHY_INTERFACE_MODE_RGMII_ID,
phy.h:	  PHY_INTERFACE_MODE_RGMII_RXID,
phy.h:	  PHY_INTERFACE_MODE_RGMII_TXID,

There are plenty of examples in drivers/net/ethernet

      Andrew
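
[For reference, a minimal sketch of what the suggested approach might look like
in the MAC driver: read the standard phy-mode property and derive the RX clock
delay from it instead of using a custom bypass flag. The bgx_port struct and
the bgx_port_enable_rx_clk_delay() helper are hypothetical; of_get_phy_mode()
is used with its 2018-era signature (returns the mode or a negative errno).]

#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/phy.h>

struct bgx_port;	/* driver-private state (hypothetical) */
void bgx_port_enable_rx_clk_delay(struct bgx_port *port, bool enable);

static int bgx_port_setup_rgmii_delay(struct bgx_port *port,
				      struct device_node *np)
{
	int mode = of_get_phy_mode(np);

	if (mode < 0)
		return mode;

	switch (mode) {
	case PHY_INTERFACE_MODE_RGMII:
	case PHY_INTERFACE_MODE_RGMII_TXID:
		/* PHY adds no RX delay, so the MAC must insert it. */
		bgx_port_enable_rx_clk_delay(port, true);
		break;
	case PHY_INTERFACE_MODE_RGMII_ID:
	case PHY_INTERFACE_MODE_RGMII_RXID:
		/* PHY (e.g. the Micrel part) adds the RX delay itself,
		 * so bypass the MAC-side delay.
		 */
		bgx_port_enable_rx_clk_delay(port, false);
		break;
	default:
		break;
	}
	return 0;
}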

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2018-07-06 22:41 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-27 21:25 [PATCH v12 00/10] netdev: octeon-ethernet: Add Cavium Octeon III support Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 01/10] dt-bindings: Add Cavium Octeon Common Ethernet Interface Steven J. Hill
2018-06-28  8:35   ` Andrew Lunn
2018-07-06 22:10     ` Steven J. Hill
2018-07-06 22:41       ` Andrew Lunn
2018-06-27 21:25 ` [PATCH v12 02/10] netdev: cavium: octeon: Header for Octeon III BGX Ethernet Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 03/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet Nexus Steven J. Hill
2018-06-28  8:41   ` Andrew Lunn
2018-06-28 21:20     ` Carlos Munoz
2018-06-29  2:19       ` David Miller
2018-06-29  3:30         ` Chavva, Chandrakala
2018-06-29  6:21           ` David Miller
2018-06-29  6:13         ` Jiri Pirko
2018-06-27 21:25 ` [PATCH v12 04/10] netdev: cavium: octeon: Add Octeon III BGX Ports Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 05/10] netdev: cavium: octeon: Add Octeon III PKI Support Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 06/10] netdev: cavium: octeon: Add Octeon III PKO Support Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 07/10] netdev: cavium: octeon: Add Octeon III SSO Support Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 08/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet core Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 09/10] netdev: cavium: octeon: Add Octeon III BGX Ethernet building Steven J. Hill
2018-06-27 21:25 ` [PATCH v12 10/10] MAINTAINERS: Add entry for drivers/net/ethernet/cavium/octeon/octeon3-* Steven J. Hill
