* [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver
@ 2020-12-11 16:05 Stefan Roese
From: Stefan Roese @ 2020-12-11 16:05 UTC
  To: u-boot


This patchset adds the serdes and (mostly networking) device helper
macros and functions needed to support the Octeon II / III devices
that are still missing in mainline U-Boot.

Please excuse the massive number of files in this patch series, as
well as the sometimes huge files (mostly headers with register
definitions) that I needed to include.

The infrastructure code with all the headers is ported without any
intended functional changes from the 2013 Cavium / Marvell U-Boot
version. It has undergone many hours of extensive code cleanup and
reformatting, some of it done with tools (checkpatch, Lindent,
clang-format etc.) and some of it done manually, as I could not find
tools that could do the needed work in a reliable way. The result is
that checkpatch now only throws a "few" remaining warnings. Some of
those cannot be removed without an even more extensive cleanup /
rewrite of the code, such as the warnings about the use of typedefs.

The added headers and helper functions will be used by the upcoming
support for the Octeon II / III networking drivers, including PHY &
switch support. It was not easily possible to split these
infrastructure files into a separate patchset, as they are heavily
interconnected in the common QLM/DLM serdes interface initialization.
The result is that the upcoming ethernet driver series should be much
smaller (this is at least my current assumption).

The added PCIe RC support with the included DM PCIe driver is the
first driver making use of this Octeon serdes infrastructure. It has
been tested with an Intel E1000 PCIe network card on the Octeon
EBB7304 board.

Thanks,
Stefan


Aaron Williams (42):
  mips: octeon: Add misc cvmx-helper header files
  mips: octeon: Add cvmx-agl-defs.h header file
  mips: octeon: Add cvmx-asxx-defs.h header file
  mips: octeon: Add cvmx-bgxx-defs.h header file
  mips: octeon: Add cvmx-ciu-defs.h header file
  mips: octeon: Add cvmx-dbg-defs.h header file
  mips: octeon: Add cvmx-dpi-defs.h header file
  mips: octeon: Add cvmx-dtx-defs.h header file
  mips: octeon: Add cvmx-fpa-defs.h header file
  mips: octeon: Add cvmx-gmxx-defs.h header file
  mips: octeon: Add cvmx-gserx-defs.h header file
  mips: octeon: Add cvmx-ipd-defs.h header file
  mips: octeon: Add cvmx-l2c-defs.h header file
  mips: octeon: Add cvmx-mio-defs.h header file
  mips: octeon: Add cvmx-npi-defs.h header file
  mips: octeon: Add cvmx-pcieepx-defs.h header file
  mips: octeon: Add cvmx-pciercx-defs.h header file
  mips: octeon: Add cvmx-pcsx-defs.h header file
  mips: octeon: Add cvmx-pemx-defs.h header file
  mips: octeon: Add cvmx-pepx-defs.h header file
  mips: octeon: Add cvmx-pip-defs.h header file
  mips: octeon: Add cvmx-pki-defs.h header file
  mips: octeon: Add cvmx-pko-defs.h header file
  mips: octeon: Add cvmx-pow-defs.h header file
  mips: octeon: Add cvmx-rst-defs.h header file
  mips: octeon: Add cvmx-sata-defs.h header file
  mips: octeon: Add cvmx-sli-defs.h header file
  mips: octeon: Add cvmx-smix-defs.h header file
  mips: octeon: Add cvmx-sriomaintx-defs.h header file
  mips: octeon: Add cvmx-sriox-defs.h header file
  mips: octeon: Add cvmx-sso-defs.h header file
  mips: octeon: Add misc remaining header files
  mips: octeon: Add cvmx-helper-cfg.c
  mips: octeon: Add cvmx-helper-fdt.c
  mips: octeon: Add cvmx-helper-jtag.c
  mips: octeon: Add cvmx-helper-util.c
  mips: octeon: Add cvmx-helper.c
  mips: octeon: Add cvmx-pcie.c
  mips: octeon: Add cvmx-qlm.c
  mips: octeon: Add octeon_fdt.c
  mips: octeon: Add octeon_qlm.c
  mips: octeon: octeon_ebb7304: Add board specific QLM init code

Stefan Roese (8):
  mips: global_data.h: Add Octeon specific data to arch_global_data
    struct
  mips: octeon: Misc changes required because of the newly added headers
  mips: octeon: Move cvmx-lmcx-defs.h from mach/cvmx to mach
  mips: octeon: Makefile: Enable building of the newly added C files
  mips: octeon: Kconfig: Enable CONFIG_SYS_PCI_64BIT
  mips: octeon: mrvl,cn73xx.dtsi: Add PCIe controller DT node
  mips: octeon: Add Octeon PCIe host controller driver
  mips: octeon: octeon_ebb7304_defconfig: Enable Octeon PCIe and E1000

 arch/mips/dts/mrvl,cn73xx.dtsi                |   16 +
 arch/mips/include/asm/global_data.h           |    9 +
 arch/mips/mach-octeon/Kconfig                 |    4 +
 arch/mips/mach-octeon/Makefile                |   11 +
 arch/mips/mach-octeon/bootoctlinux.c          |    1 +
 arch/mips/mach-octeon/cvmx-bootmem.c          |    6 -
 arch/mips/mach-octeon/cvmx-coremask.c         |    1 +
 arch/mips/mach-octeon/cvmx-helper-cfg.c       | 1914 ++++
 arch/mips/mach-octeon/cvmx-helper-fdt.c       |  970 ++
 arch/mips/mach-octeon/cvmx-helper-jtag.c      |  172 +
 arch/mips/mach-octeon/cvmx-helper-util.c      | 1225 +++
 arch/mips/mach-octeon/cvmx-helper.c           | 2611 +++++
 arch/mips/mach-octeon/cvmx-pcie.c             | 2487 +++++
 arch/mips/mach-octeon/cvmx-qlm.c              | 2350 +++++
 .../mach-octeon/include/mach/cvmx-address.h   |  209 +
 .../mach-octeon/include/mach/cvmx-agl-defs.h  | 3135 ++++++
 .../mach-octeon/include/mach/cvmx-asxx-defs.h |  709 ++
 .../mach-octeon/include/mach/cvmx-bgxx-defs.h | 4106 +++++++
 .../mach-octeon/include/mach/cvmx-ciu-defs.h  | 7351 +++++++++++++
 .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +
 .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
 arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
 .../mach-octeon/include/mach/cvmx-dbg-defs.h  |   33 +
 .../mach-octeon/include/mach/cvmx-dpi-defs.h  | 1460 +++
 .../mach-octeon/include/mach/cvmx-dtx-defs.h  | 6962 ++++++++++++
 .../mach-octeon/include/mach/cvmx-error.h     |  456 +
 .../mach-octeon/include/mach/cvmx-fpa-defs.h  | 1866 ++++
 arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 +
 .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 +
 .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 +
 .../include/mach/cvmx-global-resources.h      |  213 +
 arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
 .../mach-octeon/include/mach/cvmx-gmxx-defs.h | 6378 +++++++++++
 .../include/mach/cvmx-gserx-defs.h            | 2191 ++++
 .../include/mach/cvmx-helper-agl.h            |   68 +
 .../include/mach/cvmx-helper-bgx.h            |  335 +
 .../include/mach/cvmx-helper-board.h          |  558 +
 .../include/mach/cvmx-helper-cfg.h            |  884 ++
 .../include/mach/cvmx-helper-errata.h         |   50 +
 .../include/mach/cvmx-helper-fdt.h            |  568 +
 .../include/mach/cvmx-helper-fpa.h            |   43 +
 .../include/mach/cvmx-helper-gpio.h           |  427 +
 .../include/mach/cvmx-helper-ilk.h            |   93 +
 .../include/mach/cvmx-helper-ipd.h            |   16 +
 .../include/mach/cvmx-helper-jtag.h           |   84 +
 .../include/mach/cvmx-helper-loop.h           |   37 +
 .../include/mach/cvmx-helper-npi.h            |   42 +
 .../include/mach/cvmx-helper-pki.h            |  319 +
 .../include/mach/cvmx-helper-pko.h            |   51 +
 .../include/mach/cvmx-helper-pko3.h           |   76 +
 .../include/mach/cvmx-helper-rgmii.h          |   99 +
 .../include/mach/cvmx-helper-sfp.h            |  437 +
 .../include/mach/cvmx-helper-sgmii.h          |   81 +
 .../include/mach/cvmx-helper-spi.h            |   73 +
 .../include/mach/cvmx-helper-srio.h           |   72 +
 .../include/mach/cvmx-helper-util.h           |  412 +
 .../include/mach/cvmx-helper-xaui.h           |  108 +
 .../mach-octeon/include/mach/cvmx-helper.h    |  565 +
 .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++
 .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 +
 arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
 .../mach-octeon/include/mach/cvmx-ipd-defs.h  | 1925 ++++
 arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 +
 .../mach-octeon/include/mach/cvmx-l2c-defs.h  |  172 +
 .../include/mach/{cvmx => }/cvmx-lmcx-defs.h  |    0
 .../mach-octeon/include/mach/cvmx-mio-defs.h  |  353 +
 .../mach-octeon/include/mach/cvmx-npi-defs.h  | 1953 ++++
 .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
 .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 +
 .../include/mach/cvmx-pcieepx-defs.h          | 6848 ++++++++++++
 .../include/mach/cvmx-pciercx-defs.h          | 5586 ++++++++++
 .../mach-octeon/include/mach/cvmx-pcsx-defs.h | 1005 ++
 .../mach-octeon/include/mach/cvmx-pemx-defs.h | 2028 ++++
 .../mach-octeon/include/mach/cvmx-pexp-defs.h | 1382 +++
 .../mach-octeon/include/mach/cvmx-pip-defs.h  | 3040 ++++++
 arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++
 .../mach-octeon/include/mach/cvmx-pki-defs.h  | 2353 +++++
 .../include/mach/cvmx-pki-resources.h         |  157 +
 arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++
 .../mach-octeon/include/mach/cvmx-pko-defs.h  | 9388 +++++++++++++++++
 .../mach/cvmx-pko-internal-ports-range.h      |   43 +
 .../include/mach/cvmx-pko3-queue.h            |  175 +
 .../mach-octeon/include/mach/cvmx-pow-defs.h  | 1135 ++
 arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 ++++++
 arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 +
 .../mips/mach-octeon/include/mach/cvmx-regs.h |  330 +-
 .../mach-octeon/include/mach/cvmx-rst-defs.h  |   77 +
 .../mach-octeon/include/mach/cvmx-sata-defs.h |  311 +
 .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
 .../mach-octeon/include/mach/cvmx-sli-defs.h  | 6548 ++++++++++++
 .../mach-octeon/include/mach/cvmx-smix-defs.h |  360 +
 .../include/mach/cvmx-sriomaintx-defs.h       |   61 +
 .../include/mach/cvmx-sriox-defs.h            |   44 +
 .../mach-octeon/include/mach/cvmx-sso-defs.h  | 2904 +++++
 arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 +++
 .../mach-octeon/include/mach/octeon-feature.h |    2 +
 .../mach-octeon/include/mach/octeon-model.h   |    2 +
 .../mach-octeon/include/mach/octeon_ddr.h     |  189 +-
 arch/mips/mach-octeon/octeon_fdt.c            | 1040 ++
 arch/mips/mach-octeon/octeon_qlm.c            | 5853 ++++++++++
 board/Marvell/octeon_ebb7304/board.c          |  732 +-
 configs/octeon_ebb7304_defconfig              |    5 +-
 drivers/pci/Kconfig                           |    6 +
 drivers/pci/Makefile                          |    1 +
 drivers/pci/pcie_octeon.c                     |  159 +
 drivers/ram/octeon/octeon3_lmc.c              |   28 +-
 drivers/ram/octeon/octeon_ddr.c               |   22 +-
 107 files changed, 118724 insertions(+), 240 deletions(-)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-cfg.c
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-fdt.c
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-jtag.c
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-util.c
 create mode 100644 arch/mips/mach-octeon/cvmx-helper.c
 create mode 100644 arch/mips/mach-octeon/cvmx-pcie.c
 create mode 100644 arch/mips/mach-octeon/cvmx-qlm.c
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-agl-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-asxx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-bgxx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ciu-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dbg-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dpi-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dtx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmxx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gserx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-agl.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-bgx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-board.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-cfg.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-errata.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-fdt.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-gpio.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-jtag.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-loop.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-npi.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pko3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-rgmii.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-sfp.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-sgmii.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-spi.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-srio.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-util.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-xaui.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-l2c-defs.h
 rename arch/mips/mach-octeon/include/mach/{cvmx => }/cvmx-lmcx-defs.h (100%)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-mio-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-npi-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcieepx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pciercx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcsx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pemx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pexp-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-rst-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sata-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sli-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-smix-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sriomaintx-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sriox-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sso-defs.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h
 create mode 100644 arch/mips/mach-octeon/octeon_fdt.c
 create mode 100644 arch/mips/mach-octeon/octeon_qlm.c
 create mode 100644 drivers/pci/pcie_octeon.c

-- 
2.29.2


* [PATCH v1 01/50] mips: global_data.h: Add Octeon specific data to arch_global_data struct
From: Stefan Roese @ 2020-12-11 16:05 UTC
  To: u-boot

This will be used by the upcoming serdes and driver code ported from
the original 2013 U-Boot code to mainline.
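
As a rough illustration (a sketch only, not code from this series;
the function name and the increment-per-port scheme are assumptions),
such a MAC description is typically consumed by assigning the base
address to the first port and incrementing it once per port, for up
to "count" addresses:

	/* Sketch: derive the MAC address of port 'index' from
	 * gd->arch.mac_desc (requires index < count and the usual
	 * U-Boot DECLARE_GLOBAL_DATA_PTR context).
	 */
	static void octeon_get_mac_addr(int index, u8 mac[6])
	{
		u64 base = 0;
		int i;

		for (i = 0; i < 6; i++)
			base = (base << 8) | gd->arch.mac_desc.mac_addr_base[i];
		base += index;
		for (i = 5; i >= 0; i--) {
			mac[i] = base & 0xff;
			base >>= 8;
		}
	}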

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/include/asm/global_data.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/mips/include/asm/global_data.h b/arch/mips/include/asm/global_data.h
index 4c30fab871..f0d3b07bf1 100644
--- a/arch/mips/include/asm/global_data.h
+++ b/arch/mips/include/asm/global_data.h
@@ -8,6 +8,12 @@
 #define __ASM_GBL_DATA_H
 
 #include <asm/regdef.h>
+#include <asm/types.h>
+
+struct octeon_eeprom_mac_addr {
+	u8 mac_addr_base[6];
+	u8 count;
+};
 
 /* Architecture-specific global data */
 struct arch_global_data {
@@ -30,6 +36,9 @@ struct arch_global_data {
 #ifdef CONFIG_ARCH_MTMIPS
 	unsigned long timer_freq;
 #endif
+#ifdef CONFIG_ARCH_OCTEON
+	struct octeon_eeprom_mac_addr mac_desc;
+#endif
 };
 
 #include <asm-generic/global_data.h>
-- 
2.29.2


* [PATCH v1 02/50] mips: octeon: Add misc cvmx-helper header files
From: Stefan Roese @ 2020-12-11 16:05 UTC
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import misc cvmx-helper header files from the 2013 U-Boot version.
They will be used by the drivers added later to support PCIe and
networking on the MIPS Octeon II / III platforms.
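
As a rough sketch of how these helpers fit together (based only on
the declarations in the added headers, assuming the usual
cvmx_helper_link_info_t type from cvmx-helper.h; "interface" and
"ipd_port" stand for valid values), a driver would probe an
interface, enable it, and then query and apply the link state:

	/* Sketch: bring-up flow using the AGL (RGMII) helpers */
	cvmx_helper_link_info_t link;
	int num_ports = __cvmx_helper_agl_probe(interface);

	if (num_ports > 0) {
		__cvmx_helper_agl_enable(interface);
		link = __cvmx_helper_agl_link_get(ipd_port);
		__cvmx_helper_agl_link_set(ipd_port, link);
	}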

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../include/mach/cvmx-helper-agl.h            |  68 ++
 .../include/mach/cvmx-helper-bgx.h            | 335 +++++++
 .../include/mach/cvmx-helper-board.h          | 558 +++++++++++
 .../include/mach/cvmx-helper-cfg.h            | 884 ++++++++++++++++++
 .../include/mach/cvmx-helper-errata.h         |  50 +
 .../include/mach/cvmx-helper-fdt.h            | 568 +++++++++++
 .../include/mach/cvmx-helper-fpa.h            |  43 +
 .../include/mach/cvmx-helper-gpio.h           | 427 +++++++++
 .../include/mach/cvmx-helper-ilk.h            |  93 ++
 .../include/mach/cvmx-helper-ipd.h            |  16 +
 .../include/mach/cvmx-helper-jtag.h           |  84 ++
 .../include/mach/cvmx-helper-loop.h           |  37 +
 .../include/mach/cvmx-helper-npi.h            |  42 +
 .../include/mach/cvmx-helper-pki.h            | 319 +++++++
 .../include/mach/cvmx-helper-pko.h            |  51 +
 .../include/mach/cvmx-helper-pko3.h           |  76 ++
 .../include/mach/cvmx-helper-rgmii.h          |  99 ++
 .../include/mach/cvmx-helper-sfp.h            | 437 +++++++++
 .../include/mach/cvmx-helper-sgmii.h          |  81 ++
 .../include/mach/cvmx-helper-spi.h            |  73 ++
 .../include/mach/cvmx-helper-srio.h           |  72 ++
 .../include/mach/cvmx-helper-util.h           | 412 ++++++++
 .../include/mach/cvmx-helper-xaui.h           | 108 +++
 .../mach-octeon/include/mach/cvmx-helper.h    | 565 +++++++++++
 24 files changed, 5498 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-agl.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-bgx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-board.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-cfg.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-errata.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-fdt.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-gpio.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-jtag.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-loop.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-npi.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-pko3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-rgmii.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-sfp.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-sgmii.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-spi.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-srio.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-util.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper-xaui.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-helper.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-agl.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-agl.h
new file mode 100644
index 0000000000..7a5e4d89d3
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-agl.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for AGL (RGMII) initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_AGL_H__
+#define __CVMX_HELPER_AGL_H__
+
+int __cvmx_helper_agl_enumerate(int interface);
+
+int cvmx_helper_agl_get_port(int xiface);
+
+/**
+ * @INTERNAL
+ * Probe an RGMII interface and determine the number of ports
+ * connected to it. The RGMII interface should still be down
+ * after this call.
+ *
+ * @param interface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_agl_probe(int interface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an RGMII interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param interface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_agl_enable(int interface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_agl_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_agl_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+#endif /* __CVMX_HELPER_AGL_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-bgx.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-bgx.h
new file mode 100644
index 0000000000..ead6193ec0
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-bgx.h
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions to configure the BGX MAC.
+ */
+
+#ifndef __CVMX_HELPER_BGX_H__
+#define __CVMX_HELPER_BGX_H__
+
+#define CVMX_BGX_RX_FIFO_SIZE (64 * 1024)
+#define CVMX_BGX_TX_FIFO_SIZE (32 * 1024)
+
+int __cvmx_helper_bgx_enumerate(int xiface);
+
+/**
+ * @INTERNAL
+ * Disable the BGX port
+ *
+ * @param xipd_port IPD port of the BGX interface to disable
+ */
+void cvmx_helper_bgx_disable(int xipd_port);
+
+/**
+ * @INTERNAL
+ * Probe an SGMII interface and determine the number of ports
+ * connected to it. The SGMII/XAUI interface should still be down after
+ * this call. This is used by interfaces using the bgx mac.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_bgx_probe(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an SGMII interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled. This is used by interfaces using the
+ * bgx mac.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_bgx_sgmii_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set(). This is used by
+ * interfaces using the bgx mac.
+ *
+ * @param xipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_bgx_sgmii_link_get(int xipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead. This is used by interfaces
+ * using the bgx mac.
+ *
+ * @param xipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_bgx_sgmii_link_set(int xipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again. This is used by
+ * interfaces using the bgx mac.
+ *
+ * @param xipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int __cvmx_helper_bgx_sgmii_configure_loopback(int xipd_port, int enable_internal,
+					       int enable_external);
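+/*
+ * Illustrative example (sketch only, not in the original header):
+ * enable internal loopback only, so packets transmitted by the port
+ * are received back by Octeon:
+ *
+ *   __cvmx_helper_bgx_sgmii_configure_loopback(xipd_port, 1, 0);
+ */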
+
+/**
+ * @INTERNAL
+ * Bring up and enable a XAUI interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled. This is used by interfaces using the
+ * bgx mac.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_bgx_xaui_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set(). This is used by
+ * interfaces using the bgx mac.
+ *
+ * @param xipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_bgx_xaui_link_get(int xipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead. This is used by interfaces
+ * using the bgx mac.
+ *
+ * @param xipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_bgx_xaui_link_set(int xipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again. This is used by
+ * interfaces using the bgx mac.
+ *
+ * @param xipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int __cvmx_helper_bgx_xaui_configure_loopback(int xipd_port, int enable_internal,
+					      int enable_external);
+
+int __cvmx_helper_bgx_mixed_enable(int xiface);
+
+cvmx_helper_link_info_t __cvmx_helper_bgx_mixed_link_get(int xipd_port);
+
+int __cvmx_helper_bgx_mixed_link_set(int xipd_port, cvmx_helper_link_info_t link_info);
+
+int __cvmx_helper_bgx_mixed_configure_loopback(int xipd_port, int enable_internal,
+					       int enable_external);
+
+cvmx_helper_interface_mode_t cvmx_helper_bgx_get_mode(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Configure Priority-Based Flow Control (a.k.a. PFC/CBFC)
+ * on a specific BGX interface/port.
+ */
+void __cvmx_helper_bgx_xaui_config_pfc(unsigned int node, unsigned int interface, unsigned int port,
+				       bool pfc_enable);
+
+/**
+ * This function controls how the hardware handles incoming PAUSE
+ * packets. The most common modes of operation:
+ * ctl_bck = 1, ctl_drp = 1: hardware handles everything
+ * ctl_bck = 0, ctl_drp = 0: software sees all PAUSE frames
+ * ctl_bck = 0, ctl_drp = 1: all PAUSE frames are completely ignored
+ * @param node		node number.
+ * @param interface	interface number
+ * @param port		port number
+ * @param ctl_bck	1: Forward PAUSE information to TX block
+ * @param ctl_drp	1: Drop control PAUSE frames.
+ */
+void cvmx_helper_bgx_rx_pause_ctl(unsigned int node, unsigned int interface, unsigned int port,
+				  unsigned int ctl_bck, unsigned int ctl_drp);
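+/*
+ * Illustrative example (sketch only, not in the original header):
+ * let the hardware handle PAUSE frames entirely on node 0,
+ * interface 1, port 0 (forward PAUSE info to TX, drop the frames):
+ *
+ *   cvmx_helper_bgx_rx_pause_ctl(0, 1, 0, 1, 1);
+ */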
+
+/**
+ * This function configures the receive action taken for multicast, broadcast
+ * and dmac filter match packets.
+ * @param node		node number.
+ * @param interface	interface number
+ * @param port		port number
+ * @param cam_accept	0: reject packets on dmac filter match
+ *                      1: accept packet on dmac filter match
+ * @param mcast_mode	0x0 = Force reject all multicast packets
+ *                      0x1 = Force accept all multicast packets
+ *                      0x2 = Use the address filter CAM
+ * @param bcast_accept  0 = Reject all broadcast packets
+ *                      1 = Accept all broadcast packets
+ */
+void cvmx_helper_bgx_rx_adr_ctl(unsigned int node, unsigned int interface, unsigned int port,
+				unsigned int cam_accept, unsigned int mcast_mode,
+				unsigned int bcast_accept);
+
+/**
+ * Function to control the generation of FCS, padding by the BGX
+ *
+ */
+void cvmx_helper_bgx_tx_options(unsigned int node, unsigned int interface, unsigned int index,
+				bool fcs_enable, bool pad_enable);
+
+/**
+ * Set mac for the ipd_port
+ *
+ * @param xipd_port ipd_port to set the mac
+ * @param bcst      If set, accept all broadcast packets
+ * @param mcst      Multicast mode
+ *		    0 = Force reject all multicast packets
+ *		    1 = Force accept all multicast packets
+ *		    2 = use the address filter CAM.
+ * @param mac       mac address for the ipd_port
+ */
+void cvmx_helper_bgx_set_mac(int xipd_port, int bcst, int mcst, u64 mac);
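+/*
+ * Illustrative example (sketch only, not in the original header):
+ * program a MAC address, accept all broadcast packets and use the
+ * address filter CAM for multicast:
+ *
+ *   cvmx_helper_bgx_set_mac(xipd_port, 1, 2, 0x001122334455ull);
+ */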
+
+int __cvmx_helper_bgx_port_init(int xipd_port, int phy_pres);
+void cvmx_helper_bgx_set_jabber(int xiface, unsigned int index, unsigned int size);
+int cvmx_helper_bgx_shutdown_port(int xiface, int index);
+int cvmx_bgx_set_backpressure_override(int xiface, unsigned int port_mask);
+int __cvmx_helper_bgx_fifo_size(int xiface, unsigned int lmac);
+
+/**
+ * Returns if an interface is RGMII or not
+ *
+ * @param xiface	xinterface to check
+ * @param index		port index (must be 0 for rgmii)
+ *
+ * @return	true if RGMII, false otherwise
+ */
+static inline bool cvmx_helper_bgx_is_rgmii(int xiface, int index)
+{
+	union cvmx_bgxx_cmrx_config cmr_config;
+
+	if (!OCTEON_IS_MODEL(OCTEON_CN73XX) || index != 0)
+		return false;
+	cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(index, xiface));
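+	/* LMAC type 5 corresponds to RGMII, available on CN73XX only */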
+	return cmr_config.s.lmac_type == 5;
+}
+
+/**
+ * Probes the BGX Super Path (SMU/SPU) mode
+ *
+ * @param xiface	global interface number
+ * @param index		interface index
+ *
+ * @return	true, if Super-MAC/PCS mode, false -- otherwise
+ */
+bool cvmx_helper_bgx_is_smu(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Configure parameters of PAUSE packet.
+ *
+ * @param xipd_port		Global IPD port (node + IPD port).
+ * @param smac			Source MAC address.
+ * @param dmac			Destination MAC address.
+ * @param type			PAUSE packet type.
+ * @param time			Pause time for PAUSE packets (number of 512 bit-times).
+ * @param interval		Interval between PAUSE packets (number of 512 bit-times).
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_bgx_set_pause_pkt_param(int xipd_port, u64 smac, u64 dmac, unsigned int type,
+				 unsigned int time, unsigned int interval);
+
+/**
+ * @INTERNAL
+ * Setup the BGX flow-control mode.
+ *
+ * @param xipd_port		Global IPD port (node + IPD port).
+ * @param type			Flow-control type/protocol.
+ * @param mode			Flow-control mode.
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_bgx_set_flowctl_mode(int xipd_port, cvmx_qos_proto_t qos, cvmx_qos_pkt_mode_t mode);
+
+/**
+ * Enables or disables autonegotiation for an interface.
+ *
+ * @param	xiface	interface to set autonegotiation
+ * @param	index	port index
+ * @param	enable	true to enable autonegotiation, false to disable it
+ *
+ * @return	0 for success, -1 on error.
+ */
+int cvmx_helper_set_autonegotiation(int xiface, int index, bool enable);
+
+/**
+ * Enables or disables forward error correction
+ *
+ * @param	xiface	interface
+ * @param	index	port index
+ * @param	enable	set to true to enable FEC, false to disable
+ *
+ * @return	0 for success, -1 on error
+ *
+ * @NOTE:	If autonegotiation is enabled then autonegotiation will be
+ *		restarted for negotiating FEC.
+ */
+int cvmx_helper_set_fec(int xiface, int index, bool enable);
+
+#ifdef CVMX_DUMP_BGX
+/**
+ * Dump BGX configuration for node 0
+ */
+int cvmx_dump_bgx_config(unsigned int bgx);
+/**
+ * Dump BGX status for node 0
+ */
+int cvmx_dump_bgx_status(unsigned int bgx);
+/**
+ * Dump BGX configuration
+ */
+int cvmx_dump_bgx_config_node(unsigned int node, unsigned int bgx);
+/**
+ * Dump BGX status
+ */
+int cvmx_dump_bgx_status_node(unsigned int node, unsigned int bgx);
+
+#endif
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-board.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-board.h
new file mode 100644
index 0000000000..d7a7b7d227
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-board.h
@@ -0,0 +1,558 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions to abstract board specific data about
+ * network ports from the rest of the cvmx-helper files.
+ */
+
+#ifndef __CVMX_HELPER_BOARD_H__
+#define __CVMX_HELPER_BOARD_H__
+
+#define CVMX_VSC7224_NAME_LEN 16
+
+typedef enum {
+	USB_CLOCK_TYPE_REF_12,
+	USB_CLOCK_TYPE_REF_24,
+	USB_CLOCK_TYPE_REF_48,
+	USB_CLOCK_TYPE_CRYSTAL_12,
+} cvmx_helper_board_usb_clock_types_t;
+
+typedef enum cvmx_phy_type {
+	BROADCOM_GENERIC_PHY,
+	MARVELL_GENERIC_PHY,
+	CORTINA_PHY, /** Now Inphi */
+	AQUANTIA_PHY,
+	GENERIC_8023_C22_PHY,
+	GENERIC_8023_C45_PHY,
+	INBAND_PHY,
+	QUALCOMM_S17,	     /** Qualcomm QCA833X switch */
+	VITESSE_VSC8490_PHY, /** Vitesse VSC8490 is non-standard for SGMII */
+	FAKE_PHY,	     /** Unsupported or no PHY, use GPIOs for LEDs */
+} cvmx_phy_type_t;
+
+/** Used to record the host mode used by the Cortina CS4321 PHY */
+typedef enum {
+	CVMX_PHY_HOST_MODE_UNKNOWN,
+	CVMX_PHY_HOST_MODE_SGMII,
+	CVMX_PHY_HOST_MODE_QSGMII,
+	CVMX_PHY_HOST_MODE_XAUI,
+	CVMX_PHY_HOST_MODE_RXAUI,
+} cvmx_phy_host_mode_t;
+
+typedef enum {
+	set_phy_link_flags_autoneg = 0x1,
+	set_phy_link_flags_flow_control_dont_touch = 0x0 << 1,
+	set_phy_link_flags_flow_control_enable = 0x1 << 1,
+	set_phy_link_flags_flow_control_disable = 0x2 << 1,
+	set_phy_link_flags_flow_control_mask = 0x3 << 1,
+} cvmx_helper_board_set_phy_link_flags_types_t;
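+/*
+ * Illustrative example (sketch only, not in the original header):
+ * request autonegotiation and advertise flow control, with phy_addr
+ * and link_info obtained elsewhere:
+ *
+ *   cvmx_helper_board_link_set_phy(phy_addr,
+ *				    set_phy_link_flags_autoneg |
+ *				    set_phy_link_flags_flow_control_enable,
+ *				    link_info);
+ */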
+
+/**
+ * The EBB6600 board uses a MDIO mux device to select between the two QLM
+ * modules since both QLM modules share the same PHY addresses.  The
+ * MDIO mux is controlled via GPIO by a GPIO device that is also on
+ * the TWSI bus rather than native OCTEON GPIO lines.
+ *
+ * To further complicate matters, the TWSI GPIO device sits behind
+ * a TWSI mux device as well, making accessing the MDIO devices on
+ * this board a very complex operation involving writing to the TWSI mux,
+ * followed by the MDIO mux device.
+ */
+/** Maximum number of GPIO devices used to control the MDIO mux */
+#define CVMX_PHY_MUX_MAX_GPIO 2
+
+/** Type of MDIO mux device, currently OTHER isn't supported */
+typedef enum {
+	SN74CBTLV3253, /** SN74CBTLV3253 I2C device */
+	OTHER	       /** Unknown/other */
+} cvmx_phy_mux_type_t;
+
+/** Type of GPIO line controlling MDIO mux */
+typedef enum {
+	GPIO_OCTEON, /** Native OCTEON */
+	GPIO_PCA8574 /** TWSI mux device */
+} cvmx_phy_gpio_type_t;
+
+/* Forward declarations */
+struct cvmx_fdt_sfp_info; /** Defined in cvmx-helper-fdt.h */
+struct cvmx_vsc7224;
+struct cvmx_fdt_gpio_info;    /** Defined in cvmx-helper-fdt.h */
+struct cvmx_fdt_i2c_bus_info; /** Defined in cvmx-helper-fdt.h */
+struct cvmx_phy_info;
+struct cvmx_fdt_i2c_bus_info;
+struct cvmx_fdt_gpio_info;
+struct cvmx_fdt_gpio_led;
+
+/**
+ * @INTERNAL
+ * This data structure is used when the port LEDs are wired up to Octeon's GPIO
+ * lines instead of to a traditional PHY.
+ */
+struct cvmx_phy_gpio_leds {
+	struct cvmx_phy_gpio_leds *next; /** For when ports are grouped together */
+	u64 last_rx_count;		 /** Counters used to check for activity */
+	u64 last_tx_count;		 /** Counters used to check for activity */
+	u64 last_activity_poll_time;	 /** Last time activity was polled */
+	u64 last_link_poll_time;	 /** Last time link was polled */
+	int of_offset;
+	int link_poll_interval_ms;     /** Link polling interval in ms */
+	int activity_poll_interval_ms; /** Activity polling interval in ms */
+	struct cvmx_fdt_gpio_led *link_status;
+	struct cvmx_fdt_gpio_led *error;
+	struct cvmx_fdt_gpio_led *rx_activity;
+	struct cvmx_fdt_gpio_led *tx_activity;
+	struct cvmx_fdt_gpio_led *identify;
+
+	struct cvmx_fdt_gpio_info *link_status_gpio;
+	struct cvmx_fdt_gpio_info *error_gpio;
+	/** Type of GPIO for error LED */
+	/** If GPIO expander, describe the bus to the expander */
+	struct cvmx_fdt_gpio_info *rx_activity_gpio;
+	struct cvmx_fdt_gpio_info *tx_activity_gpio;
+
+	u16 rx_activity_hz; /** RX activity blink time in hz */
+	u16 tx_activity_hz; /** TX activity blink time in hz */
+	/**
+	 * Set if activity and/or link is using an Inphi/Cortina CS4343 or
+	 * compatible phy that requires software assistance.  NULL if not used.
+	 */
+	bool link_status_active_low;  /** True if active link is active low */
+	bool error_status_active_low; /** True if error LED is active low */
+	bool error_active_low;	      /** True if error is active low */
+	bool rx_activity_active_low;  /** True if rx activity is active low */
+	bool tx_activity_active_low;  /** True if tx activity is active low */
+	/** Set true if LEDs are shared on an interface by all ports */
+	bool interface_leds;
+	int8_t rx_gpio_timer; /** GPIO clock generator timer [0-3] */
+	int8_t tx_gpio_timer; /** GPIO clock generator timer [0-3] */
+
+	/** True if LOS signal activates error LED */
+	bool los_generate_error;
+	/** True if the error LED is hooked up to a GPIO expander */
+	bool error_gpio_expander;
+	/** True if the link and RX activity LEDs are shared */
+	bool link_and_rx_activity_shared;
+	/** True if the link and TX activity LEDs are shared */
+	bool link_and_tx_activity_shared;
+	/** True if the RX activity and TX activity LEDs are shared */
+	bool rx_and_tx_activity_shared;
+	/** True if link is driven directly by the hardware */
+	bool link_led_hw_link;
+	bool error_lit;	    /** True if ERROR LED is lit */
+	bool quad_sfp_mode; /** True if using four SFPs for XLAUI */
+	/** User-defined function to update the link LED */
+	void (*update_link_led)(int xiface, int index, cvmx_helper_link_info_t result);
+	/** User-defined function to update the rx activity LED */
+	void (*update_rx_activity_led)(struct cvmx_phy_gpio_leds *led, int xiface, int index,
+				       bool check_time);
+};
+
+/** This structure contains the tap values to use for various cable lengths */
+struct cvmx_vsc7224_tap {
+	u16 len;      /** Starting cable length for tap values */
+	u16 main_tap; /** Main tap value to use */
+	u16 pre_tap;  /** Pre-tap value to use */
+	u16 post_tap; /** Post-tap value to use */
+};
+
+/** Data structure for Microsemi VSC7224 channel */
+struct cvmx_vsc7224_chan {
+	struct cvmx_vsc7224_chan *next, *prev; /** Used for linking */
+	int ipd_port;			       /** IPD port this channel belongs to */
+	int xiface;			       /** xinterface of SFP */
+	int index;			       /** Port index of SFP */
+	int lane;			       /** Lane on port */
+	int of_offset;			       /** Offset of channel info in dt */
+	bool is_tx;			       /** True if is transmit channel */
+	bool maintap_disable;		       /** True to disable the main tap */
+	bool pretap_disable;		       /** True to disable pre-tap */
+	bool posttap_disable;		       /** True to disable post-tap */
+	int num_taps;			       /** Number of tap values */
+	/** (Q)SFP attached to this channel */
+	struct cvmx_fdt_sfp_info *sfp_info;
+	struct cvmx_vsc7224 *vsc7224; /** Pointer to parent */
+	/** Tap values for various lengths, must be at the end */
+	struct cvmx_vsc7224_tap taps[0];
+};
+
+/** Data structure for Microsemi VSC7224 reclocking chip */
+struct cvmx_vsc7224 {
+	const char *name; /** Name */
+	/** Pointer to channel data */
+	struct cvmx_vsc7224_chan *channel[4];
+	/** I2C bus device is connected to */
+	struct cvmx_fdt_i2c_bus_info *i2c_bus;
+	/** Address of VSC7224 on i2c bus */
+	int i2c_addr;
+	struct cvmx_fdt_gpio_info *los_gpio;   /** LoS GPIO pin */
+	struct cvmx_fdt_gpio_info *reset_gpio; /** Reset GPIO pin */
+	int of_offset;			       /** Offset in device tree */
+};
+
+/** Data structure for Avago AVSP5410 gearbox phy */
+struct cvmx_avsp5410 {
+	const char *name; /** Name */
+	/** I2C bus device is connected to */
+	struct cvmx_fdt_i2c_bus_info *i2c_bus;
+	/** Address of AVSP5410 on i2c bus */
+	int i2c_addr;
+	int of_offset;	    /** Offset in device tree */
+	int ipd_port;	    /** IPD port this phy belongs to */
+	int xiface;	    /** xinterface of SFP */
+	int index;	    /** Port index of SFP */
+	u64 prev_temp;	    /** Previous temperature recorded on Phy Core */
+	u64 prev_temp_mins; /** Minutes when the previous temp check was done */
+	/** (Q)SFP attached to this phy */
+	struct cvmx_fdt_sfp_info *sfp_info;
+};
+
+struct cvmx_cs4343_info;
+
+/**
+ * @INTERNAL
+ *
+ * Data structure containing Inphi CS4343 slice information
+ */
+struct cvmx_cs4343_slice_info {
+	const char *name;	       /** Name of this slice in device tree */
+	struct cvmx_cs4343_info *mphy; /** Pointer to mphy cs4343 */
+	struct cvmx_phy_info *phy_info;
+	int of_offset;		      /** Offset in device tree */
+	int slice_no;		      /** Slice number */
+	int reg_offset;		      /** Offset for this slice */
+	u16 sr_stx_cmode_res;	      /** See Rainier device tree */
+	u16 sr_stx_drv_lower_cm;      /** See Rainier device tree */
+	u16 sr_stx_level;	      /** See Rainier device tree */
+	u16 sr_stx_pre_peak;	      /** See Rainier device tree */
+	u16 sr_stx_muxsubrate_sel;    /** See Rainier device tree */
+	u16 sr_stx_post_peak;	      /** See Rainier device tree */
+	u16 cx_stx_cmode_res;	      /** See Rainier device tree */
+	u16 cx_stx_drv_lower_cm;      /** See Rainier device tree */
+	u16 cx_stx_level;	      /** See Rainier device tree */
+	u16 cx_stx_pre_peak;	      /** See Rainier device tree */
+	u16 cx_stx_muxsubrate_sel;    /** See Rainier device tree */
+	u16 cx_stx_post_peak;	      /** See Rainier device tree */
+	u16 basex_stx_cmode_res;      /** See Rainier device tree */
+	u16 basex_stx_drv_lower_cm;   /** See Rainier device tree */
+	u16 basex_stx_level;	      /** See Rainier device tree */
+	u16 basex_stx_pre_peak;	      /** See Rainier device tree */
+	u16 basex_stx_muxsubrate_sel; /** See Rainier device tree */
+	u16 basex_stx_post_peak;      /** See Rainier device tree */
+	int link_gpio;		      /** Link LED gpio pin number, -1 if none */
+	int error_gpio;		      /** Error LED GPIO pin or -1 if none */
+	int los_gpio;		      /** LoS input GPIO or -1 if none */
+	bool los_inverted;	      /** True if LoS input is inverted */
+	bool link_inverted;	      /** True if link output is inverted */
+	bool error_inverted;	      /** True if error output is inverted */
+};
+
+/**
+ * @INTERNAL
+ *
+ * Data structure for Cortina/Inphi CS4343 device
+ */
+struct cvmx_cs4343_info {
+	const char *name; /** Name of Inphi/Cortina CS4343 in DT */
+	struct cvmx_phy_info *phy_info;
+	struct cvmx_cs4343_slice_info slice[4]; /** Slice information */
+	int of_offset;
+};
+
+/**
+ * @INTERNAL
+ * This data structure is used to hold PHY information and is subject to change.
+ * Please do  not use this data structure directly.
+ *
+ * NOTE: The U-Boot OCTEON Ethernet drivers depends on this data structure for
+ * the mux support.
+ */
+typedef struct cvmx_phy_info {
+	int phy_addr;	  /** MDIO address of PHY */
+	int phy_sub_addr; /** Sub-address (i.e. slice), used by Cortina */
+	int ipd_port;	  /** IPD port number for the PHY */
+	/** MDIO bus PHY connected to (even if behind mux) */
+	int mdio_unit;
+	int direct_connect;		 /** 1 if PHY is directly connected */
+	int gpio[CVMX_PHY_MUX_MAX_GPIO]; /** GPIOs used to control mux, -1 if not used */
+
+	/** Type of GPIO.  It can be a local OCTEON GPIO or a TWSI GPIO */
+	cvmx_phy_gpio_type_t gpio_type[CVMX_PHY_MUX_MAX_GPIO];
+
+	/** Address of TWSI GPIO */
+	int cvmx_gpio_twsi[CVMX_PHY_MUX_MAX_GPIO];
+
+	/** Value to put into the GPIO lines to select MDIO bus */
+	int gpio_value;
+	int gpio_parent_mux_twsi;	/** -1 if not used, parent TWSI mux for ebb6600 */
+	int gpio_parent_mux_select;	/** selector to use on parent TWSI mux */
+	cvmx_phy_type_t phy_type;	/** Type of PHY */
+	cvmx_phy_mux_type_t mux_type;	/** Type of MDIO mux */
+	int mux_twsi_addr;		/** Address of the MDIO mux */
+	cvmx_phy_host_mode_t host_mode; /** Used by Cortina PHY */
+	void *phydev;			/** Pointer to parent phy device */
+	int fdt_offset;			/** Node in flat device tree */
+	int phy_i2c_bus;		/** I2C bus for reclocking chips */
+	int phy_i2c_addr;		/** I2C address of reclocking chip */
+	int num_vsc7224;		/** Number of Microsemi VSC7224 devices */
+	struct cvmx_vsc7224 *vsc7224;	/** Info for VSC7224 devices */
+	/** SFP/QSFP descriptor */
+	struct cvmx_fdt_sfp_info *sfp_info;
+	/** CS4343 slice information for SGMII/XFI.  This is NULL in XLAUI mode */
+	struct cvmx_cs4343_slice_info *cs4343_slice_info;
+	/** CS4343 mphy information for XLAUI */
+	struct cvmx_cs4343_info *cs4343_info;
+	/** Pointer to function to return link information */
+	cvmx_helper_link_info_t (*link_function)(struct cvmx_phy_info *phy_info);
+	/**
+	 * If there are LEDs driven by GPIO lines instead of by a PHY device
+	 * then they are described here, otherwise gpio_leds should be NULL.
+	 */
+	struct cvmx_phy_gpio_leds *gpio_leds;
+} cvmx_phy_info_t;
+
+/* Fake IPD port, the RGMII/MII interface may use a different PHY; use this
+   macro to return the appropriate MIX address to read the PHY. */
+#define CVMX_HELPER_BOARD_MGMT_IPD_PORT -10
+
+/**
+ * Return the MII PHY address associated with the given IPD
+ * port. A result of -1 means there isn't a MII capable PHY
+ * connected to this port. On chips supporting multiple MII
+ * busses the bus number is encoded in bits <15:8>.
+ *
+ * This function must be modified for every new Octeon board.
+ * Internally it uses switch statements based on the cvmx_sysinfo
+ * data to determine board types and revisions. It relies on the
+ * fact that every Octeon board receives a unique board type
+ * enumeration from the bootloader.
+ *
+ * @param ipd_port Octeon IPD port to get the MII address for.
+ *
+ * @return MII PHY address and bus number or -1.
+ */
+int cvmx_helper_board_get_mii_address(int ipd_port);
+
+/**
+ * This function is a board-specific method of changing the PHY
+ * speed, duplex, and autonegotiation. It programs the PHY and
+ * not Octeon. It can be used to force Octeon's links to
+ * specific settings.
+ *
+ * @param phy_addr  The address of the PHY to program
+ * @param link_flags
+ *                  Flags to control autonegotiation.  Bit 0 is autonegotiation
+ *                  enable/disable to maintain backward compatibility.
+ * @param link_info Link speed to program. If the speed is zero and autonegotiation
+ *                  is enabled, all possible negotiation speeds are advertised.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_board_link_set_phy(int phy_addr,
+				   cvmx_helper_board_set_phy_link_flags_types_t link_flags,
+				   cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * This function is the board specific method of determining an
+ * ethernet port's link speed. Most Octeon boards have Marvell PHYs
+ * and are handled by the fall through case. This function must be
+ * updated for boards that don't have the normal Marvell PHYs.
+ *
+ * This function must be modified for every new Octeon board.
+ * Internally it uses switch statements based on the cvmx_sysinfo
+ * data to determine board types and revisions. It relies on the
+ * fact that every Octeon board receives a unique board type
+ * enumeration from the bootloader.
+ *
+ * @param ipd_port IPD input port associated with the port we want to get link
+ *                 status for.
+ *
+ * @return The port's link status. If the link isn't fully resolved, this must
+ *         return zero.
+ */
+cvmx_helper_link_info_t __cvmx_helper_board_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * This function is called by cvmx_helper_interface_probe() after it
+ * determines the number of ports Octeon can support on a specific
+ * interface. This function is the per board location to override
+ * this value. It is called with the number of ports Octeon might
+ * support and should return the number of actual ports on the
+ * board.
+ *
+ * This function must be modified for every new Octeon board.
+ * Internally it uses switch statements based on the cvmx_sysinfo
+ * data to determine board types and revisions. It relies on the
+ * fact that every Octeon board receives a unique board type
+ * enumeration from the bootloader.
+ *
+ * @param interface Interface to probe
+ * @param supported_ports
+ *                  Number of ports Octeon supports.
+ *
+ * @return Number of ports the actual board supports. Often this will
+ *         simply be "supported_ports".
+ */
+int __cvmx_helper_board_interface_probe(int interface, int supported_ports);
+
+/**
+ * @INTERNAL
+ * Enable packet input/output from the hardware. This function is
+ * called by cvmx_helper_packet_hardware_enable() to
+ * perform board specific initialization. For most boards
+ * nothing is needed.
+ *
+ * @param interface Interface to enable
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_board_hardware_enable(int interface);
+
+/**
+ * @INTERNAL
+ * Gets the clock type used for the USB block based on board type.
+ * Used by the USB code for auto configuration of clock type.
+ *
+ * @return USB clock type enumeration
+ */
+cvmx_helper_board_usb_clock_types_t __cvmx_helper_board_usb_get_clock_type(void);
+
+/**
+ * @INTERNAL
+ * Adjusts the number of available USB ports on Octeon based on board
+ * specifics.
+ *
+ * @param supported_ports expected number of ports based on chip type
+ *
+ * @return number of available usb ports, based on board specifics.
+ *         Return value is supported_ports if function does not
+ *         override.
+ */
+int __cvmx_helper_board_usb_get_num_ports(int supported_ports);
+
+/**
+ * @INTERNAL
+ * Returns if a port is present on an interface
+ *
+ * @param fdt_addr - address of flat device tree
+ * @param ipd_port - IPD port number
+ *
+ * @return 1 if port is present, 0 if not present, -1 if error
+ */
+int __cvmx_helper_board_get_port_from_dt(void *fdt_addr, int ipd_port);
+
+/**
+ * Return the host mode for the PHY.  Currently only the Cortina CS4321 PHY
+ * needs this.
+ *
+ * @param ipd_port - ipd port number to get the host mode for
+ *
+ * @return host mode for phy
+ */
+cvmx_phy_host_mode_t cvmx_helper_board_get_phy_host_mode(int ipd_port);
+
+/**
+ * @INTERNAL
+ * This function outputs the cvmx_phy_info_t data structure for the specified
+ * port.
+ *
+ * @param[out] phy_info - phy info data structure
+ * @param ipd_port - port to get phy info for
+ *
+ * @return 0 for success, -1 if info not available
+ *
+ * NOTE: The phy_info data structure is subject to change.
+ */
+int cvmx_helper_board_get_phy_info(cvmx_phy_info_t *phy_info, int ipd_port);
+
+/**
+ * @INTERNAL
+ * Parse the device tree and set whether a port is valid or not.
+ *
+ * @param fdt_addr	Pointer to device tree
+ *
+ * @return 0 for success, -1 on error.
+ */
+int __cvmx_helper_parse_bgx_dt(const void *fdt_addr);
+
+/**
+ * @INTERNAL
+ * Parse the device tree and set whether a port is valid or not.
+ *
+ * @param fdt_addr	Pointer to device tree
+ *
+ * @return 0 for success, -1 on error.
+ */
+int __cvmx_helper_parse_bgx_rgmii_dt(const void *fdt_addr);
+
+/**
+ * @INTERNAL
+ * Updates any GPIO link LEDs if present
+ *
+ * @param xiface	Interface number
+ * @param index		Port index
+ * @param result	Link status result
+ */
+void cvmx_helper_update_link_led(int xiface, int index, cvmx_helper_link_info_t result);
+/**
+ * Update the RX activity LED for the specified interface and port index
+ *
+ * @param xiface	Interface number
+ * @param index		Port index
+ * @param check_time	True if we should bail out before the polling interval
+ */
+void cvmx_update_rx_activity_led(int xiface, int index, bool check_time);
+
+/**
+ * @INTERNAL
+ * Figure out which mod_abs changed function to use based on the phy type
+ *
+ * @param	xiface	xinterface number
+ * @param	index	port index on interface
+ *
+ * @return	0 for success, -1 on error
+ *
+ * This function figures out the proper mod_abs_changed function to use and
+ * registers the appropriate function.  This should be called after the device
+ * tree has been fully parsed for the given port as well as after all SFP
+ * slots and any Microsemi VSC7224 devices have been parsed in the device tree.
+ */
+int cvmx_helper_phy_register_mod_abs_changed(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Return loss of signal
+ *
+ * @param	xiface	xinterface number
+ * @param	index	port index on interface
+ *
+ * @return	0 if signal present, 1 if loss of signal.
+ *
+ * @NOTE:	A result of 0 is possible in some cases where the signal is
+ *		not present.
+ *
+ * This is for use with __cvmx_qlm_rx_equilization
+ */
+int __cvmx_helper_get_los(int xiface, int index);
+
+/**
+ * Given the address of the MDIO registers, output the CPU node and MDIO bus
+ *
+ * @param	addr	64-bit address of MDIO registers (from device tree)
+ * @param[out]	node	CPU node number (78xx)
+ * @param[out]	bus	MDIO bus number
+ */
+void __cvmx_mdio_addr_to_node_bus(u64 addr, int *node, int *bus);
+
+/**
+ * Turn on the error LED
+ *
+ * @param	leds	LEDs belonging to port
+ * @param	error	true to turn on LED, false to turn off
+ */
+void cvmx_helper_leds_show_error(struct cvmx_phy_gpio_leds *leds, bool error);
+
+#endif /* __CVMX_HELPER_BOARD_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-cfg.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-cfg.h
new file mode 100644
index 0000000000..d4bd910b01
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-cfg.h
@@ -0,0 +1,884 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper Functions for the Configuration Framework
+ *
+ * OCTEON_CN68XX introduces a flexible hw interface configuration
+ * scheme. To cope with this change and the requirements of
+ * configurability for other system resources, e.g., IPD/PIP pknd and
+ * PKO ports and queues, a configuration framework for the SDK is
+ * designed. It has two goals: first to recognize and establish the
+ * default configuration and, second, to allow the user to define key
+ * parameters in a high-level language.
+ *
+ * The helper functions query the QLM setup to help achieve the
+ * first goal.
+ *
+ * The second goal is accomplished by generating
+ * cvmx_helper_cfg_init() from a high-level language.
+ */
+
+#ifndef __CVMX_HELPER_CFG_H__
+#define __CVMX_HELPER_CFG_H__
+
+#include "cvmx-helper-util.h"
+
+#define CVMX_HELPER_CFG_MAX_PKO_PORT	   128
+#define CVMX_HELPER_CFG_MAX_PIP_BPID	   64
+#define CVMX_HELPER_CFG_MAX_PIP_PKND	   64
+#define CVMX_HELPER_CFG_MAX_PKO_QUEUES	   256
+#define CVMX_HELPER_CFG_MAX_PORT_PER_IFACE 256
+
+#define CVMX_HELPER_CFG_INVALID_VALUE -1
+
+#define cvmx_helper_cfg_assert(cond)                                                               \
+	do {                                                                                       \
+		if (!(cond)) {                                                                     \
+			debug("cvmx_helper_cfg_assert (%s) at %s:%d\n", #cond, __FILE__,           \
+			      __LINE__);                                                           \
+		}                                                                                  \
+	} while (0)
+
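+/*
+ * Usage sketch (illustrative, not part of the original sources): the assert
+ * macro only logs through debug() rather than halting, so it is typically
+ * used to flag configuration-table overruns during bring-up:
+ *
+ *   cvmx_helper_cfg_assert(interface < CVMX_HELPER_MAX_IFACE);
+ *   cvmx_helper_cfg_assert(index < CVMX_HELPER_CFG_MAX_PORT_PER_IFACE);
+ */
+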
+extern int cvmx_npi_max_pknds;
+
+/*
+ * Config Options
+ *
+ * These options have to be set via cvmx_helper_cfg_opt_set() before calling the
+ * routines that set up the hw. These routines process the options and set them
+ * correctly to take effect at runtime.
+ */
+enum cvmx_helper_cfg_option {
+	CVMX_HELPER_CFG_OPT_USE_DWB, /*
+					 * Global option to control if
+					 * the SDK configures units (DMA,
+					 * SSO, and PKO) to send don't
+					 * write back (DWB) requests for
+					 * freed buffers. Set to 1/0 to
+					 * enable/disable DWB.
+					 *
+					 * For programs that fit inside
+					 * L2, sending DWB just causes
+					 * more L2 operations without
+					 * benefit.
+					 */
+
+	CVMX_HELPER_CFG_OPT_MAX
+};
+
+typedef enum cvmx_helper_cfg_option cvmx_helper_cfg_option_t;
+
+struct cvmx_phy_info;
+struct cvmx_fdt_sfp_info;
+struct cvmx_vsc7224_chan;
+struct phy_device;
+
+struct cvmx_srio_port_param {
+	/** True to override SRIO CTLE zero setting */
+	bool srio_rx_ctle_zero_override : 1;
+	/** Equalization peaking control dft: 6 */
+	u8 srio_rx_ctle_zero : 4;
+	/** Set true to override CTLE taps */
+	bool srio_rx_ctle_agc_override : 1;
+	u8 srio_rx_agc_pre_ctle : 4;	    /** AGC pre-CTLE gain */
+	u8 srio_rx_agc_post_ctle : 4;	    /** AGC post-CTLE gain */
+	bool srio_tx_swing_override : 1;    /** True to override TX Swing */
+	u8 srio_tx_swing : 5;		    /** TX Swing */
+	bool srio_tx_gain_override : 1;	    /** True to override TX gain */
+	u8 srio_tx_gain : 3;		    /** TX gain */
+	bool srio_tx_premptap_override : 1; /** True to override premptap values */
+	u8 srio_tx_premptap_pre : 4;	    /** Pre premptap value */
+	u8 srio_tx_premptap_post : 5;	    /** Post premptap value */
+	bool srio_tx_vboost_override : 1;   /** True to override TX vboost setting */
+	bool srio_tx_vboost : 1;	    /** vboost setting (default 1) */
+};
+
+/*
+ * Per physical port
+ * Note: This struct is passed between Linux and SE apps.
+ */
+struct cvmx_cfg_port_param {
+	int port_fdt_node;		/** Node offset in FDT of node */
+	int phy_fdt_node;		/** Node offset in FDT of PHY */
+	struct cvmx_phy_info *phy_info; /** Data structure with PHY information */
+	int8_t ccpp_pknd;
+	int8_t ccpp_bpid;
+	int8_t ccpp_pko_port_base;
+	int8_t ccpp_pko_num_ports;
+	u8 agl_rx_clk_skew;		  /** AGL rx clock skew setting (default 0) */
+	u8 rgmii_tx_clk_delay;		  /** RGMII TX clock delay value if not bypassed */
+	bool valid : 1;			  /** 1 = port valid, 0 = invalid */
+	bool sgmii_phy_mode : 1;	  /** 1 = port in PHY mode, 0 = MAC mode */
+	bool sgmii_1000x_mode : 1;	  /** 1 = 1000Base-X mode, 0 = SGMII mode */
+	bool agl_rx_clk_delay_bypass : 1; /** 1 = use rx clock delay bypass for AGL mode */
+	bool force_link_up : 1;		  /** Ignore PHY and always report link up */
+	bool disable_an : 1;		  /** true to disable autonegotiation */
+	bool link_down_pwr_dn : 1;	  /** Power PCS off when link is down */
+	bool phy_present : 1;		  /** true if PHY is present */
+	bool tx_clk_delay_bypass : 1;	  /** True to bypass the TX clock delay */
+	bool enable_fec : 1;		  /** True to enable FEC for 10/40G links */
+	/** Settings for short-run SRIO host */
+	struct cvmx_srio_port_param srio_short;
+	/** Settings for long-run SRIO host */
+	struct cvmx_srio_port_param srio_long;
+	u8 agl_refclk_sel; /** RGMII refclk select to use */
+	/** Set if local (non-PHY) LEDs are used */
+	struct cvmx_phy_gpio_leds *gpio_leds;
+	struct cvmx_fdt_sfp_info *sfp_info; /** SFP+/QSFP info for port */
+	/** Offset of SFP/SFP+/QSFP slot in device tree */
+	int sfp_of_offset;
+	/** Microsemi VSC7224 channel info data structure */
+	struct cvmx_vsc7224_chan *vsc7224_chan;
+	/** Avago AVSP-5410 Phy */
+	struct cvmx_avsp5410 *avsp5410;
+	struct phy_device *phydev;
+};
+
+/*
+ * Per pko_port
+ */
+struct cvmx_cfg_pko_port_param {
+	s16 ccppp_queue_base;
+	s16 ccppp_num_queues;
+};
+
+/*
+ * A map from pko_port to
+ *     interface,
+ *     index, and
+ *     pko engine id
+ */
+struct cvmx_cfg_pko_port_map {
+	s16 ccppl_interface;
+	s16 ccppl_index;
+	s16 ccppl_eid;
+};
+
+/*
+ * This is for looking up pko_base_port and pko_nport for ipd_port
+ */
+struct cvmx_cfg_pko_port_pair {
+	int8_t ccppp_base_port;
+	int8_t ccppp_nports;
+};
+
+typedef union cvmx_user_static_pko_queue_config {
+	struct {
+		struct pko_queues_cfg {
+			unsigned queues_per_port : 11, qos_enable : 1, pfc_enable : 1;
+		} pko_cfg_iface[6];
+		struct pko_queues_cfg pko_cfg_loop;
+		struct pko_queues_cfg pko_cfg_npi;
+	} pknd;
+	struct {
+		u8 pko_ports_per_interface[5];
+		u8 pko_queues_per_port_interface[5];
+		u8 pko_queues_per_port_loop;
+		u8 pko_queues_per_port_pci;
+		u8 pko_queues_per_port_srio[4];
+	} non_pknd;
+} cvmx_user_static_pko_queue_config_t;
+
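+/*
+ * Illustrative sketch (not part of the original header; a 0-on-success
+ * return from the accessors declared further below is assumed): adjusting
+ * the static PKO queue setup for node 0 before the helper brings up the
+ * interfaces.
+ *
+ *   cvmx_user_static_pko_queue_config_t cfg;
+ *
+ *   if (cvmx_helper_pko_queue_config_get(0, &cfg) == 0) {
+ *           cfg.non_pknd.pko_queues_per_port_interface[0] = 2;
+ *           cvmx_helper_pko_queue_config_set(0, &cfg);
+ *   }
+ */
+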
+extern cvmx_user_static_pko_queue_config_t __cvmx_pko_queue_static_config[CVMX_MAX_NODES];
+extern struct cvmx_cfg_pko_port_map cvmx_cfg_pko_port_map[CVMX_HELPER_CFG_MAX_PKO_PORT];
+extern struct cvmx_cfg_port_param cvmx_cfg_port[CVMX_MAX_NODES][CVMX_HELPER_MAX_IFACE]
+					       [CVMX_HELPER_CFG_MAX_PORT_PER_IFACE];
+extern struct cvmx_cfg_pko_port_param cvmx_pko_queue_table[];
+extern int cvmx_enable_helper_flag;
+
+/*
+ * @INTERNAL
+ * Return configured pknd for the port
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ * @return the pknd
+ */
+int __cvmx_helper_cfg_pknd(int interface, int index);
+
+/*
+ * @INTERNAL
+ * Return the configured bpid for the port
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ * @return the bpid
+ */
+int __cvmx_helper_cfg_bpid(int interface, int index);
+
+/**
+ * @INTERNAL
+ * Return the configured pko_port base for the port
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ * @return the pko_port base
+ */
+int __cvmx_helper_cfg_pko_port_base(int interface, int index);
+
+/*
+ * @INTERNAL
+ * Return the configured number of pko_ports for the port
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ * @return the number of pko_ports
+ */
+int __cvmx_helper_cfg_pko_port_num(int interface, int index);
+
+/*
+ * @INTERNAL
+ * Return the configured pko_queue base for the pko_port
+ *
+ * @param pko_port
+ * @return the pko_queue base
+ */
+int __cvmx_helper_cfg_pko_queue_base(int pko_port);
+
+/*
+ * @INTERNAL
+ * Return the configured number of pko_queues for the pko_port
+ *
+ * @param pko_port
+ * @return the number of pko_queues
+ */
+int __cvmx_helper_cfg_pko_queue_num(int pko_port);
+
+/*
+ * @INTERNAL
+ * Return the interface the pko_port is configured for
+ *
+ * @param pko_port
+ * @return the interface for the pko_port
+ */
+int __cvmx_helper_cfg_pko_port_interface(int pko_port);
+
+/*
+ * @INTERNAL
+ * Return the index of the port the pko_port is configured for
+ *
+ * @param pko_port
+ * @return the index of the port
+ */
+int __cvmx_helper_cfg_pko_port_index(int pko_port);
+
+/*
+ * @INTERNAL
+ * Return the pko_eid of the pko_port
+ *
+ * @param pko_port
+ * @return the pko_eid
+ */
+int __cvmx_helper_cfg_pko_port_eid(int pko_port);
+
+/*
+ * @INTERNAL
+ * Return the max # of PKO queues allocated.
+ *
+ * @return the max # of PKO queues
+ *
+ * Note: there might be holes in the queue space depending on user
+ * configuration. The function returns the highest queue's index in
+ * use.
+ */
+int __cvmx_helper_cfg_pko_max_queue(void);
+
+/*
+ * @INTERNAL
+ * Return the max # of PKO DMA engines allocated.
+ *
+ * @return the max # of DMA engines
+ *
+ * NOTE: the DMA engines are allocated contiguously and starting from
+ * 0.
+ */
+int __cvmx_helper_cfg_pko_max_engine(void);
+
+/*
+ * Get the value set for the config option ``opt''.
+ *
+ * @param opt is the config option.
+ * @return the value set for the option
+ *
+ * LR: only used for DWB in NPI, POW, PKO1
+ */
+u64 cvmx_helper_cfg_opt_get(cvmx_helper_cfg_option_t opt);
+
+/*
+ * Set the value for a config option.
+ *
+ * @param opt is the config option.
+ * @param val is the value to set for the opt.
+ * @return 0 for success and -1 on error
+ *
+ * Note an option here is a config-time parameter and this means that
+ * it has to be set before calling the corresponding setup functions
+ * that actually sets the option in hw.
+ *
+ * LR: Not used.
+ */
+int cvmx_helper_cfg_opt_set(cvmx_helper_cfg_option_t opt, u64 val);
+
+/*
+ * Retrieve the pko_port base given ipd_port.
+ *
+ * @param ipd_port is the IPD eport
+ * @return the corresponding PKO port base for the physical port
+ * represented by the IPD eport or CVMX_HELPER_CFG_INVALID_VALUE.
+ */
+int cvmx_helper_cfg_ipd2pko_port_base(int ipd_port);
+
+/*
+ * Retrieve the number of pko_ports given ipd_port.
+ *
+ * @param ipd_port is the IPD eport
+ * @return the corresponding number of PKO ports for the physical port
+ *  represented by IPD eport or CVMX_HELPER_CFG_INVALID_VALUE.
+ */
+int cvmx_helper_cfg_ipd2pko_port_num(int ipd_port);
+
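+/*
+ * Usage sketch (illustrative): resolving the PKO port range backing an IPD
+ * port. Both helpers return CVMX_HELPER_CFG_INVALID_VALUE when no mapping
+ * is configured.
+ *
+ *   static int example_last_pko_port(int ipd_port)
+ *   {
+ *           int base = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
+ *           int num = cvmx_helper_cfg_ipd2pko_port_num(ipd_port);
+ *
+ *           if (base == CVMX_HELPER_CFG_INVALID_VALUE ||
+ *               num == CVMX_HELPER_CFG_INVALID_VALUE)
+ *                   return -1;
+ *           return base + num - 1;
+ *   }
+ */
+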
+/*
+ * @INTERNAL
+ * The init function
+ *
+ * @param node
+ * @return 0 for success.
+ *
+ * Note: this function is meant to be called to set the ``configured
+ * parameters,'' e.g., pknd, bpid, etc., and therefore should be called
+ * before any of the corresponding cvmx_helper_cfg_xxxx() functions.
+ */
+int __cvmx_helper_init_port_config_data(int node);
+
+/*
+ * @INTERNAL
+ * The local init function
+ *
+ * @param none
+ * @return 0 for success.
+ *
+ * Note: this function is meant to be called to set the ``configured
+ * parameters locally,'' e.g., pknd, bpid, etc., and therefore should be
+ * called before any of the corresponding cvmx_helper_cfg_xxxx() functions.
+ */
+int __cvmx_helper_init_port_config_data_local(void);
+
+/*
+ * Set the frame max size and jabber size to 65535.
+ */
+void cvmx_helper_cfg_set_jabber_and_frame_max(void);
+
+/*
+ * Enable storing short packets only in the WQE.
+ */
+void cvmx_helper_cfg_store_short_packets_in_wqe(void);
+
+/*
+ * Allocated a block of internal ports and queues for the specified
+ * interface/port
+ *
+ * @param  interface  the interface for which the internal ports and queues
+ *                    are requested
+ * @param  port       the index of the port within the interface for which
+ *                    the internal ports and queues are requested.
+ * @param  port_cnt   the number of internal ports requested
+ * @param  queue_cnt  the number of queues requested for each of the
+ *                    internal ports. This call will allocate a total of
+ *                    (port_cnt * queue_cnt) queues
+ *
+ * @return  0 on success
+ *         -1 on failure
+ *
+ * LR: Called ONLY from config-parse!
+ */
+int cvmx_pko_alloc_iport_and_queues(int interface, int port, int port_cnt, int queue_cnt);
+
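+/*
+ * Illustrative call (counts hypothetical): reserve two internal PKO ports
+ * with four queues each for port 0 of an interface, releasing everything
+ * if the allocation fails.
+ *
+ *   if (cvmx_pko_alloc_iport_and_queues(interface, 0, 2, 4) < 0)
+ *           cvmx_pko_queue_free_all();
+ */
+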
+/*
+ * Free the queues that are associated with the specified port
+ *
+ * @param  port   the internal port for which the queues are freed.
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_queue_free(u64 port);
+
+/*
+ * Initializes the pko queue range data structure.
+ * @return  0 on success
+ *         -1 on failure
+ */
+int init_cvmx_pko_que_range(void);
+
+/*
+ * Frees up all the allocated queues.
+ */
+void cvmx_pko_queue_free_all(void);
+
+/**
+ * Returns if port is valid for a given interface
+ *
+ * @param xiface     interface to check
+ * @param index      port index in the interface
+ *
+ * @return 1 if the port is valid, 0 if not.
+ */
+int cvmx_helper_is_port_valid(int xiface, int index);
+
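+/*
+ * Usage sketch (illustrative; num_ports is assumed to come from the usual
+ * interface query helpers): skipping invalid ports while walking an
+ * interface.
+ *
+ *   for (index = 0; index < num_ports; index++) {
+ *           if (!cvmx_helper_is_port_valid(xiface, index))
+ *                   continue;
+ *           ... configure the port ...
+ *   }
+ */
+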
+/**
+ * Set whether or not a port is valid
+ *
+ * @param interface interface to set
+ * @param index     port index to set
+ * @param valid     set 0 to make port invalid, 1 for valid
+ */
+void cvmx_helper_set_port_valid(int interface, int index, bool valid);
+
+/**
+ * @INTERNAL
+ * Return if port is in PHY mode
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ *
+ * @return 1 if port is in PHY mode, 0 if port is in MAC mode
+ */
+bool cvmx_helper_get_mac_phy_mode(int interface, int index);
+void cvmx_helper_set_mac_phy_mode(int interface, int index, bool valid);
+
+/**
+ * @INTERNAL
+ * Return if port is in 1000Base X mode
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ *
+ * @return 1 if port is in 1000Base X mode, 0 if port is in SGMII mode
+ */
+bool cvmx_helper_get_1000x_mode(int interface, int index);
+void cvmx_helper_set_1000x_mode(int interface, int index, bool valid);
+
+/**
+ * @INTERNAL
+ * Return if an AGL port should bypass the RX clock delay
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ */
+bool cvmx_helper_get_agl_rx_clock_delay_bypass(int interface, int index);
+void cvmx_helper_set_agl_rx_clock_delay_bypass(int interface, int index, bool valid);
+
+/**
+ * @INTERNAL
+ * Force a port's link to always report up, ignoring the PHY (if present)
+ *
+ * @param interface the interface number
+ * @param index the port's index
+ */
+bool cvmx_helper_get_port_force_link_up(int interface, int index);
+void cvmx_helper_set_port_force_link_up(int interface, int index, bool value);
+
+/**
+ * @INTERNAL
+ * Return true if a PHY is present on the passed xiface
+ *
+ * @param xiface the interface number
+ * @param index the port's index
+ */
+bool cvmx_helper_get_port_phy_present(int xiface, int index);
+void cvmx_helper_set_port_phy_present(int xiface, int index, bool value);
+
+/**
+ * @INTERNAL
+ * Return the AGL port rx clock skew, only used
+ * if agl_rx_clock_delay_bypass is set.
+ *
+ * @param interface the interface number
+ * @param index the port's index number
+ */
+u8 cvmx_helper_get_agl_rx_clock_skew(int interface, int index);
+void cvmx_helper_set_agl_rx_clock_skew(int interface, int index, u8 value);
+u8 cvmx_helper_get_agl_refclk_sel(int interface, int index);
+void cvmx_helper_set_agl_refclk_sel(int interface, int index, u8 value);
+
+/**
+ * @INTERNAL
+ * Store the FDT node offset in the device tree of a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param node_offset	node offset to store
+ */
+void cvmx_helper_set_port_fdt_node_offset(int xiface, int index, int node_offset);
+
+/**
+ * @INTERNAL
+ * Return the FDT node offset in the device tree of a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @return		node offset of port or -1 if invalid
+ */
+int cvmx_helper_get_port_fdt_node_offset(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Store the FDT node offset in the device tree of a phy
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param node_offset	node offset to store
+ */
+void cvmx_helper_set_phy_fdt_node_offset(int xiface, int index, int node_offset);
+
+/**
+ * @INTERNAL
+ * Return the FDT node offset in the device tree of a phy
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @return		node offset of phy or -1 if invalid
+ */
+int cvmx_helper_get_phy_fdt_node_offset(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Override default autonegotiation for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param enable	true to enable autonegotiation, false to force full
+ *			duplex, full speed.
+ */
+void cvmx_helper_set_port_autonegotiation(int xiface, int index, bool enable);
+
+/**
+ * @INTERNAL
+ * Returns if autonegotiation is enabled or not.
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return 0 if autonegotiation is disabled, 1 if enabled.
+ */
+bool cvmx_helper_get_port_autonegotiation(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Returns if forward error correction is enabled or not.
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return 0 if fec is disabled, 1 if enabled.
+ */
+bool cvmx_helper_get_port_fec(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Override default forward error correction for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param enable	true to enable fec, false to disable.
+ */
+void cvmx_helper_set_port_fec(int xiface, int index, bool enable);
+
+/**
+ * @INTERNAL
+ * Configure the SRIO RX interface AGC settings in host mode
+ *
+ * @param xiface	node and interface
+ * @param index		lane
+ * @param long_run	true for long run, false for short run
+ * @param ctle_zero_override true to override the RX equalizer peaking control
+ * @param ctle_zero	RX equalizer peaking control (default 0x6)
+ * @param agc_override	true to put AGC in manual mode
+ * @param agc_pre_ctle	AGC pre-CTLE gain (default 0x5)
+ * @param agc_post_ctle	AGC post-CTLE gain (default 0x4)
+ *
+ * NOTE: This must be called before SRIO is initialized to take effect
+ */
+void cvmx_helper_set_srio_rx(int xiface, int index, bool long_run, bool ctle_zero_override,
+			     u8 ctle_zero, bool agc_override, u8 agc_pre_ctle, u8 agc_post_ctle);
+
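+/*
+ * Illustrative call (values are the documented defaults): override only the
+ * CTLE zero setting for a short-run lane, leaving AGC in automatic mode.
+ *
+ *   cvmx_helper_set_srio_rx(xiface, lane, false, true, 0x6,
+ *                           false, 0x5, 0x4);
+ */
+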
+/**
+ * @INTERNAL
+ * Get the SRIO RX interface AGC settings for host mode
+ *
+ * @param xiface	node and interface
+ * @param index		lane
+ * @param long_run	true for long run, false for short run
+ * @param[out] ctle_zero_override true if overridden
+ * @param[out] ctle_zero	RX equalizer peaking control (default 0x6)
+ * @param[out] agc_override	true to put AGC in manual mode
+ * @param[out] agc_pre_ctle	AGC pre-CTLE gain (default 0x5)
+ * @param[out] agc_post_ctle	AGC post-CTLE gain (default 0x4)
+ */
+void cvmx_helper_get_srio_rx(int xiface, int index, bool long_run, bool *ctle_zero_override,
+			     u8 *ctle_zero, bool *agc_override, u8 *agc_pre_ctle,
+			     u8 *agc_post_ctle);
+
+/**
+ * @INTERNAL
+ * Configure the SRIO TX interface for host mode
+ *
+ * @param xiface		node and interface
+ * @param index			lane
+ * @param long_run	true for long run, false for short run
+ * @param tx_swing		tx swing value to use (default 0x7), -1 to not
+ *				override.
+ * @param tx_gain		PCS SDS TX gain (default 0x3), -1 to not
+ *				override
+ * @param tx_premptap_override	true to override preemphasis control
+ * @param tx_premptap_pre	preemphasis pre tap value (default 0x0)
+ * @param tx_premptap_post	preemphasis post tap value (default 0xF)
+ * @param tx_vboost		vboost enable (1 = enable, -1 = don't override)
+ *				hardware default is 1.
+ *
+ * NOTE: This must be called before SRIO is initialized to take effect
+ */
+void cvmx_helper_set_srio_tx(int xiface, int index, bool long_run, int tx_swing, int tx_gain,
+			     bool tx_premptap_override, u8 tx_premptap_pre, u8 tx_premptap_post,
+			     int tx_vboost);
+
+/**
+ * @INTERNAL
+ * Get the SRIO TX interface settings for host mode
+ *
+ * @param xiface			node and interface
+ * @param index				lane
+ * @param long_run			true for long run, false for short run
+ * @param[out] tx_swing_override	true to override pcs_sds_txX_swing
+ * @param[out] tx_swing			tx swing value to use (default 0x7)
+ * @param[out] tx_gain_override		true to override default gain
+ * @param[out] tx_gain			PCS SDS TX gain (default 0x3)
+ * @param[out] tx_premptap_override	true to override preemphasis control
+ * @param[out] tx_premptap_pre		preemphasis pre tap value (default 0x0)
+ * @param[out] tx_premptap_post		preemphasis post tap value (default 0xF)
+ * @param[out] tx_vboost_override	override vboost setting
+ * @param[out] tx_vboost		vboost enable (default true)
+ */
+void cvmx_helper_get_srio_tx(int xiface, int index, bool long_run, bool *tx_swing_override,
+			     u8 *tx_swing, bool *tx_gain_override, u8 *tx_gain,
+			     bool *tx_premptap_override, u8 *tx_premptap_pre, u8 *tx_premptap_post,
+			     bool *tx_vboost_override, bool *tx_vboost);
+
+/**
+ * @INTERNAL
+ * Sets the PHY info data structure
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param[in] phy_info	phy information data structure pointer
+ */
+void cvmx_helper_set_port_phy_info(int xiface, int index, struct cvmx_phy_info *phy_info);
+
+/**
+ * @INTERNAL
+ * Returns the PHY information data structure for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to PHY information data structure or NULL if not set
+ */
+struct cvmx_phy_info *cvmx_helper_get_port_phy_info(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Returns a pointer to the PHY LED configuration (if local GPIOs drive them)
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to the PHY LED information data structure or NULL if not
+ *	   present
+ */
+struct cvmx_phy_gpio_leds *cvmx_helper_get_port_phy_leds(int xiface, int index);
+
+/**
+ * @INTERNAL
+ * Sets a pointer to the PHY LED configuration (if local GPIOs drive them)
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param leds		pointer to led data structure
+ */
+void cvmx_helper_set_port_phy_leds(int xiface, int index, struct cvmx_phy_gpio_leds *leds);
+
+/**
+ * @INTERNAL
+ * Disables RGMII TX clock bypass and sets delay value
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param bypass	Set true to enable the clock bypass, false to
+ *			keep clock and data synchronous.
+ *			Default is false.
+ * @param clk_delay	Delay value to skew TXC from TXD
+ */
+void cvmx_helper_cfg_set_rgmii_tx_clk_delay(int xiface, int index, bool bypass, int clk_delay);
+
+/**
+ * @INTERNAL
+ * Gets RGMII TX clock bypass and delay value
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param[out] bypass	true if the clock bypass is enabled, false if
+ *			clock and data are synchronous (default false)
+ * @param[out] clk_delay Delay value to skew TXC from TXD, default is 0.
+ */
+void cvmx_helper_cfg_get_rgmii_tx_clk_delay(int xiface, int index, bool *bypass, int *clk_delay);
+
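+/*
+ * Usage sketch (illustrative): read back the current RGMII TX clock
+ * settings and re-apply them with the bypass disabled.
+ *
+ *   bool bypass;
+ *   int clk_delay;
+ *
+ *   cvmx_helper_cfg_get_rgmii_tx_clk_delay(xiface, index, &bypass,
+ *                                          &clk_delay);
+ *   cvmx_helper_cfg_set_rgmii_tx_clk_delay(xiface, index, false, clk_delay);
+ */
+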
+/**
+ * @INTERNAL
+ * Retrieve node-specific PKO Queue configuration.
+ *
+ * @param node		OCTEON3 node.
+ * @param[out] cfg	PKO Queue static configuration.
+ */
+int cvmx_helper_pko_queue_config_get(int node, cvmx_user_static_pko_queue_config_t *cfg);
+
+/**
+ * @INTERNAL
+ * Update node-specific PKO Queue configuration.
+ *
+ * @param node		OCTEON3 node.
+ * @param cfg		PKO Queue static configuration.
+ */
+int cvmx_helper_pko_queue_config_set(int node, cvmx_user_static_pko_queue_config_t *cfg);
+
+/**
+ * @INTERNAL
+ * Retrieve the SFP node offset in the device tree
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return offset in device tree or -1 if error or not defined.
+ */
+int cvmx_helper_cfg_get_sfp_fdt_offset(int xiface, int index);
+
+/**
+ * Search for a port based on its FDT node offset
+ *
+ * @param	of_offset	Node offset of port to search for
+ *
+ * @return	ipd_port or -1 if not found
+ */
+int cvmx_helper_cfg_get_ipd_port_by_fdt_node_offset(int of_offset);
+
+/**
+ * @INTERNAL
+ * Sets the SFP node offset
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param sfp_of_offset	Offset of SFP node in device tree
+ */
+void cvmx_helper_cfg_set_sfp_fdt_offset(int xiface, int index, int sfp_of_offset);
+
+/**
+ * Search for a port based on its FDT node offset
+ *
+ * @param	of_offset	Node offset of port to search for
+ * @param[out]	xiface		xinterface of match
+ * @param[out]	index		port index of match
+ *
+ * @return	0 if found, -1 if not found
+ */
+int cvmx_helper_cfg_get_xiface_index_by_fdt_node_offset(int of_offset, int *xiface, int *index);
+
+/**
+ * Get data structure defining the Microsemi VSC7224 channel info
+ * or NULL if not present
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to vsc7224 data structure or NULL if not present
+ */
+struct cvmx_vsc7224_chan *cvmx_helper_cfg_get_vsc7224_chan_info(int xiface, int index);
+
+/**
+ * Sets the Microsemi VSC7224 channel data structure
+ *
+ * @param	xiface	node and interface
+ * @param	index	port index
+ * @param[in]	vsc7224_chan_info	Microsemi VSC7224 channel data structure
+ */
+void cvmx_helper_cfg_set_vsc7224_chan_info(int xiface, int index,
+					   struct cvmx_vsc7224_chan *vsc7224_chan_info);
+
+/**
+ * Get data structure defining the Avago AVSP5410 phy info
+ * or NULL if not present
+ *
+ * @param xiface        node and interface
+ * @param index         port index
+ *
+ * @return pointer to avsp5410 data structure or NULL if not present
+ */
+struct cvmx_avsp5410 *cvmx_helper_cfg_get_avsp5410_info(int xiface, int index);
+
+/**
+ * Sets the Avago AVSP5410 phy info data structure
+ *
+ * @param       xiface  node and interface
+ * @param       index   port index
+ * @param[in]   avsp5410_info   Avago AVSP5410 data structure
+ */
+void cvmx_helper_cfg_set_avsp5410_info(int xiface, int index, struct cvmx_avsp5410 *avsp5410_info);
+
+/**
+ * Gets the SFP data associated with a port
+ *
+ * @param	xiface	node and interface
+ * @param	index	port index
+ *
+ * @return	pointer to SFP data structure or NULL if none
+ */
+struct cvmx_fdt_sfp_info *cvmx_helper_cfg_get_sfp_info(int xiface, int index);
+
+/**
+ * Sets the SFP data associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ * @param[in]	sfp_info	port SFP data or NULL for none
+ */
+void cvmx_helper_cfg_set_sfp_info(int xiface, int index, struct cvmx_fdt_sfp_info *sfp_info);
+
+/*
+ * Initializes cvmx with user specified config info.
+ */
+int cvmx_user_static_config(void);
+void cvmx_pko_queue_show(void);
+int cvmx_fpa_pool_init_from_cvmx_config(void);
+int __cvmx_helper_init_port_valid(void);
+
+/**
+ * Returns a pointer to the phy device associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ *
+ * @return	pointer to phy device or NULL if none
+ */
+struct phy_device *cvmx_helper_cfg_get_phy_device(int xiface, int index);
+
+/**
+ * Sets the phy device associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ * @param[in]	phydev		phy device to associate
+ */
+void cvmx_helper_cfg_set_phy_device(int xiface, int index, struct phy_device *phydev);
+
+#endif /* __CVMX_HELPER_CFG_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-errata.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-errata.h
new file mode 100644
index 0000000000..9ed13c1626
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-errata.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Fixes and workarounds for Octeon chip errata. This file
+ * contains functions called by cvmx-helper to work around known
+ * chip errata. For the most part, code doesn't need to call
+ * these functions directly.
+ */
+
+#ifndef __CVMX_HELPER_ERRATA_H__
+#define __CVMX_HELPER_ERRATA_H__
+
+#include "cvmx-wqe.h"
+
+/**
+ * @INTERNAL
+ * Function to adjust internal IPD pointer alignments
+ *
+ * @return 0 on success
+ *         !0 on failure
+ */
+int __cvmx_helper_errata_fix_ipd_ptr_alignment(void);
+
+/**
+ * This function needs to be called on all Octeon chips with
+ * errata PKI-100.
+ *
+ * The Size field is 8 too large in WQE and next pointers
+ *
+ *  The Size field generated by IPD is 8 larger than it should
+ *  be. The Size field is <55:40> of both:
+ *      - WORD3 in the work queue entry, and
+ *      - the next buffer pointer (which precedes the packet data
+ *        in each buffer).
+ *
+ * @param work   Work queue entry to fix
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_helper_fix_ipd_packet_chain(cvmx_wqe_t *work);
+
+/**
+ * Due to errata G-720, the 2nd order CDR circuit on CN52XX pass
+ * 1 doesn't work properly. The following code disables 2nd order
+ * CDR for the specified QLM.
+ *
+ * @param qlm    QLM to disable 2nd order CDR for.
+ */
+void __cvmx_helper_errata_qlm_disable_2nd_order_cdr(int qlm);
+#endif /* __CVMX_HELPER_ERRATA_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-fdt.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-fdt.h
new file mode 100644
index 0000000000..d3809aec29
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-fdt.h
@@ -0,0 +1,568 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * FDT Helper functions similar to those provided by U-Boot.
+ * If compiled for U-Boot, just provide wrappers to the equivalent U-Boot
+ * functions.
+ */
+
+#ifndef __CVMX_HELPER_FDT_H__
+#define __CVMX_HELPER_FDT_H__
+
+#include <fdt_support.h>
+#include <fdtdec.h>
+#include <time.h>
+#include <asm/global_data.h>
+#include <linux/libfdt.h>
+
+#include <mach/cvmx-helper-sfp.h>
+
+enum cvmx_i2c_bus_type {
+	CVMX_I2C_BUS_OCTEON,
+	CVMX_I2C_MUX_PCA9540,
+	CVMX_I2C_MUX_PCA9542,
+	CVMX_I2C_MUX_PCA9543,
+	CVMX_I2C_MUX_PCA9544,
+	CVMX_I2C_MUX_PCA9545,
+	CVMX_I2C_MUX_PCA9546,
+	CVMX_I2C_MUX_PCA9547,
+	CVMX_I2C_MUX_PCA9548,
+	CVMX_I2C_MUX_OTHER
+};
+
+struct cvmx_sfp_mod_info; /** Defined in cvmx-helper-sfp.h */
+struct cvmx_phy_info;	  /** Defined in cvmx-helper-board.h */
+
+/**
+ * This data structure holds information about various I2C muxes and switches
+ * that may be between a device and the Octeon chip.
+ */
+struct cvmx_fdt_i2c_bus_info {
+	/** Parent I2C bus, NULL if root */
+	struct cvmx_fdt_i2c_bus_info *parent;
+	/** Child I2C bus or NULL if last entry in the chain */
+	struct cvmx_fdt_i2c_bus_info *child;
+	/** Offset in device tree */
+	int of_offset;
+	/** Type of i2c bus or mux */
+	enum cvmx_i2c_bus_type type;
+	/** I2C address of mux */
+	u8 i2c_addr;
+	/** Mux channel number */
+	u8 channel;
+	/** For muxes, the bit(s) to set to enable them */
+	u8 enable_bit;
+	/** True if mux, false if switch */
+	bool is_mux;
+};
+
+/**
+ * Data structure containing information about SFP/QSFP slots
+ */
+struct cvmx_fdt_sfp_info {
+	/** Used for a linked list of slots */
+	struct cvmx_fdt_sfp_info *next, *prev;
+	/** Used when multiple SFP ports share the same IPD port */
+	struct cvmx_fdt_sfp_info *next_iface_sfp;
+	/** Name from device tree of slot */
+	const char *name;
+	/** I2C bus for slot EEPROM */
+	struct cvmx_fdt_i2c_bus_info *i2c_bus;
+	/** Data from SFP or QSFP EEPROM */
+	struct cvmx_sfp_mod_info sfp_info;
+	/** Data structure with PHY information */
+	struct cvmx_phy_info *phy_info;
+	/** IPD port(s) slot is connected to */
+	int ipd_port[4];
+	/** Offset in device tree of slot */
+	int of_offset;
+	/** EEPROM address of SFP module (usually 0x50) */
+	u8 i2c_eeprom_addr;
+	/** Diagnostic address of SFP module (usually 0x51) */
+	u8 i2c_diag_addr;
+	/** True if QSFP module */
+	bool is_qsfp;
+	/** True if EEPROM data is valid */
+	bool valid;
+	/** SFP tx_disable GPIO descriptor */
+	struct cvmx_fdt_gpio_info *tx_disable;
+	/** SFP mod_abs/QSFP mod_prs GPIO descriptor */
+	struct cvmx_fdt_gpio_info *mod_abs;
+	/** SFP tx_error GPIO descriptor */
+	struct cvmx_fdt_gpio_info *tx_error;
+	/** SFP rx_los GPIO descriptor */
+	struct cvmx_fdt_gpio_info *rx_los;
+	/** QSFP select GPIO descriptor */
+	struct cvmx_fdt_gpio_info *select;
+	/** QSFP reset GPIO descriptor */
+	struct cvmx_fdt_gpio_info *reset;
+	/** QSFP interrupt GPIO descriptor */
+	struct cvmx_fdt_gpio_info *interrupt;
+	/** QSFP lp_mode GPIO descriptor */
+	struct cvmx_fdt_gpio_info *lp_mode;
+	/** Last mod_abs value */
+	int last_mod_abs;
+	/** Last rx_los value */
+	int last_rx_los;
+	/** Function to call to check mod_abs */
+	int (*check_mod_abs)(struct cvmx_fdt_sfp_info *sfp_info, void *data);
+	/** User-defined data to pass to check_mod_abs */
+	void *mod_abs_data;
+	/** Function to call when mod_abs changes */
+	int (*mod_abs_changed)(struct cvmx_fdt_sfp_info *sfp_info, int val, void *data);
+	/** User-defined data to pass to mod_abs_changed */
+	void *mod_abs_changed_data;
+	/** Function to call when rx_los changes */
+	int (*rx_los_changed)(struct cvmx_fdt_sfp_info *sfp_info, int val, void *data);
+	/** User-defined data to pass to rx_los_changed */
+	void *rx_los_changed_data;
+	/** True if we're connected to a Microsemi VSC7224 reclocking chip */
+	bool is_vsc7224;
+	/** Data structure for first vsc7224 channel we're attached to */
+	struct cvmx_vsc7224_chan *vsc7224_chan;
+	/** True if we're connected to an Avago AVSP5410 phy */
+	bool is_avsp5410;
+	/** Data structure for avsp5410 phy we're attached to */
+	struct cvmx_avsp5410 *avsp5410;
+	/** xinterface we're on */
+	int xiface;
+	/** port index */
+	int index;
+};
+
+/**
+ * Look up a phandle and follow it to its node then return the offset of that
+ * node.
+ *
+ * @param[in]	fdt_addr	pointer to FDT blob
+ * @param	node		node to read phandle from
+ * @param[in]	prop_name	name of property to find
+ * @param[in,out] lenp		On input, max number of nodes; on output,
+ *				number of phandles found
+ * @param[out]	nodes		Array of phandle nodes
+ *
+ * @return	-ve error code on error or 0 for success
+ */
+int cvmx_fdt_lookup_phandles(const void *fdt_addr, int node, const char *prop_name, int *lenp,
+			     int *nodes);
+
+/**
+ * Helper to return the address property
+ *
+ * @param[in] fdt_addr	pointer to FDT blob
+ * @param node		node to read address from
+ * @param prop_name	property name to read
+ *
+ * @return address of property or FDT_ADDR_T_NONE if not found
+ */
+static inline fdt_addr_t cvmx_fdt_get_addr(const void *fdt_addr, int node, const char *prop_name)
+{
+	return fdtdec_get_addr(fdt_addr, node, prop_name);
+}
+
+/**
+ * Helper function to return an integer property
+ *
+ * @param[in] fdt_addr	pointer to FDT blob
+ * @param node		node to read integer from
+ * @param[in] prop_name	property name to read
+ * @param default_val	default value to return if property doesn't exist
+ *
+ * @return	integer value of property or default_val if it doesn't exist.
+ */
+static inline int cvmx_fdt_get_int(const void *fdt_addr, int node, const char *prop_name,
+				   int default_val)
+{
+	return fdtdec_get_int(fdt_addr, node, prop_name, default_val);
+}
+
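+/*
+ * Usage sketch (illustrative; the property name is hypothetical): reading
+ * an optional integer property with a fallback default.
+ *
+ *   int max_speed = cvmx_fdt_get_int(fdt_addr, node, "max-speed", 1000);
+ */
+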
+static inline bool cvmx_fdt_get_bool(const void *fdt_addr, int node, const char *prop_name)
+{
+	return fdtdec_get_bool(fdt_addr, node, prop_name);
+}
+
+static inline u64 cvmx_fdt_get_uint64(const void *fdt_addr, int node, const char *prop_name,
+				      u64 default_val)
+{
+	return fdtdec_get_uint64(fdt_addr, node, prop_name, default_val);
+}
+
+/**
+ * Look up a phandle and follow it to its node then return the offset of that
+ * node.
+ *
+ * @param[in] fdt_addr	pointer to FDT blob
+ * @param node		node to read phandle from
+ * @param[in] prop_name	name of property to find
+ *
+ * @return	node offset if found, -ve error code on error
+ */
+static inline int cvmx_fdt_lookup_phandle(const void *fdt_addr, int node, const char *prop_name)
+{
+	return fdtdec_lookup_phandle(fdt_addr, node, prop_name);
+}
+
+/**
+ * Translate an address from the device tree into a CPU physical address by
+ * walking up the device tree and applying bus mappings along the way.
+ *
+ * This uses #size-cells and #address-cells.
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	node		node to start translating from
+ * @param[in]	in_addr		Address to translate
+ *				NOTE: in_addr must be in the native ENDIAN
+ *				format.
+ *
+ * @return	Translated address or FDT_ADDR_T_NONE if address cannot be
+ *		translated.
+ */
+static inline u64 cvmx_fdt_translate_address(const void *fdt_addr, int node, const u32 *in_addr)
+{
+	return fdt_translate_address((void *)fdt_addr, node, in_addr);
+}
+
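+/*
+ * Usage sketch (illustrative): translating a node's first "reg" entry to a
+ * CPU physical address. On big-endian Octeon the raw property data from
+ * fdt_getprop() is already in native byte order, as the note above requires.
+ *
+ *   const u32 *reg = fdt_getprop(fdt_addr, node, "reg", NULL);
+ *   u64 phys = reg ? cvmx_fdt_translate_address(fdt_addr, node, reg)
+ *                  : FDT_ADDR_T_NONE;
+ */
+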
+/**
+ * Compare compatible strings in the flat device tree.
+ *
+ * @param[in] s1	First string to compare
+ * @param[in] s2	Second string to compare
+ *
+ * @return	0 if no match
+ *		1 if only the part number matches and not the manufacturer
+ *		2 if both the part number and manufacturer match
+ */
+int cvmx_fdt_compat_match(const char *s1, const char *s2);
+
+/**
+ * Returns whether a list of strings contains the specified string
+ *
+ * @param[in]	slist	String list
+ * @param	llen	string list total length
+ * @param[in]	str	string to search for
+ *
+ * @return	1 if string list contains string, 0 if it does not.
+ */
+int cvmx_fdt_compat_list_contains(const char *slist, int llen, const char *str);
+
+/**
+ * Check if a node is compatible with the specified compat string
+ *
+ * @param[in]	fdt_addr	FDT address
+ * @param	node		node offset to check
+ * @param[in]	compat		compatible string to check
+ *
+ * @return	0 if compatible, 1 if not compatible, negative on error
+ */
+int cvmx_fdt_node_check_compatible(const void *fdt_addr, int node, const char *compat);
+
+/**
+ * @INTERNAL
+ * Compares a string to a compatible field.
+ *
+ * @param[in]	compat		compatible string
+ * @param[in]	str		string to check
+ *
+ * @return	0 if not compatible, 1 if manufacturer compatible, 2 if
+ *		part is compatible, 3 if both part and manufacturer are
+ *		compatible.
+ */
+int __cvmx_fdt_compat_match(const char *compat, const char *str);
+
+/**
+ * Given a phandle to a GPIO device return the type of GPIO device it is.
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	phandle		phandle to GPIO
+ * @param[out]	size		Number of pins (optional, may be NULL)
+ *
+ * @return	Type of GPIO device or PIN_ERROR if error
+ */
+enum cvmx_gpio_type cvmx_fdt_get_gpio_type(const void *fdt_addr, int phandle, int *size);
+
+/**
+ * Given a phandle to a GPIO node output the i2c bus and address
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	phandle		phandle of GPIO device
+ * @param[out]	bus		TWSI bus number with node in bits 1-3, can be
+ *				NULL for none.
+ * @param[out]	addr		TWSI address number, can be NULL for none
+ *
+ * @return	0 for success, error otherwise
+ */
+int cvmx_fdt_get_twsi_gpio_bus_addr(const void *fdt_addr, int phandle, int *bus, int *addr);
+
+/**
+ * Given a FDT node return the CPU node number
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	node		FDT node number
+ *
+ * @return	CPU node number or error if negative
+ */
+int cvmx_fdt_get_cpu_node(const void *fdt_addr, int node);
+
+/**
+ * Get the total size of the flat device tree
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ *
+ * @return	Size of flat device tree in bytes or -1 if error.
+ */
+int cvmx_fdt_get_fdt_size(const void *fdt_addr);
+
+/**
+ * Returns if a node is compatible with one of the items in the string list
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	node		Node offset to check
+ * @param[in]	strlist		Array of FDT device compatibility strings,
+ *				must end with NULL or empty string.
+ *
+ * @return	0 if at least one item matches, 1 if no matches
+ */
+int cvmx_fdt_node_check_compatible_list(const void *fdt_addr, int node, const char *const *strlist);
+
+/**
+ * Given a FDT node, return the next compatible node.
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	startoffset	Starting node offset or -1 to find the first
+ * @param	strlist		Array of FDT device compatibility strings, must
+ *				end with NULL or empty string.
+ *
+ * @return	next matching node or -1 if no more matches.
+ */
+int cvmx_fdt_node_offset_by_compatible_list(const void *fdt_addr, int startoffset,
+					    const char *const *strlist);
+
+/**
+ * Given the parent offset of an i2c device build up a list describing the bus
+ * which can contain i2c muxes and switches.
+ *
+ * @param[in]	fdt_addr	address of device tree
+ * @param	of_offset	Offset of the parent node of a GPIO device in
+ *				the device tree.
+ *
+ * @return	pointer to list of i2c devices starting from the root which
+ *		can include i2c muxes and switches or NULL if error.  Note that
+ *		all entries are allocated on the heap.
+ *
+ * @see cvmx_fdt_free_i2c_bus()
+ */
+struct cvmx_fdt_i2c_bus_info *cvmx_fdt_get_i2c_bus(const void *fdt_addr, int of_offset);
+
+/**
+ * Return the Octeon bus number for a bus descriptor
+ *
+ * @param[in]	bus	bus descriptor
+ *
+ * @return	Octeon twsi bus number or -1 on error
+ */
+int cvmx_fdt_i2c_get_root_bus(const struct cvmx_fdt_i2c_bus_info *bus);
+
+/**
+ * Frees all entries for an i2c bus descriptor
+ *
+ * @param	bus	bus to free
+ *
+ * @return	0
+ */
+int cvmx_fdt_free_i2c_bus(struct cvmx_fdt_i2c_bus_info *bus);
+
+/**
+ * Given the bus to a device, enable it.
+ *
+ * @param[in]	bus	i2c bus descriptor to enable or disable
+ * @param	enable	set to true to enable, false to disable
+ *
+ * @return	0 for success or -1 for invalid bus
+ *
+ * This enables the entire bus including muxes and switches in the path.
+ */
+int cvmx_fdt_enable_i2c_bus(const struct cvmx_fdt_i2c_bus_info *bus, bool enable);
+
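+/*
+ * Usage sketch (illustrative): resolve the bus chain for an i2c device
+ * node, enable any muxes or switches in the path, talk to the device on
+ * the root TWSI bus, then clean up.
+ *
+ *   struct cvmx_fdt_i2c_bus_info *bus;
+ *
+ *   bus = cvmx_fdt_get_i2c_bus(fdt_addr, parent_offset);
+ *   if (bus) {
+ *           cvmx_fdt_enable_i2c_bus(bus, true);
+ *           ... access the device on cvmx_fdt_i2c_get_root_bus(bus) ...
+ *           cvmx_fdt_free_i2c_bus(bus);
+ *   }
+ */
+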
+/**
+ * Return a GPIO handle given a GPIO phandle of the form <&gpio pin flags>
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	of_offset	node offset for property
+ * @param	prop_name	name of property
+ *
+ * @return	pointer to GPIO handle or NULL if error
+ */
+struct cvmx_fdt_gpio_info *cvmx_fdt_gpio_get_info_phandle(const void *fdt_addr, int of_offset,
+							  const char *prop_name);
+
+/**
+ * Sets a GPIO pin given the GPIO descriptor
+ *
+ * @param	pin	GPIO pin descriptor
+ * @param	value	value to set it to, 0 or 1
+ *
+ * @return	0 on success, -1 on error.
+ *
+ * NOTE: If the CVMX_GPIO_ACTIVE_LOW flag is set then the output value will be
+ * inverted.
+ */
+int cvmx_fdt_gpio_set(struct cvmx_fdt_gpio_info *pin, int value);
+
+/**
+ * Given a GPIO pin descriptor, input the value of that pin
+ *
+ * @param	pin	GPIO pin descriptor
+ *
+ * @return	0 if low, 1 if high, -1 on error.  Note that the input will be
+ *		inverted if the CVMX_GPIO_ACTIVE_LOW flag bit is set.
+ */
+int cvmx_fdt_gpio_get(struct cvmx_fdt_gpio_info *pin);
+
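+/*
+ * Usage sketch (illustrative; the property name is hypothetical): look up
+ * a tx-disable GPIO from a slot node and drive it inactive. Polarity is
+ * handled through the descriptor's CVMX_GPIO_ACTIVE_LOW flag.
+ *
+ *   struct cvmx_fdt_gpio_info *pin;
+ *
+ *   pin = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset,
+ *                                        "tx-disable");
+ *   if (pin)
+ *           cvmx_fdt_gpio_set(pin, 0);
+ */
+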
+/**
+ * Assigns an IPD port to a SFP slot
+ *
+ * @param	sfp		Handle to SFP data structure
+ * @param	ipd_port	Port to assign it to
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_sfp_set_ipd_port(struct cvmx_fdt_sfp_info *sfp, int ipd_port);
+
+/**
+ * Get the IPD port of a SFP slot
+ *
+ * @param[in]	sfp		Handle to SFP data structure
+ *
+ * @return	IPD port number for SFP slot
+ */
+static inline int cvmx_sfp_get_ipd_port(const struct cvmx_fdt_sfp_info *sfp)
+{
+	return sfp->ipd_port[0];
+}
+
+/**
+ * Get the IPD ports for a QSFP port
+ *
+ * @param[in]	sfp		Handle to SFP data structure
+ * @param[out]	ipd_ports	IPD ports for each lane, if running as 40G then
+ *				only ipd_ports[0] is valid and the others will
+ *				be set to -1.
+ */
+static inline void cvmx_qsfp_get_ipd_ports(const struct cvmx_fdt_sfp_info *sfp, int ipd_ports[4])
+{
+	int i;
+
+	for (i = 0; i < 4; i++)
+		ipd_ports[i] = sfp->ipd_port[i];
+}
+
+/**
+ * Attaches a PHY to a SFP or QSFP.
+ *
+ * @param	sfp		sfp to attach PHY to
+ * @param	phy_info	phy descriptor to attach or NULL to detach
+ */
+void cvmx_sfp_attach_phy(struct cvmx_fdt_sfp_info *sfp, struct cvmx_phy_info *phy_info);
+
+/**
+ * Returns a phy descriptor for a SFP slot
+ *
+ * @param[in]	sfp	SFP to get phy info from
+ *
+ * @return	phy descriptor or NULL if none.
+ */
+static inline struct cvmx_phy_info *cvmx_sfp_get_phy_info(const struct cvmx_fdt_sfp_info *sfp)
+{
+	return sfp->phy_info;
+}
+
+/**
+ * @INTERNAL
+ * Parses all instances of the Vitesse VSC7224 reclocking chip
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ *
+ * @return	0 for success, error otherwise
+ */
+int __cvmx_fdt_parse_vsc7224(const void *fdt_addr);
+
+/**
+ * @INTERNAL
+ * Parses all instances of the Avago AVSP5410 gearbox phy
+ *
+ * @param[in]   fdt_addr        Address of flat device tree
+ *
+ * @return      0 for success, error otherwise
+ */
+int __cvmx_fdt_parse_avsp5410(const void *fdt_addr);
+
+/**
+ * Parse SFP information from device tree
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	of_offset	Offset of SFP slot node in the device tree
+ *
+ * @return pointer to sfp info or NULL if error
+ */
+struct cvmx_fdt_sfp_info *cvmx_helper_fdt_parse_sfp_info(const void *fdt_addr, int of_offset);
+
+/**
+ * @INTERNAL
+ * Parses either a CS4343 phy or a slice of the phy from the device tree
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	of_offset	offset of slice or phy in device tree
+ * @param	phy_info	phy_info data structure to fill in
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_fdt_parse_cs4343(const void *fdt_addr, int of_offset, struct cvmx_phy_info *phy_info);
+
+/**
+ * Given an i2c bus and device address, write an 8 bit value
+ *
+ * @param bus	i2c bus number
+ * @param addr	i2c device address (7 bits)
+ * @param val	8-bit value to write
+ *
+ * This is just an abstraction to ease support in both U-Boot and SE.
+ */
+void cvmx_fdt_i2c_reg_write(int bus, int addr, u8 val);
+
+/**
+ * Read an 8-bit value from an i2c bus and device address
+ *
+ * @param bus	i2c bus number
+ * @param addr	i2c device address (7 bits)
+ *
+ * @return 8-bit value or error if negative
+ */
+int cvmx_fdt_i2c_reg_read(int bus, int addr);
+
+/**
+ * Write an 8-bit value to a register indexed i2c device
+ *
+ * @param bus	i2c bus number to write to
+ * @param addr	i2c device address (7 bits)
+ * @param reg	i2c 8-bit register address
+ * @param val	8-bit value to write
+ *
+ * @return 0 for success, otherwise error
+ */
+int cvmx_fdt_i2c_write8(int bus, int addr, int reg, u8 val);
+
+/**
+ * Read an 8-bit value from a register indexed i2c device
+ *
+ * @param bus	i2c bus number to write to
+ * @param addr	i2c device address (7 bits)
+ * @param reg	i2c 8-bit register address
+ *
+ * @return value or error if negative
+ */
+int cvmx_fdt_i2c_read8(int bus, int addr, int reg);
+
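+/*
+ * Usage sketch (illustrative; register and bit values are hypothetical):
+ * a read-modify-write of a register-indexed i2c device.
+ *
+ *   int val = cvmx_fdt_i2c_read8(bus, addr, reg);
+ *
+ *   if (val >= 0)
+ *           cvmx_fdt_i2c_write8(bus, addr, reg, (u8)val | 0x01);
+ */
+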
+int cvmx_sfp_vsc7224_mod_abs_changed(struct cvmx_fdt_sfp_info *sfp_info,
+				     int val, void *data);
+int cvmx_sfp_avsp5410_mod_abs_changed(struct cvmx_fdt_sfp_info *sfp_info,
+				      int val, void *data);
+
+#endif /* __CVMX_HELPER_FDT_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-fpa.h
new file mode 100644
index 0000000000..8b3a89bce4
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-fpa.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions for FPA setup.
+ */
+
+#ifndef __CVMX_HELPER_H_FPA__
+#define __CVMX_HELPER_H_FPA__
+
+/**
+ * Allocate memory and initialize the FPA pools using memory
+ * from cvmx-bootmem. The size of each element in the pools is
+ * controlled by the cvmx-config.h header file. Specifying
+ * zero for any parameter will cause that FPA pool to not be
+ * setup. This is useful if you aren't using some of the
+ * hardware and want to save memory.
+ *
+ * @param packet_buffers
+ *               Number of packet buffers to allocate
+ * @param work_queue_entries
+ *               Number of work queue entries
+ * @param pko_buffers
+ *               PKO Command buffers. You should have at minimum two
+ *               per PKO queue.
+ * @param tim_buffers
+ *               TIM ring buffer command queues. At least two per timer bucket
+ *               is recommended.
+ * @param dfa_buffers
+ *               DFA command buffer. A relatively small (32 for example)
+ *               number should work.
+ * @return Zero on success, non-zero if out of memory
+ */
+int cvmx_helper_initialize_fpa(int packet_buffers, int work_queue_entries, int pko_buffers,
+			       int tim_buffers, int dfa_buffers);
+
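+/*
+ * Illustrative call (buffer counts hypothetical): set up packet buffers,
+ * work queue entries and PKO command buffers while skipping the unused
+ * TIM and DFA pools.
+ *
+ *   if (cvmx_helper_initialize_fpa(1024, 1024, 64, 0, 0))
+ *           printf("FPA setup failed: out of memory\n");
+ */
+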
+int __cvmx_helper_initialize_fpa_pool(int pool, u64 buffer_size, u64 buffers, const char *name);
+
+int cvmx_helper_shutdown_fpa_pools(int node);
+
+void cvmx_helper_fpa_dump(int node);
+
+#endif /* __CVMX_HELPER_H_FPA__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-gpio.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-gpio.h
new file mode 100644
index 0000000000..787eccf4aa
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-gpio.h
@@ -0,0 +1,427 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Defines some GPIO information used in multiple places
+ */
+
+#ifndef __CVMX_HELPER_GPIO_H__
+#define __CVMX_HELPER_GPIO_H__
+
+#define CVMX_GPIO_NAME_LEN 32 /** Length of name */
+
+enum cvmx_gpio_type {
+	CVMX_GPIO_PIN_OCTEON,  /** GPIO pin is directly connected to OCTEON */
+	CVMX_GPIO_PIN_PCA953X, /** GPIO pin is NXP PCA953X compat chip */
+	CVMX_GPIO_PIN_PCA957X,
+	CVMX_GPIO_PIN_PCF857X, /** GPIO pin is NXP PCF857X compat chip */
+	CVMX_GPIO_PIN_PCA9698, /** GPIO pin is NXP PCA9698 compat chip */
+	CVMX_GPIO_PIN_CS4343,  /** Inphi/Cortina CS4343 GPIO pins */
+	CVMX_GPIO_PIN_OTHER,   /** GPIO pin is something else */
+};
+
+enum cvmx_gpio_operation {
+	CVMX_GPIO_OP_CONFIG,	  /** Initial configuration of the GPIO pin */
+	CVMX_GPIO_OP_SET,	  /** Set pin */
+	CVMX_GPIO_OP_CLEAR,	  /** Clear pin */
+	CVMX_GPIO_OP_READ,	  /** Read pin */
+	CVMX_GPIO_OP_TOGGLE,	  /** Toggle pin */
+	CVMX_GPIO_OP_BLINK_START, /** Put in blink mode (if supported) */
+	CVMX_GPIO_OP_BLINK_STOP,  /** Takes the pin out of blink mode */
+	CVMX_GPIO_OP_SET_LINK,	  /** Put in link monitoring mode */
+	CVMX_GPIO_OP_SET_ACT,	  /** Put in RX activity mode */
+};
+
+/**
+ * Inphi CS4343 output source select values for the GPIO_GPIOX output_src_sel.
+ */
+enum cvmx_inphi_cs4343_gpio_gpio_output_src_sel {
+	GPIO_SEL_DRIVE = 0,	/** Value of GPIOX_DRIVE */
+	GPIO_SEL_DELAY = 1,	/** Drive delayed */
+	GPIO_SEL_TOGGLE = 2,	/** Used for blinking */
+	GPIO_SEL_EXT = 3,	/** External function */
+	GPIO_SEL_EXT_DELAY = 4, /** External function delayed */
+};
+
+/** Inphi GPIO_GPIOX configuration register */
+union cvmx_inphi_cs4343_gpio_cfg_reg {
+	u16 u;
+	struct {
+		u16 : 4;
+		/** Data source for the GPIO output */
+		u16 output_src_sel : 3;
+		/** 1 = GPIO output is inverted before being output */
+		u16 invert_output : 1;
+		/** 1 = GPIO input is inverted before being processed */
+		u16 invert_input : 1;
+		/** 0 = 2.5v/1.8v signalling, 1 = 1.2v signalling */
+		u16 iovddsel_1v2 : 1;
+		/**
+		 * 0 = output selected by outen bit
+		 * 1 = output controlled by selected GPIO output source
+		 */
+		u16 outen_ovr : 1;
+		/** 0 = GPIO is input only, 1 = GPIO output driver enabled */
+		u16 outen : 1;
+		u16 : 2;
+		u16 pullup_1k : 1;  /** 1 = enable 1K pad pullup */
+		u16 pullup_10k : 1; /** 1 = enable 10K pad pullup */
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_CFG_OFFSET 0x0
+
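+/*
+ * Usage sketch (illustrative): compose a CFG register value that drives
+ * the pin from the toggle source with the output driver enabled.
+ *
+ *   union cvmx_inphi_cs4343_gpio_cfg_reg cfg;
+ *
+ *   cfg.u = 0;
+ *   cfg.s.output_src_sel = GPIO_SEL_TOGGLE;
+ *   cfg.s.outen_ovr = 1;
+ *   cfg.s.outen = 1;
+ */
+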
+/**
+ * This selects which port the GPIO gets its signals from when configured
+ * as an output.
+ */
+enum cvmx_inphi_cs4343_gpio_output_cfg_port {
+	PORT_0_HOST_RX = 0, /** Port pair 0 host RX */
+	PORT_0_LINE_RX = 1, /** Port pair 0 line RX */
+	PORT_1_HOST_RX = 2, /** Port pair 1 host RX */
+	PORT_1_LINE_RX = 3, /** Port pair 1 line RX */
+	PORT_3_HOST_RX = 4, /** Port pair 3 host RX */
+	PORT_3_LINE_RX = 5, /** Port pair 3 line RX */
+	PORT_2_HOST_RX = 6, /** Port pair 2 host RX */
+	PORT_2_LINE_RX = 7, /** Port pair 2 line RX */
+	COMMON = 8,	    /** Common */
+};
+
+enum cvmx_inphi_cs4343_gpio_output_cfg_function {
+	RX_LOS = 0,	   /** Port - 1 = Receive LOS (from DSP) */
+	RX_LOL = 1,	   /** Port - 1 = Receive LOL (inverted from MSEQ) */
+	EDC_CONVERGED = 2, /** Port - 1 = EDC converged (from DSP) */
+	/** Port - 1 = PRBS checker in sync (inverted from SDS) */
+	RX_PRBS_SYNC = 3,
+	COMMON_LOGIC_0 = 0,	 /** Common - Logic 0 */
+	COMMON_GPIO1_INPUT = 1,	 /** Common - GPIO 1 input */
+	COMMON_GPIO2_INPUT = 2,	 /** Common - GPIO 2 input */
+	COMMON_GPIO3_INPUT = 3,	 /** Common - GPIO 3 input */
+	COMMON_GPIO4_INPUT = 4,	 /** Common - GPIO 4 input */
+	COMMON_INTERR_INPUT = 5, /** Common - INTERR input */
+	/** Common - Interrupt output from GLOBAL_INT register */
+	COMMON_GLOBAL_INT = 6,
+	/** Common - Interrupt output from GPIO_INT register */
+	COMMON_GPIO_INT = 7,
+	/** Common - Temp/voltage monitor interrupt */
+	COMMON_MONITOR_INT = 8,
+	/** Common - Selected clock output of global clock monitor */
+	COMMON_GBL_CLKMON_CLK = 9,
+};
+
+union cvmx_inphi_cs4343_gpio_output_cfg {
+	u16 u;
+	struct {
+		u16 : 8;
+		u16 port : 4;	  /** port */
+		u16 function : 4; /** function */
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_OUTPUT_CFG_OFFSET 0x1
+
+union cvmx_inphi_cs4343_gpio_drive {
+	u16 u;
+	struct {
+		u16 : 15;
+		u16 value : 1; /** output value */
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_DRIVE_OFFSET 0x2
+
+union cvmx_inphi_cs4343_gpio_value {
+	u16 u;
+	struct {
+		u16 : 15;
+		u16 value : 1; /** input value (read-only) */
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_VALUE_OFFSET 0x3
+
+union cvmx_inphi_cs4343_gpio_toggle {
+	u16 u;
+	struct {
+		/** Toggle rate in ms, multiply by 2 to get period in ms */
+		u16 rate : 16;
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_TOGGLE_OFFSET 0x4
+
+union cvmx_inphi_cs4343_gpio_delay {
+	u16 u;
+	struct {
+		/** On delay for GPIO output in ms when enabled */
+		u16 on_delay : 16;
+	} s;
+};
+
+#define CVMX_INPHI_CS4343_GPIO_DELAY_OFFSET 0x5
+
+/**
+ * GPIO flags associated with a GPIO pin (can be combined)
+ */
+enum cvmx_gpio_flags {
+	CVMX_GPIO_ACTIVE_HIGH = 0,    /** Active high (default) */
+	CVMX_GPIO_ACTIVE_LOW = 1,     /** Active low (inverted) */
+	CVMX_GPIO_OPEN_COLLECTOR = 2, /** Output is open-collector */
+};
+
+/** Default timer number to use for outputting a frequency [0..3] */
+#define CVMX_GPIO_DEFAULT_TIMER 3
+
+/** Configuration data for native Octeon GPIO pins */
+struct cvmx_octeon_gpio_data {
+	int cpu_node; /** CPU node for GPIO pin */
+	int timer;    /** Timer number used when in toggle mode, 0-3 */
+};
+
+struct cvmx_pcf857x_gpio_data {
+	unsigned int latch_out;
+};
+
+#define CVMX_INPHI_CS4343_EFUSE_PDF_SKU_REG 0x19f
+#define CVMX_INPHI_CS4343_SKU_CS4223	    0x10
+#define CVMX_INPHI_CS4343_SKU_CS4224	    0x11
+#define CVMX_INPHI_CS4343_SKU_CS4343	    0x12
+#define CVMX_INPHI_CS4343_SKU_CS4221	    0x13
+#define CVMX_INPHI_CS4343_SKU_CS4227	    0x14
+#define CVMX_INPHI_CS4343_SKU_CS4341	    0x16
+
+struct cvmx_cs4343_gpio_data {
+	int reg_offset; /** Base register address for GPIO */
+	enum cvmx_gpio_operation last_op;
+	u8 link_port; /** Link port number for link status */
+	u16 sku;      /** Value from CS4224_EFUSE_PDF_SKU register */
+	u8 out_src_sel;
+	u8 field_func;
+	bool out_en;
+	bool is_cs4343; /** True if dual package */
+	struct phy_device *phydev;
+};
+
+struct cvmx_fdt_gpio_info;
+
+/** Function called for GPIO operations */
+typedef int (*cvmx_fdt_gpio_op_func_t)(struct cvmx_fdt_gpio_info *, enum cvmx_gpio_operation);
+
+/**
+ * GPIO descriptor
+ */
+struct cvmx_fdt_gpio_info {
+	struct cvmx_fdt_gpio_info *next; /** For list of GPIOs */
+	char name[CVMX_GPIO_NAME_LEN];	 /** Name of GPIO */
+	int pin;			 /** GPIO pin number */
+	enum cvmx_gpio_type gpio_type;	 /** Type of GPIO controller */
+	int of_offset;			 /** Offset in device tree */
+	int phandle;
+	struct cvmx_fdt_i2c_bus_info *i2c_bus; /** I2C bus descriptor */
+	int i2c_addr;			       /** Address on i2c bus */
+	enum cvmx_gpio_flags flags;	       /** Flags associated with pin */
+	int num_pins;			       /** Total number of pins */
+	unsigned int latch_out;		       /** Latched output for 857x */
+	/** Rate in ms between toggle states */
+	int toggle_rate;
+	/** Pointer to user data for user-defined functions */
+	void *data;
+	/** Function to set, clear, toggle, etc. */
+	cvmx_fdt_gpio_op_func_t op_func;
+	/* Two values are used to detect the initial case where nothing has
+	 * been configured.  Initially, all of the following will be false
+	 * which will force the initial state to be properly set.
+	 */
+	/** True if the GPIO pin is currently set, useful for toggle */
+	bool is_set;
+	/** Set if configured to invert */
+	bool invert_set;
+	/** Set if input is to be inverted */
+	bool invert_input;
+	/** Set if direction is configured as output */
+	bool dir_out;
+	/** Set if direction is configured as input */
+	bool dir_in;
+	/** Pin is set to toggle periodically */
+	bool toggle;
+	/** True if LED is used to indicate link status */
+	bool link_led;
+	/** True if LED is used to indicate rx activity */
+	bool rx_act_led;
+	/** True if LED is used to indicate tx activity */
+	bool tx_act_led;
+	/** True if LED is used to indicate networking errors */
+	bool error_led;
+	/** True if LED can automatically show link */
+	bool hw_link;
+};
+
+/** LED data structure */
+struct cvmx_fdt_gpio_led {
+	struct cvmx_fdt_gpio_led *next, *prev; /** List of LEDs */
+	char name[CVMX_GPIO_NAME_LEN];	       /** Name */
+	struct cvmx_fdt_gpio_info *gpio;       /** GPIO for LED */
+	int of_offset;			       /** Device tree node */
+	/** True if active low, note that GPIO contains this info */
+	bool active_low;
+};
+
+/**
+ * Returns the operation function for the GPIO phandle
+ *
+ * @param[in]	fdt_addr	Pointer to FDT
+ * @param	phandle		phandle of GPIO entry
+ *
+ * @return	Pointer to op function or NULL if not found.
+ */
+cvmx_fdt_gpio_op_func_t cvmx_fdt_gpio_get_op_func(const void *fdt_addr, int phandle);
+
+/**
+ * Given a phandle to a GPIO device return the type of GPIO device it is.
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	phandle		phandle to GPIO
+ * @param[out]	size		Number of pins (optional, may be NULL)
+ *
+ * @return	Type of GPIO device or PIN_ERROR if error
+ */
+enum cvmx_gpio_type cvmx_fdt_get_gpio_type(const void *fdt_addr, int phandle, int *size);
+
+/**
+ * Return a GPIO handle given a GPIO phandle of the form <&gpio pin flags>
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	of_offset	node offset of GPIO device
+ * @param	prop_name	name of property
+ *
+ * @return	pointer to GPIO handle or NULL if error
+ */
+struct cvmx_fdt_gpio_info *cvmx_fdt_gpio_get_info(const void *fdt_addr, int of_offset,
+						  const char *prop_name);
+
+/**
+ * Return a GPIO handle given a GPIO phandle of the form <&gpio pin flags>
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	of_offset	node offset for property
+ * @param	prop_name	name of property
+ *
+ * @return	pointer to GPIO handle or NULL if error
+ */
+struct cvmx_fdt_gpio_info *cvmx_fdt_gpio_get_info_phandle(const void *fdt_addr, int of_offset,
+							  const char *prop_name);
+
+/**
+ * Parses a GPIO entry and fills in the gpio info data structure
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	phandle		phandle for GPIO
+ * @param	pin		pin number
+ * @param	flags		flags set (1 = invert)
+ * @param[out]	gpio		GPIO info data structure
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_fdt_parse_gpio(const void *fdt_addr, int phandle, int pin, u32 flags,
+			struct cvmx_fdt_gpio_info *gpio);
+
+/**
+ * @param	gpio	GPIO descriptor to assign timer to
+ * @param	timer	Octeon hardware timer number [0..3]
+ */
+void cvmx_fdt_gpio_set_timer(struct cvmx_fdt_gpio_info *gpio, int timer);
+
+/**
+ * Given a GPIO pin descriptor, read the input value of that pin
+ *
+ * @param	pin	GPIO pin descriptor
+ *
+ * @return	0 if low, 1 if high, -1 on error.  Note that the input will be
+ *		inverted if the CVMX_GPIO_ACTIVE_LOW flag bit is set.
+ */
+int cvmx_fdt_gpio_get(struct cvmx_fdt_gpio_info *pin);
+
+/**
+ * Sets a GPIO pin given the GPIO descriptor
+ *
+ * @param	gpio	GPIO pin descriptor
+ * @param	value	value to set it to, 0 or 1
+ *
+ * @return	0 on success, -1 on error.
+ *
+ * NOTE: If the CVMX_GPIO_ACTIVE_LOW flag is set then the output value will be
+ * inverted.
+ */
+int cvmx_fdt_gpio_set(struct cvmx_fdt_gpio_info *gpio, int value);
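+
+/*
+ * Illustrative sketch only (the "my-gpios" property name is a made-up
+ * example): look up a GPIO from the device tree, configure it as an
+ * output and drive it high. cvmx_fdt_gpio_set() honors the
+ * CVMX_GPIO_ACTIVE_LOW flag, so the physical level may be inverted:
+ *
+ *	struct cvmx_fdt_gpio_info *gpio;
+ *
+ *	gpio = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset,
+ *					      "my-gpios");
+ *	if (gpio) {
+ *		cvmx_fdt_gpio_set_output(gpio, true);
+ *		cvmx_fdt_gpio_set(gpio, 1);
+ *	}
+ */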
+
+/**
+ * Sets the blink frequency for a GPIO pin
+ *
+ * @param gpio	GPIO handle
+ * @param freq	Frequency in hz [0..500]
+ */
+void cvmx_fdt_gpio_set_freq(struct cvmx_fdt_gpio_info *gpio, int freq);
+
+/**
+ * Enables or disables blinking a GPIO pin
+ *
+ * @param	gpio	GPIO handle
+ * @param	blink	True to start blinking, false to stop
+ *
+ * @return	0 for success, -1 on error
+ * NOTE: Not all GPIO types support blinking.
+ */
+int cvmx_fdt_gpio_set_blink(struct cvmx_fdt_gpio_info *gpio, bool blink);
+
+/**
+ * Alternates between link and blink mode
+ *
+ * @param	gpio	GPIO handle
+ * @param	blink	True to start blinking, false to use link status
+ *
+ * @return	0 for success, -1 on error
+ * NOTE: Not all GPIO types support this.
+ */
+int cvmx_fdt_gpio_set_link_blink(struct cvmx_fdt_gpio_info *gpio, bool blink);
+
+static inline bool cvmx_fdt_gpio_hw_link_supported(const struct cvmx_fdt_gpio_info *gpio)
+{
+	return gpio->hw_link;
+}
+
+/**
+ * Configures a GPIO pin as input or output
+ *
+ * @param	gpio	GPIO pin to configure
+ * @param	output	Set to true to make output, false for input
+ */
+void cvmx_fdt_gpio_set_output(struct cvmx_fdt_gpio_info *gpio, bool output);
+
+/**
+ * Allocates an LED data structure
+ * @param[in]	name		name to assign LED
+ * @param	of_offset	Device tree offset
+ * @param	gpio		GPIO assigned to LED (can be NULL)
+ * @param	last		Previous LED to build a list
+ *
+ * @return	pointer to LED data structure or NULL if out of memory
+ */
+struct cvmx_fdt_gpio_led *cvmx_alloc_led(const char *name, int of_offset,
+					 struct cvmx_fdt_gpio_info *gpio,
+					 struct cvmx_fdt_gpio_led *last);
+
+/**
+ * Parses an LED in the device tree
+ *
+ * @param[in]	fdt_addr		Pointer to flat device tree
+ * @param	led_of_offset		Device tree offset of LED
+ * @param	gpio			GPIO data structure to use (can be NULL)
+ * @param	last			Previous LED if this is a group of LEDs
+ *
+ * @return	Pointer to LED data structure or NULL if error
+ */
+struct cvmx_fdt_gpio_led *cvmx_fdt_parse_led(const void *fdt_addr, int led_of_offset,
+					     struct cvmx_fdt_gpio_info *gpio,
+					     struct cvmx_fdt_gpio_led *last);
+
+#endif /* __CVMX_HELPER_GPIO_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-ilk.h
new file mode 100644
index 0000000000..29af48e7a1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-ilk.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for ILK initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_ILK_H__
+#define __CVMX_HELPER_ILK_H__
+
+int __cvmx_helper_ilk_enumerate(int interface);
+
+/**
+ * @INTERNAL
+ * Clear all calendar entries to the xoff state. This
+ * means no data is sent or received.
+ *
+ * @param interface Interface whose calendar entries are to be initialized.
+ */
+void __cvmx_ilk_clear_cal(int interface);
+
+/**
+ * @INTERNAL
+ * Setup the channel's tx calendar entry.
+ *
+ * @param interface Interface channel belongs to
+ * @param channel Channel whose calendar entry is to be updated
+ * @param bpid Bpid assigned to the channel
+ */
+void __cvmx_ilk_write_tx_cal_entry(int interface, int channel, unsigned char bpid);
+
+/**
+ * @INTERNAL
+ * Setup the channel's rx calendar entry.
+ *
+ * @param interface Interface channel belongs to
+ * @param channel Channel whose calendar entry is to be updated
+ * @param pipe PKO pipe assigned to the channel
+ */
+void __cvmx_ilk_write_rx_cal_entry(int interface, int channel, unsigned char pipe);
+
+/**
+ * @INTERNAL
+ * Probe an ILK interface and determine the number of ports
+ * connected to it. The ILK interface should still be down after
+ * this call.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_ilk_probe(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an ILK interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_ilk_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by ILK link status.
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_ilk_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_ilk_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+void __cvmx_helper_ilk_show_stats(void);
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-ipd.h
new file mode 100644
index 0000000000..025743d505
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-ipd.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions for IPD
+ */
+
+#ifndef __CVMX_HELPER_IPD_H__
+#define __CVMX_HELPER_IPD_H__
+
+void cvmx_helper_ipd_set_wqe_no_ptr_mode(bool mode);
+void cvmx_helper_ipd_pkt_wqe_le_mode(bool mode);
+int __cvmx_helper_ipd_global_setup(void);
+int __cvmx_helper_ipd_setup_interface(int interface);
+
+#endif /* __CVMX_HELPER_IPD_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-jtag.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-jtag.h
new file mode 100644
index 0000000000..fa379eaf55
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-jtag.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper utilities for qlm_jtag.
+ */
+
+#ifndef __CVMX_HELPER_JTAG_H__
+#define __CVMX_HELPER_JTAG_H__
+
+/**
+ * The JTAG chain for CN52XX and CN56XX is 4 * 268 bits long, or 1072.
+ * CN5XXX full chain shift is:
+ *     new data => lane 3 => lane 2 => lane 1 => lane 0 => data out
+ * The JTAG chain for CN63XX is 4 * 300 bits long, or 1200.
+ * The JTAG chain for CN68XX is 4 * 304 bits long, or 1216.
+ * The JTAG chain for CN66XX/CN61XX/CNF71XX is 4 * 304 bits long, or 1216.
+ * CN6XXX full chain shift is:
+ *     new data => lane 0 => lane 1 => lane 2 => lane 3 => data out
+ * Shift LSB first, get LSB out
+ */
+extern const __cvmx_qlm_jtag_field_t __cvmx_qlm_jtag_field_cn63xx[];
+extern const __cvmx_qlm_jtag_field_t __cvmx_qlm_jtag_field_cn66xx[];
+extern const __cvmx_qlm_jtag_field_t __cvmx_qlm_jtag_field_cn68xx[];
+
+#define CVMX_QLM_JTAG_UINT32 40
+
+typedef u32 qlm_jtag_uint32_t[CVMX_QLM_JTAG_UINT32 * 8];
+
+/**
+ * Initialize the internal QLM JTAG logic to allow programming
+ * of the JTAG chain by the cvmx_helper_qlm_jtag_*() functions.
+ * These functions should only be used at the direction of Cavium
+ * Networks. Programming incorrect values into the JTAG chain
+ * can cause chip damage.
+ */
+void cvmx_helper_qlm_jtag_init(void);
+
+/**
+ * Write up to 32 bits into the QLM JTAG chain. Bits are shifted
+ * into the MSB and out the LSB, so you should shift in the low
+ * order bits followed by the high order bits. The JTAG chain for
+ * CN52XX and CN56XX is 4 * 268 bits long, or 1072. The JTAG chain
+ * for CN63XX is 4 * 300 bits long, or 1200.
+ *
+ * @param qlm    QLM to shift value into
+ * @param bits   Number of bits to shift in (1-32).
+ * @param data   Data to shift in. Bit 0 enters the chain first, followed by
+ *               bit 1, etc.
+ *
+ * @return The low order bits that shifted out of the JTAG chain.
+ */
+u32 cvmx_helper_qlm_jtag_shift(int qlm, int bits, u32 data);
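+
+/*
+ * Illustrative sketch (not from the original sources): because bit 0
+ * enters the chain first, a 64-bit field is shifted in as two 32-bit
+ * words, low-order word first:
+ *
+ *	cvmx_helper_qlm_jtag_shift(qlm, 32, value & 0xffffffff);
+ *	cvmx_helper_qlm_jtag_shift(qlm, 32, value >> 32);
+ */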
+
+/**
+ * Shift long sequences of zeros into the QLM JTAG chain. It is
+ * common to need to shift more than 32 bits of zeros into the
+ * chain. This function is a convenience wrapper around
+ * cvmx_helper_qlm_jtag_shift() to shift more than 32 bits of
+ * zeros at a time.
+ *
+ * @param qlm    QLM to shift zeros into
+ * @param bits
+ */
+void cvmx_helper_qlm_jtag_shift_zeros(int qlm, int bits);
+
+/**
+ * Program the QLM JTAG chain into all lanes of the QLM. You must
+ * have already shifted in the proper number of bits into the
+ * JTAG chain. Updating invalid values can possibly cause chip damage.
+ *
+ * @param qlm    QLM to program
+ */
+void cvmx_helper_qlm_jtag_update(int qlm);
+
+/**
+ * Load the QLM JTAG chain with data from all lanes of the QLM.
+ *
+ * @param qlm    QLM to program
+ */
+void cvmx_helper_qlm_jtag_capture(int qlm);
+
+#endif /* __CVMX_HELPER_JTAG_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-loop.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-loop.h
new file mode 100644
index 0000000000..defd95551a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-loop.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for LOOP initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_LOOP_H__
+#define __CVMX_HELPER_LOOP_H__
+
+/**
+ * @INTERNAL
+ * Probe a LOOP interface and determine the number of ports
+ * connected to it. The LOOP interface should still be down after
+ * this call.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_loop_probe(int xiface);
+int __cvmx_helper_loop_enumerate(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable a LOOP interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_loop_enable(int xiface);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-npi.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-npi.h
new file mode 100644
index 0000000000..6a600a017c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-npi.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for NPI initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_NPI_H__
+#define __CVMX_HELPER_NPI_H__
+
+/**
+ * @INTERNAL
+ * Probe an NPI interface and determine the number of ports
+ * connected to it. The NPI interface should still be down after
+ * this call.
+ *
+ * @param interface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_npi_probe(int interface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an NPI interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_npi_enable(int xiface);
+
+/**
+ * Sets the number of pipes used by SLI packet output; the value is
+ * stored and used later when configuring the hardware.
+ */
+void cvmx_npi_config_set_num_pipes(int num_pipes);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-pki.h
new file mode 100644
index 0000000000..f5933f24fa
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-pki.h
@@ -0,0 +1,319 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions for PKI
+ */
+
+#ifndef __CVMX_HELPER_PKI_H__
+#define __CVMX_HELPER_PKI_H__
+
+#include "cvmx-pki.h"
+
+/* Modify this if more than 8 ilk channels need to be supported */
+#define CVMX_MAX_PORT_PER_INTERFACE  64
+#define CVMX_MAX_QOS_PRIORITY	     64
+#define CVMX_PKI_FIND_AVAILABLE_RSRC (-1)
+
+struct cvmx_pki_qos_schd {
+	cvmx_fpa3_gaura_t _aura;
+	cvmx_fpa3_pool_t _pool;
+	bool pool_per_qos;
+	int pool_num;
+	char *pool_name;
+	u64 pool_buff_size;
+	u64 pool_max_buff;
+	bool aura_per_qos;
+	int aura_num;
+	char *aura_name;
+	u64 aura_buff_cnt;
+	bool sso_grp_per_qos;
+	int sso_grp;
+	u16 port_add;
+	int qpg_base;
+};
+
+struct cvmx_pki_prt_schd {
+	cvmx_fpa3_pool_t _pool;
+	cvmx_fpa3_gaura_t _aura;
+	bool cfg_port;
+	int style;
+	bool pool_per_prt;
+	int pool_num;
+	char *pool_name;
+	u64 pool_buff_size;
+	u64 pool_max_buff;
+	bool aura_per_prt;
+	int aura_num;
+	char *aura_name;
+	u64 aura_buff_cnt;
+	bool sso_grp_per_prt;
+	int sso_grp;
+	enum cvmx_pki_qpg_qos qpg_qos;
+	int qpg_base;
+	struct cvmx_pki_qos_schd qos_s[CVMX_MAX_QOS_PRIORITY];
+};
+
+struct cvmx_pki_intf_schd {
+	cvmx_fpa3_pool_t _pool;
+	cvmx_fpa3_gaura_t _aura;
+	bool style_per_intf;
+	int style;
+	bool pool_per_intf;
+	int pool_num;
+	char *pool_name;
+	u64 pool_buff_size;
+	u64 pool_max_buff;
+	bool aura_per_intf;
+	int aura_num;
+	char *aura_name;
+	u64 aura_buff_cnt;
+	bool sso_grp_per_intf;
+	int sso_grp;
+	bool qos_share_aura;
+	bool qos_share_grp;
+	int qpg_base;
+	struct cvmx_pki_prt_schd prt_s[CVMX_MAX_PORT_PER_INTERFACE];
+};
+
+struct cvmx_pki_global_schd {
+	bool setup_pool;
+	int pool_num;
+	char *pool_name;
+	u64 pool_buff_size;
+	u64 pool_max_buff;
+	bool setup_aura;
+	int aura_num;
+	char *aura_name;
+	u64 aura_buff_cnt;
+	bool setup_sso_grp;
+	int sso_grp;
+	cvmx_fpa3_pool_t _pool;
+	cvmx_fpa3_gaura_t _aura;
+};
+
+struct cvmx_pki_legacy_qos_watcher {
+	bool configured;
+	enum cvmx_pki_term field;
+	u32 data;
+	u32 data_mask;
+	u8 advance;
+	u8 sso_grp;
+};
+
+extern bool cvmx_pki_dflt_init[CVMX_MAX_NODES];
+
+extern struct cvmx_pki_pool_config pki_dflt_pool[CVMX_MAX_NODES];
+extern struct cvmx_pki_aura_config pki_dflt_aura[CVMX_MAX_NODES];
+extern struct cvmx_pki_style_config pki_dflt_style[CVMX_MAX_NODES];
+extern struct cvmx_pki_pkind_config pki_dflt_pkind[CVMX_MAX_NODES];
+extern u64 pkind_style_map[CVMX_MAX_NODES][CVMX_PKI_NUM_PKIND];
+extern struct cvmx_pki_sso_grp_config pki_dflt_sso_grp[CVMX_MAX_NODES];
+extern struct cvmx_pki_legacy_qos_watcher qos_watcher[8];
+
+/**
+ * This function enables the PKI hardware to
+ * start accepting/processing packets.
+ * @param node    node number
+ */
+void cvmx_helper_pki_enable(int node);
+
+/**
+ * This function frees up the PKI resources consumed by that port.
+ * It should only be called if the port resources
+ * (FPA pools, auras, styles, QPG entries, PCAM entries, etc.) are not shared.
+ * @param xipd_port     ipd port number for which resources need to
+ *                      be freed.
+ */
+int cvmx_helper_pki_port_shutdown(int xipd_port);
+
+/**
+ * This function shuts down the complete PKI hardware
+ * and software resources.
+ * @param node          node number where PKI needs to shutdown.
+ */
+void cvmx_helper_pki_shutdown(int node);
+
+/**
+ * This function calculates how many QPG entries will be needed for
+ * a particular QoS.
+ * @param qpg_qos       qos value for which entries need to be calculated.
+ */
+int cvmx_helper_pki_get_num_qpg_entry(enum cvmx_pki_qpg_qos qpg_qos);
+
+/**
+ * This function sets up the QoS table by allocating a QPG entry and writing
+ * the provided parameters to that entry (offset).
+ * @param node          node number.
+ * @param qpg_cfg       pointer to struct containing qpg configuration
+ */
+int cvmx_helper_pki_set_qpg_entry(int node, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function sets up aura QOS for RED, backpressure and tail-drop.
+ *
+ * @param node       node number.
+ * @param aura       aura to configure.
+ * @param ena_red       enable RED based on [DROP] and [PASS] levels
+ *			1: enable 0:disable
+ * @param pass_thresh   pass threshold for RED.
+ * @param drop_thresh   drop threshold for RED
+ * @param ena_bp        enable backpressure based on [BP] level.
+ *			1:enable 0:disable
+ * @param bp_thresh     backpressure threshold.
+ * @param ena_drop      enable tail drop.
+ *			1:enable 0:disable
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_helper_setup_aura_qos(int node, int aura, bool ena_red, bool ena_drop, u64 pass_thresh,
+			       u64 drop_thresh, bool ena_bp, u64 bp_thresh);
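+
+/*
+ * Illustrative call sketch; the thresholds below are made-up numbers
+ * and are workload dependent. This would enable RED with the given
+ * pass/drop levels while leaving backpressure and tail drop disabled:
+ *
+ *	cvmx_helper_setup_aura_qos(node, aura, true, false,
+ *				   256, 128, false, 0);
+ */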
+
+/**
+ * This function maps the specified bpid to all the auras from which it can
+ * receive backpressure and then maps that bpid to all the channels that
+ * bpid can assert backpressure on.
+ *
+ * @param node          node number.
+ * @param aura          aura number which will back pressure specified bpid.
+ * @param bpid          bpid to map.
+ * @param chl_map       array of channels to map to that bpid.
+ * @param chl_cnt	number of channel/ports to map to that bpid.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_helper_pki_map_aura_chl_bpid(int node, u16 aura, u16 bpid, u16 chl_map[], u16 chl_cnt);
+
+/**
+ * This function sets up the global pool, aura and sso group
+ * resources which application can use between any interfaces
+ * and ports.
+ * @param node          node number
+ * @param gblsch        pointer to struct containing global
+ *                      scheduling parameters.
+ */
+int cvmx_helper_pki_set_gbl_schd(int node, struct cvmx_pki_global_schd *gblsch);
+
+/**
+ * This function sets up scheduling parameters (pool, aura, sso group etc)
+ * of an ipd port.
+ * @param xipd_port     ipd port number
+ * @param prtsch        pointer to struct containing port's
+ *                      scheduling parameters.
+ */
+int cvmx_helper_pki_init_port(int xipd_port, struct cvmx_pki_prt_schd *prtsch);
+
+/**
+ * This function sets up scheduling parameters (pool, aura, sso group etc)
+ * of an interface (all ports/channels on that interface).
+ * @param xiface        interface number with node.
+ * @param intfsch      pointer to struct containing interface
+ *                      scheduling parameters.
+ * @param gblsch       pointer to struct containing global scheduling parameters
+ *                      (can be NULL if not used)
+ */
+int cvmx_helper_pki_init_interface(const int xiface, struct cvmx_pki_intf_schd *intfsch,
+				   struct cvmx_pki_global_schd *gblsch);
+/**
+ * This function gets all the PKI parameters related to that
+ * particular port from hardware.
+ * @param xipd_port     ipd port number to get parameter of
+ * @param port_cfg      pointer to structure where to store read parameters
+ */
+void cvmx_pki_get_port_config(int xipd_port, struct cvmx_pki_port_config *port_cfg);
+
+/**
+ * This function sets all the PKI parameters related to that
+ * particular port in hardware.
+ * @param xipd_port     ipd port number to get parameter of
+ * @param port_cfg      pointer to structure containing port parameters
+ */
+void cvmx_pki_set_port_config(int xipd_port, struct cvmx_pki_port_config *port_cfg);
+
+/**
+ * This function displays all the PKI parameters related to that
+ * particular port.
+ * @param xipd_port      ipd port number to display parameter of
+ */
+void cvmx_pki_show_port_config(int xipd_port);
+
+/**
+ * Modifies maximum frame length to check.
+ * It modifies the global frame length set used by this port; any other
+ * port using the same set will be affected too.
+ * @param xipd_port	ipd port for which to modify max len.
+ * @param max_size	maximum frame length
+ */
+void cvmx_pki_set_max_frm_len(int xipd_port, u32 max_size);
+
+/**
+ * This function sets up all the ports of a particular interface
+ * for the chosen FCS mode (only used for backward compatibility).
+ * New applications can control this via the init_interface calls.
+ * @param node          node number.
+ * @param interface     interface number.
+ * @param nports        number of ports
+ * @param has_fcs       1 -- enable fcs check and fcs strip.
+ *                      0 -- disable fcs check.
+ */
+void cvmx_helper_pki_set_fcs_op(int node, int interface, int nports, int has_fcs);
+
+/**
+ * This function sets the WQE buffer mode of all ports. The first packet data
+ * buffer can reside either in the same buffer as the WQE or in a separate
+ * buffer. If the latter mode is used, make sure software allocates enough
+ * buffers to hold the WQEs separately from the packet data.
+ * @param node	                node number.
+ * @param pkt_outside_wqe	0 = The packet link pointer will be at word [FIRST_SKIP]
+ *				    immediately followed by packet data, in the same buffer
+ *				    as the work queue entry.
+ *				1 = The packet link pointer will be at word [FIRST_SKIP] in a new
+ *				    buffer separate from the work queue entry. Words following the
+ *				    WQE in the same cache line will be zeroed, other lines in the
+ *				    buffer will not be modified and will retain stale data (from the
+ *				    buffer's previous use). This setting may decrease the peak PKI
+ *				    performance by up to half on small packets.
+ */
+void cvmx_helper_pki_set_wqe_mode(int node, bool pkt_outside_wqe);
+
+/**
+ * This function sets the packet mode of all ports and styles to little-endian.
+ * It changes write operations of packet data to L2C to
+ * be in little-endian. Does not change the WQE header format, which is
+ * properly endian neutral.
+ * @param node	                node number.
+ */
+void cvmx_helper_pki_set_little_endian(int node);
+
+void cvmx_helper_pki_set_dflt_pool(int node, int pool, int buffer_size, int buffer_count);
+void cvmx_helper_pki_set_dflt_aura(int node, int aura, int pool, int buffer_count);
+void cvmx_helper_pki_set_dflt_pool_buffer(int node, int buffer_count);
+
+void cvmx_helper_pki_set_dflt_aura_buffer(int node, int buffer_count);
+
+void cvmx_helper_pki_set_dflt_pkind_map(int node, int pkind, int style);
+
+void cvmx_helper_pki_get_dflt_style(int node, struct cvmx_pki_style_config *style_cfg);
+void cvmx_helper_pki_set_dflt_style(int node, struct cvmx_pki_style_config *style_cfg);
+
+void cvmx_helper_pki_get_dflt_qpg(int node, struct cvmx_pki_qpg_config *qpg_cfg);
+void cvmx_helper_pki_set_dflt_qpg(int node, struct cvmx_pki_qpg_config *qpg_cfg);
+
+void cvmx_helper_pki_no_dflt_init(int node);
+
+void cvmx_helper_pki_set_dflt_bp_en(int node, bool bp_en);
+
+void cvmx_pki_dump_wqe(const cvmx_wqe_78xx_t *wqp);
+
+int __cvmx_helper_pki_port_setup(int node, int xipd_port);
+
+int __cvmx_helper_pki_global_setup(int node);
+void cvmx_helper_pki_show_port_config(int xipd_port);
+
+int __cvmx_helper_pki_install_dflt_vlan(int node);
+void __cvmx_helper_pki_set_dflt_ltype_map(int node);
+int cvmx_helper_pki_route_dmac(int node, int style, u64 mac_addr, u64 mac_addr_mask,
+			       int final_style);
+int cvmx_pki_clone_style(int node, int style, u64 cluster_mask);
+void cvmx_helper_pki_modify_prtgrp(int xipd_port, int grp_ok, int grp_bad);
+int cvmx_helper_pki_route_prt_dmac(int xipd_port, u64 mac_addr, u64 mac_addr_mask, int grp);
+
+void cvmx_helper_pki_errata(int node);
+
+#endif /* __CVMX_HELPER_PKI_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-pko.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-pko.h
new file mode 100644
index 0000000000..806102df22
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-pko.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * PKO helper, configuration API
+ */
+
+#ifndef __CVMX_HELPER_PKO_H__
+#define __CVMX_HELPER_PKO_H__
+
+/* CSR typedefs have been moved to cvmx-pko-defs.h */
+
+/**
+ * cvmx_override_pko_queue_priority(int ipd_port, u8 *priorities)
+ * is a function pointer. It is meant to allow
+ * customization of the PKO queue priorities based on the port
+ * number. Users should set this pointer to a function before
+ * calling any cvmx-helper operations.
+ */
+void (*cvmx_override_pko_queue_priority)(int ipd_port, u8 *priorities);
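+
+/*
+ * Illustrative sketch (my_queue_priority is a hypothetical callback):
+ * install a hook that gives all 16 queues of every port equal priority.
+ *
+ *	static void my_queue_priority(int ipd_port, u8 *priorities)
+ *	{
+ *		int i;
+ *
+ *		for (i = 0; i < 16; i++)
+ *			priorities[i] = 8;
+ *	}
+ *
+ *	cvmx_override_pko_queue_priority = my_queue_priority;
+ */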
+
+/**
+ * Gets the fpa pool number of pko pool
+ */
+s64 cvmx_fpa_get_pko_pool(void);
+
+/**
+ * Gets the buffer size of pko pool
+ */
+u64 cvmx_fpa_get_pko_pool_block_size(void);
+
+/**
+ * Gets the buffer count of the pko pool
+ */
+u64 cvmx_fpa_get_pko_pool_buffer_count(void);
+
+int cvmx_helper_pko_init(void);
+
+/*
+ * This function is a no-op
+ * included here for backwards compatibility only.
+ */
+static inline int cvmx_pko_initialize_local(void)
+{
+	return 0;
+}
+
+int __cvmx_helper_pko_drain(void);
+int __cvmx_helper_interface_setup_pko(int interface);
+
+#endif /* __CVMX_HELPER_PKO_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-pko3.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-pko3.h
new file mode 100644
index 0000000000..ca8d848bd1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-pko3.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_HELPER_PKO3_H__
+#define __CVMX_HELPER_PKO3_H__
+
+/*
+ * Initialize PKO3 unit on the current node.
+ *
+ * Covers the common hardware, memory and global configuration.
+ * Per-interface initialization is performed separately.
+ *
+ * @return 0 on success.
+ *
+ */
+int cvmx_helper_pko3_init_global(unsigned int node);
+int __cvmx_helper_pko3_init_global(unsigned int node, u16 gaura);
+
+/**
+ * Initialize a simple interface with a given number of
+ * fair or prioritized queues.
+ * This function will assign one channel per sub-interface.
+ */
+int __cvmx_pko3_config_gen_interface(int xiface, u8 subif, u8 num_queues, bool prioritized);
+
+/*
+ * Configure and initialize PKO3 for an interface
+ *
+ * @param interface is the interface number to configure
+ * @return 0 on success.
+ *
+ */
+int cvmx_helper_pko3_init_interface(int xiface);
+int __cvmx_pko3_helper_dqs_activate(int xiface, int index, bool min_pad);
+
+/**
+ * Uninitialize PKO3 interface
+ *
+ * Release all resources held by PKO for an interface.
+ * The shutdown code is the same for all supported interfaces.
+ */
+int cvmx_helper_pko3_shut_interface(int xiface);
+
+/**
+ * Shutdown PKO3
+ *
+ * Should be called after all interfaces have been shut down on the PKO3.
+ *
+ * Disables the PKO, frees all its buffers.
+ */
+int cvmx_helper_pko3_shutdown(unsigned int node);
+
+/**
+ * Show integrated PKO configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_helper_pko3_config_dump(unsigned int node);
+
+/**
+ * Show integrated PKO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_helper_pko3_stats_dump(unsigned int node);
+
+/**
+ * Clear PKO statistics.
+ *
+ * @param node	   node number
+ */
+void cvmx_helper_pko3_stats_clear(unsigned int node);
+
+#endif /* __CVMX_HELPER_PKO3_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-rgmii.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-rgmii.h
new file mode 100644
index 0000000000..2a206a8827
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-rgmii.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for RGMII/GMII/MII initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_RGMII_H__
+#define __CVMX_HELPER_RGMII_H__
+
+/**
+ * @INTERNAL
+ * Probe RGMII ports and determine the number present
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of RGMII/GMII/MII ports (0-4).
+ */
+int __cvmx_helper_rgmii_probe(int xiface);
+
+/**
+ * Put an RGMII interface in loopback mode. Internal packets sent
+ * out will be received back again on the same port. Externally
+ * received packets will echo back out.
+ *
+ * @param port   IPD port number to loop.
+ */
+void cvmx_helper_rgmii_internal_loopback(int port);
+
+/**
+ * @INTERNAL
+ * Configure all of the ASX, GMX, and PKO registers required
+ * to get RGMII to function on the supplied interface.
+ *
+ * @param xiface PKO Interface to configure (0 or 1)
+ *
+ * @return Zero on success
+ */
+int __cvmx_helper_rgmii_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_gmii_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_rgmii_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_rgmii_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again.
+ *
+ * @param ipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int __cvmx_helper_rgmii_configure_loopback(int ipd_port, int enable_internal, int enable_external);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-sfp.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-sfp.h
new file mode 100644
index 0000000000..6fe55093b2
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-sfp.h
@@ -0,0 +1,437 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions to abstract SFP and QSFP connectors
+ */
+
+#ifndef __CVMX_HELPER_SFP_H__
+#define __CVMX_HELPER_SFP_H__
+
+/**
+ * Maximum size for the SFP EEPROM.  Currently only 96 bytes are used.
+ */
+#define CVMX_SFP_MAX_EEPROM_SIZE 0x100
+
+/**
+ * Default address of sfp EEPROM
+ */
+#define CVMX_SFP_DEFAULT_I2C_ADDR 0x50
+
+/**
+ * Default address of SFP diagnostics chip
+ */
+#define CVMX_SFP_DEFAULT_DIAG_I2C_ADDR 0x51
+
+struct cvmx_fdt_sfp_info;
+struct cvmx_fdt_gpio_info;
+/**
+ * Connector type for module, usually we only see SFP and QSFPP
+ */
+enum cvmx_phy_sfp_conn_type {
+	CVMX_SFP_CONN_GBIC = 0x01,	 /** GBIC */
+	CVMX_SFP_CONN_SFP = 0x03,	 /** SFP/SFP+/SFP28 */
+	CVMX_SFP_CONN_QSFP = 0x0C,	 /** 1G QSFP (obsolete) */
+	CVMX_SFP_CONN_QSFPP = 0x0D,	 /** QSFP+ or later */
+	CVMX_SFP_CONN_QSFP28 = 0x11,	 /** QSFP28 (100Gbps) */
+	CVMX_SFP_CONN_MICRO_QSFP = 0x17, /** Micro QSFP */
+	CVMX_SFP_CONN_QSFP_DD = 0x18,	 /** QSFP-DD Double Density 8X */
+	CVMX_SFP_CONN_SFP_DD = 0x1A,	 /** SFP-DD Double Density 2X */
+};
+
+/**
+ * module type plugged into a SFP/SFP+/QSFP+ port
+ */
+enum cvmx_phy_sfp_mod_type {
+	CVMX_SFP_MOD_UNKNOWN = 0, /** Unknown or unspecified */
+	/** Fiber optic module (LC connector) */
+	CVMX_SFP_MOD_OPTICAL_LC = 0x7,
+	/** Multiple optical */
+	CVMX_SFP_MOD_MULTIPLE_OPTICAL = 0x9,
+	/** Fiber optic module (pigtail, no connector) */
+	CVMX_SFP_MOD_OPTICAL_PIGTAIL = 0xB,
+	CVMX_SFP_MOD_COPPER_PIGTAIL = 0x21, /** copper module */
+	CVMX_SFP_MOD_RJ45 = 0x22,	    /** RJ45 (i.e. 10GBase-T) */
+	/** No separable connector (SFP28/copper) */
+	CVMX_SFP_MOD_NO_SEP_CONN = 0x23,
+	/** MXC 2X16 */
+	CVMX_SFP_MOD_MXC_2X16 = 0x24,
+	/** CS optical connector */
+	CVMX_SFP_MOD_CS_OPTICAL = 0x25,
+	/** Mini CS optical connector */
+	CVMX_SFP_MOD_MINI_CS_OPTICAL = 0x26,
+	/** Unknown/other module type */
+	CVMX_SFP_MOD_OTHER
+};
+
+/** Peak rate supported by SFP cable */
+enum cvmx_phy_sfp_rate {
+	CVMX_SFP_RATE_UNKNOWN, /** Unknown rate */
+	CVMX_SFP_RATE_1G,      /** 1Gbps */
+	CVMX_SFP_RATE_10G,     /** 10Gbps */
+	CVMX_SFP_RATE_25G,     /** 25Gbps */
+	CVMX_SFP_RATE_40G,     /** 40Gbps */
+	CVMX_SFP_RATE_100G     /** 100Gbps */
+};
+
+/**
+ * Cable compliance specification
+ * See table 4-4 from SFF-8024 for the extended specification compliance codes
+ */
+enum cvmx_phy_sfp_cable_ext_compliance {
+	CVMX_SFP_CABLE_UNSPEC = 0,
+	CVMX_SFP_CABLE_100G_25GAUI_C2M_AOC_HIGH_BER = 0x01, /** Active optical cable */
+	CVMX_SFP_CABLE_100G_SR4_25G_SR = 0x2,
+	CVMX_SFP_CABLE_100G_LR4_25G_LR = 0x3,
+	CVMX_SFP_CABLE_100G_ER4_25G_ER = 0x4,
+	CVMX_SFP_CABLE_100G_SR10 = 0x5,
+	CVMX_SFP_CABLE_100G_CWDM4_MSA = 0x6,
+	CVMX_SFP_CABLE_100G_PSM4 = 0x7,
+	CVMX_SFP_CABLE_100G_25GAUI_C2M_ACC_HIGH_BER = 0x8,
+	CVMX_SFP_CABLE_100G_CWDM4 = 0x9,
+	CVMX_SFP_CABLE_100G_CR4_25G_CR_CA_L = 0xB,
+	CVMX_SFP_CABLE_25G_CR_CA_S = 0xC,
+	CVMX_SFP_CABLE_25G_CR_CA_N = 0xD,
+	CVMX_SFP_CABLE_40G_ER4 = 0x10,
+	CVMX_SFP_CABLE_4X10G_SR = 0x11,
+	CVMX_SFP_CABLE_40G_PSM4 = 0x12,
+	CVMX_SFP_CABLE_G959_1_P1I1_2D1 = 0x13,
+	CVMX_SFP_CABLE_G959_1_P1S1_2D2 = 0x14,
+	CVMX_SFP_CABLE_G959_1_P1L1_2D2 = 0x15,
+	CVMX_SFP_CABLE_10GBASE_T = 0x16,
+	CVMX_SFP_CABLE_100G_CLR4 = 0x17,
+	CVMX_SFP_CABLE_100G_25GAUI_C2M_AOC_LOW_BER = 0x18,
+	CVMX_SFP_CABLE_100G_25GAUI_C2M_ACC_LOW_BER = 0x19,
+	CVMX_SFP_CABLE_100G_2_LAMBDA_DWDM = 0x1a,
+	CVMX_SFP_CABLE_100G_1550NM_WDM = 0x1b,
+	CVMX_SFP_CABLE_10GBASE_T_SR = 0x1c,
+	CVMX_SFP_CABLE_5GBASE_T = 0x1d,
+	CVMX_SFP_CABLE_2_5GBASE_T = 0x1e,
+	CVMX_SFP_CABLE_40G_SWDM4 = 0x1f,
+	CVMX_SFP_CABLE_100G_SWDM4 = 0x20,
+	CVMX_SFP_CABLE_100G_PAM4_BIDI = 0x21,
+	CVMX_SFP_CABLE_100G_4WDM_10_FEC_HOST = 0x22,
+	CVMX_SFP_CABLE_100G_4WDM_20_FEC_HOST = 0x23,
+	CVMX_SFP_CABLE_100G_4WDM_40_FEC_HOST = 0x24,
+	CVMX_SFP_CABLE_100GBASE_DR_CAUI4_NO_FEC = 0x25,
+	CVMX_SFP_CABLE_100G_FR_CAUI4_NO_FEC = 0x26,
+	CVMX_SFP_CABLE_100G_LR_CAUI4_NO_FEC = 0x27,
+	CVMX_SFP_CABLE_ACTIVE_COPPER_50_100_200GAUI_LOW_BER = 0x30,
+	CVMX_SFP_CABLE_ACTIVE_OPTICAL_50_100_200GAUI_LOW_BER = 0x31,
+	CVMX_SFP_CABLE_ACTIVE_COPPER_50_100_200GAUI_HI_BER = 0x32,
+	CVMX_SFP_CABLE_ACTIVE_OPTICAL_50_100_200GAUI_HI_BER = 0x33,
+	CVMX_SFP_CABLE_50_100_200G_CR = 0x40,
+	CVMX_SFP_CABLE_50_100_200G_SR = 0x41,
+	CVMX_SFP_CABLE_50GBASE_FR_200GBASE_DR4 = 0x42,
+	CVMX_SFP_CABLE_200GBASE_FR4 = 0x43,
+	CVMX_SFP_CABLE_200G_1550NM_PSM4 = 0x44,
+	CVMX_SFP_CABLE_50GBASE_LR = 0x45,
+	CVMX_SFP_CABLE_200GBASE_LR4 = 0x46,
+	CVMX_SFP_CABLE_64GFC_EA = 0x50,
+	CVMX_SFP_CABLE_64GFC_SW = 0x51,
+	CVMX_SFP_CABLE_64GFC_LW = 0x52,
+	CVMX_SFP_CABLE_128GFC_EA = 0x53,
+	CVMX_SFP_CABLE_128GFC_SW = 0x54,
+	CVMX_SFP_CABLE_128GFC_LW = 0x55,
+
+};
+
+/** Optical modes module is compliant with */
+enum cvmx_phy_sfp_10g_eth_compliance {
+	CVMX_SFP_CABLE_10GBASE_ER = 0x80,  /** 10G ER */
+	CVMX_SFP_CABLE_10GBASE_LRM = 0x40, /** 10G LRM */
+	CVMX_SFP_CABLE_10GBASE_LR = 0x20,  /** 10G LR */
+	CVMX_SFP_CABLE_10GBASE_SR = 0x10   /** 10G SR */
+};
+
+/** Diagnostic ASIC compatibility */
+enum cvmx_phy_sfp_sff_8472_diag_rev {
+	CVMX_SFP_SFF_8472_NO_DIAG = 0x00,
+	CVMX_SFP_SFF_8472_REV_9_3 = 0x01,
+	CVMX_SFP_SFF_8472_REV_9_5 = 0x02,
+	CVMX_SFP_SFF_8472_REV_10_2 = 0x03,
+	CVMX_SFP_SFF_8472_REV_10_4 = 0x04,
+	CVMX_SFP_SFF_8472_REV_11_0 = 0x05,
+	CVMX_SFP_SFF_8472_REV_11_3 = 0x06,
+	CVMX_SFP_SFF_8472_REV_11_4 = 0x07,
+	CVMX_SFP_SFF_8472_REV_12_0 = 0x08,
+	CVMX_SFP_SFF_8472_REV_UNALLOCATED = 0xff
+};
+
+/**
+ * Data structure describing the current SFP or QSFP EEPROM
+ */
+struct cvmx_sfp_mod_info {
+	enum cvmx_phy_sfp_conn_type conn_type; /** Connector type */
+	enum cvmx_phy_sfp_mod_type mod_type;   /** Module type */
+	enum cvmx_phy_sfp_rate rate;	       /** Rate of module */
+	/** 10G Ethernet Compliance codes (logical OR) */
+	enum cvmx_phy_sfp_10g_eth_compliance eth_comp;
+	/** Extended Cable compliance */
+	enum cvmx_phy_sfp_cable_ext_compliance cable_comp;
+	u8 vendor_name[17]; /** Module vendor name */
+	u8 vendor_oui[3];   /** vendor OUI */
+	u8 vendor_pn[17];   /** Vendor part number */
+	u8 vendor_rev[5];   /** Vendor revision */
+	u8 vendor_sn[17];   /** Vendor serial number */
+	u8 date_code[9];    /** Date code */
+	bool valid;	    /** True if module is valid */
+	bool active_cable;  /** False for passive copper */
+	bool copper_cable;  /** True if cable is copper */
+	/** True if module is limiting (i.e. not passive copper) */
+	bool limiting;
+	/** Maximum length of copper cable in meters */
+	int max_copper_cable_len;
+	/** Max single mode cable length in meters */
+	int max_single_mode_cable_length;
+	/** Max 50um OM2 cable length */
+	int max_50um_om2_cable_length;
+	/** Max 62.5um OM1 cable length */
+	int max_62_5um_om1_cable_length;
+	/** Max 50um OM4 cable length */
+	int max_50um_om4_cable_length;
+	/** Max 50um OM3 cable length */
+	int max_50um_om3_cable_length;
+	/** Minimum bitrate in Mbps */
+	int bitrate_min;
+	/** Maximum bitrate in Mbps */
+	int bitrate_max;
+	/**
+	 * Set to true if forward error correction is required,
+	 * for example, a 25GBase-CR CA-S cable.
+	 *
+	 * FEC should only be disabled at 25G with CA-N cables.  FEC is required
+	 * with 5M and longer cables.
+	 */
+	bool fec_required;
+	/** True if RX output is linear */
+	bool linear_rx_output;
+	/** Power level, can be 1, 2 or 3 */
+	int power_level;
+	/** False if conventional cooling is used, true for active cooling */
+	bool cooled_laser;
+	/** True if internal retimer or clock and data recovery circuit */
+	bool internal_cdr;
+	/** True if LoS is implemented */
+	bool los_implemented;
+	/** True if LoS is inverted from the standard */
+	bool los_inverted;
+	/** True if TX_FAULT is implemented */
+	bool tx_fault_implemented;
+	/** True if TX_DISABLE is implemented */
+	bool tx_disable_implemented;
+	/** True if RATE_SELECT is implemented */
+	bool rate_select_implemented;
+	/** True if tuneable transmitter technology is used */
+	bool tuneable_transmitter;
+	/** True if receiver decision threshold is implemented */
+	bool rx_decision_threshold_implemented;
+	/** True if diagnostic monitoring present */
+	bool diag_monitoring;
+	/** True if diagnostic address 0x7f is used for selecting the page */
+	bool diag_paging;
+	/** Diagnostic feature revision */
+	enum cvmx_phy_sfp_sff_8472_diag_rev diag_rev;
+	/** True if an address change sequence is required for diagnostics */
+	bool diag_addr_change_required;
+	/** True if RX power is averaged, false if OMA */
+	bool diag_rx_power_averaged;
+	/** True if diagnostics are externally calibrated */
+	bool diag_externally_calibrated;
+	/** True if diagnostics are internally calibrated */
+	bool diag_internally_calibrated;
+	/** True if soft rate select control implemented per SFF-8431 */
+	bool diag_soft_rate_select_control;
+	/** True if application select control implemented per SFF-8079 */
+	bool diag_app_select_control;
+	/** True if soft RATE_SELECT control and monitoring implemented */
+	bool diag_soft_rate_select_implemented;
+	/** True if soft RX_LOS monitoring implemented */
+	bool diag_soft_rx_los_implemented;
+	/** True if soft TX_FAULT monitoring implemented */
+	bool diag_soft_tx_fault_implemented;
+	/** True if soft TX_DISABLE control and monitoring implemented */
+	bool diag_soft_tx_disable_implemented;
+	/** True if alarm/warning flags implemented */
+	bool diag_alarm_warning_flags_implemented;
+};
+
+/**
+ * Reads the SFP EEPROM using the i2c bus
+ *
+ * @param[out]	buffer		Buffer to store SFP EEPROM data in
+ *				The buffer should be CVMX_SFP_MAX_EEPROM_SIZE bytes.
+ * @param	i2c_bus		i2c bus number to read from for SFP port
+ * @param	i2c_addr	i2c address to use, 0 for default
+ *
+ * @return	-1 if invalid bus or i2c read error, 0 for success
+ */
+int cvmx_phy_sfp_read_i2c_eeprom(u8 *buffer, int i2c_bus, int i2c_addr);
+
+/**
+ * Reads the SFP/SFP+/QSFP EEPROM and outputs the type of module or cable
+ * plugged in
+ *
+ * @param[out]	sfp_info	Info about SFP module
+ * @param[in]	buffer		SFP EEPROM buffer to parse
+ *
+ * @return	0 on success, -1 if error reading EEPROM or if EEPROM corrupt
+ */
+int cvmx_phy_sfp_parse_eeprom(struct cvmx_sfp_mod_info *sfp_info, const u8 *buffer);
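+
+/*
+ * Illustrative sketch (the i2c bus number 0 is a made-up example):
+ * read a module EEPROM, parse it and dump the decoded fields:
+ *
+ *	u8 eeprom[CVMX_SFP_MAX_EEPROM_SIZE];
+ *	struct cvmx_sfp_mod_info info;
+ *
+ *	if (!cvmx_phy_sfp_read_i2c_eeprom(eeprom, 0,
+ *					  CVMX_SFP_DEFAULT_I2C_ADDR) &&
+ *	    !cvmx_phy_sfp_parse_eeprom(&info, eeprom))
+ *		cvmx_phy_sfp_print_info(&info);
+ */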
+
+/**
+ * Prints out information about a SFP/QSFP device
+ *
+ * @param[in]	sfp_info	data structure to print
+ */
+void cvmx_phy_sfp_print_info(const struct cvmx_sfp_mod_info *sfp_info);
+
+/**
+ * Reads and parses SFP/QSFP EEPROM
+ *
+ * @param	sfp	sfp handle to read
+ *
+ * @return	0 for success, -1 on error.
+ */
+int cvmx_sfp_read_i2c_eeprom(struct cvmx_fdt_sfp_info *sfp);
+
+/**
+ * Returns the information about a SFP/QSFP device
+ *
+ * @param       sfp             sfp handle
+ *
+ * @return      Pointer to the SFP module info data structure
+ */
+const struct cvmx_sfp_mod_info *cvmx_phy_get_sfp_mod_info(const struct cvmx_fdt_sfp_info *sfp);
+
+/**
+ * Function called to check and return the status of the mod_abs pin or
+ * mod_pres pin for QSFPs.
+ *
+ * @param	sfp	Handle to SFP information.
+ * @param	data	User-defined data passed to the function
+ *
+ * @return	0 if absent, 1 if present, -1 on error
+ */
+int cvmx_sfp_check_mod_abs(struct cvmx_fdt_sfp_info *sfp, void *data);
+
+/**
+ * Registers a function to be called to check mod_abs/mod_pres for a SFP/QSFP
+ * slot.
+ *
+ * @param	sfp		Handle to SFP data structure
+ * @param	check_mod_abs	Function to be called or NULL to remove
+ * @param	mod_abs_data	User-defined data to be passed to check_mod_abs
+ *
+ * @return	0 for success
+ */
+int cvmx_sfp_register_check_mod_abs(struct cvmx_fdt_sfp_info *sfp,
+				    int (*check_mod_abs)(struct cvmx_fdt_sfp_info *sfp, void *data),
+				    void *mod_abs_data);
+
+/**
+ * Registers a function to be called whenever the mod_abs/mod_pres signal
+ * changes.
+ *
+ * @param	sfp		Handle to SFP data structure
+ * @param	mod_abs_changed	Function called whenever mod_abs is changed
+ *				or NULL to remove.
+ * @param	mod_abs_changed_data	User-defined data passed to
+ *					mod_abs_changed
+ *
+ * @return	0 for success
+ */
+int cvmx_sfp_register_mod_abs_changed(struct cvmx_fdt_sfp_info *sfp,
+				      int (*mod_abs_changed)(struct cvmx_fdt_sfp_info *sfp, int val,
+							     void *data),
+				      void *mod_abs_changed_data);
+
+/**
+ * Function called to check and return the status of the tx_fault pin
+ *
+ * @param	sfp	Handle to SFP information.
+ * @param	data	User-defined data passed to the function
+ *
+ * @return	0 if signal present, 1 if signal absent, -1 on error
+ */
+int cvmx_sfp_check_tx_fault(struct cvmx_fdt_sfp_info *sfp, void *data);
+
+/**
+ * Function called to check and return the status of the rx_los pin
+ *
+ * @param	sfp	Handle to SFP information.
+ * @param	data	User-defined data passed to the function
+ *
+ * @return	0 if signal present, 1 if signal absent, -1 on error
+ */
+int cvmx_sfp_check_rx_los(struct cvmx_fdt_sfp_info *sfp, void *data);
+
+/**
+ * Registers a function to be called whenever rx_los changes
+ *
+ * @param	sfp		Handle to SFP data structure
+ * @param	rx_los_changed	Function to be called when rx_los changes
+ *				or NULL to remove the function
+ * @param	rx_los_changed_data	User-defined data passed to
+ *					rx_los_changed
+ *
+ * @return	0 for success
+ */
+int cvmx_sfp_register_rx_los_changed(struct cvmx_fdt_sfp_info *sfp,
+				     int (*rx_los_changed)(struct cvmx_fdt_sfp_info *sfp, int val,
+							   void *data),
+				     void *rx_los_changed_data);
+
+/**
+ * Parses the device tree for SFP and QSFP slots
+ *
+ * @param	fdt_addr	Address of flat device-tree
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_sfp_parse_device_tree(const void *fdt_addr);
+
+/**
+ * Given an IPD port number find the corresponding SFP or QSFP slot
+ *
+ * @param	ipd_port	IPD port number to search for
+ *
+ * @return	pointer to SFP data structure or NULL if not found
+ */
+struct cvmx_fdt_sfp_info *cvmx_sfp_find_slot_by_port(int ipd_port);
+
+/**
+ * Given a fdt node offset find the corresponding SFP or QSFP slot
+ *
+ * @param	of_offset	flat device tree node offset
+ *
+ * @return	pointer to SFP data structure or NULL if not found
+ */
+struct cvmx_fdt_sfp_info *cvmx_sfp_find_slot_by_fdt_node(int of_offset);
+
+/**
+ * Reads the EEPROMs of all SFP modules.
+ *
+ * @return 0 for success
+ */
+int cvmx_sfp_read_all_modules(void);
+
+/**
+ * Validates if the module is correct for the specified port
+ *
+ * @param[in]	sfp	SFP port to check
+ * @param	mode	interface mode
+ *
+ * @return	true if module is valid, false if invalid
+ * NOTE: This will also toggle the error LED, if present
+ */
+bool cvmx_sfp_validate_module(struct cvmx_fdt_sfp_info *sfp, int mode);
+
+/**
+ * Prints information about the SFP module
+ *
+ * @param[in]	sfp	sfp data structure
+ */
+void cvmx_sfp_print_info(const struct cvmx_fdt_sfp_info *sfp);
+
+#endif /* __CVMX_HELPER_SFP_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-sgmii.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-sgmii.h
new file mode 100644
index 0000000000..c5110c9513
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-sgmii.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for SGMII initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_SGMII_H__
+#define __CVMX_HELPER_SGMII_H__
+
+/**
+ * @INTERNAL
+ * Probe an SGMII interface and determine the number of ports
+ * connected to it. The SGMII interface should still be down after
+ * this call.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_sgmii_probe(int xiface);
+int __cvmx_helper_sgmii_enumerate(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an SGMII interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_sgmii_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_sgmii_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_sgmii_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again.
+ *
+ * @param ipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int __cvmx_helper_sgmii_configure_loopback(int ipd_port, int enable_internal, int enable_external);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-spi.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-spi.h
new file mode 100644
index 0000000000..cae72f2172
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-spi.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for SPI initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_SPI_H__
+#define __CVMX_HELPER_SPI_H__
+
+#include "cvmx-helper.h"
+
+/**
+ * @INTERNAL
+ * Probe a SPI interface and determine the number of ports
+ * connected to it. The SPI interface should still be down after
+ * this call.
+ *
+ * @param interface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_spi_probe(int interface);
+int __cvmx_helper_spi_enumerate(int interface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable a SPI interface. After this call packet I/O
+ * should be fully functional. This is called with IPD enabled but
+ * PKO disabled.
+ *
+ * @param interface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_spi_enable(int interface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_spi_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_spi_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * Sets the spi timeout in config data
+ * @param timeout value
+ */
+void cvmx_spi_config_set_timeout(int timeout);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-srio.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-srio.h
new file mode 100644
index 0000000000..2b7571dced
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-srio.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for SRIO initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_SRIO_H__
+#define __CVMX_HELPER_SRIO_H__
+
+/**
+ * @INTERNAL
+ * Convert interface number to sRIO link number
+ * per SoC model.
+ *
+ * @param xiface Interface to convert
+ *
+ * @return sRIO link number
+ */
+int __cvmx_helper_srio_port(int xiface);
+
+/**
+ * @INTERNAL
+ * Probe an SRIO interface and determine the number of ports
+ * connected to it. The SRIO interface should still be down after
+ * this call.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_srio_probe(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable an SRIO interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_srio_enable(int xiface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by SRIO link status.
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_srio_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_srio_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-util.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-util.h
new file mode 100644
index 0000000000..cf98eaeba4
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-util.h
@@ -0,0 +1,412 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_HELPER_UTIL_H__
+#define __CVMX_HELPER_UTIL_H__
+
+#include "cvmx-mio-defs.h"
+#include "cvmx-helper.h"
+#include "cvmx-fpa.h"
+
+typedef char cvmx_pknd_t;
+typedef char cvmx_bpid_t;
+
+#define CVMX_INVALID_PKND ((cvmx_pknd_t)-1)
+#define CVMX_INVALID_BPID ((cvmx_bpid_t)-1)
+#define CVMX_MAX_PKND	  ((cvmx_pknd_t)64)
+#define CVMX_MAX_BPID	  ((cvmx_bpid_t)64)
+
+#define CVMX_HELPER_MAX_IFACE 11
+#define CVMX_HELPER_MAX_PORTS 16
+
+/* Maximum range for normalized (a.k.a. IPD) port numbers (12-bit field) */
+#define CVMX_PKO3_IPD_NUM_MAX 0x1000 /* FIXME: take it from someplace else? */
+#define CVMX_PKO3_DQ_NUM_MAX  0x400  /* 78xx has 1024 queues */
+
+#define CVMX_PKO3_IPD_PORT_NULL (CVMX_PKO3_IPD_NUM_MAX - 1)
+#define CVMX_PKO3_IPD_PORT_LOOP 0
+
+struct cvmx_xport {
+	int node;
+	int port;
+};
+
+typedef struct cvmx_xport cvmx_xport_t;
+
+static inline struct cvmx_xport cvmx_helper_ipd_port_to_xport(int ipd_port)
+{
+	struct cvmx_xport r;
+
+	r.port = ipd_port & (CVMX_PKO3_IPD_NUM_MAX - 1);
+	r.node = (ipd_port >> 12) & CVMX_NODE_MASK;
+	return r;
+}
+
+static inline int cvmx_helper_node_to_ipd_port(int node, int index)
+{
+	return (node << 12) + index;
+}
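+
+/*
+ * Usage sketch (illustrative values only): the port index lives in the
+ * low 12 bits of the IPD port and the node in the bits above it (masked
+ * by CVMX_NODE_MASK):
+ *
+ *   int ipd_port = cvmx_helper_node_to_ipd_port(1, 5);   // 0x1005
+ *   struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+ *   // xp.node == 1, xp.port == 5
+ */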
+
+struct cvmx_xdq {
+	int node;
+	int queue;
+};
+
+typedef struct cvmx_xdq cvmx_xdq_t;
+
+static inline struct cvmx_xdq cvmx_helper_queue_to_xdq(int queue)
+{
+	struct cvmx_xdq r;
+
+	r.queue = queue & (CVMX_PKO3_DQ_NUM_MAX - 1);
+	r.node = (queue >> 10) & CVMX_NODE_MASK;
+	return r;
+}
+
+static inline int cvmx_helper_node_to_dq(int node, int queue)
+{
+	return (node << 10) + queue;
+}
+
+struct cvmx_xiface {
+	int node;
+	int interface;
+};
+
+typedef struct cvmx_xiface cvmx_xiface_t;
+
+/**
+ * Return node and interface number from XIFACE.
+ *
+ * @param xiface interface with node information
+ *
+ * @return struct that contains node and interface number.
+ */
+static inline struct cvmx_xiface cvmx_helper_xiface_to_node_interface(int xiface)
+{
+	cvmx_xiface_t interface_node;
+
+	/*
+	 * If the magic number 0xde0000 is not present in the
+	 * interface, then assume the local node.
+	 */
+
+	if (((xiface >> 0x8) & 0xff) == 0xde) {
+		interface_node.node = (xiface >> 16) & CVMX_NODE_MASK;
+		interface_node.interface = xiface & 0xff;
+	} else {
+		interface_node.node = cvmx_get_node_num();
+		interface_node.interface = xiface & 0xff;
+	}
+	return interface_node;
+}
+
+/* Used internally only */
+static inline bool __cvmx_helper_xiface_is_null(int xiface)
+{
+	return (xiface & 0xff) == 0xff;
+}
+
+#define __CVMX_XIFACE_NULL 0xff
+
+/**
+ * Return interface with magic number and node information (XIFACE)
+ *
+ * @param node       node of the interface referred to
+ * @param interface  interface to use.
+ *
+ * @return Encoded xiface value
+ */
+static inline int cvmx_helper_node_interface_to_xiface(int node, int interface)
+{
+	return ((node & CVMX_NODE_MASK) << 16) | (0xde << 8) | (interface & 0xff);
+}
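+
+/*
+ * Usage sketch (illustrative values only): the 0xde marker in bits 15:8
+ * tells cvmx_helper_xiface_to_node_interface() that the node field is
+ * valid:
+ *
+ *   int xiface = cvmx_helper_node_interface_to_xiface(1, 2);  // 0x1de02
+ *   struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+ *   // xi.node == 1, xi.interface == 2
+ */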
+
+/**
+ * Free the pip packet buffers contained in a work queue entry.
+ * The work queue entry is not freed.
+ *
+ * @param work   Work queue entry with packet to free
+ */
+static inline void cvmx_helper_free_pip_pkt_data(cvmx_wqe_t *work)
+{
+	u64 number_buffers;
+	cvmx_buf_ptr_t buffer_ptr;
+	cvmx_buf_ptr_t next_buffer_ptr;
+	u64 start_of_buffer;
+
+	number_buffers = work->word2.s.bufs;
+	if (number_buffers == 0)
+		return;
+
+	buffer_ptr = work->packet_ptr;
+
+	/*
+	 * Since the number of buffers is not zero, we know this is not a
+	 * dynamic short packet. We need to check if it is a packet received
+	 * with IPD_CTL_STATUS[NO_WPTR]. If this is true, we need to free all
+	 * buffers except for the first one. The caller doesn't expect their
+	 * WQE pointer to be freed.
+	 */
+	start_of_buffer = ((buffer_ptr.s.addr >> 7) - buffer_ptr.s.back) << 7;
+	if (cvmx_ptr_to_phys(work) == start_of_buffer) {
+		next_buffer_ptr = *(cvmx_buf_ptr_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr - 8);
+		buffer_ptr = next_buffer_ptr;
+		number_buffers--;
+	}
+
+	while (number_buffers--) {
+		/* Remember the back pointer is in cache lines, not 64bit words */
+		start_of_buffer = ((buffer_ptr.s.addr >> 7) - buffer_ptr.s.back) << 7;
+		/* Read pointer to next buffer before we free the current buffer. */
+		next_buffer_ptr = *(cvmx_buf_ptr_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr - 8);
+		cvmx_fpa_free(cvmx_phys_to_ptr(start_of_buffer), buffer_ptr.s.pool, 0);
+		buffer_ptr = next_buffer_ptr;
+	}
+}
+
+/**
+ * Free the PKI packet buffers contained in a work queue entry.
+ * If the first packet buffer contains the WQE, the WQE gets freed too,
+ * so do not access the WQE after calling this function.
+ * This function assumes that the buffers to be freed come from a
+ * naturally aligned pool/aura and does not use the don't-write-back
+ * (DWB) option.
+ *
+ * @param work   Work queue entry with packet to free
+ */
+static inline void cvmx_helper_free_pki_pkt_data(cvmx_wqe_t *work)
+{
+	u64 number_buffers;
+	u64 start_of_buffer;
+	cvmx_buf_ptr_pki_t next_buffer_ptr;
+	cvmx_buf_ptr_pki_t buffer_ptr;
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return;
+
+	/* Make sure errata pki-20776 has been applied */
+	cvmx_wqe_pki_errata_20776(work);
+	buffer_ptr = wqe->packet_ptr;
+	number_buffers = cvmx_wqe_get_bufs(work);
+
+	while (number_buffers--) {
+		/* FIXME: change WQE function prototype */
+		unsigned int x = cvmx_wqe_get_aura(work);
+		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(x >> 10, x & 0x3ff);
+		/* XXX: assumes the buffer is cache-line aligned and in naturally aligned mode */
+		start_of_buffer = (buffer_ptr.addr >> 7) << 7;
+		/* Read pointer to next buffer before we free the current buffer. */
+		next_buffer_ptr = *(cvmx_buf_ptr_pki_t *)cvmx_phys_to_ptr(buffer_ptr.addr - 8);
+		/* FPA AURA comes from WQE, includes node */
+		cvmx_fpa3_free(cvmx_phys_to_ptr(start_of_buffer), aura, 0);
+		buffer_ptr = next_buffer_ptr;
+	}
+}
+
+/**
+ * Free the PKI WQE entry buffer.
+ * If the WQE buffer also contains the first packet buffer, the WQE does
+ * not get freed here.
+ * This function assumes that the buffers to be freed come from a
+ * naturally aligned pool/aura and does not use the don't-write-back
+ * (DWB) option.
+ *
+ * @param work   Work queue entry to free
+ */
+static inline void cvmx_wqe_pki_free(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+	unsigned int x;
+	cvmx_fpa3_gaura_t aura;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return;
+
+	/* Do nothing if the first packet buffer shares WQE buffer */
+	if (!wqe->packet_ptr.packet_outside_wqe)
+		return;
+
+	/* FIXME change WQE function prototype */
+	x = cvmx_wqe_get_aura(work);
+	aura = __cvmx_fpa3_gaura(x >> 10, x & 0x3ff);
+
+	cvmx_fpa3_free(work, aura, 0);
+}
+
+/**
+ * Convert an interface mode into a human readable string
+ *
+ * @param mode   Mode to convert
+ *
+ * @return String
+ */
+const char *cvmx_helper_interface_mode_to_string(cvmx_helper_interface_mode_t mode);
+
+/**
+ * Debug routine to dump the packet structure to the console
+ *
+ * @param work   Work queue entry containing the packet to dump
+ * @return Zero on success
+ */
+int cvmx_helper_dump_packet(cvmx_wqe_t *work);
+
+/**
+ * Get the version of the CVMX libraries.
+ *
+ * @return Version string. Note this buffer is allocated statically
+ *         and will be shared by all callers.
+ */
+const char *cvmx_helper_get_version(void);
+
+/**
+ * @INTERNAL
+ * Setup the common GMX settings that determine the number of
+ * ports. These settings apply to almost all configurations of all
+ * chips.
+ *
+ * @param xiface Interface to configure
+ * @param num_ports Number of ports on the interface
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_setup_gmx(int xiface, int num_ports);
+
+/**
+ * @INTERNAL
+ * Get the number of pko_ports on an interface.
+ *
+ * @param interface Interface to query
+ *
+ * @return the number of pko_ports on the interface.
+ */
+int __cvmx_helper_get_num_pko_ports(int interface);
+
+/**
+ * Returns the IPD port number for a port on the given
+ * interface.
+ *
+ * @param interface Interface to use
+ * @param port      Port on the interface
+ *
+ * @return IPD port number
+ */
+int cvmx_helper_get_ipd_port(int interface, int port);
+
+/**
+ * Returns the PKO port number for a port on the given interface.
+ * This is the base pko_port for o68 and the ipd_port for older models.
+ *
+ * @param interface Interface to use
+ * @param port      Port on the interface
+ *
+ * @return PKO port number, or -1 on error.
+ */
+int cvmx_helper_get_pko_port(int interface, int port);
+
+/**
+ * Returns the IPD/PKO port number for the first port on the given
+ * interface.
+ *
+ * @param interface Interface to use
+ *
+ * @return IPD/PKO port number
+ */
+static inline int cvmx_helper_get_first_ipd_port(int interface)
+{
+	return cvmx_helper_get_ipd_port(interface, 0);
+}
+
+int cvmx_helper_ports_on_interface(int interface);
+
+/**
+ * Returns the IPD/PKO port number for the last port on the given
+ * interface.
+ *
+ * @param interface Interface to use
+ *
+ * @return IPD/PKO port number
+ *
+ * Note: for o68, the last ipd port on an interface does not always equal
+ * the first port plus the number of ports, as the ipd ports are not
+ * contiguous in some cases, e.g., SGMII.
+ *
+ * Note: code that assumes contiguous ipd port numbers needs to be aware
+ * of this.
+ */
+static inline int cvmx_helper_get_last_ipd_port(int interface)
+{
+	return cvmx_helper_get_ipd_port(interface, cvmx_helper_ports_on_interface(interface) - 1);
+}
+
+/**
+ * Free the packet buffers contained in a work queue entry.
+ * The work queue entry is not freed.
+ * Note that this function will not free the work queue entry
+ * even if it contains a non-redundant data packet, and hence
+ * it is not really comparable to how the PKO would free packet
+ * buffers if requested.
+ *
+ * @param work   Work queue entry with packet to free
+ */
+void cvmx_helper_free_packet_data(cvmx_wqe_t *work);
+
+/**
+ * Returns the interface number for an IPD/PKO port number.
+ *
+ * @param ipd_port IPD/PKO port number
+ *
+ * @return Interface number
+ */
+int cvmx_helper_get_interface_num(int ipd_port);
+
+/**
+ * Returns the interface index number for an IPD/PKO port
+ * number.
+ *
+ * @param ipd_port IPD/PKO port number
+ *
+ * @return Interface index number
+ */
+int cvmx_helper_get_interface_index_num(int ipd_port);
+
+/**
+ * Get port kind for a given port in an interface.
+ *
+ * @param xiface  Interface
+ * @param index   index of the port in the interface
+ *
+ * @return port kind on success, or -1 on failure
+ */
+int cvmx_helper_get_pknd(int xiface, int index);
+
+/**
+ * Get bpid for a given port in an interface.
+ *
+ * @param interface  Interface
+ * @param port       index of the port in the interface
+ *
+ * @return bpid on success, or -1 on failure
+ */
+int cvmx_helper_get_bpid(int interface, int port);
+
+/**
+ * Internal functions.
+ */
+int __cvmx_helper_post_init_interfaces(void);
+int cvmx_helper_setup_red(int pass_thresh, int drop_thresh);
+void cvmx_helper_show_stats(int port);
+
+/*
+ * Return number of array elements
+ */
+#define NUM_ELEMENTS(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+/**
+ * Prints out a buffer with the address, hex bytes, and ASCII
+ *
+ * @param	addr	Start address to print on the left
+ * @param[in]	buffer	array of bytes to print
+ * @param	count	Number of bytes to print
+ */
+void cvmx_print_buffer_u8(unsigned int addr, const u8 *buffer, size_t count);
+
+#endif /* __CVMX_HELPER_UTIL_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper-xaui.h b/arch/mips/mach-octeon/include/mach/cvmx-helper-xaui.h
new file mode 100644
index 0000000000..6ff4576f23
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper-xaui.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Functions for XAUI initialization, configuration,
+ * and monitoring.
+ */
+
+#ifndef __CVMX_HELPER_XAUI_H__
+#define __CVMX_HELPER_XAUI_H__
+
+/**
+ * @INTERNAL
+ * Probe a XAUI interface and determine the number of ports
+ * connected to it. The XAUI interface should still be down
+ * after this call.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Number of ports on the interface. Zero to disable.
+ */
+int __cvmx_helper_xaui_probe(int xiface);
+int __cvmx_helper_xaui_enumerate(int xiface);
+
+/**
+ * @INTERNAL
+ * Bring up and enable a XAUI interface. After this call packet
+ * I/O should be fully functional. This is called with IPD
+ * enabled but PKO disabled.
+ *
+ * @param xiface Interface to bring up
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_xaui_enable(int xiface);
+
+/**
+ * Retrain XAUI interface.
+ *
+ * GMX is disabled as part of retraining.
+ * While GMX is disabled, new received packets are dropped.
+ * If GMX was in the middle of receiving a packet when disabled,
+ * that packet will be received before GMX idles.
+ * Transmitted packets are buffered normally, but not sent.
+ * If GMX was in the middle of transmitting a packet when disabled,
+ * that packet will be transmitted before GMX idles.
+ *
+ * @param interface Interface to retrain
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_xaui_link_retrain(int interface);
+
+/**
+ * Reinitialize XAUI interface.  Does a probe without changing the hardware
+ * state.
+ *
+ * @param interface	Interface to reinitialize
+ *
+ * @return	0 on success, negative on failure
+ */
+int cvmx_helper_xaui_link_reinit(int interface);
+
+/**
+ * @INTERNAL
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t __cvmx_helper_xaui_link_get(int ipd_port);
+
+/**
+ * @INTERNAL
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_xaui_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again.
+ *
+ * @param ipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int __cvmx_helper_xaui_configure_loopback(int ipd_port, int enable_internal, int enable_external);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-helper.h b/arch/mips/mach-octeon/include/mach/cvmx-helper.h
new file mode 100644
index 0000000000..b82e201269
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-helper.h
@@ -0,0 +1,565 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions for common, but complicated tasks.
+ */
+
+#ifndef __CVMX_HELPER_H__
+#define __CVMX_HELPER_H__
+
+#include "cvmx-wqe.h"
+
+/* Max number of GMXX */
+#define CVMX_HELPER_MAX_GMX                                                                        \
+	(OCTEON_IS_MODEL(OCTEON_CN78XX) ?                                                          \
+		       6 :                                                                               \
+		       (OCTEON_IS_MODEL(OCTEON_CN68XX) ?                                                 \
+				5 :                                                                      \
+				(OCTEON_IS_MODEL(OCTEON_CN73XX) ?                                        \
+					 3 :                                                             \
+					 (OCTEON_IS_MODEL(OCTEON_CNF75XX) ? 1 : 2))))
+
+/* Do not change: CVMX_HELPER_WRITE_CSR() assumes this value */
+#define CVMX_HELPER_CSR_INIT0	  0
+#define CVMX_HELPER_CSR_INIT_READ -1
+
+/*
+ * CVMX_HELPER_WRITE_CSR--set a field in a CSR with a value.
+ *
+ * @param chcsr_init    initial value of the csr (CVMX_HELPER_CSR_INIT_READ
+ *                      means to use the existing csr value as the
+ *                      initial value.)
+ * @param chcsr_csr     the name of the csr
+ * @param chcsr_type    the type of the csr (see the -defs.h)
+ * @param chcsr_chip    the chip for the csr/field
+ * @param chcsr_fld     the field in the csr
+ * @param chcsr_val     the value for field
+ */
+#define CVMX_HELPER_WRITE_CSR(chcsr_init, chcsr_csr, chcsr_type, chcsr_chip, chcsr_fld, chcsr_val) \
+	do {                                                                                       \
+		chcsr_type csr;                                                                    \
+		if ((chcsr_init) == CVMX_HELPER_CSR_INIT_READ)                                     \
+			csr.u64 = cvmx_read_csr(chcsr_csr);                                        \
+		else                                                                               \
+			csr.u64 = (chcsr_init);                                                    \
+		csr.chcsr_chip.chcsr_fld = (chcsr_val);                                            \
+		cvmx_write_csr((chcsr_csr), csr.u64);                                              \
+	} while (0)
+
+/*
+ * CVMX_HELPER_WRITE_CSR0--set a field in a CSR with the initial value of 0
+ */
+#define CVMX_HELPER_WRITE_CSR0(chcsr_csr, chcsr_type, chcsr_chip, chcsr_fld, chcsr_val)            \
+	CVMX_HELPER_WRITE_CSR(CVMX_HELPER_CSR_INIT0, chcsr_csr, chcsr_type, chcsr_chip, chcsr_fld, \
+			      chcsr_val)
+
+/*
+ * CVMX_HELPER_WRITE_CSR1--set a field in a CSR with the initial value of
+ *                      the CSR's current value.
+ */
+#define CVMX_HELPER_WRITE_CSR1(chcsr_csr, chcsr_type, chcsr_chip, chcsr_fld, chcsr_val)            \
+	CVMX_HELPER_WRITE_CSR(CVMX_HELPER_CSR_INIT_READ, chcsr_csr, chcsr_type, chcsr_chip,        \
+			      chcsr_fld, chcsr_val)
+
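+/*
+ * Usage sketch: for example, setting the EN field of AGL_GMX_PRT0_CFG
+ * (CSR, type and field names as defined in cvmx-agl-defs.h) by
+ * read-modify-write:
+ *
+ *   CVMX_HELPER_WRITE_CSR1(CVMX_AGL_GMX_PRTX_CFG(0),
+ *                          cvmx_agl_gmx_prtx_cfg_t, s, en, 1);
+ *
+ * This reads the current CSR value, updates csr.s.en and writes it back.
+ */
+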
+/* These flags are passed to __cvmx_helper_packet_hardware_enable */
+
+typedef enum {
+	CVMX_HELPER_INTERFACE_MODE_DISABLED,
+	CVMX_HELPER_INTERFACE_MODE_RGMII,
+	CVMX_HELPER_INTERFACE_MODE_GMII,
+	CVMX_HELPER_INTERFACE_MODE_SPI,
+	CVMX_HELPER_INTERFACE_MODE_PCIE,
+	CVMX_HELPER_INTERFACE_MODE_XAUI,
+	CVMX_HELPER_INTERFACE_MODE_SGMII,
+	CVMX_HELPER_INTERFACE_MODE_PICMG,
+	CVMX_HELPER_INTERFACE_MODE_NPI,
+	CVMX_HELPER_INTERFACE_MODE_LOOP,
+	CVMX_HELPER_INTERFACE_MODE_SRIO,
+	CVMX_HELPER_INTERFACE_MODE_ILK,
+	CVMX_HELPER_INTERFACE_MODE_RXAUI,
+	CVMX_HELPER_INTERFACE_MODE_QSGMII,
+	CVMX_HELPER_INTERFACE_MODE_AGL,
+	CVMX_HELPER_INTERFACE_MODE_XLAUI,
+	CVMX_HELPER_INTERFACE_MODE_XFI,
+	CVMX_HELPER_INTERFACE_MODE_10G_KR,
+	CVMX_HELPER_INTERFACE_MODE_40G_KR4,
+	CVMX_HELPER_INTERFACE_MODE_MIXED,
+} cvmx_helper_interface_mode_t;
+
+typedef union cvmx_helper_link_info {
+	u64 u64;
+	struct {
+		u64 reserved_20_63 : 43;
+		u64 init_success : 1;
+		u64 link_up : 1;
+		u64 full_duplex : 1;
+		u64 speed : 18;
+	} s;
+} cvmx_helper_link_info_t;
+
+/**
+ * Sets the back pressure configuration in the internal data structure.
+ * @param backpressure_dis disable/enable backpressure
+ */
+void cvmx_rgmii_set_back_pressure(u64 backpressure_dis);
+
+#include "cvmx-helper-fpa.h"
+
+#include "cvmx-helper-agl.h"
+#include "cvmx-helper-errata.h"
+#include "cvmx-helper-ilk.h"
+#include "cvmx-helper-loop.h"
+#include "cvmx-helper-npi.h"
+#include "cvmx-helper-rgmii.h"
+#include "cvmx-helper-sgmii.h"
+#include "cvmx-helper-spi.h"
+#include "cvmx-helper-srio.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-xaui.h"
+
+#include "cvmx-fpa3.h"
+
+enum cvmx_pko_padding {
+	CVMX_PKO_PADDING_NONE = 0,
+	CVMX_PKO_PADDING_60 = 1,
+};
+
+/**
+ * This function enables the IPD and also enables the packet interfaces.
+ * The packet interfaces (RGMII and SPI) must be enabled after the
+ * IPD.  This should be called by the user program after any additional
+ * IPD configuration changes are made if CVMX_HELPER_ENABLE_IPD
+ * is not set in the executive-config.h file.
+ *
+ * @return 0 on success
+ *         -1 on failure
+ */
+int cvmx_helper_ipd_and_packet_input_enable_node(int node);
+int cvmx_helper_ipd_and_packet_input_enable(void);
+
+/**
+ * Initialize and allocate memory for the SSO.
+ *
+ * @param wqe_entries The maximum number of work queue entries to be
+ * supported.
+ *
+ * @return Zero on success, non-zero on failure.
+ */
+int cvmx_helper_initialize_sso(int wqe_entries);
+
+/**
+ * Initialize and allocate memory for the SSO on a specific node.
+ *
+ * @param node Node SSO to initialize
+ * @param wqe_entries The maximum number of work queue entries to be
+ * supported.
+ *
+ * @return Zero on success, non-zero on failure.
+ */
+int cvmx_helper_initialize_sso_node(unsigned int node, int wqe_entries);
+
+/**
+ * Undo the effect of cvmx_helper_initialize_sso().
+ *
+ * @return Zero on success, non-zero on failure.
+ */
+int cvmx_helper_uninitialize_sso(void);
+
+/**
+ * Undo the effect of cvmx_helper_initialize_sso_node().
+ *
+ * @param node Node SSO to initialize
+ *
+ * @return Zero on success, non-zero on failure.
+ */
+int cvmx_helper_uninitialize_sso_node(unsigned int node);
+
+/**
+ * Initialize the PIP, IPD, and PKO hardware to support
+ * simple priority based queues for the ethernet ports. Each
+ * port is configured with a number of priority queues based
+ * on CVMX_PKO_QUEUES_PER_PORT_* where each queue is lower
+ * priority than the previous.
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_global(void);
+/**
+ * Initialize the PIP, IPD, and PKO hardware to support
+ * simple priority based queues for the ethernet ports. Each
+ * port is configured with a number of priority queues based
+ * on CVMX_PKO_QUEUES_PER_PORT_* where each queue is lower
+ * priority than the previous.
+ *
+ * @param node Node on which to initialize packet io hardware
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_node(unsigned int node);
+
+/**
+ * Does core local initialization for packet io
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_local(void);
+
+/**
+ * Undo the initialization performed in
+ * cvmx_helper_initialize_packet_io_global(). After calling this routine and the
+ * local version on each core, packet IO for Octeon will be disabled and placed
+ * in the initial reset state. It will then be safe to call the
+ * initialization functions again later on. Note that this routine does not
+ * empty the FPA pools. It frees all
+ * buffers used by the packet IO hardware to the FPA so a function emptying the
+ * FPA after shutdown should find all packet buffers in the FPA.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_helper_shutdown_packet_io_global(void);
+
+/**
+ * Helper function for 78xx global packet IO shutdown
+ */
+int cvmx_helper_shutdown_packet_io_global_cn78xx(int node);
+
+/**
+ * Does core local shutdown of packet io
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_shutdown_packet_io_local(void);
+
+/**
+ * Returns the number of ports on the given interface.
+ * The interface must be initialized before the port count
+ * can be returned.
+ *
+ * @param interface Which interface to return port count for.
+ *
+ * @return Port count for interface
+ *         -1 for uninitialized interface
+ */
+int cvmx_helper_ports_on_interface(int interface);
+
+/**
+ * Return the number of interfaces the chip has. Each interface
+ * may have multiple ports. Most chips support two interfaces,
+ * but the CNX0XX and CNX1XX are exceptions. These only support
+ * one interface.
+ *
+ * @return Number of interfaces on chip
+ */
+int cvmx_helper_get_number_of_interfaces(void);
+
+/**
+ * Get the operating mode of an interface. Depending on the Octeon
+ * chip and configuration, this function returns an enumeration
+ * of the type of packet I/O supported by an interface.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Mode of the interface. Unknown or unsupported interfaces return
+ *         DISABLED.
+ */
+cvmx_helper_interface_mode_t cvmx_helper_interface_get_mode(int xiface);
+
+/**
+ * Auto configure an IPD/PKO port link state and speed. This
+ * function basically does the equivalent of:
+ * cvmx_helper_link_set(ipd_port, cvmx_helper_link_get(ipd_port));
+ *
+ * @param ipd_port IPD/PKO port to auto configure
+ *
+ * @return Link state after configure
+ */
+cvmx_helper_link_info_t cvmx_helper_link_autoconf(int ipd_port);
+
+/**
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param ipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t cvmx_helper_link_get(int ipd_port);
+
+/**
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param ipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_link_set(int ipd_port, cvmx_helper_link_info_t link_info);
+
+/**
+ * This function probes an interface to determine the actual number of
+ * hardware ports connected to it. It does some setup of the ports but
+ * doesn't enable them. The main goal here is to set the global
+ * interface_port_count[interface] correctly. Final hardware setup of
+ * the ports will be performed later.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_interface_probe(int xiface);
+
+/**
+ * Determine the actual number of hardware ports connected to an
+ * interface. It doesn't setup the ports or enable them.
+ *
+ * @param xiface Interface to enumerate
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_interface_enumerate(int xiface);
+
+/**
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again.
+ *
+ * @param ipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_helper_configure_loopback(int ipd_port, int enable_internal, int enable_external);
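+
+/*
+ * For example (a sketch; ipd_port stands for whichever port is under
+ * test), internal-only loopback would be requested as:
+ *
+ *   cvmx_helper_configure_loopback(ipd_port, 1, 0);
+ */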
+
+/**
+ * Returns the number of ports on the given interface.
+ *
+ * @param interface Which interface to return port count for.
+ *
+ * @return Port count for interface
+ *         -1 for uninitialized interface
+ */
+int __cvmx_helper_early_ports_on_interface(int interface);
+
+void cvmx_helper_setup_simulator_io_buffer_counts(int node, int num_packet_buffers,
+						  int pko_buffers);
+
+void cvmx_helper_set_wqe_no_ptr_mode(bool mode);
+void cvmx_helper_set_pkt_wqe_le_mode(bool mode);
+int cvmx_helper_shutdown_fpa_pools(int node);
+
+/**
+ * Convert Ethernet QoS/PCP value to system-level priority
+ *
+ * In OCTEON, the highest priority is 0; in the Ethernet 802.1p PCP field
+ * the highest priority is 7 and the lowest is 1. Here is the full
+ * conversion table between QoS (PCP) and OCTEON priority values, per
+ * IEEE 802.1Q-2005:
+ *
+ * PCP	Priority	Acronym	Traffic Types
+ * 1	7 (lowest)	BK	Background
+ * 0	6	BE	Best Effort
+ * 2	5	EE	Excellent Effort
+ * 3	4	CA	Critical Applications
+ * 4	3	VI	Video, < 100 ms latency and jitter
+ * 5	2	VO	Voice, < 10 ms latency and jitter
+ * 6	1	IC	Internetwork Control
+ * 7	0 (highest)	NC	Network Control
+ */
+static inline u8 cvmx_helper_qos2prio(u8 qos)
+{
+	static const unsigned int pcp_map = 6 << (4 * 0) | 7 << (4 * 1) | 5 << (4 * 2) |
+					    4 << (4 * 3) | 3 << (4 * 4) | 2 << (4 * 5) |
+					    1 << (4 * 6) | 0 << (4 * 7);
+
+	return (pcp_map >> ((qos & 0x7) << 2)) & 0x7;
+}
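+
+/*
+ * The table above is packed into pcp_map one nibble per PCP value, so the
+ * lookup is a single shift and mask. Two sample points as a sanity check:
+ *
+ *   cvmx_helper_qos2prio(0);   // returns 6 (Best Effort)
+ *   cvmx_helper_qos2prio(7);   // returns 0 (Network Control, highest)
+ */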
+
+/**
+ * Convert system-level priority to Ethernet QoS/PCP value
+ *
+ * Calculate the reverse of cvmx_helper_qos2prio() per IEEE 802.1Q-2005.
+ */
+static inline u8 cvmx_helper_prio2qos(u8 prio)
+{
+	static const unsigned int prio_map = 7 << (4 * 0) | 6 << (4 * 1) | 5 << (4 * 2) |
+					     4 << (4 * 3) | 3 << (4 * 4) | 2 << (4 * 5) |
+					     0 << (4 * 6) | 1 << (4 * 7);
+
+	return (prio_map >> ((prio & 0x7) << 2)) & 0x7;
+}
+
+/**
+ * @INTERNAL
+ * Get the number of ipd_ports on an interface.
+ *
+ * @param xiface
+ *
+ * @return the number of ipd_ports on the interface and -1 for error.
+ */
+int __cvmx_helper_get_num_ipd_ports(int xiface);
+
+enum cvmx_pko_padding __cvmx_helper_get_pko_padding(int xiface);
+
+/**
+ * @INTERNAL
+ *
+ * @param xiface
+ * @param num_ipd_ports is the number of ipd_ports on the interface
+ * @param has_fcs indicates if PKO does FCS for the ports on this
+ *                interface.
+ * @param pad The padding that PKO should apply.
+ *
+ * @return 0 for success and -1 for failure
+ */
+int __cvmx_helper_init_interface(int xiface, int num_ipd_ports, int has_fcs,
+				 enum cvmx_pko_padding pad);
+
+void __cvmx_helper_shutdown_interfaces(void);
+
+/*
+ * @INTERNAL
+ * Enable packet input/output from the hardware. This function is
+ * called after all internal setup is complete and IPD is enabled.
+ * After this function completes, packets will be accepted from the
+ * hardware ports. PKO should still be disabled to make sure packets
+ * aren't sent out of partially set up hardware.
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_packet_hardware_enable(int xiface);
+
+/*
+ * @INTERNAL
+ *
+ * @return 0 for success and -1 for failure
+ */
+int __cvmx_helper_set_link_info(int xiface, int index, cvmx_helper_link_info_t link_info);
+
+/**
+ * @INTERNAL
+ *
+ * @param xiface
+ * @param port
+ *
+ * @return valid link_info on success or -1 on failure
+ */
+cvmx_helper_link_info_t __cvmx_helper_get_link_info(int xiface, int port);
+
+/**
+ * @INTERNAL
+ *
+ * @param xiface
+ *
+ * @return 0 if PKO does not do FCS and 1 otherwise.
+ */
+int __cvmx_helper_get_has_fcs(int xiface);
+
+void *cvmx_helper_mem_alloc(int node, u64 alloc_size, u64 align);
+void cvmx_helper_mem_free(void *buffer, u64 size);
+
+#define CVMX_QOS_NUM 8 /* Number of QoS priority classes */
+
+typedef enum {
+	CVMX_QOS_PROTO_NONE,  /* Disable QOS */
+	CVMX_QOS_PROTO_PAUSE, /* IEEE 802.3 PAUSE */
+	CVMX_QOS_PROTO_PFC    /* IEEE 802.1Qbb-2011 PFC/CBFC */
+} cvmx_qos_proto_t;
+
+typedef enum {
+	CVMX_QOS_PKT_MODE_HWONLY, /* PAUSE packets processed in Hardware only. */
+	CVMX_QOS_PKT_MODE_SWONLY, /* PAUSE packets processed in Software only. */
+	CVMX_QOS_PKT_MODE_HWSW,	  /* PAUSE packets processed in both HW and SW. */
+	CVMX_QOS_PKT_MODE_DROP	  /* Ignore PAUSE packets. */
+} cvmx_qos_pkt_mode_t;
+
+typedef enum {
+	CVMX_QOS_POOL_PER_PORT, /* Pool per Physical Port */
+	CVMX_QOS_POOL_PER_CLASS /* Pool per Priority Class */
+} cvmx_qos_pool_mode_t;
+
+typedef struct cvmx_qos_config {
+	cvmx_qos_proto_t qos_proto;	/* QoS protocol.*/
+	cvmx_qos_pkt_mode_t pkt_mode;	/* PAUSE processing mode.*/
+	cvmx_qos_pool_mode_t pool_mode; /* FPA Pool mode.*/
+	int pktbuf_size;		/* Packet buffer size */
+	int aura_size;			/* Number of buffers */
+	int drop_thresh[CVMX_QOS_NUM];	/* DROP threshold in % */
+	int red_thresh[CVMX_QOS_NUM];	/* RED threshold in % */
+	int bp_thresh[CVMX_QOS_NUM];	/* BP threshold in % */
+	int groups[CVMX_QOS_NUM];	/* Base SSO group for QOS group set. */
+	int group_prio[CVMX_QOS_NUM];	/* SSO group priorities.*/
+	int pko_pfc_en;			/* Enable PKO PFC layout. */
+	int vlan_num;			/* VLAN number: 0 = 1st or 1 = 2nd. */
+	int p_time;			/* PAUSE packet send time (in number of 512 bit-times). */
+	int p_interval; /* PAUSE packet send interval (in number of 512 bit-times). */
+	/* Internal parameters (should not be used by application developer): */
+	cvmx_fpa3_pool_t gpools[CVMX_QOS_NUM];	/* Pool to use.*/
+	cvmx_fpa3_gaura_t gauras[CVMX_QOS_NUM]; /* Global auras -- one per priority class. */
+	int bpids[CVMX_QOS_NUM];		/* PKI BPID.*/
+	int qpg_base;				/* QPG Table base index.*/
+} cvmx_qos_config_t;
+
+/**
+ * Initialize QoS configuration with the SDK defaults.
+ *
+ * @param qos_proto QOS protocol to configure.
+ * @param qos_cfg   User QOS configuration parameters.
+ * @return Zero on success, negative number otherwise.
+ */
+int cvmx_helper_qos_config_init(cvmx_qos_proto_t qos_proto, cvmx_qos_config_t *qos_cfg);
+
+/**
+ * Update the user static processor configuration.
+ * It should be done before any initialization of the DP units is performed.
+ *
+ * @param xipdport  Global IPD port
+ * @param qos_cfg   User QOS configuration parameters.
+ * @return Zero on success, negative number otherwise.
+ */
+int cvmx_helper_qos_port_config_update(int xipdport, cvmx_qos_config_t *qos_cfg);
+
+/**
+ * Configure the Data Path components for QOS function.
+ * This function is called after the global processor initialization is
+ * performed.
+ *
+ * @param xipdport  Global IPD port
+ * @param qos_cfg   User QOS configuration parameters.
+ * @return Zero on success, negative number otherwise.
+ */
+int cvmx_helper_qos_port_setup(int xipdport, cvmx_qos_config_t *qos_cfg);
+
+/**
+ * Configure the SSO for QOS function.
+ * This function is called after the global processor initialization is
+ * performed.
+ *
+ * @param node      OCTEON3 node number.
+ * @param qos_cfg   User QOS configuration parameters.
+ * @return Zero on success, negative number otherwise.
+ */
+int cvmx_helper_qos_sso_setup(int node, cvmx_qos_config_t *qos_cfg);
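+
+/*
+ * Typical call order (a sketch based on the descriptions above; error
+ * handling omitted, xipdport and node values are illustrative):
+ *
+ *   cvmx_qos_config_t qcfg;
+ *
+ *   cvmx_helper_qos_config_init(CVMX_QOS_PROTO_PFC, &qcfg);
+ *   cvmx_helper_qos_port_config_update(xipdport, &qcfg);
+ *   // ... global processor initialization ...
+ *   cvmx_helper_qos_port_setup(xipdport, &qcfg);
+ *   cvmx_helper_qos_sso_setup(node, &qcfg);
+ */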
+
+/**
+ * Return PKI_CHAN_E channel name based on the provided index.
+ * @param chan     Channel index.
+ * @param namebuf  Name buffer (output).
+ * @param buflen   Name maximum length.
+ * @return Length of name (in bytes) on success, negative number otherwise.
+ */
+int cvmx_helper_get_chan_e_name(int chan, char *namebuf, int buflen);
+
+#ifdef CVMX_DUMP_DIAGNOSTICS
+void cvmx_helper_dump_for_diagnostics(int node);
+#endif
+
+#endif /* __CVMX_HELPER_H__ */
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 03/50] mips: octeon: Add cvmx-agl-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 01/50] mips: global_data.h: Add Octeon specific data to arch_global_data struct Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 02/50] mips: octeon: Add misc cvmx-helper header files Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 04/50] mips: octeon: Add cvmx-asxx-defs.h " Stefan Roese
                   ` (49 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-agl-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-agl-defs.h  | 3135 +++++++++++++++++
 1 file changed, 3135 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-agl-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-agl-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-agl-defs.h
new file mode 100644
index 0000000000..bbf1f5936b
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-agl-defs.h
@@ -0,0 +1,3135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon agl.
+ *
+ */
+
+#ifndef __CVMX_AGL_DEFS_H__
+#define __CVMX_AGL_DEFS_H__
+
+#define CVMX_AGL_GMX_BAD_REG			    (0x00011800E0000518ull)
+#define CVMX_AGL_GMX_BIST			    (0x00011800E0000400ull)
+#define CVMX_AGL_GMX_DRV_CTL			    (0x00011800E00007F0ull)
+#define CVMX_AGL_GMX_INF_MODE			    (0x00011800E00007F8ull)
+#define CVMX_AGL_GMX_PRTX_CFG(offset)		    (0x00011800E0000010ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM0(offset)	    (0x00011800E0000180ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM1(offset)	    (0x00011800E0000188ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM2(offset)	    (0x00011800E0000190ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM3(offset)	    (0x00011800E0000198ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM4(offset)	    (0x00011800E00001A0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM5(offset)	    (0x00011800E00001A8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CAM_EN(offset)	    (0x00011800E0000108ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_ADR_CTL(offset)	    (0x00011800E0000100ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_DECISION(offset)	    (0x00011800E0000040ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_FRM_CHK(offset)	    (0x00011800E0000020ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_FRM_CTL(offset)	    (0x00011800E0000018ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_FRM_MAX(offset)	    (0x00011800E0000030ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_FRM_MIN(offset)	    (0x00011800E0000028ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_IFG(offset)		    (0x00011800E0000058ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_INT_EN(offset)		    (0x00011800E0000008ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_INT_REG(offset)	    (0x00011800E0000000ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_JABBER(offset)		    (0x00011800E0000038ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_PAUSE_DROP_TIME(offset)    (0x00011800E0000068ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_RX_INBND(offset)	    (0x00011800E0000060ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_CTL(offset)	    (0x00011800E0000050ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_OCTS(offset)	    (0x00011800E0000088ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_OCTS_CTL(offset)	    (0x00011800E0000098ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_OCTS_DMAC(offset)    (0x00011800E00000A8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_OCTS_DRP(offset)	    (0x00011800E00000B8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_PKTS(offset)	    (0x00011800E0000080ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_PKTS_BAD(offset)	    (0x00011800E00000C0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_PKTS_CTL(offset)	    (0x00011800E0000090ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_PKTS_DMAC(offset)    (0x00011800E00000A0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_STATS_PKTS_DRP(offset)	    (0x00011800E00000B0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RXX_UDD_SKP(offset)	    (0x00011800E0000048ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_RX_BP_DROPX(offset)	    (0x00011800E0000420ull + ((offset) & 1) * 8)
+#define CVMX_AGL_GMX_RX_BP_OFFX(offset)		    (0x00011800E0000460ull + ((offset) & 1) * 8)
+#define CVMX_AGL_GMX_RX_BP_ONX(offset)		    (0x00011800E0000440ull + ((offset) & 1) * 8)
+#define CVMX_AGL_GMX_RX_PRT_INFO		    (0x00011800E00004E8ull)
+#define CVMX_AGL_GMX_RX_TX_STATUS		    (0x00011800E00007E8ull)
+#define CVMX_AGL_GMX_SMACX(offset)		    (0x00011800E0000230ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_STAT_BP			    (0x00011800E0000520ull)
+#define CVMX_AGL_GMX_TXX_APPEND(offset)		    (0x00011800E0000218ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_CLK(offset)		    (0x00011800E0000208ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_CTL(offset)		    (0x00011800E0000270ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_MIN_PKT(offset)	    (0x00011800E0000240ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_PAUSE_PKT_INTERVAL(offset) (0x00011800E0000248ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_PAUSE_PKT_TIME(offset)	    (0x00011800E0000238ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_PAUSE_TOGO(offset)	    (0x00011800E0000258ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_PAUSE_ZERO(offset)	    (0x00011800E0000260ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_SOFT_PAUSE(offset)	    (0x00011800E0000250ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT0(offset)		    (0x00011800E0000280ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT1(offset)		    (0x00011800E0000288ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT2(offset)		    (0x00011800E0000290ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT3(offset)		    (0x00011800E0000298ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT4(offset)		    (0x00011800E00002A0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT5(offset)		    (0x00011800E00002A8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT6(offset)		    (0x00011800E00002B0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT7(offset)		    (0x00011800E00002B8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT8(offset)		    (0x00011800E00002C0ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STAT9(offset)		    (0x00011800E00002C8ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_STATS_CTL(offset)	    (0x00011800E0000268ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TXX_THRESH(offset)		    (0x00011800E0000210ull + ((offset) & 1) * 2048)
+#define CVMX_AGL_GMX_TX_BP			    (0x00011800E00004D0ull)
+#define CVMX_AGL_GMX_TX_COL_ATTEMPT		    (0x00011800E0000498ull)
+#define CVMX_AGL_GMX_TX_IFG			    (0x00011800E0000488ull)
+#define CVMX_AGL_GMX_TX_INT_EN			    (0x00011800E0000508ull)
+#define CVMX_AGL_GMX_TX_INT_REG			    (0x00011800E0000500ull)
+#define CVMX_AGL_GMX_TX_JAM			    (0x00011800E0000490ull)
+#define CVMX_AGL_GMX_TX_LFSR			    (0x00011800E00004F8ull)
+#define CVMX_AGL_GMX_TX_OVR_BP			    (0x00011800E00004C8ull)
+#define CVMX_AGL_GMX_TX_PAUSE_PKT_DMAC		    (0x00011800E00004A0ull)
+#define CVMX_AGL_GMX_TX_PAUSE_PKT_TYPE		    (0x00011800E00004A8ull)
+#define CVMX_AGL_GMX_WOL_CTL			    (0x00011800E0000780ull)
+#define CVMX_AGL_PRTX_CTL(offset)		    (0x00011800E0002000ull + ((offset) & 1) * 8)
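+
+/*
+ * The per-port CSRs above are indexed by adding a fixed stride to a base
+ * address, e.g. for the two AGL ports:
+ *
+ *   CVMX_AGL_GMX_PRTX_CFG(0)   // 0x00011800E0000010
+ *   CVMX_AGL_GMX_PRTX_CFG(1)   // 0x00011800E0000010 + 2048
+ */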
+
+/**
+ * cvmx_agl_gmx_bad_reg
+ *
+ * AGL_GMX_BAD_REG = A collection of things that have gone very, very wrong
+ *
+ *
+ * Notes:
+ * OUT_OVR[0], LOSTSTAT[0], OVRFLW, TXPOP, TXPSH    will be reset when MIX0_CTL[RESET] is set to 1.
+ * OUT_OVR[1], LOSTSTAT[1], OVRFLW1, TXPOP1, TXPSH1 will be reset when MIX1_CTL[RESET] is set to 1.
+ * STATOVR will be reset when both MIX0/1_CTL[RESET] are set to 1.
+ */
+union cvmx_agl_gmx_bad_reg {
+	u64 u64;
+	struct cvmx_agl_gmx_bad_reg_s {
+		u64 reserved_38_63 : 26;
+		u64 txpsh1 : 1;
+		u64 txpop1 : 1;
+		u64 ovrflw1 : 1;
+		u64 txpsh : 1;
+		u64 txpop : 1;
+		u64 ovrflw : 1;
+		u64 reserved_27_31 : 5;
+		u64 statovr : 1;
+		u64 reserved_24_25 : 2;
+		u64 loststat : 2;
+		u64 reserved_4_21 : 18;
+		u64 out_ovr : 2;
+		u64 reserved_0_1 : 2;
+	} s;
+	struct cvmx_agl_gmx_bad_reg_cn52xx {
+		u64 reserved_38_63 : 26;
+		u64 txpsh1 : 1;
+		u64 txpop1 : 1;
+		u64 ovrflw1 : 1;
+		u64 txpsh : 1;
+		u64 txpop : 1;
+		u64 ovrflw : 1;
+		u64 reserved_27_31 : 5;
+		u64 statovr : 1;
+		u64 reserved_23_25 : 3;
+		u64 loststat : 1;
+		u64 reserved_4_21 : 18;
+		u64 out_ovr : 2;
+		u64 reserved_0_1 : 2;
+	} cn52xx;
+	struct cvmx_agl_gmx_bad_reg_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_bad_reg_cn56xx {
+		u64 reserved_35_63 : 29;
+		u64 txpsh : 1;
+		u64 txpop : 1;
+		u64 ovrflw : 1;
+		u64 reserved_27_31 : 5;
+		u64 statovr : 1;
+		u64 reserved_23_25 : 3;
+		u64 loststat : 1;
+		u64 reserved_3_21 : 19;
+		u64 out_ovr : 1;
+		u64 reserved_0_1 : 2;
+	} cn56xx;
+	struct cvmx_agl_gmx_bad_reg_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_bad_reg_s cn61xx;
+	struct cvmx_agl_gmx_bad_reg_s cn63xx;
+	struct cvmx_agl_gmx_bad_reg_s cn63xxp1;
+	struct cvmx_agl_gmx_bad_reg_s cn66xx;
+	struct cvmx_agl_gmx_bad_reg_s cn68xx;
+	struct cvmx_agl_gmx_bad_reg_s cn68xxp1;
+	struct cvmx_agl_gmx_bad_reg_cn56xx cn70xx;
+	struct cvmx_agl_gmx_bad_reg_cn56xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_bad_reg cvmx_agl_gmx_bad_reg_t;
+
+/**
+ * cvmx_agl_gmx_bist
+ *
+ * AGL_GMX_BIST = GMX BIST Results
+ *
+ *
+ * Notes:
+ * Not reset when MIX*_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_bist {
+	u64 u64;
+	struct cvmx_agl_gmx_bist_s {
+		u64 reserved_25_63 : 39;
+		u64 status : 25;
+	} s;
+	struct cvmx_agl_gmx_bist_cn52xx {
+		u64 reserved_10_63 : 54;
+		u64 status : 10;
+	} cn52xx;
+	struct cvmx_agl_gmx_bist_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_bist_cn52xx cn56xx;
+	struct cvmx_agl_gmx_bist_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_bist_s cn61xx;
+	struct cvmx_agl_gmx_bist_s cn63xx;
+	struct cvmx_agl_gmx_bist_s cn63xxp1;
+	struct cvmx_agl_gmx_bist_s cn66xx;
+	struct cvmx_agl_gmx_bist_s cn68xx;
+	struct cvmx_agl_gmx_bist_s cn68xxp1;
+	struct cvmx_agl_gmx_bist_s cn70xx;
+	struct cvmx_agl_gmx_bist_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_bist cvmx_agl_gmx_bist_t;
+
+/**
+ * cvmx_agl_gmx_drv_ctl
+ *
+ * AGL_GMX_DRV_CTL = GMX Drive Control
+ *
+ *
+ * Notes:
+ * NCTL, PCTL, BYP_EN    will be reset when MIX0_CTL[RESET] is set to 1.
+ * NCTL1, PCTL1, BYP_EN1 will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_drv_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_drv_ctl_s {
+		u64 reserved_49_63 : 15;
+		u64 byp_en1 : 1;
+		u64 reserved_45_47 : 3;
+		u64 pctl1 : 5;
+		u64 reserved_37_39 : 3;
+		u64 nctl1 : 5;
+		u64 reserved_17_31 : 15;
+		u64 byp_en : 1;
+		u64 reserved_13_15 : 3;
+		u64 pctl : 5;
+		u64 reserved_5_7 : 3;
+		u64 nctl : 5;
+	} s;
+	struct cvmx_agl_gmx_drv_ctl_s cn52xx;
+	struct cvmx_agl_gmx_drv_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_drv_ctl_cn56xx {
+		u64 reserved_17_63 : 47;
+		u64 byp_en : 1;
+		u64 reserved_13_15 : 3;
+		u64 pctl : 5;
+		u64 reserved_5_7 : 3;
+		u64 nctl : 5;
+	} cn56xx;
+	struct cvmx_agl_gmx_drv_ctl_cn56xx cn56xxp1;
+};
+
+typedef union cvmx_agl_gmx_drv_ctl cvmx_agl_gmx_drv_ctl_t;
+
+/**
+ * cvmx_agl_gmx_inf_mode
+ *
+ * AGL_GMX_INF_MODE = Interface Mode
+ *
+ *
+ * Notes:
+ * Not reset when MIX*_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_inf_mode {
+	u64 u64;
+	struct cvmx_agl_gmx_inf_mode_s {
+		u64 reserved_2_63 : 62;
+		u64 en : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_agl_gmx_inf_mode_s cn52xx;
+	struct cvmx_agl_gmx_inf_mode_s cn52xxp1;
+	struct cvmx_agl_gmx_inf_mode_s cn56xx;
+	struct cvmx_agl_gmx_inf_mode_s cn56xxp1;
+};
+
+typedef union cvmx_agl_gmx_inf_mode cvmx_agl_gmx_inf_mode_t;
+
+/**
+ * cvmx_agl_gmx_prt#_cfg
+ *
+ * AGL_GMX_PRT_CFG = Port description
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_prtx_cfg {
+	u64 u64;
+	struct cvmx_agl_gmx_prtx_cfg_s {
+		u64 reserved_14_63 : 50;
+		u64 tx_idle : 1;
+		u64 rx_idle : 1;
+		u64 reserved_9_11 : 3;
+		u64 speed_msb : 1;
+		u64 reserved_7_7 : 1;
+		u64 burst : 1;
+		u64 tx_en : 1;
+		u64 rx_en : 1;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 en : 1;
+	} s;
+	struct cvmx_agl_gmx_prtx_cfg_cn52xx {
+		u64 reserved_6_63 : 58;
+		u64 tx_en : 1;
+		u64 rx_en : 1;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 en : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_prtx_cfg_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_prtx_cfg_cn52xx cn56xx;
+	struct cvmx_agl_gmx_prtx_cfg_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_prtx_cfg_s cn61xx;
+	struct cvmx_agl_gmx_prtx_cfg_s cn63xx;
+	struct cvmx_agl_gmx_prtx_cfg_s cn63xxp1;
+	struct cvmx_agl_gmx_prtx_cfg_s cn66xx;
+	struct cvmx_agl_gmx_prtx_cfg_s cn68xx;
+	struct cvmx_agl_gmx_prtx_cfg_s cn68xxp1;
+	struct cvmx_agl_gmx_prtx_cfg_s cn70xx;
+	struct cvmx_agl_gmx_prtx_cfg_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_prtx_cfg cvmx_agl_gmx_prtx_cfg_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam0
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam0 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam0_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam0 cvmx_agl_gmx_rxx_adr_cam0_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam1
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam1 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam1_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam1 cvmx_agl_gmx_rxx_adr_cam1_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam2
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam2 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam2_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam2 cvmx_agl_gmx_rxx_adr_cam2_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam3
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam3 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam3_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam3 cvmx_agl_gmx_rxx_adr_cam3_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam4
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam4 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam4_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam4 cvmx_agl_gmx_rxx_adr_cam4_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam5
+ *
+ * AGL_GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam5 {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam5_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam5 cvmx_agl_gmx_rxx_adr_cam5_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_cam_en
+ *
+ * AGL_GMX_RX_ADR_CAM_EN = Address Filtering Control Enable
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_adr_cam_en {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s {
+		u64 reserved_8_63 : 56;
+		u64 en : 8;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_cam_en_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_cam_en cvmx_agl_gmx_rxx_adr_cam_en_t;
+
+/**
+ * cvmx_agl_gmx_rx#_adr_ctl
+ *
+ * AGL_GMX_RX_ADR_CTL = Address Filtering Control
+ *
+ *
+ * Notes:
+ * * ALGORITHM
+ *   Here is some pseudo code that represents the address filter behavior.
+ *
+ *      @verbatim
+ *      bool dmac_addr_filter(uint8 prt, uint48 dmac) [
+ *        ASSERT(prt >= 0 && prt <= 3);
+ *        if (is_bcst(dmac))                               // broadcast accept
+ *          return (AGL_GMX_RX[prt]_ADR_CTL[BCST] ? ACCEPT : REJECT);
+ *        if (is_mcst(dmac) & AGL_GMX_RX[prt]_ADR_CTL[MCST] == 1)   // multicast reject
+ *          return REJECT;
+ *        if (is_mcst(dmac) & AGL_GMX_RX[prt]_ADR_CTL[MCST] == 2)   // multicast accept
+ *          return ACCEPT;
+ *
+ *        cam_hit = 0;
+ *
+ *        for (i=0; i<8; i++) [
+ *          if (AGL_GMX_RX[prt]_ADR_CAM_EN[EN<i>] == 0)
+ *            continue;
+ *          uint48 unswizzled_mac_adr = 0x0;
+ *          for (j=5; j>=0; j--) [
+ *             unswizzled_mac_adr = (unswizzled_mac_adr << 8) | AGL_GMX_RX[prt]_ADR_CAM[j][ADR<i*8+7:i*8>];
+ *          ]
+ *          if (unswizzled_mac_adr == dmac) [
+ *            cam_hit = 1;
+ *            break;
+ *          ]
+ *        ]
+ *
+ *        if (cam_hit)
+ *          return (AGL_GMX_RX[prt]_ADR_CTL[CAM_MODE] ? ACCEPT : REJECT);
+ *        else
+ *          return (AGL_GMX_RX[prt]_ADR_CTL[CAM_MODE] ? REJECT : ACCEPT);
+ *      ]
+ *      @endverbatim
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_adr_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s {
+		u64 reserved_4_63 : 60;
+		u64 cam_mode : 1;
+		u64 mcst : 2;
+		u64 bcst : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn52xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn56xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn61xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn63xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn66xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn68xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn70xx;
+	struct cvmx_agl_gmx_rxx_adr_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_adr_ctl cvmx_agl_gmx_rxx_adr_ctl_t;
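+
+/*
+ * Editor's note: a minimal programming sketch, not part of the original
+ * Cavium sources.  It assumes the csr_rd()/csr_wr() register accessors
+ * used by this port; the adr_ctl_addr parameter stands in for the
+ * register's address macro.
+ *
+ *      @verbatim
+ *      static void agl_rx_filter_setup(u64 adr_ctl_addr)
+ *      {
+ *              cvmx_agl_gmx_rxx_adr_ctl_t ctl;
+ *
+ *              ctl.u64 = csr_rd(adr_ctl_addr);
+ *              ctl.s.cam_mode = 1;     // accept on CAM hit, reject otherwise
+ *              ctl.s.mcst = 2;         // accept all multicast
+ *              ctl.s.bcst = 1;         // accept broadcast
+ *              csr_wr(adr_ctl_addr, ctl.u64);
+ *      }
+ *      @endverbatim
+ */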
+
+/**
+ * cvmx_agl_gmx_rx#_decision
+ *
+ * AGL_GMX_RX_DECISION = The byte count to decide when to accept or filter a packet
+ *
+ *
+ * Notes:
+ * As each byte in a packet is received by GMX, the L2 byte count is compared
+ * against the AGL_GMX_RX_DECISION[CNT].  The L2 byte count is the number of bytes
+ * from the beginning of the L2 header (DMAC).  In normal operation, the L2
+ * header begins after the PREAMBLE+SFD (AGL_GMX_RX_FRM_CTL[PRE_CHK]=1) and any
+ * optional UDD skip data (AGL_GMX_RX_UDD_SKP[LEN]).
+ *
+ * When AGL_GMX_RX_FRM_CTL[PRE_CHK] is clear, PREAMBLE+SFD are prepended to the
+ * packet and would require UDD skip length to account for them.
+ *
+ *                                                 L2 Size
+ * Port Mode             <=AGL_GMX_RX_DECISION bytes (default=24)  >AGL_GMX_RX_DECISION bytes (default=24)
+ *
+ * MII/Full Duplex       accept packet                             apply filters
+ *                       no filtering is applied                   accept packet based on DMAC and PAUSE packet filters
+ *
+ * MII/Half Duplex       drop packet                               apply filters
+ *                       packet is unconditionally dropped         accept packet based on DMAC
+ *
+ * where l2_size = MAX(0, total_packet_size - AGL_GMX_RX_UDD_SKP[LEN] - ((AGL_GMX_RX_FRM_CTL[PRE_CHK]==1)*8))
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_decision {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_decision_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_agl_gmx_rxx_decision_s cn52xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_decision_s cn56xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_decision_s cn61xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn63xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_decision_s cn66xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn68xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_decision_s cn70xx;
+	struct cvmx_agl_gmx_rxx_decision_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_decision cvmx_agl_gmx_rxx_decision_t;
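+
+/*
+ * Editor's note: the l2_size formula from the note above, spelled out as C.
+ * This is an illustrative sketch, not part of the original sources.
+ *
+ *      @verbatim
+ *      static u64 agl_rx_l2_size(u64 total_packet_size, u64 udd_skp_len,
+ *                                int pre_chk)
+ *      {
+ *              u64 skip = udd_skp_len + (pre_chk ? 8 : 0);
+ *
+ *              return total_packet_size > skip ? total_packet_size - skip : 0;
+ *      }
+ *      @endverbatim
+ */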
+
+/**
+ * cvmx_agl_gmx_rx#_frm_chk
+ *
+ * AGL_GMX_RX_FRM_CHK = Which frame errors will set the ERR bit of the frame
+ *
+ *
+ * Notes:
+ * If AGL_GMX_RX_UDD_SKP[LEN] != 0, then LENERR will be forced to zero in HW.
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_frm_chk {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_frm_chk_s {
+		u64 reserved_10_63 : 54;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_frm_chk_cn52xx {
+		u64 reserved_9_63 : 55;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 reserved_1_1 : 1;
+		u64 minerr : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_rxx_frm_chk_cn52xx cn56xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn61xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn63xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn66xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn68xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn70xx;
+	struct cvmx_agl_gmx_rxx_frm_chk_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_frm_chk cvmx_agl_gmx_rxx_frm_chk_t;
+
+/**
+ * cvmx_agl_gmx_rx#_frm_ctl
+ *
+ * AGL_GMX_RX_FRM_CTL = Frame Control
+ *
+ *
+ * Notes:
+ * * PRE_STRP
+ *   When PRE_CHK is set (indicating that the PREAMBLE will be sent), PRE_STRP
+ *   determines if the PREAMBLE+SFD bytes are thrown away or sent to the Octane
+ *   core as part of the packet.
+ *
+ *   In either mode, the PREAMBLE+SFD bytes are not counted toward the packet
+ *   size when checking against the MIN and MAX bounds.  Furthermore, the bytes
+ *   are skipped when locating the start of the L2 header for DMAC and Control
+ *   frame recognition.
+ *
+ * * CTL_BCK/CTL_DRP
+ *   These bits control how the HW handles incoming PAUSE packets.  Here are
+ *   the most common modes of operation:
+ *     CTL_BCK=1,CTL_DRP=1   - HW does it all
+ *     CTL_BCK=0,CTL_DRP=0   - SW sees all pause frames
+ *     CTL_BCK=0,CTL_DRP=1   - all pause frames are completely ignored
+ *
+ *   These control bits should be set to CTL_BCK=0,CTL_DRP=0 in half-duplex
+ *   mode.  Since PAUSE packets only apply to full-duplex operation, any
+ *   PAUSE packet
+ *   would constitute an exception which should be handled by the processing
+ *   cores.  PAUSE packets should not be forwarded.
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_frm_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s {
+		u64 reserved_13_63 : 51;
+		u64 ptp_mode : 1;
+		u64 reserved_11_11 : 1;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_frm_ctl_cn52xx {
+		u64 reserved_10_63 : 54;
+		u64 pre_align : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_rxx_frm_ctl_cn52xx cn56xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn61xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn63xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn66xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn68xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn70xx;
+	struct cvmx_agl_gmx_rxx_frm_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_frm_ctl cvmx_agl_gmx_rxx_frm_ctl_t;
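+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of
+ * selecting the "HW does it all" PAUSE mode from the CTL_BCK/CTL_DRP note
+ * above.  csr_rd()/csr_wr() and the address parameter are assumptions.
+ *
+ *      @verbatim
+ *      static void agl_rx_pause_hw_mode(u64 frm_ctl_addr)
+ *      {
+ *              cvmx_agl_gmx_rxx_frm_ctl_t frm_ctl;
+ *
+ *              frm_ctl.u64 = csr_rd(frm_ctl_addr);
+ *              frm_ctl.s.ctl_bck = 1;  // HW reacts to PAUSE packets
+ *              frm_ctl.s.ctl_drp = 1;  // ...and drops them afterwards
+ *              csr_wr(frm_ctl_addr, frm_ctl.u64);
+ *      }
+ *      @endverbatim
+ */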
+
+/**
+ * cvmx_agl_gmx_rx#_frm_max
+ *
+ * AGL_GMX_RX_FRM_MAX = Frame Max length
+ *
+ *
+ * Notes:
+ * When changing the LEN field, be sure that LEN does not exceed
+ * AGL_GMX_RX_JABBER[CNT]. Failure to meet this constraint will cause packets that
+ * are within the maximum length parameter to be rejected because they exceed
+ * the AGL_GMX_RX_JABBER[CNT] limit.
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_frm_max {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_frm_max_s {
+		u64 reserved_16_63 : 48;
+		u64 len : 16;
+	} s;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn52xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn56xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn61xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn63xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn66xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn68xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn70xx;
+	struct cvmx_agl_gmx_rxx_frm_max_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_frm_max cvmx_agl_gmx_rxx_frm_max_t;
+
+/**
+ * cvmx_agl_gmx_rx#_frm_min
+ *
+ * AGL_GMX_RX_FRM_MIN = Frame Min length
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_frm_min {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_frm_min_s {
+		u64 reserved_16_63 : 48;
+		u64 len : 16;
+	} s;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn52xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn56xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn61xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn63xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn66xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn68xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn70xx;
+	struct cvmx_agl_gmx_rxx_frm_min_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_frm_min cvmx_agl_gmx_rxx_frm_min_t;
+
+/**
+ * cvmx_agl_gmx_rx#_ifg
+ *
+ * AGL_GMX_RX_IFG = RX Min IFG
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_ifg {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_ifg_s {
+		u64 reserved_4_63 : 60;
+		u64 ifg : 4;
+	} s;
+	struct cvmx_agl_gmx_rxx_ifg_s cn52xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_ifg_s cn56xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_ifg_s cn61xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn63xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_ifg_s cn66xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn68xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_ifg_s cn70xx;
+	struct cvmx_agl_gmx_rxx_ifg_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_ifg cvmx_agl_gmx_rxx_ifg_t;
+
+/**
+ * cvmx_agl_gmx_rx#_int_en
+ *
+ * AGL_GMX_RX_INT_EN = Interrupt Enable
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_int_en {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_int_en_s {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 reserved_20_28 : 9;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_int_en_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 reserved_1_1 : 1;
+		u64 minerr : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_rxx_int_en_cn52xx cn56xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn61xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx cn63xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx cn63xxp1;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx cn66xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx cn68xx;
+	struct cvmx_agl_gmx_rxx_int_en_cn61xx cn68xxp1;
+	struct cvmx_agl_gmx_rxx_int_en_s cn70xx;
+	struct cvmx_agl_gmx_rxx_int_en_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_int_en cvmx_agl_gmx_rxx_int_en_t;
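+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of
+ * enabling a typical subset of the RX error interrupts defined above.
+ * csr_wr() and the address parameter are assumptions.
+ *
+ *      @verbatim
+ *      static void agl_rx_int_enable(u64 int_en_addr)
+ *      {
+ *              cvmx_agl_gmx_rxx_int_en_t en;
+ *
+ *              en.u64 = 0;
+ *              en.s.ovrerr = 1;        // RX FIFO overflow
+ *              en.s.fcserr = 1;        // bad FCS
+ *              en.s.jabber = 1;        // frame exceeded AGL_GMX_RX_JABBER[CNT]
+ *              csr_wr(int_en_addr, en.u64);
+ *      }
+ *      @endverbatim
+ */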
+
+/**
+ * cvmx_agl_gmx_rx#_int_reg
+ *
+ * AGL_GMX_RX_INT_REG = Interrupt Register
+ *
+ *
+ * Notes:
+ * (1) exceptions will only be raised to the control processor if the
+ *     corresponding bit in the AGL_GMX_RX_INT_EN register is set.
+ *
+ * (2) exception conditions 10:0 can also set the rcv/opcode in the received
+ *     packet's workQ entry.  The AGL_GMX_RX_FRM_CHK register provides a bit mask
+ *     for configuring which conditions set the error.
+ *
+ * (3) in half duplex operation, the expectation is that collisions will appear
+ *     as MINERRs.
+ *
+ * (4) JABBER - An RX Jabber error indicates that a packet was received which
+ *              is longer than the maximum allowed packet as defined by the
+ *              system.  GMX will truncate the packet at the JABBER count.
+ *              Failure to do so could lead to system instability.
+ *
+ * (6) MAXERR - for untagged frames, the total frame DA+SA+TL+DATA+PAD+FCS >
+ *              AGL_GMX_RX_FRM_MAX.  For tagged frames, DA+SA+VLAN+TL+DATA+PAD+FCS
+ *              > AGL_GMX_RX_FRM_MAX + 4*VLAN_VAL + 4*VLAN_STACKED.
+ *
+ * (7) MINERR - total frame DA+SA+TL+DATA+PAD+FCS < AGL_GMX_RX_FRM_MIN.
+ *
+ * (8) ALNERR - Indicates that the packet received was not an integer number of
+ *              bytes.  If FCS checking is enabled, ALNERR will only assert if
+ *              the FCS is bad.  If FCS checking is disabled, ALNERR will
+ *              assert in all non-integer frame cases.
+ *
+ * (9) Collisions - Collisions can only occur in half-duplex mode.  A collision
+ *                  is assumed by the receiver when the received
+ *                  frame < AGL_GMX_RX_FRM_MIN - this is normally a MINERR
+ *
+ * (A) LENERR - Length errors occur when the received packet does not match the
+ *              length field.  LENERR is only checked for packets between 64
+ *              and 1500 bytes.  For untagged frames, the length must match
+ *              exactly.  For tagged frames, the length or length+4 must match.
+ *
+ * (B) PCTERR - checks that the frame begins with a valid PREAMBLE sequence.
+ *              Does not check the number of PREAMBLE cycles.
+ *
+ * (C) OVRERR -
+ *
+ *              OVRERR is an architectural assertion check internal to GMX to
+ *              make sure no assumption was violated.  In a correctly operating
+ *              system, this interrupt can never fire.
+ *
+ *              GMX has an internal arbiter which selects which of 4 ports to
+ *              buffer in the main RX FIFO.  If we normally buffer 8 bytes,
+ *              then each port will typically push a tick every 8 cycles - if
+ *              the packet interface is going as fast as possible.  If there
+ *              are four ports, they push every two cycles.  The assumption
+ *              is that the inbound module will always be able to consume a
+ *              tick before the next one is produced.  If that doesn't
+ *              happen, OVRERR will assert.
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_int_reg {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_int_reg_s {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 reserved_20_28 : 9;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_int_reg_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 reserved_1_1 : 1;
+		u64 minerr : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_rxx_int_reg_cn52xx cn56xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn52xx cn56xxp1;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn61xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx cn63xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx cn63xxp1;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx cn66xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx cn68xx;
+	struct cvmx_agl_gmx_rxx_int_reg_cn61xx cn68xxp1;
+	struct cvmx_agl_gmx_rxx_int_reg_s cn70xx;
+	struct cvmx_agl_gmx_rxx_int_reg_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_int_reg cvmx_agl_gmx_rxx_int_reg_t;
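+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of
+ * reading and acknowledging the interrupt sources above.  It assumes
+ * csr_rd()/csr_wr() and the usual Octeon write-one-to-clear semantics of
+ * *_INT_REG registers.
+ *
+ *      @verbatim
+ *      static void agl_rx_ack_interrupts(u64 int_reg_addr)
+ *      {
+ *              cvmx_agl_gmx_rxx_int_reg_t isr;
+ *
+ *              isr.u64 = csr_rd(int_reg_addr);
+ *              if (isr.s.jabber) {
+ *                      // packet was truncated at the JABBER count
+ *              }
+ *              csr_wr(int_reg_addr, isr.u64);  // write 1s back to clear
+ *      }
+ *      @endverbatim
+ */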
+
+/**
+ * cvmx_agl_gmx_rx#_jabber
+ *
+ * AGL_GMX_RX_JABBER = The max size packet after which GMX will truncate
+ *
+ *
+ * Notes:
+ * CNT must be 8-byte aligned such that CNT[2:0] == 0
+ *
+ *   The packet that will be sent to the packet input logic will have an
+ *   additional 8 bytes if AGL_GMX_RX_FRM_CTL[PRE_CHK] is set and
+ *   AGL_GMX_RX_FRM_CTL[PRE_STRP] is clear.  The max packet that will be sent is
+ *   defined as...
+ *
+ *        max_sized_packet = AGL_GMX_RX_JABBER[CNT]+((AGL_GMX_RX_FRM_CTL[PRE_CHK] & !AGL_GMX_RX_FRM_CTL[PRE_STRP])*8)
+ *
+ *   Be sure the CNT field value is at least as large as the
+ *   AGL_GMX_RX_FRM_MAX[LEN] value. Failure to meet this constraint will cause
+ *   packets that are within the AGL_GMX_RX_FRM_MAX[LEN] length to be rejected
+ *   because they exceed the CNT limit.
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_jabber {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_jabber_s {
+		u64 reserved_16_63 : 48;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_agl_gmx_rxx_jabber_s cn52xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_jabber_s cn56xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_jabber_s cn61xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn63xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_jabber_s cn66xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn68xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_jabber_s cn70xx;
+	struct cvmx_agl_gmx_rxx_jabber_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_jabber cvmx_agl_gmx_rxx_jabber_t;
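+
+/*
+ * Editor's note: the max_sized_packet formula and the FRM_MAX/JABBER
+ * constraint from the note above, as an illustrative C sketch (not in the
+ * original sources).
+ *
+ *      @verbatim
+ *      static u64 agl_rx_max_sized_packet(u64 jabber_cnt, int pre_chk,
+ *                                         int pre_strp)
+ *      {
+ *              return jabber_cnt + ((pre_chk && !pre_strp) ? 8 : 0);
+ *      }
+ *
+ *      // CNT must be 8-byte aligned and at least AGL_GMX_RX_FRM_MAX[LEN]
+ *      static int agl_rx_jabber_cnt_valid(u64 jabber_cnt, u64 frm_max_len)
+ *      {
+ *              return (jabber_cnt & 7) == 0 && jabber_cnt >= frm_max_len;
+ *      }
+ *      @endverbatim
+ */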
+
+/**
+ * cvmx_agl_gmx_rx#_pause_drop_time
+ *
+ * AGL_GMX_RX_PAUSE_DROP_TIME = The TIME field in a PAUSE Packet which was dropped due to GMX RX FIFO full condition
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_pause_drop_time {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s {
+		u64 reserved_16_63 : 48;
+		u64 status : 16;
+	} s;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn52xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn56xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn61xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn63xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn66xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn68xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn70xx;
+	struct cvmx_agl_gmx_rxx_pause_drop_time_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_pause_drop_time cvmx_agl_gmx_rxx_pause_drop_time_t;
+
+/**
+ * cvmx_agl_gmx_rx#_rx_inbnd
+ *
+ * AGL_GMX_RX_INBND = RGMII InBand Link Status
+ *
+ */
+union cvmx_agl_gmx_rxx_rx_inbnd {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s {
+		u64 reserved_4_63 : 60;
+		u64 duplex : 1;
+		u64 speed : 2;
+		u64 status : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn61xx;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn63xx;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn66xx;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn68xx;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn70xx;
+	struct cvmx_agl_gmx_rxx_rx_inbnd_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_rx_inbnd cvmx_agl_gmx_rxx_rx_inbnd_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_ctl
+ *
+ * AGL_GMX_RX_STATS_CTL = RX Stats Control register
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rxx_stats_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 rd_clr : 1;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_ctl cvmx_agl_gmx_rxx_stats_ctl_t;
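+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of the
+ * read-to-clear mode controlled by RD_CLR.  csr_rd()/csr_wr() and the
+ * address parameters are assumptions.
+ *
+ *      @verbatim
+ *      static u64 agl_rx_read_and_clear(u64 stats_ctl_addr, u64 counter_addr)
+ *      {
+ *              cvmx_agl_gmx_rxx_stats_ctl_t ctl;
+ *
+ *              ctl.u64 = 0;
+ *              ctl.s.rd_clr = 1;
+ *              csr_wr(stats_ctl_addr, ctl.u64);
+ *              return csr_rd(counter_addr);    // this read clears the count
+ *      }
+ *      @endverbatim
+ */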
+
+/**
+ * cvmx_agl_gmx_rx#_stats_octs
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_stats_octs {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_octs_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_octs cvmx_agl_gmx_rxx_stats_octs_t;
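+
+/*
+ * Editor's note: since the counters wrap, a poller that does not use RD_CLR
+ * should compute deltas modulo the counter width.  Illustrative sketch, not
+ * in the original sources.
+ *
+ *      @verbatim
+ *      static u64 agl_stats_delta48(u64 prev, u64 now)
+ *      {
+ *              return (now - prev) & ((1ull << 48) - 1);
+ *      }
+ *      @endverbatim
+ */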
+
+/**
+ * cvmx_agl_gmx_rx#_stats_octs_ctl
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_stats_octs_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_octs_ctl cvmx_agl_gmx_rxx_stats_octs_ctl_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_octs_dmac
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_stats_octs_dmac {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_dmac_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_octs_dmac cvmx_agl_gmx_rxx_stats_octs_dmac_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_octs_drp
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_stats_octs_drp {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_octs_drp_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_octs_drp cvmx_agl_gmx_rxx_stats_octs_drp_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_pkts
+ *
+ * Count of good received packets - packets that are not recognized as PAUSE
+ * packets, are not dropped due to the DMAC filter or a full RX FIFO, and do
+ * not carry any other error OPCODE (FCS, Length, etc).
+ */
+union cvmx_agl_gmx_rxx_stats_pkts {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_pkts cvmx_agl_gmx_rxx_stats_pkts_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_pkts_bad
+ *
+ * Count of all packets received with some error that were not dropped
+ * either due to the dmac filter or lack of room in the receive FIFO.
+ */
+union cvmx_agl_gmx_rxx_stats_pkts_bad {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_bad_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_pkts_bad cvmx_agl_gmx_rxx_stats_pkts_bad_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_pkts_ctl
+ *
+ * Count of all packets received that were recognized as Flow Control or
+ * PAUSE packets.  PAUSE packets with any kind of error are counted in
+ * AGL_GMX_RX_STATS_PKTS_BAD.  Pause packets can be optionally dropped or
+ * forwarded based on the AGL_GMX_RX_FRM_CTL[CTL_DRP] bit.  This count
+ * increments regardless of whether the packet is dropped.  Pause packets
+ * will never be counted in AGL_GMX_RX_STATS_PKTS.  Packets dropped due to the
+ * dmac filter will be counted in AGL_GMX_RX_STATS_PKTS_DMAC and not here.
+ */
+union cvmx_agl_gmx_rxx_stats_pkts_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_pkts_ctl cvmx_agl_gmx_rxx_stats_pkts_ctl_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_pkts_dmac
+ *
+ * Count of all packets received that were dropped by the dmac filter.
+ * Packets that match the DMAC will be dropped and counted here regardless
+ * of whether they were bad packets.  These packets will never be counted in
+ * AGL_GMX_RX_STATS_PKTS.
+ * Some packets that were not able to satisfy the DECISION_CNT may not
+ * actually be dropped by Octeon, but they will be counted here as if they
+ * were dropped.
+ */
+union cvmx_agl_gmx_rxx_stats_pkts_dmac {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_dmac_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_pkts_dmac cvmx_agl_gmx_rxx_stats_pkts_dmac_t;
+
+/**
+ * cvmx_agl_gmx_rx#_stats_pkts_drp
+ *
+ * Count of all packets received that were dropped due to a full receive
+ * FIFO.  This counts good and bad packets received - all packets dropped by
+ * the FIFO.  It does not count packets dropped by the dmac or pause packet
+ * filters.
+ */
+union cvmx_agl_gmx_rxx_stats_pkts_drp {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn52xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn56xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn61xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn63xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn66xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn68xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn70xx;
+	struct cvmx_agl_gmx_rxx_stats_pkts_drp_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_stats_pkts_drp cvmx_agl_gmx_rxx_stats_pkts_drp_t;
+
+/**
+ * cvmx_agl_gmx_rx#_udd_skp
+ *
+ * AGL_GMX_RX_UDD_SKP = Amount of User-defined data before the start of the L2 data
+ *
+ *
+ * Notes:
+ * (1) The skip bytes are part of the packet and will be sent down the NCB
+ *     packet interface and will be handled by PKI.
+ *
+ * (2) The system can determine if the UDD bytes are included in the FCS check
+ *     by using the FCSSEL field - if the FCS check is enabled.
+ *
+ * (3) Assume that the preamble/sfd is always at the start of the frame - even
+ *     before UDD bytes.  In most cases, there will be no preamble in these
+ *     cases since it will be MII to MII communication without a PHY
+ *     involved.
+ *
+ * (4) We can still do address filtering and control packet filtering if the
+ *     user desires.
+ *
+ * (5) UDD_SKP must be 0 in half-duplex operation unless
+ *     AGL_GMX_RX_FRM_CTL[PRE_CHK] is clear.  If AGL_GMX_RX_FRM_CTL[PRE_CHK] is set,
+ *     then UDD_SKP will normally be 8.
+ *
+ * (6) In all cases, the UDD bytes will be sent down the packet interface as
+ *     part of the packet.  The UDD bytes are never stripped from the actual
+ *     packet.
+ *
+ * (7) If LEN != 0, then AGL_GMX_RX_FRM_CHK[LENERR] will be disabled and AGL_GMX_RX_INT_REG[LENERR] will be zero
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rxx_udd_skp {
+	u64 u64;
+	struct cvmx_agl_gmx_rxx_udd_skp_s {
+		u64 reserved_9_63 : 55;
+		u64 fcssel : 1;
+		u64 reserved_7_7 : 1;
+		u64 len : 7;
+	} s;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn52xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn52xxp1;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn56xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn56xxp1;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn61xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn63xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn63xxp1;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn66xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn68xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn68xxp1;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn70xx;
+	struct cvmx_agl_gmx_rxx_udd_skp_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rxx_udd_skp cvmx_agl_gmx_rxx_udd_skp_t;
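+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of
+ * programming the UDD skip.  csr_wr() and the address parameter are
+ * assumptions; see note (2) above for the FCSSEL polarity.
+ *
+ *      @verbatim
+ *      static void agl_rx_udd_skip(u64 udd_skp_addr, unsigned int len)
+ *      {
+ *              cvmx_agl_gmx_rxx_udd_skp_t udd;
+ *
+ *              udd.u64 = 0;
+ *              udd.s.len = len;        // UDD bytes before the L2 header
+ *              csr_wr(udd_skp_addr, udd.u64);
+ *      }
+ *      @endverbatim
+ */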
+
+/**
+ * cvmx_agl_gmx_rx_bp_drop#
+ *
+ * AGL_GMX_RX_BP_DROP = FIFO mark for packet drop
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rx_bp_dropx {
+	u64 u64;
+	struct cvmx_agl_gmx_rx_bp_dropx_s {
+		u64 reserved_6_63 : 58;
+		u64 mark : 6;
+	} s;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn52xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn52xxp1;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn56xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn56xxp1;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn61xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn63xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn63xxp1;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn66xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn68xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn68xxp1;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn70xx;
+	struct cvmx_agl_gmx_rx_bp_dropx_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rx_bp_dropx cvmx_agl_gmx_rx_bp_dropx_t;
+
+/**
+ * cvmx_agl_gmx_rx_bp_off#
+ *
+ * AGL_GMX_RX_BP_OFF = Low-water mark for packet drop
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rx_bp_offx {
+	u64 u64;
+	struct cvmx_agl_gmx_rx_bp_offx_s {
+		u64 reserved_6_63 : 58;
+		u64 mark : 6;
+	} s;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn52xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn52xxp1;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn56xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn56xxp1;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn61xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn63xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn63xxp1;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn66xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn68xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn68xxp1;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn70xx;
+	struct cvmx_agl_gmx_rx_bp_offx_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rx_bp_offx cvmx_agl_gmx_rx_bp_offx_t;
+
+/**
+ * cvmx_agl_gmx_rx_bp_on#
+ *
+ * AGL_GMX_RX_BP_ON = High-water mark for port/interface backpressure
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_rx_bp_onx {
+	u64 u64;
+	struct cvmx_agl_gmx_rx_bp_onx_s {
+		u64 reserved_9_63 : 55;
+		u64 mark : 9;
+	} s;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn52xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn52xxp1;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn56xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn56xxp1;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn61xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn63xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn63xxp1;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn66xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn68xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn68xxp1;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn70xx;
+	struct cvmx_agl_gmx_rx_bp_onx_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rx_bp_onx cvmx_agl_gmx_rx_bp_onx_t;
+
+/**
+ * cvmx_agl_gmx_rx_prt_info
+ *
+ * AGL_GMX_RX_PRT_INFO = state information for the ports
+ *
+ *
+ * Notes:
+ * COMMIT[0], DROP[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * COMMIT[1], DROP[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rx_prt_info {
+	u64 u64;
+	struct cvmx_agl_gmx_rx_prt_info_s {
+		u64 reserved_18_63 : 46;
+		u64 drop : 2;
+		u64 reserved_2_15 : 14;
+		u64 commit : 2;
+	} s;
+	struct cvmx_agl_gmx_rx_prt_info_s cn52xx;
+	struct cvmx_agl_gmx_rx_prt_info_s cn52xxp1;
+	struct cvmx_agl_gmx_rx_prt_info_cn56xx {
+		u64 reserved_17_63 : 47;
+		u64 drop : 1;
+		u64 reserved_1_15 : 15;
+		u64 commit : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_rx_prt_info_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_rx_prt_info_s cn61xx;
+	struct cvmx_agl_gmx_rx_prt_info_s cn63xx;
+	struct cvmx_agl_gmx_rx_prt_info_s cn63xxp1;
+	struct cvmx_agl_gmx_rx_prt_info_s cn66xx;
+	struct cvmx_agl_gmx_rx_prt_info_s cn68xx;
+	struct cvmx_agl_gmx_rx_prt_info_s cn68xxp1;
+	struct cvmx_agl_gmx_rx_prt_info_cn56xx cn70xx;
+	struct cvmx_agl_gmx_rx_prt_info_cn56xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rx_prt_info cvmx_agl_gmx_rx_prt_info_t;
+
+/**
+ * cvmx_agl_gmx_rx_tx_status
+ *
+ * AGL_GMX_RX_TX_STATUS = GMX RX/TX Status
+ *
+ *
+ * Notes:
+ * RX[0], TX[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * RX[1], TX[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_rx_tx_status {
+	u64 u64;
+	struct cvmx_agl_gmx_rx_tx_status_s {
+		u64 reserved_6_63 : 58;
+		u64 tx : 2;
+		u64 reserved_2_3 : 2;
+		u64 rx : 2;
+	} s;
+	struct cvmx_agl_gmx_rx_tx_status_s cn52xx;
+	struct cvmx_agl_gmx_rx_tx_status_s cn52xxp1;
+	struct cvmx_agl_gmx_rx_tx_status_cn56xx {
+		u64 reserved_5_63 : 59;
+		u64 tx : 1;
+		u64 reserved_1_3 : 3;
+		u64 rx : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_rx_tx_status_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_rx_tx_status_s cn61xx;
+	struct cvmx_agl_gmx_rx_tx_status_s cn63xx;
+	struct cvmx_agl_gmx_rx_tx_status_s cn63xxp1;
+	struct cvmx_agl_gmx_rx_tx_status_s cn66xx;
+	struct cvmx_agl_gmx_rx_tx_status_s cn68xx;
+	struct cvmx_agl_gmx_rx_tx_status_s cn68xxp1;
+	struct cvmx_agl_gmx_rx_tx_status_cn56xx cn70xx;
+	struct cvmx_agl_gmx_rx_tx_status_cn56xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_rx_tx_status cvmx_agl_gmx_rx_tx_status_t;
+
+/**
+ * cvmx_agl_gmx_smac#
+ *
+ * AGL_GMX_SMAC = Packet SMAC
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_smacx {
+	u64 u64;
+	struct cvmx_agl_gmx_smacx_s {
+		u64 reserved_48_63 : 16;
+		u64 smac : 48;
+	} s;
+	struct cvmx_agl_gmx_smacx_s cn52xx;
+	struct cvmx_agl_gmx_smacx_s cn52xxp1;
+	struct cvmx_agl_gmx_smacx_s cn56xx;
+	struct cvmx_agl_gmx_smacx_s cn56xxp1;
+	struct cvmx_agl_gmx_smacx_s cn61xx;
+	struct cvmx_agl_gmx_smacx_s cn63xx;
+	struct cvmx_agl_gmx_smacx_s cn63xxp1;
+	struct cvmx_agl_gmx_smacx_s cn66xx;
+	struct cvmx_agl_gmx_smacx_s cn68xx;
+	struct cvmx_agl_gmx_smacx_s cn68xxp1;
+	struct cvmx_agl_gmx_smacx_s cn70xx;
+	struct cvmx_agl_gmx_smacx_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_smacx cvmx_agl_gmx_smacx_t;
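+
+/*
+ * Editor's note: an illustrative sketch (not in the original sources) of
+ * packing a 6-byte MAC address into the 48-bit SMAC field.
+ *
+ *      @verbatim
+ *      static u64 agl_smac_from_bytes(const u8 mac[6])
+ *      {
+ *              cvmx_agl_gmx_smacx_t smac;
+ *              int i;
+ *
+ *              smac.u64 = 0;
+ *              for (i = 0; i < 6; i++)
+ *                      smac.s.smac = (smac.s.smac << 8) | mac[i];
+ *              return smac.u64;
+ *      }
+ *      @endverbatim
+ */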
+
+/**
+ * cvmx_agl_gmx_stat_bp
+ *
+ * AGL_GMX_STAT_BP = Number of cycles that the TX/Stats block has held up operation
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ *
+ *
+ * It has no relationship with the TX FIFO per se.  The TX engine sends packets
+ * from PKO and upon completion, sends a command to the TX stats block for an
+ * update based on the packet size.  The stats operation can take a few cycles -
+ * normally not enough to be visible considering the 64B min packet size that is
+ * ethernet convention.
+ *
+ * In the rare case in which SW attempts to schedule very small packets, or
+ * the sclk (6xxx) is running very slowly, the stats updates may not happen in
+ * real time and can back up the TX engine.
+ *
+ * This counter is the number of cycles in which the TX engine was stalled.  In
+ * normal operation, it should always be zero.
+ */
+union cvmx_agl_gmx_stat_bp {
+	u64 u64;
+	struct cvmx_agl_gmx_stat_bp_s {
+		u64 reserved_17_63 : 47;
+		u64 bp : 1;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_agl_gmx_stat_bp_s cn52xx;
+	struct cvmx_agl_gmx_stat_bp_s cn52xxp1;
+	struct cvmx_agl_gmx_stat_bp_s cn56xx;
+	struct cvmx_agl_gmx_stat_bp_s cn56xxp1;
+	struct cvmx_agl_gmx_stat_bp_s cn61xx;
+	struct cvmx_agl_gmx_stat_bp_s cn63xx;
+	struct cvmx_agl_gmx_stat_bp_s cn63xxp1;
+	struct cvmx_agl_gmx_stat_bp_s cn66xx;
+	struct cvmx_agl_gmx_stat_bp_s cn68xx;
+	struct cvmx_agl_gmx_stat_bp_s cn68xxp1;
+	struct cvmx_agl_gmx_stat_bp_s cn70xx;
+	struct cvmx_agl_gmx_stat_bp_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_stat_bp cvmx_agl_gmx_stat_bp_t;
+
+/**
+ * cvmx_agl_gmx_tx#_append
+ *
+ * AGL_GMX_TX_APPEND = Packet TX Append Control
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_append {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_append_s {
+		u64 reserved_4_63 : 60;
+		u64 force_fcs : 1;
+		u64 fcs : 1;
+		u64 pad : 1;
+		u64 preamble : 1;
+	} s;
+	struct cvmx_agl_gmx_txx_append_s cn52xx;
+	struct cvmx_agl_gmx_txx_append_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_append_s cn56xx;
+	struct cvmx_agl_gmx_txx_append_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_append_s cn61xx;
+	struct cvmx_agl_gmx_txx_append_s cn63xx;
+	struct cvmx_agl_gmx_txx_append_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_append_s cn66xx;
+	struct cvmx_agl_gmx_txx_append_s cn68xx;
+	struct cvmx_agl_gmx_txx_append_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_append_s cn70xx;
+	struct cvmx_agl_gmx_txx_append_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_append cvmx_agl_gmx_txx_append_t;
+
+/**
+ * cvmx_agl_gmx_tx#_clk
+ *
+ * AGL_GMX_TX_CLK = RGMII TX Clock Generation Register
+ *
+ *
+ * Notes:
+ * Normal Programming Values:
+ *  (1) RGMII, 1000Mbs   (AGL_GMX_PRT_CFG[SPEED]==1), CLK_CNT == 1
+ *  (2) RGMII, 10/100Mbs (AGL_GMX_PRT_CFG[SPEED]==0), CLK_CNT == 50/5
+ *  (3) MII,   10/100Mbs (AGL_GMX_PRT_CFG[SPEED]==0), CLK_CNT == 1
+ *
+ * RGMII Example:
+ *  Given a 125MHz PLL reference clock...
+ *   CLK_CNT ==  1 ==> 125.0MHz TXC clock period (8ns* 1)
+ *   CLK_CNT ==  5 ==>  25.0MHz TXC clock period (8ns* 5)
+ *   CLK_CNT == 50 ==>   2.5MHz TXC clock period (8ns*50)
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_clk {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_clk_s {
+		u64 reserved_6_63 : 58;
+		u64 clk_cnt : 6;
+	} s;
+	struct cvmx_agl_gmx_txx_clk_s cn61xx;
+	struct cvmx_agl_gmx_txx_clk_s cn63xx;
+	struct cvmx_agl_gmx_txx_clk_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_clk_s cn66xx;
+	struct cvmx_agl_gmx_txx_clk_s cn68xx;
+	struct cvmx_agl_gmx_txx_clk_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_clk_s cn70xx;
+	struct cvmx_agl_gmx_txx_clk_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_clk cvmx_agl_gmx_txx_clk_t;
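+
+/*
+ * Editor's note: the "Normal Programming Values" from the note above as an
+ * illustrative C sketch (not in the original sources), assuming the 125 MHz
+ * reference from the RGMII example.
+ *
+ *      @verbatim
+ *      static unsigned int agl_tx_clk_cnt(int is_rgmii, int speed_mbps)
+ *      {
+ *              if (!is_rgmii || speed_mbps == 1000)
+ *                      return 1;                       // 125.0 MHz TXC
+ *              return (speed_mbps == 100) ? 5 : 50;    // 25.0 / 2.5 MHz TXC
+ *      }
+ *      @endverbatim
+ */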
+
+/**
+ * cvmx_agl_gmx_tx#_ctl
+ *
+ * AGL_GMX_TX_CTL = TX Control register
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 xsdef_en : 1;
+		u64 xscol_en : 1;
+	} s;
+	struct cvmx_agl_gmx_txx_ctl_s cn52xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_ctl_s cn56xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_ctl_s cn61xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn63xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_ctl_s cn66xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn68xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_ctl_s cn70xx;
+	struct cvmx_agl_gmx_txx_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_ctl cvmx_agl_gmx_txx_ctl_t;
+
+/**
+ * cvmx_agl_gmx_tx#_min_pkt
+ *
+ * AGL_GMX_TX_MIN_PKT = Packet TX Min Size Packet (PAD up to min size)
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_min_pkt {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_min_pkt_s {
+		u64 reserved_8_63 : 56;
+		u64 min_size : 8;
+	} s;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn52xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn56xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn61xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn63xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn66xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn68xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn70xx;
+	struct cvmx_agl_gmx_txx_min_pkt_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_min_pkt cvmx_agl_gmx_txx_min_pkt_t;
+
+/**
+ * cvmx_agl_gmx_tx#_pause_pkt_interval
+ *
+ * AGL_GMX_TX_PAUSE_PKT_INTERVAL = Packet TX Pause Packet transmission interval - how often PAUSE packets will be sent
+ *
+ *
+ * Notes:
+ * Choosing proper values of AGL_GMX_TX_PAUSE_PKT_TIME[TIME] and
+ * AGL_GMX_TX_PAUSE_PKT_INTERVAL[INTERVAL] can be challenging to the system
+ * designer.  It is suggested that TIME be much greater than INTERVAL and
+ * AGL_GMX_TX_PAUSE_ZERO[SEND] be set.  This allows a periodic refresh of the PAUSE
+ * count and then when the backpressure condition is lifted, a PAUSE packet
+ * with TIME==0 will be sent indicating that Octane is ready for additional
+ * data.
+ *
+ * If the system chooses to not set AGL_GMX_TX_PAUSE_ZERO[SEND], then it is
+ * suggested that TIME and INTERVAL are programmed such that they satisfy the
+ * following rule...
+ *
+ *    INTERVAL <= TIME - (largest_pkt_size + IFG + pause_pkt_size)
+ *
+ * where largest_pkt_size is the largest packet that the system can send
+ * (normally 1518B), IFG is the interframe gap and pause_pkt_size is the size
+ * of the PAUSE packet (normally 64B).
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_pause_pkt_interval {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s {
+		u64 reserved_16_63 : 48;
+		u64 interval : 16;
+	} s;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn52xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn56xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn61xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn63xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn66xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn68xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn70xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_interval_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_pause_pkt_interval cvmx_agl_gmx_txx_pause_pkt_interval_t;
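+
+/*
+ * Editor's note: the TIME/INTERVAL rule from the note above as an
+ * illustrative C check (not in the original sources), using the typical
+ * 1518 B largest frame and 64 B PAUSE packet mentioned there.
+ *
+ *      @verbatim
+ *      static int agl_pause_interval_ok(u64 time, u64 interval, u64 ifg)
+ *      {
+ *              u64 need = 1518 + ifg + 64; // largest frame + IFG + PAUSE
+ *
+ *              return time > need && interval <= time - need;
+ *      }
+ *      @endverbatim
+ */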
+
+/**
+ * cvmx_agl_gmx_tx#_pause_pkt_time
+ *
+ * AGL_GMX_TX_PAUSE_PKT_TIME = Packet TX Pause Packet pause_time field
+ *
+ *
+ * Notes:
+ * Choosing proper values of AGL_GMX_TX_PAUSE_PKT_TIME[TIME] and
+ * AGL_GMX_TX_PAUSE_PKT_INTERVAL[INTERVAL] can be challenging to the system
+ * designer.  It is suggested that TIME be much greater than INTERVAL and
+ * AGL_GMX_TX_PAUSE_ZERO[SEND] be set.  This allows a periodic refresh of the PAUSE
+ * count and then when the backpressure condition is lifted, a PAUSE packet
+ * with TIME==0 will be sent indicating that Octane is ready for additional
+ * data.
+ *
+ * If the system chooses to not set AGL_GMX_TX_PAUSE_ZERO[SEND], then it is
+ * suggested that TIME and INTERVAL are programmed such that they satisfy the
+ * following rule...
+ *
+ *    INTERVAL <= TIME - (largest_pkt_size + IFG + pause_pkt_size)
+ *
+ * where largest_pkt_size is the largest packet that the system can send
+ * (normally 1518B), IFG is the interframe gap and pause_pkt_size is the size
+ * of the PAUSE packet (normally 64B).
+ *
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_pause_pkt_time {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} s;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn52xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn56xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn61xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn63xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn66xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn68xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn70xx;
+	struct cvmx_agl_gmx_txx_pause_pkt_time_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_pause_pkt_time cvmx_agl_gmx_txx_pause_pkt_time_t;
+
+/**
+ * cvmx_agl_gmx_tx#_pause_togo
+ *
+ * AGL_GMX_TX_PAUSE_TOGO = Packet TX Amount of time remaining to backpressure
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_pause_togo {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_pause_togo_s {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} s;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn52xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn56xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn61xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn63xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn66xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn68xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn70xx;
+	struct cvmx_agl_gmx_txx_pause_togo_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_pause_togo cvmx_agl_gmx_txx_pause_togo_t;
+
+/**
+ * cvmx_agl_gmx_tx#_pause_zero
+ *
+ * AGL_GMX_TX_PAUSE_ZERO = Packet TX Send a TIME==0 PAUSE packet when backpressure ends
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_pause_zero {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_pause_zero_s {
+		u64 reserved_1_63 : 63;
+		u64 send : 1;
+	} s;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn52xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn56xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn61xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn63xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn66xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn68xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn70xx;
+	struct cvmx_agl_gmx_txx_pause_zero_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_pause_zero cvmx_agl_gmx_txx_pause_zero_t;
+
+/**
+ * cvmx_agl_gmx_tx#_soft_pause
+ *
+ * AGL_GMX_TX_SOFT_PAUSE = Packet TX Software Pause
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_soft_pause {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_soft_pause_s {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} s;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn52xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn56xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn61xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn63xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn66xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn68xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn70xx;
+	struct cvmx_agl_gmx_txx_soft_pause_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_soft_pause cvmx_agl_gmx_txx_soft_pause_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat0
+ *
+ * AGL_GMX_TX_STAT0 = AGL_GMX_TX_STATS_XSDEF / AGL_GMX_TX_STATS_XSCOL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat0 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat0_s {
+		u64 xsdef : 32;
+		u64 xscol : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat0_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat0_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat0_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat0_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat0_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat0_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat0 cvmx_agl_gmx_txx_stat0_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat1
+ *
+ * AGL_GMX_TX_STAT1 = AGL_GMX_TX_STATS_SCOL  / AGL_GMX_TX_STATS_MCOL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat1 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat1_s {
+		u64 scol : 32;
+		u64 mcol : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat1_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat1_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat1_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat1_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat1_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat1_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat1 cvmx_agl_gmx_txx_stat1_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat2
+ *
+ * AGL_GMX_TX_STAT2 = AGL_GMX_TX_STATS_OCTS
+ *
+ *
+ * Notes:
+ * - Octet counts are the sum of all data transmitted on the wire including
+ *   packet data, pad bytes, fcs bytes, pause bytes, and jam bytes.  The octet
+ *   counts do not include PREAMBLE bytes or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat2 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_agl_gmx_txx_stat2_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat2_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat2_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat2_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat2_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat2_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat2 cvmx_agl_gmx_txx_stat2_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat3
+ *
+ * AGL_GMX_TX_STAT3 = AGL_GMX_TX_STATS_PKTS
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat3 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat3_s {
+		u64 reserved_32_63 : 32;
+		u64 pkts : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat3_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat3_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat3_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat3_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat3_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat3_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat3 cvmx_agl_gmx_txx_stat3_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat4
+ *
+ * AGL_GMX_TX_STAT4 = AGL_GMX_TX_STATS_HIST1 (64) / AGL_GMX_TX_STATS_HIST0 (<64)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat4 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat4_s {
+		u64 hist1 : 32;
+		u64 hist0 : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat4_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat4_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat4_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat4_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat4_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat4_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat4 cvmx_agl_gmx_txx_stat4_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat5
+ *
+ * AGL_GMX_TX_STAT5 = AGL_GMX_TX_STATS_HIST3 (128- 255) / AGL_GMX_TX_STATS_HIST2 (65- 127)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat5 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat5_s {
+		u64 hist3 : 32;
+		u64 hist2 : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat5_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat5_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat5_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat5_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat5_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat5_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat5 cvmx_agl_gmx_txx_stat5_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat6
+ *
+ * AGL_GMX_TX_STAT6 = AGL_GMX_TX_STATS_HIST5 (512-1023) / AGL_GMX_TX_STATS_HIST4 (256-511)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat6 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat6_s {
+		u64 hist5 : 32;
+		u64 hist4 : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat6_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat6_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat6_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat6_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat6_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat6_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat6 cvmx_agl_gmx_txx_stat6_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat7
+ *
+ * AGL_GMX_TX_STAT7 = AGL_GMX_TX_STATS_HIST7 (>1518) / AGL_GMX_TX_STATS_HIST6 (1024-1518)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat7 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat7_s {
+		u64 hist7 : 32;
+		u64 hist6 : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat7_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat7_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat7_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat7_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat7_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat7_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat7 cvmx_agl_gmx_txx_stat7_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat8
+ *
+ * AGL_GMX_TX_STAT8 = AGL_GMX_TX_STATS_MCST  / AGL_GMX_TX_STATS_BCST
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Note, GMX determines if the packet is MCST or BCST from the DMAC of the
+ *   packet.  GMX assumes that the DMAC lies in the first 6 bytes of the packet
+ *   as per the 802.3 frame definition.  If the system requires additional data
+ *   before the L2 header, then the MCST and BCST counters may not reflect
+ *   reality and should be ignored by software.
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat8 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat8_s {
+		u64 mcst : 32;
+		u64 bcst : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat8_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat8_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat8_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat8_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat8_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat8_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat8 cvmx_agl_gmx_txx_stat8_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stat9
+ *
+ * AGL_GMX_TX_STAT9 = AGL_GMX_TX_STATS_UNDFLW / AGL_GMX_TX_STATS_CTL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when AGL_GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Not reset when MIX*_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_txx_stat9 {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stat9_s {
+		u64 undflw : 32;
+		u64 ctl : 32;
+	} s;
+	struct cvmx_agl_gmx_txx_stat9_s cn52xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stat9_s cn56xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stat9_s cn61xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn63xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stat9_s cn66xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn68xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stat9_s cn70xx;
+	struct cvmx_agl_gmx_txx_stat9_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stat9 cvmx_agl_gmx_txx_stat9_t;
+
+/**
+ * cvmx_agl_gmx_tx#_stats_ctl
+ *
+ * AGL_GMX_TX_STATS_CTL = TX Stats Control register
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_stats_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_stats_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 rd_clr : 1;
+	} s;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn52xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn56xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn61xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn63xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn66xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn68xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn70xx;
+	struct cvmx_agl_gmx_txx_stats_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_stats_ctl cvmx_agl_gmx_txx_stats_ctl_t;
+
+/**
+ * cvmx_agl_gmx_tx#_thresh
+ *
+ * AGL_GMX_TX_THRESH = Packet TX Threshold
+ *
+ *
+ * Notes:
+ * Additionally reset when MIX<prt>_CTL[RESET] is set to 1.
+ *
+ */
+union cvmx_agl_gmx_txx_thresh {
+	u64 u64;
+	struct cvmx_agl_gmx_txx_thresh_s {
+		u64 reserved_6_63 : 58;
+		u64 cnt : 6;
+	} s;
+	struct cvmx_agl_gmx_txx_thresh_s cn52xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn52xxp1;
+	struct cvmx_agl_gmx_txx_thresh_s cn56xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn56xxp1;
+	struct cvmx_agl_gmx_txx_thresh_s cn61xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn63xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn63xxp1;
+	struct cvmx_agl_gmx_txx_thresh_s cn66xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn68xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn68xxp1;
+	struct cvmx_agl_gmx_txx_thresh_s cn70xx;
+	struct cvmx_agl_gmx_txx_thresh_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_txx_thresh cvmx_agl_gmx_txx_thresh_t;
+
+/**
+ * cvmx_agl_gmx_tx_bp
+ *
+ * AGL_GMX_TX_BP = Packet TX BackPressure Register
+ *
+ *
+ * Notes:
+ * BP[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * BP[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_tx_bp {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_bp_s {
+		u64 reserved_2_63 : 62;
+		u64 bp : 2;
+	} s;
+	struct cvmx_agl_gmx_tx_bp_s cn52xx;
+	struct cvmx_agl_gmx_tx_bp_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_bp_cn56xx {
+		u64 reserved_1_63 : 63;
+		u64 bp : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_tx_bp_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_tx_bp_s cn61xx;
+	struct cvmx_agl_gmx_tx_bp_s cn63xx;
+	struct cvmx_agl_gmx_tx_bp_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_bp_s cn66xx;
+	struct cvmx_agl_gmx_tx_bp_s cn68xx;
+	struct cvmx_agl_gmx_tx_bp_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_bp_cn56xx cn70xx;
+	struct cvmx_agl_gmx_tx_bp_cn56xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_bp cvmx_agl_gmx_tx_bp_t;
+
+/**
+ * cvmx_agl_gmx_tx_col_attempt
+ *
+ * AGL_GMX_TX_COL_ATTEMPT = Packet TX collision attempts before dropping frame
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ */
+union cvmx_agl_gmx_tx_col_attempt {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_col_attempt_s {
+		u64 reserved_5_63 : 59;
+		u64 limit : 5;
+	} s;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn52xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn56xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn61xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn63xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn66xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn68xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn70xx;
+	struct cvmx_agl_gmx_tx_col_attempt_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_col_attempt cvmx_agl_gmx_tx_col_attempt_t;
+
+/**
+ * cvmx_agl_gmx_tx_ifg
+ *
+ * Common
+ * AGL_GMX_TX_IFG = Packet TX Interframe Gap
+ */
+union cvmx_agl_gmx_tx_ifg {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_ifg_s {
+		u64 reserved_8_63 : 56;
+		u64 ifg2 : 4;
+		u64 ifg1 : 4;
+	} s;
+	struct cvmx_agl_gmx_tx_ifg_s cn52xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_ifg_s cn56xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_ifg_s cn61xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn63xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_ifg_s cn66xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn68xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_ifg_s cn70xx;
+	struct cvmx_agl_gmx_tx_ifg_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_ifg cvmx_agl_gmx_tx_ifg_t;
+
+/**
+ * cvmx_agl_gmx_tx_int_en
+ *
+ * AGL_GMX_TX_INT_EN = Interrupt Enable
+ *
+ *
+ * Notes:
+ * UNDFLW[0], XSCOL[0], XSDEF[0], LATE_COL[0], PTP_LOST[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * UNDFLW[1], XSCOL[1], XSDEF[1], LATE_COL[1], PTP_LOST[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ * PKO_NXA will be reset when both MIX0/1_CTL[RESET] are set to 1.
+ */
+union cvmx_agl_gmx_tx_int_en {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_int_en_s {
+		u64 reserved_22_63 : 42;
+		u64 ptp_lost : 2;
+		u64 reserved_18_19 : 2;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} s;
+	struct cvmx_agl_gmx_tx_int_en_cn52xx {
+		u64 reserved_18_63 : 46;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_tx_int_en_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_tx_int_en_cn56xx {
+		u64 reserved_17_63 : 47;
+		u64 late_col : 1;
+		u64 reserved_13_15 : 3;
+		u64 xsdef : 1;
+		u64 reserved_9_11 : 3;
+		u64 xscol : 1;
+		u64 reserved_3_7 : 5;
+		u64 undflw : 1;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_tx_int_en_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_tx_int_en_s cn61xx;
+	struct cvmx_agl_gmx_tx_int_en_s cn63xx;
+	struct cvmx_agl_gmx_tx_int_en_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_int_en_s cn66xx;
+	struct cvmx_agl_gmx_tx_int_en_s cn68xx;
+	struct cvmx_agl_gmx_tx_int_en_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_int_en_cn70xx {
+		u64 reserved_21_63 : 43;
+		u64 ptp_lost : 1;
+		u64 reserved_17_19 : 3;
+		u64 late_col : 1;
+		u64 reserved_13_15 : 3;
+		u64 xsdef : 1;
+		u64 reserved_9_11 : 3;
+		u64 xscol : 1;
+		u64 reserved_3_7 : 5;
+		u64 undflw : 1;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn70xx;
+	struct cvmx_agl_gmx_tx_int_en_cn70xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_int_en cvmx_agl_gmx_tx_int_en_t;
+
+/**
+ * cvmx_agl_gmx_tx_int_reg
+ *
+ * AGL_GMX_TX_INT_REG = Interrupt Register
+ *
+ *
+ * Notes:
+ * UNDFLW[0], XSCOL[0], XSDEF[0], LATE_COL[0], PTP_LOST[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * UNDFLW[1], XSCOL[1], XSDEF[1], LATE_COL[1], PTP_LOST[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ * PKO_NXA will be reset when both MIX0/1_CTL[RESET] are set to 1.
+ */
+union cvmx_agl_gmx_tx_int_reg {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_int_reg_s {
+		u64 reserved_22_63 : 42;
+		u64 ptp_lost : 2;
+		u64 reserved_18_19 : 2;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} s;
+	struct cvmx_agl_gmx_tx_int_reg_cn52xx {
+		u64 reserved_18_63 : 46;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn52xx;
+	struct cvmx_agl_gmx_tx_int_reg_cn52xx cn52xxp1;
+	struct cvmx_agl_gmx_tx_int_reg_cn56xx {
+		u64 reserved_17_63 : 47;
+		u64 late_col : 1;
+		u64 reserved_13_15 : 3;
+		u64 xsdef : 1;
+		u64 reserved_9_11 : 3;
+		u64 xscol : 1;
+		u64 reserved_3_7 : 5;
+		u64 undflw : 1;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_tx_int_reg_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_tx_int_reg_s cn61xx;
+	struct cvmx_agl_gmx_tx_int_reg_s cn63xx;
+	struct cvmx_agl_gmx_tx_int_reg_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_int_reg_s cn66xx;
+	struct cvmx_agl_gmx_tx_int_reg_s cn68xx;
+	struct cvmx_agl_gmx_tx_int_reg_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_int_reg_cn70xx {
+		u64 reserved_21_63 : 43;
+		u64 ptp_lost : 1;
+		u64 reserved_17_19 : 3;
+		u64 late_col : 1;
+		u64 reserved_13_15 : 3;
+		u64 xsdef : 1;
+		u64 reserved_9_11 : 3;
+		u64 xscol : 1;
+		u64 reserved_3_7 : 5;
+		u64 undflw : 1;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn70xx;
+	struct cvmx_agl_gmx_tx_int_reg_cn70xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_int_reg cvmx_agl_gmx_tx_int_reg_t;
+
+/**
+ * cvmx_agl_gmx_tx_jam
+ *
+ * AGL_GMX_TX_JAM = Packet TX Jam Pattern
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ */
+union cvmx_agl_gmx_tx_jam {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_jam_s {
+		u64 reserved_8_63 : 56;
+		u64 jam : 8;
+	} s;
+	struct cvmx_agl_gmx_tx_jam_s cn52xx;
+	struct cvmx_agl_gmx_tx_jam_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_jam_s cn56xx;
+	struct cvmx_agl_gmx_tx_jam_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_jam_s cn61xx;
+	struct cvmx_agl_gmx_tx_jam_s cn63xx;
+	struct cvmx_agl_gmx_tx_jam_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_jam_s cn66xx;
+	struct cvmx_agl_gmx_tx_jam_s cn68xx;
+	struct cvmx_agl_gmx_tx_jam_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_jam_s cn70xx;
+	struct cvmx_agl_gmx_tx_jam_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_jam cvmx_agl_gmx_tx_jam_t;
+
+/**
+ * cvmx_agl_gmx_tx_lfsr
+ *
+ * AGL_GMX_TX_LFSR = LFSR used to implement truncated binary exponential backoff
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ */
+union cvmx_agl_gmx_tx_lfsr {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_lfsr_s {
+		u64 reserved_16_63 : 48;
+		u64 lfsr : 16;
+	} s;
+	struct cvmx_agl_gmx_tx_lfsr_s cn52xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_lfsr_s cn56xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_lfsr_s cn61xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn63xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_lfsr_s cn66xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn68xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_lfsr_s cn70xx;
+	struct cvmx_agl_gmx_tx_lfsr_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_lfsr cvmx_agl_gmx_tx_lfsr_t;
+
+/**
+ * cvmx_agl_gmx_tx_ovr_bp
+ *
+ * AGL_GMX_TX_OVR_BP = Packet TX Override BackPressure
+ *
+ *
+ * Notes:
+ * IGN_FULL[0], BP[0], EN[0] will be reset when MIX0_CTL[RESET] is set to 1.
+ * IGN_FULL[1], BP[1], EN[1] will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_gmx_tx_ovr_bp {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_ovr_bp_s {
+		u64 reserved_10_63 : 54;
+		u64 en : 2;
+		u64 reserved_6_7 : 2;
+		u64 bp : 2;
+		u64 reserved_2_3 : 2;
+		u64 ign_full : 2;
+	} s;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn52xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_ovr_bp_cn56xx {
+		u64 reserved_9_63 : 55;
+		u64 en : 1;
+		u64 reserved_5_7 : 3;
+		u64 bp : 1;
+		u64 reserved_1_3 : 3;
+		u64 ign_full : 1;
+	} cn56xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_cn56xx cn56xxp1;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn61xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn63xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn66xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn68xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_ovr_bp_cn56xx cn70xx;
+	struct cvmx_agl_gmx_tx_ovr_bp_cn56xx cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_ovr_bp cvmx_agl_gmx_tx_ovr_bp_t;
+
+/**
+ * cvmx_agl_gmx_tx_pause_pkt_dmac
+ *
+ * AGL_GMX_TX_PAUSE_PKT_DMAC = Packet TX Pause Packet DMAC field
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ */
+union cvmx_agl_gmx_tx_pause_pkt_dmac {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 dmac : 48;
+	} s;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn52xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn56xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn61xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn63xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn66xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn68xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn70xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_dmac_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_pause_pkt_dmac cvmx_agl_gmx_tx_pause_pkt_dmac_t;
+
+/**
+ * cvmx_agl_gmx_tx_pause_pkt_type
+ *
+ * AGL_GMX_TX_PAUSE_PKT_TYPE = Packet TX Pause Packet TYPE field
+ *
+ *
+ * Notes:
+ * Additionally reset when both MIX0/1_CTL[RESET] are set to 1.
+ *
+ */
+union cvmx_agl_gmx_tx_pause_pkt_type {
+	u64 u64;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s {
+		u64 reserved_16_63 : 48;
+		u64 type : 16;
+	} s;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn52xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn52xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn56xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn56xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn61xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn63xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn63xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn66xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn68xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn68xxp1;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn70xx;
+	struct cvmx_agl_gmx_tx_pause_pkt_type_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_tx_pause_pkt_type cvmx_agl_gmx_tx_pause_pkt_type_t;
+
+/**
+ * cvmx_agl_gmx_wol_ctl
+ */
+union cvmx_agl_gmx_wol_ctl {
+	u64 u64;
+	struct cvmx_agl_gmx_wol_ctl_s {
+		u64 reserved_33_63 : 31;
+		u64 magic_en : 1;
+		u64 reserved_17_31 : 15;
+		u64 direct_en : 1;
+		u64 reserved_1_15 : 15;
+		u64 en : 1;
+	} s;
+	struct cvmx_agl_gmx_wol_ctl_s cn70xx;
+	struct cvmx_agl_gmx_wol_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_agl_gmx_wol_ctl cvmx_agl_gmx_wol_ctl_t;
+
+/**
+ * cvmx_agl_prt#_ctl
+ *
+ * AGL_PRT_CTL = AGL Port Control
+ *
+ *
+ * Notes:
+ * The RGMII timing specification requires that devices transmit clock and
+ * data synchronously. The specification requires external sources (namely
+ * the PC board trace routes) to introduce the appropriate 1.5 to 2.0 ns of
+ * delay.
+ *
+ * To eliminate the need for the PC board delays, the MIX RGMII interface
+ * has optional onboard DLLs for both transmit and receive. For correct
+ * operation, at most one of the transmitter, board, or receiver involved
+ * in an RGMII link should introduce delay. By default/reset,
+ * the MIX RGMII receivers delay the received clock, and the MIX
+ * RGMII transmitters do not delay the transmitted clock. Whether this
+ * default works as-is with a given link partner depends on the behavior
+ * of the link partner and the PC board.
+ *
+ * These are the possible modes of MIX RGMII receive operation:
+ *  o AGL_PRTx_CTL[CLKRX_BYP] = 0 (reset value) - The OCTEON MIX RGMII
+ *    receive interface introduces clock delay using its internal DLL.
+ *    This mode is appropriate if neither the remote
+ *    transmitter nor the PC board delays the clock.
+ *  o AGL_PRTx_CTL[CLKRX_BYP] = 1, [CLKRX_SET] = 0x0 - The OCTEON MIX
+ *    RGMII receive interface introduces no clock delay. This mode
+ *    is appropriate if either the remote transmitter or the PC board
+ *    delays the clock.
+ *
+ * These are the possible modes of MIX RGMII transmit operation:
+ *  o AGL_PRTx_CTL[CLKTX_BYP] = 1, [CLKTX_SET] = 0x0 (reset value) -
+ *    The OCTEON MIX RGMII transmit interface introduces no clock
+ *    delay. This mode is appropriate if either the remote receiver
+ *    or the PC board delays the clock.
+ *  o AGL_PRTx_CTL[CLKTX_BYP] = 0 - The OCTEON MIX RGMII transmit
+ *    interface introduces clock delay using its internal DLL.
+ *    This mode is appropriate if neither the remote receiver
+ *    nor the PC board delays the clock.
+ *
+ * AGL_PRT0_CTL will be reset when MIX0_CTL[RESET] is set to 1.
+ * AGL_PRT1_CTL will be reset when MIX1_CTL[RESET] is set to 1.
+ */
+union cvmx_agl_prtx_ctl {
+	u64 u64;
+	struct cvmx_agl_prtx_ctl_s {
+		u64 drv_byp : 1;
+		u64 reserved_62_62 : 1;
+		u64 cmp_pctl : 6;
+		u64 reserved_54_55 : 2;
+		u64 cmp_nctl : 6;
+		u64 reserved_46_47 : 2;
+		u64 drv_pctl : 6;
+		u64 reserved_38_39 : 2;
+		u64 drv_nctl : 6;
+		u64 reserved_31_31 : 1;
+		u64 clk_set : 7;
+		u64 clkrx_byp : 1;
+		u64 clkrx_set : 7;
+		u64 clktx_byp : 1;
+		u64 clktx_set : 7;
+		u64 refclk_sel : 2;
+		u64 reserved_5_5 : 1;
+		u64 dllrst : 1;
+		u64 comp : 1;
+		u64 enable : 1;
+		u64 clkrst : 1;
+		u64 mode : 1;
+	} s;
+	struct cvmx_agl_prtx_ctl_cn61xx {
+		u64 drv_byp : 1;
+		u64 reserved_62_62 : 1;
+		u64 cmp_pctl : 6;
+		u64 reserved_54_55 : 2;
+		u64 cmp_nctl : 6;
+		u64 reserved_46_47 : 2;
+		u64 drv_pctl : 6;
+		u64 reserved_38_39 : 2;
+		u64 drv_nctl : 6;
+		u64 reserved_29_31 : 3;
+		u64 clk_set : 5;
+		u64 clkrx_byp : 1;
+		u64 reserved_21_22 : 2;
+		u64 clkrx_set : 5;
+		u64 clktx_byp : 1;
+		u64 reserved_13_14 : 2;
+		u64 clktx_set : 5;
+		u64 reserved_5_7 : 3;
+		u64 dllrst : 1;
+		u64 comp : 1;
+		u64 enable : 1;
+		u64 clkrst : 1;
+		u64 mode : 1;
+	} cn61xx;
+	struct cvmx_agl_prtx_ctl_cn61xx cn63xx;
+	struct cvmx_agl_prtx_ctl_cn61xx cn63xxp1;
+	struct cvmx_agl_prtx_ctl_cn61xx cn66xx;
+	struct cvmx_agl_prtx_ctl_cn61xx cn68xx;
+	struct cvmx_agl_prtx_ctl_cn61xx cn68xxp1;
+	struct cvmx_agl_prtx_ctl_cn70xx {
+		u64 drv_byp : 1;
+		u64 reserved_61_62 : 2;
+		u64 cmp_pctl : 5;
+		u64 reserved_53_55 : 3;
+		u64 cmp_nctl : 5;
+		u64 reserved_45_47 : 3;
+		u64 drv_pctl : 5;
+		u64 reserved_37_39 : 3;
+		u64 drv_nctl : 5;
+		u64 reserved_31_31 : 1;
+		u64 clk_set : 7;
+		u64 clkrx_byp : 1;
+		u64 clkrx_set : 7;
+		u64 clktx_byp : 1;
+		u64 clktx_set : 7;
+		u64 refclk_sel : 2;
+		u64 reserved_5_5 : 1;
+		u64 dllrst : 1;
+		u64 comp : 1;
+		u64 enable : 1;
+		u64 clkrst : 1;
+		u64 mode : 1;
+	} cn70xx;
+	struct cvmx_agl_prtx_ctl_cn70xx cn70xxp1;
+};
+
+typedef union cvmx_agl_prtx_ctl cvmx_agl_prtx_ctl_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread
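
The stats and port-control notes in this header translate into short
accessor sequences. What follows is an illustrative sketch, not part of
the patch: it assumes the csr_rd()/csr_wr() helpers from mach/cvmx-regs.h
and the CVMX_AGL_GMX_TXX_*/CVMX_AGL_PRTX_CTL() address macros defined
earlier in cvmx-agl-defs.h.

	/* Sketch: enable read-to-clear and harvest TX statistics for one
	 * AGL (MIX/RGMII) port. With RD_CLR set, each csr_rd() below also
	 * clears the counter it reads.
	 */
	static void agl_read_tx_stats(int port)
	{
		cvmx_agl_gmx_txx_stats_ctl_t ctl;
		cvmx_agl_gmx_txx_stat0_t stat0;
		cvmx_agl_gmx_txx_stat3_t stat3;

		ctl.u64 = 0;
		ctl.s.rd_clr = 1;
		csr_wr(CVMX_AGL_GMX_TXX_STATS_CTL(port), ctl.u64);

		stat3.u64 = csr_rd(CVMX_AGL_GMX_TXX_STAT3(port));
		printf("port %d: %u packets sent\n", port, (u32)stat3.s.pkts);

		stat0.u64 = csr_rd(CVMX_AGL_GMX_TXX_STAT0(port));
		printf("port %d: xsdef %u, xscol %u\n", port,
		       (u32)stat0.s.xsdef, (u32)stat0.s.xscol);

		/* The counters wrap and survive MIX*_CTL[RESET], so poll
		 * often enough for the 32-bit fields.
		 */
	}

The AGL_PRT#_CTL notes reduce to one rule: exactly one of transmitter,
board, or receiver may delay the RGMII clock. A matching receive-side
sketch, under the same assumptions:

	/* Sketch: select the MIX RGMII receive clock-delay mode. Pass
	 * board_delays_rx_clk = true when the PC board or the link
	 * partner already delays RXC.
	 */
	static void agl_set_rx_clk_delay(int port, bool board_delays_rx_clk)
	{
		cvmx_agl_prtx_ctl_t prt;

		prt.u64 = csr_rd(CVMX_AGL_PRTX_CTL(port));
		if (board_delays_rx_clk) {
			prt.s.clkrx_byp = 1;	/* bypass the internal DLL */
			prt.s.clkrx_set = 0;	/* no on-chip clock delay */
		} else {
			prt.s.clkrx_byp = 0;	/* reset default: DLL delays RXC */
		}
		csr_wr(CVMX_AGL_PRTX_CTL(port), prt.u64);
	}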

* [PATCH v1 04/50] mips: octeon: Add cvmx-asxx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (2 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 03/50] mips: octeon: Add cvmx-agl-defs.h header file Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 05/50] mips: octeon: Add cvmx-bgxx-defs.h " Stefan Roese
                   ` (48 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-asxx-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-asxx-defs.h | 709 ++++++++++++++++++
 1 file changed, 709 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-asxx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-asxx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-asxx-defs.h
new file mode 100644
index 0000000000..2af1a29d63
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-asxx-defs.h
@@ -0,0 +1,709 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon asxx.
+ */
+
+#ifndef __CVMX_ASXX_DEFS_H__
+#define __CVMX_ASXX_DEFS_H__
+
+#define CVMX_ASXX_GMII_RX_CLK_SET(offset)    (0x00011800B0000180ull)
+#define CVMX_ASXX_GMII_RX_DAT_SET(offset)    (0x00011800B0000188ull)
+#define CVMX_ASXX_INT_EN(offset)	     (0x00011800B0000018ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_INT_REG(offset)	     (0x00011800B0000010ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_MII_RX_DAT_SET(offset)     (0x00011800B0000190ull)
+#define CVMX_ASXX_PRT_LOOP(offset)	     (0x00011800B0000040ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_BYPASS(offset)	     (0x00011800B0000248ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_BYPASS_SETTING(offset) (0x00011800B0000250ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_COMP(offset)	     (0x00011800B0000220ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_DATA_DRV(offset)	     (0x00011800B0000218ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_FCRAM_MODE(offset)     (0x00011800B0000210ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_NCTL_STRONG(offset)    (0x00011800B0000230ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_NCTL_WEAK(offset)	     (0x00011800B0000240ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_PCTL_STRONG(offset)    (0x00011800B0000228ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_PCTL_WEAK(offset)	     (0x00011800B0000238ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RLD_SETTING(offset)	     (0x00011800B0000258ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RX_CLK_SETX(offset, block_id)                                                    \
+	(0x00011800B0000020ull + (((offset) & 3) + ((block_id) & 1) * 0x1000000ull) * 8)
+#define CVMX_ASXX_RX_PRT_EN(offset)    (0x00011800B0000000ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RX_WOL(offset)       (0x00011800B0000100ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RX_WOL_MSK(offset)   (0x00011800B0000108ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RX_WOL_POWOK(offset) (0x00011800B0000118ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_RX_WOL_SIG(offset)   (0x00011800B0000110ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_TX_CLK_SETX(offset, block_id)                                                    \
+	(0x00011800B0000048ull + (((offset) & 3) + ((block_id) & 1) * 0x1000000ull) * 8)
+#define CVMX_ASXX_TX_COMP_BYP(offset) (0x00011800B0000068ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_ASXX_TX_HI_WATERX(offset, block_id)                                                   \
+	(0x00011800B0000080ull + (((offset) & 3) + ((block_id) & 1) * 0x1000000ull) * 8)
+#define CVMX_ASXX_TX_PRT_EN(offset) (0x00011800B0000008ull + ((offset) & 1) * 0x8000000ull)
+
+/**
+ * cvmx_asx#_gmii_rx_clk_set
+ *
+ * ASX_GMII_RX_CLK_SET = GMII Clock delay setting
+ *
+ */
+union cvmx_asxx_gmii_rx_clk_set {
+	u64 u64;
+	struct cvmx_asxx_gmii_rx_clk_set_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_gmii_rx_clk_set_s cn30xx;
+	struct cvmx_asxx_gmii_rx_clk_set_s cn31xx;
+	struct cvmx_asxx_gmii_rx_clk_set_s cn50xx;
+};
+
+typedef union cvmx_asxx_gmii_rx_clk_set cvmx_asxx_gmii_rx_clk_set_t;
+
+/**
+ * cvmx_asx#_gmii_rx_dat_set
+ *
+ * ASX_GMII_RX_DAT_SET = GMII Data delay setting
+ *
+ */
+union cvmx_asxx_gmii_rx_dat_set {
+	u64 u64;
+	struct cvmx_asxx_gmii_rx_dat_set_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_gmii_rx_dat_set_s cn30xx;
+	struct cvmx_asxx_gmii_rx_dat_set_s cn31xx;
+	struct cvmx_asxx_gmii_rx_dat_set_s cn50xx;
+};
+
+typedef union cvmx_asxx_gmii_rx_dat_set cvmx_asxx_gmii_rx_dat_set_t;
+
+/**
+ * cvmx_asx#_int_en
+ *
+ * ASX_INT_EN = Interrupt Enable
+ *
+ */
+union cvmx_asxx_int_en {
+	u64 u64;
+	struct cvmx_asxx_int_en_s {
+		u64 reserved_12_63 : 52;
+		u64 txpsh : 4;
+		u64 txpop : 4;
+		u64 ovrflw : 4;
+	} s;
+	struct cvmx_asxx_int_en_cn30xx {
+		u64 reserved_11_63 : 53;
+		u64 txpsh : 3;
+		u64 reserved_7_7 : 1;
+		u64 txpop : 3;
+		u64 reserved_3_3 : 1;
+		u64 ovrflw : 3;
+	} cn30xx;
+	struct cvmx_asxx_int_en_cn30xx cn31xx;
+	struct cvmx_asxx_int_en_s cn38xx;
+	struct cvmx_asxx_int_en_s cn38xxp2;
+	struct cvmx_asxx_int_en_cn30xx cn50xx;
+	struct cvmx_asxx_int_en_s cn58xx;
+	struct cvmx_asxx_int_en_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_int_en cvmx_asxx_int_en_t;
+
+/**
+ * cvmx_asx#_int_reg
+ *
+ * ASX_INT_REG = Interrupt Register
+ *
+ */
+union cvmx_asxx_int_reg {
+	u64 u64;
+	struct cvmx_asxx_int_reg_s {
+		u64 reserved_12_63 : 52;
+		u64 txpsh : 4;
+		u64 txpop : 4;
+		u64 ovrflw : 4;
+	} s;
+	struct cvmx_asxx_int_reg_cn30xx {
+		u64 reserved_11_63 : 53;
+		u64 txpsh : 3;
+		u64 reserved_7_7 : 1;
+		u64 txpop : 3;
+		u64 reserved_3_3 : 1;
+		u64 ovrflw : 3;
+	} cn30xx;
+	struct cvmx_asxx_int_reg_cn30xx cn31xx;
+	struct cvmx_asxx_int_reg_s cn38xx;
+	struct cvmx_asxx_int_reg_s cn38xxp2;
+	struct cvmx_asxx_int_reg_cn30xx cn50xx;
+	struct cvmx_asxx_int_reg_s cn58xx;
+	struct cvmx_asxx_int_reg_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_int_reg cvmx_asxx_int_reg_t;
+
+/**
+ * cvmx_asx#_mii_rx_dat_set
+ *
+ * ASX_MII_RX_DAT_SET = MII Data delay setting
+ *
+ */
+union cvmx_asxx_mii_rx_dat_set {
+	u64 u64;
+	struct cvmx_asxx_mii_rx_dat_set_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_mii_rx_dat_set_s cn30xx;
+	struct cvmx_asxx_mii_rx_dat_set_s cn50xx;
+};
+
+typedef union cvmx_asxx_mii_rx_dat_set cvmx_asxx_mii_rx_dat_set_t;
+
+/**
+ * cvmx_asx#_prt_loop
+ *
+ * ASX_PRT_LOOP = Internal Loopback mode - TX FIFO output goes into RX FIFO (and maybe pins)
+ *
+ */
+union cvmx_asxx_prt_loop {
+	u64 u64;
+	struct cvmx_asxx_prt_loop_s {
+		u64 reserved_8_63 : 56;
+		u64 ext_loop : 4;
+		u64 int_loop : 4;
+	} s;
+	struct cvmx_asxx_prt_loop_cn30xx {
+		u64 reserved_7_63 : 57;
+		u64 ext_loop : 3;
+		u64 reserved_3_3 : 1;
+		u64 int_loop : 3;
+	} cn30xx;
+	struct cvmx_asxx_prt_loop_cn30xx cn31xx;
+	struct cvmx_asxx_prt_loop_s cn38xx;
+	struct cvmx_asxx_prt_loop_s cn38xxp2;
+	struct cvmx_asxx_prt_loop_cn30xx cn50xx;
+	struct cvmx_asxx_prt_loop_s cn58xx;
+	struct cvmx_asxx_prt_loop_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_prt_loop cvmx_asxx_prt_loop_t;
+
+/**
+ * cvmx_asx#_rld_bypass
+ *
+ * ASX_RLD_BYPASS
+ *
+ */
+union cvmx_asxx_rld_bypass {
+	u64 u64;
+	struct cvmx_asxx_rld_bypass_s {
+		u64 reserved_1_63 : 63;
+		u64 bypass : 1;
+	} s;
+	struct cvmx_asxx_rld_bypass_s cn38xx;
+	struct cvmx_asxx_rld_bypass_s cn38xxp2;
+	struct cvmx_asxx_rld_bypass_s cn58xx;
+	struct cvmx_asxx_rld_bypass_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_bypass cvmx_asxx_rld_bypass_t;
+
+/**
+ * cvmx_asx#_rld_bypass_setting
+ *
+ * ASX_RLD_BYPASS_SETTING
+ *
+ */
+union cvmx_asxx_rld_bypass_setting {
+	u64 u64;
+	struct cvmx_asxx_rld_bypass_setting_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_rld_bypass_setting_s cn38xx;
+	struct cvmx_asxx_rld_bypass_setting_s cn38xxp2;
+	struct cvmx_asxx_rld_bypass_setting_s cn58xx;
+	struct cvmx_asxx_rld_bypass_setting_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_bypass_setting cvmx_asxx_rld_bypass_setting_t;
+
+/**
+ * cvmx_asx#_rld_comp
+ *
+ * ASX_RLD_COMP
+ *
+ */
+union cvmx_asxx_rld_comp {
+	u64 u64;
+	struct cvmx_asxx_rld_comp_s {
+		u64 reserved_9_63 : 55;
+		u64 pctl : 5;
+		u64 nctl : 4;
+	} s;
+	struct cvmx_asxx_rld_comp_cn38xx {
+		u64 reserved_8_63 : 56;
+		u64 pctl : 4;
+		u64 nctl : 4;
+	} cn38xx;
+	struct cvmx_asxx_rld_comp_cn38xx cn38xxp2;
+	struct cvmx_asxx_rld_comp_s cn58xx;
+	struct cvmx_asxx_rld_comp_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_comp cvmx_asxx_rld_comp_t;
+
+/**
+ * cvmx_asx#_rld_data_drv
+ *
+ * ASX_RLD_DATA_DRV
+ *
+ */
+union cvmx_asxx_rld_data_drv {
+	u64 u64;
+	struct cvmx_asxx_rld_data_drv_s {
+		u64 reserved_8_63 : 56;
+		u64 pctl : 4;
+		u64 nctl : 4;
+	} s;
+	struct cvmx_asxx_rld_data_drv_s cn38xx;
+	struct cvmx_asxx_rld_data_drv_s cn38xxp2;
+	struct cvmx_asxx_rld_data_drv_s cn58xx;
+	struct cvmx_asxx_rld_data_drv_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_data_drv cvmx_asxx_rld_data_drv_t;
+
+/**
+ * cvmx_asx#_rld_fcram_mode
+ *
+ * ASX_RLD_FCRAM_MODE
+ *
+ */
+union cvmx_asxx_rld_fcram_mode {
+	u64 u64;
+	struct cvmx_asxx_rld_fcram_mode_s {
+		u64 reserved_1_63 : 63;
+		u64 mode : 1;
+	} s;
+	struct cvmx_asxx_rld_fcram_mode_s cn38xx;
+	struct cvmx_asxx_rld_fcram_mode_s cn38xxp2;
+};
+
+typedef union cvmx_asxx_rld_fcram_mode cvmx_asxx_rld_fcram_mode_t;
+
+/**
+ * cvmx_asx#_rld_nctl_strong
+ *
+ * ASX_RLD_NCTL_STRONG
+ *
+ */
+union cvmx_asxx_rld_nctl_strong {
+	u64 u64;
+	struct cvmx_asxx_rld_nctl_strong_s {
+		u64 reserved_5_63 : 59;
+		u64 nctl : 5;
+	} s;
+	struct cvmx_asxx_rld_nctl_strong_s cn38xx;
+	struct cvmx_asxx_rld_nctl_strong_s cn38xxp2;
+	struct cvmx_asxx_rld_nctl_strong_s cn58xx;
+	struct cvmx_asxx_rld_nctl_strong_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_nctl_strong cvmx_asxx_rld_nctl_strong_t;
+
+/**
+ * cvmx_asx#_rld_nctl_weak
+ *
+ * ASX_RLD_NCTL_WEAK
+ *
+ */
+union cvmx_asxx_rld_nctl_weak {
+	u64 u64;
+	struct cvmx_asxx_rld_nctl_weak_s {
+		u64 reserved_5_63 : 59;
+		u64 nctl : 5;
+	} s;
+	struct cvmx_asxx_rld_nctl_weak_s cn38xx;
+	struct cvmx_asxx_rld_nctl_weak_s cn38xxp2;
+	struct cvmx_asxx_rld_nctl_weak_s cn58xx;
+	struct cvmx_asxx_rld_nctl_weak_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_nctl_weak cvmx_asxx_rld_nctl_weak_t;
+
+/**
+ * cvmx_asx#_rld_pctl_strong
+ *
+ * ASX_RLD_PCTL_STRONG
+ *
+ */
+union cvmx_asxx_rld_pctl_strong {
+	u64 u64;
+	struct cvmx_asxx_rld_pctl_strong_s {
+		u64 reserved_5_63 : 59;
+		u64 pctl : 5;
+	} s;
+	struct cvmx_asxx_rld_pctl_strong_s cn38xx;
+	struct cvmx_asxx_rld_pctl_strong_s cn38xxp2;
+	struct cvmx_asxx_rld_pctl_strong_s cn58xx;
+	struct cvmx_asxx_rld_pctl_strong_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_pctl_strong cvmx_asxx_rld_pctl_strong_t;
+
+/**
+ * cvmx_asx#_rld_pctl_weak
+ *
+ * ASX_RLD_PCTL_WEAK
+ *
+ */
+union cvmx_asxx_rld_pctl_weak {
+	u64 u64;
+	struct cvmx_asxx_rld_pctl_weak_s {
+		u64 reserved_5_63 : 59;
+		u64 pctl : 5;
+	} s;
+	struct cvmx_asxx_rld_pctl_weak_s cn38xx;
+	struct cvmx_asxx_rld_pctl_weak_s cn38xxp2;
+	struct cvmx_asxx_rld_pctl_weak_s cn58xx;
+	struct cvmx_asxx_rld_pctl_weak_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_pctl_weak cvmx_asxx_rld_pctl_weak_t;
+
+/**
+ * cvmx_asx#_rld_setting
+ *
+ * ASX_RLD_SETTING
+ *
+ */
+union cvmx_asxx_rld_setting {
+	u64 u64;
+	struct cvmx_asxx_rld_setting_s {
+		u64 reserved_13_63 : 51;
+		u64 dfaset : 5;
+		u64 dfalag : 1;
+		u64 dfalead : 1;
+		u64 dfalock : 1;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_rld_setting_cn38xx {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} cn38xx;
+	struct cvmx_asxx_rld_setting_cn38xx cn38xxp2;
+	struct cvmx_asxx_rld_setting_s cn58xx;
+	struct cvmx_asxx_rld_setting_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rld_setting cvmx_asxx_rld_setting_t;
+
+/**
+ * cvmx_asx#_rx_clk_set#
+ *
+ * ASX_RX_CLK_SET = RGMII Clock delay setting
+ *
+ *
+ * Notes:
+ * Setting to place on the open-loop RXC (RGMII receive clk)
+ * delay line, which can delay the received clock. This
+ * can be used if the board and/or transmitting device
+ * has not otherwise delayed the clock.
+ *
+ * A value of SETTING=0 disables the delay line. The delay
+ * line should be disabled unless the transmitter or board
+ * does not delay the clock.
+ *
+ * Note that this delay line provides only a coarse control
+ * over the delay. Generally, it can only reliably provide
+ * a delay in the range 1.25-2.5ns, which may not be adequate
+ * for some system applications.
+ *
+ * The open loop delay line selects
+ * from among a series of tap positions. Each incremental
+ * tap position adds a delay of 50ps to 135ps per tap, depending
+ * on the chip, its temperature, and the voltage.
+ * To achieve from 1.25-2.5ns of delay on the received
+ * clock, a fixed value of SETTING=24 may work.
+ * For more precision, we recommend the following settings
+ * based on the chip voltage:
+ *
+ *    VDD           SETTING
+ *  -----------------------------
+ *    1.0             18
+ *    1.05            19
+ *    1.1             21
+ *    1.15            22
+ *    1.2             23
+ *    1.25            24
+ *    1.3             25
+ */
+union cvmx_asxx_rx_clk_setx {
+	u64 u64;
+	struct cvmx_asxx_rx_clk_setx_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_rx_clk_setx_s cn30xx;
+	struct cvmx_asxx_rx_clk_setx_s cn31xx;
+	struct cvmx_asxx_rx_clk_setx_s cn38xx;
+	struct cvmx_asxx_rx_clk_setx_s cn38xxp2;
+	struct cvmx_asxx_rx_clk_setx_s cn50xx;
+	struct cvmx_asxx_rx_clk_setx_s cn58xx;
+	struct cvmx_asxx_rx_clk_setx_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rx_clk_setx cvmx_asxx_rx_clk_setx_t;
+
+/**
+ * cvmx_asx#_rx_prt_en
+ *
+ * ASX_RX_PRT_EN = RGMII Port Enable
+ *
+ */
+union cvmx_asxx_rx_prt_en {
+	u64 u64;
+	struct cvmx_asxx_rx_prt_en_s {
+		u64 reserved_4_63 : 60;
+		u64 prt_en : 4;
+	} s;
+	struct cvmx_asxx_rx_prt_en_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 prt_en : 3;
+	} cn30xx;
+	struct cvmx_asxx_rx_prt_en_cn30xx cn31xx;
+	struct cvmx_asxx_rx_prt_en_s cn38xx;
+	struct cvmx_asxx_rx_prt_en_s cn38xxp2;
+	struct cvmx_asxx_rx_prt_en_cn30xx cn50xx;
+	struct cvmx_asxx_rx_prt_en_s cn58xx;
+	struct cvmx_asxx_rx_prt_en_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_rx_prt_en cvmx_asxx_rx_prt_en_t;
+
+/**
+ * cvmx_asx#_rx_wol
+ *
+ * ASX_RX_WOL = RGMII RX Wake on LAN status register
+ *
+ */
+union cvmx_asxx_rx_wol {
+	u64 u64;
+	struct cvmx_asxx_rx_wol_s {
+		u64 reserved_2_63 : 62;
+		u64 status : 1;
+		u64 enable : 1;
+	} s;
+	struct cvmx_asxx_rx_wol_s cn38xx;
+	struct cvmx_asxx_rx_wol_s cn38xxp2;
+};
+
+typedef union cvmx_asxx_rx_wol cvmx_asxx_rx_wol_t;
+
+/**
+ * cvmx_asx#_rx_wol_msk
+ *
+ * ASX_RX_WOL_MSK = RGMII RX Wake on LAN byte mask
+ *
+ */
+union cvmx_asxx_rx_wol_msk {
+	u64 u64;
+	struct cvmx_asxx_rx_wol_msk_s {
+		u64 msk : 64;
+	} s;
+	struct cvmx_asxx_rx_wol_msk_s cn38xx;
+	struct cvmx_asxx_rx_wol_msk_s cn38xxp2;
+};
+
+typedef union cvmx_asxx_rx_wol_msk cvmx_asxx_rx_wol_msk_t;
+
+/**
+ * cvmx_asx#_rx_wol_powok
+ *
+ * ASX_RX_WOL_POWOK = RGMII RX Wake on LAN Power OK
+ *
+ */
+union cvmx_asxx_rx_wol_powok {
+	u64 u64;
+	struct cvmx_asxx_rx_wol_powok_s {
+		u64 reserved_1_63 : 63;
+		u64 powerok : 1;
+	} s;
+	struct cvmx_asxx_rx_wol_powok_s cn38xx;
+	struct cvmx_asxx_rx_wol_powok_s cn38xxp2;
+};
+
+typedef union cvmx_asxx_rx_wol_powok cvmx_asxx_rx_wol_powok_t;
+
+/**
+ * cvmx_asx#_rx_wol_sig
+ *
+ * ASX_RX_WOL_SIG = RGMII RX Wake on LAN CRC signature
+ *
+ */
+union cvmx_asxx_rx_wol_sig {
+	u64 u64;
+	struct cvmx_asxx_rx_wol_sig_s {
+		u64 reserved_32_63 : 32;
+		u64 sig : 32;
+	} s;
+	struct cvmx_asxx_rx_wol_sig_s cn38xx;
+	struct cvmx_asxx_rx_wol_sig_s cn38xxp2;
+};
+
+typedef union cvmx_asxx_rx_wol_sig cvmx_asxx_rx_wol_sig_t;
+
+/**
+ * cvmx_asx#_tx_clk_set#
+ *
+ * ASX_TX_CLK_SET = RGMII Clock delay setting
+ *
+ *
+ * Notes:
+ * Setting to place on the open-loop TXC (RGMII transmit clk)
+ * delay line, which can delay the transmitted clock. This
+ * can be used if the board and/or receiving device
+ * has not otherwise delayed the clock.
+ *
+ * A value of SETTING=0 disables the delay line. The delay
+ * line should be disabled unless the receiver or board
+ * does not delay the clock.
+ *
+ * Note that this delay line provides only a coarse control
+ * over the delay. Generally, it can only reliably provide
+ * a delay in the range 1.25-2.5ns, which may not be adequate
+ * for some system applications.
+ *
+ * The open loop delay line selects
+ * from among a series of tap positions. Each incremental
+ * tap position adds a delay of 50ps to 135ps per tap, depending
+ * on the chip, its temperature, and the voltage.
+ * To achieve from 1.25-2.5ns of delay on the transmitted
+ * clock, a fixed value of SETTING=24 may work.
+ * For more precision, we recommend the following settings
+ * based on the chip voltage:
+ *
+ *    VDD           SETTING
+ *  -----------------------------
+ *    1.0             18
+ *    1.05            19
+ *    1.1             21
+ *    1.15            22
+ *    1.2             23
+ *    1.25            24
+ *    1.3             25
+ */
+union cvmx_asxx_tx_clk_setx {
+	u64 u64;
+	struct cvmx_asxx_tx_clk_setx_s {
+		u64 reserved_5_63 : 59;
+		u64 setting : 5;
+	} s;
+	struct cvmx_asxx_tx_clk_setx_s cn30xx;
+	struct cvmx_asxx_tx_clk_setx_s cn31xx;
+	struct cvmx_asxx_tx_clk_setx_s cn38xx;
+	struct cvmx_asxx_tx_clk_setx_s cn38xxp2;
+	struct cvmx_asxx_tx_clk_setx_s cn50xx;
+	struct cvmx_asxx_tx_clk_setx_s cn58xx;
+	struct cvmx_asxx_tx_clk_setx_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_tx_clk_setx cvmx_asxx_tx_clk_setx_t;
+
+/**
+ * cvmx_asx#_tx_comp_byp
+ *
+ * ASX_TX_COMP_BYP = RGMII TX Compensation Bypass
+ *
+ */
+union cvmx_asxx_tx_comp_byp {
+	u64 u64;
+	struct cvmx_asxx_tx_comp_byp_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_asxx_tx_comp_byp_cn30xx {
+		u64 reserved_9_63 : 55;
+		u64 bypass : 1;
+		u64 pctl : 4;
+		u64 nctl : 4;
+	} cn30xx;
+	struct cvmx_asxx_tx_comp_byp_cn30xx cn31xx;
+	struct cvmx_asxx_tx_comp_byp_cn38xx {
+		u64 reserved_8_63 : 56;
+		u64 pctl : 4;
+		u64 nctl : 4;
+	} cn38xx;
+	struct cvmx_asxx_tx_comp_byp_cn38xx cn38xxp2;
+	struct cvmx_asxx_tx_comp_byp_cn50xx {
+		u64 reserved_17_63 : 47;
+		u64 bypass : 1;
+		u64 reserved_13_15 : 3;
+		u64 pctl : 5;
+		u64 reserved_5_7 : 3;
+		u64 nctl : 5;
+	} cn50xx;
+	struct cvmx_asxx_tx_comp_byp_cn58xx {
+		u64 reserved_13_63 : 51;
+		u64 pctl : 5;
+		u64 reserved_5_7 : 3;
+		u64 nctl : 5;
+	} cn58xx;
+	struct cvmx_asxx_tx_comp_byp_cn58xx cn58xxp1;
+};
+
+typedef union cvmx_asxx_tx_comp_byp cvmx_asxx_tx_comp_byp_t;
+
+/**
+ * cvmx_asx#_tx_hi_water#
+ *
+ * ASX_TX_HI_WATER = RGMII TX FIFO Hi WaterMark
+ *
+ */
+union cvmx_asxx_tx_hi_waterx {
+	u64 u64;
+	struct cvmx_asxx_tx_hi_waterx_s {
+		u64 reserved_4_63 : 60;
+		u64 mark : 4;
+	} s;
+	struct cvmx_asxx_tx_hi_waterx_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 mark : 3;
+	} cn30xx;
+	struct cvmx_asxx_tx_hi_waterx_cn30xx cn31xx;
+	struct cvmx_asxx_tx_hi_waterx_s cn38xx;
+	struct cvmx_asxx_tx_hi_waterx_s cn38xxp2;
+	struct cvmx_asxx_tx_hi_waterx_cn30xx cn50xx;
+	struct cvmx_asxx_tx_hi_waterx_s cn58xx;
+	struct cvmx_asxx_tx_hi_waterx_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_tx_hi_waterx cvmx_asxx_tx_hi_waterx_t;
+
+/**
+ * cvmx_asx#_tx_prt_en
+ *
+ * ASX_TX_PRT_EN = RGMII Port Enable
+ *
+ */
+union cvmx_asxx_tx_prt_en {
+	u64 u64;
+	struct cvmx_asxx_tx_prt_en_s {
+		u64 reserved_4_63 : 60;
+		u64 prt_en : 4;
+	} s;
+	struct cvmx_asxx_tx_prt_en_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 prt_en : 3;
+	} cn30xx;
+	struct cvmx_asxx_tx_prt_en_cn30xx cn31xx;
+	struct cvmx_asxx_tx_prt_en_s cn38xx;
+	struct cvmx_asxx_tx_prt_en_s cn38xxp2;
+	struct cvmx_asxx_tx_prt_en_cn30xx cn50xx;
+	struct cvmx_asxx_tx_prt_en_s cn58xx;
+	struct cvmx_asxx_tx_prt_en_s cn58xxp1;
+};
+
+typedef union cvmx_asxx_tx_prt_en cvmx_asxx_tx_prt_en_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread
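
The voltage table in the ASX_RX_CLK_SET#/ASX_TX_CLK_SET# notes maps
directly onto a small lookup. An illustrative sketch, not part of the
patch, assuming the csr_wr() helper from mach/cvmx-regs.h; the
millivolt thresholds merely encode the table above, and
CVMX_ASXX_RX_CLK_SETX() is the address macro from this header:

	/* Sketch: derive the coarse RXC delay-line SETTING from the core
	 * voltage (in millivolts) per the ASX_RX_CLK_SET notes and program
	 * one RGMII port. Each tap adds roughly 50 ps to 135 ps;
	 * SETTING = 0 disables the delay line entirely.
	 */
	static void asx_set_rx_clk_delay(int interface, int port, int vdd_mv)
	{
		int setting;

		if (vdd_mv <= 1000)
			setting = 18;
		else if (vdd_mv <= 1050)
			setting = 19;
		else if (vdd_mv <= 1100)
			setting = 21;
		else if (vdd_mv <= 1150)
			setting = 22;
		else if (vdd_mv <= 1200)
			setting = 23;
		else if (vdd_mv <= 1250)
			setting = 24;
		else
			setting = 25;

		csr_wr(CVMX_ASXX_RX_CLK_SETX(port, interface), (u64)setting);
	}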

* [PATCH v1 05/50] mips: octeon: Add cvmx-bgxx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (3 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 04/50] mips: octeon: Add cvmx-asxx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 06/50] mips: octeon: Add cvmx-ciu-defs.h " Stefan Roese
                   ` (47 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-bgxx-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-bgxx-defs.h | 4106 +++++++++++++++++
 1 file changed, 4106 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-bgxx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-bgxx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-bgxx-defs.h
new file mode 100644
index 0000000000..7bcf805827
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-bgxx-defs.h
@@ -0,0 +1,4106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon bgxx.
+ */
+
+#ifndef __CVMX_BGXX_DEFS_H__
+#define __CVMX_BGXX_DEFS_H__
+
+#define CVMX_BGXX_CMRX_CONFIG(offset, block_id)                                                    \
+	(0x00011800E0000000ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_INT(offset, block_id)                                                       \
+	(0x00011800E0000020ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_PRT_CBFC_CTL(offset, block_id)                                              \
+	(0x00011800E0000408ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_ADR_CTL(offset, block_id)                                                \
+	(0x00011800E00000A0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_BP_DROP(offset, block_id)                                                \
+	(0x00011800E0000080ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_BP_OFF(offset, block_id)                                                 \
+	(0x00011800E0000090ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_BP_ON(offset, block_id)                                                  \
+	(0x00011800E0000088ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_BP_STATUS(offset, block_id)                                              \
+	(0x00011800E00000A8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_FIFO_LEN(offset, block_id)                                               \
+	(0x00011800E00000C0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_ID_MAP(offset, block_id)                                                 \
+	(0x00011800E0000028ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_LOGL_XOFF(offset, block_id)                                              \
+	(0x00011800E00000B0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_LOGL_XON(offset, block_id)                                               \
+	(0x00011800E00000B8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_PAUSE_DROP_TIME(offset, block_id)                                        \
+	(0x00011800E0000030ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT0(offset, block_id)                                                  \
+	(0x00011800E0000038ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT1(offset, block_id)                                                  \
+	(0x00011800E0000040ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT2(offset, block_id)                                                  \
+	(0x00011800E0000048ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT3(offset, block_id)                                                  \
+	(0x00011800E0000050ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT4(offset, block_id)                                                  \
+	(0x00011800E0000058ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT5(offset, block_id)                                                  \
+	(0x00011800E0000060ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT6(offset, block_id)                                                  \
+	(0x00011800E0000068ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT7(offset, block_id)                                                  \
+	(0x00011800E0000070ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_STAT8(offset, block_id)                                                  \
+	(0x00011800E0000078ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_RX_WEIGHT(offset, block_id)                                                 \
+	(0x00011800E0000098ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_CHANNEL(offset, block_id)                                                \
+	(0x00011800E0000400ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_FIFO_LEN(offset, block_id)                                               \
+	(0x00011800E0000418ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_HG2_STATUS(offset, block_id)                                             \
+	(0x00011800E0000410ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_OVR_BP(offset, block_id)                                                 \
+	(0x00011800E0000420ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT0(offset, block_id)                                                  \
+	(0x00011800E0000508ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT1(offset, block_id)                                                  \
+	(0x00011800E0000510ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT10(offset, block_id)                                                 \
+	(0x00011800E0000558ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT11(offset, block_id)                                                 \
+	(0x00011800E0000560ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT12(offset, block_id)                                                 \
+	(0x00011800E0000568ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT13(offset, block_id)                                                 \
+	(0x00011800E0000570ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT14(offset, block_id)                                                 \
+	(0x00011800E0000578ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT15(offset, block_id)                                                 \
+	(0x00011800E0000580ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT16(offset, block_id)                                                 \
+	(0x00011800E0000588ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT17(offset, block_id)                                                 \
+	(0x00011800E0000590ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT2(offset, block_id)                                                  \
+	(0x00011800E0000518ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT3(offset, block_id)                                                  \
+	(0x00011800E0000520ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT4(offset, block_id)                                                  \
+	(0x00011800E0000528ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT5(offset, block_id)                                                  \
+	(0x00011800E0000530ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT6(offset, block_id)                                                  \
+	(0x00011800E0000538ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT7(offset, block_id)                                                  \
+	(0x00011800E0000540ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT8(offset, block_id)                                                  \
+	(0x00011800E0000548ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMRX_TX_STAT9(offset, block_id)                                                  \
+	(0x00011800E0000550ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_CMR_BAD(offset)	    (0x00011800E0001020ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_BIST_STATUS(offset)   (0x00011800E0000300ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_CHAN_MSK_AND(offset)  (0x00011800E0000200ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_CHAN_MSK_OR(offset)   (0x00011800E0000208ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_ECO(offset)	    (0x00011800E0001028ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_GLOBAL_CONFIG(offset) (0x00011800E0000008ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_MEM_CTRL(offset)	    (0x00011800E0000018ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_MEM_INT(offset)	    (0x00011800E0000010ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_NXC_ADR(offset)	    (0x00011800E0001018ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_RX_ADRX_CAM(offset, block_id)                                                \
+	(0x00011800E0000100ull + (((offset) & 31) + ((block_id) & 7) * 0x200000ull) * 8)
+#define CVMX_BGXX_CMR_RX_LMACS(offset)	(0x00011800E0000308ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_RX_OVR_BP(offset) (0x00011800E0000318ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_CMR_TX_LMACS(offset)	(0x00011800E0001000ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_PRTX_CFG(offset, block_id)                                               \
+	(0x00011800E0038010ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_DECISION(offset, block_id)                                           \
+	(0x00011800E0038040ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_FRM_CHK(offset, block_id)                                            \
+	(0x00011800E0038020ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_FRM_CTL(offset, block_id)                                            \
+	(0x00011800E0038018ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_IFG(offset, block_id)                                                \
+	(0x00011800E0038058ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_INT(offset, block_id)                                                \
+	(0x00011800E0038000ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_JABBER(offset, block_id)                                             \
+	(0x00011800E0038038ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_RXX_UDD_SKP(offset, block_id)                                            \
+	(0x00011800E0038048ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_SMACX(offset, block_id)                                                  \
+	(0x00011800E0038230ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_APPEND(offset, block_id)                                             \
+	(0x00011800E0038218ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_BURST(offset, block_id)                                              \
+	(0x00011800E0038228ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_CTL(offset, block_id)                                                \
+	(0x00011800E0038270ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_INT(offset, block_id)                                                \
+	(0x00011800E0038500ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_MIN_PKT(offset, block_id)                                            \
+	(0x00011800E0038240ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_PAUSE_PKT_INTERVAL(offset, block_id)                                 \
+	(0x00011800E0038248ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_PAUSE_PKT_TIME(offset, block_id)                                     \
+	(0x00011800E0038238ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_PAUSE_TOGO(offset, block_id)                                         \
+	(0x00011800E0038258ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_PAUSE_ZERO(offset, block_id)                                         \
+	(0x00011800E0038260ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_SGMII_CTL(offset, block_id)                                          \
+	(0x00011800E0038300ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_SLOT(offset, block_id)                                               \
+	(0x00011800E0038220ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_SOFT_PAUSE(offset, block_id)                                         \
+	(0x00011800E0038250ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TXX_THRESH(offset, block_id)                                             \
+	(0x00011800E0038210ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_GMI_TX_COL_ATTEMPT(offset)                                                   \
+	(0x00011800E0039010ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_TX_IFG(offset)  (0x00011800E0039000ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_TX_JAM(offset)  (0x00011800E0039008ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_TX_LFSR(offset) (0x00011800E0039028ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_TX_PAUSE_PKT_DMAC(offset)                                                \
+	(0x00011800E0039018ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_GMI_TX_PAUSE_PKT_TYPE(offset)                                                \
+	(0x00011800E0039020ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_GMP_PCS_ANX_ADV(offset, block_id)                                                \
+	(0x00011800E0030010ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_ANX_EXT_ST(offset, block_id)                                             \
+	(0x00011800E0030028ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_ANX_LP_ABIL(offset, block_id)                                            \
+	(0x00011800E0030018ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_ANX_RESULTS(offset, block_id)                                            \
+	(0x00011800E0030020ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_INTX(offset, block_id)                                                   \
+	(0x00011800E0030080ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_LINKX_TIMER(offset, block_id)                                            \
+	(0x00011800E0030040ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_MISCX_CTL(offset, block_id)                                              \
+	(0x00011800E0030078ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_MRX_CONTROL(offset, block_id)                                            \
+	(0x00011800E0030000ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_MRX_STATUS(offset, block_id)                                             \
+	(0x00011800E0030008ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_RXX_STATES(offset, block_id)                                             \
+	(0x00011800E0030058ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_RXX_SYNC(offset, block_id)                                               \
+	(0x00011800E0030050ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_SGMX_AN_ADV(offset, block_id)                                            \
+	(0x00011800E0030068ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_SGMX_LP_ADV(offset, block_id)                                            \
+	(0x00011800E0030070ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_TXX_STATES(offset, block_id)                                             \
+	(0x00011800E0030060ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_GMP_PCS_TX_RXX_POLARITY(offset, block_id)                                        \
+	(0x00011800E0030048ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_CBFC_CTL(offset, block_id)                                                  \
+	(0x00011800E0020218ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_CTRL(offset, block_id)                                                      \
+	(0x00011800E0020200ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_EXT_LOOPBACK(offset, block_id)                                              \
+	(0x00011800E0020208ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_HG2_CONTROL(offset, block_id)                                               \
+	(0x00011800E0020210ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_BAD_COL_HI(offset, block_id)                                             \
+	(0x00011800E0020040ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_BAD_COL_LO(offset, block_id)                                             \
+	(0x00011800E0020038ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_CTL(offset, block_id)                                                    \
+	(0x00011800E0020030ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_DECISION(offset, block_id)                                               \
+	(0x00011800E0020020ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_FRM_CHK(offset, block_id)                                                \
+	(0x00011800E0020010ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_FRM_CTL(offset, block_id)                                                \
+	(0x00011800E0020008ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_INT(offset, block_id)                                                    \
+	(0x00011800E0020000ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_JABBER(offset, block_id)                                                 \
+	(0x00011800E0020018ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_RX_UDD_SKP(offset, block_id)                                                \
+	(0x00011800E0020028ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_SMAC(offset, block_id)                                                      \
+	(0x00011800E0020108ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_APPEND(offset, block_id)                                                 \
+	(0x00011800E0020100ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_CTL(offset, block_id)                                                    \
+	(0x00011800E0020160ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_IFG(offset, block_id)                                                    \
+	(0x00011800E0020148ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_INT(offset, block_id)                                                    \
+	(0x00011800E0020140ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_MIN_PKT(offset, block_id)                                                \
+	(0x00011800E0020118ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_PKT_DMAC(offset, block_id)                                         \
+	(0x00011800E0020150ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_PKT_INTERVAL(offset, block_id)                                     \
+	(0x00011800E0020120ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_PKT_TIME(offset, block_id)                                         \
+	(0x00011800E0020110ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_PKT_TYPE(offset, block_id)                                         \
+	(0x00011800E0020158ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_TOGO(offset, block_id)                                             \
+	(0x00011800E0020130ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_PAUSE_ZERO(offset, block_id)                                             \
+	(0x00011800E0020138ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_SOFT_PAUSE(offset, block_id)                                             \
+	(0x00011800E0020128ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SMUX_TX_THRESH(offset, block_id)                                                 \
+	(0x00011800E0020168ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_ADV(offset, block_id)                                                    \
+	(0x00011800E00100D8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_BP_STATUS(offset, block_id)                                              \
+	(0x00011800E00100F8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_CONTROL(offset, block_id)                                                \
+	(0x00011800E00100C8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_LP_BASE(offset, block_id)                                                \
+	(0x00011800E00100E0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_LP_XNP(offset, block_id)                                                 \
+	(0x00011800E00100F0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_STATUS(offset, block_id)                                                 \
+	(0x00011800E00100D0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_AN_XNP_TX(offset, block_id)                                                 \
+	(0x00011800E00100E8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_ALGN_STATUS(offset, block_id)                                            \
+	(0x00011800E0010050ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_BIP_ERR_CNT(offset, block_id)                                            \
+	(0x00011800E0010058ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_LANE_MAP(offset, block_id)                                               \
+	(0x00011800E0010060ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_CONTROL(offset, block_id)                                            \
+	(0x00011800E0010068ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_LD_CUP(offset, block_id)                                             \
+	(0x00011800E0010088ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_LD_REP(offset, block_id)                                             \
+	(0x00011800E0010090ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_LP_CUP(offset, block_id)                                             \
+	(0x00011800E0010078ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_LP_REP(offset, block_id)                                             \
+	(0x00011800E0010080ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_PMD_STATUS(offset, block_id)                                             \
+	(0x00011800E0010070ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_STATUS1(offset, block_id)                                                \
+	(0x00011800E0010030ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_STATUS2(offset, block_id)                                                \
+	(0x00011800E0010038ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_TP_CONTROL(offset, block_id)                                             \
+	(0x00011800E0010040ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BR_TP_ERR_CNT(offset, block_id)                                             \
+	(0x00011800E0010048ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_BX_STATUS(offset, block_id)                                                 \
+	(0x00011800E0010028ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_CONTROL1(offset, block_id)                                                  \
+	(0x00011800E0010000ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_CONTROL2(offset, block_id)                                                  \
+	(0x00011800E0010018ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_ABIL(offset, block_id)                                                  \
+	(0x00011800E0010098ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_CONTROL(offset, block_id)                                               \
+	(0x00011800E00100A0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_CORR_BLKS01(offset, block_id)                                           \
+	(0x00011800E00100A8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_CORR_BLKS23(offset, block_id)                                           \
+	(0x00011800E00100B0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_UNCORR_BLKS01(offset, block_id)                                         \
+	(0x00011800E00100B8ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_FEC_UNCORR_BLKS23(offset, block_id)                                         \
+	(0x00011800E00100C0ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_INT(offset, block_id)                                                       \
+	(0x00011800E0010220ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_LPCS_STATES(offset, block_id)                                               \
+	(0x00011800E0010208ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_MISC_CONTROL(offset, block_id)                                              \
+	(0x00011800E0010218ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_SPD_ABIL(offset, block_id)                                                  \
+	(0x00011800E0010010ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_STATUS1(offset, block_id)                                                   \
+	(0x00011800E0010008ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPUX_STATUS2(offset, block_id)                                                   \
+	(0x00011800E0010020ull + (((offset) & 3) + ((block_id) & 7) * 0x10ull) * 1048576)
+#define CVMX_BGXX_SPU_BIST_STATUS(offset) (0x00011800E0010318ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_SPU_DBG_CONTROL(offset) (0x00011800E0010300ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_SPU_MEM_INT(offset)	  (0x00011800E0010310ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_SPU_MEM_STATUS(offset)  (0x00011800E0010308ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_BGXX_SPU_SDSX_SKEW_STATUS(offset, block_id)                                           \
+	(0x00011800E0010320ull + (((offset) & 3) + ((block_id) & 7) * 0x200000ull) * 8)
+#define CVMX_BGXX_SPU_SDSX_STATES(offset, block_id)                                                \
+	(0x00011800E0010340ull + (((offset) & 3) + ((block_id) & 7) * 0x200000ull) * 8)
+
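+/*
+ * Note on the address macros above (an inference from the masks, not
+ * vendor documentation): for two-argument macros, (offset) selects the
+ * register instance within a BGX block (usually the LMAC) and
+ * (block_id) selects the BGX block itself; single-argument macros take
+ * only the BGX index. Arguments are masked to their valid range, so
+ * out-of-range values alias in-range registers rather than faulting.
+ */
+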
+/**
+ * cvmx_bgx#_cmr#_config
+ *
+ * Logical MAC/PCS configuration registers; one per LMAC. The maximum number of LMACs (and
+ * maximum LMAC ID) that can be enabled by these registers is limited by
+ * BGX()_CMR_RX_LMACS[LMACS] and BGX()_CMR_TX_LMACS[LMACS]. When multiple LMACs are
+ * enabled, they must be configured with the same [LMAC_TYPE] value.
+ */
+union cvmx_bgxx_cmrx_config {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_config_s {
+		u64 reserved_16_63 : 48;
+		u64 enable : 1;
+		u64 data_pkt_rx_en : 1;
+		u64 data_pkt_tx_en : 1;
+		u64 int_beat_gen : 1;
+		u64 mix_en : 1;
+		u64 lmac_type : 3;
+		u64 lane_to_sds : 8;
+	} s;
+	struct cvmx_bgxx_cmrx_config_s cn73xx;
+	struct cvmx_bgxx_cmrx_config_s cn78xx;
+	struct cvmx_bgxx_cmrx_config_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_config_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_config cvmx_bgxx_cmrx_config_t;
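+
+/*
+ * Usage sketch (illustrative only, not part of the imported register
+ * definitions): the CVMX_BGXX_* address macros above pair with these
+ * unions for type-safe read-modify-write access. This assumes the
+ * SDK-style cvmx_read_csr()/cvmx_write_csr() accessors and the
+ * CVMX_BGXX_CMRX_CONFIG() address macro defined earlier in this header;
+ * lmac and bgx are placeholder indices:
+ *
+ *   cvmx_bgxx_cmrx_config_t cmr_config;
+ *
+ *   cmr_config.u64 = cvmx_read_csr(CVMX_BGXX_CMRX_CONFIG(lmac, bgx));
+ *   cmr_config.s.enable = 1;
+ *   cmr_config.s.data_pkt_rx_en = 1;
+ *   cmr_config.s.data_pkt_tx_en = 1;
+ *   cvmx_write_csr(CVMX_BGXX_CMRX_CONFIG(lmac, bgx), cmr_config.u64);
+ */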
+
+/**
+ * cvmx_bgx#_cmr#_int
+ */
+union cvmx_bgxx_cmrx_int {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_int_s {
+		u64 reserved_3_63 : 61;
+		u64 pko_nxc : 1;
+		u64 overflw : 1;
+		u64 pause_drp : 1;
+	} s;
+	struct cvmx_bgxx_cmrx_int_s cn73xx;
+	struct cvmx_bgxx_cmrx_int_s cn78xx;
+	struct cvmx_bgxx_cmrx_int_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_int cvmx_bgxx_cmrx_int_t;
+
+/**
+ * cvmx_bgx#_cmr#_prt_cbfc_ctl
+ *
+ * See the XOFF definition listed under BGX()_SMU()_CBFC_CTL.
+ *
+ */
+union cvmx_bgxx_cmrx_prt_cbfc_ctl {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_prt_cbfc_ctl_s {
+		u64 reserved_32_63 : 32;
+		u64 phys_bp : 16;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_prt_cbfc_ctl_s cn73xx;
+	struct cvmx_bgxx_cmrx_prt_cbfc_ctl_s cn78xx;
+	struct cvmx_bgxx_cmrx_prt_cbfc_ctl_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_prt_cbfc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_prt_cbfc_ctl cvmx_bgxx_cmrx_prt_cbfc_ctl_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_adr_ctl
+ */
+union cvmx_bgxx_cmrx_rx_adr_ctl {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_adr_ctl_s {
+		u64 reserved_4_63 : 60;
+		u64 cam_accept : 1;
+		u64 mcst_mode : 2;
+		u64 bcst_accept : 1;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_adr_ctl_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_adr_ctl_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_adr_ctl_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_adr_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_adr_ctl cvmx_bgxx_cmrx_rx_adr_ctl_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_bp_drop
+ */
+union cvmx_bgxx_cmrx_rx_bp_drop {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_bp_drop_s {
+		u64 reserved_7_63 : 57;
+		u64 mark : 7;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_bp_drop_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_bp_drop_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_bp_drop_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_bp_drop_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_bp_drop cvmx_bgxx_cmrx_rx_bp_drop_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_bp_off
+ */
+union cvmx_bgxx_cmrx_rx_bp_off {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_bp_off_s {
+		u64 reserved_7_63 : 57;
+		u64 mark : 7;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_bp_off_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_bp_off_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_bp_off_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_bp_off_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_bp_off cvmx_bgxx_cmrx_rx_bp_off_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_bp_on
+ */
+union cvmx_bgxx_cmrx_rx_bp_on {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_bp_on_s {
+		u64 reserved_12_63 : 52;
+		u64 mark : 12;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_bp_on_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_bp_on_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_bp_on_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_bp_on_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_bp_on cvmx_bgxx_cmrx_rx_bp_on_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_bp_status
+ */
+union cvmx_bgxx_cmrx_rx_bp_status {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_bp_status_s {
+		u64 reserved_1_63 : 63;
+		u64 bp : 1;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_bp_status_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_bp_status_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_bp_status_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_bp_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_bp_status cvmx_bgxx_cmrx_rx_bp_status_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_fifo_len
+ */
+union cvmx_bgxx_cmrx_rx_fifo_len {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_fifo_len_s {
+		u64 reserved_13_63 : 51;
+		u64 fifo_len : 13;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_fifo_len_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_fifo_len_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_fifo_len_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_fifo_len_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_fifo_len cvmx_bgxx_cmrx_rx_fifo_len_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_id_map
+ *
+ * These registers set the RX LMAC ID mapping for X2P/PKI.
+ *
+ */
+union cvmx_bgxx_cmrx_rx_id_map {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_id_map_s {
+		u64 reserved_15_63 : 49;
+		u64 rid : 7;
+		u64 pknd : 8;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_id_map_cn73xx {
+		u64 reserved_15_63 : 49;
+		u64 rid : 7;
+		u64 reserved_6_7 : 2;
+		u64 pknd : 6;
+	} cn73xx;
+	struct cvmx_bgxx_cmrx_rx_id_map_cn73xx cn78xx;
+	struct cvmx_bgxx_cmrx_rx_id_map_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_id_map_cn73xx cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_id_map cvmx_bgxx_cmrx_rx_id_map_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_logl_xoff
+ */
+union cvmx_bgxx_cmrx_rx_logl_xoff {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_logl_xoff_s {
+		u64 reserved_16_63 : 48;
+		u64 xoff : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_logl_xoff_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_logl_xoff_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_logl_xoff_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_logl_xoff_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_logl_xoff cvmx_bgxx_cmrx_rx_logl_xoff_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_logl_xon
+ */
+union cvmx_bgxx_cmrx_rx_logl_xon {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_logl_xon_s {
+		u64 reserved_16_63 : 48;
+		u64 xon : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_logl_xon_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_logl_xon_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_logl_xon_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_logl_xon_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_logl_xon cvmx_bgxx_cmrx_rx_logl_xon_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_pause_drop_time
+ */
+union cvmx_bgxx_cmrx_rx_pause_drop_time {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_pause_drop_time_s {
+		u64 reserved_16_63 : 48;
+		u64 pause_time : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_pause_drop_time_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_pause_drop_time_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_pause_drop_time_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_pause_drop_time_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_pause_drop_time cvmx_bgxx_cmrx_rx_pause_drop_time_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat0
+ *
+ * These registers provide a count of received packets that meet the following conditions:
+ * * are not recognized as PAUSE packets.
+ * * are not dropped due to DMAC filtering.
+ * * are not dropped due to FIFO full status.
+ * * do not have any other OPCODE (FCS, Length, etc).
+ */
+union cvmx_bgxx_cmrx_rx_stat0 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat0_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat0_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat0_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat0_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat0_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat0 cvmx_bgxx_cmrx_rx_stat0_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat1
+ *
+ * These registers provide a count of octets of received packets.
+ *
+ */
+union cvmx_bgxx_cmrx_rx_stat1 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat1_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat1_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat1_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat1_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat1_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat1 cvmx_bgxx_cmrx_rx_stat1_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat2
+ *
+ * These registers provide a count of all packets received that were recognized as flow-control
+ * or PAUSE packets. PAUSE packets with any kind of error are counted in
+ * BGX()_CMR()_RX_STAT8 (error stats register). Pause packets can be optionally dropped
+ * or forwarded based on BGX()_SMU()_RX_FRM_CTL[CTL_DRP]. This count increments
+ * regardless of whether the packet is dropped. PAUSE packets are never counted in
+ * BGX()_CMR()_RX_STAT0.
+ */
+union cvmx_bgxx_cmrx_rx_stat2 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat2_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat2_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat2_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat2_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat2 cvmx_bgxx_cmrx_rx_stat2_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat3
+ *
+ * These registers provide a count of octets of received PAUSE and control packets.
+ *
+ */
+union cvmx_bgxx_cmrx_rx_stat3 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat3_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat3_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat3_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat3_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat3_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat3 cvmx_bgxx_cmrx_rx_stat3_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat4
+ *
+ * These registers provide a count of all packets received that were dropped by the DMAC filter.
+ * Packets that match the DMAC are dropped and counted here regardless of whether they were ERR
+ * packets; this count does not include those reported in BGX()_CMR()_RX_STAT6. These packets
+ * are never counted in BGX()_CMR()_RX_STAT0. Eight-byte packets as the result of
+ * truncation or other means are not dropped by CNXXXX and will never appear in this count.
+ */
+union cvmx_bgxx_cmrx_rx_stat4 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat4_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat4_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat4_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat4_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat4_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat4 cvmx_bgxx_cmrx_rx_stat4_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat5
+ *
+ * These registers provide a count of octets of filtered DMAC packets.
+ *
+ */
+union cvmx_bgxx_cmrx_rx_stat5 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat5_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat5_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat5_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat5_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat5_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat5 cvmx_bgxx_cmrx_rx_stat5_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat6
+ *
+ * These registers provide a count of all packets received that were dropped due to a full
+ * receive FIFO. They do not count any packet that is truncated at the point of overflow and sent
+ * on to the PKI. These registers count all entire packets dropped by the FIFO for a given LMAC
+ * regardless of DMAC or PAUSE type.
+ */
+union cvmx_bgxx_cmrx_rx_stat6 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat6_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat6_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat6_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat6_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat6_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat6 cvmx_bgxx_cmrx_rx_stat6_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat7
+ *
+ * These registers provide a count of octets of received packets that were dropped due to a full
+ * receive FIFO.
+ */
+union cvmx_bgxx_cmrx_rx_stat7 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat7_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat7_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat7_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat7_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat7_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat7 cvmx_bgxx_cmrx_rx_stat7_t;
+
+/**
+ * cvmx_bgx#_cmr#_rx_stat8
+ *
+ * These registers provide a count of all packets received with some error that were not dropped
+ * due to either the DMAC filter or a lack of room in the receive FIFO.
+ * This count does not include packets already counted in
+ * BGX()_CMR()_RX_STAT2, BGX()_CMR()_RX_STAT4, or
+ * BGX()_CMR()_RX_STAT6.
+ *
+ * Which statistics are updated on control packet errors and drops is shown below:
+ *
+ * <pre>
+ * if dropped [
+ *   if !errored STAT8
+ *   if overflow STAT6
+ *   else if dmac drop STAT4
+ *   else if filter drop STAT2
+ * ] else [
+ *   if errored STAT2
+ *   else STAT8
+ * ]
+ * </pre>
+ */
+union cvmx_bgxx_cmrx_rx_stat8 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_stat8_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_stat8_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_stat8_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_stat8_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_stat8_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_stat8 cvmx_bgxx_cmrx_rx_stat8_t;
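+
+/*
+ * Illustrative read of this error counter, matching the decision tree
+ * above (a sketch; assumes the SDK-style cvmx_read_csr() accessor and
+ * the CVMX_BGXX_CMRX_RX_STAT8() address macro from earlier in this
+ * header; lmac and bgx are placeholder indices). The STAT counters are
+ * 48 bits wide, so read [CNT] through the union rather than masking by
+ * hand:
+ *
+ *   cvmx_bgxx_cmrx_rx_stat8_t stat8;
+ *
+ *   stat8.u64 = cvmx_read_csr(CVMX_BGXX_CMRX_RX_STAT8(lmac, bgx));
+ *   printf("RX error packets: %llu\n", (unsigned long long)stat8.s.cnt);
+ */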
+
+/**
+ * cvmx_bgx#_cmr#_rx_weight
+ */
+union cvmx_bgxx_cmrx_rx_weight {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_rx_weight_s {
+		u64 reserved_4_63 : 60;
+		u64 weight : 4;
+	} s;
+	struct cvmx_bgxx_cmrx_rx_weight_s cn73xx;
+	struct cvmx_bgxx_cmrx_rx_weight_s cn78xx;
+	struct cvmx_bgxx_cmrx_rx_weight_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_rx_weight_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_rx_weight cvmx_bgxx_cmrx_rx_weight_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_channel
+ */
+union cvmx_bgxx_cmrx_tx_channel {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_channel_s {
+		u64 reserved_32_63 : 32;
+		u64 msk : 16;
+		u64 dis : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_channel_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_channel_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_channel_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_channel_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_channel cvmx_bgxx_cmrx_tx_channel_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_fifo_len
+ */
+union cvmx_bgxx_cmrx_tx_fifo_len {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_fifo_len_s {
+		u64 reserved_14_63 : 50;
+		u64 lmac_idle : 1;
+		u64 fifo_len : 13;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_fifo_len_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_fifo_len_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_fifo_len_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_fifo_len_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_fifo_len cvmx_bgxx_cmrx_tx_fifo_len_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_hg2_status
+ */
+union cvmx_bgxx_cmrx_tx_hg2_status {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_hg2_status_s {
+		u64 reserved_32_63 : 32;
+		u64 xof : 16;
+		u64 lgtim2go : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_hg2_status_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_hg2_status_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_hg2_status_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_hg2_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_hg2_status cvmx_bgxx_cmrx_tx_hg2_status_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_ovr_bp
+ */
+union cvmx_bgxx_cmrx_tx_ovr_bp {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_ovr_bp_s {
+		u64 reserved_16_63 : 48;
+		u64 tx_chan_bp : 16;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_ovr_bp_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_ovr_bp_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_ovr_bp_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_ovr_bp_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_ovr_bp cvmx_bgxx_cmrx_tx_ovr_bp_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat0
+ */
+union cvmx_bgxx_cmrx_tx_stat0 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat0_s {
+		u64 reserved_48_63 : 16;
+		u64 xscol : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat0_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat0_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat0_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat0_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat0 cvmx_bgxx_cmrx_tx_stat0_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat1
+ */
+union cvmx_bgxx_cmrx_tx_stat1 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat1_s {
+		u64 reserved_48_63 : 16;
+		u64 xsdef : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat1_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat1_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat1_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat1_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat1 cvmx_bgxx_cmrx_tx_stat1_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat10
+ */
+union cvmx_bgxx_cmrx_tx_stat10 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat10_s {
+		u64 reserved_48_63 : 16;
+		u64 hist4 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat10_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat10_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat10_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat10_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat10 cvmx_bgxx_cmrx_tx_stat10_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat11
+ */
+union cvmx_bgxx_cmrx_tx_stat11 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat11_s {
+		u64 reserved_48_63 : 16;
+		u64 hist5 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat11_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat11_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat11_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat11_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat11 cvmx_bgxx_cmrx_tx_stat11_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat12
+ */
+union cvmx_bgxx_cmrx_tx_stat12 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat12_s {
+		u64 reserved_48_63 : 16;
+		u64 hist6 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat12_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat12_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat12_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat12_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat12 cvmx_bgxx_cmrx_tx_stat12_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat13
+ */
+union cvmx_bgxx_cmrx_tx_stat13 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat13_s {
+		u64 reserved_48_63 : 16;
+		u64 hist7 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat13_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat13_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat13_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat13_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat13 cvmx_bgxx_cmrx_tx_stat13_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat14
+ */
+union cvmx_bgxx_cmrx_tx_stat14 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat14_s {
+		u64 reserved_48_63 : 16;
+		u64 bcst : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat14_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat14_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat14_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat14_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat14 cvmx_bgxx_cmrx_tx_stat14_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat15
+ */
+union cvmx_bgxx_cmrx_tx_stat15 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat15_s {
+		u64 reserved_48_63 : 16;
+		u64 mcst : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat15_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat15_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat15_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat15_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat15 cvmx_bgxx_cmrx_tx_stat15_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat16
+ */
+union cvmx_bgxx_cmrx_tx_stat16 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat16_s {
+		u64 reserved_48_63 : 16;
+		u64 undflw : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat16_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat16_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat16_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat16_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat16 cvmx_bgxx_cmrx_tx_stat16_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat17
+ */
+union cvmx_bgxx_cmrx_tx_stat17 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat17_s {
+		u64 reserved_48_63 : 16;
+		u64 ctl : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat17_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat17_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat17_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat17_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat17 cvmx_bgxx_cmrx_tx_stat17_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat2
+ */
+union cvmx_bgxx_cmrx_tx_stat2 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 mcol : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat2_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat2_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat2_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat2_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat2 cvmx_bgxx_cmrx_tx_stat2_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat3
+ */
+union cvmx_bgxx_cmrx_tx_stat3 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat3_s {
+		u64 reserved_48_63 : 16;
+		u64 scol : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat3_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat3_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat3_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat3_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat3 cvmx_bgxx_cmrx_tx_stat3_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat4
+ */
+union cvmx_bgxx_cmrx_tx_stat4 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat4_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat4_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat4_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat4_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat4_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat4 cvmx_bgxx_cmrx_tx_stat4_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat5
+ */
+union cvmx_bgxx_cmrx_tx_stat5 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat5_s {
+		u64 reserved_48_63 : 16;
+		u64 pkts : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat5_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat5_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat5_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat5_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat5 cvmx_bgxx_cmrx_tx_stat5_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat6
+ */
+union cvmx_bgxx_cmrx_tx_stat6 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat6_s {
+		u64 reserved_48_63 : 16;
+		u64 hist0 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat6_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat6_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat6_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat6_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat6 cvmx_bgxx_cmrx_tx_stat6_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat7
+ */
+union cvmx_bgxx_cmrx_tx_stat7 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat7_s {
+		u64 reserved_48_63 : 16;
+		u64 hist1 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat7_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat7_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat7_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat7_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat7 cvmx_bgxx_cmrx_tx_stat7_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat8
+ */
+union cvmx_bgxx_cmrx_tx_stat8 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat8_s {
+		u64 reserved_48_63 : 16;
+		u64 hist2 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat8_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat8_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat8_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat8_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat8 cvmx_bgxx_cmrx_tx_stat8_t;
+
+/**
+ * cvmx_bgx#_cmr#_tx_stat9
+ */
+union cvmx_bgxx_cmrx_tx_stat9 {
+	u64 u64;
+	struct cvmx_bgxx_cmrx_tx_stat9_s {
+		u64 reserved_48_63 : 16;
+		u64 hist3 : 48;
+	} s;
+	struct cvmx_bgxx_cmrx_tx_stat9_s cn73xx;
+	struct cvmx_bgxx_cmrx_tx_stat9_s cn78xx;
+	struct cvmx_bgxx_cmrx_tx_stat9_s cn78xxp1;
+	struct cvmx_bgxx_cmrx_tx_stat9_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmrx_tx_stat9 cvmx_bgxx_cmrx_tx_stat9_t;
+
+/**
+ * cvmx_bgx#_cmr_bad
+ */
+union cvmx_bgxx_cmr_bad {
+	u64 u64;
+	struct cvmx_bgxx_cmr_bad_s {
+		u64 reserved_1_63 : 63;
+		u64 rxb_nxl : 1;
+	} s;
+	struct cvmx_bgxx_cmr_bad_s cn73xx;
+	struct cvmx_bgxx_cmr_bad_s cn78xx;
+	struct cvmx_bgxx_cmr_bad_s cn78xxp1;
+	struct cvmx_bgxx_cmr_bad_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_bad cvmx_bgxx_cmr_bad_t;
+
+/**
+ * cvmx_bgx#_cmr_bist_status
+ */
+union cvmx_bgxx_cmr_bist_status {
+	u64 u64;
+	struct cvmx_bgxx_cmr_bist_status_s {
+		u64 reserved_25_63 : 39;
+		u64 status : 25;
+	} s;
+	struct cvmx_bgxx_cmr_bist_status_s cn73xx;
+	struct cvmx_bgxx_cmr_bist_status_s cn78xx;
+	struct cvmx_bgxx_cmr_bist_status_s cn78xxp1;
+	struct cvmx_bgxx_cmr_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_bist_status cvmx_bgxx_cmr_bist_status_t;
+
+/**
+ * cvmx_bgx#_cmr_chan_msk_and
+ */
+union cvmx_bgxx_cmr_chan_msk_and {
+	u64 u64;
+	struct cvmx_bgxx_cmr_chan_msk_and_s {
+		u64 msk_and : 64;
+	} s;
+	struct cvmx_bgxx_cmr_chan_msk_and_s cn73xx;
+	struct cvmx_bgxx_cmr_chan_msk_and_s cn78xx;
+	struct cvmx_bgxx_cmr_chan_msk_and_s cn78xxp1;
+	struct cvmx_bgxx_cmr_chan_msk_and_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_chan_msk_and cvmx_bgxx_cmr_chan_msk_and_t;
+
+/**
+ * cvmx_bgx#_cmr_chan_msk_or
+ */
+union cvmx_bgxx_cmr_chan_msk_or {
+	u64 u64;
+	struct cvmx_bgxx_cmr_chan_msk_or_s {
+		u64 msk_or : 64;
+	} s;
+	struct cvmx_bgxx_cmr_chan_msk_or_s cn73xx;
+	struct cvmx_bgxx_cmr_chan_msk_or_s cn78xx;
+	struct cvmx_bgxx_cmr_chan_msk_or_s cn78xxp1;
+	struct cvmx_bgxx_cmr_chan_msk_or_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_chan_msk_or cvmx_bgxx_cmr_chan_msk_or_t;
+
+/**
+ * cvmx_bgx#_cmr_eco
+ */
+union cvmx_bgxx_cmr_eco {
+	u64 u64;
+	struct cvmx_bgxx_cmr_eco_s {
+		u64 eco_ro : 32;
+		u64 eco_rw : 32;
+	} s;
+	struct cvmx_bgxx_cmr_eco_s cn73xx;
+	struct cvmx_bgxx_cmr_eco_s cn78xx;
+	struct cvmx_bgxx_cmr_eco_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_eco cvmx_bgxx_cmr_eco_t;
+
+/**
+ * cvmx_bgx#_cmr_global_config
+ *
+ * These registers configure the global CMR, PCS, and MAC.
+ *
+ */
+union cvmx_bgxx_cmr_global_config {
+	u64 u64;
+	struct cvmx_bgxx_cmr_global_config_s {
+		u64 reserved_5_63 : 59;
+		u64 cmr_mix1_reset : 1;
+		u64 cmr_mix0_reset : 1;
+		u64 cmr_x2p_reset : 1;
+		u64 bgx_clk_enable : 1;
+		u64 pmux_sds_sel : 1;
+	} s;
+	struct cvmx_bgxx_cmr_global_config_s cn73xx;
+	struct cvmx_bgxx_cmr_global_config_s cn78xx;
+	struct cvmx_bgxx_cmr_global_config_s cn78xxp1;
+	struct cvmx_bgxx_cmr_global_config_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_global_config cvmx_bgxx_cmr_global_config_t;
+
+/**
+ * cvmx_bgx#_cmr_mem_ctrl
+ */
+union cvmx_bgxx_cmr_mem_ctrl {
+	u64 u64;
+	struct cvmx_bgxx_cmr_mem_ctrl_s {
+		u64 reserved_24_63 : 40;
+		u64 txb_skid_synd : 2;
+		u64 txb_skid_cor_dis : 1;
+		u64 txb_fif_bk1_syn : 2;
+		u64 txb_fif_bk1_cdis : 1;
+		u64 txb_fif_bk0_syn : 2;
+		u64 txb_fif_bk0_cdis : 1;
+		u64 rxb_skid_synd : 2;
+		u64 rxb_skid_cor_dis : 1;
+		u64 rxb_fif_bk1_syn1 : 2;
+		u64 rxb_fif_bk1_cdis1 : 1;
+		u64 rxb_fif_bk1_syn0 : 2;
+		u64 rxb_fif_bk1_cdis0 : 1;
+		u64 rxb_fif_bk0_syn1 : 2;
+		u64 rxb_fif_bk0_cdis1 : 1;
+		u64 rxb_fif_bk0_syn0 : 2;
+		u64 rxb_fif_bk0_cdis0 : 1;
+	} s;
+	struct cvmx_bgxx_cmr_mem_ctrl_s cn73xx;
+	struct cvmx_bgxx_cmr_mem_ctrl_s cn78xx;
+	struct cvmx_bgxx_cmr_mem_ctrl_s cn78xxp1;
+	struct cvmx_bgxx_cmr_mem_ctrl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_mem_ctrl cvmx_bgxx_cmr_mem_ctrl_t;
+
+/**
+ * cvmx_bgx#_cmr_mem_int
+ */
+union cvmx_bgxx_cmr_mem_int {
+	u64 u64;
+	struct cvmx_bgxx_cmr_mem_int_s {
+		u64 reserved_18_63 : 46;
+		u64 smu_in_overfl : 1;
+		u64 gmp_in_overfl : 1;
+		u64 txb_skid_sbe : 1;
+		u64 txb_skid_dbe : 1;
+		u64 txb_fif_bk1_sbe : 1;
+		u64 txb_fif_bk1_dbe : 1;
+		u64 txb_fif_bk0_sbe : 1;
+		u64 txb_fif_bk0_dbe : 1;
+		u64 rxb_skid_sbe : 1;
+		u64 rxb_skid_dbe : 1;
+		u64 rxb_fif_bk1_sbe1 : 1;
+		u64 rxb_fif_bk1_dbe1 : 1;
+		u64 rxb_fif_bk1_sbe0 : 1;
+		u64 rxb_fif_bk1_dbe0 : 1;
+		u64 rxb_fif_bk0_sbe1 : 1;
+		u64 rxb_fif_bk0_dbe1 : 1;
+		u64 rxb_fif_bk0_sbe0 : 1;
+		u64 rxb_fif_bk0_dbe0 : 1;
+	} s;
+	struct cvmx_bgxx_cmr_mem_int_s cn73xx;
+	struct cvmx_bgxx_cmr_mem_int_s cn78xx;
+	struct cvmx_bgxx_cmr_mem_int_s cn78xxp1;
+	struct cvmx_bgxx_cmr_mem_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_mem_int cvmx_bgxx_cmr_mem_int_t;
+
+/**
+ * cvmx_bgx#_cmr_nxc_adr
+ */
+union cvmx_bgxx_cmr_nxc_adr {
+	u64 u64;
+	struct cvmx_bgxx_cmr_nxc_adr_s {
+		u64 reserved_16_63 : 48;
+		u64 lmac_id : 4;
+		u64 channel : 12;
+	} s;
+	struct cvmx_bgxx_cmr_nxc_adr_s cn73xx;
+	struct cvmx_bgxx_cmr_nxc_adr_s cn78xx;
+	struct cvmx_bgxx_cmr_nxc_adr_s cn78xxp1;
+	struct cvmx_bgxx_cmr_nxc_adr_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_nxc_adr cvmx_bgxx_cmr_nxc_adr_t;
+
+/**
+ * cvmx_bgx#_cmr_rx_adr#_cam
+ *
+ * These registers provide access to the 32 DMAC CAM entries in BGX.
+ *
+ */
+union cvmx_bgxx_cmr_rx_adrx_cam {
+	u64 u64;
+	struct cvmx_bgxx_cmr_rx_adrx_cam_s {
+		u64 reserved_54_63 : 10;
+		u64 id : 2;
+		u64 reserved_49_51 : 3;
+		u64 en : 1;
+		u64 adr : 48;
+	} s;
+	struct cvmx_bgxx_cmr_rx_adrx_cam_s cn73xx;
+	struct cvmx_bgxx_cmr_rx_adrx_cam_s cn78xx;
+	struct cvmx_bgxx_cmr_rx_adrx_cam_s cn78xxp1;
+	struct cvmx_bgxx_cmr_rx_adrx_cam_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_rx_adrx_cam cvmx_bgxx_cmr_rx_adrx_cam_t;
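+
+/*
+ * Illustrative programming of one DMAC CAM entry (a sketch; assumes the
+ * SDK-style cvmx_write_csr() accessor; CVMX_BGXX_CMR_RX_ADRX_CAM() is
+ * defined above; mac_addr, lmac, cam_index and bgx are placeholders).
+ * [ID] selects the LMAC that owns the entry, [ADR] holds the 48-bit
+ * DMAC to match, and [EN] activates the entry:
+ *
+ *   cvmx_bgxx_cmr_rx_adrx_cam_t cam;
+ *
+ *   cam.u64 = 0;
+ *   cam.s.adr = mac_addr;
+ *   cam.s.id = lmac;
+ *   cam.s.en = 1;
+ *   cvmx_write_csr(CVMX_BGXX_CMR_RX_ADRX_CAM(cam_index, bgx), cam.u64);
+ */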
+
+/**
+ * cvmx_bgx#_cmr_rx_lmacs
+ */
+union cvmx_bgxx_cmr_rx_lmacs {
+	u64 u64;
+	struct cvmx_bgxx_cmr_rx_lmacs_s {
+		u64 reserved_3_63 : 61;
+		u64 lmacs : 3;
+	} s;
+	struct cvmx_bgxx_cmr_rx_lmacs_s cn73xx;
+	struct cvmx_bgxx_cmr_rx_lmacs_s cn78xx;
+	struct cvmx_bgxx_cmr_rx_lmacs_s cn78xxp1;
+	struct cvmx_bgxx_cmr_rx_lmacs_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_rx_lmacs cvmx_bgxx_cmr_rx_lmacs_t;
+
+/**
+ * cvmx_bgx#_cmr_rx_ovr_bp
+ *
+ * BGX()_CMR_RX_OVR_BP[EN<0>] must be set to one and BGX()_CMR_RX_OVR_BP[BP<0>] must be
+ * cleared to zero (to forcibly disable hardware-automatic 802.3 PAUSE packet generation) with
+ * the HiGig2 Protocol when BGX()_SMU()_HG2_CONTROL[HG2TX_EN]=0. (The HiGig2 protocol is
+ * indicated by BGX()_SMU()_TX_CTL[HG_EN]=1 and BGX()_SMU()_RX_UDD_SKP[LEN]=16).
+ * Hardware can only auto-generate backpressure through HiGig2 messages (optionally, when
+ * BGX()_SMU()_HG2_CONTROL[HG2TX_EN]=1) with the HiGig2 protocol.
+ */
+union cvmx_bgxx_cmr_rx_ovr_bp {
+	u64 u64;
+	struct cvmx_bgxx_cmr_rx_ovr_bp_s {
+		u64 reserved_12_63 : 52;
+		u64 en : 4;
+		u64 bp : 4;
+		u64 ign_fifo_bp : 4;
+	} s;
+	struct cvmx_bgxx_cmr_rx_ovr_bp_s cn73xx;
+	struct cvmx_bgxx_cmr_rx_ovr_bp_s cn78xx;
+	struct cvmx_bgxx_cmr_rx_ovr_bp_s cn78xxp1;
+	struct cvmx_bgxx_cmr_rx_ovr_bp_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_rx_ovr_bp cvmx_bgxx_cmr_rx_ovr_bp_t;
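+
+/*
+ * Illustrative HiGig2 override per the note above (a sketch; assumes
+ * the SDK-style CSR accessors; CVMX_BGXX_CMR_RX_OVR_BP() is defined
+ * above; bgx is a placeholder index). Force EN<0> = 1 and BP<0> = 0 to
+ * disable automatic 802.3 PAUSE generation:
+ *
+ *   cvmx_bgxx_cmr_rx_ovr_bp_t ovr_bp;
+ *
+ *   ovr_bp.u64 = cvmx_read_csr(CVMX_BGXX_CMR_RX_OVR_BP(bgx));
+ *   ovr_bp.s.en |= 1;
+ *   ovr_bp.s.bp &= ~1;
+ *   cvmx_write_csr(CVMX_BGXX_CMR_RX_OVR_BP(bgx), ovr_bp.u64);
+ */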
+
+/**
+ * cvmx_bgx#_cmr_tx_lmacs
+ *
+ * This register sets the number of LMACs allowed on the TX interface. The value is important for
+ * defining the partitioning of the transmit FIFO.
+ */
+union cvmx_bgxx_cmr_tx_lmacs {
+	u64 u64;
+	struct cvmx_bgxx_cmr_tx_lmacs_s {
+		u64 reserved_3_63 : 61;
+		u64 lmacs : 3;
+	} s;
+	struct cvmx_bgxx_cmr_tx_lmacs_s cn73xx;
+	struct cvmx_bgxx_cmr_tx_lmacs_s cn78xx;
+	struct cvmx_bgxx_cmr_tx_lmacs_s cn78xxp1;
+	struct cvmx_bgxx_cmr_tx_lmacs_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_cmr_tx_lmacs cvmx_bgxx_cmr_tx_lmacs_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_prt#_cfg
+ *
+ * This register controls the configuration of the LMAC.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_prtx_cfg {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_prtx_cfg_s {
+		u64 reserved_14_63 : 50;
+		u64 tx_idle : 1;
+		u64 rx_idle : 1;
+		u64 reserved_9_11 : 3;
+		u64 speed_msb : 1;
+		u64 reserved_4_7 : 4;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_prtx_cfg_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_prtx_cfg_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_prtx_cfg_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_prtx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_prtx_cfg cvmx_bgxx_gmp_gmi_prtx_cfg_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_decision
+ *
+ * This register specifies the byte count used to determine when to accept or to filter a packet.
+ * As each byte in a packet is received by GMI, the L2 byte count is compared against
+ * [CNT]. In normal operation, the L2 header begins after the
+ * PREAMBLE + SFD (BGX()_GMP_GMI_RX()_FRM_CTL[PRE_CHK] = 1) and any optional UDD skip
+ * data (BGX()_GMP_GMI_RX()_UDD_SKP[LEN]).
+ */
+union cvmx_bgxx_gmp_gmi_rxx_decision {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_decision_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_decision_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_decision_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_decision_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_decision_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_decision cvmx_bgxx_gmp_gmi_rxx_decision_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_frm_chk
+ */
+union cvmx_bgxx_gmp_gmi_rxx_frm_chk {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_chk_s {
+		u64 reserved_9_63 : 55;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_chk_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_chk_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_chk_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_chk_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_frm_chk cvmx_bgxx_gmp_gmi_rxx_frm_chk_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_frm_ctl
+ *
+ * This register controls the handling of the frames.
+ * The [CTL_BCK] and [CTL_DRP] bits control how the hardware handles incoming PAUSE packets. The
+ * most
+ * common modes of operation:
+ * _ [CTL_BCK] = 1, [CTL_DRP] = 1: hardware handles everything.
+ * _ [CTL_BCK] = 0, [CTL_DRP] = 0: software sees all PAUSE frames.
+ * _ [CTL_BCK] = 0, [CTL_DRP] = 1: all PAUSE frames are completely ignored.
+ *
+ * These control bits should be set to [CTL_BCK] = 0, [CTL_DRP] = 0 in half-duplex mode.
+ * Since PAUSE packets only apply to full duplex operation, any PAUSE packet would constitute
+ * an exception which should be handled by the processing cores. PAUSE packets should not be
+ * forwarded.
+ */
+union cvmx_bgxx_gmp_gmi_rxx_frm_ctl {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_ctl_s {
+		u64 reserved_13_63 : 51;
+		u64 ptp_mode : 1;
+		u64 reserved_11_11 : 1;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 reserved_7_8 : 2;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_ctl_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_ctl_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_ctl_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_frm_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_frm_ctl cvmx_bgxx_gmp_gmi_rxx_frm_ctl_t;
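+
+/*
+ * Illustrative selection of the "hardware handles everything" PAUSE
+ * mode described above (a sketch; assumes the SDK-style CSR accessors;
+ * CVMX_BGXX_GMP_GMI_RXX_FRM_CTL() is defined above; lmac and bgx are
+ * placeholder indices):
+ *
+ *   cvmx_bgxx_gmp_gmi_rxx_frm_ctl_t frm_ctl;
+ *
+ *   frm_ctl.u64 = cvmx_read_csr(CVMX_BGXX_GMP_GMI_RXX_FRM_CTL(lmac, bgx));
+ *   frm_ctl.s.ctl_bck = 1;
+ *   frm_ctl.s.ctl_drp = 1;
+ *   cvmx_write_csr(CVMX_BGXX_GMP_GMI_RXX_FRM_CTL(lmac, bgx), frm_ctl.u64);
+ */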
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_ifg
+ *
+ * This register specifies the minimum number of interframe-gap (IFG) cycles between packets.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_rxx_ifg {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_ifg_s {
+		u64 reserved_4_63 : 60;
+		u64 ifg : 4;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_ifg_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_ifg_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_ifg_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_ifg_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_ifg cvmx_bgxx_gmp_gmi_rxx_ifg_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_int
+ *
+ * '"These registers allow interrupts to be sent to the control processor.
+ * * Exception conditions <10:0> can also set the rcv/opcode in the received packet's work-queue
+ * entry. BGX()_GMP_GMI_RX()_FRM_CHK provides a bit mask for configuring which conditions
+ * set the error.
+ * In half duplex operation, the expectation is that collisions will appear as either MINERR or
+ * CAREXT errors.
+ */
+union cvmx_bgxx_gmp_gmi_rxx_int {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_int_s {
+		u64 reserved_12_63 : 52;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_int_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_int_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_int_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_int cvmx_bgxx_gmp_gmi_rxx_int_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_jabber
+ *
+ * This register specifies the maximum size for packets, beyond which the GMI truncates.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_rxx_jabber {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_jabber_s {
+		u64 reserved_16_63 : 48;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_jabber_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_jabber_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_jabber_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_jabber_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_jabber cvmx_bgxx_gmp_gmi_rxx_jabber_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_rx#_udd_skp
+ *
+ * This register specifies the amount of user-defined data (UDD) added before the start of the
+ * L2C data.
+ */
+union cvmx_bgxx_gmp_gmi_rxx_udd_skp {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_rxx_udd_skp_s {
+		u64 reserved_9_63 : 55;
+		u64 fcssel : 1;
+		u64 reserved_7_7 : 1;
+		u64 len : 7;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_rxx_udd_skp_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_udd_skp_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_rxx_udd_skp_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_rxx_udd_skp_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_rxx_udd_skp cvmx_bgxx_gmp_gmi_rxx_udd_skp_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_smac#
+ */
+union cvmx_bgxx_gmp_gmi_smacx {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_smacx_s {
+		u64 reserved_48_63 : 16;
+		u64 smac : 48;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_smacx_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_smacx_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_smacx_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_smacx_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_smacx cvmx_bgxx_gmp_gmi_smacx_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_append
+ */
+union cvmx_bgxx_gmp_gmi_txx_append {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_append_s {
+		u64 reserved_4_63 : 60;
+		u64 force_fcs : 1;
+		u64 fcs : 1;
+		u64 pad : 1;
+		u64 preamble : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_append_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_append_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_append_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_append_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_append cvmx_bgxx_gmp_gmi_txx_append_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_burst
+ */
+union cvmx_bgxx_gmp_gmi_txx_burst {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_burst_s {
+		u64 reserved_16_63 : 48;
+		u64 burst : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_burst_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_burst_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_burst_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_burst_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_burst cvmx_bgxx_gmp_gmi_txx_burst_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_ctl
+ */
+union cvmx_bgxx_gmp_gmi_txx_ctl {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 xsdef_en : 1;
+		u64 xscol_en : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_ctl_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_ctl_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_ctl_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_ctl cvmx_bgxx_gmp_gmi_txx_ctl_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_int
+ */
+union cvmx_bgxx_gmp_gmi_txx_int {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_int_s {
+		u64 reserved_5_63 : 59;
+		u64 ptp_lost : 1;
+		u64 late_col : 1;
+		u64 xsdef : 1;
+		u64 xscol : 1;
+		u64 undflw : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_int_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_int_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_int_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_int cvmx_bgxx_gmp_gmi_txx_int_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_min_pkt
+ */
+union cvmx_bgxx_gmp_gmi_txx_min_pkt {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_min_pkt_s {
+		u64 reserved_8_63 : 56;
+		u64 min_size : 8;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_min_pkt_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_min_pkt_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_min_pkt_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_min_pkt_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_min_pkt cvmx_bgxx_gmp_gmi_txx_min_pkt_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_pause_pkt_interval
+ *
+ * This register specifies how often PAUSE packets are sent.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_s {
+		u64 reserved_16_63 : 48;
+		u64 interval : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval cvmx_bgxx_gmp_gmi_txx_pause_pkt_interval_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_pause_pkt_time
+ */
+union cvmx_bgxx_gmp_gmi_txx_pause_pkt_time {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_s {
+		u64 reserved_16_63 : 48;
+		u64 ptime : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_pause_pkt_time cvmx_bgxx_gmp_gmi_txx_pause_pkt_time_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_pause_togo
+ */
+union cvmx_bgxx_gmp_gmi_txx_pause_togo {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_togo_s {
+		u64 reserved_16_63 : 48;
+		u64 ptime : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_togo_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_togo_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_togo_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_togo_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_pause_togo cvmx_bgxx_gmp_gmi_txx_pause_togo_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_pause_zero
+ */
+union cvmx_bgxx_gmp_gmi_txx_pause_zero {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_zero_s {
+		u64 reserved_1_63 : 63;
+		u64 send : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_zero_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_zero_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_zero_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_pause_zero_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_pause_zero cvmx_bgxx_gmp_gmi_txx_pause_zero_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_sgmii_ctl
+ */
+union cvmx_bgxx_gmp_gmi_txx_sgmii_ctl {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 align : 1;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_sgmii_ctl cvmx_bgxx_gmp_gmi_txx_sgmii_ctl_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_slot
+ */
+union cvmx_bgxx_gmp_gmi_txx_slot {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_slot_s {
+		u64 reserved_10_63 : 54;
+		u64 slot : 10;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_slot_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_slot_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_slot_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_slot_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_slot cvmx_bgxx_gmp_gmi_txx_slot_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_soft_pause
+ */
+union cvmx_bgxx_gmp_gmi_txx_soft_pause {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_soft_pause_s {
+		u64 reserved_16_63 : 48;
+		u64 ptime : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_soft_pause_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_soft_pause_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_soft_pause_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_soft_pause_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_soft_pause cvmx_bgxx_gmp_gmi_txx_soft_pause_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx#_thresh
+ */
+union cvmx_bgxx_gmp_gmi_txx_thresh {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_txx_thresh_s {
+		u64 reserved_11_63 : 53;
+		u64 cnt : 11;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_txx_thresh_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_txx_thresh_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_txx_thresh_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_txx_thresh_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_txx_thresh cvmx_bgxx_gmp_gmi_txx_thresh_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_col_attempt
+ */
+union cvmx_bgxx_gmp_gmi_tx_col_attempt {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_col_attempt_s {
+		u64 reserved_5_63 : 59;
+		u64 limit : 5;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_col_attempt_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_col_attempt_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_col_attempt_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_col_attempt_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_col_attempt cvmx_bgxx_gmp_gmi_tx_col_attempt_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_ifg
+ *
+ * Consider the following when programming IFG1 and IFG2:
+ * * For 10/100/1000 Mb/s half-duplex systems that require IEEE 802.3 compatibility, IFG1 must be
+ * in the range of 1-8, IFG2 must be in the range of 4-12, and the IFG1 + IFG2 sum must be 12.
+ * * For 10/100/1000 Mb/s full-duplex systems that require IEEE 802.3 compatibility, IFG1 must be
+ * in the range of 1-11, IFG2 must be in the range of 1-11, and the IFG1 + IFG2 sum must be 12.
+ * For all other systems, IFG1 and IFG2 can be any value in the range of 1-15, allowing for a
+ * total possible IFG sum of 2-30.
+ */
+union cvmx_bgxx_gmp_gmi_tx_ifg {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_ifg_s {
+		u64 reserved_8_63 : 56;
+		u64 ifg2 : 4;
+		u64 ifg1 : 4;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_ifg_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_ifg_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_ifg_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_ifg_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_ifg cvmx_bgxx_gmp_gmi_tx_ifg_t;
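+
+/*
+ * Illustrative sketch, not part of the original header: one legal IEEE 802.3
+ * full-duplex split of the transmit interframe gap described above (IFG1 in
+ * 1-11, IFG2 in 1-11, IFG1 + IFG2 = 12). The helper name and the address
+ * macro argument are assumptions; csr_rd()/csr_wr() are the accessors used
+ * throughout this port.
+ */
+static inline void bgx_gmp_tx_ifg_802_3_full_duplex(int bgx)
+{
+	cvmx_bgxx_gmp_gmi_tx_ifg_t ifg;
+
+	ifg.u64 = csr_rd(CVMX_BGXX_GMP_GMI_TX_IFG(bgx));
+	ifg.s.ifg1 = 4;
+	ifg.s.ifg2 = 8;		/* IFG1 + IFG2 must sum to 12 */
+	csr_wr(CVMX_BGXX_GMP_GMI_TX_IFG(bgx), ifg.u64);
+}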
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_jam
+ *
+ * This register provides the pattern used in JAM bytes.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_tx_jam {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_jam_s {
+		u64 reserved_8_63 : 56;
+		u64 jam : 8;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_jam_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_jam_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_jam_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_jam_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_jam cvmx_bgxx_gmp_gmi_tx_jam_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_lfsr
+ *
+ * This register shows the contents of the linear feedback shift register (LFSR), which is used
+ * to implement truncated binary exponential backoff.
+ */
+union cvmx_bgxx_gmp_gmi_tx_lfsr {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_lfsr_s {
+		u64 reserved_16_63 : 48;
+		u64 lfsr : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_lfsr_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_lfsr_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_lfsr_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_lfsr_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_lfsr cvmx_bgxx_gmp_gmi_tx_lfsr_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_pause_pkt_dmac
+ */
+union cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 dmac : 48;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac cvmx_bgxx_gmp_gmi_tx_pause_pkt_dmac_t;
+
+/**
+ * cvmx_bgx#_gmp_gmi_tx_pause_pkt_type
+ *
+ * This register provides the PTYPE field that is placed in outbound PAUSE packets.
+ *
+ */
+union cvmx_bgxx_gmp_gmi_tx_pause_pkt_type {
+	u64 u64;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_s {
+		u64 reserved_16_63 : 48;
+		u64 ptype : 16;
+	} s;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_s cn73xx;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_s cn78xx;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_s cn78xxp1;
+	struct cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_gmi_tx_pause_pkt_type cvmx_bgxx_gmp_gmi_tx_pause_pkt_type_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_an#_adv
+ */
+union cvmx_bgxx_gmp_pcs_anx_adv {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_anx_adv_s {
+		u64 reserved_16_63 : 48;
+		u64 np : 1;
+		u64 reserved_14_14 : 1;
+		u64 rem_flt : 2;
+		u64 reserved_9_11 : 3;
+		u64 pause : 2;
+		u64 hfd : 1;
+		u64 fd : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_anx_adv_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_anx_adv_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_anx_adv_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_anx_adv_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_anx_adv cvmx_bgxx_gmp_pcs_anx_adv_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_an#_ext_st
+ */
+union cvmx_bgxx_gmp_pcs_anx_ext_st {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_anx_ext_st_s {
+		u64 reserved_16_63 : 48;
+		u64 thou_xfd : 1;
+		u64 thou_xhd : 1;
+		u64 thou_tfd : 1;
+		u64 thou_thd : 1;
+		u64 reserved_0_11 : 12;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_anx_ext_st_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_anx_ext_st_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_anx_ext_st_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_anx_ext_st_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_anx_ext_st cvmx_bgxx_gmp_pcs_anx_ext_st_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_an#_lp_abil
+ *
+ * This is the autonegotiation link partner ability register 5 as per IEEE 802.3, Clause 37.
+ *
+ */
+union cvmx_bgxx_gmp_pcs_anx_lp_abil {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_anx_lp_abil_s {
+		u64 reserved_16_63 : 48;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 rem_flt : 2;
+		u64 reserved_9_11 : 3;
+		u64 pause : 2;
+		u64 hfd : 1;
+		u64 fd : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_anx_lp_abil_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_anx_lp_abil_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_anx_lp_abil_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_anx_lp_abil_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_anx_lp_abil cvmx_bgxx_gmp_pcs_anx_lp_abil_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_an#_results
+ *
+ * This register is not valid when BGX()_GMP_PCS_MISC()_CTL[AN_OVRD] is set to 1. If
+ * BGX()_GMP_PCS_MISC()_CTL[AN_OVRD] is set to 0 and
+ * BGX()_GMP_PCS_AN()_RESULTS[AN_CPT] is set to 1, this register is valid.
+ */
+union cvmx_bgxx_gmp_pcs_anx_results {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_anx_results_s {
+		u64 reserved_7_63 : 57;
+		u64 pause : 2;
+		u64 spd : 2;
+		u64 an_cpt : 1;
+		u64 dup : 1;
+		u64 link_ok : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_anx_results_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_anx_results_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_anx_results_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_anx_results_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_anx_results cvmx_bgxx_gmp_pcs_anx_results_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_int#
+ */
+union cvmx_bgxx_gmp_pcs_intx {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_intx_s {
+		u64 reserved_13_63 : 51;
+		u64 dbg_sync : 1;
+		u64 dup : 1;
+		u64 sync_bad : 1;
+		u64 an_bad : 1;
+		u64 rxlock : 1;
+		u64 rxbad : 1;
+		u64 rxerr : 1;
+		u64 txbad : 1;
+		u64 txfifo : 1;
+		u64 txfifu : 1;
+		u64 an_err : 1;
+		u64 xmit : 1;
+		u64 lnkspd : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_intx_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_intx_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_intx_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_intx_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_intx cvmx_bgxx_gmp_pcs_intx_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_link#_timer
+ *
+ * This is the 1.6 ms nominal link timer register.
+ *
+ */
+union cvmx_bgxx_gmp_pcs_linkx_timer {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_linkx_timer_s {
+		u64 reserved_16_63 : 48;
+		u64 count : 16;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_linkx_timer_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_linkx_timer_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_linkx_timer_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_linkx_timer_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_linkx_timer cvmx_bgxx_gmp_pcs_linkx_timer_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_misc#_ctl
+ */
+union cvmx_bgxx_gmp_pcs_miscx_ctl {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_miscx_ctl_s {
+		u64 reserved_13_63 : 51;
+		u64 sgmii : 1;
+		u64 gmxeno : 1;
+		u64 loopbck2 : 1;
+		u64 mac_phy : 1;
+		u64 mode : 1;
+		u64 an_ovrd : 1;
+		u64 samp_pt : 7;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_miscx_ctl_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_miscx_ctl_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_miscx_ctl_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_miscx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_miscx_ctl cvmx_bgxx_gmp_pcs_miscx_ctl_t;
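+
+/*
+ * Illustrative sketch, not part of the original header: the validity rule
+ * described for BGX()_GMP_PCS_AN()_RESULTS above, i.e. only trust the AN
+ * results when [AN_OVRD] is clear and [AN_CPT] is set. The helper name and
+ * macro argument order are assumptions.
+ */
+static inline bool bgx_gmp_an_results_valid(int bgx, int lmac)
+{
+	cvmx_bgxx_gmp_pcs_miscx_ctl_t misc_ctl;
+	cvmx_bgxx_gmp_pcs_anx_results_t results;
+
+	misc_ctl.u64 = csr_rd(CVMX_BGXX_GMP_PCS_MISCX_CTL(lmac, bgx));
+	results.u64 = csr_rd(CVMX_BGXX_GMP_PCS_ANX_RESULTS(lmac, bgx));
+
+	return !misc_ctl.s.an_ovrd && results.s.an_cpt;
+}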
+
+/**
+ * cvmx_bgx#_gmp_pcs_mr#_control
+ */
+union cvmx_bgxx_gmp_pcs_mrx_control {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_mrx_control_s {
+		u64 reserved_16_63 : 48;
+		u64 reset : 1;
+		u64 loopbck1 : 1;
+		u64 spdlsb : 1;
+		u64 an_en : 1;
+		u64 pwr_dn : 1;
+		u64 reserved_10_10 : 1;
+		u64 rst_an : 1;
+		u64 dup : 1;
+		u64 coltst : 1;
+		u64 spdmsb : 1;
+		u64 uni : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_mrx_control_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_mrx_control_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_mrx_control_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_mrx_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_mrx_control cvmx_bgxx_gmp_pcs_mrx_control_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_mr#_status
+ *
+ * Bits <15:9> in this register indicate the ability to operate when
+ * BGX()_GMP_PCS_MISC()_CTL[MAC_PHY] is set to MAC mode. Bits <15:9> are always read as
+ * 0, indicating that the chip cannot operate in the corresponding modes. The field [RM_FLT] is a
+ * 'don't care' when the selected mode is SGMII.
+ */
+union cvmx_bgxx_gmp_pcs_mrx_status {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_mrx_status_s {
+		u64 reserved_16_63 : 48;
+		u64 hun_t4 : 1;
+		u64 hun_xfd : 1;
+		u64 hun_xhd : 1;
+		u64 ten_fd : 1;
+		u64 ten_hd : 1;
+		u64 hun_t2fd : 1;
+		u64 hun_t2hd : 1;
+		u64 ext_st : 1;
+		u64 reserved_7_7 : 1;
+		u64 prb_sup : 1;
+		u64 an_cpt : 1;
+		u64 rm_flt : 1;
+		u64 an_abil : 1;
+		u64 lnk_st : 1;
+		u64 reserved_1_1 : 1;
+		u64 extnd : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_mrx_status_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_mrx_status_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_mrx_status_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_mrx_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_mrx_status cvmx_bgxx_gmp_pcs_mrx_status_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_rx#_states
+ */
+union cvmx_bgxx_gmp_pcs_rxx_states {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_rxx_states_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_bad : 1;
+		u64 rx_st : 5;
+		u64 sync_bad : 1;
+		u64 sync : 4;
+		u64 an_bad : 1;
+		u64 an_st : 4;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_rxx_states_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_rxx_states_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_rxx_states_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_rxx_states_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_rxx_states cvmx_bgxx_gmp_pcs_rxx_states_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_rx#_sync
+ */
+union cvmx_bgxx_gmp_pcs_rxx_sync {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_rxx_sync_s {
+		u64 reserved_2_63 : 62;
+		u64 sync : 1;
+		u64 bit_lock : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_rxx_sync_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_rxx_sync_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_rxx_sync_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_rxx_sync_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_rxx_sync cvmx_bgxx_gmp_pcs_rxx_sync_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_sgm#_an_adv
+ *
+ * This is the SGMII autonegotiation advertisement register (sent out as tx_Config_Reg<15:0> as
+ * defined in IEEE 802.3 clause 37). This register is sent during autonegotiation if
+ * BGX()_GMP_PCS_MISC()_CTL[MAC_PHY] is set (1 = PHY mode). If the bit is not set (0 =
+ * MAC mode), then tx_Config_Reg<14> becomes the ACK bit and tx_Config_Reg<0>
+ * is always 1. All other bits in the transmitted tx_Config_Reg will be 0. The
+ * PHY dictates the autonegotiation results.
+ */
+union cvmx_bgxx_gmp_pcs_sgmx_an_adv {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_sgmx_an_adv_s {
+		u64 reserved_16_63 : 48;
+		u64 link : 1;
+		u64 ack : 1;
+		u64 reserved_13_13 : 1;
+		u64 dup : 1;
+		u64 speed : 2;
+		u64 reserved_1_9 : 9;
+		u64 one : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_sgmx_an_adv_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_sgmx_an_adv_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_sgmx_an_adv_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_sgmx_an_adv_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_sgmx_an_adv cvmx_bgxx_gmp_pcs_sgmx_an_adv_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_sgm#_lp_adv
+ *
+ * This is the SGMII link partner advertisement register (received as rx_Config_Reg<15:0> as
+ * defined in IEEE 802.3 clause 37).
+ */
+union cvmx_bgxx_gmp_pcs_sgmx_lp_adv {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_sgmx_lp_adv_s {
+		u64 reserved_16_63 : 48;
+		u64 link : 1;
+		u64 reserved_13_14 : 2;
+		u64 dup : 1;
+		u64 speed : 2;
+		u64 reserved_1_9 : 9;
+		u64 one : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_sgmx_lp_adv_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_sgmx_lp_adv_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_sgmx_lp_adv_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_sgmx_lp_adv_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_sgmx_lp_adv cvmx_bgxx_gmp_pcs_sgmx_lp_adv_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_tx#_states
+ */
+union cvmx_bgxx_gmp_pcs_txx_states {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_txx_states_s {
+		u64 reserved_7_63 : 57;
+		u64 xmit : 2;
+		u64 tx_bad : 1;
+		u64 ord_st : 4;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_txx_states_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_txx_states_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_txx_states_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_txx_states_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_txx_states cvmx_bgxx_gmp_pcs_txx_states_t;
+
+/**
+ * cvmx_bgx#_gmp_pcs_tx_rx#_polarity
+ *
+ * BGX()_GMP_PCS_TX_RX()_POLARITY[AUTORXPL] shows the correct polarity needed
+ * on the link receive path after code-group synchronization is achieved.
+ */
+union cvmx_bgxx_gmp_pcs_tx_rxx_polarity {
+	u64 u64;
+	struct cvmx_bgxx_gmp_pcs_tx_rxx_polarity_s {
+		u64 reserved_4_63 : 60;
+		u64 rxovrd : 1;
+		u64 autorxpl : 1;
+		u64 rxplrt : 1;
+		u64 txplrt : 1;
+	} s;
+	struct cvmx_bgxx_gmp_pcs_tx_rxx_polarity_s cn73xx;
+	struct cvmx_bgxx_gmp_pcs_tx_rxx_polarity_s cn78xx;
+	struct cvmx_bgxx_gmp_pcs_tx_rxx_polarity_s cn78xxp1;
+	struct cvmx_bgxx_gmp_pcs_tx_rxx_polarity_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_gmp_pcs_tx_rxx_polarity cvmx_bgxx_gmp_pcs_tx_rxx_polarity_t;
+
+/**
+ * cvmx_bgx#_smu#_cbfc_ctl
+ */
+union cvmx_bgxx_smux_cbfc_ctl {
+	u64 u64;
+	struct cvmx_bgxx_smux_cbfc_ctl_s {
+		u64 phys_en : 16;
+		u64 logl_en : 16;
+		u64 reserved_4_31 : 28;
+		u64 bck_en : 1;
+		u64 drp_en : 1;
+		u64 tx_en : 1;
+		u64 rx_en : 1;
+	} s;
+	struct cvmx_bgxx_smux_cbfc_ctl_s cn73xx;
+	struct cvmx_bgxx_smux_cbfc_ctl_s cn78xx;
+	struct cvmx_bgxx_smux_cbfc_ctl_s cn78xxp1;
+	struct cvmx_bgxx_smux_cbfc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_cbfc_ctl cvmx_bgxx_smux_cbfc_ctl_t;
+
+/**
+ * cvmx_bgx#_smu#_ctrl
+ */
+union cvmx_bgxx_smux_ctrl {
+	u64 u64;
+	struct cvmx_bgxx_smux_ctrl_s {
+		u64 reserved_2_63 : 62;
+		u64 tx_idle : 1;
+		u64 rx_idle : 1;
+	} s;
+	struct cvmx_bgxx_smux_ctrl_s cn73xx;
+	struct cvmx_bgxx_smux_ctrl_s cn78xx;
+	struct cvmx_bgxx_smux_ctrl_s cn78xxp1;
+	struct cvmx_bgxx_smux_ctrl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_ctrl cvmx_bgxx_smux_ctrl_t;
+
+/**
+ * cvmx_bgx#_smu#_ext_loopback
+ *
+ * In loopback mode, the IFG1+IFG2 of the local and remote parties must match
+ * exactly; otherwise one of the two sides' loopback FIFOs will overrun:
+ * BGX()_SMU()_TX_INT[LB_OVRFLW].
+ */
+union cvmx_bgxx_smux_ext_loopback {
+	u64 u64;
+	struct cvmx_bgxx_smux_ext_loopback_s {
+		u64 reserved_5_63 : 59;
+		u64 en : 1;
+		u64 thresh : 4;
+	} s;
+	struct cvmx_bgxx_smux_ext_loopback_s cn73xx;
+	struct cvmx_bgxx_smux_ext_loopback_s cn78xx;
+	struct cvmx_bgxx_smux_ext_loopback_s cn78xxp1;
+	struct cvmx_bgxx_smux_ext_loopback_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_ext_loopback cvmx_bgxx_smux_ext_loopback_t;
+
+/**
+ * cvmx_bgx#_smu#_hg2_control
+ *
+ * HiGig2 TX- and RX-enable are normally set together for HiGig2 messaging. Setting just the TX
+ * or RX bit results in only the HG2 message transmit or receive capability.
+ *
+ * Setting [PHYS_EN] and [LOGL_EN] to 1 allows link PAUSE or backpressure to PKO as per the
+ * received HiGig2 message. Setting these fields to 0 disables link PAUSE and backpressure to PKO
+ * in response to received messages.
+ *
+ * BGX()_SMU()_TX_CTL[HG_EN] must be set (to enable HiGig) whenever either [HG2TX_EN] or
+ * [HG2RX_EN] are set. BGX()_SMU()_RX_UDD_SKP[LEN] must be set to 16 (to select HiGig2)
+ * whenever either [HG2TX_EN] or [HG2RX_EN] are set.
+ *
+ * BGX()_CMR_RX_OVR_BP[EN]<0> must be set and BGX()_CMR_RX_OVR_BP[BP]<0> must be cleared
+ * to 0 (to forcibly disable hardware-automatic 802.3 PAUSE packet generation)
+ * with the HiGig2 protocol when [HG2TX_EN] = 0. (The HiGig2 protocol is indicated
+ * by BGX()_SMU()_TX_CTL[HG_EN] = 1 and BGX()_SMU()_RX_UDD_SKP[LEN]=16.) Hardware
+ * can only autogenerate backpressure via HiGig2 messages (optionally, when [HG2TX_EN] = 1) with
+ * the HiGig2 protocol.
+ */
+union cvmx_bgxx_smux_hg2_control {
+	u64 u64;
+	struct cvmx_bgxx_smux_hg2_control_s {
+		u64 reserved_19_63 : 45;
+		u64 hg2tx_en : 1;
+		u64 hg2rx_en : 1;
+		u64 phys_en : 1;
+		u64 logl_en : 16;
+	} s;
+	struct cvmx_bgxx_smux_hg2_control_s cn73xx;
+	struct cvmx_bgxx_smux_hg2_control_s cn78xx;
+	struct cvmx_bgxx_smux_hg2_control_s cn78xxp1;
+	struct cvmx_bgxx_smux_hg2_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_hg2_control cvmx_bgxx_smux_hg2_control_t;
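+
+/*
+ * Illustrative sketch, not part of the original header: enabling HiGig2
+ * messaging per the constraints above. The prerequisites live in other
+ * registers and must already be programmed; the helper name and macro
+ * argument order are assumptions.
+ */
+static inline void bgx_smu_hg2_enable(int bgx, int lmac)
+{
+	cvmx_bgxx_smux_hg2_control_t hg2;
+
+	/*
+	 * Prerequisites (see above): BGX()_SMU()_TX_CTL[HG_EN] = 1 and
+	 * BGX()_SMU()_RX_UDD_SKP[LEN] = 16 must already be set.
+	 */
+	hg2.u64 = csr_rd(CVMX_BGXX_SMUX_HG2_CONTROL(lmac, bgx));
+	hg2.s.hg2tx_en = 1;
+	hg2.s.hg2rx_en = 1;
+	hg2.s.phys_en = 1;	/* allow link PAUSE from received messages */
+	hg2.s.logl_en = 0xffff;	/* allow per-channel backpressure to PKO */
+	csr_wr(CVMX_BGXX_SMUX_HG2_CONTROL(lmac, bgx), hg2.u64);
+}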
+
+/**
+ * cvmx_bgx#_smu#_rx_bad_col_hi
+ */
+union cvmx_bgxx_smux_rx_bad_col_hi {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_bad_col_hi_s {
+		u64 reserved_17_63 : 47;
+		u64 val : 1;
+		u64 state : 8;
+		u64 lane_rxc : 8;
+	} s;
+	struct cvmx_bgxx_smux_rx_bad_col_hi_s cn73xx;
+	struct cvmx_bgxx_smux_rx_bad_col_hi_s cn78xx;
+	struct cvmx_bgxx_smux_rx_bad_col_hi_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_bad_col_hi_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_bad_col_hi cvmx_bgxx_smux_rx_bad_col_hi_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_bad_col_lo
+ */
+union cvmx_bgxx_smux_rx_bad_col_lo {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_bad_col_lo_s {
+		u64 lane_rxd : 64;
+	} s;
+	struct cvmx_bgxx_smux_rx_bad_col_lo_s cn73xx;
+	struct cvmx_bgxx_smux_rx_bad_col_lo_s cn78xx;
+	struct cvmx_bgxx_smux_rx_bad_col_lo_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_bad_col_lo_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_bad_col_lo cvmx_bgxx_smux_rx_bad_col_lo_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_ctl
+ */
+union cvmx_bgxx_smux_rx_ctl {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 status : 2;
+	} s;
+	struct cvmx_bgxx_smux_rx_ctl_s cn73xx;
+	struct cvmx_bgxx_smux_rx_ctl_s cn78xx;
+	struct cvmx_bgxx_smux_rx_ctl_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_ctl cvmx_bgxx_smux_rx_ctl_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_decision
+ *
+ * This register specifies the byte count used to determine when to accept or to filter a packet.
+ * As each byte in a packet is received by BGX, the L2 byte count (i.e. the number of bytes from
+ * the beginning of the L2 header (DMAC)) is compared against CNT. In normal operation, the L2
+ * header begins after the PREAMBLE + SFD (BGX()_SMU()_RX_FRM_CTL[PRE_CHK] = 1) and any
+ * optional UDD skip data (BGX()_SMU()_RX_UDD_SKP[LEN]).
+ */
+union cvmx_bgxx_smux_rx_decision {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_decision_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_bgxx_smux_rx_decision_s cn73xx;
+	struct cvmx_bgxx_smux_rx_decision_s cn78xx;
+	struct cvmx_bgxx_smux_rx_decision_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_decision_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_decision cvmx_bgxx_smux_rx_decision_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_frm_chk
+ *
+ * These CSRs provide the enable bits for a subset of the errors that are
+ * passed (encoded) to CMR.
+ *
+ */
+union cvmx_bgxx_smux_rx_frm_chk {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_frm_chk_s {
+		u64 reserved_9_63 : 55;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_6_6 : 1;
+		u64 fcserr_c : 1;
+		u64 fcserr_d : 1;
+		u64 jabber : 1;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_bgxx_smux_rx_frm_chk_s cn73xx;
+	struct cvmx_bgxx_smux_rx_frm_chk_s cn78xx;
+	struct cvmx_bgxx_smux_rx_frm_chk_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_frm_chk_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_frm_chk cvmx_bgxx_smux_rx_frm_chk_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_frm_ctl
+ *
+ * This register controls the handling of incoming frames.
+ * The [CTL_BCK] and [CTL_DRP] bits control how the hardware handles incoming
+ * PAUSE packets. The most common modes of operation are:
+ * _ [CTL_BCK] = 1, [CTL_DRP] = 1: hardware handles everything.
+ * _ [CTL_BCK] = 0, [CTL_DRP] = 0: software sees all PAUSE frames.
+ * _ [CTL_BCK] = 0, [CTL_DRP] = 1: all PAUSE frames are completely ignored.
+ *
+ * In half-duplex mode, these control bits should be set to [CTL_BCK] = 0,
+ * [CTL_DRP] = 0. Since PAUSE packets only apply to full-duplex operation, any
+ * PAUSE packet would constitute an exception that should be handled by the
+ * processing cores. PAUSE packets should not be forwarded.
+ */
+union cvmx_bgxx_smux_rx_frm_ctl {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_frm_ctl_s {
+		u64 reserved_13_63 : 51;
+		u64 ptp_mode : 1;
+		u64 reserved_6_11 : 6;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} s;
+	struct cvmx_bgxx_smux_rx_frm_ctl_s cn73xx;
+	struct cvmx_bgxx_smux_rx_frm_ctl_s cn78xx;
+	struct cvmx_bgxx_smux_rx_frm_ctl_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_frm_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_frm_ctl cvmx_bgxx_smux_rx_frm_ctl_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_int
+ *
+ * SMU Interrupt Register.
+ *
+ */
+union cvmx_bgxx_smux_rx_int {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_int_s {
+		u64 reserved_12_63 : 52;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+	} s;
+	struct cvmx_bgxx_smux_rx_int_s cn73xx;
+	struct cvmx_bgxx_smux_rx_int_s cn78xx;
+	struct cvmx_bgxx_smux_rx_int_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_int cvmx_bgxx_smux_rx_int_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_jabber
+ *
+ * This register specifies the maximum size for packets, beyond which the SMU truncates. In
+ * XAUI/RXAUI mode, port 0 is used for checking.
+ */
+union cvmx_bgxx_smux_rx_jabber {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_jabber_s {
+		u64 reserved_16_63 : 48;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_bgxx_smux_rx_jabber_s cn73xx;
+	struct cvmx_bgxx_smux_rx_jabber_s cn78xx;
+	struct cvmx_bgxx_smux_rx_jabber_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_jabber_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_jabber cvmx_bgxx_smux_rx_jabber_t;
+
+/**
+ * cvmx_bgx#_smu#_rx_udd_skp
+ *
+ * This register specifies the amount of user-defined data (UDD) added before the start of the
+ * L2C data.
+ */
+union cvmx_bgxx_smux_rx_udd_skp {
+	u64 u64;
+	struct cvmx_bgxx_smux_rx_udd_skp_s {
+		u64 reserved_9_63 : 55;
+		u64 fcssel : 1;
+		u64 reserved_7_7 : 1;
+		u64 len : 7;
+	} s;
+	struct cvmx_bgxx_smux_rx_udd_skp_s cn73xx;
+	struct cvmx_bgxx_smux_rx_udd_skp_s cn78xx;
+	struct cvmx_bgxx_smux_rx_udd_skp_s cn78xxp1;
+	struct cvmx_bgxx_smux_rx_udd_skp_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_rx_udd_skp cvmx_bgxx_smux_rx_udd_skp_t;
+
+/**
+ * cvmx_bgx#_smu#_smac
+ */
+union cvmx_bgxx_smux_smac {
+	u64 u64;
+	struct cvmx_bgxx_smux_smac_s {
+		u64 reserved_48_63 : 16;
+		u64 smac : 48;
+	} s;
+	struct cvmx_bgxx_smux_smac_s cn73xx;
+	struct cvmx_bgxx_smux_smac_s cn78xx;
+	struct cvmx_bgxx_smux_smac_s cn78xxp1;
+	struct cvmx_bgxx_smux_smac_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_smac cvmx_bgxx_smux_smac_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_append
+ *
+ * For more details on the interactions between FCS and PAD, see also the description of
+ * BGX()_SMU()_TX_MIN_PKT[MIN_SIZE].
+ */
+union cvmx_bgxx_smux_tx_append {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_append_s {
+		u64 reserved_4_63 : 60;
+		u64 fcs_c : 1;
+		u64 fcs_d : 1;
+		u64 pad : 1;
+		u64 preamble : 1;
+	} s;
+	struct cvmx_bgxx_smux_tx_append_s cn73xx;
+	struct cvmx_bgxx_smux_tx_append_s cn78xx;
+	struct cvmx_bgxx_smux_tx_append_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_append_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_append cvmx_bgxx_smux_tx_append_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_ctl
+ */
+union cvmx_bgxx_smux_tx_ctl {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_ctl_s {
+		u64 reserved_31_63 : 33;
+		u64 spu_mrk_cnt : 20;
+		u64 hg_pause_hgi : 2;
+		u64 hg_en : 1;
+		u64 l2p_bp_conv : 1;
+		u64 ls_byp : 1;
+		u64 ls : 2;
+		u64 reserved_3_3 : 1;
+		u64 x4a_dis : 1;
+		u64 uni_en : 1;
+		u64 dic_en : 1;
+	} s;
+	struct cvmx_bgxx_smux_tx_ctl_s cn73xx;
+	struct cvmx_bgxx_smux_tx_ctl_s cn78xx;
+	struct cvmx_bgxx_smux_tx_ctl_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_ctl cvmx_bgxx_smux_tx_ctl_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_ifg
+ *
+ * Programming IFG1 and IFG2:
+ * * For XAUI/RXAUI/10Gbs/40Gbs systems that require IEEE 802.3 compatibility, the IFG1+IFG2 sum
+ * must be 12.
+ * * In loopback mode, the IFG1+IFG2 of the local and remote parties must
+ * match exactly; otherwise one of the two sides' loopback FIFOs will overrun:
+ * BGX()_SMU()_TX_INT[LB_OVRFLW].
+ */
+union cvmx_bgxx_smux_tx_ifg {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_ifg_s {
+		u64 reserved_8_63 : 56;
+		u64 ifg2 : 4;
+		u64 ifg1 : 4;
+	} s;
+	struct cvmx_bgxx_smux_tx_ifg_s cn73xx;
+	struct cvmx_bgxx_smux_tx_ifg_s cn78xx;
+	struct cvmx_bgxx_smux_tx_ifg_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_ifg_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_ifg cvmx_bgxx_smux_tx_ifg_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_int
+ */
+union cvmx_bgxx_smux_tx_int {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_int_s {
+		u64 reserved_5_63 : 59;
+		u64 lb_ovrflw : 1;
+		u64 lb_undflw : 1;
+		u64 fake_commit : 1;
+		u64 xchange : 1;
+		u64 undflw : 1;
+	} s;
+	struct cvmx_bgxx_smux_tx_int_s cn73xx;
+	struct cvmx_bgxx_smux_tx_int_s cn78xx;
+	struct cvmx_bgxx_smux_tx_int_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_int cvmx_bgxx_smux_tx_int_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_min_pkt
+ */
+union cvmx_bgxx_smux_tx_min_pkt {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_min_pkt_s {
+		u64 reserved_8_63 : 56;
+		u64 min_size : 8;
+	} s;
+	struct cvmx_bgxx_smux_tx_min_pkt_s cn73xx;
+	struct cvmx_bgxx_smux_tx_min_pkt_s cn78xx;
+	struct cvmx_bgxx_smux_tx_min_pkt_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_min_pkt_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_min_pkt cvmx_bgxx_smux_tx_min_pkt_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_pkt_dmac
+ *
+ * This register provides the DMAC value that is placed in outbound PAUSE packets.
+ *
+ */
+union cvmx_bgxx_smux_tx_pause_pkt_dmac {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_pkt_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 dmac : 48;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_pkt_dmac_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_dmac_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_dmac_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_pkt_dmac_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_pkt_dmac cvmx_bgxx_smux_tx_pause_pkt_dmac_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_pkt_interval
+ *
+ * This register specifies how often PAUSE packets are sent.
+ *
+ */
+union cvmx_bgxx_smux_tx_pause_pkt_interval {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_pkt_interval_s {
+		u64 reserved_33_63 : 31;
+		u64 hg2_intra_en : 1;
+		u64 hg2_intra_interval : 16;
+		u64 interval : 16;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_pkt_interval_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_interval_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_interval_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_pkt_interval_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_pkt_interval cvmx_bgxx_smux_tx_pause_pkt_interval_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_pkt_time
+ */
+union cvmx_bgxx_smux_tx_pause_pkt_time {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_pkt_time_s {
+		u64 reserved_16_63 : 48;
+		u64 p_time : 16;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_pkt_time_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_time_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_time_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_pkt_time_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_pkt_time cvmx_bgxx_smux_tx_pause_pkt_time_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_pkt_type
+ *
+ * This register provides the P_TYPE field that is placed in outbound PAUSE packets.
+ *
+ */
+union cvmx_bgxx_smux_tx_pause_pkt_type {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_pkt_type_s {
+		u64 reserved_16_63 : 48;
+		u64 p_type : 16;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_pkt_type_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_type_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_pkt_type_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_pkt_type_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_pkt_type cvmx_bgxx_smux_tx_pause_pkt_type_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_togo
+ */
+union cvmx_bgxx_smux_tx_pause_togo {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_togo_s {
+		u64 reserved_32_63 : 32;
+		u64 msg_time : 16;
+		u64 p_time : 16;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_togo_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_togo_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_togo_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_togo_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_togo cvmx_bgxx_smux_tx_pause_togo_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_pause_zero
+ */
+union cvmx_bgxx_smux_tx_pause_zero {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_pause_zero_s {
+		u64 reserved_1_63 : 63;
+		u64 send : 1;
+	} s;
+	struct cvmx_bgxx_smux_tx_pause_zero_s cn73xx;
+	struct cvmx_bgxx_smux_tx_pause_zero_s cn78xx;
+	struct cvmx_bgxx_smux_tx_pause_zero_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_pause_zero_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_pause_zero cvmx_bgxx_smux_tx_pause_zero_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_soft_pause
+ */
+union cvmx_bgxx_smux_tx_soft_pause {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_soft_pause_s {
+		u64 reserved_16_63 : 48;
+		u64 p_time : 16;
+	} s;
+	struct cvmx_bgxx_smux_tx_soft_pause_s cn73xx;
+	struct cvmx_bgxx_smux_tx_soft_pause_s cn78xx;
+	struct cvmx_bgxx_smux_tx_soft_pause_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_soft_pause_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_soft_pause cvmx_bgxx_smux_tx_soft_pause_t;
+
+/**
+ * cvmx_bgx#_smu#_tx_thresh
+ */
+union cvmx_bgxx_smux_tx_thresh {
+	u64 u64;
+	struct cvmx_bgxx_smux_tx_thresh_s {
+		u64 reserved_11_63 : 53;
+		u64 cnt : 11;
+	} s;
+	struct cvmx_bgxx_smux_tx_thresh_s cn73xx;
+	struct cvmx_bgxx_smux_tx_thresh_s cn78xx;
+	struct cvmx_bgxx_smux_tx_thresh_s cn78xxp1;
+	struct cvmx_bgxx_smux_tx_thresh_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_smux_tx_thresh cvmx_bgxx_smux_tx_thresh_t;
+
+/**
+ * cvmx_bgx#_spu#_an_adv
+ *
+ * Software programs this register with the contents of the AN-link code word base page to be
+ * transmitted during autonegotiation. (See IEEE 802.3 section 73.6 for details.) Any write
+ * operations to this register prior to completion of autonegotiation, as indicated by
+ * BGX()_SPU()_AN_STATUS[AN_COMPLETE], should be followed by a renegotiation in order for
+ * the new values to take effect. Renegotiation is initiated by setting
+ * BGX()_SPU()_AN_CONTROL[AN_RESTART]. Once autonegotiation has completed, software can
+ * examine this register along with BGX()_SPU()_AN_LP_BASE to determine the highest
+ * common denominator technology.
+ */
+union cvmx_bgxx_spux_an_adv {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_adv_s {
+		u64 reserved_48_63 : 16;
+		u64 fec_req : 1;
+		u64 fec_able : 1;
+		u64 arsv : 19;
+		u64 a100g_cr10 : 1;
+		u64 a40g_cr4 : 1;
+		u64 a40g_kr4 : 1;
+		u64 a10g_kr : 1;
+		u64 a10g_kx4 : 1;
+		u64 a1g_kx : 1;
+		u64 t : 5;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 rf : 1;
+		u64 xnp_able : 1;
+		u64 asm_dir : 1;
+		u64 pause : 1;
+		u64 e : 5;
+		u64 s : 5;
+	} s;
+	struct cvmx_bgxx_spux_an_adv_s cn73xx;
+	struct cvmx_bgxx_spux_an_adv_s cn78xx;
+	struct cvmx_bgxx_spux_an_adv_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_adv_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_adv cvmx_bgxx_spux_an_adv_t;
+
+/**
+ * cvmx_bgx#_spu#_an_bp_status
+ *
+ * The contents of this register are updated
+ * during autonegotiation and are valid when BGX()_SPU()_AN_STATUS[AN_COMPLETE] is set.
+ * At that time, one of the port type bits ([N100G_CR10], [N40G_CR4], [N40G_KR4], [N10G_KR],
+ * [N10G_KX4],
+ * [N1G_KX]) will be set depending on the AN priority resolution. If a BASE-R type is negotiated,
+ * then [FEC] will be set to indicate that FEC operation has been negotiated, and will be
+ * clear otherwise.
+ */
+union cvmx_bgxx_spux_an_bp_status {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_bp_status_s {
+		u64 reserved_9_63 : 55;
+		u64 n100g_cr10 : 1;
+		u64 reserved_7_7 : 1;
+		u64 n40g_cr4 : 1;
+		u64 n40g_kr4 : 1;
+		u64 fec : 1;
+		u64 n10g_kr : 1;
+		u64 n10g_kx4 : 1;
+		u64 n1g_kx : 1;
+		u64 bp_an_able : 1;
+	} s;
+	struct cvmx_bgxx_spux_an_bp_status_s cn73xx;
+	struct cvmx_bgxx_spux_an_bp_status_s cn78xx;
+	struct cvmx_bgxx_spux_an_bp_status_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_bp_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_bp_status cvmx_bgxx_spux_an_bp_status_t;
+
+/**
+ * cvmx_bgx#_spu#_an_control
+ */
+union cvmx_bgxx_spux_an_control {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_control_s {
+		u64 reserved_16_63 : 48;
+		u64 an_reset : 1;
+		u64 reserved_14_14 : 1;
+		u64 xnp_en : 1;
+		u64 an_en : 1;
+		u64 reserved_10_11 : 2;
+		u64 an_restart : 1;
+		u64 reserved_0_8 : 9;
+	} s;
+	struct cvmx_bgxx_spux_an_control_s cn73xx;
+	struct cvmx_bgxx_spux_an_control_s cn78xx;
+	struct cvmx_bgxx_spux_an_control_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_control cvmx_bgxx_spux_an_control_t;
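+
+/*
+ * Illustrative sketch, not part of the original header: the renegotiation
+ * flow described for BGX()_SPU()_AN_ADV above. Update the advertised
+ * abilities, then restart autonegotiation so the new base page takes effect.
+ * The helper name, the advertised ability chosen here and the macro argument
+ * order are assumptions.
+ */
+static inline void bgx_spu_an_readvertise(int bgx, int lmac)
+{
+	cvmx_bgxx_spux_an_adv_t adv;
+	cvmx_bgxx_spux_an_control_t ctl;
+
+	adv.u64 = csr_rd(CVMX_BGXX_SPUX_AN_ADV(lmac, bgx));
+	adv.s.a10g_kr = 1;	/* e.g. advertise 10GBASE-KR */
+	csr_wr(CVMX_BGXX_SPUX_AN_ADV(lmac, bgx), adv.u64);
+
+	/* Writes prior to AN completion only take effect after a restart */
+	ctl.u64 = csr_rd(CVMX_BGXX_SPUX_AN_CONTROL(lmac, bgx));
+	ctl.s.an_restart = 1;
+	csr_wr(CVMX_BGXX_SPUX_AN_CONTROL(lmac, bgx), ctl.u64);
+}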
+
+/**
+ * cvmx_bgx#_spu#_an_lp_base
+ *
+ * This register captures the contents of the latest AN link code word base page received from
+ * the link partner during autonegotiation. (See IEEE 802.3 section 73.6 for details.)
+ * BGX()_SPU()_AN_STATUS[PAGE_RX] is set when this register is updated by hardware.
+ */
+union cvmx_bgxx_spux_an_lp_base {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_lp_base_s {
+		u64 reserved_48_63 : 16;
+		u64 fec_req : 1;
+		u64 fec_able : 1;
+		u64 arsv : 19;
+		u64 a100g_cr10 : 1;
+		u64 a40g_cr4 : 1;
+		u64 a40g_kr4 : 1;
+		u64 a10g_kr : 1;
+		u64 a10g_kx4 : 1;
+		u64 a1g_kx : 1;
+		u64 t : 5;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 rf : 1;
+		u64 xnp_able : 1;
+		u64 asm_dir : 1;
+		u64 pause : 1;
+		u64 e : 5;
+		u64 s : 5;
+	} s;
+	struct cvmx_bgxx_spux_an_lp_base_s cn73xx;
+	struct cvmx_bgxx_spux_an_lp_base_s cn78xx;
+	struct cvmx_bgxx_spux_an_lp_base_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_lp_base_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_lp_base cvmx_bgxx_spux_an_lp_base_t;
+
+/**
+ * cvmx_bgx#_spu#_an_lp_xnp
+ *
+ * This register captures the contents of the latest next page code word received from the link
+ * partner during autonegotiation, if any. See IEEE 802.3 section 73.7.7 for
+ * details.
+ */
+union cvmx_bgxx_spux_an_lp_xnp {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_lp_xnp_s {
+		u64 reserved_48_63 : 16;
+		u64 u : 32;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 mp : 1;
+		u64 ack2 : 1;
+		u64 toggle : 1;
+		u64 m_u : 11;
+	} s;
+	struct cvmx_bgxx_spux_an_lp_xnp_s cn73xx;
+	struct cvmx_bgxx_spux_an_lp_xnp_s cn78xx;
+	struct cvmx_bgxx_spux_an_lp_xnp_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_lp_xnp_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_lp_xnp cvmx_bgxx_spux_an_lp_xnp_t;
+
+/**
+ * cvmx_bgx#_spu#_an_status
+ */
+union cvmx_bgxx_spux_an_status {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_status_s {
+		u64 reserved_10_63 : 54;
+		u64 prl_flt : 1;
+		u64 reserved_8_8 : 1;
+		u64 xnp_stat : 1;
+		u64 page_rx : 1;
+		u64 an_complete : 1;
+		u64 rmt_flt : 1;
+		u64 an_able : 1;
+		u64 link_status : 1;
+		u64 reserved_1_1 : 1;
+		u64 lp_an_able : 1;
+	} s;
+	struct cvmx_bgxx_spux_an_status_s cn73xx;
+	struct cvmx_bgxx_spux_an_status_s cn78xx;
+	struct cvmx_bgxx_spux_an_status_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_status cvmx_bgxx_spux_an_status_t;
+
+/**
+ * cvmx_bgx#_spu#_an_xnp_tx
+ *
+ * Software programs this register with the contents of the AN message next page or unformatted
+ * next page link code word to be transmitted during autonegotiation. Next page exchange occurs
+ * after the base link code words have been exchanged if either end of the link segment sets the
+ * NP bit to 1, indicating that it has at least one next page to send. Once initiated, next page
+ * exchange continues until both ends of the link segment set their NP bits
+ * to 0. See IEEE 802.3 section 73.7.7 for details.
+ */
+union cvmx_bgxx_spux_an_xnp_tx {
+	u64 u64;
+	struct cvmx_bgxx_spux_an_xnp_tx_s {
+		u64 reserved_48_63 : 16;
+		u64 u : 32;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 mp : 1;
+		u64 ack2 : 1;
+		u64 toggle : 1;
+		u64 m_u : 11;
+	} s;
+	struct cvmx_bgxx_spux_an_xnp_tx_s cn73xx;
+	struct cvmx_bgxx_spux_an_xnp_tx_s cn78xx;
+	struct cvmx_bgxx_spux_an_xnp_tx_s cn78xxp1;
+	struct cvmx_bgxx_spux_an_xnp_tx_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_an_xnp_tx cvmx_bgxx_spux_an_xnp_tx_t;
+
+/**
+ * cvmx_bgx#_spu#_br_algn_status
+ *
+ * This register implements the IEEE 802.3 multilane BASE-R PCS alignment status 1-4 registers
+ * (3.50-3.53). It is valid only when the LPCS type is 40GBASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x4), and always returns 0x0 for all other LPCS
+ * types. IEEE 802.3 bits that are not applicable to 40GBASE-R (e.g. status bits for PCS lanes
+ * 19-4) are not implemented and marked as reserved. PCS lanes 3-0 are valid and are mapped to
+ * physical SerDes lanes based on the programming of BGX()_CMR()_CONFIG[LANE_TO_SDS].
+ */
+union cvmx_bgxx_spux_br_algn_status {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_algn_status_s {
+		u64 reserved_36_63 : 28;
+		u64 marker_lock : 4;
+		u64 reserved_13_31 : 19;
+		u64 alignd : 1;
+		u64 reserved_4_11 : 8;
+		u64 block_lock : 4;
+	} s;
+	struct cvmx_bgxx_spux_br_algn_status_s cn73xx;
+	struct cvmx_bgxx_spux_br_algn_status_s cn78xx;
+	struct cvmx_bgxx_spux_br_algn_status_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_algn_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_algn_status cvmx_bgxx_spux_br_algn_status_t;
+
+/**
+ * cvmx_bgx#_spu#_br_bip_err_cnt
+ *
+ * This register implements the IEEE 802.3 BIP error-counter registers for PCS lanes 0-3
+ * (3.200-3.203). It is valid only when the LPCS type is 40GBASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x4), and always returns 0x0 for all other LPCS
+ * types. The counters are indexed by the RX PCS lane number based on the Alignment Marker
+ * detected on each lane and captured in BGX()_SPU()_BR_LANE_MAP. Each counter counts the
+ * BIP errors for its PCS lane, and is held at all ones in case of overflow. The counters are
+ * reset to all 0s when this register is read by software.
+ *
+ * The reset operation takes precedence over the increment operation; if the register is read on
+ * the same clock cycle as an increment operation, the counter is reset to all 0s and the
+ * increment operation is lost. The counters are writable for test purposes, rather than read-
+ * only as specified in IEEE 802.3.
+ */
+union cvmx_bgxx_spux_br_bip_err_cnt {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_bip_err_cnt_s {
+		u64 bip_err_cnt_ln3 : 16;
+		u64 bip_err_cnt_ln2 : 16;
+		u64 bip_err_cnt_ln1 : 16;
+		u64 bip_err_cnt_ln0 : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_bip_err_cnt_s cn73xx;
+	struct cvmx_bgxx_spux_br_bip_err_cnt_s cn78xx;
+	struct cvmx_bgxx_spux_br_bip_err_cnt_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_bip_err_cnt_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_bip_err_cnt cvmx_bgxx_spux_br_bip_err_cnt_t;
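+
+/*
+ * Illustrative sketch, not part of the original header: reading the per-lane
+ * BIP error counters. A single read returns all four lanes and, as described
+ * above, clears the counters, so the register is sampled exactly once. The
+ * helper name and macro argument order are assumptions.
+ */
+static inline void bgx_spu_bip_errs(int bgx, int lmac, u16 cnt[4])
+{
+	cvmx_bgxx_spux_br_bip_err_cnt_t bip;
+
+	/* Reading clears the counters; do not read the register twice */
+	bip.u64 = csr_rd(CVMX_BGXX_SPUX_BR_BIP_ERR_CNT(lmac, bgx));
+	cnt[0] = bip.s.bip_err_cnt_ln0;
+	cnt[1] = bip.s.bip_err_cnt_ln1;
+	cnt[2] = bip.s.bip_err_cnt_ln2;
+	cnt[3] = bip.s.bip_err_cnt_ln3;
+}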
+
+/**
+ * cvmx_bgx#_spu#_br_lane_map
+ *
+ * This register implements the IEEE 802.3 lane 0-3 mapping registers (3.400-3.403). It is valid
+ * only when the LPCS type is 40GBASE-R (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x4), and always
+ * returns 0x0 for all other LPCS types. The LNx_MAPPING field for each programmed PCS lane
+ * (called service interface in 802.3ba-2010) is valid when that lane has achieved alignment
+ * marker lock on the receive side (i.e. the associated
+ * BGX()_SPU()_BR_ALGN_STATUS[MARKER_LOCK] = 1), and is invalid otherwise. When valid, it
+ * returns the actual detected receive PCS lane number based on the received alignment marker
+ * contents received on that service interface.
+ *
+ * The mapping is flexible because IEEE 802.3 allows multilane BASE-R receive lanes to be re-
+ * ordered. Note that for the transmit side, each PCS lane is mapped to a physical SerDes lane
+ * based on the programming of BGX()_CMR()_CONFIG[LANE_TO_SDS]. For the receive side,
+ * BGX()_CMR()_CONFIG[LANE_TO_SDS] specifies the service interface to physical SerDes
+ * lane mapping, and this register specifies the service interface to PCS lane mapping.
+ */
+union cvmx_bgxx_spux_br_lane_map {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_lane_map_s {
+		u64 reserved_54_63 : 10;
+		u64 ln3_mapping : 6;
+		u64 reserved_38_47 : 10;
+		u64 ln2_mapping : 6;
+		u64 reserved_22_31 : 10;
+		u64 ln1_mapping : 6;
+		u64 reserved_6_15 : 10;
+		u64 ln0_mapping : 6;
+	} s;
+	struct cvmx_bgxx_spux_br_lane_map_s cn73xx;
+	struct cvmx_bgxx_spux_br_lane_map_s cn78xx;
+	struct cvmx_bgxx_spux_br_lane_map_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_lane_map_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_lane_map cvmx_bgxx_spux_br_lane_map_t;
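+
+/*
+ * Illustrative sketch, not part of the original header: LNx_MAPPING is only
+ * meaningful once the corresponding lane has alignment-marker lock, so gate
+ * the lookup on BGX()_SPU()_BR_ALGN_STATUS[MARKER_LOCK] as described above.
+ * The helper name and macro argument order are assumptions; returns -1 while
+ * the lane is not yet locked.
+ */
+static inline int bgx_spu_rx_pcs_lane(int bgx, int lmac, int service_if)
+{
+	cvmx_bgxx_spux_br_algn_status_t algn;
+	cvmx_bgxx_spux_br_lane_map_t map;
+
+	algn.u64 = csr_rd(CVMX_BGXX_SPUX_BR_ALGN_STATUS(lmac, bgx));
+	if (!(algn.s.marker_lock & (1 << service_if)))
+		return -1;
+
+	map.u64 = csr_rd(CVMX_BGXX_SPUX_BR_LANE_MAP(lmac, bgx));
+	switch (service_if) {
+	case 0:
+		return map.s.ln0_mapping;
+	case 1:
+		return map.s.ln1_mapping;
+	case 2:
+		return map.s.ln2_mapping;
+	default:
+		return map.s.ln3_mapping;
+	}
+}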
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_control
+ */
+union cvmx_bgxx_spux_br_pmd_control {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_control_s {
+		u64 reserved_2_63 : 62;
+		u64 train_en : 1;
+		u64 train_restart : 1;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_control_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_control_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_control_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_control cvmx_bgxx_spux_br_pmd_control_t;
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_ld_cup
+ *
+ * This register implements 802.3 MDIO register 1.153 for 10GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 10G_R)
+ * and MDIO registers 1.1300-1.1303 for 40GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 40G_R). It is automatically cleared at the start of training.
+ * When link training
+ * is in progress, each field reflects the contents of the coefficient update field in the
+ * associated lane's outgoing training frame. The fields in this register are read/write even
+ * though they are specified as read-only in 802.3.
+ *
+ * If BGX()_SPU_DBG_CONTROL[BR_PMD_TRAIN_SOFT_EN] is set, then this register must be updated
+ * by software during link training and hardware updates are disabled. If
+ * BGX()_SPU_DBG_CONTROL[BR_PMD_TRAIN_SOFT_EN] is clear, this register is automatically
+ * updated by hardware, and it should not be written by software. The lane fields in this
+ * register are indexed by logical PCS lane ID.
+ *
+ * The lane 0 field (LN0_*) is valid for both
+ * 10GBASE-R and 40GBASE-R. The remaining fields (LN1_*, LN2_*, LN3_*) are only valid for
+ * 40GBASE-R.
+ */
+union cvmx_bgxx_spux_br_pmd_ld_cup {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_ld_cup_s {
+		u64 ln3_cup : 16;
+		u64 ln2_cup : 16;
+		u64 ln1_cup : 16;
+		u64 ln0_cup : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_ld_cup_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_ld_cup_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_ld_cup_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_ld_cup_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_ld_cup cvmx_bgxx_spux_br_pmd_ld_cup_t;
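+
+/*
+ * Illustrative sketch, not part of the original header: a software-driven
+ * link-training update as described above. Only valid when
+ * BGX()_SPU_DBG_CONTROL[BR_PMD_TRAIN_SOFT_EN] is set; with hardware training
+ * this register must not be written. The helper name and macro argument order
+ * are assumptions.
+ */
+static inline void bgx_spu_train_ld_cup_ln0(int bgx, int lmac, u16 cup)
+{
+	cvmx_bgxx_spux_br_pmd_ld_cup_t ld_cup;
+
+	ld_cup.u64 = csr_rd(CVMX_BGXX_SPUX_BR_PMD_LD_CUP(lmac, bgx));
+	ld_cup.s.ln0_cup = cup;	/* lane 0: valid for 10G and 40GBASE-R */
+	csr_wr(CVMX_BGXX_SPUX_BR_PMD_LD_CUP(lmac, bgx), ld_cup.u64);
+}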
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_ld_rep
+ *
+ * This register implements 802.3 MDIO register 1.154 for 10GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 10G_R) and MDIO registers 1.1400-1.1403 for 40GBASE-R
+ * (when BGX()_CMR()_CONFIG[LMAC_TYPE] = 40G_R). It is automatically cleared at the start of
+ * training. Each field
+ * reflects the contents of the status report field in the associated lane's outgoing training
+ * frame. The fields in this register are read/write even though they are specified as read-only
+ * in 802.3. If BGX()_SPU_DBG_CONTROL[BR_PMD_TRAIN_SOFT_EN] is set, then this register must
+ * be updated by software during link training and hardware updates are disabled. If
+ * BGX()_SPU_DBG_CONTROL[BR_PMD_TRAIN_SOFT_EN] is clear, this register is automatically
+ * updated by hardware, and it should not be written by software. The lane fields in this
+ * register are indexed by logical PCS lane ID.
+ *
+ * The lane 0 field (LN0_*) is valid for both
+ * 10GBASE-R and 40GBASE-R. The remaining fields (LN1_*, LN2_*, LN3_*) are only valid for
+ * 40GBASE-R.
+ */
+union cvmx_bgxx_spux_br_pmd_ld_rep {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_ld_rep_s {
+		u64 ln3_rep : 16;
+		u64 ln2_rep : 16;
+		u64 ln1_rep : 16;
+		u64 ln0_rep : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_ld_rep_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_ld_rep_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_ld_rep_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_ld_rep_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_ld_rep cvmx_bgxx_spux_br_pmd_ld_rep_t;
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_lp_cup
+ *
+ * This register implements 802.3 MDIO register 1.152 for 10GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 10G_R)
+ * and MDIO registers 1.1100-1.1103 for 40GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 40G_R). It is automatically cleared at the start of training.
+ * Each field reflects
+ * the contents of the coefficient update field in the lane's most recently received training
+ * frame. This register should not be written when link training is enabled, i.e. when
+ * BGX()_SPU()_BR_PMD_CONTROL[TRAIN_EN] is set. The lane fields in this register are indexed by
+ * logical PCS lane ID.
+ *
+ * The lane 0 field (LN0_*) is valid for both 10GBASE-R and 40GBASE-R. The remaining fields
+ * (LN1_*, LN2_*, LN3_*) are only valid for 40GBASE-R.
+ */
+union cvmx_bgxx_spux_br_pmd_lp_cup {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_lp_cup_s {
+		u64 ln3_cup : 16;
+		u64 ln2_cup : 16;
+		u64 ln1_cup : 16;
+		u64 ln0_cup : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_lp_cup_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_lp_cup_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_lp_cup_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_lp_cup_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_lp_cup cvmx_bgxx_spux_br_pmd_lp_cup_t;
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_lp_rep
+ *
+ * This register implements 802.3 MDIO register 1.153 for 10GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 10G_R)
+ * and MDIO registers 1.1200-1.1203 for 40GBASE-R (when
+ * BGX()_CMR()_CONFIG[LMAC_TYPE] = 40G_R). It is automatically cleared at the start of training.
+ * Each field reflects
+ * the contents of the status report field in the associated lane's most recently received
+ * training frame. The lane fields in this register are indexed by logical PCS lane ID.
+ *
+ * The lane
+ * 0 field (LN0_*) is valid for both 10GBASE-R and 40GBASE-R. The remaining fields (LN1_*, LN2_*,
+ * LN3_*) are only valid for 40GBASE-R.
+ */
+union cvmx_bgxx_spux_br_pmd_lp_rep {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_lp_rep_s {
+		u64 ln3_rep : 16;
+		u64 ln2_rep : 16;
+		u64 ln1_rep : 16;
+		u64 ln0_rep : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_lp_rep_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_lp_rep_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_lp_rep_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_lp_rep_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_lp_rep cvmx_bgxx_spux_br_pmd_lp_rep_t;
+
+/**
+ * cvmx_bgx#_spu#_br_pmd_status
+ *
+ * The lane fields in this register are indexed by logical PCS lane ID. The lane 0 field (LN0_*)
+ * is valid for both 10GBASE-R and 40GBASE-R. The remaining fields (LN1_*, LN2_*, LN3_*) are only
+ * valid for 40GBASE-R.
+ */
+union cvmx_bgxx_spux_br_pmd_status {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_pmd_status_s {
+		u64 reserved_16_63 : 48;
+		u64 ln3_train_status : 4;
+		u64 ln2_train_status : 4;
+		u64 ln1_train_status : 4;
+		u64 ln0_train_status : 4;
+	} s;
+	struct cvmx_bgxx_spux_br_pmd_status_s cn73xx;
+	struct cvmx_bgxx_spux_br_pmd_status_s cn78xx;
+	struct cvmx_bgxx_spux_br_pmd_status_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_pmd_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_pmd_status cvmx_bgxx_spux_br_pmd_status_t;
+
+/**
+ * cvmx_bgx#_spu#_br_status1
+ */
+union cvmx_bgxx_spux_br_status1 {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_status1_s {
+		u64 reserved_13_63 : 51;
+		u64 rcv_lnk : 1;
+		u64 reserved_4_11 : 8;
+		u64 prbs9 : 1;
+		u64 prbs31 : 1;
+		u64 hi_ber : 1;
+		u64 blk_lock : 1;
+	} s;
+	struct cvmx_bgxx_spux_br_status1_s cn73xx;
+	struct cvmx_bgxx_spux_br_status1_s cn78xx;
+	struct cvmx_bgxx_spux_br_status1_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_status1_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_status1 cvmx_bgxx_spux_br_status1_t;
+
+/**
+ * cvmx_bgx#_spu#_br_status2
+ *
+ * This register implements a combination of the following IEEE 802.3 registers:
+ * * BASE-R PCS status 2 (MDIO address 3.33).
+ * * BASE-R BER high-order counter (MDIO address 3.44).
+ * * Errored-blocks high-order counter (MDIO address 3.45).
+ *
+ * Note that the relative locations of some fields have been moved from IEEE 802.3 in order to
+ * make the register layout more software-friendly: the BER counter high-order and low-order bits
+ * from sections 3.44 and 3.33 have been combined into the contiguous, 22-bit [BER_CNT] field;
+ * likewise, the errored-blocks counter high-order and low-order bits from section 3.45 have been
+ * combined into the contiguous, 22-bit [ERR_BLKS] field.
+ */
+union cvmx_bgxx_spux_br_status2 {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_status2_s {
+		u64 reserved_62_63 : 2;
+		u64 err_blks : 22;
+		u64 reserved_38_39 : 2;
+		u64 ber_cnt : 22;
+		u64 latched_lock : 1;
+		u64 latched_ber : 1;
+		u64 reserved_0_13 : 14;
+	} s;
+	struct cvmx_bgxx_spux_br_status2_s cn73xx;
+	struct cvmx_bgxx_spux_br_status2_s cn78xx;
+	struct cvmx_bgxx_spux_br_status2_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_status2_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_status2 cvmx_bgxx_spux_br_status2_t;
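+
+/*
+ * Worked example (editor's sketch, not from the original header): because
+ * the high- and low-order halves of the IEEE 802.3 counters are merged into
+ * the contiguous [BER_CNT] and [ERR_BLKS] fields above, software reads each
+ * counter in a single access instead of combining MDIO registers
+ * 3.33/3.44/3.45 by hand. csr_rd() and the CVMX_BGXX_SPUX_BR_STATUS2()
+ * address macro are assumed from elsewhere in this header set:
+ *
+ *   cvmx_bgxx_spux_br_status2_t st2;
+ *
+ *   st2.u64 = csr_rd(CVMX_BGXX_SPUX_BR_STATUS2(lmac, bgx));
+ *   u32 ber = st2.s.ber_cnt;	// full 22-bit BER counter
+ *   u32 blks = st2.s.err_blks;	// full 22-bit errored-blocks counter
+ */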
+
+/**
+ * cvmx_bgx#_spu#_br_tp_control
+ *
+ * Refer to the test pattern methodology described in 802.3 sections 49.2.8 and 82.2.10.
+ *
+ */
+union cvmx_bgxx_spux_br_tp_control {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_tp_control_s {
+		u64 reserved_8_63 : 56;
+		u64 scramble_tp : 1;
+		u64 prbs9_tx : 1;
+		u64 prbs31_rx : 1;
+		u64 prbs31_tx : 1;
+		u64 tx_tp_en : 1;
+		u64 rx_tp_en : 1;
+		u64 tp_sel : 1;
+		u64 dp_sel : 1;
+	} s;
+	struct cvmx_bgxx_spux_br_tp_control_s cn73xx;
+	struct cvmx_bgxx_spux_br_tp_control_s cn78xx;
+	struct cvmx_bgxx_spux_br_tp_control_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_tp_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_tp_control cvmx_bgxx_spux_br_tp_control_t;
+
+/**
+ * cvmx_bgx#_spu#_br_tp_err_cnt
+ *
+ * This register provides the BASE-R PCS test-pattern error counter.
+ *
+ */
+union cvmx_bgxx_spux_br_tp_err_cnt {
+	u64 u64;
+	struct cvmx_bgxx_spux_br_tp_err_cnt_s {
+		u64 reserved_16_63 : 48;
+		u64 err_cnt : 16;
+	} s;
+	struct cvmx_bgxx_spux_br_tp_err_cnt_s cn73xx;
+	struct cvmx_bgxx_spux_br_tp_err_cnt_s cn78xx;
+	struct cvmx_bgxx_spux_br_tp_err_cnt_s cn78xxp1;
+	struct cvmx_bgxx_spux_br_tp_err_cnt_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_br_tp_err_cnt cvmx_bgxx_spux_br_tp_err_cnt_t;
+
+/**
+ * cvmx_bgx#_spu#_bx_status
+ */
+union cvmx_bgxx_spux_bx_status {
+	u64 u64;
+	struct cvmx_bgxx_spux_bx_status_s {
+		u64 reserved_13_63 : 51;
+		u64 alignd : 1;
+		u64 pattst : 1;
+		u64 reserved_4_10 : 7;
+		u64 lsync : 4;
+	} s;
+	struct cvmx_bgxx_spux_bx_status_s cn73xx;
+	struct cvmx_bgxx_spux_bx_status_s cn78xx;
+	struct cvmx_bgxx_spux_bx_status_s cn78xxp1;
+	struct cvmx_bgxx_spux_bx_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_bx_status cvmx_bgxx_spux_bx_status_t;
+
+/**
+ * cvmx_bgx#_spu#_control1
+ */
+union cvmx_bgxx_spux_control1 {
+	u64 u64;
+	struct cvmx_bgxx_spux_control1_s {
+		u64 reserved_16_63 : 48;
+		u64 reset : 1;
+		u64 loopbck : 1;
+		u64 spdsel1 : 1;
+		u64 reserved_12_12 : 1;
+		u64 lo_pwr : 1;
+		u64 reserved_7_10 : 4;
+		u64 spdsel0 : 1;
+		u64 spd : 4;
+		u64 reserved_0_1 : 2;
+	} s;
+	struct cvmx_bgxx_spux_control1_s cn73xx;
+	struct cvmx_bgxx_spux_control1_s cn78xx;
+	struct cvmx_bgxx_spux_control1_s cn78xxp1;
+	struct cvmx_bgxx_spux_control1_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_control1 cvmx_bgxx_spux_control1_t;
+
+/**
+ * cvmx_bgx#_spu#_control2
+ */
+union cvmx_bgxx_spux_control2 {
+	u64 u64;
+	struct cvmx_bgxx_spux_control2_s {
+		u64 reserved_3_63 : 61;
+		u64 pcs_type : 3;
+	} s;
+	struct cvmx_bgxx_spux_control2_s cn73xx;
+	struct cvmx_bgxx_spux_control2_s cn78xx;
+	struct cvmx_bgxx_spux_control2_s cn78xxp1;
+	struct cvmx_bgxx_spux_control2_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_control2 cvmx_bgxx_spux_control2_t;
+
+/**
+ * cvmx_bgx#_spu#_fec_abil
+ */
+union cvmx_bgxx_spux_fec_abil {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_abil_s {
+		u64 reserved_2_63 : 62;
+		u64 err_abil : 1;
+		u64 fec_abil : 1;
+	} s;
+	struct cvmx_bgxx_spux_fec_abil_s cn73xx;
+	struct cvmx_bgxx_spux_fec_abil_s cn78xx;
+	struct cvmx_bgxx_spux_fec_abil_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_abil_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_abil cvmx_bgxx_spux_fec_abil_t;
+
+/**
+ * cvmx_bgx#_spu#_fec_control
+ */
+union cvmx_bgxx_spux_fec_control {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_control_s {
+		u64 reserved_2_63 : 62;
+		u64 err_en : 1;
+		u64 fec_en : 1;
+	} s;
+	struct cvmx_bgxx_spux_fec_control_s cn73xx;
+	struct cvmx_bgxx_spux_fec_control_s cn78xx;
+	struct cvmx_bgxx_spux_fec_control_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_control cvmx_bgxx_spux_fec_control_t;
+
+/**
+ * cvmx_bgx#_spu#_fec_corr_blks01
+ *
+ * This register is valid only when the LPCS type is BASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x3 or 0x4). The FEC corrected-block counters are
+ * defined in IEEE 802.3 section 74.8.4.1. Each corrected-blocks counter increments by 1 for a
+ * corrected FEC block, i.e. an FEC block that has been received with invalid parity on the
+ * associated PCS lane and has been corrected by the FEC decoder. The counter is reset to all 0s
+ * when the register is read, and held at all 1s in case of overflow.
+ *
+ * The reset operation takes precedence over the increment operation; if the register is read on
+ * the same clock cycle as an increment operation, the counter is reset to all 0s and the
+ * increment operation is lost. The counters are writable for test purposes, rather than read-
+ * only as specified in IEEE 802.3.
+ */
+union cvmx_bgxx_spux_fec_corr_blks01 {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_corr_blks01_s {
+		u64 ln1_corr_blks : 32;
+		u64 ln0_corr_blks : 32;
+	} s;
+	struct cvmx_bgxx_spux_fec_corr_blks01_s cn73xx;
+	struct cvmx_bgxx_spux_fec_corr_blks01_s cn78xx;
+	struct cvmx_bgxx_spux_fec_corr_blks01_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_corr_blks01_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_corr_blks01 cvmx_bgxx_spux_fec_corr_blks01_t;
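+
+/*
+ * Sketch (editor's illustration): since these counters clear on read and
+ * saturate at all ones, a poller would fold them into wider software
+ * counters and treat a saturated value as "at least this many". csr_rd()
+ * and the CVMX_BGXX_SPUX_FEC_CORR_BLKS01() address macro are assumed:
+ *
+ *   cvmx_bgxx_spux_fec_corr_blks01_t c;
+ *
+ *   c.u64 = csr_rd(CVMX_BGXX_SPUX_FEC_CORR_BLKS01(lmac, bgx));
+ *   total_ln0 += c.s.ln0_corr_blks;	// counter resets to 0 on this read
+ *   total_ln1 += c.s.ln1_corr_blks;
+ */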
+
+/**
+ * cvmx_bgx#_spu#_fec_corr_blks23
+ *
+ * This register is valid only when the LPCS type is 40GBASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x4). The FEC corrected-block counters are defined in
+ * IEEE 802.3 section 74.8.4.1. Each corrected-blocks counter increments by 1 for a corrected FEC
+ * block, i.e. an FEC block that has been received with invalid parity on the associated PCS lane
+ * and has been corrected by the FEC decoder. The counter is reset to all 0s when the register is
+ * read, and held at all 1s in case of overflow.
+ *
+ * The reset operation takes precedence over the increment operation; if the register is read on
+ * the same clock cycle as an increment operation, the counter is reset to all 0s and the
+ * increment operation is lost. The counters are writable for test purposes, rather than read-
+ * only as specified in IEEE 802.3.
+ */
+union cvmx_bgxx_spux_fec_corr_blks23 {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_corr_blks23_s {
+		u64 ln3_corr_blks : 32;
+		u64 ln2_corr_blks : 32;
+	} s;
+	struct cvmx_bgxx_spux_fec_corr_blks23_s cn73xx;
+	struct cvmx_bgxx_spux_fec_corr_blks23_s cn78xx;
+	struct cvmx_bgxx_spux_fec_corr_blks23_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_corr_blks23_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_corr_blks23 cvmx_bgxx_spux_fec_corr_blks23_t;
+
+/**
+ * cvmx_bgx#_spu#_fec_uncorr_blks01
+ *
+ * This register is valid only when the LPCS type is BASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x3 or 0x4). The FEC uncorrected-block counters are
+ * defined in IEEE 802.3 section 74.8.4.2. Each uncorrected-blocks counter increments by 1 for an
+ * uncorrected FEC block, i.e. an FEC block that has been received with invalid parity on the
+ * associated PCS lane and has not been corrected by the FEC decoder. The counter is reset to all
+ * 0s when the register is read, and held at all 1s in case of overflow.
+ *
+ * The reset operation takes precedence over the increment operation; if the register is read on
+ * the same clock cycle as an increment operation, the counter is reset to all 0s and the
+ * increment operation is lost. The counters are writable for test purposes, rather than read-
+ * only as specified in IEEE 802.3.
+ */
+union cvmx_bgxx_spux_fec_uncorr_blks01 {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_uncorr_blks01_s {
+		u64 ln1_uncorr_blks : 32;
+		u64 ln0_uncorr_blks : 32;
+	} s;
+	struct cvmx_bgxx_spux_fec_uncorr_blks01_s cn73xx;
+	struct cvmx_bgxx_spux_fec_uncorr_blks01_s cn78xx;
+	struct cvmx_bgxx_spux_fec_uncorr_blks01_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_uncorr_blks01_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_uncorr_blks01 cvmx_bgxx_spux_fec_uncorr_blks01_t;
+
+/**
+ * cvmx_bgx#_spu#_fec_uncorr_blks23
+ *
+ * This register is valid only when the LPCS type is 40GBASE-R
+ * (BGX()_CMR()_CONFIG[LMAC_TYPE] = 0x4). The FEC uncorrected-block counters are defined
+ * in IEEE 802.3 section 74.8.4.2. Each uncorrected-blocks counter increments by 1 for an
+ * uncorrected FEC block, i.e. an FEC block that has been received with invalid parity on the
+ * associated PCS lane and has not been corrected by the FEC decoder. The counter is reset to all
+ * 0s when the register is read, and held at all 1s in case of overflow.
+ *
+ * The reset operation takes precedence over the increment operation; if the register is read on
+ * the same clock cycle as an increment operation, the counter is reset to all 0s and the
+ * increment operation is lost. The counters are writable for test purposes, rather than read-
+ * only as specified in IEEE 802.3.
+ */
+union cvmx_bgxx_spux_fec_uncorr_blks23 {
+	u64 u64;
+	struct cvmx_bgxx_spux_fec_uncorr_blks23_s {
+		u64 ln3_uncorr_blks : 32;
+		u64 ln2_uncorr_blks : 32;
+	} s;
+	struct cvmx_bgxx_spux_fec_uncorr_blks23_s cn73xx;
+	struct cvmx_bgxx_spux_fec_uncorr_blks23_s cn78xx;
+	struct cvmx_bgxx_spux_fec_uncorr_blks23_s cn78xxp1;
+	struct cvmx_bgxx_spux_fec_uncorr_blks23_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_fec_uncorr_blks23 cvmx_bgxx_spux_fec_uncorr_blks23_t;
+
+/**
+ * cvmx_bgx#_spu#_int
+ */
+union cvmx_bgxx_spux_int {
+	u64 u64;
+	struct cvmx_bgxx_spux_int_s {
+		u64 reserved_15_63 : 49;
+		u64 training_failure : 1;
+		u64 training_done : 1;
+		u64 an_complete : 1;
+		u64 an_link_good : 1;
+		u64 an_page_rx : 1;
+		u64 fec_uncorr : 1;
+		u64 fec_corr : 1;
+		u64 bip_err : 1;
+		u64 dbg_sync : 1;
+		u64 algnlos : 1;
+		u64 synlos : 1;
+		u64 bitlckls : 1;
+		u64 err_blk : 1;
+		u64 rx_link_down : 1;
+		u64 rx_link_up : 1;
+	} s;
+	struct cvmx_bgxx_spux_int_s cn73xx;
+	struct cvmx_bgxx_spux_int_s cn78xx;
+	struct cvmx_bgxx_spux_int_s cn78xxp1;
+	struct cvmx_bgxx_spux_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_int cvmx_bgxx_spux_int_t;
+
+/**
+ * cvmx_bgx#_spu#_lpcs_states
+ */
+union cvmx_bgxx_spux_lpcs_states {
+	u64 u64;
+	struct cvmx_bgxx_spux_lpcs_states_s {
+		u64 reserved_15_63 : 49;
+		u64 br_rx_sm : 3;
+		u64 reserved_10_11 : 2;
+		u64 bx_rx_sm : 2;
+		u64 deskew_am_found : 4;
+		u64 reserved_3_3 : 1;
+		u64 deskew_sm : 3;
+	} s;
+	struct cvmx_bgxx_spux_lpcs_states_s cn73xx;
+	struct cvmx_bgxx_spux_lpcs_states_s cn78xx;
+	struct cvmx_bgxx_spux_lpcs_states_s cn78xxp1;
+	struct cvmx_bgxx_spux_lpcs_states_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_lpcs_states cvmx_bgxx_spux_lpcs_states_t;
+
+/**
+ * cvmx_bgx#_spu#_misc_control
+ *
+ * "* RX logical PCS lane polarity vector <3:0> = [XOR_RXPLRT]<3:0> ^ [4[[RXPLRT]]].
+ *  * TX logical PCS lane polarity vector <3:0> = [XOR_TXPLRT]<3:0> ^ [4[[TXPLRT]]].
+ *
+ *  In short, keep [RXPLRT] and [TXPLRT] cleared, and use [XOR_RXPLRT] and [XOR_TXPLRT] fields to
+ *  define
+ *  the polarity per logical PCS lane. Only bit 0 of vector is used for 10GBASE-R, and only bits
+ * - 1:0 of vector are used for RXAUI."
+ */
+union cvmx_bgxx_spux_misc_control {
+	u64 u64;
+	struct cvmx_bgxx_spux_misc_control_s {
+		u64 reserved_13_63 : 51;
+		u64 rx_packet_dis : 1;
+		u64 skip_after_term : 1;
+		u64 intlv_rdisp : 1;
+		u64 xor_rxplrt : 4;
+		u64 xor_txplrt : 4;
+		u64 rxplrt : 1;
+		u64 txplrt : 1;
+	} s;
+	struct cvmx_bgxx_spux_misc_control_s cn73xx;
+	struct cvmx_bgxx_spux_misc_control_s cn78xx;
+	struct cvmx_bgxx_spux_misc_control_s cn78xxp1;
+	struct cvmx_bgxx_spux_misc_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_misc_control cvmx_bgxx_spux_misc_control_t;
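+
+/*
+ * Worked example (editor's sketch): with [RXPLRT] = 0 as recommended above,
+ * the effective RX polarity vector is simply [XOR_RXPLRT]. To invert RX
+ * polarity on logical PCS lanes 0 and 2 only (csr_rd()/csr_wr() and the
+ * CVMX_BGXX_SPUX_MISC_CONTROL() address macro are assumed):
+ *
+ *   cvmx_bgxx_spux_misc_control_t misc;
+ *
+ *   misc.u64 = csr_rd(CVMX_BGXX_SPUX_MISC_CONTROL(lmac, bgx));
+ *   misc.s.rxplrt = 0;
+ *   misc.s.xor_rxplrt = 0x5;	// vector<3:0> = 0b0101: lanes 0 and 2
+ *   csr_wr(CVMX_BGXX_SPUX_MISC_CONTROL(lmac, bgx), misc.u64);
+ */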
+
+/**
+ * cvmx_bgx#_spu#_spd_abil
+ */
+union cvmx_bgxx_spux_spd_abil {
+	u64 u64;
+	struct cvmx_bgxx_spux_spd_abil_s {
+		u64 reserved_4_63 : 60;
+		u64 hundredgb : 1;
+		u64 fortygb : 1;
+		u64 tenpasst : 1;
+		u64 tengb : 1;
+	} s;
+	struct cvmx_bgxx_spux_spd_abil_s cn73xx;
+	struct cvmx_bgxx_spux_spd_abil_s cn78xx;
+	struct cvmx_bgxx_spux_spd_abil_s cn78xxp1;
+	struct cvmx_bgxx_spux_spd_abil_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_spd_abil cvmx_bgxx_spux_spd_abil_t;
+
+/**
+ * cvmx_bgx#_spu#_status1
+ */
+union cvmx_bgxx_spux_status1 {
+	u64 u64;
+	struct cvmx_bgxx_spux_status1_s {
+		u64 reserved_8_63 : 56;
+		u64 flt : 1;
+		u64 reserved_3_6 : 4;
+		u64 rcv_lnk : 1;
+		u64 lpable : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_bgxx_spux_status1_s cn73xx;
+	struct cvmx_bgxx_spux_status1_s cn78xx;
+	struct cvmx_bgxx_spux_status1_s cn78xxp1;
+	struct cvmx_bgxx_spux_status1_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_status1 cvmx_bgxx_spux_status1_t;
+
+/**
+ * cvmx_bgx#_spu#_status2
+ */
+union cvmx_bgxx_spux_status2 {
+	u64 u64;
+	struct cvmx_bgxx_spux_status2_s {
+		u64 reserved_16_63 : 48;
+		u64 dev : 2;
+		u64 reserved_12_13 : 2;
+		u64 xmtflt : 1;
+		u64 rcvflt : 1;
+		u64 reserved_6_9 : 4;
+		u64 hundredgb_r : 1;
+		u64 fortygb_r : 1;
+		u64 tengb_t : 1;
+		u64 tengb_w : 1;
+		u64 tengb_x : 1;
+		u64 tengb_r : 1;
+	} s;
+	struct cvmx_bgxx_spux_status2_s cn73xx;
+	struct cvmx_bgxx_spux_status2_s cn78xx;
+	struct cvmx_bgxx_spux_status2_s cn78xxp1;
+	struct cvmx_bgxx_spux_status2_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spux_status2 cvmx_bgxx_spux_status2_t;
+
+/**
+ * cvmx_bgx#_spu_bist_status
+ *
+ * This register provides memory BIST status from the SPU receive buffer lane FIFOs.
+ *
+ */
+union cvmx_bgxx_spu_bist_status {
+	u64 u64;
+	struct cvmx_bgxx_spu_bist_status_s {
+		u64 reserved_4_63 : 60;
+		u64 rx_buf_bist_status : 4;
+	} s;
+	struct cvmx_bgxx_spu_bist_status_s cn73xx;
+	struct cvmx_bgxx_spu_bist_status_s cn78xx;
+	struct cvmx_bgxx_spu_bist_status_s cn78xxp1;
+	struct cvmx_bgxx_spu_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_bist_status cvmx_bgxx_spu_bist_status_t;
+
+/**
+ * cvmx_bgx#_spu_dbg_control
+ */
+union cvmx_bgxx_spu_dbg_control {
+	u64 u64;
+	struct cvmx_bgxx_spu_dbg_control_s {
+		u64 reserved_56_63 : 8;
+		u64 ms_clk_period : 12;
+		u64 us_clk_period : 12;
+		u64 reserved_31_31 : 1;
+		u64 br_ber_mon_dis : 1;
+		u64 an_nonce_match_dis : 1;
+		u64 timestamp_norm_dis : 1;
+		u64 rx_buf_flip_synd : 8;
+		u64 br_pmd_train_soft_en : 1;
+		u64 an_arb_link_chk_en : 1;
+		u64 rx_buf_cor_dis : 1;
+		u64 scramble_dis : 1;
+		u64 reserved_15_15 : 1;
+		u64 marker_rxp : 15;
+	} s;
+	struct cvmx_bgxx_spu_dbg_control_s cn73xx;
+	struct cvmx_bgxx_spu_dbg_control_s cn78xx;
+	struct cvmx_bgxx_spu_dbg_control_s cn78xxp1;
+	struct cvmx_bgxx_spu_dbg_control_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_dbg_control cvmx_bgxx_spu_dbg_control_t;
+
+/**
+ * cvmx_bgx#_spu_mem_int
+ */
+union cvmx_bgxx_spu_mem_int {
+	u64 u64;
+	struct cvmx_bgxx_spu_mem_int_s {
+		u64 reserved_8_63 : 56;
+		u64 rx_buf_sbe : 4;
+		u64 rx_buf_dbe : 4;
+	} s;
+	struct cvmx_bgxx_spu_mem_int_s cn73xx;
+	struct cvmx_bgxx_spu_mem_int_s cn78xx;
+	struct cvmx_bgxx_spu_mem_int_s cn78xxp1;
+	struct cvmx_bgxx_spu_mem_int_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_mem_int cvmx_bgxx_spu_mem_int_t;
+
+/**
+ * cvmx_bgx#_spu_mem_status
+ *
+ * This register provides memory ECC status from the SPU receive buffer lane FIFOs.
+ *
+ */
+union cvmx_bgxx_spu_mem_status {
+	u64 u64;
+	struct cvmx_bgxx_spu_mem_status_s {
+		u64 reserved_32_63 : 32;
+		u64 rx_buf_ecc_synd : 32;
+	} s;
+	struct cvmx_bgxx_spu_mem_status_s cn73xx;
+	struct cvmx_bgxx_spu_mem_status_s cn78xx;
+	struct cvmx_bgxx_spu_mem_status_s cn78xxp1;
+	struct cvmx_bgxx_spu_mem_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_mem_status cvmx_bgxx_spu_mem_status_t;
+
+/**
+ * cvmx_bgx#_spu_sds#_skew_status
+ *
+ * This register provides SerDes lane skew status. One register per physical SerDes lane.
+ *
+ */
+union cvmx_bgxx_spu_sdsx_skew_status {
+	u64 u64;
+	struct cvmx_bgxx_spu_sdsx_skew_status_s {
+		u64 reserved_32_63 : 32;
+		u64 skew_status : 32;
+	} s;
+	struct cvmx_bgxx_spu_sdsx_skew_status_s cn73xx;
+	struct cvmx_bgxx_spu_sdsx_skew_status_s cn78xx;
+	struct cvmx_bgxx_spu_sdsx_skew_status_s cn78xxp1;
+	struct cvmx_bgxx_spu_sdsx_skew_status_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_sdsx_skew_status cvmx_bgxx_spu_sdsx_skew_status_t;
+
+/**
+ * cvmx_bgx#_spu_sds#_states
+ *
+ * This register provides SerDes lane states. One register per physical SerDes lane.
+ *
+ */
+union cvmx_bgxx_spu_sdsx_states {
+	u64 u64;
+	struct cvmx_bgxx_spu_sdsx_states_s {
+		u64 reserved_52_63 : 12;
+		u64 am_lock_invld_cnt : 2;
+		u64 am_lock_sm : 2;
+		u64 reserved_45_47 : 3;
+		u64 train_sm : 3;
+		u64 train_code_viol : 1;
+		u64 train_frame_lock : 1;
+		u64 train_lock_found_1st_marker : 1;
+		u64 train_lock_bad_markers : 3;
+		u64 reserved_35_35 : 1;
+		u64 an_arb_sm : 3;
+		u64 an_rx_sm : 2;
+		u64 reserved_29_29 : 1;
+		u64 fec_block_sync : 1;
+		u64 fec_sync_cnt : 4;
+		u64 reserved_23_23 : 1;
+		u64 br_sh_invld_cnt : 7;
+		u64 br_block_lock : 1;
+		u64 br_sh_cnt : 11;
+		u64 bx_sync_sm : 4;
+	} s;
+	struct cvmx_bgxx_spu_sdsx_states_s cn73xx;
+	struct cvmx_bgxx_spu_sdsx_states_s cn78xx;
+	struct cvmx_bgxx_spu_sdsx_states_s cn78xxp1;
+	struct cvmx_bgxx_spu_sdsx_states_s cnf75xx;
+};
+
+typedef union cvmx_bgxx_spu_sdsx_states cvmx_bgxx_spu_sdsx_states_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 06/50] mips: octeon: Add cvmx-ciu-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (4 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 05/50] mips: octeon: Add cvmx-bgxx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 07/50] mips: octeon: Add cvmx-dbg-defs.h " Stefan Roese
                   ` (46 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-ciu-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-ciu-defs.h  | 7351 +++++++++++++++++
 1 file changed, 7351 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ciu-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ciu-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-ciu-defs.h
new file mode 100644
index 0000000000..e67d916971
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ciu-defs.h
@@ -0,0 +1,7351 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon ciu.
+ */
+
+#ifndef __CVMX_CIU_DEFS_H__
+#define __CVMX_CIU_DEFS_H__
+
+#define CVMX_CIU_BIST				  (0x0001070000000730ull)
+#define CVMX_CIU_BLOCK_INT			  (0x00010700000007C0ull)
+#define CVMX_CIU_CIB_L2C_ENX(offset)		  (0x000107000000E100ull)
+#define CVMX_CIU_CIB_L2C_RAWX(offset)		  (0x000107000000E000ull)
+#define CVMX_CIU_CIB_LMCX_ENX(offset, block_id)	  (0x000107000000E300ull)
+#define CVMX_CIU_CIB_LMCX_RAWX(offset, block_id)  (0x000107000000E200ull)
+#define CVMX_CIU_CIB_OCLAX_ENX(offset, block_id)  (0x000107000000EE00ull)
+#define CVMX_CIU_CIB_OCLAX_RAWX(offset, block_id) (0x000107000000EC00ull)
+#define CVMX_CIU_CIB_RST_ENX(offset)		  (0x000107000000E500ull)
+#define CVMX_CIU_CIB_RST_RAWX(offset)		  (0x000107000000E400ull)
+#define CVMX_CIU_CIB_SATA_ENX(offset)		  (0x000107000000E700ull)
+#define CVMX_CIU_CIB_SATA_RAWX(offset)		  (0x000107000000E600ull)
+#define CVMX_CIU_CIB_USBDRDX_ENX(offset, block_id)                                                 \
+	(0x000107000000EA00ull + ((block_id) & 1) * 0x100ull)
+#define CVMX_CIU_CIB_USBDRDX_RAWX(offset, block_id)                                                \
+	(0x000107000000E800ull + ((block_id) & 1) * 0x100ull)
+#define CVMX_CIU_DINT CVMX_CIU_DINT_FUNC()
+static inline u64 CVMX_CIU_DINT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000720ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000000180ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000000180ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000000180ull;
+	}
+	return 0x0001010000000180ull;
+}
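+
+/*
+ * Usage sketch (editor's illustration): the per-family switches in this file
+ * resolve CSR addresses at run time, so one binary can serve several Octeon
+ * models; callers simply use the bare macro name. E.g. to raise a debug
+ * interrupt on core 3 (csr_wr() assumed from mach-octeon):
+ *
+ *   csr_wr(CVMX_CIU_DINT, 1ull << 3);
+ */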
+
+#define CVMX_CIU_EN2_IOX_INT(offset)	 (0x000107000000A600ull + ((offset) & 1) * 8)
+#define CVMX_CIU_EN2_IOX_INT_W1C(offset) (0x000107000000CE00ull + ((offset) & 1) * 8)
+#define CVMX_CIU_EN2_IOX_INT_W1S(offset) (0x000107000000AE00ull + ((offset) & 1) * 8)
+#define CVMX_CIU_EN2_PPX_IP2(offset)	 (0x000107000000A000ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP2_W1C(offset) (0x000107000000C800ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP2_W1S(offset) (0x000107000000A800ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP3(offset)	 (0x000107000000A200ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP3_W1C(offset) (0x000107000000CA00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP3_W1S(offset) (0x000107000000AA00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP4(offset)	 (0x000107000000A400ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP4_W1C(offset) (0x000107000000CC00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_EN2_PPX_IP4_W1S(offset) (0x000107000000AC00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_FUSE			 CVMX_CIU_FUSE_FUNC()
+static inline u64 CVMX_CIU_FUSE_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000728ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00010100000001A0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00010100000001A0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00010100000001A0ull;
+	}
+	return 0x00010100000001A0ull;
+}
+
+#define CVMX_CIU_GSTOP			(0x0001070000000710ull)
+#define CVMX_CIU_INT33_SUM0		(0x0001070000000110ull)
+#define CVMX_CIU_INTR_SLOWDOWN		(0x00010700000007D0ull)
+#define CVMX_CIU_INTX_EN0(offset)	(0x0001070000000200ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN0_W1C(offset)	(0x0001070000002200ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN0_W1S(offset)	(0x0001070000006200ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN1(offset)	(0x0001070000000208ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN1_W1C(offset)	(0x0001070000002208ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN1_W1S(offset)	(0x0001070000006208ull + ((offset) & 63) * 16)
+#define CVMX_CIU_INTX_EN4_0(offset)	(0x0001070000000C80ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_EN4_0_W1C(offset) (0x0001070000002C80ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_EN4_0_W1S(offset) (0x0001070000006C80ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_EN4_1(offset)	(0x0001070000000C88ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_EN4_1_W1C(offset) (0x0001070000002C88ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_EN4_1_W1S(offset) (0x0001070000006C88ull + ((offset) & 15) * 16)
+#define CVMX_CIU_INTX_SUM0(offset)	(0x0001070000000000ull + ((offset) & 63) * 8)
+#define CVMX_CIU_INTX_SUM4(offset)	(0x0001070000000C00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_INT_DBG_SEL		(0x00010700000007D0ull)
+#define CVMX_CIU_INT_SUM1		(0x0001070000000108ull)
+static inline u64 CVMX_CIU_MBOX_CLRX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000680ull + (offset) * 8;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000680ull + (offset) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000680ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070100100600ull + (offset) * 8;
+	}
+	return 0x0001070000000680ull + (offset) * 8;
+}
+
+static inline u64 CVMX_CIU_MBOX_SETX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000600ull + (offset) * 8;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000600ull + (offset) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000600ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070100100400ull + (offset) * 8;
+	}
+	return 0x0001070000000600ull + (offset) * 8;
+}
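+
+/*
+ * Sketch (editor's illustration): the mailbox set/clear pair is the usual
+ * CIU inter-core signalling mechanism -- writing a 1 to a bit of
+ * CVMX_CIU_MBOX_SETX(core) raises that mailbox bit (and its interrupt) for
+ * the target core, which later acknowledges it through
+ * CVMX_CIU_MBOX_CLRX(core). csr_wr() is assumed from mach-octeon:
+ *
+ *   csr_wr(CVMX_CIU_MBOX_SETX(target_core), 1ull << action_bit);
+ */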
+
+#define CVMX_CIU_NMI	      (0x0001070000000718ull)
+#define CVMX_CIU_PCI_INTA     (0x0001070000000750ull)
+#define CVMX_CIU_PP_BIST_STAT (0x00010700000007E0ull)
+#define CVMX_CIU_PP_DBG	      CVMX_CIU_PP_DBG_FUNC()
+static inline u64 CVMX_CIU_PP_DBG_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000708ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000000120ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000000120ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000000120ull;
+	}
+	return 0x0001010000000120ull;
+}
+
+static inline u64 CVMX_CIU_PP_POKEX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000580ull + (offset) * 8;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000580ull + (offset) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000580ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000030000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000030000ull + (offset) * 8;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000030000ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070100100200ull + (offset) * 8;
+	}
+	return 0x0001010000030000ull + (offset) * 8;
+}
+
+#define CVMX_CIU_PP_RST CVMX_CIU_PP_RST_FUNC()
+static inline u64 CVMX_CIU_PP_RST_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000700ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000000100ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000000100ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000000100ull;
+	}
+	return 0x0001010000000100ull;
+}
+
+#define CVMX_CIU_PP_RST_PENDING CVMX_CIU_PP_RST_PENDING_FUNC()
+static inline u64 CVMX_CIU_PP_RST_PENDING_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000000110ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000000110ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000000110ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000740ull;
+	}
+	return 0x0001010000000110ull;
+}
+
+#define CVMX_CIU_QLM0		      (0x0001070000000780ull)
+#define CVMX_CIU_QLM1		      (0x0001070000000788ull)
+#define CVMX_CIU_QLM2		      (0x0001070000000790ull)
+#define CVMX_CIU_QLM3		      (0x0001070000000798ull)
+#define CVMX_CIU_QLM4		      (0x00010700000007A0ull)
+#define CVMX_CIU_QLM_DCOK	      (0x0001070000000760ull)
+#define CVMX_CIU_QLM_JTGC	      (0x0001070000000768ull)
+#define CVMX_CIU_QLM_JTGD	      (0x0001070000000770ull)
+#define CVMX_CIU_SOFT_BIST	      (0x0001070000000738ull)
+#define CVMX_CIU_SOFT_PRST	      (0x0001070000000748ull)
+#define CVMX_CIU_SOFT_PRST1	      (0x0001070000000758ull)
+#define CVMX_CIU_SOFT_PRST2	      (0x00010700000007D8ull)
+#define CVMX_CIU_SOFT_PRST3	      (0x00010700000007E0ull)
+#define CVMX_CIU_SOFT_RST	      (0x0001070000000740ull)
+#define CVMX_CIU_SUM1_IOX_INT(offset) (0x0001070000008600ull + ((offset) & 1) * 8)
+#define CVMX_CIU_SUM1_PPX_IP2(offset) (0x0001070000008000ull + ((offset) & 15) * 8)
+#define CVMX_CIU_SUM1_PPX_IP3(offset) (0x0001070000008200ull + ((offset) & 15) * 8)
+#define CVMX_CIU_SUM1_PPX_IP4(offset) (0x0001070000008400ull + ((offset) & 15) * 8)
+#define CVMX_CIU_SUM2_IOX_INT(offset) (0x0001070000008E00ull + ((offset) & 1) * 8)
+#define CVMX_CIU_SUM2_PPX_IP2(offset) (0x0001070000008800ull + ((offset) & 15) * 8)
+#define CVMX_CIU_SUM2_PPX_IP3(offset) (0x0001070000008A00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_SUM2_PPX_IP4(offset) (0x0001070000008C00ull + ((offset) & 15) * 8)
+#define CVMX_CIU_TIMX(offset)	      (0x0001070000000480ull + ((offset) & 15) * 8)
+#define CVMX_CIU_TIM_MULTI_CAST	      CVMX_CIU_TIM_MULTI_CAST_FUNC()
+static inline u64 CVMX_CIU_TIM_MULTI_CAST_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00010700000004F0ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x000107000000C200ull;
+	}
+	return 0x000107000000C200ull;
+}
+
+static inline u64 CVMX_CIU_WDOGX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000500ull + (offset) * 8;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000500ull + (offset) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001070000000500ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001010000020000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001010000020000ull + (offset) * 8;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001010000020000ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001070100100000ull + (offset) * 8;
+	}
+	return 0x0001010000020000ull + (offset) * 8;
+}
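+
+/*
+ * Sketch (editor's illustration): per-core resources such as the watchdogs
+ * are indexed by core number, e.g. to read core 0's watchdog CSR
+ * (csr_rd() assumed from mach-octeon):
+ *
+ *   u64 wdog0 = csr_rd(CVMX_CIU_WDOGX(0));
+ */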
+
+/**
+ * cvmx_ciu_bist
+ */
+union cvmx_ciu_bist {
+	u64 u64;
+	struct cvmx_ciu_bist_s {
+		u64 reserved_7_63 : 57;
+		u64 bist : 7;
+	} s;
+	struct cvmx_ciu_bist_cn30xx {
+		u64 reserved_4_63 : 60;
+		u64 bist : 4;
+	} cn30xx;
+	struct cvmx_ciu_bist_cn30xx cn31xx;
+	struct cvmx_ciu_bist_cn30xx cn38xx;
+	struct cvmx_ciu_bist_cn30xx cn38xxp2;
+	struct cvmx_ciu_bist_cn50xx {
+		u64 reserved_2_63 : 62;
+		u64 bist : 2;
+	} cn50xx;
+	struct cvmx_ciu_bist_cn52xx {
+		u64 reserved_3_63 : 61;
+		u64 bist : 3;
+	} cn52xx;
+	struct cvmx_ciu_bist_cn52xx cn52xxp1;
+	struct cvmx_ciu_bist_cn30xx cn56xx;
+	struct cvmx_ciu_bist_cn30xx cn56xxp1;
+	struct cvmx_ciu_bist_cn30xx cn58xx;
+	struct cvmx_ciu_bist_cn30xx cn58xxp1;
+	struct cvmx_ciu_bist_cn61xx {
+		u64 reserved_6_63 : 58;
+		u64 bist : 6;
+	} cn61xx;
+	struct cvmx_ciu_bist_cn63xx {
+		u64 reserved_5_63 : 59;
+		u64 bist : 5;
+	} cn63xx;
+	struct cvmx_ciu_bist_cn63xx cn63xxp1;
+	struct cvmx_ciu_bist_cn61xx cn66xx;
+	struct cvmx_ciu_bist_s cn68xx;
+	struct cvmx_ciu_bist_s cn68xxp1;
+	struct cvmx_ciu_bist_cn52xx cn70xx;
+	struct cvmx_ciu_bist_cn52xx cn70xxp1;
+	struct cvmx_ciu_bist_cn61xx cnf71xx;
+};
+
+typedef union cvmx_ciu_bist cvmx_ciu_bist_t;
+
+/**
+ * cvmx_ciu_block_int
+ *
+ * CIU_BLOCK_INT = CIU Blocks Interrupt
+ *
+ * The interrupt lines from the various chip blocks.
+ */
+union cvmx_ciu_block_int {
+	u64 u64;
+	struct cvmx_ciu_block_int_s {
+		u64 reserved_62_63 : 2;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_43_59 : 17;
+		u64 ptp : 1;
+		u64 dpi : 1;
+		u64 dfm : 1;
+		u64 reserved_34_39 : 6;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 reserved_31_31 : 1;
+		u64 iob : 1;
+		u64 reserved_29_29 : 1;
+		u64 agl : 1;
+		u64 reserved_27_27 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 reserved_24_24 : 1;
+		u64 asxpcs1 : 1;
+		u64 asxpcs0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 reserved_18_19 : 2;
+		u64 lmc0 : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_8_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 sli : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} s;
+	struct cvmx_ciu_block_int_cn61xx {
+		u64 reserved_43_63 : 21;
+		u64 ptp : 1;
+		u64 dpi : 1;
+		u64 reserved_31_40 : 10;
+		u64 iob : 1;
+		u64 reserved_29_29 : 1;
+		u64 agl : 1;
+		u64 reserved_27_27 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 reserved_24_24 : 1;
+		u64 asxpcs1 : 1;
+		u64 asxpcs0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 reserved_18_19 : 2;
+		u64 lmc0 : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_8_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 sli : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn61xx;
+	struct cvmx_ciu_block_int_cn63xx {
+		u64 reserved_43_63 : 21;
+		u64 ptp : 1;
+		u64 dpi : 1;
+		u64 dfm : 1;
+		u64 reserved_34_39 : 6;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 reserved_31_31 : 1;
+		u64 iob : 1;
+		u64 reserved_29_29 : 1;
+		u64 agl : 1;
+		u64 reserved_27_27 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 reserved_23_24 : 2;
+		u64 asxpcs0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 reserved_18_19 : 2;
+		u64 lmc0 : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_8_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 sli : 1;
+		u64 reserved_2_2 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn63xx;
+	struct cvmx_ciu_block_int_cn63xx cn63xxp1;
+	struct cvmx_ciu_block_int_cn66xx {
+		u64 reserved_62_63 : 2;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_43_59 : 17;
+		u64 ptp : 1;
+		u64 dpi : 1;
+		u64 dfm : 1;
+		u64 reserved_33_39 : 7;
+		u64 srio0 : 1;
+		u64 reserved_31_31 : 1;
+		u64 iob : 1;
+		u64 reserved_29_29 : 1;
+		u64 agl : 1;
+		u64 reserved_27_27 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 reserved_24_24 : 1;
+		u64 asxpcs1 : 1;
+		u64 asxpcs0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 reserved_18_19 : 2;
+		u64 lmc0 : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_8_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 sli : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn66xx;
+	struct cvmx_ciu_block_int_cnf71xx {
+		u64 reserved_43_63 : 21;
+		u64 ptp : 1;
+		u64 dpi : 1;
+		u64 reserved_31_40 : 10;
+		u64 iob : 1;
+		u64 reserved_27_29 : 3;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 reserved_23_24 : 2;
+		u64 asxpcs0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 reserved_18_19 : 2;
+		u64 lmc0 : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_6_8 : 3;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 sli : 1;
+		u64 reserved_2_2 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_block_int cvmx_ciu_block_int_t;
+
+/**
+ * cvmx_ciu_cib_l2c_en#
+ */
+union cvmx_ciu_cib_l2c_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_l2c_enx_s {
+		u64 reserved_23_63 : 41;
+		u64 cbcx_int_ioccmddbe : 1;
+		u64 cbcx_int_ioccmdsbe : 1;
+		u64 cbcx_int_rsddbe : 1;
+		u64 cbcx_int_rsdsbe : 1;
+		u64 mcix_int_vbfdbe : 1;
+		u64 mcix_int_vbfsbe : 1;
+		u64 tadx_int_rtgdbe : 1;
+		u64 tadx_int_rtgsbe : 1;
+		u64 tadx_int_rddislmc : 1;
+		u64 tadx_int_wrdislmc : 1;
+		u64 tadx_int_bigrd : 1;
+		u64 tadx_int_bigwr : 1;
+		u64 tadx_int_holerd : 1;
+		u64 tadx_int_holewr : 1;
+		u64 tadx_int_noway : 1;
+		u64 tadx_int_tagdbe : 1;
+		u64 tadx_int_tagsbe : 1;
+		u64 tadx_int_fbfdbe : 1;
+		u64 tadx_int_fbfsbe : 1;
+		u64 tadx_int_sbfdbe : 1;
+		u64 tadx_int_sbfsbe : 1;
+		u64 tadx_int_l2ddbe : 1;
+		u64 tadx_int_l2dsbe : 1;
+	} s;
+	struct cvmx_ciu_cib_l2c_enx_s cn70xx;
+	struct cvmx_ciu_cib_l2c_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_l2c_enx cvmx_ciu_cib_l2c_enx_t;
+
+/**
+ * cvmx_ciu_cib_l2c_raw#
+ */
+union cvmx_ciu_cib_l2c_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_l2c_rawx_s {
+		u64 reserved_23_63 : 41;
+		u64 cbcx_int_ioccmddbe : 1;
+		u64 cbcx_int_ioccmdsbe : 1;
+		u64 cbcx_int_rsddbe : 1;
+		u64 cbcx_int_rsdsbe : 1;
+		u64 mcix_int_vbfdbe : 1;
+		u64 mcix_int_vbfsbe : 1;
+		u64 tadx_int_rtgdbe : 1;
+		u64 tadx_int_rtgsbe : 1;
+		u64 tadx_int_rddislmc : 1;
+		u64 tadx_int_wrdislmc : 1;
+		u64 tadx_int_bigrd : 1;
+		u64 tadx_int_bigwr : 1;
+		u64 tadx_int_holerd : 1;
+		u64 tadx_int_holewr : 1;
+		u64 tadx_int_noway : 1;
+		u64 tadx_int_tagdbe : 1;
+		u64 tadx_int_tagsbe : 1;
+		u64 tadx_int_fbfdbe : 1;
+		u64 tadx_int_fbfsbe : 1;
+		u64 tadx_int_sbfdbe : 1;
+		u64 tadx_int_sbfsbe : 1;
+		u64 tadx_int_l2ddbe : 1;
+		u64 tadx_int_l2dsbe : 1;
+	} s;
+	struct cvmx_ciu_cib_l2c_rawx_s cn70xx;
+	struct cvmx_ciu_cib_l2c_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_l2c_rawx cvmx_ciu_cib_l2c_rawx_t;
+
+/**
+ * cvmx_ciu_cib_lmc#_en#
+ */
+union cvmx_ciu_cib_lmcx_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_lmcx_enx_s {
+		u64 reserved_12_63 : 52;
+		u64 int_ddr_err : 1;
+		u64 int_dlc_ded : 1;
+		u64 int_dlc_sec : 1;
+		u64 int_ded_errx : 4;
+		u64 int_sec_errx : 4;
+		u64 int_nxm_wr_err : 1;
+	} s;
+	struct cvmx_ciu_cib_lmcx_enx_s cn70xx;
+	struct cvmx_ciu_cib_lmcx_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_lmcx_enx cvmx_ciu_cib_lmcx_enx_t;
+
+/**
+ * cvmx_ciu_cib_lmc#_raw#
+ */
+union cvmx_ciu_cib_lmcx_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_lmcx_rawx_s {
+		u64 reserved_12_63 : 52;
+		u64 int_ddr_err : 1;
+		u64 int_dlc_ded : 1;
+		u64 int_dlc_sec : 1;
+		u64 int_ded_errx : 4;
+		u64 int_sec_errx : 4;
+		u64 int_nxm_wr_err : 1;
+	} s;
+	struct cvmx_ciu_cib_lmcx_rawx_s cn70xx;
+	struct cvmx_ciu_cib_lmcx_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_lmcx_rawx cvmx_ciu_cib_lmcx_rawx_t;
+
+/**
+ * cvmx_ciu_cib_ocla#_en#
+ */
+union cvmx_ciu_cib_oclax_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_oclax_enx_s {
+		u64 reserved_15_63 : 49;
+		u64 state_ddrfull : 1;
+		u64 state_wmark : 1;
+		u64 state_overfull : 1;
+		u64 state_trigfull : 1;
+		u64 state_captured : 1;
+		u64 state_fsm1_int : 1;
+		u64 state_fsm0_int : 1;
+		u64 state_mcdx : 3;
+		u64 state_trig : 1;
+		u64 state_ovflx : 4;
+	} s;
+	struct cvmx_ciu_cib_oclax_enx_s cn70xx;
+	struct cvmx_ciu_cib_oclax_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_oclax_enx cvmx_ciu_cib_oclax_enx_t;
+
+/**
+ * cvmx_ciu_cib_ocla#_raw#
+ */
+union cvmx_ciu_cib_oclax_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_oclax_rawx_s {
+		u64 reserved_15_63 : 49;
+		u64 state_ddrfull : 1;
+		u64 state_wmark : 1;
+		u64 state_overfull : 1;
+		u64 state_trigfull : 1;
+		u64 state_captured : 1;
+		u64 state_fsm1_int : 1;
+		u64 state_fsm0_int : 1;
+		u64 state_mcdx : 3;
+		u64 state_trig : 1;
+		u64 state_ovflx : 4;
+	} s;
+	struct cvmx_ciu_cib_oclax_rawx_s cn70xx;
+	struct cvmx_ciu_cib_oclax_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_oclax_rawx cvmx_ciu_cib_oclax_rawx_t;
+
+/**
+ * cvmx_ciu_cib_rst_en#
+ */
+union cvmx_ciu_cib_rst_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_rst_enx_s {
+		u64 reserved_6_63 : 58;
+		u64 int_perstx : 3;
+		u64 int_linkx : 3;
+	} s;
+	struct cvmx_ciu_cib_rst_enx_s cn70xx;
+	struct cvmx_ciu_cib_rst_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_rst_enx cvmx_ciu_cib_rst_enx_t;
+
+/**
+ * cvmx_ciu_cib_rst_raw#
+ */
+union cvmx_ciu_cib_rst_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_rst_rawx_s {
+		u64 reserved_6_63 : 58;
+		u64 int_perstx : 3;
+		u64 int_linkx : 3;
+	} s;
+	struct cvmx_ciu_cib_rst_rawx_s cn70xx;
+	struct cvmx_ciu_cib_rst_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_rst_rawx cvmx_ciu_cib_rst_rawx_t;
+
+/**
+ * cvmx_ciu_cib_sata_en#
+ */
+union cvmx_ciu_cib_sata_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_sata_enx_s {
+		u64 reserved_4_63 : 60;
+		u64 uahc_pme_req_ip : 1;
+		u64 uahc_intrq_ip : 1;
+		u64 intstat_xm_bad_dma : 1;
+		u64 intstat_xs_ncb_oob : 1;
+	} s;
+	struct cvmx_ciu_cib_sata_enx_s cn70xx;
+	struct cvmx_ciu_cib_sata_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_sata_enx cvmx_ciu_cib_sata_enx_t;
+
+/**
+ * cvmx_ciu_cib_sata_raw#
+ */
+union cvmx_ciu_cib_sata_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_sata_rawx_s {
+		u64 reserved_4_63 : 60;
+		u64 uahc_pme_req_ip : 1;
+		u64 uahc_intrq_ip : 1;
+		u64 intstat_xm_bad_dma : 1;
+		u64 intstat_xs_ncb_oob : 1;
+	} s;
+	struct cvmx_ciu_cib_sata_rawx_s cn70xx;
+	struct cvmx_ciu_cib_sata_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_sata_rawx cvmx_ciu_cib_sata_rawx_t;
+
+/**
+ * cvmx_ciu_cib_usbdrd#_en#
+ */
+union cvmx_ciu_cib_usbdrdx_enx {
+	u64 u64;
+	struct cvmx_ciu_cib_usbdrdx_enx_s {
+		u64 reserved_11_63 : 53;
+		u64 uahc_dev_int : 1;
+		u64 uahc_imanx_ip : 1;
+		u64 uahc_usbsts_hse : 1;
+		u64 intstat_ram2_dbe : 1;
+		u64 intstat_ram2_sbe : 1;
+		u64 intstat_ram1_dbe : 1;
+		u64 intstat_ram1_sbe : 1;
+		u64 intstat_ram0_dbe : 1;
+		u64 intstat_ram0_sbe : 1;
+		u64 intstat_xm_bad_dma : 1;
+		u64 intstat_xs_ncb_oob : 1;
+	} s;
+	struct cvmx_ciu_cib_usbdrdx_enx_s cn70xx;
+	struct cvmx_ciu_cib_usbdrdx_enx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_usbdrdx_enx cvmx_ciu_cib_usbdrdx_enx_t;
+
+/**
+ * cvmx_ciu_cib_usbdrd#_raw#
+ */
+union cvmx_ciu_cib_usbdrdx_rawx {
+	u64 u64;
+	struct cvmx_ciu_cib_usbdrdx_rawx_s {
+		u64 reserved_11_63 : 53;
+		u64 uahc_dev_int : 1;
+		u64 uahc_imanx_ip : 1;
+		u64 uahc_usbsts_hse : 1;
+		u64 intstat_ram2_dbe : 1;
+		u64 intstat_ram2_sbe : 1;
+		u64 intstat_ram1_dbe : 1;
+		u64 intstat_ram1_sbe : 1;
+		u64 intstat_ram0_dbe : 1;
+		u64 intstat_ram0_sbe : 1;
+		u64 intstat_xm_bad_dma : 1;
+		u64 intstat_xs_ncb_oob : 1;
+	} s;
+	struct cvmx_ciu_cib_usbdrdx_rawx_s cn70xx;
+	struct cvmx_ciu_cib_usbdrdx_rawx_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_cib_usbdrdx_rawx cvmx_ciu_cib_usbdrdx_rawx_t;
+
+/**
+ * cvmx_ciu_dint
+ */
+union cvmx_ciu_dint {
+	u64 u64;
+	struct cvmx_ciu_dint_s {
+		u64 reserved_48_63 : 16;
+		u64 dint : 48;
+	} s;
+	struct cvmx_ciu_dint_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 dint : 1;
+	} cn30xx;
+	struct cvmx_ciu_dint_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 dint : 2;
+	} cn31xx;
+	struct cvmx_ciu_dint_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 dint : 16;
+	} cn38xx;
+	struct cvmx_ciu_dint_cn38xx cn38xxp2;
+	struct cvmx_ciu_dint_cn31xx cn50xx;
+	struct cvmx_ciu_dint_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 dint : 4;
+	} cn52xx;
+	struct cvmx_ciu_dint_cn52xx cn52xxp1;
+	struct cvmx_ciu_dint_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 dint : 12;
+	} cn56xx;
+	struct cvmx_ciu_dint_cn56xx cn56xxp1;
+	struct cvmx_ciu_dint_cn38xx cn58xx;
+	struct cvmx_ciu_dint_cn38xx cn58xxp1;
+	struct cvmx_ciu_dint_cn52xx cn61xx;
+	struct cvmx_ciu_dint_cn63xx {
+		u64 reserved_6_63 : 58;
+		u64 dint : 6;
+	} cn63xx;
+	struct cvmx_ciu_dint_cn63xx cn63xxp1;
+	struct cvmx_ciu_dint_cn66xx {
+		u64 reserved_10_63 : 54;
+		u64 dint : 10;
+	} cn66xx;
+	struct cvmx_ciu_dint_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 dint : 32;
+	} cn68xx;
+	struct cvmx_ciu_dint_cn68xx cn68xxp1;
+	struct cvmx_ciu_dint_cn52xx cn70xx;
+	struct cvmx_ciu_dint_cn52xx cn70xxp1;
+	struct cvmx_ciu_dint_cn38xx cn73xx;
+	struct cvmx_ciu_dint_s cn78xx;
+	struct cvmx_ciu_dint_s cn78xxp1;
+	struct cvmx_ciu_dint_cn52xx cnf71xx;
+	struct cvmx_ciu_dint_cn38xx cnf75xx;
+};
+
+typedef union cvmx_ciu_dint cvmx_ciu_dint_t;
+
+/**
+ * cvmx_ciu_en2_io#_int
+ *
+ * CIU_EN2_IO0_INT is for PEM0, CIU_EN2_IO1_INT is reserved.
+ *
+ */
+union cvmx_ciu_en2_iox_int {
+	u64 u64;
+	struct cvmx_ciu_en2_iox_int_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_iox_int_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_iox_int_cn61xx cn66xx;
+	struct cvmx_ciu_en2_iox_int_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_iox_int_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_iox_int_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_iox_int cvmx_ciu_en2_iox_int_t;
+
+/**
+ * cvmx_ciu_en2_io#_int_w1c
+ *
+ * CIU_EN2_IO0_INT_W1C is for PEM0, CIU_EN2_IO1_INT_W1C is reserved.
+ *
+ */
+union cvmx_ciu_en2_iox_int_w1c {
+	u64 u64;
+	struct cvmx_ciu_en2_iox_int_w1c_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_iox_int_w1c_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_iox_int_w1c_cn61xx cn66xx;
+	struct cvmx_ciu_en2_iox_int_w1c_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_iox_int_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_iox_int_w1c_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_iox_int_w1c cvmx_ciu_en2_iox_int_w1c_t;
+
+/**
+ * cvmx_ciu_en2_io#_int_w1s
+ *
+ * CIU_EN2_IO0_INT_W1S is for PEM0, CIU_EN2_IO1_INT_W1S is reserved.
+ *
+ */
+union cvmx_ciu_en2_iox_int_w1s {
+	u64 u64;
+	struct cvmx_ciu_en2_iox_int_w1s_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_iox_int_w1s_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_iox_int_w1s_cn61xx cn66xx;
+	struct cvmx_ciu_en2_iox_int_w1s_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_iox_int_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_iox_int_w1s_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_iox_int_w1s cvmx_ciu_en2_iox_int_w1s_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip2
+ *
+ * Notes:
+ * These SUM2 CSRs and the CIU_TIM4-9 timers did not exist prior to pass 1.2.
+ *
+ */
+union cvmx_ciu_en2_ppx_ip2 {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip2_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip2_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip2_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip2_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip2_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip2_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip2 cvmx_ciu_en2_ppx_ip2_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip2_w1c
+ *
+ * Write-1-to-clear version of the CIU_EN2_PP(IO)X_IPx(INT) register, read back corresponding
+ * CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip2_w1c {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip2_w1c_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip2_w1c cvmx_ciu_en2_ppx_ip2_w1c_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip2_w1s
+ *
+ * Write-1-to-set version of the CIU_EN2_PP(IO)X_IPx(INT) register, read back corresponding
+ * CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip2_w1s {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip2_w1s_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip2_w1s cvmx_ciu_en2_ppx_ip2_w1s_t;
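+
+/*
+ * Sketch (editor's illustration): the _W1C/_W1S aliases let individual
+ * enable bits be cleared or set atomically, avoiding a read-modify-write of
+ * the underlying enable register. E.g. to disable only CIU_TIM4 (bit 0 of
+ * the 6-bit TIMER field) for core 0's IP2 (csr_wr() assumed):
+ *
+ *   cvmx_ciu_en2_ppx_ip2_w1c_t w1c;
+ *
+ *   w1c.u64 = 0;
+ *   w1c.s.timer = 1;
+ *   csr_wr(CVMX_CIU_EN2_PPX_IP2_W1C(0), w1c.u64);
+ */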
+
+/**
+ * cvmx_ciu_en2_pp#_ip3
+ *
+ * Notes:
+ * These SUM2 CSRs and the CIU_TIM4-9 timers did not exist prior to pass 1.2.
+ *
+ */
+union cvmx_ciu_en2_ppx_ip3 {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip3_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip3_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip3_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip3_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip3_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip3_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip3 cvmx_ciu_en2_ppx_ip3_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip3_w1c
+ *
+ * Notes:
+ * Write-1-to-clear version of the CIU_EN2_PP(IO)X_IPx(INT) register, read back corresponding
+ * CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip3_w1c {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip3_w1c_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip3_w1c cvmx_ciu_en2_ppx_ip3_w1c_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip3_w1s
+ *
+ * Notes:
+ * Write-1-to-set version of the CIU_EN2_PP(IO)X_IPx(INT) register, read back corresponding
+ * CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip3_w1s {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip3_w1s_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip3_w1s cvmx_ciu_en2_ppx_ip3_w1s_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip4
+ *
+ * Notes:
+ * These SUM2 CSRs and the CIU_TIM4-9 timers did not exist prior to pass 1.2.
+ *
+ */
+union cvmx_ciu_en2_ppx_ip4 {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip4_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip4_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip4_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip4_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip4_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip4_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip4 cvmx_ciu_en2_ppx_ip4_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip4_w1c
+ *
+ * Notes:
+ * Write-1-to-clear version of the CIU_EN2_PP(IO)X_IPx(INT) register; reads back the
+ * corresponding CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip4_w1c {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip4_w1c_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip4_w1c cvmx_ciu_en2_ppx_ip4_w1c_t;
+
+/**
+ * cvmx_ciu_en2_pp#_ip4_w1s
+ *
+ * Notes:
+ * Write-1-to-set version of the CIU_EN2_PP(IO)X_IPx(INT) register; reads back the
+ * corresponding CIU_EN2_PP(IO)X_IPx(INT) value.
+ */
+union cvmx_ciu_en2_ppx_ip4_w1s {
+	u64 u64;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_cn61xx cn66xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_en2_ppx_ip4_w1s_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_en2_ppx_ip4_w1s cvmx_ciu_en2_ppx_ip4_w1s_t;
+
+/**
+ * cvmx_ciu_fuse
+ */
+union cvmx_ciu_fuse {
+	u64 u64;
+	struct cvmx_ciu_fuse_s {
+		u64 reserved_48_63 : 16;
+		u64 fuse : 48;
+	} s;
+	struct cvmx_ciu_fuse_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 fuse : 1;
+	} cn30xx;
+	struct cvmx_ciu_fuse_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 fuse : 2;
+	} cn31xx;
+	struct cvmx_ciu_fuse_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 fuse : 16;
+	} cn38xx;
+	struct cvmx_ciu_fuse_cn38xx cn38xxp2;
+	struct cvmx_ciu_fuse_cn31xx cn50xx;
+	struct cvmx_ciu_fuse_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 fuse : 4;
+	} cn52xx;
+	struct cvmx_ciu_fuse_cn52xx cn52xxp1;
+	struct cvmx_ciu_fuse_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 fuse : 12;
+	} cn56xx;
+	struct cvmx_ciu_fuse_cn56xx cn56xxp1;
+	struct cvmx_ciu_fuse_cn38xx cn58xx;
+	struct cvmx_ciu_fuse_cn38xx cn58xxp1;
+	struct cvmx_ciu_fuse_cn52xx cn61xx;
+	struct cvmx_ciu_fuse_cn63xx {
+		u64 reserved_6_63 : 58;
+		u64 fuse : 6;
+	} cn63xx;
+	struct cvmx_ciu_fuse_cn63xx cn63xxp1;
+	struct cvmx_ciu_fuse_cn66xx {
+		u64 reserved_10_63 : 54;
+		u64 fuse : 10;
+	} cn66xx;
+	struct cvmx_ciu_fuse_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 fuse : 32;
+	} cn68xx;
+	struct cvmx_ciu_fuse_cn68xx cn68xxp1;
+	struct cvmx_ciu_fuse_cn52xx cn70xx;
+	struct cvmx_ciu_fuse_cn52xx cn70xxp1;
+	struct cvmx_ciu_fuse_cn38xx cn73xx;
+	struct cvmx_ciu_fuse_s cn78xx;
+	struct cvmx_ciu_fuse_s cn78xxp1;
+	struct cvmx_ciu_fuse_cn52xx cnf71xx;
+	struct cvmx_ciu_fuse_cn38xx cnf75xx;
+};
+
+typedef union cvmx_ciu_fuse cvmx_ciu_fuse_t;
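+
+/*
+ * Usage sketch (illustrative only): CIU_FUSE carries one fuse bit per
+ * physical core, so counting the set bits yields the number of cores
+ * present. csr_rd() and the address macro are assumed names for the
+ * actual mach-octeon accessors.
+ *
+ *	union cvmx_ciu_fuse fuse;
+ *
+ *	fuse.u64 = csr_rd(CVMX_CIU_FUSE);
+ *	int num_cores = __builtin_popcountll(fuse.s.fuse);
+ */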
+
+/**
+ * cvmx_ciu_gstop
+ */
+union cvmx_ciu_gstop {
+	u64 u64;
+	struct cvmx_ciu_gstop_s {
+		u64 reserved_1_63 : 63;
+		u64 gstop : 1;
+	} s;
+	struct cvmx_ciu_gstop_s cn30xx;
+	struct cvmx_ciu_gstop_s cn31xx;
+	struct cvmx_ciu_gstop_s cn38xx;
+	struct cvmx_ciu_gstop_s cn38xxp2;
+	struct cvmx_ciu_gstop_s cn50xx;
+	struct cvmx_ciu_gstop_s cn52xx;
+	struct cvmx_ciu_gstop_s cn52xxp1;
+	struct cvmx_ciu_gstop_s cn56xx;
+	struct cvmx_ciu_gstop_s cn56xxp1;
+	struct cvmx_ciu_gstop_s cn58xx;
+	struct cvmx_ciu_gstop_s cn58xxp1;
+	struct cvmx_ciu_gstop_s cn61xx;
+	struct cvmx_ciu_gstop_s cn63xx;
+	struct cvmx_ciu_gstop_s cn63xxp1;
+	struct cvmx_ciu_gstop_s cn66xx;
+	struct cvmx_ciu_gstop_s cn68xx;
+	struct cvmx_ciu_gstop_s cn68xxp1;
+	struct cvmx_ciu_gstop_s cn70xx;
+	struct cvmx_ciu_gstop_s cn70xxp1;
+	struct cvmx_ciu_gstop_s cnf71xx;
+};
+
+typedef union cvmx_ciu_gstop cvmx_ciu_gstop_t;
+
+/**
+ * cvmx_ciu_int#_en0
+ *
+ * CIU_INT0_EN0:  PP0/IP2
+ * CIU_INT1_EN0:  PP0/IP3
+ * CIU_INT2_EN0:  PP1/IP2
+ * CIU_INT3_EN0:  PP1/IP3
+ * CIU_INT4_EN0:  PP2/IP2
+ * CIU_INT5_EN0:  PP2/IP3
+ * CIU_INT6_EN0:  PP3/IP2
+ * CIU_INT7_EN0:  PP3/IP3
+ * - .....
+ * (hole)
+ * CIU_INT32_EN0: IO0 (PEM0)
+ * CIU_INT33_EN0: IO1 (reserved in o70).
+ */
+union cvmx_ciu_intx_en0 {
+	u64 u64;
+	struct cvmx_ciu_intx_en0_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en0_cn30xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 reserved_47_47 : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn30xx;
+	struct cvmx_ciu_intx_en0_cn31xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn31xx;
+	struct cvmx_ciu_intx_en0_cn38xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn38xx;
+	struct cvmx_ciu_intx_en0_cn38xx cn38xxp2;
+	struct cvmx_ciu_intx_en0_cn30xx cn50xx;
+	struct cvmx_ciu_intx_en0_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en0_cn52xx cn52xxp1;
+	struct cvmx_ciu_intx_en0_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en0_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_en0_cn38xx cn58xx;
+	struct cvmx_ciu_intx_en0_cn38xx cn58xxp1;
+	struct cvmx_ciu_intx_en0_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en0_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en0_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en0_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en0_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en0_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en0_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en0 cvmx_ciu_intx_en0_t;
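+
+/*
+ * Usage sketch (illustrative only): per the instance list above, EN0
+ * instance n maps to core n/2, with even n enabling IP2 and odd n
+ * enabling IP3. A helper that computes the instance for a given core
+ * and IRQ line could look like this (the helper name is an assumption,
+ * not part of the generated header):
+ *
+ *	static inline int ciu_intx_en0_index(int core, int ip)
+ *	{
+ *		return core * 2 + (ip - 2);	// ip is 2 (IP2) or 3 (IP3)
+ *	}
+ */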
+
+/**
+ * cvmx_ciu_int#_en0_w1c
+ *
+ * Write-1-to-clear version of the CIU_INTx_EN0 register; reads back the corresponding
+ * CIU_INTx_EN0 value.
+ * CIU_INT33_EN0_W1C is reserved.
+ */
+union cvmx_ciu_intx_en0_w1c {
+	u64 u64;
+	struct cvmx_ciu_intx_en0_w1c_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en0_w1c_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en0_w1c_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en0_w1c_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en0_w1c_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en0_w1c_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en0_w1c_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en0_w1c_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en0_w1c_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en0_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en0_w1c_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en0_w1c cvmx_ciu_intx_en0_w1c_t;
+
+/**
+ * cvmx_ciu_int#_en0_w1s
+ *
+ * Write-1-to-set version of the CIU_INTx_EN0 register; reads back the corresponding
+ * CIU_INTx_EN0 value.
+ * CIU_INT33_EN0_W1S is reserved.
+ */
+union cvmx_ciu_intx_en0_w1s {
+	u64 u64;
+	struct cvmx_ciu_intx_en0_w1s_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en0_w1s_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en0_w1s_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en0_w1s_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en0_w1s_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en0_w1s_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en0_w1s_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en0_w1s_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en0_w1s_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en0_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en0_w1s_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en0_w1s cvmx_ciu_intx_en0_w1s_t;
+
+/**
+ * cvmx_ciu_int#_en1
+ *
+ * Enables for CIU_SUM1_PPX_IPx  or CIU_SUM1_IOX_INT
+ * CIU_INT0_EN1:  PP0/IP2
+ * CIU_INT1_EN1:  PP0/IP3
+ * CIU_INT2_EN1:  PP1/IP2
+ * CIU_INT3_EN1:  PP1/IP3
+ * CIU_INT4_EN1:  PP2/IP2
+ * CIU_INT5_EN1:  PP2/IP3
+ * CIU_INT6_EN1:  PP3/IP2
+ * CIU_INT7_EN1:  PP3/IP3
+ * - .....
+ * (hole)
+ * CIU_INT32_EN1: IO0 (PEM0)
+ * CIU_INT33_EN1: IO1 (Reserved for o70)
+ *
+ * PPx/IP2 will be raised when...
+ *
+ * n = x*2
+ * PPx/IP2 = |([CIU_SUM2_PPx_IP2,CIU_SUM1_PPx_IP2, CIU_INTn_SUM0] &
+ * [CIU_EN2_PPx_IP2,CIU_INTn_EN1, CIU_INTn_EN0])
+ *
+ * PPx/IP3 will be raised when...
+ *
+ * n = x*2 + 1
+ * PPx/IP3 =  |([CIU_SUM2_PPx_IP3,CIU_SUM1_PPx_IP3, CIU_INTn_SUM0] &
+ * [CIU_EN2_PPx_IP3,CIU_INTn_EN1, CIU_INTn_EN0])
+ *
+ * PPx/IP4 will be raised when...
+ * PPx/IP4 = |([CIU_SUM1_PPx_IP4, CIU_INTx_SUM4] & [CIU_INTx_EN4_1, CIU_INTx_EN4_0])
+ *
+ * PCI/INT will be raised when...
+ *
+ * PCI/INT0 (PEM0)
+ * PCI/INT0 = |([CIU_SUM2_IO0_INT,CIU_SUM1_IO0_INT, CIU_INT32_SUM0] &
+ * [CIU_EN2_IO0_INT,CIU_INT32_EN1, CIU_INT32_EN0])
+ *
+ * PCI/INT1 is reserved for o70.
+ * PCI/INT1 = |([CIU_SUM2_IO1_INT,CIU_SUM1_IO1_INT, CIU_INT33_SUM0] &
+ * [CIU_EN2_IO1_INT,CIU_INT33_EN1, CIU_INT33_EN0])
+ */
+union cvmx_ciu_intx_en1 {
+	u64 u64;
+	struct cvmx_ciu_intx_en1_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en1_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 wdog : 1;
+	} cn30xx;
+	struct cvmx_ciu_intx_en1_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 wdog : 2;
+	} cn31xx;
+	struct cvmx_ciu_intx_en1_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn38xx;
+	struct cvmx_ciu_intx_en1_cn38xx cn38xxp2;
+	struct cvmx_ciu_intx_en1_cn31xx cn50xx;
+	struct cvmx_ciu_intx_en1_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en1_cn52xxp1 {
+		u64 reserved_19_63 : 45;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xxp1;
+	struct cvmx_ciu_intx_en1_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en1_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_en1_cn38xx cn58xx;
+	struct cvmx_ciu_intx_en1_cn38xx cn58xxp1;
+	struct cvmx_ciu_intx_en1_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en1_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en1_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en1_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en1_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en1_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en1_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en1 cvmx_ciu_intx_en1_t;
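+
+/*
+ * Usage sketch (illustrative only): the "PPx/IP2 will be raised when"
+ * formula above is an OR-reduction of each summary register AND-ed
+ * with its matching enable register. A simplified pending check for
+ * core x, IP2 (csr_rd() and the address macros are assumed names):
+ *
+ *	static inline int ciu_ip2_pending(int x)
+ *	{
+ *		int n = x * 2;
+ *		u64 pend;
+ *
+ *		pend = csr_rd(CVMX_CIU_INTX_SUM0(n)) &
+ *		       csr_rd(CVMX_CIU_INTX_EN0(n));
+ *		pend |= csr_rd(CVMX_CIU_SUM1_PPX_IP2(x)) &
+ *			csr_rd(CVMX_CIU_INTX_EN1(n));
+ *		pend |= csr_rd(CVMX_CIU_SUM2_PPX_IP2(x)) &
+ *			csr_rd(CVMX_CIU_EN2_PPX_IP2(x));
+ *		return pend != 0;
+ *	}
+ */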
+
+/**
+ * cvmx_ciu_int#_en1_w1c
+ *
+ * Write-1-to-clear version of the CIU_INTX_EN1 register; reads back the corresponding
+ * CIU_INTX_EN1 value.
+ * CIU_INT33_EN1_W1C is reserved.
+ */
+union cvmx_ciu_intx_en1_w1c {
+	u64 u64;
+	struct cvmx_ciu_intx_en1_w1c_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en1_w1c_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en1_w1c_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en1_w1c_cn58xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en1_w1c_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en1_w1c_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en1_w1c_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en1_w1c_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en1_w1c_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en1_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en1_w1c_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en1_w1c cvmx_ciu_intx_en1_w1c_t;
+
+/**
+ * cvmx_ciu_int#_en1_w1s
+ *
+ * Write-1-to-set version of the CIU_INTX_EN1 register; reads back the corresponding
+ * CIU_INTX_EN1 value.
+ * CIU_INT33_EN1_W1S is reserved.
+ */
+union cvmx_ciu_intx_en1_w1s {
+	u64 u64;
+	struct cvmx_ciu_intx_en1_w1s_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en1_w1s_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en1_w1s_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en1_w1s_cn58xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en1_w1s_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en1_w1s_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en1_w1s_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en1_w1s_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en1_w1s_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en1_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en1_w1s_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en1_w1s cvmx_ciu_intx_en1_w1s_t;
+
+/**
+ * cvmx_ciu_int#_en4_0
+ *
+ * CIU_INT0_EN4_0:   PP0  /IP4
+ * CIU_INT1_EN4_0:   PP1  /IP4
+ * - ...
+ * CIU_INT3_EN4_0:   PP3  /IP4
+ */
+union cvmx_ciu_intx_en4_0 {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_0_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_0_cn50xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 reserved_47_47 : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn50xx;
+	struct cvmx_ciu_intx_en4_0_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_0_cn52xx cn52xxp1;
+	struct cvmx_ciu_intx_en4_0_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_0_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_en4_0_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_0_cn58xx cn58xxp1;
+	struct cvmx_ciu_intx_en4_0_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_0_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en4_0_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_0_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_0_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_0_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_0_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_0 cvmx_ciu_intx_en4_0_t;
+
+/**
+ * cvmx_ciu_int#_en4_0_w1c
+ *
+ * Write-1-to-clear version of the CIU_INTx_EN4_0 register; reads back the corresponding
+ * CIU_INTx_EN4_0 value.
+ */
+union cvmx_ciu_intx_en4_0_w1c {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_0_w1c_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_0_w1c_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_0_w1c_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_0_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_0_w1c_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_0_w1c cvmx_ciu_intx_en4_0_w1c_t;
+
+/**
+ * cvmx_ciu_int#_en4_0_w1s
+ *
+ * Write-1-to-set version of the CIU_INTX_EN4_0 register; reads back the corresponding
+ * CIU_INTX_EN4_0 value.
+ */
+union cvmx_ciu_intx_en4_0_w1s {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_0_w1s_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_0_w1s_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn52xx cn63xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_0_w1s_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_0_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_0_w1s_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 reserved_44_44 : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_0_w1s cvmx_ciu_intx_en4_0_w1s_t;
+
+/**
+ * cvmx_ciu_int#_en4_1
+ *
+ * PPx/IP4 will be raised when...
+ * PPx/IP4 = |([CIU_SUM1_PPx_IP4, CIU_INTx_SUM4] & [CIU_INTx_EN4_1, CIU_INTx_EN4_0])
+ */
+union cvmx_ciu_intx_en4_1 {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_1_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_1_cn50xx {
+		u64 reserved_2_63 : 62;
+		u64 wdog : 2;
+	} cn50xx;
+	struct cvmx_ciu_intx_en4_1_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_1_cn52xxp1 {
+		u64 reserved_19_63 : 45;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xxp1;
+	struct cvmx_ciu_intx_en4_1_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_1_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_en4_1_cn58xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_1_cn58xx cn58xxp1;
+	struct cvmx_ciu_intx_en4_1_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_1_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en4_1_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_1_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_1_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_1_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_1_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_1 cvmx_ciu_intx_en4_1_t;
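+
+/*
+ * Usage sketch (illustrative only): following the PPx/IP4 formula
+ * above, SUM1 is gated by EN4_1 and SUM4 by EN4_0 (address macros and
+ * csr_rd() are assumed names):
+ *
+ *	u64 pend = (csr_rd(CVMX_CIU_INTX_SUM4(x)) &
+ *		    csr_rd(CVMX_CIU_INTX_EN4_0(x))) |
+ *		   (csr_rd(CVMX_CIU_SUM1_PPX_IP4(x)) &
+ *		    csr_rd(CVMX_CIU_INTX_EN4_1(x)));
+ */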
+
+/**
+ * cvmx_ciu_int#_en4_1_w1c
+ *
+ * Write-1-to-clear version of the CIU_INTX_EN4_1 register; reads back the
+ * corresponding CIU_INTX_EN4_1 value.
+ */
+union cvmx_ciu_intx_en4_1_w1c {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_1_w1c_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_1_w1c_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn58xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_1_w1c_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_1_w1c_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_1_w1c_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_1_w1c cvmx_ciu_intx_en4_1_w1c_t;
+
+/**
+ * cvmx_ciu_int#_en4_1_w1s
+ *
+ * Write-1-to-set version of the CIU_INTX_EN4_1 register; reads back the
+ * corresponding CIU_INTX_EN4_1 value.
+ */
+union cvmx_ciu_intx_en4_1_w1s {
+	u64 u64;
+	struct cvmx_ciu_intx_en4_1_w1s_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_intx_en4_1_w1s_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn58xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn63xx cn63xxp1;
+	struct cvmx_ciu_intx_en4_1_w1s_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_intx_en4_1_w1s_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_en4_1_w1s_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_en4_1_w1s cvmx_ciu_intx_en4_1_w1s_t;
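+
+/*
+ * Usage sketch (illustrative only, not part of the register description):
+ * the W1S/W1C register pair lets a single enable bit be set or cleared
+ * without a read-modify-write cycle. This assumes the csr_wr() accessor
+ * and the CVMX_CIU_INTX_EN4_1_W1S() address macro provided elsewhere in
+ * this port:
+ *
+ *	cvmx_ciu_intx_en4_1_w1s_t w1s;
+ *
+ *	w1s.u64 = 0;
+ *	w1s.s.ptp = 1;				(1 sets only the PTP enable)
+ *	csr_wr(CVMX_CIU_INTX_EN4_1_W1S(core), w1s.u64);
+ *
+ * Writing the same value to the W1C variant clears only that enable bit.
+ */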
+
+/**
+ * cvmx_ciu_int#_sum0
+ *
+ * The remaining IP4 summary bits are in CIU_INTX_SUM4.
+ * CIU_INT0_SUM0:  PP0/IP2
+ * CIU_INT1_SUM0:  PP0/IP3
+ * CIU_INT2_SUM0:  PP1/IP2
+ * CIU_INT3_SUM0:  PP1/IP3
+ * CIU_INT4_SUM0:  PP2/IP2
+ * CIU_INT5_SUM0:  PP2/IP3
+ * CIU_INT6_SUM0:  PP3/IP2
+ * CIU_INT7_SUM0:  PP3/IP3
+ *  - .....
+ * (hole)
+ * CIU_INT32_SUM0: IO 0 (PEM0).
+ * CIU_INT33_SUM0: IO 1 (Reserved in o70, in separate address group)
+ */
+union cvmx_ciu_intx_sum0 {
+	u64 u64;
+	struct cvmx_ciu_intx_sum0_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_sum0_cn30xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 reserved_47_47 : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn30xx;
+	struct cvmx_ciu_intx_sum0_cn31xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn31xx;
+	struct cvmx_ciu_intx_sum0_cn38xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn38xx;
+	struct cvmx_ciu_intx_sum0_cn38xx cn38xxp2;
+	struct cvmx_ciu_intx_sum0_cn30xx cn50xx;
+	struct cvmx_ciu_intx_sum0_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_sum0_cn52xx cn52xxp1;
+	struct cvmx_ciu_intx_sum0_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_sum0_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_sum0_cn38xx cn58xx;
+	struct cvmx_ciu_intx_sum0_cn38xx cn58xxp1;
+	struct cvmx_ciu_intx_sum0_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_sum0_cn52xx cn63xx;
+	struct cvmx_ciu_intx_sum0_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_sum0_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_sum0_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_sum0_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_sum0_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_sum0 cvmx_ciu_intx_sum0_t;
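+
+/*
+ * Usage sketch (illustrative only): per the mapping above, the CIU_INTX_SUM0
+ * index for core N and interrupt level IPy (y = 2 or 3) is N * 2 + (y - 2),
+ * e.g. PP1/IP3 -> CIU_INT3_SUM0. Assuming the csr_rd() accessor and the
+ * CVMX_CIU_INTX_SUM0() address macro:
+ *
+ *	cvmx_ciu_intx_sum0_t sum0;
+ *
+ *	sum0.u64 = csr_rd(CVMX_CIU_INTX_SUM0(core * 2 + (ip - 2)));
+ *	if (sum0.s.timer)
+ *		... one of the four CIU timers has fired ...
+ */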
+
+/**
+ * cvmx_ciu_int#_sum4
+ *
+ * CIU_INT0_SUM4:   PP0  /IP4
+ * CIU_INT1_SUM4:   PP1  /IP4
+ * - ...
+ * CIU_INT3_SUM4:   PP3  /IP4
+ */
+union cvmx_ciu_intx_sum4 {
+	u64 u64;
+	struct cvmx_ciu_intx_sum4_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_intx_sum4_cn50xx {
+		u64 reserved_59_63 : 5;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 reserved_47_47 : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn50xx;
+	struct cvmx_ciu_intx_sum4_cn52xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn52xx;
+	struct cvmx_ciu_intx_sum4_cn52xx cn52xxp1;
+	struct cvmx_ciu_intx_sum4_cn56xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn56xx;
+	struct cvmx_ciu_intx_sum4_cn56xx cn56xxp1;
+	struct cvmx_ciu_intx_sum4_cn58xx {
+		u64 reserved_56_63 : 8;
+		u64 timer : 4;
+		u64 key_zero : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn58xx;
+	struct cvmx_ciu_intx_sum4_cn58xx cn58xxp1;
+	struct cvmx_ciu_intx_sum4_cn61xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn61xx;
+	struct cvmx_ciu_intx_sum4_cn52xx cn63xx;
+	struct cvmx_ciu_intx_sum4_cn52xx cn63xxp1;
+	struct cvmx_ciu_intx_sum4_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_intx_sum4_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_intx_sum4_cn70xx cn70xxp1;
+	struct cvmx_ciu_intx_sum4_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_intx_sum4 cvmx_ciu_intx_sum4_t;
+
+/**
+ * cvmx_ciu_int33_sum0
+ *
+ * This register is associated with CIU_INTX_SUM0. It is reserved on o70 for future expansion.
+ *
+ */
+union cvmx_ciu_int33_sum0 {
+	u64 u64;
+	struct cvmx_ciu_int33_sum0_s {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} s;
+	struct cvmx_ciu_int33_sum0_s cn61xx;
+	struct cvmx_ciu_int33_sum0_cn63xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 reserved_57_58 : 2;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 reserved_51_51 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn63xx;
+	struct cvmx_ciu_int33_sum0_cn63xx cn63xxp1;
+	struct cvmx_ciu_int33_sum0_cn66xx {
+		u64 bootdma : 1;
+		u64 mii : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 reserved_57_57 : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn66xx;
+	struct cvmx_ciu_int33_sum0_cn70xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 reserved_56_56 : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 gmx_drp : 2;
+		u64 reserved_46_47 : 2;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cn70xx;
+	struct cvmx_ciu_int33_sum0_cn70xx cn70xxp1;
+	struct cvmx_ciu_int33_sum0_cnf71xx {
+		u64 bootdma : 1;
+		u64 reserved_62_62 : 1;
+		u64 ipdppthr : 1;
+		u64 powiq : 1;
+		u64 twsi2 : 1;
+		u64 mpi : 1;
+		u64 pcm : 1;
+		u64 usb : 1;
+		u64 timer : 4;
+		u64 sum2 : 1;
+		u64 ipd_drp : 1;
+		u64 reserved_49_49 : 1;
+		u64 gmx_drp : 1;
+		u64 trace : 1;
+		u64 rml : 1;
+		u64 twsi : 1;
+		u64 wdog_sum : 1;
+		u64 pci_msi : 4;
+		u64 pci_int : 4;
+		u64 uart : 2;
+		u64 mbox : 2;
+		u64 gpio : 16;
+		u64 workq : 16;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_int33_sum0 cvmx_ciu_int33_sum0_t;
+
+/**
+ * cvmx_ciu_int_dbg_sel
+ */
+union cvmx_ciu_int_dbg_sel {
+	u64 u64;
+	struct cvmx_ciu_int_dbg_sel_s {
+		u64 reserved_19_63 : 45;
+		u64 sel : 3;
+		u64 reserved_10_15 : 6;
+		u64 irq : 2;
+		u64 reserved_5_7 : 3;
+		u64 pp : 5;
+	} s;
+	struct cvmx_ciu_int_dbg_sel_cn61xx {
+		u64 reserved_19_63 : 45;
+		u64 sel : 3;
+		u64 reserved_10_15 : 6;
+		u64 irq : 2;
+		u64 reserved_4_7 : 4;
+		u64 pp : 4;
+	} cn61xx;
+	struct cvmx_ciu_int_dbg_sel_cn63xx {
+		u64 reserved_19_63 : 45;
+		u64 sel : 3;
+		u64 reserved_10_15 : 6;
+		u64 irq : 2;
+		u64 reserved_3_7 : 5;
+		u64 pp : 3;
+	} cn63xx;
+	struct cvmx_ciu_int_dbg_sel_cn61xx cn66xx;
+	struct cvmx_ciu_int_dbg_sel_s cn68xx;
+	struct cvmx_ciu_int_dbg_sel_s cn68xxp1;
+	struct cvmx_ciu_int_dbg_sel_cn61xx cnf71xx;
+};
+
+typedef union cvmx_ciu_int_dbg_sel cvmx_ciu_int_dbg_sel_t;
+
+/**
+ * cvmx_ciu_int_sum1
+ *
+ * CIU_INT_SUM1 is kept for backward compatibility. Refer to CIU_SUM1_PPX_IPx,
+ * which is the register that should be used instead.
+ */
+union cvmx_ciu_int_sum1 {
+	u64 u64;
+	struct cvmx_ciu_int_sum1_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 reserved_50_50 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 wdog : 16;
+	} s;
+	struct cvmx_ciu_int_sum1_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 wdog : 1;
+	} cn30xx;
+	struct cvmx_ciu_int_sum1_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 wdog : 2;
+	} cn31xx;
+	struct cvmx_ciu_int_sum1_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 wdog : 16;
+	} cn38xx;
+	struct cvmx_ciu_int_sum1_cn38xx cn38xxp2;
+	struct cvmx_ciu_int_sum1_cn31xx cn50xx;
+	struct cvmx_ciu_int_sum1_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xx;
+	struct cvmx_ciu_int_sum1_cn52xxp1 {
+		u64 reserved_19_63 : 45;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 uart2 : 1;
+		u64 reserved_4_15 : 12;
+		u64 wdog : 4;
+	} cn52xxp1;
+	struct cvmx_ciu_int_sum1_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 wdog : 12;
+	} cn56xx;
+	struct cvmx_ciu_int_sum1_cn56xx cn56xxp1;
+	struct cvmx_ciu_int_sum1_cn38xx cn58xx;
+	struct cvmx_ciu_int_sum1_cn38xx cn58xxp1;
+	struct cvmx_ciu_int_sum1_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_int_sum1_cn63xx {
+		u64 rst : 1;
+		u64 reserved_57_62 : 6;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 srio1 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_37_45 : 9;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_6_17 : 12;
+		u64 wdog : 6;
+	} cn63xx;
+	struct cvmx_ciu_int_sum1_cn63xx cn63xxp1;
+	struct cvmx_ciu_int_sum1_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_int_sum1_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_int_sum1_cn70xx cn70xxp1;
+	struct cvmx_ciu_int_sum1_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_37_46 : 10;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_int_sum1 cvmx_ciu_int_sum1_t;
+
+/**
+ * cvmx_ciu_intr_slowdown
+ */
+union cvmx_ciu_intr_slowdown {
+	u64 u64;
+	struct cvmx_ciu_intr_slowdown_s {
+		u64 reserved_3_63 : 61;
+		u64 ctl : 3;
+	} s;
+	struct cvmx_ciu_intr_slowdown_s cn70xx;
+	struct cvmx_ciu_intr_slowdown_s cn70xxp1;
+};
+
+typedef union cvmx_ciu_intr_slowdown cvmx_ciu_intr_slowdown_t;
+
+/**
+ * cvmx_ciu_mbox_clr#
+ */
+union cvmx_ciu_mbox_clrx {
+	u64 u64;
+	struct cvmx_ciu_mbox_clrx_s {
+		u64 reserved_32_63 : 32;
+		u64 bits : 32;
+	} s;
+	struct cvmx_ciu_mbox_clrx_s cn30xx;
+	struct cvmx_ciu_mbox_clrx_s cn31xx;
+	struct cvmx_ciu_mbox_clrx_s cn38xx;
+	struct cvmx_ciu_mbox_clrx_s cn38xxp2;
+	struct cvmx_ciu_mbox_clrx_s cn50xx;
+	struct cvmx_ciu_mbox_clrx_s cn52xx;
+	struct cvmx_ciu_mbox_clrx_s cn52xxp1;
+	struct cvmx_ciu_mbox_clrx_s cn56xx;
+	struct cvmx_ciu_mbox_clrx_s cn56xxp1;
+	struct cvmx_ciu_mbox_clrx_s cn58xx;
+	struct cvmx_ciu_mbox_clrx_s cn58xxp1;
+	struct cvmx_ciu_mbox_clrx_s cn61xx;
+	struct cvmx_ciu_mbox_clrx_s cn63xx;
+	struct cvmx_ciu_mbox_clrx_s cn63xxp1;
+	struct cvmx_ciu_mbox_clrx_s cn66xx;
+	struct cvmx_ciu_mbox_clrx_s cn68xx;
+	struct cvmx_ciu_mbox_clrx_s cn68xxp1;
+	struct cvmx_ciu_mbox_clrx_s cn70xx;
+	struct cvmx_ciu_mbox_clrx_s cn70xxp1;
+	struct cvmx_ciu_mbox_clrx_s cnf71xx;
+};
+
+typedef union cvmx_ciu_mbox_clrx cvmx_ciu_mbox_clrx_t;
+
+/**
+ * cvmx_ciu_mbox_set#
+ */
+union cvmx_ciu_mbox_setx {
+	u64 u64;
+	struct cvmx_ciu_mbox_setx_s {
+		u64 reserved_32_63 : 32;
+		u64 bits : 32;
+	} s;
+	struct cvmx_ciu_mbox_setx_s cn30xx;
+	struct cvmx_ciu_mbox_setx_s cn31xx;
+	struct cvmx_ciu_mbox_setx_s cn38xx;
+	struct cvmx_ciu_mbox_setx_s cn38xxp2;
+	struct cvmx_ciu_mbox_setx_s cn50xx;
+	struct cvmx_ciu_mbox_setx_s cn52xx;
+	struct cvmx_ciu_mbox_setx_s cn52xxp1;
+	struct cvmx_ciu_mbox_setx_s cn56xx;
+	struct cvmx_ciu_mbox_setx_s cn56xxp1;
+	struct cvmx_ciu_mbox_setx_s cn58xx;
+	struct cvmx_ciu_mbox_setx_s cn58xxp1;
+	struct cvmx_ciu_mbox_setx_s cn61xx;
+	struct cvmx_ciu_mbox_setx_s cn63xx;
+	struct cvmx_ciu_mbox_setx_s cn63xxp1;
+	struct cvmx_ciu_mbox_setx_s cn66xx;
+	struct cvmx_ciu_mbox_setx_s cn68xx;
+	struct cvmx_ciu_mbox_setx_s cn68xxp1;
+	struct cvmx_ciu_mbox_setx_s cn70xx;
+	struct cvmx_ciu_mbox_setx_s cn70xxp1;
+	struct cvmx_ciu_mbox_setx_s cnf71xx;
+};
+
+typedef union cvmx_ciu_mbox_setx cvmx_ciu_mbox_setx_t;
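+
+/*
+ * Usage sketch (illustrative only): the SET/CLR pair forms a simple
+ * cross-core doorbell. Assuming the csr_wr() accessor and the
+ * CVMX_CIU_MBOX_SETX()/CVMX_CIU_MBOX_CLRX() address macros:
+ *
+ *	csr_wr(CVMX_CIU_MBOX_SETX(core), 1);	(raise mailbox bit 0)
+ *	...					(target core takes the IRQ)
+ *	csr_wr(CVMX_CIU_MBOX_CLRX(core), 1);	(acknowledge the bit again)
+ */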
+
+/**
+ * cvmx_ciu_nmi
+ */
+union cvmx_ciu_nmi {
+	u64 u64;
+	struct cvmx_ciu_nmi_s {
+		u64 reserved_32_63 : 32;
+		u64 nmi : 32;
+	} s;
+	struct cvmx_ciu_nmi_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 nmi : 1;
+	} cn30xx;
+	struct cvmx_ciu_nmi_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 nmi : 2;
+	} cn31xx;
+	struct cvmx_ciu_nmi_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 nmi : 16;
+	} cn38xx;
+	struct cvmx_ciu_nmi_cn38xx cn38xxp2;
+	struct cvmx_ciu_nmi_cn31xx cn50xx;
+	struct cvmx_ciu_nmi_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 nmi : 4;
+	} cn52xx;
+	struct cvmx_ciu_nmi_cn52xx cn52xxp1;
+	struct cvmx_ciu_nmi_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 nmi : 12;
+	} cn56xx;
+	struct cvmx_ciu_nmi_cn56xx cn56xxp1;
+	struct cvmx_ciu_nmi_cn38xx cn58xx;
+	struct cvmx_ciu_nmi_cn38xx cn58xxp1;
+	struct cvmx_ciu_nmi_cn52xx cn61xx;
+	struct cvmx_ciu_nmi_cn63xx {
+		u64 reserved_6_63 : 58;
+		u64 nmi : 6;
+	} cn63xx;
+	struct cvmx_ciu_nmi_cn63xx cn63xxp1;
+	struct cvmx_ciu_nmi_cn66xx {
+		u64 reserved_10_63 : 54;
+		u64 nmi : 10;
+	} cn66xx;
+	struct cvmx_ciu_nmi_s cn68xx;
+	struct cvmx_ciu_nmi_s cn68xxp1;
+	struct cvmx_ciu_nmi_cn52xx cn70xx;
+	struct cvmx_ciu_nmi_cn52xx cn70xxp1;
+	struct cvmx_ciu_nmi_cn52xx cnf71xx;
+};
+
+typedef union cvmx_ciu_nmi cvmx_ciu_nmi_t;
+
+/**
+ * cvmx_ciu_pci_inta
+ */
+union cvmx_ciu_pci_inta {
+	u64 u64;
+	struct cvmx_ciu_pci_inta_s {
+		u64 reserved_2_63 : 62;
+		u64 intr : 2;
+	} s;
+	struct cvmx_ciu_pci_inta_s cn30xx;
+	struct cvmx_ciu_pci_inta_s cn31xx;
+	struct cvmx_ciu_pci_inta_s cn38xx;
+	struct cvmx_ciu_pci_inta_s cn38xxp2;
+	struct cvmx_ciu_pci_inta_s cn50xx;
+	struct cvmx_ciu_pci_inta_s cn52xx;
+	struct cvmx_ciu_pci_inta_s cn52xxp1;
+	struct cvmx_ciu_pci_inta_s cn56xx;
+	struct cvmx_ciu_pci_inta_s cn56xxp1;
+	struct cvmx_ciu_pci_inta_s cn58xx;
+	struct cvmx_ciu_pci_inta_s cn58xxp1;
+	struct cvmx_ciu_pci_inta_s cn61xx;
+	struct cvmx_ciu_pci_inta_s cn63xx;
+	struct cvmx_ciu_pci_inta_s cn63xxp1;
+	struct cvmx_ciu_pci_inta_s cn66xx;
+	struct cvmx_ciu_pci_inta_s cn68xx;
+	struct cvmx_ciu_pci_inta_s cn68xxp1;
+	struct cvmx_ciu_pci_inta_s cn70xx;
+	struct cvmx_ciu_pci_inta_s cn70xxp1;
+	struct cvmx_ciu_pci_inta_s cnf71xx;
+};
+
+typedef union cvmx_ciu_pci_inta cvmx_ciu_pci_inta_t;
+
+/**
+ * cvmx_ciu_pp_bist_stat
+ */
+union cvmx_ciu_pp_bist_stat {
+	u64 u64;
+	struct cvmx_ciu_pp_bist_stat_s {
+		u64 reserved_32_63 : 32;
+		u64 pp_bist : 32;
+	} s;
+	struct cvmx_ciu_pp_bist_stat_s cn68xx;
+	struct cvmx_ciu_pp_bist_stat_s cn68xxp1;
+};
+
+typedef union cvmx_ciu_pp_bist_stat cvmx_ciu_pp_bist_stat_t;
+
+/**
+ * cvmx_ciu_pp_dbg
+ */
+union cvmx_ciu_pp_dbg {
+	u64 u64;
+	struct cvmx_ciu_pp_dbg_s {
+		u64 reserved_48_63 : 16;
+		u64 ppdbg : 48;
+	} s;
+	struct cvmx_ciu_pp_dbg_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 ppdbg : 1;
+	} cn30xx;
+	struct cvmx_ciu_pp_dbg_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 ppdbg : 2;
+	} cn31xx;
+	struct cvmx_ciu_pp_dbg_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 ppdbg : 16;
+	} cn38xx;
+	struct cvmx_ciu_pp_dbg_cn38xx cn38xxp2;
+	struct cvmx_ciu_pp_dbg_cn31xx cn50xx;
+	struct cvmx_ciu_pp_dbg_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 ppdbg : 4;
+	} cn52xx;
+	struct cvmx_ciu_pp_dbg_cn52xx cn52xxp1;
+	struct cvmx_ciu_pp_dbg_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 ppdbg : 12;
+	} cn56xx;
+	struct cvmx_ciu_pp_dbg_cn56xx cn56xxp1;
+	struct cvmx_ciu_pp_dbg_cn38xx cn58xx;
+	struct cvmx_ciu_pp_dbg_cn38xx cn58xxp1;
+	struct cvmx_ciu_pp_dbg_cn52xx cn61xx;
+	struct cvmx_ciu_pp_dbg_cn63xx {
+		u64 reserved_6_63 : 58;
+		u64 ppdbg : 6;
+	} cn63xx;
+	struct cvmx_ciu_pp_dbg_cn63xx cn63xxp1;
+	struct cvmx_ciu_pp_dbg_cn66xx {
+		u64 reserved_10_63 : 54;
+		u64 ppdbg : 10;
+	} cn66xx;
+	struct cvmx_ciu_pp_dbg_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 ppdbg : 32;
+	} cn68xx;
+	struct cvmx_ciu_pp_dbg_cn68xx cn68xxp1;
+	struct cvmx_ciu_pp_dbg_cn52xx cn70xx;
+	struct cvmx_ciu_pp_dbg_cn52xx cn70xxp1;
+	struct cvmx_ciu_pp_dbg_cn38xx cn73xx;
+	struct cvmx_ciu_pp_dbg_s cn78xx;
+	struct cvmx_ciu_pp_dbg_s cn78xxp1;
+	struct cvmx_ciu_pp_dbg_cn52xx cnf71xx;
+	struct cvmx_ciu_pp_dbg_cn38xx cnf75xx;
+};
+
+typedef union cvmx_ciu_pp_dbg cvmx_ciu_pp_dbg_t;
+
+/**
+ * cvmx_ciu_pp_poke#
+ *
+ * CIU_PP_POKE for CIU_WDOG
+ *
+ */
+union cvmx_ciu_pp_pokex {
+	u64 u64;
+	struct cvmx_ciu_pp_pokex_s {
+		u64 poke : 64;
+	} s;
+	struct cvmx_ciu_pp_pokex_s cn30xx;
+	struct cvmx_ciu_pp_pokex_s cn31xx;
+	struct cvmx_ciu_pp_pokex_s cn38xx;
+	struct cvmx_ciu_pp_pokex_s cn38xxp2;
+	struct cvmx_ciu_pp_pokex_s cn50xx;
+	struct cvmx_ciu_pp_pokex_s cn52xx;
+	struct cvmx_ciu_pp_pokex_s cn52xxp1;
+	struct cvmx_ciu_pp_pokex_s cn56xx;
+	struct cvmx_ciu_pp_pokex_s cn56xxp1;
+	struct cvmx_ciu_pp_pokex_s cn58xx;
+	struct cvmx_ciu_pp_pokex_s cn58xxp1;
+	struct cvmx_ciu_pp_pokex_s cn61xx;
+	struct cvmx_ciu_pp_pokex_s cn63xx;
+	struct cvmx_ciu_pp_pokex_s cn63xxp1;
+	struct cvmx_ciu_pp_pokex_s cn66xx;
+	struct cvmx_ciu_pp_pokex_s cn68xx;
+	struct cvmx_ciu_pp_pokex_s cn68xxp1;
+	struct cvmx_ciu_pp_pokex_s cn70xx;
+	struct cvmx_ciu_pp_pokex_s cn70xxp1;
+	struct cvmx_ciu_pp_pokex_cn73xx {
+		u64 reserved_1_63 : 63;
+		u64 poke : 1;
+	} cn73xx;
+	struct cvmx_ciu_pp_pokex_cn73xx cn78xx;
+	struct cvmx_ciu_pp_pokex_cn73xx cn78xxp1;
+	struct cvmx_ciu_pp_pokex_s cnf71xx;
+	struct cvmx_ciu_pp_pokex_cn73xx cnf75xx;
+};
+
+typedef union cvmx_ciu_pp_pokex cvmx_ciu_pp_pokex_t;
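+
+/*
+ * Usage sketch (illustrative only): a write to CIU_PP_POKE(core) restarts
+ * the corresponding core's CIU_WDOG. Assuming the csr_wr() accessor and
+ * the CVMX_CIU_PP_POKEX() address macro:
+ *
+ *	csr_wr(CVMX_CIU_PP_POKEX(core), 1);	(any value pokes the WDOG)
+ */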
+
+/**
+ * cvmx_ciu_pp_rst
+ *
+ * This register contains the reset control for each core. A 1 holds a core in reset, a 0 releases
+ * it from reset. The register resets to all ones when REMOTE_BOOT is enabled, or to all ones
+ * excluding bit 0 when REMOTE_BOOT is disabled. Writes to this register should occur only when the
+ * CIU_PP_RST_PENDING register is cleared.
+ */
+union cvmx_ciu_pp_rst {
+	u64 u64;
+	struct cvmx_ciu_pp_rst_s {
+		u64 reserved_48_63 : 16;
+		u64 rst : 47;
+		u64 rst0 : 1;
+	} s;
+	struct cvmx_ciu_pp_rst_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 rst0 : 1;
+	} cn30xx;
+	struct cvmx_ciu_pp_rst_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 rst : 1;
+		u64 rst0 : 1;
+	} cn31xx;
+	struct cvmx_ciu_pp_rst_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 rst : 15;
+		u64 rst0 : 1;
+	} cn38xx;
+	struct cvmx_ciu_pp_rst_cn38xx cn38xxp2;
+	struct cvmx_ciu_pp_rst_cn31xx cn50xx;
+	struct cvmx_ciu_pp_rst_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 rst : 3;
+		u64 rst0 : 1;
+	} cn52xx;
+	struct cvmx_ciu_pp_rst_cn52xx cn52xxp1;
+	struct cvmx_ciu_pp_rst_cn56xx {
+		u64 reserved_12_63 : 52;
+		u64 rst : 11;
+		u64 rst0 : 1;
+	} cn56xx;
+	struct cvmx_ciu_pp_rst_cn56xx cn56xxp1;
+	struct cvmx_ciu_pp_rst_cn38xx cn58xx;
+	struct cvmx_ciu_pp_rst_cn38xx cn58xxp1;
+	struct cvmx_ciu_pp_rst_cn52xx cn61xx;
+	struct cvmx_ciu_pp_rst_cn63xx {
+		u64 reserved_6_63 : 58;
+		u64 rst : 5;
+		u64 rst0 : 1;
+	} cn63xx;
+	struct cvmx_ciu_pp_rst_cn63xx cn63xxp1;
+	struct cvmx_ciu_pp_rst_cn66xx {
+		u64 reserved_10_63 : 54;
+		u64 rst : 9;
+		u64 rst0 : 1;
+	} cn66xx;
+	struct cvmx_ciu_pp_rst_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 rst : 31;
+		u64 rst0 : 1;
+	} cn68xx;
+	struct cvmx_ciu_pp_rst_cn68xx cn68xxp1;
+	struct cvmx_ciu_pp_rst_cn52xx cn70xx;
+	struct cvmx_ciu_pp_rst_cn52xx cn70xxp1;
+	struct cvmx_ciu_pp_rst_cn38xx cn73xx;
+	struct cvmx_ciu_pp_rst_s cn78xx;
+	struct cvmx_ciu_pp_rst_s cn78xxp1;
+	struct cvmx_ciu_pp_rst_cn52xx cnf71xx;
+	struct cvmx_ciu_pp_rst_cn38xx cnf75xx;
+};
+
+typedef union cvmx_ciu_pp_rst cvmx_ciu_pp_rst_t;
+
+/**
+ * cvmx_ciu_pp_rst_pending
+ *
+ * This register contains the reset status for each core.
+ *
+ */
+union cvmx_ciu_pp_rst_pending {
+	u64 u64;
+	struct cvmx_ciu_pp_rst_pending_s {
+		u64 reserved_48_63 : 16;
+		u64 pend : 48;
+	} s;
+	struct cvmx_ciu_pp_rst_pending_s cn70xx;
+	struct cvmx_ciu_pp_rst_pending_s cn70xxp1;
+	struct cvmx_ciu_pp_rst_pending_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 pend : 16;
+	} cn73xx;
+	struct cvmx_ciu_pp_rst_pending_s cn78xx;
+	struct cvmx_ciu_pp_rst_pending_s cn78xxp1;
+	struct cvmx_ciu_pp_rst_pending_cn73xx cnf75xx;
+};
+
+typedef union cvmx_ciu_pp_rst_pending cvmx_ciu_pp_rst_pending_t;
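+
+/*
+ * Usage sketch (illustrative only): release core N from reset as described
+ * above, first waiting for CIU_PP_RST_PENDING to clear. This assumes the
+ * csr_rd()/csr_wr() accessors and the CVMX_CIU_PP_RST /
+ * CVMX_CIU_PP_RST_PENDING address macros:
+ *
+ *	cvmx_ciu_pp_rst_t rst;
+ *
+ *	while (csr_rd(CVMX_CIU_PP_RST_PENDING))
+ *		;				(wait for pending resets)
+ *	rst.u64 = csr_rd(CVMX_CIU_PP_RST);
+ *	rst.u64 &= ~(1ull << core);		(a 0 releases the core)
+ *	csr_wr(CVMX_CIU_PP_RST, rst.u64);
+ */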
+
+/**
+ * cvmx_ciu_qlm0
+ *
+ * Notes:
+ * This register is only reset by cold reset.
+ *
+ */
+union cvmx_ciu_qlm0 {
+	u64 u64;
+	struct cvmx_ciu_qlm0_s {
+		u64 g2bypass : 1;
+		u64 reserved_53_62 : 10;
+		u64 g2deemph : 5;
+		u64 reserved_45_47 : 3;
+		u64 g2margin : 5;
+		u64 reserved_32_39 : 8;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} s;
+	struct cvmx_ciu_qlm0_s cn61xx;
+	struct cvmx_ciu_qlm0_s cn63xx;
+	struct cvmx_ciu_qlm0_cn63xxp1 {
+		u64 reserved_32_63 : 32;
+		u64 txbypass : 1;
+		u64 reserved_20_30 : 11;
+		u64 txdeemph : 4;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} cn63xxp1;
+	struct cvmx_ciu_qlm0_s cn66xx;
+	struct cvmx_ciu_qlm0_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} cn68xx;
+	struct cvmx_ciu_qlm0_cn68xx cn68xxp1;
+	struct cvmx_ciu_qlm0_s cnf71xx;
+};
+
+typedef union cvmx_ciu_qlm0 cvmx_ciu_qlm0_t;
+
+/**
+ * cvmx_ciu_qlm1
+ *
+ * Notes:
+ * This register is only reset by cold reset.
+ *
+ */
+union cvmx_ciu_qlm1 {
+	u64 u64;
+	struct cvmx_ciu_qlm1_s {
+		u64 g2bypass : 1;
+		u64 reserved_53_62 : 10;
+		u64 g2deemph : 5;
+		u64 reserved_45_47 : 3;
+		u64 g2margin : 5;
+		u64 reserved_32_39 : 8;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} s;
+	struct cvmx_ciu_qlm1_s cn61xx;
+	struct cvmx_ciu_qlm1_s cn63xx;
+	struct cvmx_ciu_qlm1_cn63xxp1 {
+		u64 reserved_32_63 : 32;
+		u64 txbypass : 1;
+		u64 reserved_20_30 : 11;
+		u64 txdeemph : 4;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} cn63xxp1;
+	struct cvmx_ciu_qlm1_s cn66xx;
+	struct cvmx_ciu_qlm1_s cn68xx;
+	struct cvmx_ciu_qlm1_s cn68xxp1;
+	struct cvmx_ciu_qlm1_s cnf71xx;
+};
+
+typedef union cvmx_ciu_qlm1 cvmx_ciu_qlm1_t;
+
+/**
+ * cvmx_ciu_qlm2
+ *
+ * Notes:
+ * This register is only reset by cold reset.
+ *
+ */
+union cvmx_ciu_qlm2 {
+	u64 u64;
+	struct cvmx_ciu_qlm2_s {
+		u64 g2bypass : 1;
+		u64 reserved_53_62 : 10;
+		u64 g2deemph : 5;
+		u64 reserved_45_47 : 3;
+		u64 g2margin : 5;
+		u64 reserved_32_39 : 8;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} s;
+	struct cvmx_ciu_qlm2_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} cn61xx;
+	struct cvmx_ciu_qlm2_cn61xx cn63xx;
+	struct cvmx_ciu_qlm2_cn63xxp1 {
+		u64 reserved_32_63 : 32;
+		u64 txbypass : 1;
+		u64 reserved_20_30 : 11;
+		u64 txdeemph : 4;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} cn63xxp1;
+	struct cvmx_ciu_qlm2_cn61xx cn66xx;
+	struct cvmx_ciu_qlm2_s cn68xx;
+	struct cvmx_ciu_qlm2_s cn68xxp1;
+	struct cvmx_ciu_qlm2_cn61xx cnf71xx;
+};
+
+typedef union cvmx_ciu_qlm2 cvmx_ciu_qlm2_t;
+
+/**
+ * cvmx_ciu_qlm3
+ *
+ * Notes:
+ * This register is only reset by cold reset.
+ *
+ */
+union cvmx_ciu_qlm3 {
+	u64 u64;
+	struct cvmx_ciu_qlm3_s {
+		u64 g2bypass : 1;
+		u64 reserved_53_62 : 10;
+		u64 g2deemph : 5;
+		u64 reserved_45_47 : 3;
+		u64 g2margin : 5;
+		u64 reserved_32_39 : 8;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} s;
+	struct cvmx_ciu_qlm3_s cn68xx;
+	struct cvmx_ciu_qlm3_s cn68xxp1;
+};
+
+typedef union cvmx_ciu_qlm3 cvmx_ciu_qlm3_t;
+
+/**
+ * cvmx_ciu_qlm4
+ *
+ * Notes:
+ * This register is only reset by cold reset.
+ *
+ */
+union cvmx_ciu_qlm4 {
+	u64 u64;
+	struct cvmx_ciu_qlm4_s {
+		u64 g2bypass : 1;
+		u64 reserved_53_62 : 10;
+		u64 g2deemph : 5;
+		u64 reserved_45_47 : 3;
+		u64 g2margin : 5;
+		u64 reserved_32_39 : 8;
+		u64 txbypass : 1;
+		u64 reserved_21_30 : 10;
+		u64 txdeemph : 5;
+		u64 reserved_13_15 : 3;
+		u64 txmargin : 5;
+		u64 reserved_4_7 : 4;
+		u64 lane_en : 4;
+	} s;
+	struct cvmx_ciu_qlm4_s cn68xx;
+	struct cvmx_ciu_qlm4_s cn68xxp1;
+};
+
+typedef union cvmx_ciu_qlm4 cvmx_ciu_qlm4_t;
+
+/**
+ * cvmx_ciu_qlm_dcok
+ */
+union cvmx_ciu_qlm_dcok {
+	u64 u64;
+	struct cvmx_ciu_qlm_dcok_s {
+		u64 reserved_4_63 : 60;
+		u64 qlm_dcok : 4;
+	} s;
+	struct cvmx_ciu_qlm_dcok_cn52xx {
+		u64 reserved_2_63 : 62;
+		u64 qlm_dcok : 2;
+	} cn52xx;
+	struct cvmx_ciu_qlm_dcok_cn52xx cn52xxp1;
+	struct cvmx_ciu_qlm_dcok_s cn56xx;
+	struct cvmx_ciu_qlm_dcok_s cn56xxp1;
+};
+
+typedef union cvmx_ciu_qlm_dcok cvmx_ciu_qlm_dcok_t;
+
+/**
+ * cvmx_ciu_qlm_jtgc
+ */
+union cvmx_ciu_qlm_jtgc {
+	u64 u64;
+	struct cvmx_ciu_qlm_jtgc_s {
+		u64 reserved_17_63 : 47;
+		u64 bypass_ext : 1;
+		u64 reserved_11_15 : 5;
+		u64 clk_div : 3;
+		u64 reserved_7_7 : 1;
+		u64 mux_sel : 3;
+		u64 bypass : 4;
+	} s;
+	struct cvmx_ciu_qlm_jtgc_cn52xx {
+		u64 reserved_11_63 : 53;
+		u64 clk_div : 3;
+		u64 reserved_5_7 : 3;
+		u64 mux_sel : 1;
+		u64 reserved_2_3 : 2;
+		u64 bypass : 2;
+	} cn52xx;
+	struct cvmx_ciu_qlm_jtgc_cn52xx cn52xxp1;
+	struct cvmx_ciu_qlm_jtgc_cn56xx {
+		u64 reserved_11_63 : 53;
+		u64 clk_div : 3;
+		u64 reserved_6_7 : 2;
+		u64 mux_sel : 2;
+		u64 bypass : 4;
+	} cn56xx;
+	struct cvmx_ciu_qlm_jtgc_cn56xx cn56xxp1;
+	struct cvmx_ciu_qlm_jtgc_cn61xx {
+		u64 reserved_11_63 : 53;
+		u64 clk_div : 3;
+		u64 reserved_6_7 : 2;
+		u64 mux_sel : 2;
+		u64 reserved_3_3 : 1;
+		u64 bypass : 3;
+	} cn61xx;
+	struct cvmx_ciu_qlm_jtgc_cn61xx cn63xx;
+	struct cvmx_ciu_qlm_jtgc_cn61xx cn63xxp1;
+	struct cvmx_ciu_qlm_jtgc_cn61xx cn66xx;
+	struct cvmx_ciu_qlm_jtgc_s cn68xx;
+	struct cvmx_ciu_qlm_jtgc_s cn68xxp1;
+	struct cvmx_ciu_qlm_jtgc_cn61xx cnf71xx;
+};
+
+typedef union cvmx_ciu_qlm_jtgc cvmx_ciu_qlm_jtgc_t;
+
+/**
+ * cvmx_ciu_qlm_jtgd
+ */
+union cvmx_ciu_qlm_jtgd {
+	u64 u64;
+	struct cvmx_ciu_qlm_jtgd_s {
+		u64 capture : 1;
+		u64 shift : 1;
+		u64 update : 1;
+		u64 reserved_45_60 : 16;
+		u64 select : 5;
+		u64 reserved_37_39 : 3;
+		u64 shft_cnt : 5;
+		u64 shft_reg : 32;
+	} s;
+	struct cvmx_ciu_qlm_jtgd_cn52xx {
+		u64 capture : 1;
+		u64 shift : 1;
+		u64 update : 1;
+		u64 reserved_42_60 : 19;
+		u64 select : 2;
+		u64 reserved_37_39 : 3;
+		u64 shft_cnt : 5;
+		u64 shft_reg : 32;
+	} cn52xx;
+	struct cvmx_ciu_qlm_jtgd_cn52xx cn52xxp1;
+	struct cvmx_ciu_qlm_jtgd_cn56xx {
+		u64 capture : 1;
+		u64 shift : 1;
+		u64 update : 1;
+		u64 reserved_44_60 : 17;
+		u64 select : 4;
+		u64 reserved_37_39 : 3;
+		u64 shft_cnt : 5;
+		u64 shft_reg : 32;
+	} cn56xx;
+	struct cvmx_ciu_qlm_jtgd_cn56xxp1 {
+		u64 capture : 1;
+		u64 shift : 1;
+		u64 update : 1;
+		u64 reserved_37_60 : 24;
+		u64 shft_cnt : 5;
+		u64 shft_reg : 32;
+	} cn56xxp1;
+	struct cvmx_ciu_qlm_jtgd_cn61xx {
+		u64 capture : 1;
+		u64 shift : 1;
+		u64 update : 1;
+		u64 reserved_43_60 : 18;
+		u64 select : 3;
+		u64 reserved_37_39 : 3;
+		u64 shft_cnt : 5;
+		u64 shft_reg : 32;
+	} cn61xx;
+	struct cvmx_ciu_qlm_jtgd_cn61xx cn63xx;
+	struct cvmx_ciu_qlm_jtgd_cn61xx cn63xxp1;
+	struct cvmx_ciu_qlm_jtgd_cn61xx cn66xx;
+	struct cvmx_ciu_qlm_jtgd_s cn68xx;
+	struct cvmx_ciu_qlm_jtgd_s cn68xxp1;
+	struct cvmx_ciu_qlm_jtgd_cn61xx cnf71xx;
+};
+
+typedef union cvmx_ciu_qlm_jtgd cvmx_ciu_qlm_jtgd_t;
+
+/**
+ * cvmx_ciu_soft_bist
+ */
+union cvmx_ciu_soft_bist {
+	u64 u64;
+	struct cvmx_ciu_soft_bist_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_bist : 1;
+	} s;
+	struct cvmx_ciu_soft_bist_s cn30xx;
+	struct cvmx_ciu_soft_bist_s cn31xx;
+	struct cvmx_ciu_soft_bist_s cn38xx;
+	struct cvmx_ciu_soft_bist_s cn38xxp2;
+	struct cvmx_ciu_soft_bist_s cn50xx;
+	struct cvmx_ciu_soft_bist_s cn52xx;
+	struct cvmx_ciu_soft_bist_s cn52xxp1;
+	struct cvmx_ciu_soft_bist_s cn56xx;
+	struct cvmx_ciu_soft_bist_s cn56xxp1;
+	struct cvmx_ciu_soft_bist_s cn58xx;
+	struct cvmx_ciu_soft_bist_s cn58xxp1;
+	struct cvmx_ciu_soft_bist_s cn61xx;
+	struct cvmx_ciu_soft_bist_s cn63xx;
+	struct cvmx_ciu_soft_bist_s cn63xxp1;
+	struct cvmx_ciu_soft_bist_s cn66xx;
+	struct cvmx_ciu_soft_bist_s cn68xx;
+	struct cvmx_ciu_soft_bist_s cn68xxp1;
+	struct cvmx_ciu_soft_bist_s cn70xx;
+	struct cvmx_ciu_soft_bist_s cn70xxp1;
+	struct cvmx_ciu_soft_bist_s cnf71xx;
+};
+
+typedef union cvmx_ciu_soft_bist cvmx_ciu_soft_bist_t;
+
+/**
+ * cvmx_ciu_soft_prst
+ */
+union cvmx_ciu_soft_prst {
+	u64 u64;
+	struct cvmx_ciu_soft_prst_s {
+		u64 reserved_3_63 : 61;
+		u64 host64 : 1;
+		u64 npi : 1;
+		u64 soft_prst : 1;
+	} s;
+	struct cvmx_ciu_soft_prst_s cn30xx;
+	struct cvmx_ciu_soft_prst_s cn31xx;
+	struct cvmx_ciu_soft_prst_s cn38xx;
+	struct cvmx_ciu_soft_prst_s cn38xxp2;
+	struct cvmx_ciu_soft_prst_s cn50xx;
+	struct cvmx_ciu_soft_prst_cn52xx {
+		u64 reserved_1_63 : 63;
+		u64 soft_prst : 1;
+	} cn52xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn52xxp1;
+	struct cvmx_ciu_soft_prst_cn52xx cn56xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn56xxp1;
+	struct cvmx_ciu_soft_prst_s cn58xx;
+	struct cvmx_ciu_soft_prst_s cn58xxp1;
+	struct cvmx_ciu_soft_prst_cn52xx cn61xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn63xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn63xxp1;
+	struct cvmx_ciu_soft_prst_cn52xx cn66xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn68xx;
+	struct cvmx_ciu_soft_prst_cn52xx cn68xxp1;
+	struct cvmx_ciu_soft_prst_cn52xx cnf71xx;
+};
+
+typedef union cvmx_ciu_soft_prst cvmx_ciu_soft_prst_t;
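+
+/*
+ * Usage sketch (illustrative only): SOFT_PRST drives the PCIe PERST reset
+ * for the port; clearing SOFT_PRST releases the downstream device from
+ * reset. This assumes the csr_rd()/csr_wr() accessors and the
+ * CVMX_CIU_SOFT_PRST address macro:
+ *
+ *	cvmx_ciu_soft_prst_t prst;
+ *
+ *	prst.u64 = csr_rd(CVMX_CIU_SOFT_PRST);
+ *	prst.s.soft_prst = 0;			(deassert PERST to the slot)
+ *	csr_wr(CVMX_CIU_SOFT_PRST, prst.u64);
+ */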
+
+/**
+ * cvmx_ciu_soft_prst1
+ */
+union cvmx_ciu_soft_prst1 {
+	u64 u64;
+	struct cvmx_ciu_soft_prst1_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_prst : 1;
+	} s;
+	struct cvmx_ciu_soft_prst1_s cn52xx;
+	struct cvmx_ciu_soft_prst1_s cn52xxp1;
+	struct cvmx_ciu_soft_prst1_s cn56xx;
+	struct cvmx_ciu_soft_prst1_s cn56xxp1;
+	struct cvmx_ciu_soft_prst1_s cn61xx;
+	struct cvmx_ciu_soft_prst1_s cn63xx;
+	struct cvmx_ciu_soft_prst1_s cn63xxp1;
+	struct cvmx_ciu_soft_prst1_s cn66xx;
+	struct cvmx_ciu_soft_prst1_s cn68xx;
+	struct cvmx_ciu_soft_prst1_s cn68xxp1;
+	struct cvmx_ciu_soft_prst1_s cnf71xx;
+};
+
+typedef union cvmx_ciu_soft_prst1 cvmx_ciu_soft_prst1_t;
+
+/**
+ * cvmx_ciu_soft_prst2
+ */
+union cvmx_ciu_soft_prst2 {
+	u64 u64;
+	struct cvmx_ciu_soft_prst2_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_prst : 1;
+	} s;
+	struct cvmx_ciu_soft_prst2_s cn66xx;
+};
+
+typedef union cvmx_ciu_soft_prst2 cvmx_ciu_soft_prst2_t;
+
+/**
+ * cvmx_ciu_soft_prst3
+ */
+union cvmx_ciu_soft_prst3 {
+	u64 u64;
+	struct cvmx_ciu_soft_prst3_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_prst : 1;
+	} s;
+	struct cvmx_ciu_soft_prst3_s cn66xx;
+};
+
+typedef union cvmx_ciu_soft_prst3 cvmx_ciu_soft_prst3_t;
+
+/**
+ * cvmx_ciu_soft_rst
+ */
+union cvmx_ciu_soft_rst {
+	u64 u64;
+	struct cvmx_ciu_soft_rst_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_rst : 1;
+	} s;
+	struct cvmx_ciu_soft_rst_s cn30xx;
+	struct cvmx_ciu_soft_rst_s cn31xx;
+	struct cvmx_ciu_soft_rst_s cn38xx;
+	struct cvmx_ciu_soft_rst_s cn38xxp2;
+	struct cvmx_ciu_soft_rst_s cn50xx;
+	struct cvmx_ciu_soft_rst_s cn52xx;
+	struct cvmx_ciu_soft_rst_s cn52xxp1;
+	struct cvmx_ciu_soft_rst_s cn56xx;
+	struct cvmx_ciu_soft_rst_s cn56xxp1;
+	struct cvmx_ciu_soft_rst_s cn58xx;
+	struct cvmx_ciu_soft_rst_s cn58xxp1;
+	struct cvmx_ciu_soft_rst_s cn61xx;
+	struct cvmx_ciu_soft_rst_s cn63xx;
+	struct cvmx_ciu_soft_rst_s cn63xxp1;
+	struct cvmx_ciu_soft_rst_s cn66xx;
+	struct cvmx_ciu_soft_rst_s cn68xx;
+	struct cvmx_ciu_soft_rst_s cn68xxp1;
+	struct cvmx_ciu_soft_rst_s cnf71xx;
+};
+
+typedef union cvmx_ciu_soft_rst cvmx_ciu_soft_rst_t;
+
+/**
+ * cvmx_ciu_sum1_io#_int
+ *
+ * CIU_SUM1_IO0_INT is for PEM0, CIU_SUM1_IO1_INT is reserved.
+ *
+ */
+union cvmx_ciu_sum1_iox_int {
+	u64 u64;
+	struct cvmx_ciu_sum1_iox_int_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 reserved_10_16 : 7;
+		u64 wdog : 10;
+	} s;
+	struct cvmx_ciu_sum1_iox_int_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum1_iox_int_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_sum1_iox_int_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum1_iox_int_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum1_iox_int_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum1_iox_int cvmx_ciu_sum1_iox_int_t;
+
+/**
+ * cvmx_ciu_sum1_pp#_ip2
+ *
+ * SUM1 becomes per-IPx on o65/6 and later. Only field <40> DPI_DMA has a
+ * different value per PP(IP) in $CIU_SUM1_PPx_IPy, and <40> DPI_DMA is always
+ * zero in $CIU_SUM1_IOX_INT. All other fields ([63:41] and [39:0]) are
+ * identical for all PPs, with the same value as $CIU_INT_SUM1.
+ * A write to any IRQ's PTP field clears the PTP field of all IRQs.
+ */
+union cvmx_ciu_sum1_ppx_ip2 {
+	u64 u64;
+	struct cvmx_ciu_sum1_ppx_ip2_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 reserved_10_16 : 7;
+		u64 wdog : 10;
+	} s;
+	struct cvmx_ciu_sum1_ppx_ip2_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum1_ppx_ip2_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_sum1_ppx_ip2_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum1_ppx_ip2_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum1_ppx_ip2_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum1_ppx_ip2 cvmx_ciu_sum1_ppx_ip2_t;
+
+/**
+ * cvmx_ciu_sum1_pp#_ip3
+ *
+ * Notes:
+ * SUM1 becomes per-IPx on o65/6 and later. Only field <40> DPI_DMA has a
+ * different value per PP(IP) in $CIU_SUM1_PPx_IPy, and <40> DPI_DMA is always
+ * zero in $CIU_SUM1_IOX_INT. All other fields ([63:41] and [39:0]) are
+ * identical for all PPs, with the same value as $CIU_INT_SUM1.
+ * A write to any IRQ's PTP field clears the PTP field of all IRQs.
+ */
+union cvmx_ciu_sum1_ppx_ip3 {
+	u64 u64;
+	struct cvmx_ciu_sum1_ppx_ip3_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 reserved_10_16 : 7;
+		u64 wdog : 10;
+	} s;
+	struct cvmx_ciu_sum1_ppx_ip3_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum1_ppx_ip3_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_sum1_ppx_ip3_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum1_ppx_ip3_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum1_ppx_ip3_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum1_ppx_ip3 cvmx_ciu_sum1_ppx_ip3_t;
+
+/**
+ * cvmx_ciu_sum1_pp#_ip4
+ *
+ * Notes:
+ * SUM1 becomes per IPx in o65/6 and afterwards. Only field <40> DPI_DMA will have a
+ * different value per PP(IP) for $CIU_SUM1_PPx_IPy, and <40> DPI_DMA will always
+ * be zero for $CIU_SUM1_IOX_INT. All other fields ([63:41] and [39:0]) have identical
+ * values for the different PPs, the same value as $CIU_INT_SUM1.
+ * A write to any IRQ's PTP field will clear the PTP field for all IRQs.
+ */
+union cvmx_ciu_sum1_ppx_ip4 {
+	u64 u64;
+	struct cvmx_ciu_sum1_ppx_ip4_s {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 usb1 : 1;
+		u64 reserved_10_16 : 7;
+		u64 wdog : 10;
+	} s;
+	struct cvmx_ciu_sum1_ppx_ip4_cn61xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_4_17 : 14;
+		u64 wdog : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum1_ppx_ip4_cn66xx {
+		u64 rst : 1;
+		u64 reserved_62_62 : 1;
+		u64 srio3 : 1;
+		u64 srio2 : 1;
+		u64 reserved_57_59 : 3;
+		u64 dfm : 1;
+		u64 reserved_53_55 : 3;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 srio0 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_38_45 : 8;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 zip : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 mii1 : 1;
+		u64 reserved_10_17 : 8;
+		u64 wdog : 10;
+	} cn66xx;
+	struct cvmx_ciu_sum1_ppx_ip4_cn70xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_51_51 : 1;
+		u64 pem2 : 1;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 agl : 1;
+		u64 reserved_41_45 : 5;
+		u64 dpi_dma : 1;
+		u64 reserved_38_39 : 2;
+		u64 agx1 : 1;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 dfa : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_18_18 : 1;
+		u64 usb1 : 1;
+		u64 reserved_4_16 : 13;
+		u64 wdog : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum1_ppx_ip4_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum1_ppx_ip4_cnf71xx {
+		u64 rst : 1;
+		u64 reserved_53_62 : 10;
+		u64 lmc0 : 1;
+		u64 reserved_50_51 : 2;
+		u64 pem1 : 1;
+		u64 pem0 : 1;
+		u64 ptp : 1;
+		u64 reserved_41_46 : 6;
+		u64 dpi_dma : 1;
+		u64 reserved_37_39 : 3;
+		u64 agx0 : 1;
+		u64 dpi : 1;
+		u64 sli : 1;
+		u64 usb : 1;
+		u64 reserved_32_32 : 1;
+		u64 key : 1;
+		u64 rad : 1;
+		u64 tim : 1;
+		u64 reserved_28_28 : 1;
+		u64 pko : 1;
+		u64 pip : 1;
+		u64 ipd : 1;
+		u64 l2c : 1;
+		u64 pow : 1;
+		u64 fpa : 1;
+		u64 iob : 1;
+		u64 mio : 1;
+		u64 nand : 1;
+		u64 reserved_4_18 : 15;
+		u64 wdog : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum1_ppx_ip4 cvmx_ciu_sum1_ppx_ip4_t;
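+
+/*
+ * Example only (not part of the imported SDK header): a minimal sketch
+ * of testing a SUM1 interrupt source through the union above. It
+ * assumes the CVMX_CIU_SUM1_PPX_IP4() address macro (following this
+ * file's naming convention for CIU_SUM1_PP(x)_IP4) and the csr_rd()
+ * accessor used elsewhere in mach-octeon are available.
+ */
+static inline int cvmx_ciu_sum1_ip4_ptp_pending(unsigned long core)
+{
+	cvmx_ciu_sum1_ppx_ip4_t sum1;
+
+	/* Read the per-core summary and test the PTP source bit */
+	sum1.u64 = csr_rd(CVMX_CIU_SUM1_PPX_IP4(core));
+	return sum1.s.ptp;
+}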
+
+/**
+ * cvmx_ciu_sum2_io#_int
+ *
+ * CIU_SUM2_IO0_INT is for PEM0, CIU_SUM2_IO1_INT is reserved.
+ *
+ */
+union cvmx_ciu_sum2_iox_int {
+	u64 u64;
+	struct cvmx_ciu_sum2_iox_int_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_sum2_iox_int_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum2_iox_int_cn61xx cn66xx;
+	struct cvmx_ciu_sum2_iox_int_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum2_iox_int_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum2_iox_int_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum2_iox_int cvmx_ciu_sum2_iox_int_t;
+
+/**
+ * cvmx_ciu_sum2_pp#_ip2
+ *
+ * Only the TIMER field may have a different value per PP(IP).
+ * All other fields' values are identical for the different PPs.
+ */
+union cvmx_ciu_sum2_ppx_ip2 {
+	u64 u64;
+	struct cvmx_ciu_sum2_ppx_ip2_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_sum2_ppx_ip2_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum2_ppx_ip2_cn61xx cn66xx;
+	struct cvmx_ciu_sum2_ppx_ip2_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum2_ppx_ip2_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum2_ppx_ip2_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum2_ppx_ip2 cvmx_ciu_sum2_ppx_ip2_t;
+
+/**
+ * cvmx_ciu_sum2_pp#_ip3
+ *
+ * Notes:
+ * These SUM2 CSRs did not exist prior to pass 1.2. CIU_TIM4-9 did not exist prior to pass 1.2.
+ *
+ */
+union cvmx_ciu_sum2_ppx_ip3 {
+	u64 u64;
+	struct cvmx_ciu_sum2_ppx_ip3_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_sum2_ppx_ip3_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum2_ppx_ip3_cn61xx cn66xx;
+	struct cvmx_ciu_sum2_ppx_ip3_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum2_ppx_ip3_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum2_ppx_ip3_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum2_ppx_ip3 cvmx_ciu_sum2_ppx_ip3_t;
+
+/**
+ * cvmx_ciu_sum2_pp#_ip4
+ *
+ * Notes:
+ * These SUM2 CSRs did not exist prior to pass 1.2. CIU_TIM4-9 did not exist prior to pass 1.2.
+ *
+ */
+union cvmx_ciu_sum2_ppx_ip4 {
+	u64 u64;
+	struct cvmx_ciu_sum2_ppx_ip4_s {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_15_15 : 1;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_ciu_sum2_ppx_ip4_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_ciu_sum2_ppx_ip4_cn61xx cn66xx;
+	struct cvmx_ciu_sum2_ppx_ip4_cn70xx {
+		u64 reserved_20_63 : 44;
+		u64 bch : 1;
+		u64 agl_drp : 1;
+		u64 ocla : 1;
+		u64 sata : 1;
+		u64 reserved_10_15 : 6;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_ciu_sum2_ppx_ip4_cn70xx cn70xxp1;
+	struct cvmx_ciu_sum2_ppx_ip4_cnf71xx {
+		u64 reserved_15_63 : 49;
+		u64 endor : 2;
+		u64 eoi : 1;
+		u64 reserved_10_11 : 2;
+		u64 timer : 6;
+		u64 reserved_0_3 : 4;
+	} cnf71xx;
+};
+
+typedef union cvmx_ciu_sum2_ppx_ip4 cvmx_ciu_sum2_ppx_ip4_t;
+
+/**
+ * cvmx_ciu_tim#
+ *
+ * Notes:
+ * CIU_TIM4-9 did not exist prior to pass 1.2
+ *
+ */
+union cvmx_ciu_timx {
+	u64 u64;
+	struct cvmx_ciu_timx_s {
+		u64 reserved_37_63 : 27;
+		u64 one_shot : 1;
+		u64 len : 36;
+	} s;
+	struct cvmx_ciu_timx_s cn30xx;
+	struct cvmx_ciu_timx_s cn31xx;
+	struct cvmx_ciu_timx_s cn38xx;
+	struct cvmx_ciu_timx_s cn38xxp2;
+	struct cvmx_ciu_timx_s cn50xx;
+	struct cvmx_ciu_timx_s cn52xx;
+	struct cvmx_ciu_timx_s cn52xxp1;
+	struct cvmx_ciu_timx_s cn56xx;
+	struct cvmx_ciu_timx_s cn56xxp1;
+	struct cvmx_ciu_timx_s cn58xx;
+	struct cvmx_ciu_timx_s cn58xxp1;
+	struct cvmx_ciu_timx_s cn61xx;
+	struct cvmx_ciu_timx_s cn63xx;
+	struct cvmx_ciu_timx_s cn63xxp1;
+	struct cvmx_ciu_timx_s cn66xx;
+	struct cvmx_ciu_timx_s cn68xx;
+	struct cvmx_ciu_timx_s cn68xxp1;
+	struct cvmx_ciu_timx_s cn70xx;
+	struct cvmx_ciu_timx_s cn70xxp1;
+	struct cvmx_ciu_timx_s cnf71xx;
+};
+
+typedef union cvmx_ciu_timx cvmx_ciu_timx_t;
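+
+/*
+ * Example only (not part of the imported SDK header): a sketch of
+ * arming a CIU timer in one-shot mode. It assumes the CVMX_CIU_TIMX()
+ * address macro (per this file's naming convention) and the csr_wr()
+ * accessor used elsewhere in mach-octeon are available.
+ */
+static inline void cvmx_ciu_tim_start_one_shot(int tim, u64 len)
+{
+	cvmx_ciu_timx_t timx;
+
+	timx.u64 = 0;
+	timx.s.one_shot = 1;
+	timx.s.len = len & 0xfffffffffull;	/* LEN is a 36-bit field */
+	csr_wr(CVMX_CIU_TIMX(tim), timx.u64);
+}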
+
+/**
+ * cvmx_ciu_tim_multi_cast
+ *
+ * Notes:
+ * This register does not exist prior to pass 1.2 silicon. Those earlier chip passes operate as if
+ * EN==0.
+ */
+union cvmx_ciu_tim_multi_cast {
+	u64 u64;
+	struct cvmx_ciu_tim_multi_cast_s {
+		u64 reserved_1_63 : 63;
+		u64 en : 1;
+	} s;
+	struct cvmx_ciu_tim_multi_cast_s cn61xx;
+	struct cvmx_ciu_tim_multi_cast_s cn66xx;
+	struct cvmx_ciu_tim_multi_cast_s cn70xx;
+	struct cvmx_ciu_tim_multi_cast_s cn70xxp1;
+	struct cvmx_ciu_tim_multi_cast_s cnf71xx;
+};
+
+typedef union cvmx_ciu_tim_multi_cast cvmx_ciu_tim_multi_cast_t;
+
+/**
+ * cvmx_ciu_wdog#
+ */
+union cvmx_ciu_wdogx {
+	u64 u64;
+	struct cvmx_ciu_wdogx_s {
+		u64 reserved_46_63 : 18;
+		u64 gstopen : 1;
+		u64 dstop : 1;
+		u64 cnt : 24;
+		u64 len : 16;
+		u64 state : 2;
+		u64 mode : 2;
+	} s;
+	struct cvmx_ciu_wdogx_s cn30xx;
+	struct cvmx_ciu_wdogx_s cn31xx;
+	struct cvmx_ciu_wdogx_s cn38xx;
+	struct cvmx_ciu_wdogx_s cn38xxp2;
+	struct cvmx_ciu_wdogx_s cn50xx;
+	struct cvmx_ciu_wdogx_s cn52xx;
+	struct cvmx_ciu_wdogx_s cn52xxp1;
+	struct cvmx_ciu_wdogx_s cn56xx;
+	struct cvmx_ciu_wdogx_s cn56xxp1;
+	struct cvmx_ciu_wdogx_s cn58xx;
+	struct cvmx_ciu_wdogx_s cn58xxp1;
+	struct cvmx_ciu_wdogx_s cn61xx;
+	struct cvmx_ciu_wdogx_s cn63xx;
+	struct cvmx_ciu_wdogx_s cn63xxp1;
+	struct cvmx_ciu_wdogx_s cn66xx;
+	struct cvmx_ciu_wdogx_s cn68xx;
+	struct cvmx_ciu_wdogx_s cn68xxp1;
+	struct cvmx_ciu_wdogx_s cn70xx;
+	struct cvmx_ciu_wdogx_s cn70xxp1;
+	struct cvmx_ciu_wdogx_s cn73xx;
+	struct cvmx_ciu_wdogx_s cn78xx;
+	struct cvmx_ciu_wdogx_s cn78xxp1;
+	struct cvmx_ciu_wdogx_s cnf71xx;
+	struct cvmx_ciu_wdogx_s cnf75xx;
+};
+
+typedef union cvmx_ciu_wdogx cvmx_ciu_wdogx_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 07/50] mips: octeon: Add cvmx-dbg-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (5 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 06/50] mips: octeon: Add cvmx-ciu-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 08/50] mips: octeon: Add cvmx-dpi-defs.h " Stefan Roese
                   ` (45 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-dbg-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-dbg-defs.h  | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dbg-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-dbg-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-dbg-defs.h
new file mode 100644
index 0000000000..9f91feec18
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-dbg-defs.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon dbg.
+ */
+
+#ifndef __CVMX_DBG_DEFS_H__
+#define __CVMX_DBG_DEFS_H__
+
+#define CVMX_DBG_DATA (0x00011F00000001E8ull)
+
+/**
+ * cvmx_dbg_data
+ *
+ * DBG_DATA = Debug Data Register
+ *
+ * Value returned on the debug-data lines from the RSLs
+ */
+union cvmx_dbg_data {
+	u64 u64;
+	struct cvmx_dbg_data_s {
+		u64 reserved_23_63 : 41;
+		u64 c_mul : 5;
+		u64 dsel_ext : 1;
+		u64 data : 17;
+	} s;
+};
+
+typedef union cvmx_dbg_data cvmx_dbg_data_t;
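+
+/*
+ * Example only (not part of the imported SDK header): reading the
+ * debug-data lines through the union above, assuming the csr_rd()
+ * accessor used elsewhere in mach-octeon is available.
+ */
+static inline u64 cvmx_dbg_data_get(void)
+{
+	cvmx_dbg_data_t dbg;
+
+	dbg.u64 = csr_rd(CVMX_DBG_DATA);
+	return dbg.s.data;	/* 17-bit value from the RSL debug lines */
+}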
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 08/50] mips: octeon: Add cvmx-dpi-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (6 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 07/50] mips: octeon: Add cvmx-dbg-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 09/50] mips: octeon: Add cvmx-dtx-defs.h " Stefan Roese
                   ` (44 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-dpi-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-dpi-defs.h  | 1460 +++++++++++++++++
 1 file changed, 1460 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dpi-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-dpi-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-dpi-defs.h
new file mode 100644
index 0000000000..680989463b
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-dpi-defs.h
@@ -0,0 +1,1460 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon dpi.
+ */
+
+#ifndef __CVMX_DPI_DEFS_H__
+#define __CVMX_DPI_DEFS_H__
+
+#define CVMX_DPI_BIST_STATUS		     (0x0001DF0000000000ull)
+#define CVMX_DPI_CTL			     (0x0001DF0000000040ull)
+#define CVMX_DPI_DMAX_COUNTS(offset)	     (0x0001DF0000000300ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_DBELL(offset)	     (0x0001DF0000000200ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_ERR_RSP_STATUS(offset) (0x0001DF0000000A80ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_IBUFF_SADDR(offset)    (0x0001DF0000000280ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_IFLIGHT(offset)	     (0x0001DF0000000A00ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_NADDR(offset)	     (0x0001DF0000000380ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_REQBNK0(offset)	     (0x0001DF0000000400ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_REQBNK1(offset)	     (0x0001DF0000000480ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMAX_REQQ_CTL(offset)	     (0x0001DF0000000180ull + ((offset) & 7) * 8)
+#define CVMX_DPI_DMA_CONTROL		     (0x0001DF0000000048ull)
+#define CVMX_DPI_DMA_ENGX_EN(offset)	     (0x0001DF0000000080ull + ((offset) & 7) * 8)
+static inline u64 CVMX_DPI_DMA_PPX_CNT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001DF0000000B00ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001DF0000000B00ull + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001DF0000000C00ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001DF0000000C00ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001DF0000000C00ull + (offset) * 8;
+	}
+	return 0x0001DF0000000C00ull + (offset) * 8;
+}
+
+#define CVMX_DPI_DMA_PP_INT	      (0x0001DF0000000038ull)
+#define CVMX_DPI_ECC_CTL	      (0x0001DF0000000018ull)
+#define CVMX_DPI_ECC_INT	      (0x0001DF0000000020ull)
+#define CVMX_DPI_ENGX_BUF(offset)     (0x0001DF0000000880ull + ((offset) & 7) * 8)
+#define CVMX_DPI_INFO_REG	      (0x0001DF0000000980ull)
+#define CVMX_DPI_INT_EN		      (0x0001DF0000000010ull)
+#define CVMX_DPI_INT_REG	      (0x0001DF0000000008ull)
+#define CVMX_DPI_NCBX_CFG(offset)     (0x0001DF0000000800ull)
+#define CVMX_DPI_NCB_CTL	      (0x0001DF0000000028ull)
+#define CVMX_DPI_PINT_INFO	      (0x0001DF0000000830ull)
+#define CVMX_DPI_PKT_ERR_RSP	      (0x0001DF0000000078ull)
+#define CVMX_DPI_REQ_ERR_RSP	      (0x0001DF0000000058ull)
+#define CVMX_DPI_REQ_ERR_RSP_EN	      (0x0001DF0000000068ull)
+#define CVMX_DPI_REQ_ERR_RST	      (0x0001DF0000000060ull)
+#define CVMX_DPI_REQ_ERR_RST_EN	      (0x0001DF0000000070ull)
+#define CVMX_DPI_REQ_ERR_SKIP_COMP    (0x0001DF0000000838ull)
+#define CVMX_DPI_REQ_GBL_EN	      (0x0001DF0000000050ull)
+#define CVMX_DPI_SLI_PRTX_CFG(offset) (0x0001DF0000000900ull + ((offset) & 3) * 8)
+static inline u64 CVMX_DPI_SLI_PRTX_ERR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001DF0000000920ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001DF0000000920ull + (offset) * 8;
+		return 0x0001DF0000000920ull + (offset) * 8;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+
+		if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
+			return 0x0001DF0000000928ull + (offset) * 8;
+
+		if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS2))
+			return 0x0001DF0000000920ull + (offset) * 8;
+		return 0x0001DF0000000920ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0001DF0000000920ull + (offset) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001DF0000000928ull + (offset) * 8;
+	}
+	return 0x0001DF0000000920ull + (offset) * 8;
+}
+
+#define CVMX_DPI_SLI_PRTX_ERR_INFO(offset) (0x0001DF0000000940ull + ((offset) & 3) * 8)
+#define CVMX_DPI_SRIO_RX_BELLX(offset)	   (0x0001DF0000080200ull + ((offset) & 31) * 8)
+#define CVMX_DPI_SRIO_RX_BELL_SEQX(offset) (0x0001DF0000080400ull + ((offset) & 31) * 8)
+#define CVMX_DPI_SWA_Q_VMID		   (0x0001DF0000000030ull)
+
+/**
+ * cvmx_dpi_bist_status
+ *
+ * This is the built-in self-test (BIST) status register. Each bit is the BIST result of an
+ * individual memory (per bit, 0 = pass and 1 = fail).
+ */
+union cvmx_dpi_bist_status {
+	u64 u64;
+	struct cvmx_dpi_bist_status_s {
+		u64 reserved_57_63 : 7;
+		u64 bist : 57;
+	} s;
+	struct cvmx_dpi_bist_status_cn61xx {
+		u64 reserved_47_63 : 17;
+		u64 bist : 47;
+	} cn61xx;
+	struct cvmx_dpi_bist_status_cn63xx {
+		u64 reserved_45_63 : 19;
+		u64 bist : 45;
+	} cn63xx;
+	struct cvmx_dpi_bist_status_cn63xxp1 {
+		u64 reserved_37_63 : 27;
+		u64 bist : 37;
+	} cn63xxp1;
+	struct cvmx_dpi_bist_status_cn61xx cn66xx;
+	struct cvmx_dpi_bist_status_cn63xx cn68xx;
+	struct cvmx_dpi_bist_status_cn63xx cn68xxp1;
+	struct cvmx_dpi_bist_status_cn61xx cn70xx;
+	struct cvmx_dpi_bist_status_cn61xx cn70xxp1;
+	struct cvmx_dpi_bist_status_s cn73xx;
+	struct cvmx_dpi_bist_status_s cn78xx;
+	struct cvmx_dpi_bist_status_cn78xxp1 {
+		u64 reserved_51_63 : 13;
+		u64 bist : 51;
+	} cn78xxp1;
+	struct cvmx_dpi_bist_status_cn61xx cnf71xx;
+	struct cvmx_dpi_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_dpi_bist_status cvmx_dpi_bist_status_t;
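+
+/*
+ * Example only (not part of the imported SDK header): since each BIST
+ * bit reads 0 = pass / 1 = fail, any nonzero value indicates at least
+ * one failing memory. Assumes the csr_rd() accessor used elsewhere in
+ * mach-octeon is available.
+ */
+static inline int cvmx_dpi_bist_failed(void)
+{
+	cvmx_dpi_bist_status_t bist;
+
+	bist.u64 = csr_rd(CVMX_DPI_BIST_STATUS);
+	return bist.s.bist != 0;
+}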
+
+/**
+ * cvmx_dpi_ctl
+ *
+ * This register provides the enable bit for the DMA and packet state machines.
+ *
+ */
+union cvmx_dpi_ctl {
+	u64 u64;
+	struct cvmx_dpi_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 clk : 1;
+		u64 en : 1;
+	} s;
+	struct cvmx_dpi_ctl_cn61xx {
+		u64 reserved_1_63 : 63;
+		u64 en : 1;
+	} cn61xx;
+	struct cvmx_dpi_ctl_s cn63xx;
+	struct cvmx_dpi_ctl_s cn63xxp1;
+	struct cvmx_dpi_ctl_s cn66xx;
+	struct cvmx_dpi_ctl_s cn68xx;
+	struct cvmx_dpi_ctl_s cn68xxp1;
+	struct cvmx_dpi_ctl_cn61xx cn70xx;
+	struct cvmx_dpi_ctl_cn61xx cn70xxp1;
+	struct cvmx_dpi_ctl_cn61xx cn73xx;
+	struct cvmx_dpi_ctl_cn61xx cn78xx;
+	struct cvmx_dpi_ctl_cn61xx cn78xxp1;
+	struct cvmx_dpi_ctl_cn61xx cnf71xx;
+	struct cvmx_dpi_ctl_cn61xx cnf75xx;
+};
+
+typedef union cvmx_dpi_ctl cvmx_dpi_ctl_t;
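+
+/*
+ * Example only (not part of the imported SDK header): setting the EN
+ * bit described above via read-modify-write so the other bits are
+ * preserved. Assumes the csr_rd()/csr_wr() accessors used elsewhere
+ * in mach-octeon are available.
+ */
+static inline void cvmx_dpi_enable(void)
+{
+	cvmx_dpi_ctl_t ctl;
+
+	ctl.u64 = csr_rd(CVMX_DPI_CTL);
+	ctl.s.en = 1;	/* enable the DMA and packet state machines */
+	csr_wr(CVMX_DPI_CTL, ctl.u64);
+}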
+
+/**
+ * cvmx_dpi_dma#_counts
+ *
+ * These registers provide values for determining the number of instructions in the local
+ * instruction FIFO.
+ */
+union cvmx_dpi_dmax_counts {
+	u64 u64;
+	struct cvmx_dpi_dmax_counts_s {
+		u64 reserved_39_63 : 25;
+		u64 fcnt : 7;
+		u64 dbell : 32;
+	} s;
+	struct cvmx_dpi_dmax_counts_s cn61xx;
+	struct cvmx_dpi_dmax_counts_s cn63xx;
+	struct cvmx_dpi_dmax_counts_s cn63xxp1;
+	struct cvmx_dpi_dmax_counts_s cn66xx;
+	struct cvmx_dpi_dmax_counts_s cn68xx;
+	struct cvmx_dpi_dmax_counts_s cn68xxp1;
+	struct cvmx_dpi_dmax_counts_s cn70xx;
+	struct cvmx_dpi_dmax_counts_s cn70xxp1;
+	struct cvmx_dpi_dmax_counts_s cn73xx;
+	struct cvmx_dpi_dmax_counts_s cn78xx;
+	struct cvmx_dpi_dmax_counts_s cn78xxp1;
+	struct cvmx_dpi_dmax_counts_s cnf71xx;
+	struct cvmx_dpi_dmax_counts_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_counts cvmx_dpi_dmax_counts_t;
+
+/**
+ * cvmx_dpi_dma#_dbell
+ *
+ * This is the doorbell register for the eight DMA instruction queues.
+ *
+ */
+union cvmx_dpi_dmax_dbell {
+	u64 u64;
+	struct cvmx_dpi_dmax_dbell_s {
+		u64 reserved_16_63 : 48;
+		u64 dbell : 16;
+	} s;
+	struct cvmx_dpi_dmax_dbell_s cn61xx;
+	struct cvmx_dpi_dmax_dbell_s cn63xx;
+	struct cvmx_dpi_dmax_dbell_s cn63xxp1;
+	struct cvmx_dpi_dmax_dbell_s cn66xx;
+	struct cvmx_dpi_dmax_dbell_s cn68xx;
+	struct cvmx_dpi_dmax_dbell_s cn68xxp1;
+	struct cvmx_dpi_dmax_dbell_s cn70xx;
+	struct cvmx_dpi_dmax_dbell_s cn70xxp1;
+	struct cvmx_dpi_dmax_dbell_s cn73xx;
+	struct cvmx_dpi_dmax_dbell_s cn78xx;
+	struct cvmx_dpi_dmax_dbell_s cn78xxp1;
+	struct cvmx_dpi_dmax_dbell_s cnf71xx;
+	struct cvmx_dpi_dmax_dbell_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_dbell cvmx_dpi_dmax_dbell_t;
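+
+/*
+ * Example only (not part of the imported SDK header): ringing the
+ * doorbell of one of the eight DMA instruction queues. This sketch
+ * only illustrates the union/macro usage; it assumes the csr_wr()
+ * accessor used elsewhere in mach-octeon is available.
+ */
+static inline void cvmx_dpi_dma_ring_dbell(int queue, unsigned int words)
+{
+	cvmx_dpi_dmax_dbell_t dbell;
+
+	dbell.u64 = 0;
+	dbell.s.dbell = words;	/* DBELL is a 16-bit count */
+	csr_wr(CVMX_DPI_DMAX_DBELL(queue), dbell.u64);
+}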
+
+/**
+ * cvmx_dpi_dma#_err_rsp_status
+ */
+union cvmx_dpi_dmax_err_rsp_status {
+	u64 u64;
+	struct cvmx_dpi_dmax_err_rsp_status_s {
+		u64 reserved_6_63 : 58;
+		u64 status : 6;
+	} s;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn61xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn66xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn68xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn68xxp1;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn70xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn70xxp1;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn73xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn78xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cn78xxp1;
+	struct cvmx_dpi_dmax_err_rsp_status_s cnf71xx;
+	struct cvmx_dpi_dmax_err_rsp_status_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_err_rsp_status cvmx_dpi_dmax_err_rsp_status_t;
+
+/**
+ * cvmx_dpi_dma#_ibuff_saddr
+ *
+ * These registers provide the address to start reading instructions for the eight DMA
+ * instruction queues.
+ */
+union cvmx_dpi_dmax_ibuff_saddr {
+	u64 u64;
+	struct cvmx_dpi_dmax_ibuff_saddr_s {
+		u64 reserved_62_63 : 2;
+		u64 csize : 14;
+		u64 reserved_0_47 : 48;
+	} s;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx {
+		u64 reserved_62_63 : 2;
+		u64 csize : 14;
+		u64 reserved_41_47 : 7;
+		u64 idle : 1;
+		u64 reserved_36_39 : 4;
+		u64 saddr : 29;
+		u64 reserved_0_6 : 7;
+	} cn61xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cn63xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cn63xxp1;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cn66xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn68xx {
+		u64 reserved_62_63 : 2;
+		u64 csize : 14;
+		u64 reserved_41_47 : 7;
+		u64 idle : 1;
+		u64 saddr : 33;
+		u64 reserved_0_6 : 7;
+	} cn68xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn68xx cn68xxp1;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cn70xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cn70xxp1;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn73xx {
+		u64 idle : 1;
+		u64 reserved_62_62 : 1;
+		u64 csize : 14;
+		u64 reserved_42_47 : 6;
+		u64 saddr : 35;
+		u64 reserved_0_6 : 7;
+	} cn73xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn73xx cn78xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn73xx cn78xxp1;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn61xx cnf71xx;
+	struct cvmx_dpi_dmax_ibuff_saddr_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_ibuff_saddr cvmx_dpi_dmax_ibuff_saddr_t;
+
+/**
+ * cvmx_dpi_dma#_iflight
+ */
+union cvmx_dpi_dmax_iflight {
+	u64 u64;
+	struct cvmx_dpi_dmax_iflight_s {
+		u64 reserved_3_63 : 61;
+		u64 cnt : 3;
+	} s;
+	struct cvmx_dpi_dmax_iflight_s cn61xx;
+	struct cvmx_dpi_dmax_iflight_s cn66xx;
+	struct cvmx_dpi_dmax_iflight_s cn68xx;
+	struct cvmx_dpi_dmax_iflight_s cn68xxp1;
+	struct cvmx_dpi_dmax_iflight_s cn70xx;
+	struct cvmx_dpi_dmax_iflight_s cn70xxp1;
+	struct cvmx_dpi_dmax_iflight_s cn73xx;
+	struct cvmx_dpi_dmax_iflight_s cn78xx;
+	struct cvmx_dpi_dmax_iflight_s cn78xxp1;
+	struct cvmx_dpi_dmax_iflight_s cnf71xx;
+	struct cvmx_dpi_dmax_iflight_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_iflight cvmx_dpi_dmax_iflight_t;
+
+/**
+ * cvmx_dpi_dma#_naddr
+ *
+ * These registers provide the L2C addresses to read the next Ichunk data.
+ *
+ */
+union cvmx_dpi_dmax_naddr {
+	u64 u64;
+	struct cvmx_dpi_dmax_naddr_s {
+		u64 reserved_42_63 : 22;
+		u64 addr : 42;
+	} s;
+	struct cvmx_dpi_dmax_naddr_cn61xx {
+		u64 reserved_36_63 : 28;
+		u64 addr : 36;
+	} cn61xx;
+	struct cvmx_dpi_dmax_naddr_cn61xx cn63xx;
+	struct cvmx_dpi_dmax_naddr_cn61xx cn63xxp1;
+	struct cvmx_dpi_dmax_naddr_cn61xx cn66xx;
+	struct cvmx_dpi_dmax_naddr_cn68xx {
+		u64 reserved_40_63 : 24;
+		u64 addr : 40;
+	} cn68xx;
+	struct cvmx_dpi_dmax_naddr_cn68xx cn68xxp1;
+	struct cvmx_dpi_dmax_naddr_cn61xx cn70xx;
+	struct cvmx_dpi_dmax_naddr_cn61xx cn70xxp1;
+	struct cvmx_dpi_dmax_naddr_s cn73xx;
+	struct cvmx_dpi_dmax_naddr_s cn78xx;
+	struct cvmx_dpi_dmax_naddr_s cn78xxp1;
+	struct cvmx_dpi_dmax_naddr_cn61xx cnf71xx;
+	struct cvmx_dpi_dmax_naddr_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_naddr cvmx_dpi_dmax_naddr_t;
+
+/**
+ * cvmx_dpi_dma#_reqbnk0
+ *
+ * These registers provide the current contents of the request state machine, bank 0.
+ *
+ */
+union cvmx_dpi_dmax_reqbnk0 {
+	u64 u64;
+	struct cvmx_dpi_dmax_reqbnk0_s {
+		u64 state : 64;
+	} s;
+	struct cvmx_dpi_dmax_reqbnk0_s cn61xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn63xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn63xxp1;
+	struct cvmx_dpi_dmax_reqbnk0_s cn66xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn68xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn68xxp1;
+	struct cvmx_dpi_dmax_reqbnk0_s cn70xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn70xxp1;
+	struct cvmx_dpi_dmax_reqbnk0_s cn73xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn78xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cn78xxp1;
+	struct cvmx_dpi_dmax_reqbnk0_s cnf71xx;
+	struct cvmx_dpi_dmax_reqbnk0_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_reqbnk0 cvmx_dpi_dmax_reqbnk0_t;
+
+/**
+ * cvmx_dpi_dma#_reqbnk1
+ *
+ * These registers provide the current contents of the request state machine, bank 1.
+ *
+ */
+union cvmx_dpi_dmax_reqbnk1 {
+	u64 u64;
+	struct cvmx_dpi_dmax_reqbnk1_s {
+		u64 state : 64;
+	} s;
+	struct cvmx_dpi_dmax_reqbnk1_s cn61xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn63xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn63xxp1;
+	struct cvmx_dpi_dmax_reqbnk1_s cn66xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn68xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn68xxp1;
+	struct cvmx_dpi_dmax_reqbnk1_s cn70xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn70xxp1;
+	struct cvmx_dpi_dmax_reqbnk1_s cn73xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn78xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cn78xxp1;
+	struct cvmx_dpi_dmax_reqbnk1_s cnf71xx;
+	struct cvmx_dpi_dmax_reqbnk1_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_reqbnk1 cvmx_dpi_dmax_reqbnk1_t;
+
+/**
+ * cvmx_dpi_dma#_reqq_ctl
+ *
+ * This register contains the control bits for transactions on the eight request queues.
+ *
+ */
+union cvmx_dpi_dmax_reqq_ctl {
+	u64 u64;
+	struct cvmx_dpi_dmax_reqq_ctl_s {
+		u64 reserved_9_63 : 55;
+		u64 st_cmd : 1;
+		u64 reserved_2_7 : 6;
+		u64 ld_cmd : 2;
+	} s;
+	struct cvmx_dpi_dmax_reqq_ctl_s cn73xx;
+	struct cvmx_dpi_dmax_reqq_ctl_s cn78xx;
+	struct cvmx_dpi_dmax_reqq_ctl_s cn78xxp1;
+	struct cvmx_dpi_dmax_reqq_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dmax_reqq_ctl cvmx_dpi_dmax_reqq_ctl_t;
+
+/**
+ * cvmx_dpi_dma_control
+ *
+ * This register controls the operation of DMA input and output.
+ *
+ */
+union cvmx_dpi_dma_control {
+	u64 u64;
+	struct cvmx_dpi_dma_control_s {
+		u64 reserved_62_63 : 2;
+		u64 dici_mode : 1;
+		u64 pkt_en1 : 1;
+		u64 ffp_dis : 1;
+		u64 commit_mode : 1;
+		u64 pkt_hp : 1;
+		u64 pkt_en : 1;
+		u64 reserved_54_55 : 2;
+		u64 dma_enb : 6;
+		u64 wqecsdis : 1;
+		u64 wqecsoff : 7;
+		u64 zbwcsen : 1;
+		u64 wqecsmode : 2;
+		u64 reserved_35_36 : 2;
+		u64 ncb_tag : 1;
+		u64 b0_lend : 1;
+		u64 reserved_20_32 : 13;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 reserved_0_13 : 14;
+	} s;
+	struct cvmx_dpi_dma_control_cn61xx {
+		u64 reserved_62_63 : 2;
+		u64 dici_mode : 1;
+		u64 pkt_en1 : 1;
+		u64 ffp_dis : 1;
+		u64 commit_mode : 1;
+		u64 pkt_hp : 1;
+		u64 pkt_en : 1;
+		u64 reserved_54_55 : 2;
+		u64 dma_enb : 6;
+		u64 reserved_34_47 : 14;
+		u64 b0_lend : 1;
+		u64 dwb_denb : 1;
+		u64 dwb_ichk : 9;
+		u64 fpa_que : 3;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 reserved_0_13 : 14;
+	} cn61xx;
+	struct cvmx_dpi_dma_control_cn63xx {
+		u64 reserved_61_63 : 3;
+		u64 pkt_en1 : 1;
+		u64 ffp_dis : 1;
+		u64 commit_mode : 1;
+		u64 pkt_hp : 1;
+		u64 pkt_en : 1;
+		u64 reserved_54_55 : 2;
+		u64 dma_enb : 6;
+		u64 reserved_34_47 : 14;
+		u64 b0_lend : 1;
+		u64 dwb_denb : 1;
+		u64 dwb_ichk : 9;
+		u64 fpa_que : 3;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 reserved_0_13 : 14;
+	} cn63xx;
+	struct cvmx_dpi_dma_control_cn63xxp1 {
+		u64 reserved_59_63 : 5;
+		u64 commit_mode : 1;
+		u64 pkt_hp : 1;
+		u64 pkt_en : 1;
+		u64 reserved_54_55 : 2;
+		u64 dma_enb : 6;
+		u64 reserved_34_47 : 14;
+		u64 b0_lend : 1;
+		u64 dwb_denb : 1;
+		u64 dwb_ichk : 9;
+		u64 fpa_que : 3;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 reserved_0_13 : 14;
+	} cn63xxp1;
+	struct cvmx_dpi_dma_control_cn63xx cn66xx;
+	struct cvmx_dpi_dma_control_cn61xx cn68xx;
+	struct cvmx_dpi_dma_control_cn63xx cn68xxp1;
+	struct cvmx_dpi_dma_control_cn61xx cn70xx;
+	struct cvmx_dpi_dma_control_cn61xx cn70xxp1;
+	struct cvmx_dpi_dma_control_cn73xx {
+		u64 reserved_60_63 : 4;
+		u64 ffp_dis : 1;
+		u64 commit_mode : 1;
+		u64 reserved_57_57 : 1;
+		u64 pkt_en : 1;
+		u64 reserved_54_55 : 2;
+		u64 dma_enb : 6;
+		u64 wqecsdis : 1;
+		u64 wqecsoff : 7;
+		u64 zbwcsen : 1;
+		u64 wqecsmode : 2;
+		u64 reserved_35_36 : 2;
+		u64 ncb_tag : 1;
+		u64 b0_lend : 1;
+		u64 ldwb : 1;
+		u64 aura_ichk : 12;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 reserved_0_13 : 14;
+	} cn73xx;
+	struct cvmx_dpi_dma_control_cn73xx cn78xx;
+	struct cvmx_dpi_dma_control_cn73xx cn78xxp1;
+	struct cvmx_dpi_dma_control_cn61xx cnf71xx;
+	struct cvmx_dpi_dma_control_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_dma_control cvmx_dpi_dma_control_t;
+
+/**
+ * cvmx_dpi_dma_eng#_en
+ *
+ * These registers provide control for the DMA engines.
+ *
+ */
+union cvmx_dpi_dma_engx_en {
+	u64 u64;
+	struct cvmx_dpi_dma_engx_en_s {
+		u64 reserved_39_63 : 25;
+		u64 eng_molr : 7;
+		u64 reserved_8_31 : 24;
+		u64 qen : 8;
+	} s;
+	struct cvmx_dpi_dma_engx_en_cn61xx {
+		u64 reserved_8_63 : 56;
+		u64 qen : 8;
+	} cn61xx;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn63xx;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn63xxp1;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn66xx;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn68xx;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn68xxp1;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn70xx;
+	struct cvmx_dpi_dma_engx_en_cn61xx cn70xxp1;
+	struct cvmx_dpi_dma_engx_en_s cn73xx;
+	struct cvmx_dpi_dma_engx_en_s cn78xx;
+	struct cvmx_dpi_dma_engx_en_s cn78xxp1;
+	struct cvmx_dpi_dma_engx_en_cn61xx cnf71xx;
+	struct cvmx_dpi_dma_engx_en_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dma_engx_en cvmx_dpi_dma_engx_en_t;
+
+/**
+ * cvmx_dpi_dma_pp#_cnt
+ *
+ * DPI_DMA_PP[0..3]_CNT = DMA per-PP instruction-done counter.
+ * When DMA instruction completion interrupt mode (DPI_DMA_CONTROL[DICI_MODE]) is enabled,
+ * every DMA instruction that has WQP=0 and a PTR value of 1..4 will increment the
+ * DPI_DMA_PP(PTR-1)_CNT counter.
+ * Instructions with WQP=0 and PTR values higher than 0x3F will still send a zero-byte write.
+ * Hardware reserves the values 5..63 for future use and will treat them as a PTR of 0 and do
+ * nothing.
+ */
+union cvmx_dpi_dma_ppx_cnt {
+	u64 u64;
+	struct cvmx_dpi_dma_ppx_cnt_s {
+		u64 reserved_16_63 : 48;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_dpi_dma_ppx_cnt_s cn61xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cn68xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cn70xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cn70xxp1;
+	struct cvmx_dpi_dma_ppx_cnt_s cn73xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cn78xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cn78xxp1;
+	struct cvmx_dpi_dma_ppx_cnt_s cnf71xx;
+	struct cvmx_dpi_dma_ppx_cnt_s cnf75xx;
+};
+
+typedef union cvmx_dpi_dma_ppx_cnt cvmx_dpi_dma_ppx_cnt_t;
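+
+/*
+ * Example only (not part of the imported SDK header): reading the
+ * per-PP instruction-done counter described above; the value is only
+ * meaningful when DPI_DMA_CONTROL[DICI_MODE] is enabled. Assumes the
+ * csr_rd() accessor used elsewhere in mach-octeon is available.
+ */
+static inline u64 cvmx_dpi_dma_pp_done_cnt(unsigned long pp)
+{
+	cvmx_dpi_dma_ppx_cnt_t cnt;
+
+	cnt.u64 = csr_rd(CVMX_DPI_DMA_PPX_CNT(pp));
+	return cnt.s.cnt;
+}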
+
+/**
+ * cvmx_dpi_dma_pp_int
+ */
+union cvmx_dpi_dma_pp_int {
+	u64 u64;
+	struct cvmx_dpi_dma_pp_int_s {
+		u64 reserved_48_63 : 16;
+		u64 complete : 48;
+	} s;
+	struct cvmx_dpi_dma_pp_int_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 complete : 16;
+	} cn73xx;
+	struct cvmx_dpi_dma_pp_int_s cn78xx;
+	struct cvmx_dpi_dma_pp_int_s cn78xxp1;
+	struct cvmx_dpi_dma_pp_int_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_dma_pp_int cvmx_dpi_dma_pp_int_t;
+
+/**
+ * cvmx_dpi_ecc_ctl
+ *
+ * This register allows inserting ECC errors for testing.
+ *
+ */
+union cvmx_dpi_ecc_ctl {
+	u64 u64;
+	struct cvmx_dpi_ecc_ctl_s {
+		u64 reserved_33_63 : 31;
+		u64 ram_cdis : 1;
+		u64 reserved_17_31 : 15;
+		u64 ram_flip1 : 1;
+		u64 reserved_1_15 : 15;
+		u64 ram_flip0 : 1;
+	} s;
+	struct cvmx_dpi_ecc_ctl_s cn73xx;
+	struct cvmx_dpi_ecc_ctl_s cn78xx;
+	struct cvmx_dpi_ecc_ctl_s cn78xxp1;
+	struct cvmx_dpi_ecc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dpi_ecc_ctl cvmx_dpi_ecc_ctl_t;
+
+/**
+ * cvmx_dpi_ecc_int
+ *
+ * This register contains ECC error interrupt summary bits.
+ *
+ */
+union cvmx_dpi_ecc_int {
+	u64 u64;
+	struct cvmx_dpi_ecc_int_s {
+		u64 reserved_47_63 : 17;
+		u64 ram_sbe : 15;
+		u64 reserved_15_31 : 17;
+		u64 ram_dbe : 15;
+	} s;
+	struct cvmx_dpi_ecc_int_s cn73xx;
+	struct cvmx_dpi_ecc_int_s cn78xx;
+	struct cvmx_dpi_ecc_int_s cn78xxp1;
+	struct cvmx_dpi_ecc_int_s cnf75xx;
+};
+
+typedef union cvmx_dpi_ecc_int cvmx_dpi_ecc_int_t;
+
+/**
+ * cvmx_dpi_eng#_buf
+ *
+ * Notes:
+ * The total amount of storage allocated to the 6 DPI DMA engines (via DPI_ENG*_BUF[BLKS]) must not exceed 8KB.
+ *
+ */
+union cvmx_dpi_engx_buf {
+	u64 u64;
+	struct cvmx_dpi_engx_buf_s {
+		u64 reserved_38_63 : 26;
+		u64 compblks : 6;
+		u64 reserved_10_31 : 22;
+		u64 base : 6;
+		u64 blks : 4;
+	} s;
+	struct cvmx_dpi_engx_buf_cn61xx {
+		u64 reserved_37_63 : 27;
+		u64 compblks : 5;
+		u64 reserved_9_31 : 23;
+		u64 base : 5;
+		u64 blks : 4;
+	} cn61xx;
+	struct cvmx_dpi_engx_buf_cn63xx {
+		u64 reserved_8_63 : 56;
+		u64 base : 4;
+		u64 blks : 4;
+	} cn63xx;
+	struct cvmx_dpi_engx_buf_cn63xx cn63xxp1;
+	struct cvmx_dpi_engx_buf_cn61xx cn66xx;
+	struct cvmx_dpi_engx_buf_cn61xx cn68xx;
+	struct cvmx_dpi_engx_buf_cn61xx cn68xxp1;
+	struct cvmx_dpi_engx_buf_cn61xx cn70xx;
+	struct cvmx_dpi_engx_buf_cn61xx cn70xxp1;
+	struct cvmx_dpi_engx_buf_s cn73xx;
+	struct cvmx_dpi_engx_buf_s cn78xx;
+	struct cvmx_dpi_engx_buf_s cn78xxp1;
+	struct cvmx_dpi_engx_buf_cn61xx cnf71xx;
+	struct cvmx_dpi_engx_buf_s cnf75xx;
+};
+
+typedef union cvmx_dpi_engx_buf cvmx_dpi_engx_buf_t;
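+
+/*
+ * Example only (not part of the imported SDK header): a purely
+ * diagnostic sketch dumping the buffer allocation of the six DMA
+ * engines, which together must respect the 8 KB limit noted above.
+ * Assumes csr_rd() and U-Boot's printf() are available in this
+ * context.
+ */
+static inline void cvmx_dpi_eng_buf_dump(void)
+{
+	cvmx_dpi_engx_buf_t buf;
+	int eng;
+
+	for (eng = 0; eng < 6; eng++) {
+		buf.u64 = csr_rd(CVMX_DPI_ENGX_BUF(eng));
+		printf("DPI eng%d: base=%u blks=%u\n", eng,
+		       (u32)buf.s.base, (u32)buf.s.blks);
+	}
+}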
+
+/**
+ * cvmx_dpi_info_reg
+ */
+union cvmx_dpi_info_reg {
+	u64 u64;
+	struct cvmx_dpi_info_reg_s {
+		u64 reserved_8_63 : 56;
+		u64 ffp : 4;
+		u64 reserved_2_3 : 2;
+		u64 ncb : 1;
+		u64 rsl : 1;
+	} s;
+	struct cvmx_dpi_info_reg_s cn61xx;
+	struct cvmx_dpi_info_reg_s cn63xx;
+	struct cvmx_dpi_info_reg_cn63xxp1 {
+		u64 reserved_2_63 : 62;
+		u64 ncb : 1;
+		u64 rsl : 1;
+	} cn63xxp1;
+	struct cvmx_dpi_info_reg_s cn66xx;
+	struct cvmx_dpi_info_reg_s cn68xx;
+	struct cvmx_dpi_info_reg_s cn68xxp1;
+	struct cvmx_dpi_info_reg_s cn70xx;
+	struct cvmx_dpi_info_reg_s cn70xxp1;
+	struct cvmx_dpi_info_reg_s cn73xx;
+	struct cvmx_dpi_info_reg_s cn78xx;
+	struct cvmx_dpi_info_reg_s cn78xxp1;
+	struct cvmx_dpi_info_reg_s cnf71xx;
+	struct cvmx_dpi_info_reg_s cnf75xx;
+};
+
+typedef union cvmx_dpi_info_reg cvmx_dpi_info_reg_t;
+
+/**
+ * cvmx_dpi_int_en
+ */
+union cvmx_dpi_int_en {
+	u64 u64;
+	struct cvmx_dpi_int_en_s {
+		u64 reserved_28_63 : 36;
+		u64 sprt3_rst : 1;
+		u64 sprt2_rst : 1;
+		u64 sprt1_rst : 1;
+		u64 sprt0_rst : 1;
+		u64 reserved_23_23 : 1;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} s;
+	struct cvmx_dpi_int_en_s cn61xx;
+	struct cvmx_dpi_int_en_cn63xx {
+		u64 reserved_26_63 : 38;
+		u64 sprt1_rst : 1;
+		u64 sprt0_rst : 1;
+		u64 reserved_23_23 : 1;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} cn63xx;
+	struct cvmx_dpi_int_en_cn63xx cn63xxp1;
+	struct cvmx_dpi_int_en_s cn66xx;
+	struct cvmx_dpi_int_en_cn63xx cn68xx;
+	struct cvmx_dpi_int_en_cn63xx cn68xxp1;
+	struct cvmx_dpi_int_en_cn70xx {
+		u64 reserved_28_63 : 36;
+		u64 sprt3_rst : 1;
+		u64 sprt2_rst : 1;
+		u64 sprt1_rst : 1;
+		u64 sprt0_rst : 1;
+		u64 reserved_23_23 : 1;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} cn70xx;
+	struct cvmx_dpi_int_en_cn70xx cn70xxp1;
+	struct cvmx_dpi_int_en_s cnf71xx;
+};
+
+typedef union cvmx_dpi_int_en cvmx_dpi_int_en_t;
+
+/**
+ * cvmx_dpi_int_reg
+ *
+ * This register contains error flags for DPI.
+ *
+ */
+union cvmx_dpi_int_reg {
+	u64 u64;
+	struct cvmx_dpi_int_reg_s {
+		u64 reserved_28_63 : 36;
+		u64 sprt3_rst : 1;
+		u64 sprt2_rst : 1;
+		u64 sprt1_rst : 1;
+		u64 sprt0_rst : 1;
+		u64 reserved_23_23 : 1;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} s;
+	struct cvmx_dpi_int_reg_s cn61xx;
+	struct cvmx_dpi_int_reg_cn63xx {
+		u64 reserved_26_63 : 38;
+		u64 sprt1_rst : 1;
+		u64 sprt0_rst : 1;
+		u64 reserved_23_23 : 1;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} cn63xx;
+	struct cvmx_dpi_int_reg_cn63xx cn63xxp1;
+	struct cvmx_dpi_int_reg_s cn66xx;
+	struct cvmx_dpi_int_reg_cn63xx cn68xx;
+	struct cvmx_dpi_int_reg_cn63xx cn68xxp1;
+	struct cvmx_dpi_int_reg_s cn70xx;
+	struct cvmx_dpi_int_reg_s cn70xxp1;
+	struct cvmx_dpi_int_reg_cn73xx {
+		u64 reserved_23_63 : 41;
+		u64 req_badfil : 1;
+		u64 req_inull : 1;
+		u64 req_anull : 1;
+		u64 req_undflw : 1;
+		u64 req_ovrflw : 1;
+		u64 req_badlen : 1;
+		u64 req_badadr : 1;
+		u64 dmadbo : 8;
+		u64 reserved_2_7 : 6;
+		u64 nfovr : 1;
+		u64 nderr : 1;
+	} cn73xx;
+	struct cvmx_dpi_int_reg_cn73xx cn78xx;
+	struct cvmx_dpi_int_reg_s cn78xxp1;
+	struct cvmx_dpi_int_reg_s cnf71xx;
+	struct cvmx_dpi_int_reg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_int_reg cvmx_dpi_int_reg_t;
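+
+/*
+ * Example only (not part of the imported SDK header): sampling and
+ * acknowledging the DPI error flags. Writing the read value back
+ * assumes the usual write-one-to-clear behavior of Octeon interrupt
+ * summary registers; the csr_rd()/csr_wr() accessors used elsewhere
+ * in mach-octeon are also assumed.
+ */
+static inline u64 cvmx_dpi_int_ack(void)
+{
+	cvmx_dpi_int_reg_t intr;
+
+	intr.u64 = csr_rd(CVMX_DPI_INT_REG);
+	if (intr.u64)
+		csr_wr(CVMX_DPI_INT_REG, intr.u64);
+	return intr.u64;
+}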
+
+/**
+ * cvmx_dpi_ncb#_cfg
+ */
+union cvmx_dpi_ncbx_cfg {
+	u64 u64;
+	struct cvmx_dpi_ncbx_cfg_s {
+		u64 reserved_6_63 : 58;
+		u64 molr : 6;
+	} s;
+	struct cvmx_dpi_ncbx_cfg_s cn61xx;
+	struct cvmx_dpi_ncbx_cfg_s cn66xx;
+	struct cvmx_dpi_ncbx_cfg_s cn68xx;
+	struct cvmx_dpi_ncbx_cfg_s cn70xx;
+	struct cvmx_dpi_ncbx_cfg_s cn70xxp1;
+	struct cvmx_dpi_ncbx_cfg_s cn73xx;
+	struct cvmx_dpi_ncbx_cfg_s cn78xx;
+	struct cvmx_dpi_ncbx_cfg_s cn78xxp1;
+	struct cvmx_dpi_ncbx_cfg_s cnf71xx;
+	struct cvmx_dpi_ncbx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_dpi_ncbx_cfg cvmx_dpi_ncbx_cfg_t;
+
+/**
+ * cvmx_dpi_ncb_ctl
+ *
+ * This register chooses which NCB interface DPI uses for L2/DRAM reads/writes.
+ *
+ */
+union cvmx_dpi_ncb_ctl {
+	u64 u64;
+	struct cvmx_dpi_ncb_ctl_s {
+		u64 reserved_25_63 : 39;
+		u64 ncbsel_prt_xor_dis : 1;
+		u64 reserved_21_23 : 3;
+		u64 ncbsel_zbw : 1;
+		u64 reserved_17_19 : 3;
+		u64 ncbsel_req : 1;
+		u64 reserved_13_15 : 3;
+		u64 ncbsel_dst : 1;
+		u64 reserved_9_11 : 3;
+		u64 ncbsel_src : 1;
+		u64 reserved_1_7 : 7;
+		u64 prt : 1;
+	} s;
+	struct cvmx_dpi_ncb_ctl_cn73xx {
+		u64 reserved_25_63 : 39;
+		u64 ncbsel_prt_xor_dis : 1;
+		u64 reserved_21_23 : 3;
+		u64 ncbsel_zbw : 1;
+		u64 reserved_17_19 : 3;
+		u64 ncbsel_req : 1;
+		u64 reserved_13_15 : 3;
+		u64 ncbsel_dst : 1;
+		u64 reserved_9_11 : 3;
+		u64 ncbsel_src : 1;
+		u64 reserved_0_7 : 8;
+	} cn73xx;
+	struct cvmx_dpi_ncb_ctl_s cn78xx;
+	struct cvmx_dpi_ncb_ctl_s cn78xxp1;
+	struct cvmx_dpi_ncb_ctl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_ncb_ctl cvmx_dpi_ncb_ctl_t;
+
+/**
+ * cvmx_dpi_pint_info
+ *
+ * This register provides DPI packet interrupt information.
+ *
+ */
+union cvmx_dpi_pint_info {
+	u64 u64;
+	struct cvmx_dpi_pint_info_s {
+		u64 reserved_14_63 : 50;
+		u64 iinfo : 6;
+		u64 reserved_6_7 : 2;
+		u64 sinfo : 6;
+	} s;
+	struct cvmx_dpi_pint_info_s cn61xx;
+	struct cvmx_dpi_pint_info_s cn63xx;
+	struct cvmx_dpi_pint_info_s cn63xxp1;
+	struct cvmx_dpi_pint_info_s cn66xx;
+	struct cvmx_dpi_pint_info_s cn68xx;
+	struct cvmx_dpi_pint_info_s cn68xxp1;
+	struct cvmx_dpi_pint_info_s cn70xx;
+	struct cvmx_dpi_pint_info_s cn70xxp1;
+	struct cvmx_dpi_pint_info_s cn73xx;
+	struct cvmx_dpi_pint_info_s cn78xx;
+	struct cvmx_dpi_pint_info_s cn78xxp1;
+	struct cvmx_dpi_pint_info_s cnf71xx;
+	struct cvmx_dpi_pint_info_s cnf75xx;
+};
+
+typedef union cvmx_dpi_pint_info cvmx_dpi_pint_info_t;
+
+/**
+ * cvmx_dpi_pkt_err_rsp
+ */
+union cvmx_dpi_pkt_err_rsp {
+	u64 u64;
+	struct cvmx_dpi_pkt_err_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 pkterr : 1;
+	} s;
+	struct cvmx_dpi_pkt_err_rsp_s cn61xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn63xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn63xxp1;
+	struct cvmx_dpi_pkt_err_rsp_s cn66xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn68xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn68xxp1;
+	struct cvmx_dpi_pkt_err_rsp_s cn70xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn70xxp1;
+	struct cvmx_dpi_pkt_err_rsp_s cn73xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn78xx;
+	struct cvmx_dpi_pkt_err_rsp_s cn78xxp1;
+	struct cvmx_dpi_pkt_err_rsp_s cnf71xx;
+	struct cvmx_dpi_pkt_err_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dpi_pkt_err_rsp cvmx_dpi_pkt_err_rsp_t;
+
+/**
+ * cvmx_dpi_req_err_rsp
+ */
+union cvmx_dpi_req_err_rsp {
+	u64 u64;
+	struct cvmx_dpi_req_err_rsp_s {
+		u64 reserved_8_63 : 56;
+		u64 qerr : 8;
+	} s;
+	struct cvmx_dpi_req_err_rsp_s cn61xx;
+	struct cvmx_dpi_req_err_rsp_s cn63xx;
+	struct cvmx_dpi_req_err_rsp_s cn63xxp1;
+	struct cvmx_dpi_req_err_rsp_s cn66xx;
+	struct cvmx_dpi_req_err_rsp_s cn68xx;
+	struct cvmx_dpi_req_err_rsp_s cn68xxp1;
+	struct cvmx_dpi_req_err_rsp_s cn70xx;
+	struct cvmx_dpi_req_err_rsp_s cn70xxp1;
+	struct cvmx_dpi_req_err_rsp_s cn73xx;
+	struct cvmx_dpi_req_err_rsp_s cn78xx;
+	struct cvmx_dpi_req_err_rsp_s cn78xxp1;
+	struct cvmx_dpi_req_err_rsp_s cnf71xx;
+	struct cvmx_dpi_req_err_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_err_rsp cvmx_dpi_req_err_rsp_t;
+
+/**
+ * cvmx_dpi_req_err_rsp_en
+ */
+union cvmx_dpi_req_err_rsp_en {
+	u64 u64;
+	struct cvmx_dpi_req_err_rsp_en_s {
+		u64 reserved_8_63 : 56;
+		u64 en : 8;
+	} s;
+	struct cvmx_dpi_req_err_rsp_en_s cn61xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn63xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn63xxp1;
+	struct cvmx_dpi_req_err_rsp_en_s cn66xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn68xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn68xxp1;
+	struct cvmx_dpi_req_err_rsp_en_s cn70xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn70xxp1;
+	struct cvmx_dpi_req_err_rsp_en_s cn73xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn78xx;
+	struct cvmx_dpi_req_err_rsp_en_s cn78xxp1;
+	struct cvmx_dpi_req_err_rsp_en_s cnf71xx;
+	struct cvmx_dpi_req_err_rsp_en_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_err_rsp_en cvmx_dpi_req_err_rsp_en_t;
+
+/**
+ * cvmx_dpi_req_err_rst
+ */
+union cvmx_dpi_req_err_rst {
+	u64 u64;
+	struct cvmx_dpi_req_err_rst_s {
+		u64 reserved_8_63 : 56;
+		u64 qerr : 8;
+	} s;
+	struct cvmx_dpi_req_err_rst_s cn61xx;
+	struct cvmx_dpi_req_err_rst_s cn63xx;
+	struct cvmx_dpi_req_err_rst_s cn63xxp1;
+	struct cvmx_dpi_req_err_rst_s cn66xx;
+	struct cvmx_dpi_req_err_rst_s cn68xx;
+	struct cvmx_dpi_req_err_rst_s cn68xxp1;
+	struct cvmx_dpi_req_err_rst_s cn70xx;
+	struct cvmx_dpi_req_err_rst_s cn70xxp1;
+	struct cvmx_dpi_req_err_rst_s cn73xx;
+	struct cvmx_dpi_req_err_rst_s cn78xx;
+	struct cvmx_dpi_req_err_rst_s cn78xxp1;
+	struct cvmx_dpi_req_err_rst_s cnf71xx;
+	struct cvmx_dpi_req_err_rst_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_err_rst cvmx_dpi_req_err_rst_t;
+
+/**
+ * cvmx_dpi_req_err_rst_en
+ */
+union cvmx_dpi_req_err_rst_en {
+	u64 u64;
+	struct cvmx_dpi_req_err_rst_en_s {
+		u64 reserved_8_63 : 56;
+		u64 en : 8;
+	} s;
+	struct cvmx_dpi_req_err_rst_en_s cn61xx;
+	struct cvmx_dpi_req_err_rst_en_s cn63xx;
+	struct cvmx_dpi_req_err_rst_en_s cn63xxp1;
+	struct cvmx_dpi_req_err_rst_en_s cn66xx;
+	struct cvmx_dpi_req_err_rst_en_s cn68xx;
+	struct cvmx_dpi_req_err_rst_en_s cn68xxp1;
+	struct cvmx_dpi_req_err_rst_en_s cn70xx;
+	struct cvmx_dpi_req_err_rst_en_s cn70xxp1;
+	struct cvmx_dpi_req_err_rst_en_s cn73xx;
+	struct cvmx_dpi_req_err_rst_en_s cn78xx;
+	struct cvmx_dpi_req_err_rst_en_s cn78xxp1;
+	struct cvmx_dpi_req_err_rst_en_s cnf71xx;
+	struct cvmx_dpi_req_err_rst_en_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_err_rst_en cvmx_dpi_req_err_rst_en_t;
+
+/**
+ * cvmx_dpi_req_err_skip_comp
+ */
+union cvmx_dpi_req_err_skip_comp {
+	u64 u64;
+	struct cvmx_dpi_req_err_skip_comp_s {
+		u64 reserved_24_63 : 40;
+		u64 en_rst : 8;
+		u64 reserved_8_15 : 8;
+		u64 en_rsp : 8;
+	} s;
+	struct cvmx_dpi_req_err_skip_comp_s cn61xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn66xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn68xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn68xxp1;
+	struct cvmx_dpi_req_err_skip_comp_s cn70xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn70xxp1;
+	struct cvmx_dpi_req_err_skip_comp_s cn73xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn78xx;
+	struct cvmx_dpi_req_err_skip_comp_s cn78xxp1;
+	struct cvmx_dpi_req_err_skip_comp_s cnf71xx;
+	struct cvmx_dpi_req_err_skip_comp_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_err_skip_comp cvmx_dpi_req_err_skip_comp_t;
+
+/**
+ * cvmx_dpi_req_gbl_en
+ */
+union cvmx_dpi_req_gbl_en {
+	u64 u64;
+	struct cvmx_dpi_req_gbl_en_s {
+		u64 reserved_8_63 : 56;
+		u64 qen : 8;
+	} s;
+	struct cvmx_dpi_req_gbl_en_s cn61xx;
+	struct cvmx_dpi_req_gbl_en_s cn63xx;
+	struct cvmx_dpi_req_gbl_en_s cn63xxp1;
+	struct cvmx_dpi_req_gbl_en_s cn66xx;
+	struct cvmx_dpi_req_gbl_en_s cn68xx;
+	struct cvmx_dpi_req_gbl_en_s cn68xxp1;
+	struct cvmx_dpi_req_gbl_en_s cn70xx;
+	struct cvmx_dpi_req_gbl_en_s cn70xxp1;
+	struct cvmx_dpi_req_gbl_en_s cn73xx;
+	struct cvmx_dpi_req_gbl_en_s cn78xx;
+	struct cvmx_dpi_req_gbl_en_s cn78xxp1;
+	struct cvmx_dpi_req_gbl_en_s cnf71xx;
+	struct cvmx_dpi_req_gbl_en_s cnf75xx;
+};
+
+typedef union cvmx_dpi_req_gbl_en cvmx_dpi_req_gbl_en_t;
+
+/**
+ * cvmx_dpi_sli_prt#_cfg
+ *
+ * This register configures the max read request size, max payload size, and max number of SLI
+ * tags in use. Indexed by SLI_PORT_E.
+ */
+union cvmx_dpi_sli_prtx_cfg {
+	u64 u64;
+	struct cvmx_dpi_sli_prtx_cfg_s {
+		u64 reserved_29_63 : 35;
+		u64 ncbsel : 1;
+		u64 reserved_25_27 : 3;
+		u64 halt : 1;
+		u64 qlm_cfg : 4;
+		u64 reserved_17_19 : 3;
+		u64 rd_mode : 1;
+		u64 reserved_15_15 : 1;
+		u64 molr : 7;
+		u64 mps_lim : 1;
+		u64 reserved_5_6 : 2;
+		u64 mps : 1;
+		u64 mrrs_lim : 1;
+		u64 reserved_2_2 : 1;
+		u64 mrrs : 2;
+	} s;
+	struct cvmx_dpi_sli_prtx_cfg_cn61xx {
+		u64 reserved_25_63 : 39;
+		u64 halt : 1;
+		u64 qlm_cfg : 4;
+		u64 reserved_17_19 : 3;
+		u64 rd_mode : 1;
+		u64 reserved_14_15 : 2;
+		u64 molr : 6;
+		u64 mps_lim : 1;
+		u64 reserved_5_6 : 2;
+		u64 mps : 1;
+		u64 mrrs_lim : 1;
+		u64 reserved_2_2 : 1;
+		u64 mrrs : 2;
+	} cn61xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn63xx {
+		u64 reserved_25_63 : 39;
+		u64 halt : 1;
+		u64 reserved_21_23 : 3;
+		u64 qlm_cfg : 1;
+		u64 reserved_17_19 : 3;
+		u64 rd_mode : 1;
+		u64 reserved_14_15 : 2;
+		u64 molr : 6;
+		u64 mps_lim : 1;
+		u64 reserved_5_6 : 2;
+		u64 mps : 1;
+		u64 mrrs_lim : 1;
+		u64 reserved_2_2 : 1;
+		u64 mrrs : 2;
+	} cn63xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn63xx cn63xxp1;
+	struct cvmx_dpi_sli_prtx_cfg_cn61xx cn66xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn63xx cn68xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn63xx cn68xxp1;
+	struct cvmx_dpi_sli_prtx_cfg_cn70xx {
+		u64 reserved_25_63 : 39;
+		u64 halt : 1;
+		u64 reserved_17_23 : 7;
+		u64 rd_mode : 1;
+		u64 reserved_14_15 : 2;
+		u64 molr : 6;
+		u64 mps_lim : 1;
+		u64 reserved_5_6 : 2;
+		u64 mps : 1;
+		u64 mrrs_lim : 1;
+		u64 reserved_2_2 : 1;
+		u64 mrrs : 2;
+	} cn70xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn70xx cn70xxp1;
+	struct cvmx_dpi_sli_prtx_cfg_cn73xx {
+		u64 reserved_29_63 : 35;
+		u64 ncbsel : 1;
+		u64 reserved_25_27 : 3;
+		u64 halt : 1;
+		u64 reserved_21_23 : 3;
+		u64 qlm_cfg : 1;
+		u64 reserved_17_19 : 3;
+		u64 rd_mode : 1;
+		u64 reserved_15_15 : 1;
+		u64 molr : 7;
+		u64 mps_lim : 1;
+		u64 reserved_5_6 : 2;
+		u64 mps : 1;
+		u64 mrrs_lim : 1;
+		u64 reserved_2_2 : 1;
+		u64 mrrs : 2;
+	} cn73xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn73xx cn78xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn73xx cn78xxp1;
+	struct cvmx_dpi_sli_prtx_cfg_cn61xx cnf71xx;
+	struct cvmx_dpi_sli_prtx_cfg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_sli_prtx_cfg cvmx_dpi_sli_prtx_cfg_t;
+
+/**
+ * cvmx_dpi_sli_prt#_err
+ *
+ * This register logs the address associated with the reported SLI error response.
+ * Indexed by SLI_PORT_E.
+ */
+union cvmx_dpi_sli_prtx_err {
+	u64 u64;
+	struct cvmx_dpi_sli_prtx_err_s {
+		u64 addr : 61;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_dpi_sli_prtx_err_s cn61xx;
+	struct cvmx_dpi_sli_prtx_err_s cn63xx;
+	struct cvmx_dpi_sli_prtx_err_s cn63xxp1;
+	struct cvmx_dpi_sli_prtx_err_s cn66xx;
+	struct cvmx_dpi_sli_prtx_err_s cn68xx;
+	struct cvmx_dpi_sli_prtx_err_s cn68xxp1;
+	struct cvmx_dpi_sli_prtx_err_s cn70xx;
+	struct cvmx_dpi_sli_prtx_err_s cn70xxp1;
+	struct cvmx_dpi_sli_prtx_err_s cn73xx;
+	struct cvmx_dpi_sli_prtx_err_s cn78xx;
+	struct cvmx_dpi_sli_prtx_err_s cn78xxp1;
+	struct cvmx_dpi_sli_prtx_err_s cnf71xx;
+	struct cvmx_dpi_sli_prtx_err_s cnf75xx;
+};
+
+typedef union cvmx_dpi_sli_prtx_err cvmx_dpi_sli_prtx_err_t;
+
+/**
+ * cvmx_dpi_sli_prt#_err_info
+ *
+ * This register logs information associated with the reported SLI error response.
+ * Indexed by SLI_PORT_E.
+ */
+union cvmx_dpi_sli_prtx_err_info {
+	u64 u64;
+	struct cvmx_dpi_sli_prtx_err_info_s {
+		u64 reserved_9_63 : 55;
+		u64 lock : 1;
+		u64 reserved_5_7 : 3;
+		u64 type : 1;
+		u64 reserved_3_3 : 1;
+		u64 reqq : 3;
+	} s;
+	struct cvmx_dpi_sli_prtx_err_info_s cn61xx;
+	struct cvmx_dpi_sli_prtx_err_info_s cn63xx;
+	struct cvmx_dpi_sli_prtx_err_info_s cn63xxp1;
+	struct cvmx_dpi_sli_prtx_err_info_s cn66xx;
+	struct cvmx_dpi_sli_prtx_err_info_s cn68xx;
+	struct cvmx_dpi_sli_prtx_err_info_s cn68xxp1;
+	struct cvmx_dpi_sli_prtx_err_info_s cn70xx;
+	struct cvmx_dpi_sli_prtx_err_info_s cn70xxp1;
+	struct cvmx_dpi_sli_prtx_err_info_cn73xx {
+		u64 reserved_32_63 : 32;
+		u64 pvf : 16;
+		u64 reserved_9_15 : 7;
+		u64 lock : 1;
+		u64 reserved_5_7 : 3;
+		u64 type : 1;
+		u64 reserved_3_3 : 1;
+		u64 reqq : 3;
+	} cn73xx;
+	struct cvmx_dpi_sli_prtx_err_info_cn73xx cn78xx;
+	struct cvmx_dpi_sli_prtx_err_info_cn78xxp1 {
+		u64 reserved_23_63 : 41;
+		u64 vf : 7;
+		u64 reserved_9_15 : 7;
+		u64 lock : 1;
+		u64 reserved_5_7 : 3;
+		u64 type : 1;
+		u64 reserved_3_3 : 1;
+		u64 reqq : 3;
+	} cn78xxp1;
+	struct cvmx_dpi_sli_prtx_err_info_s cnf71xx;
+	struct cvmx_dpi_sli_prtx_err_info_cn73xx cnf75xx;
+};
+
+typedef union cvmx_dpi_sli_prtx_err_info cvmx_dpi_sli_prtx_err_info_t;
+
+/**
+ * cvmx_dpi_srio_rx_bell#
+ *
+ * Reading this register pops an entry off the corresponding SRIO RX doorbell FIFO.
+ * The chip supports 16 FIFOs per SRIO interface for a total of 32 FIFOs/Registers.
+ * The MSB of the registers indicates the MAC while the 4 LSBs indicate the FIFO.
+ * Information on the doorbell allocation can be found in SRIO()_RX_BELL_CTRL.
+ */
+union cvmx_dpi_srio_rx_bellx {
+	u64 u64;
+	struct cvmx_dpi_srio_rx_bellx_s {
+		u64 reserved_48_63 : 16;
+		u64 data : 16;
+		u64 sid : 16;
+		u64 count : 8;
+		u64 reserved_5_7 : 3;
+		u64 dest_id : 1;
+		u64 id16 : 1;
+		u64 reserved_2_2 : 1;
+		u64 dpriority : 2;
+	} s;
+	struct cvmx_dpi_srio_rx_bellx_s cnf75xx;
+};
+
+typedef union cvmx_dpi_srio_rx_bellx cvmx_dpi_srio_rx_bellx_t;
+
+/**
+ * cvmx_dpi_srio_rx_bell_seq#
+ *
+ * This register contains the value of the sequence counter when the doorbell
+ * was received and a shadow copy of the Bell FIFO Count that can be read without
+ * emptying the FIFO. This register must be read prior to the corresponding
+ * DPI_SRIO_RX_BELL register to link the doorbell and sequence number.
+ *
+ * Information on the Doorbell Allocation can be found in SRIO()_RX_BELL_CTRL.
+ */
+union cvmx_dpi_srio_rx_bell_seqx {
+	u64 u64;
+	struct cvmx_dpi_srio_rx_bell_seqx_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 8;
+		u64 sid : 32;
+	} s;
+	struct cvmx_dpi_srio_rx_bell_seqx_s cnf75xx;
+};
+
+typedef union cvmx_dpi_srio_rx_bell_seqx cvmx_dpi_srio_rx_bell_seqx_t;
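+
+/*
+ * Editor's sketch (not part of the original patch): the ordering rule
+ * from the comment above, i.e. read the sequence register first so the
+ * doorbell popped from the FIFO can be linked to its sequence number.
+ * The cvmx_read_csr() helper and the CVMX_DPI_SRIO_RX_BELL_SEQX() /
+ * CVMX_DPI_SRIO_RX_BELLX() address macros are assumed from this SDK.
+ *
+ *	cvmx_dpi_srio_rx_bell_seqx_t seq;
+ *	cvmx_dpi_srio_rx_bellx_t bell;
+ *
+ *	seq.u64 = cvmx_read_csr(CVMX_DPI_SRIO_RX_BELL_SEQX(idx));
+ *	bell.u64 = cvmx_read_csr(CVMX_DPI_SRIO_RX_BELLX(idx));
+ *
+ * The first read returns the sequence counter and the shadow FIFO count
+ * without popping anything; the second read pops the doorbell entry
+ * itself.
+ */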
+
+/**
+ * cvmx_dpi_swa_q_vmid
+ *
+ * Not used.
+ *
+ */
+union cvmx_dpi_swa_q_vmid {
+	u64 u64;
+	struct cvmx_dpi_swa_q_vmid_s {
+		u64 vmid7 : 8;
+		u64 vmid6 : 8;
+		u64 vmid5 : 8;
+		u64 vmid4 : 8;
+		u64 vmid3 : 8;
+		u64 vmid2 : 8;
+		u64 vmid1 : 8;
+		u64 vmid0 : 8;
+	} s;
+	struct cvmx_dpi_swa_q_vmid_s cn73xx;
+	struct cvmx_dpi_swa_q_vmid_s cn78xx;
+	struct cvmx_dpi_swa_q_vmid_s cn78xxp1;
+	struct cvmx_dpi_swa_q_vmid_s cnf75xx;
+};
+
+typedef union cvmx_dpi_swa_q_vmid cvmx_dpi_swa_q_vmid_t;
+
+#endif
-- 
2.29.2


* [PATCH v1 09/50] mips: octeon: Add cvmx-dtx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (7 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 08/50] mips: octeon: Add cvmx-dpi-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 10/50] mips: octeon: Add cvmx-fpa-defs.h " Stefan Roese
                   ` (43 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-dtx-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later to support PCIe and networking
on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-dtx-defs.h  | 6962 +++++++++++++++++
 1 file changed, 6962 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-dtx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-dtx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-dtx-defs.h
new file mode 100644
index 0000000000..afb581a5ea
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-dtx-defs.h
@@ -0,0 +1,6962 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon dtx.
+ */
+
+#ifndef __CVMX_DTX_DEFS_H__
+#define __CVMX_DTX_DEFS_H__
+
+#define CVMX_DTX_AGL_BCST_RSP	       (0x00011800FE700080ull)
+#define CVMX_DTX_AGL_CTL	       (0x00011800FE700060ull)
+#define CVMX_DTX_AGL_DATX(offset)      (0x00011800FE700040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_AGL_ENAX(offset)      (0x00011800FE700020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_AGL_SELX(offset)      (0x00011800FE700000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ASE_BCST_RSP	       (0x00011800FE6E8080ull)
+#define CVMX_DTX_ASE_CTL	       (0x00011800FE6E8060ull)
+#define CVMX_DTX_ASE_DATX(offset)      (0x00011800FE6E8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ASE_ENAX(offset)      (0x00011800FE6E8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ASE_SELX(offset)      (0x00011800FE6E8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX1I_BCST_RSP	       (0x00011800FED78080ull)
+#define CVMX_DTX_BBX1I_CTL	       (0x00011800FED78060ull)
+#define CVMX_DTX_BBX1I_DATX(offset)    (0x00011800FED78040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX1I_ENAX(offset)    (0x00011800FED78020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX1I_SELX(offset)    (0x00011800FED78000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX2I_BCST_RSP	       (0x00011800FED80080ull)
+#define CVMX_DTX_BBX2I_CTL	       (0x00011800FED80060ull)
+#define CVMX_DTX_BBX2I_DATX(offset)    (0x00011800FED80040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX2I_ENAX(offset)    (0x00011800FED80020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX2I_SELX(offset)    (0x00011800FED80000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX3I_BCST_RSP	       (0x00011800FED88080ull)
+#define CVMX_DTX_BBX3I_CTL	       (0x00011800FED88060ull)
+#define CVMX_DTX_BBX3I_DATX(offset)    (0x00011800FED88040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX3I_ENAX(offset)    (0x00011800FED88020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BBX3I_SELX(offset)    (0x00011800FED88000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BCH_BCST_RSP	       (0x00011800FE388080ull)
+#define CVMX_DTX_BCH_CTL	       (0x00011800FE388060ull)
+#define CVMX_DTX_BCH_DATX(offset)      (0x00011800FE388040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BCH_ENAX(offset)      (0x00011800FE388020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BCH_SELX(offset)      (0x00011800FE388000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BGXX_BCST_RSP(offset) (0x00011800FE700080ull + ((offset) & 7) * 32768)
+#define CVMX_DTX_BGXX_CTL(offset)      (0x00011800FE700060ull + ((offset) & 7) * 32768)
+#define CVMX_DTX_BGXX_DATX(offset, block_id)                                                       \
+	(0x00011800FE700040ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_BGXX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE700020ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_BGXX_SELX(offset, block_id)                                                       \
+	(0x00011800FE700000ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_BROADCAST_CTL		(0x00011800FE7F0060ull)
+#define CVMX_DTX_BROADCAST_ENAX(offset) (0x00011800FE7F0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BROADCAST_SELX(offset) (0x00011800FE7F0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BTS_BCST_RSP		(0x00011800FE5B0080ull)
+#define CVMX_DTX_BTS_CTL		(0x00011800FE5B0060ull)
+#define CVMX_DTX_BTS_DATX(offset)	(0x00011800FE5B0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BTS_ENAX(offset)	(0x00011800FE5B0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_BTS_SELX(offset)	(0x00011800FE5B0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_CIU_BCST_RSP		(0x00011800FE808080ull)
+#define CVMX_DTX_CIU_CTL		(0x00011800FE808060ull)
+#define CVMX_DTX_CIU_DATX(offset)	(0x00011800FE808040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_CIU_ENAX(offset)	(0x00011800FE808020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_CIU_SELX(offset)	(0x00011800FE808000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DENC_BCST_RSP		(0x00011800FED48080ull)
+#define CVMX_DTX_DENC_CTL		(0x00011800FED48060ull)
+#define CVMX_DTX_DENC_DATX(offset)	(0x00011800FED48040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DENC_ENAX(offset)	(0x00011800FED48020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DENC_SELX(offset)	(0x00011800FED48000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DFA_BCST_RSP		(0x00011800FE1B8080ull)
+#define CVMX_DTX_DFA_CTL		(0x00011800FE1B8060ull)
+#define CVMX_DTX_DFA_DATX(offset)	(0x00011800FE1B8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DFA_ENAX(offset)	(0x00011800FE1B8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DFA_SELX(offset)	(0x00011800FE1B8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DLFE_BCST_RSP		(0x00011800FED18080ull)
+#define CVMX_DTX_DLFE_CTL		(0x00011800FED18060ull)
+#define CVMX_DTX_DLFE_DATX(offset)	(0x00011800FED18040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DLFE_ENAX(offset)	(0x00011800FED18020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DLFE_SELX(offset)	(0x00011800FED18000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DPI_BCST_RSP		(0x00011800FEEF8080ull)
+#define CVMX_DTX_DPI_CTL		(0x00011800FEEF8060ull)
+#define CVMX_DTX_DPI_DATX(offset)	(0x00011800FEEF8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DPI_ENAX(offset)	(0x00011800FEEF8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_DPI_SELX(offset)	(0x00011800FEEF8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_FDEQX_BCST_RSP(offset) (0x00011800FED30080ull + ((offset) & 1) * 0x20000ull)
+#define CVMX_DTX_FDEQX_CTL(offset)	(0x00011800FED30060ull + ((offset) & 1) * 0x20000ull)
+#define CVMX_DTX_FDEQX_DATX(offset, block_id)                                                      \
+	(0x00011800FED30040ull + (((offset) & 1) + ((block_id) & 1) * 0x4000ull) * 8)
+#define CVMX_DTX_FDEQX_ENAX(offset, block_id)                                                      \
+	(0x00011800FED30020ull + (((offset) & 1) + ((block_id) & 1) * 0x4000ull) * 8)
+#define CVMX_DTX_FDEQX_SELX(offset, block_id)                                                      \
+	(0x00011800FED30000ull + (((offset) & 1) + ((block_id) & 1) * 0x4000ull) * 8)
+#define CVMX_DTX_FPA_BCST_RSP CVMX_DTX_FPA_BCST_RSP_FUNC()
+static inline u64 CVMX_DTX_FPA_BCST_RSP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE940080ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE940080ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE940080ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE140080ull;
+	}
+	return 0x00011800FE940080ull;
+}
+
+#define CVMX_DTX_FPA_CTL CVMX_DTX_FPA_CTL_FUNC()
+static inline u64 CVMX_DTX_FPA_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE940060ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE940060ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE940060ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE140060ull;
+	}
+	return 0x00011800FE940060ull;
+}
+
+static inline u64 CVMX_DTX_FPA_DATX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE940040ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE940040ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE940040ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE140040ull + (offset) * 8;
+	}
+	return 0x00011800FE940040ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_FPA_ENAX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE940020ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE940020ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE940020ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE140020ull + (offset) * 8;
+	}
+	return 0x00011800FE940020ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_FPA_SELX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE940000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE940000ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE940000ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE140000ull + (offset) * 8;
+	}
+	return 0x00011800FE940000ull + (offset) * 8;
+}
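+
+/*
+ * Editor's note (not part of the original patch): the CVMX_DTX_FPA_*
+ * accessors above, like the OSM and PKO ones further down, dispatch a
+ * model-dependent CSR address.  The switch cases deliberately omit
+ * "break": every CN78XX model is matched by one of the two
+ * OCTEON_IS_MODEL() tests, and the CN73XX case that control would
+ * otherwise fall into returns the same address anyway, so the
+ * fall-through is harmless.  The duplicated CN78XX tests returning
+ * identical values are an artifact of the register-description
+ * generator.
+ */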
+
+#define CVMX_DTX_GMXX_BCST_RSP(offset) (0x00011800FE040080ull + ((offset) & 1) * 0x40000ull)
+#define CVMX_DTX_GMXX_CTL(offset)      (0x00011800FE040060ull + ((offset) & 1) * 0x40000ull)
+#define CVMX_DTX_GMXX_DATX(offset, block_id)                                                       \
+	(0x00011800FE040040ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_GMXX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE040020ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_GMXX_SELX(offset, block_id)                                                       \
+	(0x00011800FE040000ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_GSERX_BCST_RSP(offset) (0x00011800FE480080ull + ((offset) & 15) * 32768)
+#define CVMX_DTX_GSERX_CTL(offset)	(0x00011800FE480060ull + ((offset) & 15) * 32768)
+#define CVMX_DTX_GSERX_DATX(offset, block_id)                                                      \
+	(0x00011800FE480040ull + (((offset) & 1) + ((block_id) & 15) * 0x1000ull) * 8)
+#define CVMX_DTX_GSERX_ENAX(offset, block_id)                                                      \
+	(0x00011800FE480020ull + (((offset) & 1) + ((block_id) & 15) * 0x1000ull) * 8)
+#define CVMX_DTX_GSERX_SELX(offset, block_id)                                                      \
+	(0x00011800FE480000ull + (((offset) & 1) + ((block_id) & 15) * 0x1000ull) * 8)
+#define CVMX_DTX_HNA_BCST_RSP		   (0x00011800FE238080ull)
+#define CVMX_DTX_HNA_CTL		   (0x00011800FE238060ull)
+#define CVMX_DTX_HNA_DATX(offset)	   (0x00011800FE238040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_HNA_ENAX(offset)	   (0x00011800FE238020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_HNA_SELX(offset)	   (0x00011800FE238000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILA_BCST_RSP		   (0x00011800FE0B8080ull)
+#define CVMX_DTX_ILA_CTL		   (0x00011800FE0B8060ull)
+#define CVMX_DTX_ILA_DATX(offset)	   (0x00011800FE0B8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILA_ENAX(offset)	   (0x00011800FE0B8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILA_SELX(offset)	   (0x00011800FE0B8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILK_BCST_RSP		   (0x00011800FE0A0080ull)
+#define CVMX_DTX_ILK_CTL		   (0x00011800FE0A0060ull)
+#define CVMX_DTX_ILK_DATX(offset)	   (0x00011800FE0A0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILK_ENAX(offset)	   (0x00011800FE0A0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ILK_SELX(offset)	   (0x00011800FE0A0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBN_BCST_RSP		   (0x00011800FE780080ull)
+#define CVMX_DTX_IOBN_CTL		   (0x00011800FE780060ull)
+#define CVMX_DTX_IOBN_DATX(offset)	   (0x00011800FE780040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBN_ENAX(offset)	   (0x00011800FE780020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBN_SELX(offset)	   (0x00011800FE780000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBP_BCST_RSP		   (0x00011800FE7A0080ull)
+#define CVMX_DTX_IOBP_CTL		   (0x00011800FE7A0060ull)
+#define CVMX_DTX_IOBP_DATX(offset)	   (0x00011800FE7A0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBP_ENAX(offset)	   (0x00011800FE7A0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOBP_SELX(offset)	   (0x00011800FE7A0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOB_BCST_RSP		   (0x00011800FE780080ull)
+#define CVMX_DTX_IOB_CTL		   (0x00011800FE780060ull)
+#define CVMX_DTX_IOB_DATX(offset)	   (0x00011800FE780040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOB_ENAX(offset)	   (0x00011800FE780020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IOB_SELX(offset)	   (0x00011800FE780000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IPD_BCST_RSP		   (0x00011800FE278080ull)
+#define CVMX_DTX_IPD_CTL		   (0x00011800FE278060ull)
+#define CVMX_DTX_IPD_DATX(offset)	   (0x00011800FE278040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IPD_ENAX(offset)	   (0x00011800FE278020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_IPD_SELX(offset)	   (0x00011800FE278000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_KEY_BCST_RSP		   (0x00011800FE100080ull)
+#define CVMX_DTX_KEY_CTL		   (0x00011800FE100060ull)
+#define CVMX_DTX_KEY_DATX(offset)	   (0x00011800FE100040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_KEY_ENAX(offset)	   (0x00011800FE100020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_KEY_SELX(offset)	   (0x00011800FE100000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_L2C_CBCX_BCST_RSP(offset) (0x00011800FE420080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_L2C_CBCX_CTL(offset)	   (0x00011800FE420060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_L2C_CBCX_DATX(offset, block_id)                                                   \
+	(0x00011800FE420040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_CBCX_ENAX(offset, block_id)                                                   \
+	(0x00011800FE420020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_CBCX_SELX(offset, block_id)                                                   \
+	(0x00011800FE420000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_MCIX_BCST_RSP(offset) (0x00011800FE2E0080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_L2C_MCIX_CTL(offset)	   (0x00011800FE2E0060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_L2C_MCIX_DATX(offset, block_id)                                                   \
+	(0x00011800FE2E0040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_MCIX_ENAX(offset, block_id)                                                   \
+	(0x00011800FE2E0020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_MCIX_SELX(offset, block_id)                                                   \
+	(0x00011800FE2E0000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_TADX_BCST_RSP(offset) (0x00011800FE240080ull + ((offset) & 7) * 32768)
+#define CVMX_DTX_L2C_TADX_CTL(offset)	   (0x00011800FE240060ull + ((offset) & 7) * 32768)
+#define CVMX_DTX_L2C_TADX_DATX(offset, block_id)                                                   \
+	(0x00011800FE240040ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_TADX_ENAX(offset, block_id)                                                   \
+	(0x00011800FE240020ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_L2C_TADX_SELX(offset, block_id)                                                   \
+	(0x00011800FE240000ull + (((offset) & 1) + ((block_id) & 7) * 0x1000ull) * 8)
+#define CVMX_DTX_LAPX_BCST_RSP(offset) (0x00011800FE060080ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_LAPX_CTL(offset)      (0x00011800FE060060ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_LAPX_DATX(offset, block_id)                                                       \
+	(0x00011800FE060040ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_LAPX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE060020ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_LAPX_SELX(offset, block_id)                                                       \
+	(0x00011800FE060000ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_LBK_BCST_RSP	       (0x00011800FE090080ull)
+#define CVMX_DTX_LBK_CTL	       (0x00011800FE090060ull)
+#define CVMX_DTX_LBK_DATX(offset)      (0x00011800FE090040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_LBK_ENAX(offset)      (0x00011800FE090020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_LBK_SELX(offset)      (0x00011800FE090000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_LMCX_BCST_RSP(offset) (0x00011800FE440080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_LMCX_CTL(offset)      (0x00011800FE440060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_LMCX_DATX(offset, block_id)                                                       \
+	(0x00011800FE440040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_LMCX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE440020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_LMCX_SELX(offset, block_id)                                                       \
+	(0x00011800FE440000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_MDBX_BCST_RSP(offset) (0x00011800FEC00080ull + ((offset) & 31) * 32768)
+#define CVMX_DTX_MDBX_CTL(offset)      (0x00011800FEC00060ull + ((offset) & 31) * 32768)
+#define CVMX_DTX_MDBX_DATX(offset, block_id)                                                       \
+	(0x00011800FEC00040ull + (((offset) & 1) + ((block_id) & 31) * 0x1000ull) * 8)
+#define CVMX_DTX_MDBX_ENAX(offset, block_id)                                                       \
+	(0x00011800FEC00020ull + (((offset) & 1) + ((block_id) & 31) * 0x1000ull) * 8)
+#define CVMX_DTX_MDBX_SELX(offset, block_id)                                                       \
+	(0x00011800FEC00000ull + (((offset) & 1) + ((block_id) & 31) * 0x1000ull) * 8)
+#define CVMX_DTX_MHBW_BCST_RSP		   (0x00011800FE598080ull)
+#define CVMX_DTX_MHBW_CTL		   (0x00011800FE598060ull)
+#define CVMX_DTX_MHBW_DATX(offset)	   (0x00011800FE598040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_MHBW_ENAX(offset)	   (0x00011800FE598020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_MHBW_SELX(offset)	   (0x00011800FE598000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_MIO_BCST_RSP		   (0x00011800FE000080ull)
+#define CVMX_DTX_MIO_CTL		   (0x00011800FE000060ull)
+#define CVMX_DTX_MIO_DATX(offset)	   (0x00011800FE000040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_MIO_ENAX(offset)	   (0x00011800FE000020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_MIO_SELX(offset)	   (0x00011800FE000000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_BOT_BCST_RSP	   (0x00011800FE198080ull)
+#define CVMX_DTX_OCX_BOT_CTL		   (0x00011800FE198060ull)
+#define CVMX_DTX_OCX_BOT_DATX(offset)	   (0x00011800FE198040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_BOT_ENAX(offset)	   (0x00011800FE198020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_BOT_SELX(offset)	   (0x00011800FE198000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_LNKX_BCST_RSP(offset) (0x00011800FE180080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_OCX_LNKX_CTL(offset)	   (0x00011800FE180060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_OCX_LNKX_DATX(offset, block_id)                                                   \
+	(0x00011800FE180040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_LNKX_ENAX(offset, block_id)                                                   \
+	(0x00011800FE180020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_LNKX_SELX(offset, block_id)                                                   \
+	(0x00011800FE180000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_OLEX_BCST_RSP(offset) (0x00011800FE1A0080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_OCX_OLEX_CTL(offset)	   (0x00011800FE1A0060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_OCX_OLEX_DATX(offset, block_id)                                                   \
+	(0x00011800FE1A0040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_OLEX_ENAX(offset, block_id)                                                   \
+	(0x00011800FE1A0020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_OLEX_SELX(offset, block_id)                                                   \
+	(0x00011800FE1A0000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_OCX_TOP_BCST_RSP     (0x00011800FE088080ull)
+#define CVMX_DTX_OCX_TOP_CTL	      (0x00011800FE088060ull)
+#define CVMX_DTX_OCX_TOP_DATX(offset) (0x00011800FE088040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_TOP_ENAX(offset) (0x00011800FE088020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OCX_TOP_SELX(offset) (0x00011800FE088000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_OSM_BCST_RSP	      CVMX_DTX_OSM_BCST_RSP_FUNC()
+static inline u64 CVMX_DTX_OSM_BCST_RSP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE6E0080ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE6E0080ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEEE0080ull;
+	}
+	return 0x00011800FE6E0080ull;
+}
+
+#define CVMX_DTX_OSM_CTL CVMX_DTX_OSM_CTL_FUNC()
+static inline u64 CVMX_DTX_OSM_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE6E0060ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE6E0060ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEEE0060ull;
+	}
+	return 0x00011800FE6E0060ull;
+}
+
+static inline u64 CVMX_DTX_OSM_DATX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE6E0040ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE6E0040ull + (offset) * 8;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEEE0040ull + (offset) * 8;
+	}
+	return 0x00011800FE6E0040ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_OSM_ENAX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE6E0020ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE6E0020ull + (offset) * 8;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEEE0020ull + (offset) * 8;
+	}
+	return 0x00011800FE6E0020ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_OSM_SELX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FE6E0000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FE6E0000ull + (offset) * 8;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEEE0000ull + (offset) * 8;
+	}
+	return 0x00011800FE6E0000ull + (offset) * 8;
+}
+
+#define CVMX_DTX_PCSX_BCST_RSP(offset) (0x00011800FE580080ull + ((offset) & 1) * 0x40000ull)
+#define CVMX_DTX_PCSX_CTL(offset)      (0x00011800FE580060ull + ((offset) & 1) * 0x40000ull)
+#define CVMX_DTX_PCSX_DATX(offset, block_id)                                                       \
+	(0x00011800FE580040ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_PCSX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE580020ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_PCSX_SELX(offset, block_id)                                                       \
+	(0x00011800FE580000ull + (((offset) & 1) + ((block_id) & 1) * 0x8000ull) * 8)
+#define CVMX_DTX_PEMX_BCST_RSP(offset) (0x00011800FE600080ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_PEMX_CTL(offset)      (0x00011800FE600060ull + ((offset) & 3) * 32768)
+#define CVMX_DTX_PEMX_DATX(offset, block_id)                                                       \
+	(0x00011800FE600040ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_PEMX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE600020ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_PEMX_SELX(offset, block_id)                                                       \
+	(0x00011800FE600000ull + (((offset) & 1) + ((block_id) & 3) * 0x1000ull) * 8)
+#define CVMX_DTX_PIP_BCST_RSP	      (0x00011800FE500080ull)
+#define CVMX_DTX_PIP_CTL	      (0x00011800FE500060ull)
+#define CVMX_DTX_PIP_DATX(offset)     (0x00011800FE500040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PIP_ENAX(offset)     (0x00011800FE500020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PIP_SELX(offset)     (0x00011800FE500000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PBE_BCST_RSP     (0x00011800FE228080ull)
+#define CVMX_DTX_PKI_PBE_CTL	      (0x00011800FE228060ull)
+#define CVMX_DTX_PKI_PBE_DATX(offset) (0x00011800FE228040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PBE_ENAX(offset) (0x00011800FE228020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PBE_SELX(offset) (0x00011800FE228000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PFE_BCST_RSP     (0x00011800FE220080ull)
+#define CVMX_DTX_PKI_PFE_CTL	      (0x00011800FE220060ull)
+#define CVMX_DTX_PKI_PFE_DATX(offset) (0x00011800FE220040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PFE_ENAX(offset) (0x00011800FE220020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PFE_SELX(offset) (0x00011800FE220000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PIX_BCST_RSP     (0x00011800FE230080ull)
+#define CVMX_DTX_PKI_PIX_CTL	      (0x00011800FE230060ull)
+#define CVMX_DTX_PKI_PIX_DATX(offset) (0x00011800FE230040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PIX_ENAX(offset) (0x00011800FE230020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKI_PIX_SELX(offset) (0x00011800FE230000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PKO_BCST_RSP	      CVMX_DTX_PKO_BCST_RSP_FUNC()
+static inline u64 CVMX_DTX_PKO_BCST_RSP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FEAA0080ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FEAA0080ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEAA0080ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE280080ull;
+	}
+	return 0x00011800FEAA0080ull;
+}
+
+#define CVMX_DTX_PKO_CTL CVMX_DTX_PKO_CTL_FUNC()
+static inline u64 CVMX_DTX_PKO_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FEAA0060ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FEAA0060ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEAA0060ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE280060ull;
+	}
+	return 0x00011800FEAA0060ull;
+}
+
+static inline u64 CVMX_DTX_PKO_DATX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FEAA0040ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FEAA0040ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEAA0040ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE280040ull + (offset) * 8;
+	}
+	return 0x00011800FEAA0040ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_PKO_ENAX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FEAA0020ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FEAA0020ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEAA0020ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE280020ull + (offset) * 8;
+	}
+	return 0x00011800FEAA0020ull + (offset) * 8;
+}
+
+static inline u64 CVMX_DTX_PKO_SELX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800FEAA0000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800FEAA0000ull + (offset) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FEAA0000ull + (offset) * 8;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800FE280000ull + (offset) * 8;
+	}
+	return 0x00011800FEAA0000ull + (offset) * 8;
+}
+
+#define CVMX_DTX_PNBDX_BCST_RSP(offset) (0x00011800FED90080ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_PNBDX_CTL(offset)	(0x00011800FED90060ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_PNBDX_DATX(offset, block_id)                                                      \
+	(0x00011800FED90040ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_PNBDX_ENAX(offset, block_id)                                                      \
+	(0x00011800FED90020ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_PNBDX_SELX(offset, block_id)                                                      \
+	(0x00011800FED90000ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_PNBX_BCST_RSP(offset) (0x00011800FE580080ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_PNBX_CTL(offset)      (0x00011800FE580060ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_PNBX_DATX(offset, block_id)                                                       \
+	(0x00011800FE580040ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_PNBX_ENAX(offset, block_id)                                                       \
+	(0x00011800FE580020ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_PNBX_SELX(offset, block_id)                                                       \
+	(0x00011800FE580000ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_POW_BCST_RSP		(0x00011800FE338080ull)
+#define CVMX_DTX_POW_CTL		(0x00011800FE338060ull)
+#define CVMX_DTX_POW_DATX(offset)	(0x00011800FE338040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_POW_ENAX(offset)	(0x00011800FE338020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_POW_SELX(offset)	(0x00011800FE338000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PRCH_BCST_RSP		(0x00011800FED00080ull)
+#define CVMX_DTX_PRCH_CTL		(0x00011800FED00060ull)
+#define CVMX_DTX_PRCH_DATX(offset)	(0x00011800FED00040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PRCH_ENAX(offset)	(0x00011800FED00020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PRCH_SELX(offset)	(0x00011800FED00000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PSM_BCST_RSP		(0x00011800FEEA0080ull)
+#define CVMX_DTX_PSM_CTL		(0x00011800FEEA0060ull)
+#define CVMX_DTX_PSM_DATX(offset)	(0x00011800FEEA0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PSM_ENAX(offset)	(0x00011800FEEA0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_PSM_SELX(offset)	(0x00011800FEEA0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RAD_BCST_RSP		(0x00011800FE380080ull)
+#define CVMX_DTX_RAD_CTL		(0x00011800FE380060ull)
+#define CVMX_DTX_RAD_DATX(offset)	(0x00011800FE380040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RAD_ENAX(offset)	(0x00011800FE380020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RAD_SELX(offset)	(0x00011800FE380000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RDEC_BCST_RSP		(0x00011800FED68080ull)
+#define CVMX_DTX_RDEC_CTL		(0x00011800FED68060ull)
+#define CVMX_DTX_RDEC_DATX(offset)	(0x00011800FED68040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RDEC_ENAX(offset)	(0x00011800FED68020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RDEC_SELX(offset)	(0x00011800FED68000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RFIF_BCST_RSP		(0x00011800FE6A8080ull)
+#define CVMX_DTX_RFIF_CTL		(0x00011800FE6A8060ull)
+#define CVMX_DTX_RFIF_DATX(offset)	(0x00011800FE6A8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RFIF_ENAX(offset)	(0x00011800FE6A8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RFIF_SELX(offset)	(0x00011800FE6A8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RMAP_BCST_RSP		(0x00011800FED40080ull)
+#define CVMX_DTX_RMAP_CTL		(0x00011800FED40060ull)
+#define CVMX_DTX_RMAP_DATX(offset)	(0x00011800FED40040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RMAP_ENAX(offset)	(0x00011800FED40020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RMAP_SELX(offset)	(0x00011800FED40000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RNM_BCST_RSP		(0x00011800FE200080ull)
+#define CVMX_DTX_RNM_CTL		(0x00011800FE200060ull)
+#define CVMX_DTX_RNM_DATX(offset)	(0x00011800FE200040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RNM_ENAX(offset)	(0x00011800FE200020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RNM_SELX(offset)	(0x00011800FE200000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RST_BCST_RSP		(0x00011800FE030080ull)
+#define CVMX_DTX_RST_CTL		(0x00011800FE030060ull)
+#define CVMX_DTX_RST_DATX(offset)	(0x00011800FE030040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RST_ENAX(offset)	(0x00011800FE030020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_RST_SELX(offset)	(0x00011800FE030000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SATA_BCST_RSP		(0x00011800FE360080ull)
+#define CVMX_DTX_SATA_CTL		(0x00011800FE360060ull)
+#define CVMX_DTX_SATA_DATX(offset)	(0x00011800FE360040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SATA_ENAX(offset)	(0x00011800FE360020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SATA_SELX(offset)	(0x00011800FE360000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SLI_BCST_RSP		(0x00011800FE8F8080ull)
+#define CVMX_DTX_SLI_CTL		(0x00011800FE8F8060ull)
+#define CVMX_DTX_SLI_DATX(offset)	(0x00011800FE8F8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SLI_ENAX(offset)	(0x00011800FE8F8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SLI_SELX(offset)	(0x00011800FE8F8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SPEM_BCST_RSP		(0x00011800FE600080ull)
+#define CVMX_DTX_SPEM_CTL		(0x00011800FE600060ull)
+#define CVMX_DTX_SPEM_DATX(offset)	(0x00011800FE600040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SPEM_ENAX(offset)	(0x00011800FE600020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SPEM_SELX(offset)	(0x00011800FE600000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SRIOX_BCST_RSP(offset) (0x00011800FE640080ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_SRIOX_CTL(offset)	(0x00011800FE640060ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_SRIOX_DATX(offset, block_id)                                                      \
+	(0x00011800FE640040ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_SRIOX_ENAX(offset, block_id)                                                      \
+	(0x00011800FE640020ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_SRIOX_SELX(offset, block_id)                                                      \
+	(0x00011800FE640000ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_SSO_BCST_RSP		  (0x00011800FEB38080ull)
+#define CVMX_DTX_SSO_CTL		  (0x00011800FEB38060ull)
+#define CVMX_DTX_SSO_DATX(offset)	  (0x00011800FEB38040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SSO_ENAX(offset)	  (0x00011800FEB38020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_SSO_SELX(offset)	  (0x00011800FEB38000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TDEC_BCST_RSP		  (0x00011800FED60080ull)
+#define CVMX_DTX_TDEC_CTL		  (0x00011800FED60060ull)
+#define CVMX_DTX_TDEC_DATX(offset)	  (0x00011800FED60040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TDEC_ENAX(offset)	  (0x00011800FED60020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TDEC_SELX(offset)	  (0x00011800FED60000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TIM_BCST_RSP		  (0x00011800FE2C0080ull)
+#define CVMX_DTX_TIM_CTL		  (0x00011800FE2C0060ull)
+#define CVMX_DTX_TIM_DATX(offset)	  (0x00011800FE2C0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TIM_ENAX(offset)	  (0x00011800FE2C0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_TIM_SELX(offset)	  (0x00011800FE2C0000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ULFE_BCST_RSP		  (0x00011800FED08080ull)
+#define CVMX_DTX_ULFE_CTL		  (0x00011800FED08060ull)
+#define CVMX_DTX_ULFE_DATX(offset)	  (0x00011800FED08040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ULFE_ENAX(offset)	  (0x00011800FED08020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ULFE_SELX(offset)	  (0x00011800FED08000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_USBDRDX_BCST_RSP(offset) (0x00011800FE340080ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_USBDRDX_CTL(offset)	  (0x00011800FE340060ull + ((offset) & 1) * 32768)
+#define CVMX_DTX_USBDRDX_DATX(offset, block_id)                                                    \
+	(0x00011800FE340040ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_USBDRDX_ENAX(offset, block_id)                                                    \
+	(0x00011800FE340020ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
+#define CVMX_DTX_USBDRDX_SELX(offset, block_id)                                                    \
+	(0x00011800FE340000ull + (((offset) & 1) + ((block_id) & 1) * 0x1000ull) * 8)
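+/* Editor's note (not part of the original patch): the USBH block below
+ * has a single instance; the generated macros keep their parameters for
+ * interface consistency, but BCST_RSP/CTL ignore their offset argument
+ * and DATX/ENAX/SELX mask the block index to zero.
+ */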
+#define CVMX_DTX_USBHX_BCST_RSP(offset) (0x00011800FE340080ull)
+#define CVMX_DTX_USBHX_CTL(offset)	(0x00011800FE340060ull)
+#define CVMX_DTX_USBHX_DATX(offset, block_id)                                                      \
+	(0x00011800FE340040ull + (((offset) & 1) + ((block_id) & 0) * 0x0ull) * 8)
+#define CVMX_DTX_USBHX_ENAX(offset, block_id)                                                      \
+	(0x00011800FE340020ull + (((offset) & 1) + ((block_id) & 0) * 0x0ull) * 8)
+#define CVMX_DTX_USBHX_SELX(offset, block_id)                                                      \
+	(0x00011800FE340000ull + (((offset) & 1) + ((block_id) & 0) * 0x0ull) * 8)
+#define CVMX_DTX_VDEC_BCST_RSP	   (0x00011800FED70080ull)
+#define CVMX_DTX_VDEC_CTL	   (0x00011800FED70060ull)
+#define CVMX_DTX_VDEC_DATX(offset) (0x00011800FED70040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_VDEC_ENAX(offset) (0x00011800FED70020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_VDEC_SELX(offset) (0x00011800FED70000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WPSE_BCST_RSP	   (0x00011800FED10080ull)
+#define CVMX_DTX_WPSE_CTL	   (0x00011800FED10060ull)
+#define CVMX_DTX_WPSE_DATX(offset) (0x00011800FED10040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WPSE_ENAX(offset) (0x00011800FED10020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WPSE_SELX(offset) (0x00011800FED10000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRCE_BCST_RSP	   (0x00011800FED38080ull)
+#define CVMX_DTX_WRCE_CTL	   (0x00011800FED38060ull)
+#define CVMX_DTX_WRCE_DATX(offset) (0x00011800FED38040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRCE_ENAX(offset) (0x00011800FED38020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRCE_SELX(offset) (0x00011800FED38000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRDE_BCST_RSP	   (0x00011800FED58080ull)
+#define CVMX_DTX_WRDE_CTL	   (0x00011800FED58060ull)
+#define CVMX_DTX_WRDE_DATX(offset) (0x00011800FED58040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRDE_ENAX(offset) (0x00011800FED58020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRDE_SELX(offset) (0x00011800FED58000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRSE_BCST_RSP	   (0x00011800FED28080ull)
+#define CVMX_DTX_WRSE_CTL	   (0x00011800FED28060ull)
+#define CVMX_DTX_WRSE_DATX(offset) (0x00011800FED28040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRSE_ENAX(offset) (0x00011800FED28020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WRSE_SELX(offset) (0x00011800FED28000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WTXE_BCST_RSP	   (0x00011800FED20080ull)
+#define CVMX_DTX_WTXE_CTL	   (0x00011800FED20060ull)
+#define CVMX_DTX_WTXE_DATX(offset) (0x00011800FED20040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WTXE_ENAX(offset) (0x00011800FED20020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_WTXE_SELX(offset) (0x00011800FED20000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XCV_BCST_RSP	   (0x00011800FE6D8080ull)
+#define CVMX_DTX_XCV_CTL	   (0x00011800FE6D8060ull)
+#define CVMX_DTX_XCV_DATX(offset)  (0x00011800FE6D8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XCV_ENAX(offset)  (0x00011800FE6D8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XCV_SELX(offset)  (0x00011800FE6D8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XSX_BCST_RSP	   (0x00011800FE5A8080ull)
+#define CVMX_DTX_XSX_CTL	   (0x00011800FE5A8060ull)
+#define CVMX_DTX_XSX_DATX(offset)  (0x00011800FE5A8040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XSX_ENAX(offset)  (0x00011800FE5A8020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_XSX_SELX(offset)  (0x00011800FE5A8000ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ZIP_BCST_RSP	   (0x00011800FE1C0080ull)
+#define CVMX_DTX_ZIP_CTL	   (0x00011800FE1C0060ull)
+#define CVMX_DTX_ZIP_DATX(offset)  (0x00011800FE1C0040ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ZIP_ENAX(offset)  (0x00011800FE1C0020ull + ((offset) & 1) * 8)
+#define CVMX_DTX_ZIP_SELX(offset)  (0x00011800FE1C0000ull + ((offset) & 1) * 8)
+
+/**
+ * cvmx_dtx_agl_bcst_rsp
+ */
+union cvmx_dtx_agl_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_agl_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_agl_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_agl_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_agl_bcst_rsp cvmx_dtx_agl_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_agl_ctl
+ */
+union cvmx_dtx_agl_ctl {
+	u64 u64;
+	struct cvmx_dtx_agl_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_agl_ctl_s cn70xx;
+	struct cvmx_dtx_agl_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_agl_ctl cvmx_dtx_agl_ctl_t;
+
+/**
+ * cvmx_dtx_agl_dat#
+ */
+union cvmx_dtx_agl_datx {
+	u64 u64;
+	struct cvmx_dtx_agl_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_agl_datx_s cn70xx;
+	struct cvmx_dtx_agl_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_agl_datx cvmx_dtx_agl_datx_t;
+
+/**
+ * cvmx_dtx_agl_ena#
+ */
+union cvmx_dtx_agl_enax {
+	u64 u64;
+	struct cvmx_dtx_agl_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_agl_enax_s cn70xx;
+	struct cvmx_dtx_agl_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_agl_enax cvmx_dtx_agl_enax_t;
+
+/**
+ * cvmx_dtx_agl_sel#
+ */
+union cvmx_dtx_agl_selx {
+	u64 u64;
+	struct cvmx_dtx_agl_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_agl_selx_s cn70xx;
+	struct cvmx_dtx_agl_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_agl_selx cvmx_dtx_agl_selx_t;
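+
+/*
+ * Editor's sketch (not part of the original patch): how these generated
+ * union types are typically used.  The field layout comes from the
+ * structs above; cvmx_read_csr()/cvmx_write_csr() are the SDK's CSR
+ * accessors and "signal" stands for a debug bus source number.
+ *
+ *	cvmx_dtx_agl_selx_t sel;
+ *	cvmx_dtx_agl_datx_t dat;
+ *
+ *	sel.u64 = 0;
+ *	sel.s.value = signal;
+ *	cvmx_write_csr(CVMX_DTX_AGL_SELX(0), sel.u64);
+ *	dat.u64 = cvmx_read_csr(CVMX_DTX_AGL_DATX(0));
+ *
+ * After the read, dat.s.raw holds the 36-bit captured debug value.
+ */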
+
+/**
+ * cvmx_dtx_ase_bcst_rsp
+ */
+union cvmx_dtx_ase_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ase_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ase_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ase_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ase_bcst_rsp cvmx_dtx_ase_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ase_ctl
+ */
+union cvmx_dtx_ase_ctl {
+	u64 u64;
+	struct cvmx_dtx_ase_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ase_ctl_s cn78xx;
+	struct cvmx_dtx_ase_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ase_ctl cvmx_dtx_ase_ctl_t;
+
+/**
+ * cvmx_dtx_ase_dat#
+ */
+union cvmx_dtx_ase_datx {
+	u64 u64;
+	struct cvmx_dtx_ase_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ase_datx_s cn78xx;
+	struct cvmx_dtx_ase_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ase_datx cvmx_dtx_ase_datx_t;
+
+/**
+ * cvmx_dtx_ase_ena#
+ */
+union cvmx_dtx_ase_enax {
+	u64 u64;
+	struct cvmx_dtx_ase_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ase_enax_s cn78xx;
+	struct cvmx_dtx_ase_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ase_enax cvmx_dtx_ase_enax_t;
+
+/**
+ * cvmx_dtx_ase_sel#
+ */
+union cvmx_dtx_ase_selx {
+	u64 u64;
+	struct cvmx_dtx_ase_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ase_selx_s cn78xx;
+	struct cvmx_dtx_ase_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ase_selx cvmx_dtx_ase_selx_t;
+
+/**
+ * cvmx_dtx_bbx1i_bcst_rsp
+ */
+union cvmx_dtx_bbx1i_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bbx1i_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bbx1i_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx1i_bcst_rsp cvmx_dtx_bbx1i_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bbx1i_ctl
+ */
+union cvmx_dtx_bbx1i_ctl {
+	u64 u64;
+	struct cvmx_dtx_bbx1i_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bbx1i_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx1i_ctl cvmx_dtx_bbx1i_ctl_t;
+
+/**
+ * cvmx_dtx_bbx1i_dat#
+ */
+union cvmx_dtx_bbx1i_datx {
+	u64 u64;
+	struct cvmx_dtx_bbx1i_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bbx1i_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx1i_datx cvmx_dtx_bbx1i_datx_t;
+
+/**
+ * cvmx_dtx_bbx1i_ena#
+ */
+union cvmx_dtx_bbx1i_enax {
+	u64 u64;
+	struct cvmx_dtx_bbx1i_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bbx1i_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx1i_enax cvmx_dtx_bbx1i_enax_t;
+
+/**
+ * cvmx_dtx_bbx1i_sel#
+ */
+union cvmx_dtx_bbx1i_selx {
+	u64 u64;
+	struct cvmx_dtx_bbx1i_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bbx1i_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx1i_selx cvmx_dtx_bbx1i_selx_t;
+
+/**
+ * cvmx_dtx_bbx2i_bcst_rsp
+ */
+union cvmx_dtx_bbx2i_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bbx2i_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bbx2i_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx2i_bcst_rsp cvmx_dtx_bbx2i_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bbx2i_ctl
+ */
+union cvmx_dtx_bbx2i_ctl {
+	u64 u64;
+	struct cvmx_dtx_bbx2i_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bbx2i_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx2i_ctl cvmx_dtx_bbx2i_ctl_t;
+
+/**
+ * cvmx_dtx_bbx2i_dat#
+ */
+union cvmx_dtx_bbx2i_datx {
+	u64 u64;
+	struct cvmx_dtx_bbx2i_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bbx2i_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx2i_datx cvmx_dtx_bbx2i_datx_t;
+
+/**
+ * cvmx_dtx_bbx2i_ena#
+ */
+union cvmx_dtx_bbx2i_enax {
+	u64 u64;
+	struct cvmx_dtx_bbx2i_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bbx2i_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx2i_enax cvmx_dtx_bbx2i_enax_t;
+
+/**
+ * cvmx_dtx_bbx2i_sel#
+ */
+union cvmx_dtx_bbx2i_selx {
+	u64 u64;
+	struct cvmx_dtx_bbx2i_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bbx2i_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx2i_selx cvmx_dtx_bbx2i_selx_t;
+
+/**
+ * cvmx_dtx_bbx3i_bcst_rsp
+ */
+union cvmx_dtx_bbx3i_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bbx3i_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bbx3i_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx3i_bcst_rsp cvmx_dtx_bbx3i_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bbx3i_ctl
+ */
+union cvmx_dtx_bbx3i_ctl {
+	u64 u64;
+	struct cvmx_dtx_bbx3i_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bbx3i_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx3i_ctl cvmx_dtx_bbx3i_ctl_t;
+
+/**
+ * cvmx_dtx_bbx3i_dat#
+ */
+union cvmx_dtx_bbx3i_datx {
+	u64 u64;
+	struct cvmx_dtx_bbx3i_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bbx3i_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx3i_datx cvmx_dtx_bbx3i_datx_t;
+
+/**
+ * cvmx_dtx_bbx3i_ena#
+ */
+union cvmx_dtx_bbx3i_enax {
+	u64 u64;
+	struct cvmx_dtx_bbx3i_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bbx3i_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx3i_enax cvmx_dtx_bbx3i_enax_t;
+
+/**
+ * cvmx_dtx_bbx3i_sel#
+ */
+union cvmx_dtx_bbx3i_selx {
+	u64 u64;
+	struct cvmx_dtx_bbx3i_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bbx3i_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bbx3i_selx cvmx_dtx_bbx3i_selx_t;
+
+/**
+ * cvmx_dtx_bch_bcst_rsp
+ */
+union cvmx_dtx_bch_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bch_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bch_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_bch_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bch_bcst_rsp cvmx_dtx_bch_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bch_ctl
+ */
+union cvmx_dtx_bch_ctl {
+	u64 u64;
+	struct cvmx_dtx_bch_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bch_ctl_s cn73xx;
+	struct cvmx_dtx_bch_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bch_ctl cvmx_dtx_bch_ctl_t;
+
+/**
+ * cvmx_dtx_bch_dat#
+ */
+union cvmx_dtx_bch_datx {
+	u64 u64;
+	struct cvmx_dtx_bch_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bch_datx_s cn73xx;
+	struct cvmx_dtx_bch_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bch_datx cvmx_dtx_bch_datx_t;
+
+/**
+ * cvmx_dtx_bch_ena#
+ */
+union cvmx_dtx_bch_enax {
+	u64 u64;
+	struct cvmx_dtx_bch_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bch_enax_s cn73xx;
+	struct cvmx_dtx_bch_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bch_enax cvmx_dtx_bch_enax_t;
+
+/**
+ * cvmx_dtx_bch_sel#
+ */
+union cvmx_dtx_bch_selx {
+	u64 u64;
+	struct cvmx_dtx_bch_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bch_selx_s cn73xx;
+	struct cvmx_dtx_bch_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bch_selx cvmx_dtx_bch_selx_t;
+
+/**
+ * cvmx_dtx_bgx#_bcst_rsp
+ */
+union cvmx_dtx_bgxx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bgxx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bgxx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_bgxx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_bgxx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_bgxx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bgxx_bcst_rsp cvmx_dtx_bgxx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bgx#_ctl
+ */
+union cvmx_dtx_bgxx_ctl {
+	u64 u64;
+	struct cvmx_dtx_bgxx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bgxx_ctl_s cn73xx;
+	struct cvmx_dtx_bgxx_ctl_s cn78xx;
+	struct cvmx_dtx_bgxx_ctl_s cn78xxp1;
+	struct cvmx_dtx_bgxx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bgxx_ctl cvmx_dtx_bgxx_ctl_t;
+
+/**
+ * cvmx_dtx_bgx#_dat#
+ */
+union cvmx_dtx_bgxx_datx {
+	u64 u64;
+	struct cvmx_dtx_bgxx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bgxx_datx_s cn73xx;
+	struct cvmx_dtx_bgxx_datx_s cn78xx;
+	struct cvmx_dtx_bgxx_datx_s cn78xxp1;
+	struct cvmx_dtx_bgxx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bgxx_datx cvmx_dtx_bgxx_datx_t;
+
+/**
+ * cvmx_dtx_bgx#_ena#
+ */
+union cvmx_dtx_bgxx_enax {
+	u64 u64;
+	struct cvmx_dtx_bgxx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bgxx_enax_s cn73xx;
+	struct cvmx_dtx_bgxx_enax_s cn78xx;
+	struct cvmx_dtx_bgxx_enax_s cn78xxp1;
+	struct cvmx_dtx_bgxx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bgxx_enax cvmx_dtx_bgxx_enax_t;
+
+/**
+ * cvmx_dtx_bgx#_sel#
+ */
+union cvmx_dtx_bgxx_selx {
+	u64 u64;
+	struct cvmx_dtx_bgxx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bgxx_selx_s cn73xx;
+	struct cvmx_dtx_bgxx_selx_s cn78xx;
+	struct cvmx_dtx_bgxx_selx_s cn78xxp1;
+	struct cvmx_dtx_bgxx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bgxx_selx cvmx_dtx_bgxx_selx_t;
+
+/**
+ * cvmx_dtx_broadcast_ctl
+ */
+union cvmx_dtx_broadcast_ctl {
+	u64 u64;
+	struct cvmx_dtx_broadcast_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_broadcast_ctl_s cn70xx;
+	struct cvmx_dtx_broadcast_ctl_s cn70xxp1;
+	struct cvmx_dtx_broadcast_ctl_s cn73xx;
+	struct cvmx_dtx_broadcast_ctl_s cn78xx;
+	struct cvmx_dtx_broadcast_ctl_s cn78xxp1;
+	struct cvmx_dtx_broadcast_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_broadcast_ctl cvmx_dtx_broadcast_ctl_t;
+
+/**
+ * cvmx_dtx_broadcast_ena#
+ */
+union cvmx_dtx_broadcast_enax {
+	u64 u64;
+	struct cvmx_dtx_broadcast_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_broadcast_enax_s cn70xx;
+	struct cvmx_dtx_broadcast_enax_s cn70xxp1;
+	struct cvmx_dtx_broadcast_enax_s cn73xx;
+	struct cvmx_dtx_broadcast_enax_s cn78xx;
+	struct cvmx_dtx_broadcast_enax_s cn78xxp1;
+	struct cvmx_dtx_broadcast_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_broadcast_enax cvmx_dtx_broadcast_enax_t;
+
+/**
+ * cvmx_dtx_broadcast_sel#
+ */
+union cvmx_dtx_broadcast_selx {
+	u64 u64;
+	struct cvmx_dtx_broadcast_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_broadcast_selx_s cn70xx;
+	struct cvmx_dtx_broadcast_selx_s cn70xxp1;
+	struct cvmx_dtx_broadcast_selx_s cn73xx;
+	struct cvmx_dtx_broadcast_selx_s cn78xx;
+	struct cvmx_dtx_broadcast_selx_s cn78xxp1;
+	struct cvmx_dtx_broadcast_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_broadcast_selx cvmx_dtx_broadcast_selx_t;
+
+/**
+ * cvmx_dtx_bts_bcst_rsp
+ */
+union cvmx_dtx_bts_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_bts_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_bts_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bts_bcst_rsp cvmx_dtx_bts_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_bts_ctl
+ */
+union cvmx_dtx_bts_ctl {
+	u64 u64;
+	struct cvmx_dtx_bts_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_bts_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bts_ctl cvmx_dtx_bts_ctl_t;
+
+/**
+ * cvmx_dtx_bts_dat#
+ */
+union cvmx_dtx_bts_datx {
+	u64 u64;
+	struct cvmx_dtx_bts_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_bts_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bts_datx cvmx_dtx_bts_datx_t;
+
+/**
+ * cvmx_dtx_bts_ena#
+ */
+union cvmx_dtx_bts_enax {
+	u64 u64;
+	struct cvmx_dtx_bts_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_bts_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bts_enax cvmx_dtx_bts_enax_t;
+
+/**
+ * cvmx_dtx_bts_sel#
+ */
+union cvmx_dtx_bts_selx {
+	u64 u64;
+	struct cvmx_dtx_bts_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_bts_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_bts_selx cvmx_dtx_bts_selx_t;
+
+/**
+ * cvmx_dtx_ciu_bcst_rsp
+ */
+union cvmx_dtx_ciu_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ciu_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ciu_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_ciu_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ciu_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_ciu_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ciu_bcst_rsp cvmx_dtx_ciu_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ciu_ctl
+ */
+union cvmx_dtx_ciu_ctl {
+	u64 u64;
+	struct cvmx_dtx_ciu_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ciu_ctl_s cn73xx;
+	struct cvmx_dtx_ciu_ctl_s cn78xx;
+	struct cvmx_dtx_ciu_ctl_s cn78xxp1;
+	struct cvmx_dtx_ciu_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ciu_ctl cvmx_dtx_ciu_ctl_t;
+
+/**
+ * cvmx_dtx_ciu_dat#
+ */
+union cvmx_dtx_ciu_datx {
+	u64 u64;
+	struct cvmx_dtx_ciu_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ciu_datx_s cn73xx;
+	struct cvmx_dtx_ciu_datx_s cn78xx;
+	struct cvmx_dtx_ciu_datx_s cn78xxp1;
+	struct cvmx_dtx_ciu_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ciu_datx cvmx_dtx_ciu_datx_t;
+
+/**
+ * cvmx_dtx_ciu_ena#
+ */
+union cvmx_dtx_ciu_enax {
+	u64 u64;
+	struct cvmx_dtx_ciu_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ciu_enax_s cn73xx;
+	struct cvmx_dtx_ciu_enax_s cn78xx;
+	struct cvmx_dtx_ciu_enax_s cn78xxp1;
+	struct cvmx_dtx_ciu_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ciu_enax cvmx_dtx_ciu_enax_t;
+
+/**
+ * cvmx_dtx_ciu_sel#
+ */
+union cvmx_dtx_ciu_selx {
+	u64 u64;
+	struct cvmx_dtx_ciu_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ciu_selx_s cn73xx;
+	struct cvmx_dtx_ciu_selx_s cn78xx;
+	struct cvmx_dtx_ciu_selx_s cn78xxp1;
+	struct cvmx_dtx_ciu_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ciu_selx cvmx_dtx_ciu_selx_t;
+
+/**
+ * cvmx_dtx_denc_bcst_rsp
+ */
+union cvmx_dtx_denc_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_denc_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_denc_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_denc_bcst_rsp cvmx_dtx_denc_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_denc_ctl
+ */
+union cvmx_dtx_denc_ctl {
+	u64 u64;
+	struct cvmx_dtx_denc_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_denc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_denc_ctl cvmx_dtx_denc_ctl_t;
+
+/**
+ * cvmx_dtx_denc_dat#
+ */
+union cvmx_dtx_denc_datx {
+	u64 u64;
+	struct cvmx_dtx_denc_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_denc_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_denc_datx cvmx_dtx_denc_datx_t;
+
+/**
+ * cvmx_dtx_denc_ena#
+ */
+union cvmx_dtx_denc_enax {
+	u64 u64;
+	struct cvmx_dtx_denc_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_denc_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_denc_enax cvmx_dtx_denc_enax_t;
+
+/**
+ * cvmx_dtx_denc_sel#
+ */
+union cvmx_dtx_denc_selx {
+	u64 u64;
+	struct cvmx_dtx_denc_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_denc_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_denc_selx cvmx_dtx_denc_selx_t;
+
+/**
+ * cvmx_dtx_dfa_bcst_rsp
+ */
+union cvmx_dtx_dfa_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_dfa_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_dfa_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_dfa_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_dfa_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_dfa_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_dfa_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_dfa_bcst_rsp cvmx_dtx_dfa_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_dfa_ctl
+ */
+union cvmx_dtx_dfa_ctl {
+	u64 u64;
+	struct cvmx_dtx_dfa_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_dfa_ctl_s cn70xx;
+	struct cvmx_dtx_dfa_ctl_s cn70xxp1;
+	struct cvmx_dtx_dfa_ctl_s cn73xx;
+	struct cvmx_dtx_dfa_ctl_s cn78xx;
+	struct cvmx_dtx_dfa_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_dfa_ctl cvmx_dtx_dfa_ctl_t;
+
+/**
+ * cvmx_dtx_dfa_dat#
+ */
+union cvmx_dtx_dfa_datx {
+	u64 u64;
+	struct cvmx_dtx_dfa_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_dfa_datx_s cn70xx;
+	struct cvmx_dtx_dfa_datx_s cn70xxp1;
+	struct cvmx_dtx_dfa_datx_s cn73xx;
+	struct cvmx_dtx_dfa_datx_s cn78xx;
+	struct cvmx_dtx_dfa_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_dfa_datx cvmx_dtx_dfa_datx_t;
+
+/**
+ * cvmx_dtx_dfa_ena#
+ */
+union cvmx_dtx_dfa_enax {
+	u64 u64;
+	struct cvmx_dtx_dfa_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_dfa_enax_s cn70xx;
+	struct cvmx_dtx_dfa_enax_s cn70xxp1;
+	struct cvmx_dtx_dfa_enax_s cn73xx;
+	struct cvmx_dtx_dfa_enax_s cn78xx;
+	struct cvmx_dtx_dfa_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_dfa_enax cvmx_dtx_dfa_enax_t;
+
+/**
+ * cvmx_dtx_dfa_sel#
+ */
+union cvmx_dtx_dfa_selx {
+	u64 u64;
+	struct cvmx_dtx_dfa_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_dfa_selx_s cn70xx;
+	struct cvmx_dtx_dfa_selx_s cn70xxp1;
+	struct cvmx_dtx_dfa_selx_s cn73xx;
+	struct cvmx_dtx_dfa_selx_s cn78xx;
+	struct cvmx_dtx_dfa_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_dfa_selx cvmx_dtx_dfa_selx_t;
+
+/**
+ * cvmx_dtx_dlfe_bcst_rsp
+ */
+union cvmx_dtx_dlfe_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_dlfe_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_dlfe_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dlfe_bcst_rsp cvmx_dtx_dlfe_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_dlfe_ctl
+ */
+union cvmx_dtx_dlfe_ctl {
+	u64 u64;
+	struct cvmx_dtx_dlfe_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_dlfe_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dlfe_ctl cvmx_dtx_dlfe_ctl_t;
+
+/**
+ * cvmx_dtx_dlfe_dat#
+ */
+union cvmx_dtx_dlfe_datx {
+	u64 u64;
+	struct cvmx_dtx_dlfe_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_dlfe_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dlfe_datx cvmx_dtx_dlfe_datx_t;
+
+/**
+ * cvmx_dtx_dlfe_ena#
+ */
+union cvmx_dtx_dlfe_enax {
+	u64 u64;
+	struct cvmx_dtx_dlfe_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_dlfe_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dlfe_enax cvmx_dtx_dlfe_enax_t;
+
+/**
+ * cvmx_dtx_dlfe_sel#
+ */
+union cvmx_dtx_dlfe_selx {
+	u64 u64;
+	struct cvmx_dtx_dlfe_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_dlfe_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dlfe_selx cvmx_dtx_dlfe_selx_t;
+
+/**
+ * cvmx_dtx_dpi_bcst_rsp
+ */
+union cvmx_dtx_dpi_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_dpi_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_dpi_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_dpi_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_dpi_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_dpi_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_dpi_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_dpi_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dpi_bcst_rsp cvmx_dtx_dpi_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_dpi_ctl
+ */
+union cvmx_dtx_dpi_ctl {
+	u64 u64;
+	struct cvmx_dtx_dpi_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_dpi_ctl_s cn70xx;
+	struct cvmx_dtx_dpi_ctl_s cn70xxp1;
+	struct cvmx_dtx_dpi_ctl_s cn73xx;
+	struct cvmx_dtx_dpi_ctl_s cn78xx;
+	struct cvmx_dtx_dpi_ctl_s cn78xxp1;
+	struct cvmx_dtx_dpi_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dpi_ctl cvmx_dtx_dpi_ctl_t;
+
+/**
+ * cvmx_dtx_dpi_dat#
+ */
+union cvmx_dtx_dpi_datx {
+	u64 u64;
+	struct cvmx_dtx_dpi_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_dpi_datx_s cn70xx;
+	struct cvmx_dtx_dpi_datx_s cn70xxp1;
+	struct cvmx_dtx_dpi_datx_s cn73xx;
+	struct cvmx_dtx_dpi_datx_s cn78xx;
+	struct cvmx_dtx_dpi_datx_s cn78xxp1;
+	struct cvmx_dtx_dpi_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dpi_datx cvmx_dtx_dpi_datx_t;
+
+/**
+ * cvmx_dtx_dpi_ena#
+ */
+union cvmx_dtx_dpi_enax {
+	u64 u64;
+	struct cvmx_dtx_dpi_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_dpi_enax_s cn70xx;
+	struct cvmx_dtx_dpi_enax_s cn70xxp1;
+	struct cvmx_dtx_dpi_enax_s cn73xx;
+	struct cvmx_dtx_dpi_enax_s cn78xx;
+	struct cvmx_dtx_dpi_enax_s cn78xxp1;
+	struct cvmx_dtx_dpi_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dpi_enax cvmx_dtx_dpi_enax_t;
+
+/**
+ * cvmx_dtx_dpi_sel#
+ */
+union cvmx_dtx_dpi_selx {
+	u64 u64;
+	struct cvmx_dtx_dpi_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_dpi_selx_s cn70xx;
+	struct cvmx_dtx_dpi_selx_s cn70xxp1;
+	struct cvmx_dtx_dpi_selx_s cn73xx;
+	struct cvmx_dtx_dpi_selx_s cn78xx;
+	struct cvmx_dtx_dpi_selx_s cn78xxp1;
+	struct cvmx_dtx_dpi_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_dpi_selx cvmx_dtx_dpi_selx_t;
+
+/**
+ * cvmx_dtx_fdeq#_bcst_rsp
+ */
+union cvmx_dtx_fdeqx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_fdeqx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_fdeqx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fdeqx_bcst_rsp cvmx_dtx_fdeqx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_fdeq#_ctl
+ */
+union cvmx_dtx_fdeqx_ctl {
+	u64 u64;
+	struct cvmx_dtx_fdeqx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_fdeqx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fdeqx_ctl cvmx_dtx_fdeqx_ctl_t;
+
+/**
+ * cvmx_dtx_fdeq#_dat#
+ */
+union cvmx_dtx_fdeqx_datx {
+	u64 u64;
+	struct cvmx_dtx_fdeqx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_fdeqx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fdeqx_datx cvmx_dtx_fdeqx_datx_t;
+
+/**
+ * cvmx_dtx_fdeq#_ena#
+ */
+union cvmx_dtx_fdeqx_enax {
+	u64 u64;
+	struct cvmx_dtx_fdeqx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_fdeqx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fdeqx_enax cvmx_dtx_fdeqx_enax_t;
+
+/**
+ * cvmx_dtx_fdeq#_sel#
+ */
+union cvmx_dtx_fdeqx_selx {
+	u64 u64;
+	struct cvmx_dtx_fdeqx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_fdeqx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fdeqx_selx cvmx_dtx_fdeqx_selx_t;
+
+/**
+ * cvmx_dtx_fpa_bcst_rsp
+ */
+union cvmx_dtx_fpa_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_fpa_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_fpa_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_fpa_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_fpa_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_fpa_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_fpa_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_fpa_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fpa_bcst_rsp cvmx_dtx_fpa_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_fpa_ctl
+ */
+union cvmx_dtx_fpa_ctl {
+	u64 u64;
+	struct cvmx_dtx_fpa_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_fpa_ctl_s cn70xx;
+	struct cvmx_dtx_fpa_ctl_s cn70xxp1;
+	struct cvmx_dtx_fpa_ctl_s cn73xx;
+	struct cvmx_dtx_fpa_ctl_s cn78xx;
+	struct cvmx_dtx_fpa_ctl_s cn78xxp1;
+	struct cvmx_dtx_fpa_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fpa_ctl cvmx_dtx_fpa_ctl_t;
+
+/**
+ * cvmx_dtx_fpa_dat#
+ */
+union cvmx_dtx_fpa_datx {
+	u64 u64;
+	struct cvmx_dtx_fpa_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_fpa_datx_s cn70xx;
+	struct cvmx_dtx_fpa_datx_s cn70xxp1;
+	struct cvmx_dtx_fpa_datx_s cn73xx;
+	struct cvmx_dtx_fpa_datx_s cn78xx;
+	struct cvmx_dtx_fpa_datx_s cn78xxp1;
+	struct cvmx_dtx_fpa_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fpa_datx cvmx_dtx_fpa_datx_t;
+
+/**
+ * cvmx_dtx_fpa_ena#
+ */
+union cvmx_dtx_fpa_enax {
+	u64 u64;
+	struct cvmx_dtx_fpa_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_fpa_enax_s cn70xx;
+	struct cvmx_dtx_fpa_enax_s cn70xxp1;
+	struct cvmx_dtx_fpa_enax_s cn73xx;
+	struct cvmx_dtx_fpa_enax_s cn78xx;
+	struct cvmx_dtx_fpa_enax_s cn78xxp1;
+	struct cvmx_dtx_fpa_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fpa_enax cvmx_dtx_fpa_enax_t;
+
+/**
+ * cvmx_dtx_fpa_sel#
+ */
+union cvmx_dtx_fpa_selx {
+	u64 u64;
+	struct cvmx_dtx_fpa_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_fpa_selx_s cn70xx;
+	struct cvmx_dtx_fpa_selx_s cn70xxp1;
+	struct cvmx_dtx_fpa_selx_s cn73xx;
+	struct cvmx_dtx_fpa_selx_s cn78xx;
+	struct cvmx_dtx_fpa_selx_s cn78xxp1;
+	struct cvmx_dtx_fpa_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_fpa_selx cvmx_dtx_fpa_selx_t;
+
+/**
+ * cvmx_dtx_gmx#_bcst_rsp
+ */
+union cvmx_dtx_gmxx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_gmxx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_gmxx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_gmxx_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_gmxx_bcst_rsp cvmx_dtx_gmxx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_gmx#_ctl
+ */
+union cvmx_dtx_gmxx_ctl {
+	u64 u64;
+	struct cvmx_dtx_gmxx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_gmxx_ctl_s cn70xx;
+	struct cvmx_dtx_gmxx_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_gmxx_ctl cvmx_dtx_gmxx_ctl_t;
+
+/**
+ * cvmx_dtx_gmx#_dat#
+ */
+union cvmx_dtx_gmxx_datx {
+	u64 u64;
+	struct cvmx_dtx_gmxx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_gmxx_datx_s cn70xx;
+	struct cvmx_dtx_gmxx_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_gmxx_datx cvmx_dtx_gmxx_datx_t;
+
+/**
+ * cvmx_dtx_gmx#_ena#
+ */
+union cvmx_dtx_gmxx_enax {
+	u64 u64;
+	struct cvmx_dtx_gmxx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_gmxx_enax_s cn70xx;
+	struct cvmx_dtx_gmxx_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_gmxx_enax cvmx_dtx_gmxx_enax_t;
+
+/**
+ * cvmx_dtx_gmx#_sel#
+ */
+union cvmx_dtx_gmxx_selx {
+	u64 u64;
+	struct cvmx_dtx_gmxx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_gmxx_selx_s cn70xx;
+	struct cvmx_dtx_gmxx_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_gmxx_selx cvmx_dtx_gmxx_selx_t;
+
+/**
+ * cvmx_dtx_gser#_bcst_rsp
+ */
+union cvmx_dtx_gserx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_gserx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_gserx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_gserx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_gserx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_gserx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_gserx_bcst_rsp cvmx_dtx_gserx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_gser#_ctl
+ */
+union cvmx_dtx_gserx_ctl {
+	u64 u64;
+	struct cvmx_dtx_gserx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_gserx_ctl_s cn73xx;
+	struct cvmx_dtx_gserx_ctl_s cn78xx;
+	struct cvmx_dtx_gserx_ctl_s cn78xxp1;
+	struct cvmx_dtx_gserx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_gserx_ctl cvmx_dtx_gserx_ctl_t;
+
+/**
+ * cvmx_dtx_gser#_dat#
+ */
+union cvmx_dtx_gserx_datx {
+	u64 u64;
+	struct cvmx_dtx_gserx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_gserx_datx_s cn73xx;
+	struct cvmx_dtx_gserx_datx_s cn78xx;
+	struct cvmx_dtx_gserx_datx_s cn78xxp1;
+	struct cvmx_dtx_gserx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_gserx_datx cvmx_dtx_gserx_datx_t;
+
+/**
+ * cvmx_dtx_gser#_ena#
+ */
+union cvmx_dtx_gserx_enax {
+	u64 u64;
+	struct cvmx_dtx_gserx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_gserx_enax_s cn73xx;
+	struct cvmx_dtx_gserx_enax_s cn78xx;
+	struct cvmx_dtx_gserx_enax_s cn78xxp1;
+	struct cvmx_dtx_gserx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_gserx_enax cvmx_dtx_gserx_enax_t;
+
+/**
+ * cvmx_dtx_gser#_sel#
+ */
+union cvmx_dtx_gserx_selx {
+	u64 u64;
+	struct cvmx_dtx_gserx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_gserx_selx_s cn73xx;
+	struct cvmx_dtx_gserx_selx_s cn78xx;
+	struct cvmx_dtx_gserx_selx_s cn78xxp1;
+	struct cvmx_dtx_gserx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_gserx_selx cvmx_dtx_gserx_selx_t;
+
+/**
+ * cvmx_dtx_hna_bcst_rsp
+ */
+union cvmx_dtx_hna_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_hna_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_hna_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_hna_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_hna_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_hna_bcst_rsp cvmx_dtx_hna_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_hna_ctl
+ */
+union cvmx_dtx_hna_ctl {
+	u64 u64;
+	struct cvmx_dtx_hna_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_hna_ctl_s cn73xx;
+	struct cvmx_dtx_hna_ctl_s cn78xx;
+	struct cvmx_dtx_hna_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_hna_ctl cvmx_dtx_hna_ctl_t;
+
+/**
+ * cvmx_dtx_hna_dat#
+ */
+union cvmx_dtx_hna_datx {
+	u64 u64;
+	struct cvmx_dtx_hna_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_hna_datx_s cn73xx;
+	struct cvmx_dtx_hna_datx_s cn78xx;
+	struct cvmx_dtx_hna_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_hna_datx cvmx_dtx_hna_datx_t;
+
+/**
+ * cvmx_dtx_hna_ena#
+ */
+union cvmx_dtx_hna_enax {
+	u64 u64;
+	struct cvmx_dtx_hna_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_hna_enax_s cn73xx;
+	struct cvmx_dtx_hna_enax_s cn78xx;
+	struct cvmx_dtx_hna_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_hna_enax cvmx_dtx_hna_enax_t;
+
+/**
+ * cvmx_dtx_hna_sel#
+ */
+union cvmx_dtx_hna_selx {
+	u64 u64;
+	struct cvmx_dtx_hna_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_hna_selx_s cn73xx;
+	struct cvmx_dtx_hna_selx_s cn78xx;
+	struct cvmx_dtx_hna_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_hna_selx cvmx_dtx_hna_selx_t;
+
+/**
+ * cvmx_dtx_ila_bcst_rsp
+ */
+union cvmx_dtx_ila_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ila_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ila_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ila_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ila_bcst_rsp cvmx_dtx_ila_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ila_ctl
+ */
+union cvmx_dtx_ila_ctl {
+	u64 u64;
+	struct cvmx_dtx_ila_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ila_ctl_s cn78xx;
+	struct cvmx_dtx_ila_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ila_ctl cvmx_dtx_ila_ctl_t;
+
+/**
+ * cvmx_dtx_ila_dat#
+ */
+union cvmx_dtx_ila_datx {
+	u64 u64;
+	struct cvmx_dtx_ila_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ila_datx_s cn78xx;
+	struct cvmx_dtx_ila_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ila_datx cvmx_dtx_ila_datx_t;
+
+/**
+ * cvmx_dtx_ila_ena#
+ */
+union cvmx_dtx_ila_enax {
+	u64 u64;
+	struct cvmx_dtx_ila_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ila_enax_s cn78xx;
+	struct cvmx_dtx_ila_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ila_enax cvmx_dtx_ila_enax_t;
+
+/**
+ * cvmx_dtx_ila_sel#
+ */
+union cvmx_dtx_ila_selx {
+	u64 u64;
+	struct cvmx_dtx_ila_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ila_selx_s cn78xx;
+	struct cvmx_dtx_ila_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ila_selx cvmx_dtx_ila_selx_t;
+
+/**
+ * cvmx_dtx_ilk_bcst_rsp
+ */
+union cvmx_dtx_ilk_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ilk_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ilk_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ilk_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ilk_bcst_rsp cvmx_dtx_ilk_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ilk_ctl
+ */
+union cvmx_dtx_ilk_ctl {
+	u64 u64;
+	struct cvmx_dtx_ilk_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ilk_ctl_s cn78xx;
+	struct cvmx_dtx_ilk_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ilk_ctl cvmx_dtx_ilk_ctl_t;
+
+/**
+ * cvmx_dtx_ilk_dat#
+ */
+union cvmx_dtx_ilk_datx {
+	u64 u64;
+	struct cvmx_dtx_ilk_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ilk_datx_s cn78xx;
+	struct cvmx_dtx_ilk_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ilk_datx cvmx_dtx_ilk_datx_t;
+
+/**
+ * cvmx_dtx_ilk_ena#
+ */
+union cvmx_dtx_ilk_enax {
+	u64 u64;
+	struct cvmx_dtx_ilk_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ilk_enax_s cn78xx;
+	struct cvmx_dtx_ilk_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ilk_enax cvmx_dtx_ilk_enax_t;
+
+/**
+ * cvmx_dtx_ilk_sel#
+ */
+union cvmx_dtx_ilk_selx {
+	u64 u64;
+	struct cvmx_dtx_ilk_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ilk_selx_s cn78xx;
+	struct cvmx_dtx_ilk_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ilk_selx cvmx_dtx_ilk_selx_t;
+
+/**
+ * cvmx_dtx_iob_bcst_rsp
+ */
+union cvmx_dtx_iob_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_iob_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_iob_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_iob_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_iob_bcst_rsp cvmx_dtx_iob_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_iob_ctl
+ */
+union cvmx_dtx_iob_ctl {
+	u64 u64;
+	struct cvmx_dtx_iob_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_iob_ctl_s cn70xx;
+	struct cvmx_dtx_iob_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_iob_ctl cvmx_dtx_iob_ctl_t;
+
+/**
+ * cvmx_dtx_iob_dat#
+ */
+union cvmx_dtx_iob_datx {
+	u64 u64;
+	struct cvmx_dtx_iob_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_iob_datx_s cn70xx;
+	struct cvmx_dtx_iob_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_iob_datx cvmx_dtx_iob_datx_t;
+
+/**
+ * cvmx_dtx_iob_ena#
+ */
+union cvmx_dtx_iob_enax {
+	u64 u64;
+	struct cvmx_dtx_iob_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_iob_enax_s cn70xx;
+	struct cvmx_dtx_iob_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_iob_enax cvmx_dtx_iob_enax_t;
+
+/**
+ * cvmx_dtx_iob_sel#
+ */
+union cvmx_dtx_iob_selx {
+	u64 u64;
+	struct cvmx_dtx_iob_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_iob_selx_s cn70xx;
+	struct cvmx_dtx_iob_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_iob_selx cvmx_dtx_iob_selx_t;
+
+/**
+ * cvmx_dtx_iobn_bcst_rsp
+ */
+union cvmx_dtx_iobn_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_iobn_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_iobn_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_iobn_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_iobn_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_iobn_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobn_bcst_rsp cvmx_dtx_iobn_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_iobn_ctl
+ */
+union cvmx_dtx_iobn_ctl {
+	u64 u64;
+	struct cvmx_dtx_iobn_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_iobn_ctl_s cn73xx;
+	struct cvmx_dtx_iobn_ctl_s cn78xx;
+	struct cvmx_dtx_iobn_ctl_s cn78xxp1;
+	struct cvmx_dtx_iobn_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobn_ctl cvmx_dtx_iobn_ctl_t;
+
+/**
+ * cvmx_dtx_iobn_dat#
+ */
+union cvmx_dtx_iobn_datx {
+	u64 u64;
+	struct cvmx_dtx_iobn_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_iobn_datx_s cn73xx;
+	struct cvmx_dtx_iobn_datx_s cn78xx;
+	struct cvmx_dtx_iobn_datx_s cn78xxp1;
+	struct cvmx_dtx_iobn_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobn_datx cvmx_dtx_iobn_datx_t;
+
+/**
+ * cvmx_dtx_iobn_ena#
+ */
+union cvmx_dtx_iobn_enax {
+	u64 u64;
+	struct cvmx_dtx_iobn_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_iobn_enax_s cn73xx;
+	struct cvmx_dtx_iobn_enax_s cn78xx;
+	struct cvmx_dtx_iobn_enax_s cn78xxp1;
+	struct cvmx_dtx_iobn_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobn_enax cvmx_dtx_iobn_enax_t;
+
+/**
+ * cvmx_dtx_iobn_sel#
+ */
+union cvmx_dtx_iobn_selx {
+	u64 u64;
+	struct cvmx_dtx_iobn_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_iobn_selx_s cn73xx;
+	struct cvmx_dtx_iobn_selx_s cn78xx;
+	struct cvmx_dtx_iobn_selx_s cn78xxp1;
+	struct cvmx_dtx_iobn_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobn_selx cvmx_dtx_iobn_selx_t;
+
+/**
+ * cvmx_dtx_iobp_bcst_rsp
+ */
+union cvmx_dtx_iobp_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_iobp_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_iobp_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_iobp_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_iobp_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_iobp_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobp_bcst_rsp cvmx_dtx_iobp_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_iobp_ctl
+ */
+union cvmx_dtx_iobp_ctl {
+	u64 u64;
+	struct cvmx_dtx_iobp_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_iobp_ctl_s cn73xx;
+	struct cvmx_dtx_iobp_ctl_s cn78xx;
+	struct cvmx_dtx_iobp_ctl_s cn78xxp1;
+	struct cvmx_dtx_iobp_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobp_ctl cvmx_dtx_iobp_ctl_t;
+
+/**
+ * cvmx_dtx_iobp_dat#
+ */
+union cvmx_dtx_iobp_datx {
+	u64 u64;
+	struct cvmx_dtx_iobp_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_iobp_datx_s cn73xx;
+	struct cvmx_dtx_iobp_datx_s cn78xx;
+	struct cvmx_dtx_iobp_datx_s cn78xxp1;
+	struct cvmx_dtx_iobp_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobp_datx cvmx_dtx_iobp_datx_t;
+
+/**
+ * cvmx_dtx_iobp_ena#
+ */
+union cvmx_dtx_iobp_enax {
+	u64 u64;
+	struct cvmx_dtx_iobp_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_iobp_enax_s cn73xx;
+	struct cvmx_dtx_iobp_enax_s cn78xx;
+	struct cvmx_dtx_iobp_enax_s cn78xxp1;
+	struct cvmx_dtx_iobp_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobp_enax cvmx_dtx_iobp_enax_t;
+
+/**
+ * cvmx_dtx_iobp_sel#
+ */
+union cvmx_dtx_iobp_selx {
+	u64 u64;
+	struct cvmx_dtx_iobp_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_iobp_selx_s cn73xx;
+	struct cvmx_dtx_iobp_selx_s cn78xx;
+	struct cvmx_dtx_iobp_selx_s cn78xxp1;
+	struct cvmx_dtx_iobp_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_iobp_selx cvmx_dtx_iobp_selx_t;
+
+/**
+ * cvmx_dtx_ipd_bcst_rsp
+ */
+union cvmx_dtx_ipd_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ipd_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ipd_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_ipd_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_ipd_bcst_rsp cvmx_dtx_ipd_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ipd_ctl
+ */
+union cvmx_dtx_ipd_ctl {
+	u64 u64;
+	struct cvmx_dtx_ipd_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ipd_ctl_s cn70xx;
+	struct cvmx_dtx_ipd_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_ipd_ctl cvmx_dtx_ipd_ctl_t;
+
+/**
+ * cvmx_dtx_ipd_dat#
+ */
+union cvmx_dtx_ipd_datx {
+	u64 u64;
+	struct cvmx_dtx_ipd_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ipd_datx_s cn70xx;
+	struct cvmx_dtx_ipd_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_ipd_datx cvmx_dtx_ipd_datx_t;
+
+/**
+ * cvmx_dtx_ipd_ena#
+ */
+union cvmx_dtx_ipd_enax {
+	u64 u64;
+	struct cvmx_dtx_ipd_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ipd_enax_s cn70xx;
+	struct cvmx_dtx_ipd_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_ipd_enax cvmx_dtx_ipd_enax_t;
+
+/**
+ * cvmx_dtx_ipd_sel#
+ */
+union cvmx_dtx_ipd_selx {
+	u64 u64;
+	struct cvmx_dtx_ipd_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ipd_selx_s cn70xx;
+	struct cvmx_dtx_ipd_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_ipd_selx cvmx_dtx_ipd_selx_t;
+
+/**
+ * cvmx_dtx_key_bcst_rsp
+ */
+union cvmx_dtx_key_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_key_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_key_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_key_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_key_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_key_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_key_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_key_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_key_bcst_rsp cvmx_dtx_key_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_key_ctl
+ */
+union cvmx_dtx_key_ctl {
+	u64 u64;
+	struct cvmx_dtx_key_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_key_ctl_s cn70xx;
+	struct cvmx_dtx_key_ctl_s cn70xxp1;
+	struct cvmx_dtx_key_ctl_s cn73xx;
+	struct cvmx_dtx_key_ctl_s cn78xx;
+	struct cvmx_dtx_key_ctl_s cn78xxp1;
+	struct cvmx_dtx_key_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_key_ctl cvmx_dtx_key_ctl_t;
+
+/**
+ * cvmx_dtx_key_dat#
+ */
+union cvmx_dtx_key_datx {
+	u64 u64;
+	struct cvmx_dtx_key_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_key_datx_s cn70xx;
+	struct cvmx_dtx_key_datx_s cn70xxp1;
+	struct cvmx_dtx_key_datx_s cn73xx;
+	struct cvmx_dtx_key_datx_s cn78xx;
+	struct cvmx_dtx_key_datx_s cn78xxp1;
+	struct cvmx_dtx_key_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_key_datx cvmx_dtx_key_datx_t;
+
+/**
+ * cvmx_dtx_key_ena#
+ */
+union cvmx_dtx_key_enax {
+	u64 u64;
+	struct cvmx_dtx_key_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_key_enax_s cn70xx;
+	struct cvmx_dtx_key_enax_s cn70xxp1;
+	struct cvmx_dtx_key_enax_s cn73xx;
+	struct cvmx_dtx_key_enax_s cn78xx;
+	struct cvmx_dtx_key_enax_s cn78xxp1;
+	struct cvmx_dtx_key_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_key_enax cvmx_dtx_key_enax_t;
+
+/**
+ * cvmx_dtx_key_sel#
+ */
+union cvmx_dtx_key_selx {
+	u64 u64;
+	struct cvmx_dtx_key_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_key_selx_s cn70xx;
+	struct cvmx_dtx_key_selx_s cn70xxp1;
+	struct cvmx_dtx_key_selx_s cn73xx;
+	struct cvmx_dtx_key_selx_s cn78xx;
+	struct cvmx_dtx_key_selx_s cn78xxp1;
+	struct cvmx_dtx_key_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_key_selx cvmx_dtx_key_selx_t;
+
+/**
+ * cvmx_dtx_l2c_cbc#_bcst_rsp
+ */
+union cvmx_dtx_l2c_cbcx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_l2c_cbcx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_cbcx_bcst_rsp cvmx_dtx_l2c_cbcx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_l2c_cbc#_ctl
+ */
+union cvmx_dtx_l2c_cbcx_ctl {
+	u64 u64;
+	struct cvmx_dtx_l2c_cbcx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cn70xx;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cn70xxp1;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cn73xx;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cn78xx;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cn78xxp1;
+	struct cvmx_dtx_l2c_cbcx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_cbcx_ctl cvmx_dtx_l2c_cbcx_ctl_t;
+
+/**
+ * cvmx_dtx_l2c_cbc#_dat#
+ */
+union cvmx_dtx_l2c_cbcx_datx {
+	u64 u64;
+	struct cvmx_dtx_l2c_cbcx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_l2c_cbcx_datx_s cn70xx;
+	struct cvmx_dtx_l2c_cbcx_datx_s cn70xxp1;
+	struct cvmx_dtx_l2c_cbcx_datx_s cn73xx;
+	struct cvmx_dtx_l2c_cbcx_datx_s cn78xx;
+	struct cvmx_dtx_l2c_cbcx_datx_s cn78xxp1;
+	struct cvmx_dtx_l2c_cbcx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_cbcx_datx cvmx_dtx_l2c_cbcx_datx_t;
+
+/**
+ * cvmx_dtx_l2c_cbc#_ena#
+ */
+union cvmx_dtx_l2c_cbcx_enax {
+	u64 u64;
+	struct cvmx_dtx_l2c_cbcx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_l2c_cbcx_enax_s cn70xx;
+	struct cvmx_dtx_l2c_cbcx_enax_s cn70xxp1;
+	struct cvmx_dtx_l2c_cbcx_enax_s cn73xx;
+	struct cvmx_dtx_l2c_cbcx_enax_s cn78xx;
+	struct cvmx_dtx_l2c_cbcx_enax_s cn78xxp1;
+	struct cvmx_dtx_l2c_cbcx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_cbcx_enax cvmx_dtx_l2c_cbcx_enax_t;
+
+/**
+ * cvmx_dtx_l2c_cbc#_sel#
+ */
+union cvmx_dtx_l2c_cbcx_selx {
+	u64 u64;
+	struct cvmx_dtx_l2c_cbcx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_l2c_cbcx_selx_s cn70xx;
+	struct cvmx_dtx_l2c_cbcx_selx_s cn70xxp1;
+	struct cvmx_dtx_l2c_cbcx_selx_s cn73xx;
+	struct cvmx_dtx_l2c_cbcx_selx_s cn78xx;
+	struct cvmx_dtx_l2c_cbcx_selx_s cn78xxp1;
+	struct cvmx_dtx_l2c_cbcx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_cbcx_selx cvmx_dtx_l2c_cbcx_selx_t;
+
+/**
+ * cvmx_dtx_l2c_mci#_bcst_rsp
+ */
+union cvmx_dtx_l2c_mcix_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_l2c_mcix_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_mcix_bcst_rsp cvmx_dtx_l2c_mcix_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_l2c_mci#_ctl
+ */
+union cvmx_dtx_l2c_mcix_ctl {
+	u64 u64;
+	struct cvmx_dtx_l2c_mcix_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_l2c_mcix_ctl_s cn70xx;
+	struct cvmx_dtx_l2c_mcix_ctl_s cn70xxp1;
+	struct cvmx_dtx_l2c_mcix_ctl_s cn73xx;
+	struct cvmx_dtx_l2c_mcix_ctl_s cn78xx;
+	struct cvmx_dtx_l2c_mcix_ctl_s cn78xxp1;
+	struct cvmx_dtx_l2c_mcix_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_mcix_ctl cvmx_dtx_l2c_mcix_ctl_t;
+
+/**
+ * cvmx_dtx_l2c_mci#_dat#
+ */
+union cvmx_dtx_l2c_mcix_datx {
+	u64 u64;
+	struct cvmx_dtx_l2c_mcix_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_l2c_mcix_datx_s cn70xx;
+	struct cvmx_dtx_l2c_mcix_datx_s cn70xxp1;
+	struct cvmx_dtx_l2c_mcix_datx_s cn73xx;
+	struct cvmx_dtx_l2c_mcix_datx_s cn78xx;
+	struct cvmx_dtx_l2c_mcix_datx_s cn78xxp1;
+	struct cvmx_dtx_l2c_mcix_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_mcix_datx cvmx_dtx_l2c_mcix_datx_t;
+
+/**
+ * cvmx_dtx_l2c_mci#_ena#
+ */
+union cvmx_dtx_l2c_mcix_enax {
+	u64 u64;
+	struct cvmx_dtx_l2c_mcix_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_l2c_mcix_enax_s cn70xx;
+	struct cvmx_dtx_l2c_mcix_enax_s cn70xxp1;
+	struct cvmx_dtx_l2c_mcix_enax_s cn73xx;
+	struct cvmx_dtx_l2c_mcix_enax_s cn78xx;
+	struct cvmx_dtx_l2c_mcix_enax_s cn78xxp1;
+	struct cvmx_dtx_l2c_mcix_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_mcix_enax cvmx_dtx_l2c_mcix_enax_t;
+
+/**
+ * cvmx_dtx_l2c_mci#_sel#
+ */
+union cvmx_dtx_l2c_mcix_selx {
+	u64 u64;
+	struct cvmx_dtx_l2c_mcix_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_l2c_mcix_selx_s cn70xx;
+	struct cvmx_dtx_l2c_mcix_selx_s cn70xxp1;
+	struct cvmx_dtx_l2c_mcix_selx_s cn73xx;
+	struct cvmx_dtx_l2c_mcix_selx_s cn78xx;
+	struct cvmx_dtx_l2c_mcix_selx_s cn78xxp1;
+	struct cvmx_dtx_l2c_mcix_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_mcix_selx cvmx_dtx_l2c_mcix_selx_t;
+
+/**
+ * cvmx_dtx_l2c_tad#_bcst_rsp
+ */
+union cvmx_dtx_l2c_tadx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_l2c_tadx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_tadx_bcst_rsp cvmx_dtx_l2c_tadx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_l2c_tad#_ctl
+ */
+union cvmx_dtx_l2c_tadx_ctl {
+	u64 u64;
+	struct cvmx_dtx_l2c_tadx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_l2c_tadx_ctl_s cn70xx;
+	struct cvmx_dtx_l2c_tadx_ctl_s cn70xxp1;
+	struct cvmx_dtx_l2c_tadx_ctl_s cn73xx;
+	struct cvmx_dtx_l2c_tadx_ctl_s cn78xx;
+	struct cvmx_dtx_l2c_tadx_ctl_s cn78xxp1;
+	struct cvmx_dtx_l2c_tadx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_tadx_ctl cvmx_dtx_l2c_tadx_ctl_t;
+
+/**
+ * cvmx_dtx_l2c_tad#_dat#
+ */
+union cvmx_dtx_l2c_tadx_datx {
+	u64 u64;
+	struct cvmx_dtx_l2c_tadx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_l2c_tadx_datx_s cn70xx;
+	struct cvmx_dtx_l2c_tadx_datx_s cn70xxp1;
+	struct cvmx_dtx_l2c_tadx_datx_s cn73xx;
+	struct cvmx_dtx_l2c_tadx_datx_s cn78xx;
+	struct cvmx_dtx_l2c_tadx_datx_s cn78xxp1;
+	struct cvmx_dtx_l2c_tadx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_tadx_datx cvmx_dtx_l2c_tadx_datx_t;
+
+/**
+ * cvmx_dtx_l2c_tad#_ena#
+ */
+union cvmx_dtx_l2c_tadx_enax {
+	u64 u64;
+	struct cvmx_dtx_l2c_tadx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_l2c_tadx_enax_s cn70xx;
+	struct cvmx_dtx_l2c_tadx_enax_s cn70xxp1;
+	struct cvmx_dtx_l2c_tadx_enax_s cn73xx;
+	struct cvmx_dtx_l2c_tadx_enax_s cn78xx;
+	struct cvmx_dtx_l2c_tadx_enax_s cn78xxp1;
+	struct cvmx_dtx_l2c_tadx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_tadx_enax cvmx_dtx_l2c_tadx_enax_t;
+
+/**
+ * cvmx_dtx_l2c_tad#_sel#
+ */
+union cvmx_dtx_l2c_tadx_selx {
+	u64 u64;
+	struct cvmx_dtx_l2c_tadx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_l2c_tadx_selx_s cn70xx;
+	struct cvmx_dtx_l2c_tadx_selx_s cn70xxp1;
+	struct cvmx_dtx_l2c_tadx_selx_s cn73xx;
+	struct cvmx_dtx_l2c_tadx_selx_s cn78xx;
+	struct cvmx_dtx_l2c_tadx_selx_s cn78xxp1;
+	struct cvmx_dtx_l2c_tadx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_l2c_tadx_selx cvmx_dtx_l2c_tadx_selx_t;
+
+/**
+ * cvmx_dtx_lap#_bcst_rsp
+ */
+union cvmx_dtx_lapx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_lapx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_lapx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_lapx_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_lapx_bcst_rsp cvmx_dtx_lapx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_lap#_ctl
+ */
+union cvmx_dtx_lapx_ctl {
+	u64 u64;
+	struct cvmx_dtx_lapx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_lapx_ctl_s cn78xx;
+	struct cvmx_dtx_lapx_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_lapx_ctl cvmx_dtx_lapx_ctl_t;
+
+/**
+ * cvmx_dtx_lap#_dat#
+ */
+union cvmx_dtx_lapx_datx {
+	u64 u64;
+	struct cvmx_dtx_lapx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_lapx_datx_s cn78xx;
+	struct cvmx_dtx_lapx_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_lapx_datx cvmx_dtx_lapx_datx_t;
+
+/**
+ * cvmx_dtx_lap#_ena#
+ */
+union cvmx_dtx_lapx_enax {
+	u64 u64;
+	struct cvmx_dtx_lapx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_lapx_enax_s cn78xx;
+	struct cvmx_dtx_lapx_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_lapx_enax cvmx_dtx_lapx_enax_t;
+
+/**
+ * cvmx_dtx_lap#_sel#
+ */
+union cvmx_dtx_lapx_selx {
+	u64 u64;
+	struct cvmx_dtx_lapx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_lapx_selx_s cn78xx;
+	struct cvmx_dtx_lapx_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_lapx_selx cvmx_dtx_lapx_selx_t;
+
+/**
+ * cvmx_dtx_lbk_bcst_rsp
+ */
+union cvmx_dtx_lbk_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_lbk_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_lbk_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_lbk_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_lbk_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_lbk_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lbk_bcst_rsp cvmx_dtx_lbk_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_lbk_ctl
+ */
+union cvmx_dtx_lbk_ctl {
+	u64 u64;
+	struct cvmx_dtx_lbk_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_lbk_ctl_s cn73xx;
+	struct cvmx_dtx_lbk_ctl_s cn78xx;
+	struct cvmx_dtx_lbk_ctl_s cn78xxp1;
+	struct cvmx_dtx_lbk_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lbk_ctl cvmx_dtx_lbk_ctl_t;
+
+/**
+ * cvmx_dtx_lbk_dat#
+ */
+union cvmx_dtx_lbk_datx {
+	u64 u64;
+	struct cvmx_dtx_lbk_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_lbk_datx_s cn73xx;
+	struct cvmx_dtx_lbk_datx_s cn78xx;
+	struct cvmx_dtx_lbk_datx_s cn78xxp1;
+	struct cvmx_dtx_lbk_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lbk_datx cvmx_dtx_lbk_datx_t;
+
+/**
+ * cvmx_dtx_lbk_ena#
+ */
+union cvmx_dtx_lbk_enax {
+	u64 u64;
+	struct cvmx_dtx_lbk_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_lbk_enax_s cn73xx;
+	struct cvmx_dtx_lbk_enax_s cn78xx;
+	struct cvmx_dtx_lbk_enax_s cn78xxp1;
+	struct cvmx_dtx_lbk_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lbk_enax cvmx_dtx_lbk_enax_t;
+
+/**
+ * cvmx_dtx_lbk_sel#
+ */
+union cvmx_dtx_lbk_selx {
+	u64 u64;
+	struct cvmx_dtx_lbk_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_lbk_selx_s cn73xx;
+	struct cvmx_dtx_lbk_selx_s cn78xx;
+	struct cvmx_dtx_lbk_selx_s cn78xxp1;
+	struct cvmx_dtx_lbk_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lbk_selx cvmx_dtx_lbk_selx_t;
+
+/**
+ * cvmx_dtx_lmc#_bcst_rsp
+ */
+union cvmx_dtx_lmcx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_lmcx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_lmcx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lmcx_bcst_rsp cvmx_dtx_lmcx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_lmc#_ctl
+ */
+union cvmx_dtx_lmcx_ctl {
+	u64 u64;
+	struct cvmx_dtx_lmcx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_lmcx_ctl_s cn70xx;
+	struct cvmx_dtx_lmcx_ctl_s cn70xxp1;
+	struct cvmx_dtx_lmcx_ctl_s cn73xx;
+	struct cvmx_dtx_lmcx_ctl_s cn78xx;
+	struct cvmx_dtx_lmcx_ctl_s cn78xxp1;
+	struct cvmx_dtx_lmcx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lmcx_ctl cvmx_dtx_lmcx_ctl_t;
+
+/**
+ * cvmx_dtx_lmc#_dat#
+ */
+union cvmx_dtx_lmcx_datx {
+	u64 u64;
+	struct cvmx_dtx_lmcx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_lmcx_datx_s cn70xx;
+	struct cvmx_dtx_lmcx_datx_s cn70xxp1;
+	struct cvmx_dtx_lmcx_datx_s cn73xx;
+	struct cvmx_dtx_lmcx_datx_s cn78xx;
+	struct cvmx_dtx_lmcx_datx_s cn78xxp1;
+	struct cvmx_dtx_lmcx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lmcx_datx cvmx_dtx_lmcx_datx_t;
+
+/**
+ * cvmx_dtx_lmc#_ena#
+ */
+union cvmx_dtx_lmcx_enax {
+	u64 u64;
+	struct cvmx_dtx_lmcx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_lmcx_enax_s cn70xx;
+	struct cvmx_dtx_lmcx_enax_s cn70xxp1;
+	struct cvmx_dtx_lmcx_enax_s cn73xx;
+	struct cvmx_dtx_lmcx_enax_s cn78xx;
+	struct cvmx_dtx_lmcx_enax_s cn78xxp1;
+	struct cvmx_dtx_lmcx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lmcx_enax cvmx_dtx_lmcx_enax_t;
+
+/**
+ * cvmx_dtx_lmc#_sel#
+ */
+union cvmx_dtx_lmcx_selx {
+	u64 u64;
+	struct cvmx_dtx_lmcx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_lmcx_selx_s cn70xx;
+	struct cvmx_dtx_lmcx_selx_s cn70xxp1;
+	struct cvmx_dtx_lmcx_selx_s cn73xx;
+	struct cvmx_dtx_lmcx_selx_s cn78xx;
+	struct cvmx_dtx_lmcx_selx_s cn78xxp1;
+	struct cvmx_dtx_lmcx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_lmcx_selx cvmx_dtx_lmcx_selx_t;
+
+/**
+ * cvmx_dtx_mdb#_bcst_rsp
+ */
+union cvmx_dtx_mdbx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_mdbx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_mdbx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mdbx_bcst_rsp cvmx_dtx_mdbx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_mdb#_ctl
+ */
+union cvmx_dtx_mdbx_ctl {
+	u64 u64;
+	struct cvmx_dtx_mdbx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_mdbx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mdbx_ctl cvmx_dtx_mdbx_ctl_t;
+
+/**
+ * cvmx_dtx_mdb#_dat#
+ */
+union cvmx_dtx_mdbx_datx {
+	u64 u64;
+	struct cvmx_dtx_mdbx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_mdbx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mdbx_datx cvmx_dtx_mdbx_datx_t;
+
+/**
+ * cvmx_dtx_mdb#_ena#
+ */
+union cvmx_dtx_mdbx_enax {
+	u64 u64;
+	struct cvmx_dtx_mdbx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_mdbx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mdbx_enax cvmx_dtx_mdbx_enax_t;
+
+/**
+ * cvmx_dtx_mdb#_sel#
+ */
+union cvmx_dtx_mdbx_selx {
+	u64 u64;
+	struct cvmx_dtx_mdbx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_mdbx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mdbx_selx cvmx_dtx_mdbx_selx_t;
+
+/**
+ * cvmx_dtx_mhbw_bcst_rsp
+ */
+union cvmx_dtx_mhbw_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_mhbw_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_mhbw_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mhbw_bcst_rsp cvmx_dtx_mhbw_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_mhbw_ctl
+ */
+union cvmx_dtx_mhbw_ctl {
+	u64 u64;
+	struct cvmx_dtx_mhbw_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_mhbw_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mhbw_ctl cvmx_dtx_mhbw_ctl_t;
+
+/**
+ * cvmx_dtx_mhbw_dat#
+ */
+union cvmx_dtx_mhbw_datx {
+	u64 u64;
+	struct cvmx_dtx_mhbw_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_mhbw_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mhbw_datx cvmx_dtx_mhbw_datx_t;
+
+/**
+ * cvmx_dtx_mhbw_ena#
+ */
+union cvmx_dtx_mhbw_enax {
+	u64 u64;
+	struct cvmx_dtx_mhbw_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_mhbw_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mhbw_enax cvmx_dtx_mhbw_enax_t;
+
+/**
+ * cvmx_dtx_mhbw_sel#
+ */
+union cvmx_dtx_mhbw_selx {
+	u64 u64;
+	struct cvmx_dtx_mhbw_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_mhbw_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mhbw_selx cvmx_dtx_mhbw_selx_t;
+
+/**
+ * cvmx_dtx_mio_bcst_rsp
+ */
+union cvmx_dtx_mio_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_mio_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_mio_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_mio_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_mio_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_mio_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_mio_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_mio_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mio_bcst_rsp cvmx_dtx_mio_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_mio_ctl
+ */
+union cvmx_dtx_mio_ctl {
+	u64 u64;
+	struct cvmx_dtx_mio_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_mio_ctl_s cn70xx;
+	struct cvmx_dtx_mio_ctl_s cn70xxp1;
+	struct cvmx_dtx_mio_ctl_s cn73xx;
+	struct cvmx_dtx_mio_ctl_s cn78xx;
+	struct cvmx_dtx_mio_ctl_s cn78xxp1;
+	struct cvmx_dtx_mio_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mio_ctl cvmx_dtx_mio_ctl_t;
+
+/**
+ * cvmx_dtx_mio_dat#
+ */
+union cvmx_dtx_mio_datx {
+	u64 u64;
+	struct cvmx_dtx_mio_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_mio_datx_s cn70xx;
+	struct cvmx_dtx_mio_datx_s cn70xxp1;
+	struct cvmx_dtx_mio_datx_s cn73xx;
+	struct cvmx_dtx_mio_datx_s cn78xx;
+	struct cvmx_dtx_mio_datx_s cn78xxp1;
+	struct cvmx_dtx_mio_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mio_datx cvmx_dtx_mio_datx_t;
+
+/**
+ * cvmx_dtx_mio_ena#
+ */
+union cvmx_dtx_mio_enax {
+	u64 u64;
+	struct cvmx_dtx_mio_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_mio_enax_s cn70xx;
+	struct cvmx_dtx_mio_enax_s cn70xxp1;
+	struct cvmx_dtx_mio_enax_s cn73xx;
+	struct cvmx_dtx_mio_enax_s cn78xx;
+	struct cvmx_dtx_mio_enax_s cn78xxp1;
+	struct cvmx_dtx_mio_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mio_enax cvmx_dtx_mio_enax_t;
+
+/**
+ * cvmx_dtx_mio_sel#
+ */
+union cvmx_dtx_mio_selx {
+	u64 u64;
+	struct cvmx_dtx_mio_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_mio_selx_s cn70xx;
+	struct cvmx_dtx_mio_selx_s cn70xxp1;
+	struct cvmx_dtx_mio_selx_s cn73xx;
+	struct cvmx_dtx_mio_selx_s cn78xx;
+	struct cvmx_dtx_mio_selx_s cn78xxp1;
+	struct cvmx_dtx_mio_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_mio_selx cvmx_dtx_mio_selx_t;
+
+/**
+ * cvmx_dtx_ocx_bot_bcst_rsp
+ */
+union cvmx_dtx_ocx_bot_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ocx_bot_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ocx_bot_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ocx_bot_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_bot_bcst_rsp cvmx_dtx_ocx_bot_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ocx_bot_ctl
+ */
+union cvmx_dtx_ocx_bot_ctl {
+	u64 u64;
+	struct cvmx_dtx_ocx_bot_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ocx_bot_ctl_s cn78xx;
+	struct cvmx_dtx_ocx_bot_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_bot_ctl cvmx_dtx_ocx_bot_ctl_t;
+
+/**
+ * cvmx_dtx_ocx_bot_dat#
+ */
+union cvmx_dtx_ocx_bot_datx {
+	u64 u64;
+	struct cvmx_dtx_ocx_bot_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ocx_bot_datx_s cn78xx;
+	struct cvmx_dtx_ocx_bot_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_bot_datx cvmx_dtx_ocx_bot_datx_t;
+
+/**
+ * cvmx_dtx_ocx_bot_ena#
+ */
+union cvmx_dtx_ocx_bot_enax {
+	u64 u64;
+	struct cvmx_dtx_ocx_bot_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ocx_bot_enax_s cn78xx;
+	struct cvmx_dtx_ocx_bot_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_bot_enax cvmx_dtx_ocx_bot_enax_t;
+
+/**
+ * cvmx_dtx_ocx_bot_sel#
+ */
+union cvmx_dtx_ocx_bot_selx {
+	u64 u64;
+	struct cvmx_dtx_ocx_bot_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ocx_bot_selx_s cn78xx;
+	struct cvmx_dtx_ocx_bot_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_bot_selx cvmx_dtx_ocx_bot_selx_t;
+
+/**
+ * cvmx_dtx_ocx_lnk#_bcst_rsp
+ */
+union cvmx_dtx_ocx_lnkx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ocx_lnkx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ocx_lnkx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ocx_lnkx_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_lnkx_bcst_rsp cvmx_dtx_ocx_lnkx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ocx_lnk#_ctl
+ */
+union cvmx_dtx_ocx_lnkx_ctl {
+	u64 u64;
+	struct cvmx_dtx_ocx_lnkx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ocx_lnkx_ctl_s cn78xx;
+	struct cvmx_dtx_ocx_lnkx_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_lnkx_ctl cvmx_dtx_ocx_lnkx_ctl_t;
+
+/**
+ * cvmx_dtx_ocx_lnk#_dat#
+ */
+union cvmx_dtx_ocx_lnkx_datx {
+	u64 u64;
+	struct cvmx_dtx_ocx_lnkx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ocx_lnkx_datx_s cn78xx;
+	struct cvmx_dtx_ocx_lnkx_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_lnkx_datx cvmx_dtx_ocx_lnkx_datx_t;
+
+/**
+ * cvmx_dtx_ocx_lnk#_ena#
+ */
+union cvmx_dtx_ocx_lnkx_enax {
+	u64 u64;
+	struct cvmx_dtx_ocx_lnkx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ocx_lnkx_enax_s cn78xx;
+	struct cvmx_dtx_ocx_lnkx_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_lnkx_enax cvmx_dtx_ocx_lnkx_enax_t;
+
+/**
+ * cvmx_dtx_ocx_lnk#_sel#
+ */
+union cvmx_dtx_ocx_lnkx_selx {
+	u64 u64;
+	struct cvmx_dtx_ocx_lnkx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ocx_lnkx_selx_s cn78xx;
+	struct cvmx_dtx_ocx_lnkx_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_lnkx_selx cvmx_dtx_ocx_lnkx_selx_t;
+
+/**
+ * cvmx_dtx_ocx_ole#_bcst_rsp
+ */
+union cvmx_dtx_ocx_olex_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ocx_olex_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ocx_olex_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ocx_olex_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_olex_bcst_rsp cvmx_dtx_ocx_olex_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ocx_ole#_ctl
+ */
+union cvmx_dtx_ocx_olex_ctl {
+	u64 u64;
+	struct cvmx_dtx_ocx_olex_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ocx_olex_ctl_s cn78xx;
+	struct cvmx_dtx_ocx_olex_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_olex_ctl cvmx_dtx_ocx_olex_ctl_t;
+
+/**
+ * cvmx_dtx_ocx_ole#_dat#
+ */
+union cvmx_dtx_ocx_olex_datx {
+	u64 u64;
+	struct cvmx_dtx_ocx_olex_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ocx_olex_datx_s cn78xx;
+	struct cvmx_dtx_ocx_olex_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_olex_datx cvmx_dtx_ocx_olex_datx_t;
+
+/**
+ * cvmx_dtx_ocx_ole#_ena#
+ */
+union cvmx_dtx_ocx_olex_enax {
+	u64 u64;
+	struct cvmx_dtx_ocx_olex_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ocx_olex_enax_s cn78xx;
+	struct cvmx_dtx_ocx_olex_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_olex_enax cvmx_dtx_ocx_olex_enax_t;
+
+/**
+ * cvmx_dtx_ocx_ole#_sel#
+ */
+union cvmx_dtx_ocx_olex_selx {
+	u64 u64;
+	struct cvmx_dtx_ocx_olex_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ocx_olex_selx_s cn78xx;
+	struct cvmx_dtx_ocx_olex_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_olex_selx cvmx_dtx_ocx_olex_selx_t;
+
+/**
+ * cvmx_dtx_ocx_top_bcst_rsp
+ */
+union cvmx_dtx_ocx_top_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ocx_top_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ocx_top_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_ocx_top_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_top_bcst_rsp cvmx_dtx_ocx_top_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ocx_top_ctl
+ */
+union cvmx_dtx_ocx_top_ctl {
+	u64 u64;
+	struct cvmx_dtx_ocx_top_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ocx_top_ctl_s cn78xx;
+	struct cvmx_dtx_ocx_top_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_top_ctl cvmx_dtx_ocx_top_ctl_t;
+
+/**
+ * cvmx_dtx_ocx_top_dat#
+ */
+union cvmx_dtx_ocx_top_datx {
+	u64 u64;
+	struct cvmx_dtx_ocx_top_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ocx_top_datx_s cn78xx;
+	struct cvmx_dtx_ocx_top_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_top_datx cvmx_dtx_ocx_top_datx_t;
+
+/**
+ * cvmx_dtx_ocx_top_ena#
+ */
+union cvmx_dtx_ocx_top_enax {
+	u64 u64;
+	struct cvmx_dtx_ocx_top_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ocx_top_enax_s cn78xx;
+	struct cvmx_dtx_ocx_top_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_top_enax cvmx_dtx_ocx_top_enax_t;
+
+/**
+ * cvmx_dtx_ocx_top_sel#
+ */
+union cvmx_dtx_ocx_top_selx {
+	u64 u64;
+	struct cvmx_dtx_ocx_top_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ocx_top_selx_s cn78xx;
+	struct cvmx_dtx_ocx_top_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_ocx_top_selx cvmx_dtx_ocx_top_selx_t;
+
+/**
+ * cvmx_dtx_osm_bcst_rsp
+ */
+union cvmx_dtx_osm_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_osm_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_osm_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_osm_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_osm_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_osm_bcst_rsp cvmx_dtx_osm_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_osm_ctl
+ */
+union cvmx_dtx_osm_ctl {
+	u64 u64;
+	struct cvmx_dtx_osm_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_osm_ctl_s cn73xx;
+	struct cvmx_dtx_osm_ctl_s cn78xx;
+	struct cvmx_dtx_osm_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_osm_ctl cvmx_dtx_osm_ctl_t;
+
+/**
+ * cvmx_dtx_osm_dat#
+ */
+union cvmx_dtx_osm_datx {
+	u64 u64;
+	struct cvmx_dtx_osm_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_osm_datx_s cn73xx;
+	struct cvmx_dtx_osm_datx_s cn78xx;
+	struct cvmx_dtx_osm_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_osm_datx cvmx_dtx_osm_datx_t;
+
+/**
+ * cvmx_dtx_osm_ena#
+ */
+union cvmx_dtx_osm_enax {
+	u64 u64;
+	struct cvmx_dtx_osm_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_osm_enax_s cn73xx;
+	struct cvmx_dtx_osm_enax_s cn78xx;
+	struct cvmx_dtx_osm_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_osm_enax cvmx_dtx_osm_enax_t;
+
+/**
+ * cvmx_dtx_osm_sel#
+ */
+union cvmx_dtx_osm_selx {
+	u64 u64;
+	struct cvmx_dtx_osm_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_osm_selx_s cn73xx;
+	struct cvmx_dtx_osm_selx_s cn78xx;
+	struct cvmx_dtx_osm_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_osm_selx cvmx_dtx_osm_selx_t;
+
+/**
+ * cvmx_dtx_pcs#_bcst_rsp
+ */
+union cvmx_dtx_pcsx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pcsx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pcsx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_pcsx_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pcsx_bcst_rsp cvmx_dtx_pcsx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pcs#_ctl
+ */
+union cvmx_dtx_pcsx_ctl {
+	u64 u64;
+	struct cvmx_dtx_pcsx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pcsx_ctl_s cn70xx;
+	struct cvmx_dtx_pcsx_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pcsx_ctl cvmx_dtx_pcsx_ctl_t;
+
+/**
+ * cvmx_dtx_pcs#_dat#
+ */
+union cvmx_dtx_pcsx_datx {
+	u64 u64;
+	struct cvmx_dtx_pcsx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pcsx_datx_s cn70xx;
+	struct cvmx_dtx_pcsx_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pcsx_datx cvmx_dtx_pcsx_datx_t;
+
+/**
+ * cvmx_dtx_pcs#_ena#
+ */
+union cvmx_dtx_pcsx_enax {
+	u64 u64;
+	struct cvmx_dtx_pcsx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pcsx_enax_s cn70xx;
+	struct cvmx_dtx_pcsx_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pcsx_enax cvmx_dtx_pcsx_enax_t;
+
+/**
+ * cvmx_dtx_pcs#_sel#
+ */
+union cvmx_dtx_pcsx_selx {
+	u64 u64;
+	struct cvmx_dtx_pcsx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pcsx_selx_s cn70xx;
+	struct cvmx_dtx_pcsx_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pcsx_selx cvmx_dtx_pcsx_selx_t;
+
+/**
+ * cvmx_dtx_pem#_bcst_rsp
+ */
+union cvmx_dtx_pemx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pemx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pemx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_pemx_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_pemx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_pemx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_pemx_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_pemx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pemx_bcst_rsp cvmx_dtx_pemx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pem#_ctl
+ */
+union cvmx_dtx_pemx_ctl {
+	u64 u64;
+	struct cvmx_dtx_pemx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pemx_ctl_s cn70xx;
+	struct cvmx_dtx_pemx_ctl_s cn70xxp1;
+	struct cvmx_dtx_pemx_ctl_s cn73xx;
+	struct cvmx_dtx_pemx_ctl_s cn78xx;
+	struct cvmx_dtx_pemx_ctl_s cn78xxp1;
+	struct cvmx_dtx_pemx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pemx_ctl cvmx_dtx_pemx_ctl_t;
+
+/**
+ * cvmx_dtx_pem#_dat#
+ */
+union cvmx_dtx_pemx_datx {
+	u64 u64;
+	struct cvmx_dtx_pemx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pemx_datx_s cn70xx;
+	struct cvmx_dtx_pemx_datx_s cn70xxp1;
+	struct cvmx_dtx_pemx_datx_s cn73xx;
+	struct cvmx_dtx_pemx_datx_s cn78xx;
+	struct cvmx_dtx_pemx_datx_s cn78xxp1;
+	struct cvmx_dtx_pemx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pemx_datx cvmx_dtx_pemx_datx_t;
+
+/**
+ * cvmx_dtx_pem#_ena#
+ */
+union cvmx_dtx_pemx_enax {
+	u64 u64;
+	struct cvmx_dtx_pemx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pemx_enax_s cn70xx;
+	struct cvmx_dtx_pemx_enax_s cn70xxp1;
+	struct cvmx_dtx_pemx_enax_s cn73xx;
+	struct cvmx_dtx_pemx_enax_s cn78xx;
+	struct cvmx_dtx_pemx_enax_s cn78xxp1;
+	struct cvmx_dtx_pemx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pemx_enax cvmx_dtx_pemx_enax_t;
+
+/**
+ * cvmx_dtx_pem#_sel#
+ */
+union cvmx_dtx_pemx_selx {
+	u64 u64;
+	struct cvmx_dtx_pemx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pemx_selx_s cn70xx;
+	struct cvmx_dtx_pemx_selx_s cn70xxp1;
+	struct cvmx_dtx_pemx_selx_s cn73xx;
+	struct cvmx_dtx_pemx_selx_s cn78xx;
+	struct cvmx_dtx_pemx_selx_s cn78xxp1;
+	struct cvmx_dtx_pemx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pemx_selx cvmx_dtx_pemx_selx_t;
+
+/**
+ * cvmx_dtx_pip_bcst_rsp
+ */
+union cvmx_dtx_pip_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pip_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pip_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_pip_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pip_bcst_rsp cvmx_dtx_pip_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pip_ctl
+ */
+union cvmx_dtx_pip_ctl {
+	u64 u64;
+	struct cvmx_dtx_pip_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pip_ctl_s cn70xx;
+	struct cvmx_dtx_pip_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pip_ctl cvmx_dtx_pip_ctl_t;
+
+/**
+ * cvmx_dtx_pip_dat#
+ */
+union cvmx_dtx_pip_datx {
+	u64 u64;
+	struct cvmx_dtx_pip_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pip_datx_s cn70xx;
+	struct cvmx_dtx_pip_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pip_datx cvmx_dtx_pip_datx_t;
+
+/**
+ * cvmx_dtx_pip_ena#
+ */
+union cvmx_dtx_pip_enax {
+	u64 u64;
+	struct cvmx_dtx_pip_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pip_enax_s cn70xx;
+	struct cvmx_dtx_pip_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pip_enax cvmx_dtx_pip_enax_t;
+
+/**
+ * cvmx_dtx_pip_sel#
+ */
+union cvmx_dtx_pip_selx {
+	u64 u64;
+	struct cvmx_dtx_pip_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pip_selx_s cn70xx;
+	struct cvmx_dtx_pip_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pip_selx cvmx_dtx_pip_selx_t;
+
+/**
+ * cvmx_dtx_pki_pbe_bcst_rsp
+ */
+union cvmx_dtx_pki_pbe_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pki_pbe_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pki_pbe_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_pki_pbe_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_pki_pbe_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_pki_pbe_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pbe_bcst_rsp cvmx_dtx_pki_pbe_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pki_pbe_ctl
+ */
+union cvmx_dtx_pki_pbe_ctl {
+	u64 u64;
+	struct cvmx_dtx_pki_pbe_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pki_pbe_ctl_s cn73xx;
+	struct cvmx_dtx_pki_pbe_ctl_s cn78xx;
+	struct cvmx_dtx_pki_pbe_ctl_s cn78xxp1;
+	struct cvmx_dtx_pki_pbe_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pbe_ctl cvmx_dtx_pki_pbe_ctl_t;
+
+/**
+ * cvmx_dtx_pki_pbe_dat#
+ */
+union cvmx_dtx_pki_pbe_datx {
+	u64 u64;
+	struct cvmx_dtx_pki_pbe_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pki_pbe_datx_s cn73xx;
+	struct cvmx_dtx_pki_pbe_datx_s cn78xx;
+	struct cvmx_dtx_pki_pbe_datx_s cn78xxp1;
+	struct cvmx_dtx_pki_pbe_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pbe_datx cvmx_dtx_pki_pbe_datx_t;
+
+/**
+ * cvmx_dtx_pki_pbe_ena#
+ */
+union cvmx_dtx_pki_pbe_enax {
+	u64 u64;
+	struct cvmx_dtx_pki_pbe_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pki_pbe_enax_s cn73xx;
+	struct cvmx_dtx_pki_pbe_enax_s cn78xx;
+	struct cvmx_dtx_pki_pbe_enax_s cn78xxp1;
+	struct cvmx_dtx_pki_pbe_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pbe_enax cvmx_dtx_pki_pbe_enax_t;
+
+/**
+ * cvmx_dtx_pki_pbe_sel#
+ */
+union cvmx_dtx_pki_pbe_selx {
+	u64 u64;
+	struct cvmx_dtx_pki_pbe_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pki_pbe_selx_s cn73xx;
+	struct cvmx_dtx_pki_pbe_selx_s cn78xx;
+	struct cvmx_dtx_pki_pbe_selx_s cn78xxp1;
+	struct cvmx_dtx_pki_pbe_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pbe_selx cvmx_dtx_pki_pbe_selx_t;
+
+/**
+ * cvmx_dtx_pki_pfe_bcst_rsp
+ */
+union cvmx_dtx_pki_pfe_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pki_pfe_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pki_pfe_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_pki_pfe_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_pki_pfe_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_pki_pfe_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pfe_bcst_rsp cvmx_dtx_pki_pfe_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pki_pfe_ctl
+ */
+union cvmx_dtx_pki_pfe_ctl {
+	u64 u64;
+	struct cvmx_dtx_pki_pfe_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pki_pfe_ctl_s cn73xx;
+	struct cvmx_dtx_pki_pfe_ctl_s cn78xx;
+	struct cvmx_dtx_pki_pfe_ctl_s cn78xxp1;
+	struct cvmx_dtx_pki_pfe_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pfe_ctl cvmx_dtx_pki_pfe_ctl_t;
+
+/**
+ * cvmx_dtx_pki_pfe_dat#
+ */
+union cvmx_dtx_pki_pfe_datx {
+	u64 u64;
+	struct cvmx_dtx_pki_pfe_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pki_pfe_datx_s cn73xx;
+	struct cvmx_dtx_pki_pfe_datx_s cn78xx;
+	struct cvmx_dtx_pki_pfe_datx_s cn78xxp1;
+	struct cvmx_dtx_pki_pfe_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pfe_datx cvmx_dtx_pki_pfe_datx_t;
+
+/**
+ * cvmx_dtx_pki_pfe_ena#
+ */
+union cvmx_dtx_pki_pfe_enax {
+	u64 u64;
+	struct cvmx_dtx_pki_pfe_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pki_pfe_enax_s cn73xx;
+	struct cvmx_dtx_pki_pfe_enax_s cn78xx;
+	struct cvmx_dtx_pki_pfe_enax_s cn78xxp1;
+	struct cvmx_dtx_pki_pfe_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pfe_enax cvmx_dtx_pki_pfe_enax_t;
+
+/**
+ * cvmx_dtx_pki_pfe_sel#
+ */
+union cvmx_dtx_pki_pfe_selx {
+	u64 u64;
+	struct cvmx_dtx_pki_pfe_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pki_pfe_selx_s cn73xx;
+	struct cvmx_dtx_pki_pfe_selx_s cn78xx;
+	struct cvmx_dtx_pki_pfe_selx_s cn78xxp1;
+	struct cvmx_dtx_pki_pfe_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pfe_selx cvmx_dtx_pki_pfe_selx_t;
+
+/**
+ * cvmx_dtx_pki_pix_bcst_rsp
+ */
+union cvmx_dtx_pki_pix_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pki_pix_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pki_pix_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_pki_pix_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_pki_pix_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_pki_pix_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pix_bcst_rsp cvmx_dtx_pki_pix_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pki_pix_ctl
+ */
+union cvmx_dtx_pki_pix_ctl {
+	u64 u64;
+	struct cvmx_dtx_pki_pix_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pki_pix_ctl_s cn73xx;
+	struct cvmx_dtx_pki_pix_ctl_s cn78xx;
+	struct cvmx_dtx_pki_pix_ctl_s cn78xxp1;
+	struct cvmx_dtx_pki_pix_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pix_ctl cvmx_dtx_pki_pix_ctl_t;
+
+/**
+ * cvmx_dtx_pki_pix_dat#
+ */
+union cvmx_dtx_pki_pix_datx {
+	u64 u64;
+	struct cvmx_dtx_pki_pix_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pki_pix_datx_s cn73xx;
+	struct cvmx_dtx_pki_pix_datx_s cn78xx;
+	struct cvmx_dtx_pki_pix_datx_s cn78xxp1;
+	struct cvmx_dtx_pki_pix_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pix_datx cvmx_dtx_pki_pix_datx_t;
+
+/**
+ * cvmx_dtx_pki_pix_ena#
+ */
+union cvmx_dtx_pki_pix_enax {
+	u64 u64;
+	struct cvmx_dtx_pki_pix_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pki_pix_enax_s cn73xx;
+	struct cvmx_dtx_pki_pix_enax_s cn78xx;
+	struct cvmx_dtx_pki_pix_enax_s cn78xxp1;
+	struct cvmx_dtx_pki_pix_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pix_enax cvmx_dtx_pki_pix_enax_t;
+
+/**
+ * cvmx_dtx_pki_pix_sel#
+ */
+union cvmx_dtx_pki_pix_selx {
+	u64 u64;
+	struct cvmx_dtx_pki_pix_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pki_pix_selx_s cn73xx;
+	struct cvmx_dtx_pki_pix_selx_s cn78xx;
+	struct cvmx_dtx_pki_pix_selx_s cn78xxp1;
+	struct cvmx_dtx_pki_pix_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pki_pix_selx cvmx_dtx_pki_pix_selx_t;
+
+/**
+ * cvmx_dtx_pko_bcst_rsp
+ */
+union cvmx_dtx_pko_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pko_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pko_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_pko_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_pko_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_pko_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_pko_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_pko_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pko_bcst_rsp cvmx_dtx_pko_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pko_ctl
+ */
+union cvmx_dtx_pko_ctl {
+	u64 u64;
+	struct cvmx_dtx_pko_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pko_ctl_s cn70xx;
+	struct cvmx_dtx_pko_ctl_s cn70xxp1;
+	struct cvmx_dtx_pko_ctl_s cn73xx;
+	struct cvmx_dtx_pko_ctl_s cn78xx;
+	struct cvmx_dtx_pko_ctl_s cn78xxp1;
+	struct cvmx_dtx_pko_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pko_ctl cvmx_dtx_pko_ctl_t;
+
+/**
+ * cvmx_dtx_pko_dat#
+ */
+union cvmx_dtx_pko_datx {
+	u64 u64;
+	struct cvmx_dtx_pko_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pko_datx_s cn70xx;
+	struct cvmx_dtx_pko_datx_s cn70xxp1;
+	struct cvmx_dtx_pko_datx_s cn73xx;
+	struct cvmx_dtx_pko_datx_s cn78xx;
+	struct cvmx_dtx_pko_datx_s cn78xxp1;
+	struct cvmx_dtx_pko_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pko_datx cvmx_dtx_pko_datx_t;
+
+/**
+ * cvmx_dtx_pko_ena#
+ */
+union cvmx_dtx_pko_enax {
+	u64 u64;
+	struct cvmx_dtx_pko_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pko_enax_s cn70xx;
+	struct cvmx_dtx_pko_enax_s cn70xxp1;
+	struct cvmx_dtx_pko_enax_s cn73xx;
+	struct cvmx_dtx_pko_enax_s cn78xx;
+	struct cvmx_dtx_pko_enax_s cn78xxp1;
+	struct cvmx_dtx_pko_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pko_enax cvmx_dtx_pko_enax_t;
+
+/**
+ * cvmx_dtx_pko_sel#
+ */
+union cvmx_dtx_pko_selx {
+	u64 u64;
+	struct cvmx_dtx_pko_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pko_selx_s cn70xx;
+	struct cvmx_dtx_pko_selx_s cn70xxp1;
+	struct cvmx_dtx_pko_selx_s cn73xx;
+	struct cvmx_dtx_pko_selx_s cn78xx;
+	struct cvmx_dtx_pko_selx_s cn78xxp1;
+	struct cvmx_dtx_pko_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pko_selx cvmx_dtx_pko_selx_t;
+
+/**
+ * cvmx_dtx_pnb#_bcst_rsp
+ */
+union cvmx_dtx_pnbx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pnbx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pnbx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbx_bcst_rsp cvmx_dtx_pnbx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pnb#_ctl
+ */
+union cvmx_dtx_pnbx_ctl {
+	u64 u64;
+	struct cvmx_dtx_pnbx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pnbx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbx_ctl cvmx_dtx_pnbx_ctl_t;
+
+/**
+ * cvmx_dtx_pnb#_dat#
+ */
+union cvmx_dtx_pnbx_datx {
+	u64 u64;
+	struct cvmx_dtx_pnbx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pnbx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbx_datx cvmx_dtx_pnbx_datx_t;
+
+/**
+ * cvmx_dtx_pnb#_ena#
+ */
+union cvmx_dtx_pnbx_enax {
+	u64 u64;
+	struct cvmx_dtx_pnbx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pnbx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbx_enax cvmx_dtx_pnbx_enax_t;
+
+/**
+ * cvmx_dtx_pnb#_sel#
+ */
+union cvmx_dtx_pnbx_selx {
+	u64 u64;
+	struct cvmx_dtx_pnbx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pnbx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbx_selx cvmx_dtx_pnbx_selx_t;
+
+/**
+ * cvmx_dtx_pnbd#_bcst_rsp
+ */
+union cvmx_dtx_pnbdx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pnbdx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pnbdx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbdx_bcst_rsp cvmx_dtx_pnbdx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pnbd#_ctl
+ */
+union cvmx_dtx_pnbdx_ctl {
+	u64 u64;
+	struct cvmx_dtx_pnbdx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pnbdx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbdx_ctl cvmx_dtx_pnbdx_ctl_t;
+
+/**
+ * cvmx_dtx_pnbd#_dat#
+ */
+union cvmx_dtx_pnbdx_datx {
+	u64 u64;
+	struct cvmx_dtx_pnbdx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pnbdx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbdx_datx cvmx_dtx_pnbdx_datx_t;
+
+/**
+ * cvmx_dtx_pnbd#_ena#
+ */
+union cvmx_dtx_pnbdx_enax {
+	u64 u64;
+	struct cvmx_dtx_pnbdx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pnbdx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbdx_enax cvmx_dtx_pnbdx_enax_t;
+
+/**
+ * cvmx_dtx_pnbd#_sel#
+ */
+union cvmx_dtx_pnbdx_selx {
+	u64 u64;
+	struct cvmx_dtx_pnbdx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pnbdx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_pnbdx_selx cvmx_dtx_pnbdx_selx_t;
+
+/**
+ * cvmx_dtx_pow_bcst_rsp
+ */
+union cvmx_dtx_pow_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_pow_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_pow_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_pow_bcst_rsp_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pow_bcst_rsp cvmx_dtx_pow_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_pow_ctl
+ */
+union cvmx_dtx_pow_ctl {
+	u64 u64;
+	struct cvmx_dtx_pow_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_pow_ctl_s cn70xx;
+	struct cvmx_dtx_pow_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pow_ctl cvmx_dtx_pow_ctl_t;
+
+/**
+ * cvmx_dtx_pow_dat#
+ */
+union cvmx_dtx_pow_datx {
+	u64 u64;
+	struct cvmx_dtx_pow_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_pow_datx_s cn70xx;
+	struct cvmx_dtx_pow_datx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pow_datx cvmx_dtx_pow_datx_t;
+
+/**
+ * cvmx_dtx_pow_ena#
+ */
+union cvmx_dtx_pow_enax {
+	u64 u64;
+	struct cvmx_dtx_pow_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_pow_enax_s cn70xx;
+	struct cvmx_dtx_pow_enax_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pow_enax cvmx_dtx_pow_enax_t;
+
+/**
+ * cvmx_dtx_pow_sel#
+ */
+union cvmx_dtx_pow_selx {
+	u64 u64;
+	struct cvmx_dtx_pow_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_pow_selx_s cn70xx;
+	struct cvmx_dtx_pow_selx_s cn70xxp1;
+};
+
+typedef union cvmx_dtx_pow_selx cvmx_dtx_pow_selx_t;
+
+/**
+ * cvmx_dtx_prch_bcst_rsp
+ */
+union cvmx_dtx_prch_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_prch_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_prch_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_prch_bcst_rsp cvmx_dtx_prch_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_prch_ctl
+ */
+union cvmx_dtx_prch_ctl {
+	u64 u64;
+	struct cvmx_dtx_prch_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_prch_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_prch_ctl cvmx_dtx_prch_ctl_t;
+
+/**
+ * cvmx_dtx_prch_dat#
+ */
+union cvmx_dtx_prch_datx {
+	u64 u64;
+	struct cvmx_dtx_prch_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_prch_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_prch_datx cvmx_dtx_prch_datx_t;
+
+/**
+ * cvmx_dtx_prch_ena#
+ */
+union cvmx_dtx_prch_enax {
+	u64 u64;
+	struct cvmx_dtx_prch_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_prch_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_prch_enax cvmx_dtx_prch_enax_t;
+
+/**
+ * cvmx_dtx_prch_sel#
+ */
+union cvmx_dtx_prch_selx {
+	u64 u64;
+	struct cvmx_dtx_prch_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_prch_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_prch_selx cvmx_dtx_prch_selx_t;
+
+/**
+ * cvmx_dtx_psm_bcst_rsp
+ */
+union cvmx_dtx_psm_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_psm_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_psm_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_psm_bcst_rsp cvmx_dtx_psm_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_psm_ctl
+ */
+union cvmx_dtx_psm_ctl {
+	u64 u64;
+	struct cvmx_dtx_psm_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_psm_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_psm_ctl cvmx_dtx_psm_ctl_t;
+
+/**
+ * cvmx_dtx_psm_dat#
+ */
+union cvmx_dtx_psm_datx {
+	u64 u64;
+	struct cvmx_dtx_psm_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_psm_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_psm_datx cvmx_dtx_psm_datx_t;
+
+/**
+ * cvmx_dtx_psm_ena#
+ */
+union cvmx_dtx_psm_enax {
+	u64 u64;
+	struct cvmx_dtx_psm_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_psm_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_psm_enax cvmx_dtx_psm_enax_t;
+
+/**
+ * cvmx_dtx_psm_sel#
+ */
+union cvmx_dtx_psm_selx {
+	u64 u64;
+	struct cvmx_dtx_psm_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_psm_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_psm_selx cvmx_dtx_psm_selx_t;
+
+/**
+ * cvmx_dtx_rad_bcst_rsp
+ */
+union cvmx_dtx_rad_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rad_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rad_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_rad_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_rad_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_rad_bcst_rsp cvmx_dtx_rad_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rad_ctl
+ */
+union cvmx_dtx_rad_ctl {
+	u64 u64;
+	struct cvmx_dtx_rad_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rad_ctl_s cn73xx;
+	struct cvmx_dtx_rad_ctl_s cn78xx;
+	struct cvmx_dtx_rad_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_rad_ctl cvmx_dtx_rad_ctl_t;
+
+/**
+ * cvmx_dtx_rad_dat#
+ */
+union cvmx_dtx_rad_datx {
+	u64 u64;
+	struct cvmx_dtx_rad_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rad_datx_s cn73xx;
+	struct cvmx_dtx_rad_datx_s cn78xx;
+	struct cvmx_dtx_rad_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_rad_datx cvmx_dtx_rad_datx_t;
+
+/**
+ * cvmx_dtx_rad_ena#
+ */
+union cvmx_dtx_rad_enax {
+	u64 u64;
+	struct cvmx_dtx_rad_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rad_enax_s cn73xx;
+	struct cvmx_dtx_rad_enax_s cn78xx;
+	struct cvmx_dtx_rad_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_rad_enax cvmx_dtx_rad_enax_t;
+
+/**
+ * cvmx_dtx_rad_sel#
+ */
+union cvmx_dtx_rad_selx {
+	u64 u64;
+	struct cvmx_dtx_rad_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rad_selx_s cn73xx;
+	struct cvmx_dtx_rad_selx_s cn78xx;
+	struct cvmx_dtx_rad_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_rad_selx cvmx_dtx_rad_selx_t;
+
+/**
+ * cvmx_dtx_rdec_bcst_rsp
+ */
+union cvmx_dtx_rdec_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rdec_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rdec_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rdec_bcst_rsp cvmx_dtx_rdec_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rdec_ctl
+ */
+union cvmx_dtx_rdec_ctl {
+	u64 u64;
+	struct cvmx_dtx_rdec_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rdec_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rdec_ctl cvmx_dtx_rdec_ctl_t;
+
+/**
+ * cvmx_dtx_rdec_dat#
+ */
+union cvmx_dtx_rdec_datx {
+	u64 u64;
+	struct cvmx_dtx_rdec_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rdec_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rdec_datx cvmx_dtx_rdec_datx_t;
+
+/**
+ * cvmx_dtx_rdec_ena#
+ */
+union cvmx_dtx_rdec_enax {
+	u64 u64;
+	struct cvmx_dtx_rdec_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rdec_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rdec_enax cvmx_dtx_rdec_enax_t;
+
+/**
+ * cvmx_dtx_rdec_sel#
+ */
+union cvmx_dtx_rdec_selx {
+	u64 u64;
+	struct cvmx_dtx_rdec_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rdec_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rdec_selx cvmx_dtx_rdec_selx_t;
+
+/**
+ * cvmx_dtx_rfif_bcst_rsp
+ */
+union cvmx_dtx_rfif_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rfif_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rfif_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rfif_bcst_rsp cvmx_dtx_rfif_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rfif_ctl
+ */
+union cvmx_dtx_rfif_ctl {
+	u64 u64;
+	struct cvmx_dtx_rfif_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rfif_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rfif_ctl cvmx_dtx_rfif_ctl_t;
+
+/**
+ * cvmx_dtx_rfif_dat#
+ */
+union cvmx_dtx_rfif_datx {
+	u64 u64;
+	struct cvmx_dtx_rfif_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rfif_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rfif_datx cvmx_dtx_rfif_datx_t;
+
+/**
+ * cvmx_dtx_rfif_ena#
+ */
+union cvmx_dtx_rfif_enax {
+	u64 u64;
+	struct cvmx_dtx_rfif_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rfif_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rfif_enax cvmx_dtx_rfif_enax_t;
+
+/**
+ * cvmx_dtx_rfif_sel#
+ */
+union cvmx_dtx_rfif_selx {
+	u64 u64;
+	struct cvmx_dtx_rfif_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rfif_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rfif_selx cvmx_dtx_rfif_selx_t;
+
+/**
+ * cvmx_dtx_rmap_bcst_rsp
+ */
+union cvmx_dtx_rmap_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rmap_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rmap_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rmap_bcst_rsp cvmx_dtx_rmap_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rmap_ctl
+ */
+union cvmx_dtx_rmap_ctl {
+	u64 u64;
+	struct cvmx_dtx_rmap_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rmap_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rmap_ctl cvmx_dtx_rmap_ctl_t;
+
+/**
+ * cvmx_dtx_rmap_dat#
+ */
+union cvmx_dtx_rmap_datx {
+	u64 u64;
+	struct cvmx_dtx_rmap_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rmap_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rmap_datx cvmx_dtx_rmap_datx_t;
+
+/**
+ * cvmx_dtx_rmap_ena#
+ */
+union cvmx_dtx_rmap_enax {
+	u64 u64;
+	struct cvmx_dtx_rmap_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rmap_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rmap_enax cvmx_dtx_rmap_enax_t;
+
+/**
+ * cvmx_dtx_rmap_sel#
+ */
+union cvmx_dtx_rmap_selx {
+	u64 u64;
+	struct cvmx_dtx_rmap_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rmap_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rmap_selx cvmx_dtx_rmap_selx_t;
+
+/**
+ * cvmx_dtx_rnm_bcst_rsp
+ */
+union cvmx_dtx_rnm_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rnm_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rnm_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_rnm_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_rnm_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_rnm_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rnm_bcst_rsp cvmx_dtx_rnm_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rnm_ctl
+ */
+union cvmx_dtx_rnm_ctl {
+	u64 u64;
+	struct cvmx_dtx_rnm_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rnm_ctl_s cn73xx;
+	struct cvmx_dtx_rnm_ctl_s cn78xx;
+	struct cvmx_dtx_rnm_ctl_s cn78xxp1;
+	struct cvmx_dtx_rnm_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rnm_ctl cvmx_dtx_rnm_ctl_t;
+
+/**
+ * cvmx_dtx_rnm_dat#
+ */
+union cvmx_dtx_rnm_datx {
+	u64 u64;
+	struct cvmx_dtx_rnm_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rnm_datx_s cn73xx;
+	struct cvmx_dtx_rnm_datx_s cn78xx;
+	struct cvmx_dtx_rnm_datx_s cn78xxp1;
+	struct cvmx_dtx_rnm_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rnm_datx cvmx_dtx_rnm_datx_t;
+
+/**
+ * cvmx_dtx_rnm_ena#
+ */
+union cvmx_dtx_rnm_enax {
+	u64 u64;
+	struct cvmx_dtx_rnm_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rnm_enax_s cn73xx;
+	struct cvmx_dtx_rnm_enax_s cn78xx;
+	struct cvmx_dtx_rnm_enax_s cn78xxp1;
+	struct cvmx_dtx_rnm_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rnm_enax cvmx_dtx_rnm_enax_t;
+
+/**
+ * cvmx_dtx_rnm_sel#
+ */
+union cvmx_dtx_rnm_selx {
+	u64 u64;
+	struct cvmx_dtx_rnm_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rnm_selx_s cn73xx;
+	struct cvmx_dtx_rnm_selx_s cn78xx;
+	struct cvmx_dtx_rnm_selx_s cn78xxp1;
+	struct cvmx_dtx_rnm_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rnm_selx cvmx_dtx_rnm_selx_t;
+
+/**
+ * cvmx_dtx_rst_bcst_rsp
+ */
+union cvmx_dtx_rst_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_rst_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_rst_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_rst_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_rst_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_rst_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_rst_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_rst_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rst_bcst_rsp cvmx_dtx_rst_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_rst_ctl
+ */
+union cvmx_dtx_rst_ctl {
+	u64 u64;
+	struct cvmx_dtx_rst_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_rst_ctl_s cn70xx;
+	struct cvmx_dtx_rst_ctl_s cn70xxp1;
+	struct cvmx_dtx_rst_ctl_s cn73xx;
+	struct cvmx_dtx_rst_ctl_s cn78xx;
+	struct cvmx_dtx_rst_ctl_s cn78xxp1;
+	struct cvmx_dtx_rst_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rst_ctl cvmx_dtx_rst_ctl_t;
+
+/**
+ * cvmx_dtx_rst_dat#
+ */
+union cvmx_dtx_rst_datx {
+	u64 u64;
+	struct cvmx_dtx_rst_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_rst_datx_s cn70xx;
+	struct cvmx_dtx_rst_datx_s cn70xxp1;
+	struct cvmx_dtx_rst_datx_s cn73xx;
+	struct cvmx_dtx_rst_datx_s cn78xx;
+	struct cvmx_dtx_rst_datx_s cn78xxp1;
+	struct cvmx_dtx_rst_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rst_datx cvmx_dtx_rst_datx_t;
+
+/**
+ * cvmx_dtx_rst_ena#
+ */
+union cvmx_dtx_rst_enax {
+	u64 u64;
+	struct cvmx_dtx_rst_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_rst_enax_s cn70xx;
+	struct cvmx_dtx_rst_enax_s cn70xxp1;
+	struct cvmx_dtx_rst_enax_s cn73xx;
+	struct cvmx_dtx_rst_enax_s cn78xx;
+	struct cvmx_dtx_rst_enax_s cn78xxp1;
+	struct cvmx_dtx_rst_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rst_enax cvmx_dtx_rst_enax_t;
+
+/**
+ * cvmx_dtx_rst_sel#
+ */
+union cvmx_dtx_rst_selx {
+	u64 u64;
+	struct cvmx_dtx_rst_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_rst_selx_s cn70xx;
+	struct cvmx_dtx_rst_selx_s cn70xxp1;
+	struct cvmx_dtx_rst_selx_s cn73xx;
+	struct cvmx_dtx_rst_selx_s cn78xx;
+	struct cvmx_dtx_rst_selx_s cn78xxp1;
+	struct cvmx_dtx_rst_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_rst_selx cvmx_dtx_rst_selx_t;
+
+/**
+ * cvmx_dtx_sata_bcst_rsp
+ */
+union cvmx_dtx_sata_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_sata_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_sata_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_sata_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_sata_bcst_rsp_s cn73xx;
+};
+
+typedef union cvmx_dtx_sata_bcst_rsp cvmx_dtx_sata_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_sata_ctl
+ */
+union cvmx_dtx_sata_ctl {
+	u64 u64;
+	struct cvmx_dtx_sata_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_sata_ctl_s cn70xx;
+	struct cvmx_dtx_sata_ctl_s cn70xxp1;
+	struct cvmx_dtx_sata_ctl_s cn73xx;
+};
+
+typedef union cvmx_dtx_sata_ctl cvmx_dtx_sata_ctl_t;
+
+/**
+ * cvmx_dtx_sata_dat#
+ */
+union cvmx_dtx_sata_datx {
+	u64 u64;
+	struct cvmx_dtx_sata_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_sata_datx_s cn70xx;
+	struct cvmx_dtx_sata_datx_s cn70xxp1;
+	struct cvmx_dtx_sata_datx_s cn73xx;
+};
+
+typedef union cvmx_dtx_sata_datx cvmx_dtx_sata_datx_t;
+
+/**
+ * cvmx_dtx_sata_ena#
+ */
+union cvmx_dtx_sata_enax {
+	u64 u64;
+	struct cvmx_dtx_sata_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_sata_enax_s cn70xx;
+	struct cvmx_dtx_sata_enax_s cn70xxp1;
+	struct cvmx_dtx_sata_enax_s cn73xx;
+};
+
+typedef union cvmx_dtx_sata_enax cvmx_dtx_sata_enax_t;
+
+/**
+ * cvmx_dtx_sata_sel#
+ */
+union cvmx_dtx_sata_selx {
+	u64 u64;
+	struct cvmx_dtx_sata_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_sata_selx_s cn70xx;
+	struct cvmx_dtx_sata_selx_s cn70xxp1;
+	struct cvmx_dtx_sata_selx_s cn73xx;
+};
+
+typedef union cvmx_dtx_sata_selx cvmx_dtx_sata_selx_t;
+
+/**
+ * cvmx_dtx_sli_bcst_rsp
+ */
+union cvmx_dtx_sli_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_sli_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_sli_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_sli_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_sli_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_sli_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_sli_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_sli_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sli_bcst_rsp cvmx_dtx_sli_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_sli_ctl
+ */
+union cvmx_dtx_sli_ctl {
+	u64 u64;
+	struct cvmx_dtx_sli_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_sli_ctl_s cn70xx;
+	struct cvmx_dtx_sli_ctl_s cn70xxp1;
+	struct cvmx_dtx_sli_ctl_s cn73xx;
+	struct cvmx_dtx_sli_ctl_s cn78xx;
+	struct cvmx_dtx_sli_ctl_s cn78xxp1;
+	struct cvmx_dtx_sli_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sli_ctl cvmx_dtx_sli_ctl_t;
+
+/**
+ * cvmx_dtx_sli_dat#
+ */
+union cvmx_dtx_sli_datx {
+	u64 u64;
+	struct cvmx_dtx_sli_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_sli_datx_s cn70xx;
+	struct cvmx_dtx_sli_datx_s cn70xxp1;
+	struct cvmx_dtx_sli_datx_s cn73xx;
+	struct cvmx_dtx_sli_datx_s cn78xx;
+	struct cvmx_dtx_sli_datx_s cn78xxp1;
+	struct cvmx_dtx_sli_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sli_datx cvmx_dtx_sli_datx_t;
+
+/**
+ * cvmx_dtx_sli_ena#
+ */
+union cvmx_dtx_sli_enax {
+	u64 u64;
+	struct cvmx_dtx_sli_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_sli_enax_s cn70xx;
+	struct cvmx_dtx_sli_enax_s cn70xxp1;
+	struct cvmx_dtx_sli_enax_s cn73xx;
+	struct cvmx_dtx_sli_enax_s cn78xx;
+	struct cvmx_dtx_sli_enax_s cn78xxp1;
+	struct cvmx_dtx_sli_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sli_enax cvmx_dtx_sli_enax_t;
+
+/**
+ * cvmx_dtx_sli_sel#
+ */
+union cvmx_dtx_sli_selx {
+	u64 u64;
+	struct cvmx_dtx_sli_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_sli_selx_s cn70xx;
+	struct cvmx_dtx_sli_selx_s cn70xxp1;
+	struct cvmx_dtx_sli_selx_s cn73xx;
+	struct cvmx_dtx_sli_selx_s cn78xx;
+	struct cvmx_dtx_sli_selx_s cn78xxp1;
+	struct cvmx_dtx_sli_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sli_selx cvmx_dtx_sli_selx_t;
+
+/**
+ * cvmx_dtx_spem_bcst_rsp
+ */
+union cvmx_dtx_spem_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_spem_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_spem_bcst_rsp_s cn73xx;
+};
+
+typedef union cvmx_dtx_spem_bcst_rsp cvmx_dtx_spem_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_spem_ctl
+ */
+union cvmx_dtx_spem_ctl {
+	u64 u64;
+	struct cvmx_dtx_spem_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_spem_ctl_s cn73xx;
+};
+
+typedef union cvmx_dtx_spem_ctl cvmx_dtx_spem_ctl_t;
+
+/**
+ * cvmx_dtx_spem_dat#
+ */
+union cvmx_dtx_spem_datx {
+	u64 u64;
+	struct cvmx_dtx_spem_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_spem_datx_s cn73xx;
+};
+
+typedef union cvmx_dtx_spem_datx cvmx_dtx_spem_datx_t;
+
+/**
+ * cvmx_dtx_spem_ena#
+ */
+union cvmx_dtx_spem_enax {
+	u64 u64;
+	struct cvmx_dtx_spem_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_spem_enax_s cn73xx;
+};
+
+typedef union cvmx_dtx_spem_enax cvmx_dtx_spem_enax_t;
+
+/**
+ * cvmx_dtx_spem_sel#
+ */
+union cvmx_dtx_spem_selx {
+	u64 u64;
+	struct cvmx_dtx_spem_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_spem_selx_s cn73xx;
+};
+
+typedef union cvmx_dtx_spem_selx cvmx_dtx_spem_selx_t;
+
+/**
+ * cvmx_dtx_srio#_bcst_rsp
+ */
+union cvmx_dtx_sriox_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_sriox_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_sriox_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sriox_bcst_rsp cvmx_dtx_sriox_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_srio#_ctl
+ */
+union cvmx_dtx_sriox_ctl {
+	u64 u64;
+	struct cvmx_dtx_sriox_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_sriox_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sriox_ctl cvmx_dtx_sriox_ctl_t;
+
+/**
+ * cvmx_dtx_srio#_dat#
+ */
+union cvmx_dtx_sriox_datx {
+	u64 u64;
+	struct cvmx_dtx_sriox_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_sriox_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sriox_datx cvmx_dtx_sriox_datx_t;
+
+/**
+ * cvmx_dtx_srio#_ena#
+ */
+union cvmx_dtx_sriox_enax {
+	u64 u64;
+	struct cvmx_dtx_sriox_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_sriox_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sriox_enax cvmx_dtx_sriox_enax_t;
+
+/**
+ * cvmx_dtx_srio#_sel#
+ */
+union cvmx_dtx_sriox_selx {
+	u64 u64;
+	struct cvmx_dtx_sriox_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_sriox_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sriox_selx cvmx_dtx_sriox_selx_t;
+
+/**
+ * cvmx_dtx_sso_bcst_rsp
+ */
+union cvmx_dtx_sso_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_sso_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_sso_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_sso_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_sso_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_sso_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sso_bcst_rsp cvmx_dtx_sso_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_sso_ctl
+ */
+union cvmx_dtx_sso_ctl {
+	u64 u64;
+	struct cvmx_dtx_sso_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_sso_ctl_s cn73xx;
+	struct cvmx_dtx_sso_ctl_s cn78xx;
+	struct cvmx_dtx_sso_ctl_s cn78xxp1;
+	struct cvmx_dtx_sso_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sso_ctl cvmx_dtx_sso_ctl_t;
+
+/**
+ * cvmx_dtx_sso_dat#
+ */
+union cvmx_dtx_sso_datx {
+	u64 u64;
+	struct cvmx_dtx_sso_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_sso_datx_s cn73xx;
+	struct cvmx_dtx_sso_datx_s cn78xx;
+	struct cvmx_dtx_sso_datx_s cn78xxp1;
+	struct cvmx_dtx_sso_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sso_datx cvmx_dtx_sso_datx_t;
+
+/**
+ * cvmx_dtx_sso_ena#
+ */
+union cvmx_dtx_sso_enax {
+	u64 u64;
+	struct cvmx_dtx_sso_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_sso_enax_s cn73xx;
+	struct cvmx_dtx_sso_enax_s cn78xx;
+	struct cvmx_dtx_sso_enax_s cn78xxp1;
+	struct cvmx_dtx_sso_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sso_enax cvmx_dtx_sso_enax_t;
+
+/**
+ * cvmx_dtx_sso_sel#
+ */
+union cvmx_dtx_sso_selx {
+	u64 u64;
+	struct cvmx_dtx_sso_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_sso_selx_s cn73xx;
+	struct cvmx_dtx_sso_selx_s cn78xx;
+	struct cvmx_dtx_sso_selx_s cn78xxp1;
+	struct cvmx_dtx_sso_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_sso_selx cvmx_dtx_sso_selx_t;
+
+/**
+ * cvmx_dtx_tdec_bcst_rsp
+ */
+union cvmx_dtx_tdec_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_tdec_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_tdec_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tdec_bcst_rsp cvmx_dtx_tdec_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_tdec_ctl
+ */
+union cvmx_dtx_tdec_ctl {
+	u64 u64;
+	struct cvmx_dtx_tdec_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_tdec_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tdec_ctl cvmx_dtx_tdec_ctl_t;
+
+/**
+ * cvmx_dtx_tdec_dat#
+ */
+union cvmx_dtx_tdec_datx {
+	u64 u64;
+	struct cvmx_dtx_tdec_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_tdec_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tdec_datx cvmx_dtx_tdec_datx_t;
+
+/**
+ * cvmx_dtx_tdec_ena#
+ */
+union cvmx_dtx_tdec_enax {
+	u64 u64;
+	struct cvmx_dtx_tdec_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_tdec_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tdec_enax cvmx_dtx_tdec_enax_t;
+
+/**
+ * cvmx_dtx_tdec_sel#
+ */
+union cvmx_dtx_tdec_selx {
+	u64 u64;
+	struct cvmx_dtx_tdec_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_tdec_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tdec_selx cvmx_dtx_tdec_selx_t;
+
+/**
+ * cvmx_dtx_tim_bcst_rsp
+ */
+union cvmx_dtx_tim_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_tim_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_tim_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_tim_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_tim_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_tim_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_tim_bcst_rsp_s cn78xxp1;
+	struct cvmx_dtx_tim_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tim_bcst_rsp cvmx_dtx_tim_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_tim_ctl
+ */
+union cvmx_dtx_tim_ctl {
+	u64 u64;
+	struct cvmx_dtx_tim_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_tim_ctl_s cn70xx;
+	struct cvmx_dtx_tim_ctl_s cn70xxp1;
+	struct cvmx_dtx_tim_ctl_s cn73xx;
+	struct cvmx_dtx_tim_ctl_s cn78xx;
+	struct cvmx_dtx_tim_ctl_s cn78xxp1;
+	struct cvmx_dtx_tim_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tim_ctl cvmx_dtx_tim_ctl_t;
+
+/**
+ * cvmx_dtx_tim_dat#
+ */
+union cvmx_dtx_tim_datx {
+	u64 u64;
+	struct cvmx_dtx_tim_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_tim_datx_s cn70xx;
+	struct cvmx_dtx_tim_datx_s cn70xxp1;
+	struct cvmx_dtx_tim_datx_s cn73xx;
+	struct cvmx_dtx_tim_datx_s cn78xx;
+	struct cvmx_dtx_tim_datx_s cn78xxp1;
+	struct cvmx_dtx_tim_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tim_datx cvmx_dtx_tim_datx_t;
+
+/**
+ * cvmx_dtx_tim_ena#
+ */
+union cvmx_dtx_tim_enax {
+	u64 u64;
+	struct cvmx_dtx_tim_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_tim_enax_s cn70xx;
+	struct cvmx_dtx_tim_enax_s cn70xxp1;
+	struct cvmx_dtx_tim_enax_s cn73xx;
+	struct cvmx_dtx_tim_enax_s cn78xx;
+	struct cvmx_dtx_tim_enax_s cn78xxp1;
+	struct cvmx_dtx_tim_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tim_enax cvmx_dtx_tim_enax_t;
+
+/**
+ * cvmx_dtx_tim_sel#
+ */
+union cvmx_dtx_tim_selx {
+	u64 u64;
+	struct cvmx_dtx_tim_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_tim_selx_s cn70xx;
+	struct cvmx_dtx_tim_selx_s cn70xxp1;
+	struct cvmx_dtx_tim_selx_s cn73xx;
+	struct cvmx_dtx_tim_selx_s cn78xx;
+	struct cvmx_dtx_tim_selx_s cn78xxp1;
+	struct cvmx_dtx_tim_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_tim_selx cvmx_dtx_tim_selx_t;
+
+/**
+ * cvmx_dtx_ulfe_bcst_rsp
+ */
+union cvmx_dtx_ulfe_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_ulfe_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_ulfe_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ulfe_bcst_rsp cvmx_dtx_ulfe_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_ulfe_ctl
+ */
+union cvmx_dtx_ulfe_ctl {
+	u64 u64;
+	struct cvmx_dtx_ulfe_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_ulfe_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ulfe_ctl cvmx_dtx_ulfe_ctl_t;
+
+/**
+ * cvmx_dtx_ulfe_dat#
+ */
+union cvmx_dtx_ulfe_datx {
+	u64 u64;
+	struct cvmx_dtx_ulfe_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_ulfe_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ulfe_datx cvmx_dtx_ulfe_datx_t;
+
+/**
+ * cvmx_dtx_ulfe_ena#
+ */
+union cvmx_dtx_ulfe_enax {
+	u64 u64;
+	struct cvmx_dtx_ulfe_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_ulfe_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ulfe_enax cvmx_dtx_ulfe_enax_t;
+
+/**
+ * cvmx_dtx_ulfe_sel#
+ */
+union cvmx_dtx_ulfe_selx {
+	u64 u64;
+	struct cvmx_dtx_ulfe_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_ulfe_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_ulfe_selx cvmx_dtx_ulfe_selx_t;
+
+/**
+ * cvmx_dtx_usbdrd#_bcst_rsp
+ */
+union cvmx_dtx_usbdrdx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_usbdrdx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_usbdrdx_bcst_rsp_s cn70xx;
+	struct cvmx_dtx_usbdrdx_bcst_rsp_s cn70xxp1;
+	struct cvmx_dtx_usbdrdx_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_usbdrdx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_usbdrdx_bcst_rsp cvmx_dtx_usbdrdx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_usbdrd#_ctl
+ */
+union cvmx_dtx_usbdrdx_ctl {
+	u64 u64;
+	struct cvmx_dtx_usbdrdx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_usbdrdx_ctl_s cn70xx;
+	struct cvmx_dtx_usbdrdx_ctl_s cn70xxp1;
+	struct cvmx_dtx_usbdrdx_ctl_s cn73xx;
+	struct cvmx_dtx_usbdrdx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_usbdrdx_ctl cvmx_dtx_usbdrdx_ctl_t;
+
+/**
+ * cvmx_dtx_usbdrd#_dat#
+ */
+union cvmx_dtx_usbdrdx_datx {
+	u64 u64;
+	struct cvmx_dtx_usbdrdx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_usbdrdx_datx_s cn70xx;
+	struct cvmx_dtx_usbdrdx_datx_s cn70xxp1;
+	struct cvmx_dtx_usbdrdx_datx_s cn73xx;
+	struct cvmx_dtx_usbdrdx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_usbdrdx_datx cvmx_dtx_usbdrdx_datx_t;
+
+/**
+ * cvmx_dtx_usbdrd#_ena#
+ */
+union cvmx_dtx_usbdrdx_enax {
+	u64 u64;
+	struct cvmx_dtx_usbdrdx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_usbdrdx_enax_s cn70xx;
+	struct cvmx_dtx_usbdrdx_enax_s cn70xxp1;
+	struct cvmx_dtx_usbdrdx_enax_s cn73xx;
+	struct cvmx_dtx_usbdrdx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_usbdrdx_enax cvmx_dtx_usbdrdx_enax_t;
+
+/**
+ * cvmx_dtx_usbdrd#_sel#
+ */
+union cvmx_dtx_usbdrdx_selx {
+	u64 u64;
+	struct cvmx_dtx_usbdrdx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_usbdrdx_selx_s cn70xx;
+	struct cvmx_dtx_usbdrdx_selx_s cn70xxp1;
+	struct cvmx_dtx_usbdrdx_selx_s cn73xx;
+	struct cvmx_dtx_usbdrdx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_usbdrdx_selx cvmx_dtx_usbdrdx_selx_t;
+
+/**
+ * cvmx_dtx_usbh#_bcst_rsp
+ */
+union cvmx_dtx_usbhx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_usbhx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_usbhx_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_usbhx_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_usbhx_bcst_rsp cvmx_dtx_usbhx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_usbh#_ctl
+ */
+union cvmx_dtx_usbhx_ctl {
+	u64 u64;
+	struct cvmx_dtx_usbhx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_usbhx_ctl_s cn78xx;
+	struct cvmx_dtx_usbhx_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_usbhx_ctl cvmx_dtx_usbhx_ctl_t;
+
+/**
+ * cvmx_dtx_usbh#_dat#
+ */
+union cvmx_dtx_usbhx_datx {
+	u64 u64;
+	struct cvmx_dtx_usbhx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_usbhx_datx_s cn78xx;
+	struct cvmx_dtx_usbhx_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_usbhx_datx cvmx_dtx_usbhx_datx_t;
+
+/**
+ * cvmx_dtx_usbh#_ena#
+ */
+union cvmx_dtx_usbhx_enax {
+	u64 u64;
+	struct cvmx_dtx_usbhx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_usbhx_enax_s cn78xx;
+	struct cvmx_dtx_usbhx_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_usbhx_enax cvmx_dtx_usbhx_enax_t;
+
+/**
+ * cvmx_dtx_usbh#_sel#
+ */
+union cvmx_dtx_usbhx_selx {
+	u64 u64;
+	struct cvmx_dtx_usbhx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_usbhx_selx_s cn78xx;
+	struct cvmx_dtx_usbhx_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_usbhx_selx cvmx_dtx_usbhx_selx_t;
+
+/**
+ * cvmx_dtx_vdec_bcst_rsp
+ */
+union cvmx_dtx_vdec_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_vdec_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_vdec_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_vdec_bcst_rsp cvmx_dtx_vdec_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_vdec_ctl
+ */
+union cvmx_dtx_vdec_ctl {
+	u64 u64;
+	struct cvmx_dtx_vdec_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_vdec_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_vdec_ctl cvmx_dtx_vdec_ctl_t;
+
+/**
+ * cvmx_dtx_vdec_dat#
+ */
+union cvmx_dtx_vdec_datx {
+	u64 u64;
+	struct cvmx_dtx_vdec_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_vdec_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_vdec_datx cvmx_dtx_vdec_datx_t;
+
+/**
+ * cvmx_dtx_vdec_ena#
+ */
+union cvmx_dtx_vdec_enax {
+	u64 u64;
+	struct cvmx_dtx_vdec_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_vdec_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_vdec_enax cvmx_dtx_vdec_enax_t;
+
+/**
+ * cvmx_dtx_vdec_sel#
+ */
+union cvmx_dtx_vdec_selx {
+	u64 u64;
+	struct cvmx_dtx_vdec_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_vdec_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_vdec_selx cvmx_dtx_vdec_selx_t;
+
+/**
+ * cvmx_dtx_wpse_bcst_rsp
+ */
+union cvmx_dtx_wpse_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_wpse_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_wpse_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wpse_bcst_rsp cvmx_dtx_wpse_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_wpse_ctl
+ */
+union cvmx_dtx_wpse_ctl {
+	u64 u64;
+	struct cvmx_dtx_wpse_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_wpse_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wpse_ctl cvmx_dtx_wpse_ctl_t;
+
+/**
+ * cvmx_dtx_wpse_dat#
+ */
+union cvmx_dtx_wpse_datx {
+	u64 u64;
+	struct cvmx_dtx_wpse_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_wpse_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wpse_datx cvmx_dtx_wpse_datx_t;
+
+/**
+ * cvmx_dtx_wpse_ena#
+ */
+union cvmx_dtx_wpse_enax {
+	u64 u64;
+	struct cvmx_dtx_wpse_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_wpse_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wpse_enax cvmx_dtx_wpse_enax_t;
+
+/**
+ * cvmx_dtx_wpse_sel#
+ */
+union cvmx_dtx_wpse_selx {
+	u64 u64;
+	struct cvmx_dtx_wpse_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_wpse_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wpse_selx cvmx_dtx_wpse_selx_t;
+
+/**
+ * cvmx_dtx_wrce_bcst_rsp
+ */
+union cvmx_dtx_wrce_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_wrce_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_wrce_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrce_bcst_rsp cvmx_dtx_wrce_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_wrce_ctl
+ */
+union cvmx_dtx_wrce_ctl {
+	u64 u64;
+	struct cvmx_dtx_wrce_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_wrce_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrce_ctl cvmx_dtx_wrce_ctl_t;
+
+/**
+ * cvmx_dtx_wrce_dat#
+ */
+union cvmx_dtx_wrce_datx {
+	u64 u64;
+	struct cvmx_dtx_wrce_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_wrce_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrce_datx cvmx_dtx_wrce_datx_t;
+
+/**
+ * cvmx_dtx_wrce_ena#
+ */
+union cvmx_dtx_wrce_enax {
+	u64 u64;
+	struct cvmx_dtx_wrce_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_wrce_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrce_enax cvmx_dtx_wrce_enax_t;
+
+/**
+ * cvmx_dtx_wrce_sel#
+ */
+union cvmx_dtx_wrce_selx {
+	u64 u64;
+	struct cvmx_dtx_wrce_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_wrce_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrce_selx cvmx_dtx_wrce_selx_t;
+
+/**
+ * cvmx_dtx_wrde_bcst_rsp
+ */
+union cvmx_dtx_wrde_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_wrde_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_wrde_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrde_bcst_rsp cvmx_dtx_wrde_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_wrde_ctl
+ */
+union cvmx_dtx_wrde_ctl {
+	u64 u64;
+	struct cvmx_dtx_wrde_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_wrde_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrde_ctl cvmx_dtx_wrde_ctl_t;
+
+/**
+ * cvmx_dtx_wrde_dat#
+ */
+union cvmx_dtx_wrde_datx {
+	u64 u64;
+	struct cvmx_dtx_wrde_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_wrde_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrde_datx cvmx_dtx_wrde_datx_t;
+
+/**
+ * cvmx_dtx_wrde_ena#
+ */
+union cvmx_dtx_wrde_enax {
+	u64 u64;
+	struct cvmx_dtx_wrde_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_wrde_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrde_enax cvmx_dtx_wrde_enax_t;
+
+/**
+ * cvmx_dtx_wrde_sel#
+ */
+union cvmx_dtx_wrde_selx {
+	u64 u64;
+	struct cvmx_dtx_wrde_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_wrde_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrde_selx cvmx_dtx_wrde_selx_t;
+
+/**
+ * cvmx_dtx_wrse_bcst_rsp
+ */
+union cvmx_dtx_wrse_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_wrse_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_wrse_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrse_bcst_rsp cvmx_dtx_wrse_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_wrse_ctl
+ */
+union cvmx_dtx_wrse_ctl {
+	u64 u64;
+	struct cvmx_dtx_wrse_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_wrse_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrse_ctl cvmx_dtx_wrse_ctl_t;
+
+/**
+ * cvmx_dtx_wrse_dat#
+ */
+union cvmx_dtx_wrse_datx {
+	u64 u64;
+	struct cvmx_dtx_wrse_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_wrse_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrse_datx cvmx_dtx_wrse_datx_t;
+
+/**
+ * cvmx_dtx_wrse_ena#
+ */
+union cvmx_dtx_wrse_enax {
+	u64 u64;
+	struct cvmx_dtx_wrse_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_wrse_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrse_enax cvmx_dtx_wrse_enax_t;
+
+/**
+ * cvmx_dtx_wrse_sel#
+ */
+union cvmx_dtx_wrse_selx {
+	u64 u64;
+	struct cvmx_dtx_wrse_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_wrse_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wrse_selx cvmx_dtx_wrse_selx_t;
+
+/**
+ * cvmx_dtx_wtxe_bcst_rsp
+ */
+union cvmx_dtx_wtxe_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_wtxe_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_wtxe_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wtxe_bcst_rsp cvmx_dtx_wtxe_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_wtxe_ctl
+ */
+union cvmx_dtx_wtxe_ctl {
+	u64 u64;
+	struct cvmx_dtx_wtxe_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_wtxe_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wtxe_ctl cvmx_dtx_wtxe_ctl_t;
+
+/**
+ * cvmx_dtx_wtxe_dat#
+ */
+union cvmx_dtx_wtxe_datx {
+	u64 u64;
+	struct cvmx_dtx_wtxe_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_wtxe_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wtxe_datx cvmx_dtx_wtxe_datx_t;
+
+/**
+ * cvmx_dtx_wtxe_ena#
+ */
+union cvmx_dtx_wtxe_enax {
+	u64 u64;
+	struct cvmx_dtx_wtxe_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_wtxe_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wtxe_enax cvmx_dtx_wtxe_enax_t;
+
+/**
+ * cvmx_dtx_wtxe_sel#
+ */
+union cvmx_dtx_wtxe_selx {
+	u64 u64;
+	struct cvmx_dtx_wtxe_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_wtxe_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_wtxe_selx cvmx_dtx_wtxe_selx_t;
+
+/**
+ * cvmx_dtx_xcv_bcst_rsp
+ */
+union cvmx_dtx_xcv_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_xcv_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_xcv_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_xcv_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xcv_bcst_rsp cvmx_dtx_xcv_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_xcv_ctl
+ */
+union cvmx_dtx_xcv_ctl {
+	u64 u64;
+	struct cvmx_dtx_xcv_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_xcv_ctl_s cn73xx;
+	struct cvmx_dtx_xcv_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xcv_ctl cvmx_dtx_xcv_ctl_t;
+
+/**
+ * cvmx_dtx_xcv_dat#
+ */
+union cvmx_dtx_xcv_datx {
+	u64 u64;
+	struct cvmx_dtx_xcv_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_xcv_datx_s cn73xx;
+	struct cvmx_dtx_xcv_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xcv_datx cvmx_dtx_xcv_datx_t;
+
+/**
+ * cvmx_dtx_xcv_ena#
+ */
+union cvmx_dtx_xcv_enax {
+	u64 u64;
+	struct cvmx_dtx_xcv_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_xcv_enax_s cn73xx;
+	struct cvmx_dtx_xcv_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xcv_enax cvmx_dtx_xcv_enax_t;
+
+/**
+ * cvmx_dtx_xcv_sel#
+ */
+union cvmx_dtx_xcv_selx {
+	u64 u64;
+	struct cvmx_dtx_xcv_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_xcv_selx_s cn73xx;
+	struct cvmx_dtx_xcv_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xcv_selx cvmx_dtx_xcv_selx_t;
+
+/**
+ * cvmx_dtx_xsx_bcst_rsp
+ */
+union cvmx_dtx_xsx_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_xsx_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_xsx_bcst_rsp_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xsx_bcst_rsp cvmx_dtx_xsx_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_xsx_ctl
+ */
+union cvmx_dtx_xsx_ctl {
+	u64 u64;
+	struct cvmx_dtx_xsx_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_xsx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xsx_ctl cvmx_dtx_xsx_ctl_t;
+
+/**
+ * cvmx_dtx_xsx_dat#
+ */
+union cvmx_dtx_xsx_datx {
+	u64 u64;
+	struct cvmx_dtx_xsx_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_xsx_datx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xsx_datx cvmx_dtx_xsx_datx_t;
+
+/**
+ * cvmx_dtx_xsx_ena#
+ */
+union cvmx_dtx_xsx_enax {
+	u64 u64;
+	struct cvmx_dtx_xsx_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_xsx_enax_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xsx_enax cvmx_dtx_xsx_enax_t;
+
+/**
+ * cvmx_dtx_xsx_sel#
+ */
+union cvmx_dtx_xsx_selx {
+	u64 u64;
+	struct cvmx_dtx_xsx_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_xsx_selx_s cnf75xx;
+};
+
+typedef union cvmx_dtx_xsx_selx cvmx_dtx_xsx_selx_t;
+
+/**
+ * cvmx_dtx_zip_bcst_rsp
+ */
+union cvmx_dtx_zip_bcst_rsp {
+	u64 u64;
+	struct cvmx_dtx_zip_bcst_rsp_s {
+		u64 reserved_1_63 : 63;
+		u64 ena : 1;
+	} s;
+	struct cvmx_dtx_zip_bcst_rsp_s cn73xx;
+	struct cvmx_dtx_zip_bcst_rsp_s cn78xx;
+	struct cvmx_dtx_zip_bcst_rsp_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_zip_bcst_rsp cvmx_dtx_zip_bcst_rsp_t;
+
+/**
+ * cvmx_dtx_zip_ctl
+ */
+union cvmx_dtx_zip_ctl {
+	u64 u64;
+	struct cvmx_dtx_zip_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 active : 1;
+		u64 reserved_2_3 : 2;
+		u64 echoen : 1;
+		u64 swap : 1;
+	} s;
+	struct cvmx_dtx_zip_ctl_s cn73xx;
+	struct cvmx_dtx_zip_ctl_s cn78xx;
+	struct cvmx_dtx_zip_ctl_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_zip_ctl cvmx_dtx_zip_ctl_t;
+
+/**
+ * cvmx_dtx_zip_dat#
+ */
+union cvmx_dtx_zip_datx {
+	u64 u64;
+	struct cvmx_dtx_zip_datx_s {
+		u64 reserved_36_63 : 28;
+		u64 raw : 36;
+	} s;
+	struct cvmx_dtx_zip_datx_s cn73xx;
+	struct cvmx_dtx_zip_datx_s cn78xx;
+	struct cvmx_dtx_zip_datx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_zip_datx cvmx_dtx_zip_datx_t;
+
+/**
+ * cvmx_dtx_zip_ena#
+ */
+union cvmx_dtx_zip_enax {
+	u64 u64;
+	struct cvmx_dtx_zip_enax_s {
+		u64 reserved_36_63 : 28;
+		u64 ena : 36;
+	} s;
+	struct cvmx_dtx_zip_enax_s cn73xx;
+	struct cvmx_dtx_zip_enax_s cn78xx;
+	struct cvmx_dtx_zip_enax_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_zip_enax cvmx_dtx_zip_enax_t;
+
+/**
+ * cvmx_dtx_zip_sel#
+ */
+union cvmx_dtx_zip_selx {
+	u64 u64;
+	struct cvmx_dtx_zip_selx_s {
+		u64 reserved_24_63 : 40;
+		u64 value : 24;
+	} s;
+	struct cvmx_dtx_zip_selx_s cn73xx;
+	struct cvmx_dtx_zip_selx_s cn78xx;
+	struct cvmx_dtx_zip_selx_s cn78xxp1;
+};
+
+typedef union cvmx_dtx_zip_selx cvmx_dtx_zip_selx_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 10/50] mips: octeon: Add cvmx-fpa-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (8 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 09/50] mips: octeon: Add cvmx-dtx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 11/50] mips: octeon: Add cvmx-gmxx-defs.h " Stefan Roese
                   ` (42 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-fpa-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---
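
For reference, a typical access pattern for these CSR layout unions looks
like this (an illustrative sketch only, assuming the csr_rd()/csr_wr()
accessors used elsewhere in the mach-octeon code):

	cvmx_fpa_ctl_status_t ctl;

	ctl.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
	ctl.s.enb = 1;		/* enable the FPA unit */
	csr_wr(CVMX_FPA_CTL_STATUS, ctl.u64);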

 .../mach-octeon/include/mach/cvmx-fpa-defs.h  | 1866 +++++++++++++++++
 1 file changed, 1866 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa-defs.h
new file mode 100644
index 0000000000..13ce7d8c96
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa-defs.h
@@ -0,0 +1,1866 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon fpa.
+ */
+
+#ifndef __CVMX_FPA_DEFS_H__
+#define __CVMX_FPA_DEFS_H__
+
+#define CVMX_FPA_ADDR_RANGE_ERROR CVMX_FPA_ADDR_RANGE_ERROR_FUNC()
+static inline u64 CVMX_FPA_ADDR_RANGE_ERROR_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000458ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001280000000458ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001280000000458ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001280000000458ull;
+	}
+	return 0x0001280000000458ull;
+}
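+
+/*
+ * Note (illustrative only, not part of the original SDK header): callers
+ * use the CVMX_FPA_ADDR_RANGE_ERROR macro directly; the inline function
+ * above resolves the per-family CSR address at runtime, e.g. (assuming
+ * the usual csr_rd() accessor):
+ *
+ *	cvmx_fpa_addr_range_error_t err;
+ *
+ *	err.u64 = csr_rd(CVMX_FPA_ADDR_RANGE_ERROR);
+ */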
+
+#define CVMX_FPA_AURAX_CFG(offset)	     (0x0001280020100000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_CNT(offset)	     (0x0001280020200000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_CNT_ADD(offset)	     (0x0001280020300000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_CNT_LEVELS(offset)    (0x0001280020800000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_CNT_LIMIT(offset)     (0x0001280020400000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_CNT_THRESHOLD(offset) (0x0001280020500000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_INT(offset)	     (0x0001280020600000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_POOL(offset)	     (0x0001280020000000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_AURAX_POOL_LEVELS(offset)   (0x0001280020700000ull + ((offset) & 1023) * 8)
+#define CVMX_FPA_BIST_STATUS		     CVMX_FPA_BIST_STATUS_FUNC()
+static inline u64 CVMX_FPA_BIST_STATUS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800280000E8ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00012800000000E8ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00012800000000E8ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00012800000000E8ull;
+	}
+	return 0x00012800000000E8ull;
+}
+
+#ifndef CVMX_FPA_CLK_COUNT // test-only (also in octeon_ddr.h)
+#define CVMX_FPA_CLK_COUNT (0x00012800000000F0ull)
+#endif
+#define CVMX_FPA_CTL_STATUS		 (0x0001180028000050ull)
+#define CVMX_FPA_ECC_CTL		 (0x0001280000000058ull)
+#define CVMX_FPA_ECC_INT		 (0x0001280000000068ull)
+#define CVMX_FPA_ERR_INT		 (0x0001280000000040ull)
+#define CVMX_FPA_FPF0_MARKS		 (0x0001180028000000ull)
+#define CVMX_FPA_FPF0_SIZE		 (0x0001180028000058ull)
+#define CVMX_FPA_FPF1_MARKS		 CVMX_FPA_FPFX_MARKS(1)
+#define CVMX_FPA_FPF2_MARKS		 CVMX_FPA_FPFX_MARKS(2)
+#define CVMX_FPA_FPF3_MARKS		 CVMX_FPA_FPFX_MARKS(3)
+#define CVMX_FPA_FPF4_MARKS		 CVMX_FPA_FPFX_MARKS(4)
+#define CVMX_FPA_FPF5_MARKS		 CVMX_FPA_FPFX_MARKS(5)
+#define CVMX_FPA_FPF6_MARKS		 CVMX_FPA_FPFX_MARKS(6)
+#define CVMX_FPA_FPF7_MARKS		 CVMX_FPA_FPFX_MARKS(7)
+#define CVMX_FPA_FPF8_MARKS		 (0x0001180028000240ull)
+#define CVMX_FPA_FPF8_SIZE		 (0x0001180028000248ull)
+#define CVMX_FPA_FPFX_MARKS(offset)	 (0x0001180028000008ull + ((offset) & 7) * 8 - 8 * 1)
+#define CVMX_FPA_FPFX_SIZE(offset)	 (0x0001180028000060ull + ((offset) & 7) * 8 - 8 * 1)
+#define CVMX_FPA_GEN_CFG		 (0x0001280000000050ull)
+#define CVMX_FPA_INT_ENB		 (0x0001180028000048ull)
+#define CVMX_FPA_INT_SUM		 (0x0001180028000040ull)
+#define CVMX_FPA_PACKET_THRESHOLD	 (0x0001180028000460ull)
+#define CVMX_FPA_POOLX_AVAILABLE(offset) (0x0001280010300000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_CFG(offset)	 (0x0001280010000000ull + ((offset) & 63) * 8)
+static inline u64 CVMX_FPA_POOLX_END_ADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000358ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000358ull + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001280010600000ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001280010600000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001280010600000ull + (offset) * 8;
+	}
+	return 0x0001280010600000ull + (offset) * 8;
+}
+
+#define CVMX_FPA_POOLX_FPF_MARKS(offset)  (0x0001280010100000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_INT(offset)	  (0x0001280010A00000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_OP_PC(offset)	  (0x0001280010F00000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_STACK_ADDR(offset) (0x0001280010900000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_STACK_BASE(offset) (0x0001280010700000ull + ((offset) & 63) * 8)
+#define CVMX_FPA_POOLX_STACK_END(offset)  (0x0001280010800000ull + ((offset) & 63) * 8)
+static inline u64 CVMX_FPA_POOLX_START_ADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000258ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000258ull + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001280010500000ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001280010500000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001280010500000ull + (offset) * 8;
+	}
+	return 0x0001280010500000ull + (offset) * 8;
+}
+
+static inline u64 CVMX_FPA_POOLX_THRESHOLD(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000140ull + (offset) * 8;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180028000140ull + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001280010400000ull + (offset) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001280010400000ull + (offset) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001280010400000ull + (offset) * 8;
+	}
+	return 0x0001280010400000ull + (offset) * 8;
+}
+
+#define CVMX_FPA_QUE0_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(0)
+#define CVMX_FPA_QUE1_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(1)
+#define CVMX_FPA_QUE2_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(2)
+#define CVMX_FPA_QUE3_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(3)
+#define CVMX_FPA_QUE4_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(4)
+#define CVMX_FPA_QUE5_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(5)
+#define CVMX_FPA_QUE6_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(6)
+#define CVMX_FPA_QUE7_PAGE_INDEX	 CVMX_FPA_QUEX_PAGE_INDEX(7)
+#define CVMX_FPA_QUE8_PAGE_INDEX	 (0x0001180028000250ull)
+#define CVMX_FPA_QUEX_AVAILABLE(offset)	 (0x0001180028000098ull + ((offset) & 15) * 8)
+#define CVMX_FPA_QUEX_PAGE_INDEX(offset) (0x00011800280000F0ull + ((offset) & 7) * 8)
+#define CVMX_FPA_QUE_ACT		 (0x0001180028000138ull)
+#define CVMX_FPA_QUE_EXP		 (0x0001180028000130ull)
+#define CVMX_FPA_RD_LATENCY_PC		 (0x0001280000000610ull)
+#define CVMX_FPA_RD_REQ_PC		 (0x0001280000000600ull)
+#define CVMX_FPA_RED_DELAY		 (0x0001280000000100ull)
+#define CVMX_FPA_SFT_RST		 (0x0001280000000000ull)
+#define CVMX_FPA_WART_CTL		 (0x00011800280000D8ull)
+#define CVMX_FPA_WART_STATUS		 (0x00011800280000E0ull)
+#define CVMX_FPA_WQE_THRESHOLD		 (0x0001180028000468ull)
+
+/**
+ * cvmx_fpa_addr_range_error
+ *
+ * When any FPA_POOL()_INT[RANGE] error occurs, this register is latched with additional
+ * error information.
+ */
+union cvmx_fpa_addr_range_error {
+	u64 u64;
+	struct cvmx_fpa_addr_range_error_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_fpa_addr_range_error_cn61xx {
+		u64 reserved_38_63 : 26;
+		u64 pool : 5;
+		u64 addr : 33;
+	} cn61xx;
+	struct cvmx_fpa_addr_range_error_cn61xx cn66xx;
+	struct cvmx_fpa_addr_range_error_cn61xx cn68xx;
+	struct cvmx_fpa_addr_range_error_cn61xx cn68xxp1;
+	struct cvmx_fpa_addr_range_error_cn61xx cn70xx;
+	struct cvmx_fpa_addr_range_error_cn61xx cn70xxp1;
+	struct cvmx_fpa_addr_range_error_cn73xx {
+		u64 reserved_54_63 : 10;
+		u64 pool : 6;
+		u64 reserved_42_47 : 6;
+		u64 addr : 42;
+	} cn73xx;
+	struct cvmx_fpa_addr_range_error_cn73xx cn78xx;
+	struct cvmx_fpa_addr_range_error_cn73xx cn78xxp1;
+	struct cvmx_fpa_addr_range_error_cn61xx cnf71xx;
+	struct cvmx_fpa_addr_range_error_cn73xx cnf75xx;
+};
+
+typedef union cvmx_fpa_addr_range_error cvmx_fpa_addr_range_error_t;
+
+/**
+ * cvmx_fpa_aura#_cfg
+ *
+ * This register configures aura backpressure, etc.
+ *
+ */
+union cvmx_fpa_aurax_cfg {
+	u64 u64;
+	struct cvmx_fpa_aurax_cfg_s {
+		u64 reserved_10_63 : 54;
+		u64 ptr_dis : 1;
+		u64 avg_con : 9;
+	} s;
+	struct cvmx_fpa_aurax_cfg_s cn73xx;
+	struct cvmx_fpa_aurax_cfg_s cn78xx;
+	struct cvmx_fpa_aurax_cfg_s cn78xxp1;
+	struct cvmx_fpa_aurax_cfg_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cfg cvmx_fpa_aurax_cfg_t;
+
+/**
+ * cvmx_fpa_aura#_cnt
+ */
+union cvmx_fpa_aurax_cnt {
+	u64 u64;
+	struct cvmx_fpa_aurax_cnt_s {
+		u64 reserved_40_63 : 24;
+		u64 cnt : 40;
+	} s;
+	struct cvmx_fpa_aurax_cnt_s cn73xx;
+	struct cvmx_fpa_aurax_cnt_s cn78xx;
+	struct cvmx_fpa_aurax_cnt_s cn78xxp1;
+	struct cvmx_fpa_aurax_cnt_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cnt cvmx_fpa_aurax_cnt_t;
+
+/**
+ * cvmx_fpa_aura#_cnt_add
+ */
+union cvmx_fpa_aurax_cnt_add {
+	u64 u64;
+	struct cvmx_fpa_aurax_cnt_add_s {
+		u64 reserved_40_63 : 24;
+		u64 cnt : 40;
+	} s;
+	struct cvmx_fpa_aurax_cnt_add_s cn73xx;
+	struct cvmx_fpa_aurax_cnt_add_s cn78xx;
+	struct cvmx_fpa_aurax_cnt_add_s cn78xxp1;
+	struct cvmx_fpa_aurax_cnt_add_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cnt_add cvmx_fpa_aurax_cnt_add_t;
+
+/**
+ * cvmx_fpa_aura#_cnt_levels
+ */
+union cvmx_fpa_aurax_cnt_levels {
+	u64 u64;
+	struct cvmx_fpa_aurax_cnt_levels_s {
+		u64 reserved_41_63 : 23;
+		u64 drop_dis : 1;
+		u64 bp_ena : 1;
+		u64 red_ena : 1;
+		u64 shift : 6;
+		u64 bp : 8;
+		u64 drop : 8;
+		u64 pass : 8;
+		u64 level : 8;
+	} s;
+	struct cvmx_fpa_aurax_cnt_levels_s cn73xx;
+	struct cvmx_fpa_aurax_cnt_levels_s cn78xx;
+	struct cvmx_fpa_aurax_cnt_levels_s cn78xxp1;
+	struct cvmx_fpa_aurax_cnt_levels_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cnt_levels cvmx_fpa_aurax_cnt_levels_t;
+
+/**
+ * cvmx_fpa_aura#_cnt_limit
+ */
+union cvmx_fpa_aurax_cnt_limit {
+	u64 u64;
+	struct cvmx_fpa_aurax_cnt_limit_s {
+		u64 reserved_40_63 : 24;
+		u64 limit : 40;
+	} s;
+	struct cvmx_fpa_aurax_cnt_limit_s cn73xx;
+	struct cvmx_fpa_aurax_cnt_limit_s cn78xx;
+	struct cvmx_fpa_aurax_cnt_limit_s cn78xxp1;
+	struct cvmx_fpa_aurax_cnt_limit_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cnt_limit cvmx_fpa_aurax_cnt_limit_t;
+
+/**
+ * cvmx_fpa_aura#_cnt_threshold
+ */
+union cvmx_fpa_aurax_cnt_threshold {
+	u64 u64;
+	struct cvmx_fpa_aurax_cnt_threshold_s {
+		u64 reserved_40_63 : 24;
+		u64 thresh : 40;
+	} s;
+	struct cvmx_fpa_aurax_cnt_threshold_s cn73xx;
+	struct cvmx_fpa_aurax_cnt_threshold_s cn78xx;
+	struct cvmx_fpa_aurax_cnt_threshold_s cn78xxp1;
+	struct cvmx_fpa_aurax_cnt_threshold_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_cnt_threshold cvmx_fpa_aurax_cnt_threshold_t;
+
+/**
+ * cvmx_fpa_aura#_int
+ */
+union cvmx_fpa_aurax_int {
+	u64 u64;
+	struct cvmx_fpa_aurax_int_s {
+		u64 reserved_1_63 : 63;
+		u64 thresh : 1;
+	} s;
+	struct cvmx_fpa_aurax_int_s cn73xx;
+	struct cvmx_fpa_aurax_int_s cn78xx;
+	struct cvmx_fpa_aurax_int_s cn78xxp1;
+	struct cvmx_fpa_aurax_int_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_int cvmx_fpa_aurax_int_t;
+
+/**
+ * cvmx_fpa_aura#_pool
+ *
+ * Provides the mapping from each aura to the pool number.
+ *
+ */
+union cvmx_fpa_aurax_pool {
+	u64 u64;
+	struct cvmx_fpa_aurax_pool_s {
+		u64 reserved_6_63 : 58;
+		u64 pool : 6;
+	} s;
+	struct cvmx_fpa_aurax_pool_s cn73xx;
+	struct cvmx_fpa_aurax_pool_s cn78xx;
+	struct cvmx_fpa_aurax_pool_s cn78xxp1;
+	struct cvmx_fpa_aurax_pool_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_pool cvmx_fpa_aurax_pool_t;
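+
+/*
+ * Usage sketch (illustrative only): reading the pool that backs a given
+ * aura, assuming the usual csr_rd() accessor:
+ *
+ *	cvmx_fpa_aurax_pool_t aura_pool;
+ *
+ *	aura_pool.u64 = csr_rd(CVMX_FPA_AURAX_POOL(aura));
+ *	pool = aura_pool.s.pool;
+ */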
+
+/**
+ * cvmx_fpa_aura#_pool_levels
+ */
+union cvmx_fpa_aurax_pool_levels {
+	u64 u64;
+	struct cvmx_fpa_aurax_pool_levels_s {
+		u64 reserved_41_63 : 23;
+		u64 drop_dis : 1;
+		u64 bp_ena : 1;
+		u64 red_ena : 1;
+		u64 shift : 6;
+		u64 bp : 8;
+		u64 drop : 8;
+		u64 pass : 8;
+		u64 level : 8;
+	} s;
+	struct cvmx_fpa_aurax_pool_levels_s cn73xx;
+	struct cvmx_fpa_aurax_pool_levels_s cn78xx;
+	struct cvmx_fpa_aurax_pool_levels_s cn78xxp1;
+	struct cvmx_fpa_aurax_pool_levels_s cnf75xx;
+};
+
+typedef union cvmx_fpa_aurax_pool_levels cvmx_fpa_aurax_pool_levels_t;
+
+/**
+ * cvmx_fpa_bist_status
+ *
+ * This register provides the result of the BIST run on the FPA memories.
+ *
+ */
+union cvmx_fpa_bist_status {
+	u64 u64;
+	struct cvmx_fpa_bist_status_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_fpa_bist_status_cn30xx {
+		u64 reserved_5_63 : 59;
+		u64 frd : 1;
+		u64 fpf0 : 1;
+		u64 fpf1 : 1;
+		u64 ffr : 1;
+		u64 fdr : 1;
+	} cn30xx;
+	struct cvmx_fpa_bist_status_cn30xx cn31xx;
+	struct cvmx_fpa_bist_status_cn30xx cn38xx;
+	struct cvmx_fpa_bist_status_cn30xx cn38xxp2;
+	struct cvmx_fpa_bist_status_cn30xx cn50xx;
+	struct cvmx_fpa_bist_status_cn30xx cn52xx;
+	struct cvmx_fpa_bist_status_cn30xx cn52xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cn56xx;
+	struct cvmx_fpa_bist_status_cn30xx cn56xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cn58xx;
+	struct cvmx_fpa_bist_status_cn30xx cn58xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cn61xx;
+	struct cvmx_fpa_bist_status_cn30xx cn63xx;
+	struct cvmx_fpa_bist_status_cn30xx cn63xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cn66xx;
+	struct cvmx_fpa_bist_status_cn30xx cn68xx;
+	struct cvmx_fpa_bist_status_cn30xx cn68xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cn70xx;
+	struct cvmx_fpa_bist_status_cn30xx cn70xxp1;
+	struct cvmx_fpa_bist_status_cn73xx {
+		u64 reserved_38_63 : 26;
+		u64 status : 38;
+	} cn73xx;
+	struct cvmx_fpa_bist_status_cn73xx cn78xx;
+	struct cvmx_fpa_bist_status_cn73xx cn78xxp1;
+	struct cvmx_fpa_bist_status_cn30xx cnf71xx;
+	struct cvmx_fpa_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_fpa_bist_status cvmx_fpa_bist_status_t;
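+
+/*
+ * Usage sketch (illustrative only): on CN73xx and newer parts a nonzero
+ * STATUS field is assumed to indicate a failed BIST run:
+ *
+ *	cvmx_fpa_bist_status_t bist;
+ *
+ *	bist.u64 = csr_rd(CVMX_FPA_BIST_STATUS);
+ *	if (bist.cn73xx.status)
+ *		printf("FPA BIST failure\n");
+ */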
+
+/**
+ * cvmx_fpa_clk_count
+ *
+ * This register counts the number of coprocessor-clock cycles since the deassertion of reset.
+ *
+ */
+union cvmx_fpa_clk_count {
+	u64 u64;
+	struct cvmx_fpa_clk_count_s {
+		u64 clk_cnt : 64;
+	} s;
+	struct cvmx_fpa_clk_count_s cn73xx;
+	struct cvmx_fpa_clk_count_s cn78xx;
+	struct cvmx_fpa_clk_count_s cn78xxp1;
+	struct cvmx_fpa_clk_count_s cnf75xx;
+};
+
+typedef union cvmx_fpa_clk_count cvmx_fpa_clk_count_t;
+
+/**
+ * cvmx_fpa_ctl_status
+ *
+ * The FPA's control and status register.
+ *
+ */
+union cvmx_fpa_ctl_status {
+	u64 u64;
+	struct cvmx_fpa_ctl_status_s {
+		u64 reserved_21_63 : 43;
+		u64 free_en : 1;
+		u64 ret_off : 1;
+		u64 req_off : 1;
+		u64 reset : 1;
+		u64 use_ldt : 1;
+		u64 use_stt : 1;
+		u64 enb : 1;
+		u64 mem1_err : 7;
+		u64 mem0_err : 7;
+	} s;
+	struct cvmx_fpa_ctl_status_cn30xx {
+		u64 reserved_18_63 : 46;
+		u64 reset : 1;
+		u64 use_ldt : 1;
+		u64 use_stt : 1;
+		u64 enb : 1;
+		u64 mem1_err : 7;
+		u64 mem0_err : 7;
+	} cn30xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn31xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn38xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn38xxp2;
+	struct cvmx_fpa_ctl_status_cn30xx cn50xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn52xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn52xxp1;
+	struct cvmx_fpa_ctl_status_cn30xx cn56xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn56xxp1;
+	struct cvmx_fpa_ctl_status_cn30xx cn58xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn58xxp1;
+	struct cvmx_fpa_ctl_status_s cn61xx;
+	struct cvmx_fpa_ctl_status_s cn63xx;
+	struct cvmx_fpa_ctl_status_cn30xx cn63xxp1;
+	struct cvmx_fpa_ctl_status_s cn66xx;
+	struct cvmx_fpa_ctl_status_s cn68xx;
+	struct cvmx_fpa_ctl_status_s cn68xxp1;
+	struct cvmx_fpa_ctl_status_s cn70xx;
+	struct cvmx_fpa_ctl_status_s cn70xxp1;
+	struct cvmx_fpa_ctl_status_s cnf71xx;
+};
+
+typedef union cvmx_fpa_ctl_status cvmx_fpa_ctl_status_t;
+
+/**
+ * cvmx_fpa_ecc_ctl
+ *
+ * This register allows inserting ECC errors for testing.
+ *
+ */
+union cvmx_fpa_ecc_ctl {
+	u64 u64;
+	struct cvmx_fpa_ecc_ctl_s {
+		u64 reserved_62_63 : 2;
+		u64 ram_flip1 : 20;
+		u64 reserved_41_41 : 1;
+		u64 ram_flip0 : 20;
+		u64 reserved_20_20 : 1;
+		u64 ram_cdis : 20;
+	} s;
+	struct cvmx_fpa_ecc_ctl_s cn73xx;
+	struct cvmx_fpa_ecc_ctl_s cn78xx;
+	struct cvmx_fpa_ecc_ctl_s cn78xxp1;
+	struct cvmx_fpa_ecc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_fpa_ecc_ctl cvmx_fpa_ecc_ctl_t;
+
+/**
+ * cvmx_fpa_ecc_int
+ *
+ * This register contains ECC error interrupt summary bits.
+ *
+ */
+union cvmx_fpa_ecc_int {
+	u64 u64;
+	struct cvmx_fpa_ecc_int_s {
+		u64 reserved_52_63 : 12;
+		u64 ram_dbe : 20;
+		u64 reserved_20_31 : 12;
+		u64 ram_sbe : 20;
+	} s;
+	struct cvmx_fpa_ecc_int_s cn73xx;
+	struct cvmx_fpa_ecc_int_s cn78xx;
+	struct cvmx_fpa_ecc_int_s cn78xxp1;
+	struct cvmx_fpa_ecc_int_s cnf75xx;
+};
+
+typedef union cvmx_fpa_ecc_int cvmx_fpa_ecc_int_t;
+
+/**
+ * cvmx_fpa_err_int
+ *
+ * This register contains the global (non-pool) error interrupt summary bits of the FPA.
+ *
+ */
+union cvmx_fpa_err_int {
+	u64 u64;
+	struct cvmx_fpa_err_int_s {
+		u64 reserved_4_63 : 60;
+		u64 hw_sub : 1;
+		u64 hw_add : 1;
+		u64 cnt_sub : 1;
+		u64 cnt_add : 1;
+	} s;
+	struct cvmx_fpa_err_int_s cn73xx;
+	struct cvmx_fpa_err_int_s cn78xx;
+	struct cvmx_fpa_err_int_s cn78xxp1;
+	struct cvmx_fpa_err_int_s cnf75xx;
+};
+
+typedef union cvmx_fpa_err_int cvmx_fpa_err_int_t;
+
+/**
+ * cvmx_fpa_fpf#_marks
+ *
+ * "The high and low watermark register that determines when we write and read free pages from
+ * L2C
+ * for Queue 1. The value of FPF_RD and FPF_WR should have at least a 33 difference. Recommend
+ * value
+ * is FPF_RD == (FPA_FPF#_SIZE[FPF_SIZ] * .25) and FPF_WR == (FPA_FPF#_SIZE[FPF_SIZ] * .75)"
+ */
+union cvmx_fpa_fpfx_marks {
+	u64 u64;
+	struct cvmx_fpa_fpfx_marks_s {
+		u64 reserved_22_63 : 42;
+		u64 fpf_wr : 11;
+		u64 fpf_rd : 11;
+	} s;
+	struct cvmx_fpa_fpfx_marks_s cn38xx;
+	struct cvmx_fpa_fpfx_marks_s cn38xxp2;
+	struct cvmx_fpa_fpfx_marks_s cn56xx;
+	struct cvmx_fpa_fpfx_marks_s cn56xxp1;
+	struct cvmx_fpa_fpfx_marks_s cn58xx;
+	struct cvmx_fpa_fpfx_marks_s cn58xxp1;
+	struct cvmx_fpa_fpfx_marks_s cn61xx;
+	struct cvmx_fpa_fpfx_marks_s cn63xx;
+	struct cvmx_fpa_fpfx_marks_s cn63xxp1;
+	struct cvmx_fpa_fpfx_marks_s cn66xx;
+	struct cvmx_fpa_fpfx_marks_s cn68xx;
+	struct cvmx_fpa_fpfx_marks_s cn68xxp1;
+	struct cvmx_fpa_fpfx_marks_s cn70xx;
+	struct cvmx_fpa_fpfx_marks_s cn70xxp1;
+	struct cvmx_fpa_fpfx_marks_s cnf71xx;
+};
+
+typedef union cvmx_fpa_fpfx_marks cvmx_fpa_fpfx_marks_t;
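+
+/*
+ * Usage sketch (illustrative only): programming the recommended watermarks
+ * from the description above, i.e. FPF_RD at 1/4 and FPF_WR at 3/4 of the
+ * FIFO size (queue is 1..7 here, fpf_siz is a placeholder for the
+ * corresponding FPA_FPF#_SIZE[FPF_SIZ] value):
+ *
+ *	cvmx_fpa_fpfx_marks_t marks;
+ *
+ *	marks.u64 = csr_rd(CVMX_FPA_FPFX_MARKS(queue));
+ *	marks.s.fpf_rd = fpf_siz / 4;
+ *	marks.s.fpf_wr = (fpf_siz * 3) / 4;
+ *	csr_wr(CVMX_FPA_FPFX_MARKS(queue), marks.u64);
+ */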
+
+/**
+ * cvmx_fpa_fpf#_size
+ *
+ * "FPA_FPFX_SIZE = FPA's Queue 1-7 Free Page FIFO Size
+ * The number of page pointers that will be kept local to the FPA for this Queue. FPA Queues are
+ * assigned in order from Queue 0 to Queue 7, though only Queue 0 through Queue x can be used.
+ * The sum of the 8 (0-7) FPA_FPF#_SIZE registers must be limited to 2048."
+ */
+union cvmx_fpa_fpfx_size {
+	u64 u64;
+	struct cvmx_fpa_fpfx_size_s {
+		u64 reserved_11_63 : 53;
+		u64 fpf_siz : 11;
+	} s;
+	struct cvmx_fpa_fpfx_size_s cn38xx;
+	struct cvmx_fpa_fpfx_size_s cn38xxp2;
+	struct cvmx_fpa_fpfx_size_s cn56xx;
+	struct cvmx_fpa_fpfx_size_s cn56xxp1;
+	struct cvmx_fpa_fpfx_size_s cn58xx;
+	struct cvmx_fpa_fpfx_size_s cn58xxp1;
+	struct cvmx_fpa_fpfx_size_s cn61xx;
+	struct cvmx_fpa_fpfx_size_s cn63xx;
+	struct cvmx_fpa_fpfx_size_s cn63xxp1;
+	struct cvmx_fpa_fpfx_size_s cn66xx;
+	struct cvmx_fpa_fpfx_size_s cn68xx;
+	struct cvmx_fpa_fpfx_size_s cn68xxp1;
+	struct cvmx_fpa_fpfx_size_s cn70xx;
+	struct cvmx_fpa_fpfx_size_s cn70xxp1;
+	struct cvmx_fpa_fpfx_size_s cnf71xx;
+};
+
+typedef union cvmx_fpa_fpfx_size cvmx_fpa_fpfx_size_t;
+
+/**
+ * cvmx_fpa_fpf0_marks
+ *
+ * "The high and low watermark register that determines when we write and read free pages from
+ * L2C
+ * for Queue 0. The value of FPF_RD and FPF_WR should have at least a 33 difference. Recommend
+ * value
+ * is FPF_RD == (FPA_FPF#_SIZE[FPF_SIZ] * .25) and FPF_WR == (FPA_FPF#_SIZE[FPF_SIZ] * .75)"
+ */
+union cvmx_fpa_fpf0_marks {
+	u64 u64;
+	struct cvmx_fpa_fpf0_marks_s {
+		u64 reserved_24_63 : 40;
+		u64 fpf_wr : 12;
+		u64 fpf_rd : 12;
+	} s;
+	struct cvmx_fpa_fpf0_marks_s cn38xx;
+	struct cvmx_fpa_fpf0_marks_s cn38xxp2;
+	struct cvmx_fpa_fpf0_marks_s cn56xx;
+	struct cvmx_fpa_fpf0_marks_s cn56xxp1;
+	struct cvmx_fpa_fpf0_marks_s cn58xx;
+	struct cvmx_fpa_fpf0_marks_s cn58xxp1;
+	struct cvmx_fpa_fpf0_marks_s cn61xx;
+	struct cvmx_fpa_fpf0_marks_s cn63xx;
+	struct cvmx_fpa_fpf0_marks_s cn63xxp1;
+	struct cvmx_fpa_fpf0_marks_s cn66xx;
+	struct cvmx_fpa_fpf0_marks_s cn68xx;
+	struct cvmx_fpa_fpf0_marks_s cn68xxp1;
+	struct cvmx_fpa_fpf0_marks_s cn70xx;
+	struct cvmx_fpa_fpf0_marks_s cn70xxp1;
+	struct cvmx_fpa_fpf0_marks_s cnf71xx;
+};
+
+typedef union cvmx_fpa_fpf0_marks cvmx_fpa_fpf0_marks_t;
+
+/**
+ * cvmx_fpa_fpf0_size
+ *
+ * "The number of page pointers that will be kept local to the FPA for this Queue. FPA Queues are
+ * assigned in order from Queue 0 to Queue 7, though only Queue 0 through Queue x can be used.
+ * The sum of the 8 (0-7) FPA_FPF#_SIZE registers must be limited to 2048."
+ */
+union cvmx_fpa_fpf0_size {
+	u64 u64;
+	struct cvmx_fpa_fpf0_size_s {
+		u64 reserved_12_63 : 52;
+		u64 fpf_siz : 12;
+	} s;
+	struct cvmx_fpa_fpf0_size_s cn38xx;
+	struct cvmx_fpa_fpf0_size_s cn38xxp2;
+	struct cvmx_fpa_fpf0_size_s cn56xx;
+	struct cvmx_fpa_fpf0_size_s cn56xxp1;
+	struct cvmx_fpa_fpf0_size_s cn58xx;
+	struct cvmx_fpa_fpf0_size_s cn58xxp1;
+	struct cvmx_fpa_fpf0_size_s cn61xx;
+	struct cvmx_fpa_fpf0_size_s cn63xx;
+	struct cvmx_fpa_fpf0_size_s cn63xxp1;
+	struct cvmx_fpa_fpf0_size_s cn66xx;
+	struct cvmx_fpa_fpf0_size_s cn68xx;
+	struct cvmx_fpa_fpf0_size_s cn68xxp1;
+	struct cvmx_fpa_fpf0_size_s cn70xx;
+	struct cvmx_fpa_fpf0_size_s cn70xxp1;
+	struct cvmx_fpa_fpf0_size_s cnf71xx;
+};
+
+typedef union cvmx_fpa_fpf0_size cvmx_fpa_fpf0_size_t;
+
+/**
+ * cvmx_fpa_fpf8_marks
+ *
+ * Reserved through 0x238 for additional thresholds
+ *
+ *                  FPA_FPF8_MARKS = FPA's Queue 8 Free Page FIFO Read Write Marks
+ *
+ * The high and low watermark register that determines when we write and read free pages from L2C
+ * for Queue 8. The values of FPF_RD and FPF_WR should differ by at least 33. The recommended
+ * values are FPF_RD == (FPA_FPF#_SIZE[FPF_SIZ] * .25) and FPF_WR == (FPA_FPF#_SIZE[FPF_SIZ] * .75)
+ */
+union cvmx_fpa_fpf8_marks {
+	u64 u64;
+	struct cvmx_fpa_fpf8_marks_s {
+		u64 reserved_22_63 : 42;
+		u64 fpf_wr : 11;
+		u64 fpf_rd : 11;
+	} s;
+	struct cvmx_fpa_fpf8_marks_s cn68xx;
+	struct cvmx_fpa_fpf8_marks_s cn68xxp1;
+};
+
+typedef union cvmx_fpa_fpf8_marks cvmx_fpa_fpf8_marks_t;
+
+/**
+ * cvmx_fpa_fpf8_size
+ *
+ * FPA_FPF8_SIZE = FPA's Queue 8 Free Page FIFO Size
+ *
+ * The number of page pointers that will be kept local to the FPA for this Queue. FPA Queues are
+ * assigned in order from Queue 0 to Queue 7, though only Queue 0 through Queue x can be used.
+ * The sum of the 9 (0-8) FPA_FPF#_SIZE registers must be limited to 2048.
+ */
+union cvmx_fpa_fpf8_size {
+	u64 u64;
+	struct cvmx_fpa_fpf8_size_s {
+		u64 reserved_12_63 : 52;
+		u64 fpf_siz : 12;
+	} s;
+	struct cvmx_fpa_fpf8_size_s cn68xx;
+	struct cvmx_fpa_fpf8_size_s cn68xxp1;
+};
+
+typedef union cvmx_fpa_fpf8_size cvmx_fpa_fpf8_size_t;
+
+/**
+ * cvmx_fpa_gen_cfg
+ *
+ * This register provides FPA control and status information.
+ *
+ */
+union cvmx_fpa_gen_cfg {
+	u64 u64;
+	struct cvmx_fpa_gen_cfg_s {
+		u64 reserved_12_63 : 52;
+		u64 halfrate : 1;
+		u64 ocla_bp : 1;
+		u64 lvl_dly : 6;
+		u64 pools : 2;
+		u64 avg_en : 1;
+		u64 clk_override : 1;
+	} s;
+	struct cvmx_fpa_gen_cfg_s cn73xx;
+	struct cvmx_fpa_gen_cfg_s cn78xx;
+	struct cvmx_fpa_gen_cfg_s cn78xxp1;
+	struct cvmx_fpa_gen_cfg_s cnf75xx;
+};
+
+typedef union cvmx_fpa_gen_cfg cvmx_fpa_gen_cfg_t;
+
+/**
+ * cvmx_fpa_int_enb
+ *
+ * The FPA's interrupt enable register.
+ *
+ */
+union cvmx_fpa_int_enb {
+	u64 u64;
+	struct cvmx_fpa_int_enb_s {
+		u64 reserved_50_63 : 14;
+		u64 paddr_e : 1;
+		u64 reserved_44_48 : 5;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} s;
+	struct cvmx_fpa_int_enb_cn30xx {
+		u64 reserved_28_63 : 36;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn30xx;
+	struct cvmx_fpa_int_enb_cn30xx cn31xx;
+	struct cvmx_fpa_int_enb_cn30xx cn38xx;
+	struct cvmx_fpa_int_enb_cn30xx cn38xxp2;
+	struct cvmx_fpa_int_enb_cn30xx cn50xx;
+	struct cvmx_fpa_int_enb_cn30xx cn52xx;
+	struct cvmx_fpa_int_enb_cn30xx cn52xxp1;
+	struct cvmx_fpa_int_enb_cn30xx cn56xx;
+	struct cvmx_fpa_int_enb_cn30xx cn56xxp1;
+	struct cvmx_fpa_int_enb_cn30xx cn58xx;
+	struct cvmx_fpa_int_enb_cn30xx cn58xxp1;
+	struct cvmx_fpa_int_enb_cn61xx {
+		u64 reserved_50_63 : 14;
+		u64 paddr_e : 1;
+		u64 res_44 : 5;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn61xx;
+	struct cvmx_fpa_int_enb_cn63xx {
+		u64 reserved_44_63 : 20;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn63xx;
+	struct cvmx_fpa_int_enb_cn30xx cn63xxp1;
+	struct cvmx_fpa_int_enb_cn61xx cn66xx;
+	struct cvmx_fpa_int_enb_cn68xx {
+		u64 reserved_50_63 : 14;
+		u64 paddr_e : 1;
+		u64 pool8th : 1;
+		u64 q8_perr : 1;
+		u64 q8_coff : 1;
+		u64 q8_und : 1;
+		u64 free8 : 1;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn68xx;
+	struct cvmx_fpa_int_enb_cn68xx cn68xxp1;
+	struct cvmx_fpa_int_enb_cn61xx cn70xx;
+	struct cvmx_fpa_int_enb_cn61xx cn70xxp1;
+	struct cvmx_fpa_int_enb_cn61xx cnf71xx;
+};
+
+typedef union cvmx_fpa_int_enb cvmx_fpa_int_enb_t;
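+
+/*
+ * Usage sketch (illustrative only): enabling the pool-0 threshold
+ * interrupt on the pre-CN73xx parts that implement this register:
+ *
+ *	cvmx_fpa_int_enb_t enb;
+ *
+ *	enb.u64 = csr_rd(CVMX_FPA_INT_ENB);
+ *	enb.s.pool0th = 1;
+ *	csr_wr(CVMX_FPA_INT_ENB, enb.u64);
+ */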
+
+/**
+ * cvmx_fpa_int_sum
+ *
+ * Contains the different interrupt summary bits of the FPA.
+ *
+ */
+union cvmx_fpa_int_sum {
+	u64 u64;
+	struct cvmx_fpa_int_sum_s {
+		u64 reserved_50_63 : 14;
+		u64 paddr_e : 1;
+		u64 pool8th : 1;
+		u64 q8_perr : 1;
+		u64 q8_coff : 1;
+		u64 q8_und : 1;
+		u64 free8 : 1;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} s;
+	struct cvmx_fpa_int_sum_cn30xx {
+		u64 reserved_28_63 : 36;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn30xx;
+	struct cvmx_fpa_int_sum_cn30xx cn31xx;
+	struct cvmx_fpa_int_sum_cn30xx cn38xx;
+	struct cvmx_fpa_int_sum_cn30xx cn38xxp2;
+	struct cvmx_fpa_int_sum_cn30xx cn50xx;
+	struct cvmx_fpa_int_sum_cn30xx cn52xx;
+	struct cvmx_fpa_int_sum_cn30xx cn52xxp1;
+	struct cvmx_fpa_int_sum_cn30xx cn56xx;
+	struct cvmx_fpa_int_sum_cn30xx cn56xxp1;
+	struct cvmx_fpa_int_sum_cn30xx cn58xx;
+	struct cvmx_fpa_int_sum_cn30xx cn58xxp1;
+	struct cvmx_fpa_int_sum_cn61xx {
+		u64 reserved_50_63 : 14;
+		u64 paddr_e : 1;
+		u64 reserved_44_48 : 5;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn61xx;
+	struct cvmx_fpa_int_sum_cn63xx {
+		u64 reserved_44_63 : 20;
+		u64 free7 : 1;
+		u64 free6 : 1;
+		u64 free5 : 1;
+		u64 free4 : 1;
+		u64 free3 : 1;
+		u64 free2 : 1;
+		u64 free1 : 1;
+		u64 free0 : 1;
+		u64 pool7th : 1;
+		u64 pool6th : 1;
+		u64 pool5th : 1;
+		u64 pool4th : 1;
+		u64 pool3th : 1;
+		u64 pool2th : 1;
+		u64 pool1th : 1;
+		u64 pool0th : 1;
+		u64 q7_perr : 1;
+		u64 q7_coff : 1;
+		u64 q7_und : 1;
+		u64 q6_perr : 1;
+		u64 q6_coff : 1;
+		u64 q6_und : 1;
+		u64 q5_perr : 1;
+		u64 q5_coff : 1;
+		u64 q5_und : 1;
+		u64 q4_perr : 1;
+		u64 q4_coff : 1;
+		u64 q4_und : 1;
+		u64 q3_perr : 1;
+		u64 q3_coff : 1;
+		u64 q3_und : 1;
+		u64 q2_perr : 1;
+		u64 q2_coff : 1;
+		u64 q2_und : 1;
+		u64 q1_perr : 1;
+		u64 q1_coff : 1;
+		u64 q1_und : 1;
+		u64 q0_perr : 1;
+		u64 q0_coff : 1;
+		u64 q0_und : 1;
+		u64 fed1_dbe : 1;
+		u64 fed1_sbe : 1;
+		u64 fed0_dbe : 1;
+		u64 fed0_sbe : 1;
+	} cn63xx;
+	struct cvmx_fpa_int_sum_cn30xx cn63xxp1;
+	struct cvmx_fpa_int_sum_cn61xx cn66xx;
+	struct cvmx_fpa_int_sum_s cn68xx;
+	struct cvmx_fpa_int_sum_s cn68xxp1;
+	struct cvmx_fpa_int_sum_cn61xx cn70xx;
+	struct cvmx_fpa_int_sum_cn61xx cn70xxp1;
+	struct cvmx_fpa_int_sum_cn61xx cnf71xx;
+};
+
+typedef union cvmx_fpa_int_sum cvmx_fpa_int_sum_t;
+
+/**
+ * cvmx_fpa_packet_threshold
+ *
+ * When the value of FPA_QUE0_AVAILABLE[QUE_SIZ] is less than the value of this register, a low
+ * pool count signal is sent to the PCIe packet instruction engine (to make it stop reading
+ * instructions) and to the packet arbiter, informing it not to give grants to packet MACs,
+ * with the exception of the PCIe MAC.
+ */
+union cvmx_fpa_packet_threshold {
+	u64 u64;
+	struct cvmx_fpa_packet_threshold_s {
+		u64 reserved_32_63 : 32;
+		u64 thresh : 32;
+	} s;
+	struct cvmx_fpa_packet_threshold_s cn61xx;
+	struct cvmx_fpa_packet_threshold_s cn63xx;
+	struct cvmx_fpa_packet_threshold_s cn66xx;
+	struct cvmx_fpa_packet_threshold_s cn68xx;
+	struct cvmx_fpa_packet_threshold_s cn68xxp1;
+	struct cvmx_fpa_packet_threshold_s cn70xx;
+	struct cvmx_fpa_packet_threshold_s cn70xxp1;
+	struct cvmx_fpa_packet_threshold_s cnf71xx;
+};
+
+typedef union cvmx_fpa_packet_threshold cvmx_fpa_packet_threshold_t;
+
+/**
+ * cvmx_fpa_pool#_available
+ */
+union cvmx_fpa_poolx_available {
+	u64 u64;
+	struct cvmx_fpa_poolx_available_s {
+		u64 reserved_36_63 : 28;
+		u64 count : 36;
+	} s;
+	struct cvmx_fpa_poolx_available_s cn73xx;
+	struct cvmx_fpa_poolx_available_s cn78xx;
+	struct cvmx_fpa_poolx_available_s cn78xxp1;
+	struct cvmx_fpa_poolx_available_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_available cvmx_fpa_poolx_available_t;
+
+/**
+ * cvmx_fpa_pool#_cfg
+ */
+union cvmx_fpa_poolx_cfg {
+	u64 u64;
+	struct cvmx_fpa_poolx_cfg_s {
+		u64 reserved_43_63 : 21;
+		u64 buf_size : 11;
+		u64 reserved_31_31 : 1;
+		u64 buf_offset : 15;
+		u64 reserved_5_15 : 11;
+		u64 l_type : 2;
+		u64 s_type : 1;
+		u64 nat_align : 1;
+		u64 ena : 1;
+	} s;
+	struct cvmx_fpa_poolx_cfg_s cn73xx;
+	struct cvmx_fpa_poolx_cfg_s cn78xx;
+	struct cvmx_fpa_poolx_cfg_s cn78xxp1;
+	struct cvmx_fpa_poolx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_cfg cvmx_fpa_poolx_cfg_t;
+
+/**
+ * cvmx_fpa_pool#_end_addr
+ *
+ * Pointers sent to this pool after alignment must be equal to or less than this address.
+ *
+ */
+union cvmx_fpa_poolx_end_addr {
+	u64 u64;
+	struct cvmx_fpa_poolx_end_addr_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_fpa_poolx_end_addr_cn61xx {
+		u64 reserved_33_63 : 31;
+		u64 addr : 33;
+	} cn61xx;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cn66xx;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cn68xx;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cn68xxp1;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cn70xx;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cn70xxp1;
+	struct cvmx_fpa_poolx_end_addr_cn73xx {
+		u64 reserved_42_63 : 22;
+		u64 addr : 35;
+		u64 reserved_0_6 : 7;
+	} cn73xx;
+	struct cvmx_fpa_poolx_end_addr_cn73xx cn78xx;
+	struct cvmx_fpa_poolx_end_addr_cn73xx cn78xxp1;
+	struct cvmx_fpa_poolx_end_addr_cn61xx cnf71xx;
+	struct cvmx_fpa_poolx_end_addr_cn73xx cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_end_addr cvmx_fpa_poolx_end_addr_t;
+
+/**
+ * cvmx_fpa_pool#_fpf_marks
+ *
+ * The low watermark register that determines when we read free pages from L2C.
+ *
+ */
+union cvmx_fpa_poolx_fpf_marks {
+	u64 u64;
+	struct cvmx_fpa_poolx_fpf_marks_s {
+		u64 reserved_27_63 : 37;
+		u64 fpf_rd : 11;
+		u64 reserved_11_15 : 5;
+		u64 fpf_level : 11;
+	} s;
+	struct cvmx_fpa_poolx_fpf_marks_s cn73xx;
+	struct cvmx_fpa_poolx_fpf_marks_s cn78xx;
+	struct cvmx_fpa_poolx_fpf_marks_s cn78xxp1;
+	struct cvmx_fpa_poolx_fpf_marks_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_fpf_marks cvmx_fpa_poolx_fpf_marks_t;
+
+/**
+ * cvmx_fpa_pool#_int
+ *
+ * This register indicates pool interrupts.
+ *
+ */
+union cvmx_fpa_poolx_int {
+	u64 u64;
+	struct cvmx_fpa_poolx_int_s {
+		u64 reserved_4_63 : 60;
+		u64 thresh : 1;
+		u64 range : 1;
+		u64 crcerr : 1;
+		u64 ovfls : 1;
+	} s;
+	struct cvmx_fpa_poolx_int_s cn73xx;
+	struct cvmx_fpa_poolx_int_s cn78xx;
+	struct cvmx_fpa_poolx_int_s cn78xxp1;
+	struct cvmx_fpa_poolx_int_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_int cvmx_fpa_poolx_int_t;
+
+/**
+ * cvmx_fpa_pool#_op_pc
+ */
+union cvmx_fpa_poolx_op_pc {
+	u64 u64;
+	struct cvmx_fpa_poolx_op_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_fpa_poolx_op_pc_s cn73xx;
+	struct cvmx_fpa_poolx_op_pc_s cn78xx;
+	struct cvmx_fpa_poolx_op_pc_s cn78xxp1;
+	struct cvmx_fpa_poolx_op_pc_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_op_pc cvmx_fpa_poolx_op_pc_t;
+
+/**
+ * cvmx_fpa_pool#_stack_addr
+ */
+union cvmx_fpa_poolx_stack_addr {
+	u64 u64;
+	struct cvmx_fpa_poolx_stack_addr_s {
+		u64 reserved_42_63 : 22;
+		u64 addr : 35;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_fpa_poolx_stack_addr_s cn73xx;
+	struct cvmx_fpa_poolx_stack_addr_s cn78xx;
+	struct cvmx_fpa_poolx_stack_addr_s cn78xxp1;
+	struct cvmx_fpa_poolx_stack_addr_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_stack_addr cvmx_fpa_poolx_stack_addr_t;
+
+/**
+ * cvmx_fpa_pool#_stack_base
+ */
+union cvmx_fpa_poolx_stack_base {
+	u64 u64;
+	struct cvmx_fpa_poolx_stack_base_s {
+		u64 reserved_42_63 : 22;
+		u64 addr : 35;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_fpa_poolx_stack_base_s cn73xx;
+	struct cvmx_fpa_poolx_stack_base_s cn78xx;
+	struct cvmx_fpa_poolx_stack_base_s cn78xxp1;
+	struct cvmx_fpa_poolx_stack_base_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_stack_base cvmx_fpa_poolx_stack_base_t;
+
+/**
+ * cvmx_fpa_pool#_stack_end
+ */
+union cvmx_fpa_poolx_stack_end {
+	u64 u64;
+	struct cvmx_fpa_poolx_stack_end_s {
+		u64 reserved_42_63 : 22;
+		u64 addr : 35;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_fpa_poolx_stack_end_s cn73xx;
+	struct cvmx_fpa_poolx_stack_end_s cn78xx;
+	struct cvmx_fpa_poolx_stack_end_s cn78xxp1;
+	struct cvmx_fpa_poolx_stack_end_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_stack_end cvmx_fpa_poolx_stack_end_t;
+
+/**
+ * cvmx_fpa_pool#_start_addr
+ *
+ * Pointers sent to this pool after alignment must be equal to or greater than this address.
+ *
+ */
+union cvmx_fpa_poolx_start_addr {
+	u64 u64;
+	struct cvmx_fpa_poolx_start_addr_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_fpa_poolx_start_addr_cn61xx {
+		u64 reserved_33_63 : 31;
+		u64 addr : 33;
+	} cn61xx;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cn66xx;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cn68xx;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cn68xxp1;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cn70xx;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cn70xxp1;
+	struct cvmx_fpa_poolx_start_addr_cn73xx {
+		u64 reserved_42_63 : 22;
+		u64 addr : 35;
+		u64 reserved_0_6 : 7;
+	} cn73xx;
+	struct cvmx_fpa_poolx_start_addr_cn73xx cn78xx;
+	struct cvmx_fpa_poolx_start_addr_cn73xx cn78xxp1;
+	struct cvmx_fpa_poolx_start_addr_cn61xx cnf71xx;
+	struct cvmx_fpa_poolx_start_addr_cn73xx cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_start_addr cvmx_fpa_poolx_start_addr_t;
+
+/**
+ * cvmx_fpa_pool#_threshold
+ *
+ * FPA_POOLX_THRESHOLD = FPA's Pool 0-7 Threshold
+ * When the value of FPA_QUEX_AVAILABLE equals FPA_POOLX_THRESHOLD[THRESH] while a pointer is
+ * allocated or deallocated, the FPA_INT_SUM[POOLXTH] interrupt is set.
+ */
+union cvmx_fpa_poolx_threshold {
+	u64 u64;
+	struct cvmx_fpa_poolx_threshold_s {
+		u64 reserved_36_63 : 28;
+		u64 thresh : 36;
+	} s;
+	struct cvmx_fpa_poolx_threshold_cn61xx {
+		u64 reserved_29_63 : 35;
+		u64 thresh : 29;
+	} cn61xx;
+	struct cvmx_fpa_poolx_threshold_cn61xx cn63xx;
+	struct cvmx_fpa_poolx_threshold_cn61xx cn66xx;
+	struct cvmx_fpa_poolx_threshold_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 thresh : 32;
+	} cn68xx;
+	struct cvmx_fpa_poolx_threshold_cn68xx cn68xxp1;
+	struct cvmx_fpa_poolx_threshold_cn61xx cn70xx;
+	struct cvmx_fpa_poolx_threshold_cn61xx cn70xxp1;
+	struct cvmx_fpa_poolx_threshold_s cn73xx;
+	struct cvmx_fpa_poolx_threshold_s cn78xx;
+	struct cvmx_fpa_poolx_threshold_s cn78xxp1;
+	struct cvmx_fpa_poolx_threshold_cn61xx cnf71xx;
+	struct cvmx_fpa_poolx_threshold_s cnf75xx;
+};
+
+typedef union cvmx_fpa_poolx_threshold cvmx_fpa_poolx_threshold_t;
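+
+/*
+ * Usage sketch (illustrative only): arming the pool low-watermark
+ * interrupt per the description above (min_free_pages is a placeholder
+ * for the desired threshold):
+ *
+ *	cvmx_fpa_poolx_threshold_t thr;
+ *
+ *	thr.u64 = 0;
+ *	thr.s.thresh = min_free_pages;
+ *	csr_wr(CVMX_FPA_POOLX_THRESHOLD(pool), thr.u64);
+ */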
+
+/**
+ * cvmx_fpa_que#_available
+ *
+ * FPA_QUEX_PAGES_AVAILABLE = FPA's Queue 0-7 Free Page Available Register
+ * The number of page pointers that are available in the FPA and local DRAM.
+ */
+union cvmx_fpa_quex_available {
+	u64 u64;
+	struct cvmx_fpa_quex_available_s {
+		u64 reserved_32_63 : 32;
+		u64 que_siz : 32;
+	} s;
+	struct cvmx_fpa_quex_available_cn30xx {
+		u64 reserved_29_63 : 35;
+		u64 que_siz : 29;
+	} cn30xx;
+	struct cvmx_fpa_quex_available_cn30xx cn31xx;
+	struct cvmx_fpa_quex_available_cn30xx cn38xx;
+	struct cvmx_fpa_quex_available_cn30xx cn38xxp2;
+	struct cvmx_fpa_quex_available_cn30xx cn50xx;
+	struct cvmx_fpa_quex_available_cn30xx cn52xx;
+	struct cvmx_fpa_quex_available_cn30xx cn52xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cn56xx;
+	struct cvmx_fpa_quex_available_cn30xx cn56xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cn58xx;
+	struct cvmx_fpa_quex_available_cn30xx cn58xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cn61xx;
+	struct cvmx_fpa_quex_available_cn30xx cn63xx;
+	struct cvmx_fpa_quex_available_cn30xx cn63xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cn66xx;
+	struct cvmx_fpa_quex_available_s cn68xx;
+	struct cvmx_fpa_quex_available_s cn68xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cn70xx;
+	struct cvmx_fpa_quex_available_cn30xx cn70xxp1;
+	struct cvmx_fpa_quex_available_cn30xx cnf71xx;
+};
+
+typedef union cvmx_fpa_quex_available cvmx_fpa_quex_available_t;
+
+/**
+ * cvmx_fpa_que#_page_index
+ *
+ * The present page index for this queue of the FPA.
+ * This number reflects the number of pages of pointers that have been written to memory
+ * for this queue.
+ */
+union cvmx_fpa_quex_page_index {
+	u64 u64;
+	struct cvmx_fpa_quex_page_index_s {
+		u64 reserved_25_63 : 39;
+		u64 pg_num : 25;
+	} s;
+	struct cvmx_fpa_quex_page_index_s cn30xx;
+	struct cvmx_fpa_quex_page_index_s cn31xx;
+	struct cvmx_fpa_quex_page_index_s cn38xx;
+	struct cvmx_fpa_quex_page_index_s cn38xxp2;
+	struct cvmx_fpa_quex_page_index_s cn50xx;
+	struct cvmx_fpa_quex_page_index_s cn52xx;
+	struct cvmx_fpa_quex_page_index_s cn52xxp1;
+	struct cvmx_fpa_quex_page_index_s cn56xx;
+	struct cvmx_fpa_quex_page_index_s cn56xxp1;
+	struct cvmx_fpa_quex_page_index_s cn58xx;
+	struct cvmx_fpa_quex_page_index_s cn58xxp1;
+	struct cvmx_fpa_quex_page_index_s cn61xx;
+	struct cvmx_fpa_quex_page_index_s cn63xx;
+	struct cvmx_fpa_quex_page_index_s cn63xxp1;
+	struct cvmx_fpa_quex_page_index_s cn66xx;
+	struct cvmx_fpa_quex_page_index_s cn68xx;
+	struct cvmx_fpa_quex_page_index_s cn68xxp1;
+	struct cvmx_fpa_quex_page_index_s cn70xx;
+	struct cvmx_fpa_quex_page_index_s cn70xxp1;
+	struct cvmx_fpa_quex_page_index_s cnf71xx;
+};
+
+typedef union cvmx_fpa_quex_page_index cvmx_fpa_quex_page_index_t;
+
+/**
+ * cvmx_fpa_que8_page_index
+ *
+ * FPA_QUE8_PAGE_INDEX = FPA's Queue 8 Page Index
+ *
+ * The present page index for queue 8 of the FPA.
+ * This number reflects the number of pages of pointers that have been written to memory
+ * for this queue.
+ * Because the address space is 38 bits, the number of 128-byte pages could cause this register value to wrap.
+ */
+union cvmx_fpa_que8_page_index {
+	u64 u64;
+	struct cvmx_fpa_que8_page_index_s {
+		u64 reserved_25_63 : 39;
+		u64 pg_num : 25;
+	} s;
+	struct cvmx_fpa_que8_page_index_s cn68xx;
+	struct cvmx_fpa_que8_page_index_s cn68xxp1;
+};
+
+typedef union cvmx_fpa_que8_page_index cvmx_fpa_que8_page_index_t;
+
+/**
+ * cvmx_fpa_que_act
+ *
+ * "When a INT_SUM[PERR#] occurs this will be latched with the value read from L2C.
+ * This is latched on the first error and will not latch again unitl all errors are cleared."
+ */
+union cvmx_fpa_que_act {
+	u64 u64;
+	struct cvmx_fpa_que_act_s {
+		u64 reserved_29_63 : 35;
+		u64 act_que : 3;
+		u64 act_indx : 26;
+	} s;
+	struct cvmx_fpa_que_act_s cn30xx;
+	struct cvmx_fpa_que_act_s cn31xx;
+	struct cvmx_fpa_que_act_s cn38xx;
+	struct cvmx_fpa_que_act_s cn38xxp2;
+	struct cvmx_fpa_que_act_s cn50xx;
+	struct cvmx_fpa_que_act_s cn52xx;
+	struct cvmx_fpa_que_act_s cn52xxp1;
+	struct cvmx_fpa_que_act_s cn56xx;
+	struct cvmx_fpa_que_act_s cn56xxp1;
+	struct cvmx_fpa_que_act_s cn58xx;
+	struct cvmx_fpa_que_act_s cn58xxp1;
+	struct cvmx_fpa_que_act_s cn61xx;
+	struct cvmx_fpa_que_act_s cn63xx;
+	struct cvmx_fpa_que_act_s cn63xxp1;
+	struct cvmx_fpa_que_act_s cn66xx;
+	struct cvmx_fpa_que_act_s cn68xx;
+	struct cvmx_fpa_que_act_s cn68xxp1;
+	struct cvmx_fpa_que_act_s cn70xx;
+	struct cvmx_fpa_que_act_s cn70xxp1;
+	struct cvmx_fpa_que_act_s cnf71xx;
+};
+
+typedef union cvmx_fpa_que_act cvmx_fpa_que_act_t;
+
+/**
+ * cvmx_fpa_que_exp
+ *
+ * "When a INT_SUM[PERR#] occurs this will be latched with the expected value.
+ * This is latched on the first error and will not latch again unitl all errors are cleared."
+ */
+union cvmx_fpa_que_exp {
+	u64 u64;
+	struct cvmx_fpa_que_exp_s {
+		u64 reserved_29_63 : 35;
+		u64 exp_que : 3;
+		u64 exp_indx : 26;
+	} s;
+	struct cvmx_fpa_que_exp_s cn30xx;
+	struct cvmx_fpa_que_exp_s cn31xx;
+	struct cvmx_fpa_que_exp_s cn38xx;
+	struct cvmx_fpa_que_exp_s cn38xxp2;
+	struct cvmx_fpa_que_exp_s cn50xx;
+	struct cvmx_fpa_que_exp_s cn52xx;
+	struct cvmx_fpa_que_exp_s cn52xxp1;
+	struct cvmx_fpa_que_exp_s cn56xx;
+	struct cvmx_fpa_que_exp_s cn56xxp1;
+	struct cvmx_fpa_que_exp_s cn58xx;
+	struct cvmx_fpa_que_exp_s cn58xxp1;
+	struct cvmx_fpa_que_exp_s cn61xx;
+	struct cvmx_fpa_que_exp_s cn63xx;
+	struct cvmx_fpa_que_exp_s cn63xxp1;
+	struct cvmx_fpa_que_exp_s cn66xx;
+	struct cvmx_fpa_que_exp_s cn68xx;
+	struct cvmx_fpa_que_exp_s cn68xxp1;
+	struct cvmx_fpa_que_exp_s cn70xx;
+	struct cvmx_fpa_que_exp_s cn70xxp1;
+	struct cvmx_fpa_que_exp_s cnf71xx;
+};
+
+typedef union cvmx_fpa_que_exp cvmx_fpa_que_exp_t;
+
+/**
+ * cvmx_fpa_rd_latency_pc
+ */
+union cvmx_fpa_rd_latency_pc {
+	u64 u64;
+	struct cvmx_fpa_rd_latency_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_fpa_rd_latency_pc_s cn73xx;
+	struct cvmx_fpa_rd_latency_pc_s cn78xx;
+	struct cvmx_fpa_rd_latency_pc_s cn78xxp1;
+	struct cvmx_fpa_rd_latency_pc_s cnf75xx;
+};
+
+typedef union cvmx_fpa_rd_latency_pc cvmx_fpa_rd_latency_pc_t;
+
+/**
+ * cvmx_fpa_rd_req_pc
+ */
+union cvmx_fpa_rd_req_pc {
+	u64 u64;
+	struct cvmx_fpa_rd_req_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_fpa_rd_req_pc_s cn73xx;
+	struct cvmx_fpa_rd_req_pc_s cn78xx;
+	struct cvmx_fpa_rd_req_pc_s cn78xxp1;
+	struct cvmx_fpa_rd_req_pc_s cnf75xx;
+};
+
+typedef union cvmx_fpa_rd_req_pc cvmx_fpa_rd_req_pc_t;
+
+/**
+ * cvmx_fpa_red_delay
+ */
+union cvmx_fpa_red_delay {
+	u64 u64;
+	struct cvmx_fpa_red_delay_s {
+		u64 reserved_14_63 : 50;
+		u64 avg_dly : 14;
+	} s;
+	struct cvmx_fpa_red_delay_s cn73xx;
+	struct cvmx_fpa_red_delay_s cn78xx;
+	struct cvmx_fpa_red_delay_s cn78xxp1;
+	struct cvmx_fpa_red_delay_s cnf75xx;
+};
+
+typedef union cvmx_fpa_red_delay cvmx_fpa_red_delay_t;
+
+/**
+ * cvmx_fpa_sft_rst
+ *
+ * Allows soft reset.
+ */
+union cvmx_fpa_sft_rst {
+	u64 u64;
+	struct cvmx_fpa_sft_rst_s {
+		u64 busy : 1;
+		u64 reserved_1_62 : 62;
+		u64 rst : 1;
+	} s;
+	struct cvmx_fpa_sft_rst_s cn73xx;
+	struct cvmx_fpa_sft_rst_s cn78xx;
+	struct cvmx_fpa_sft_rst_s cn78xxp1;
+	struct cvmx_fpa_sft_rst_s cnf75xx;
+};
+
+typedef union cvmx_fpa_sft_rst cvmx_fpa_sft_rst_t;
+
+/**
+ * cvmx_fpa_wart_ctl
+ *
+ * FPA_WART_CTL = FPA's WART Control
+ *
+ * Control and status for the WART block.
+ */
+union cvmx_fpa_wart_ctl {
+	u64 u64;
+	struct cvmx_fpa_wart_ctl_s {
+		u64 reserved_16_63 : 48;
+		u64 ctl : 16;
+	} s;
+	struct cvmx_fpa_wart_ctl_s cn30xx;
+	struct cvmx_fpa_wart_ctl_s cn31xx;
+	struct cvmx_fpa_wart_ctl_s cn38xx;
+	struct cvmx_fpa_wart_ctl_s cn38xxp2;
+	struct cvmx_fpa_wart_ctl_s cn50xx;
+	struct cvmx_fpa_wart_ctl_s cn52xx;
+	struct cvmx_fpa_wart_ctl_s cn52xxp1;
+	struct cvmx_fpa_wart_ctl_s cn56xx;
+	struct cvmx_fpa_wart_ctl_s cn56xxp1;
+	struct cvmx_fpa_wart_ctl_s cn58xx;
+	struct cvmx_fpa_wart_ctl_s cn58xxp1;
+};
+
+typedef union cvmx_fpa_wart_ctl cvmx_fpa_wart_ctl_t;
+
+/**
+ * cvmx_fpa_wart_status
+ *
+ * FPA_WART_STATUS = FPA's WART Status
+ *
+ * Control and status for the WART block.
+ */
+union cvmx_fpa_wart_status {
+	u64 u64;
+	struct cvmx_fpa_wart_status_s {
+		u64 reserved_32_63 : 32;
+		u64 status : 32;
+	} s;
+	struct cvmx_fpa_wart_status_s cn30xx;
+	struct cvmx_fpa_wart_status_s cn31xx;
+	struct cvmx_fpa_wart_status_s cn38xx;
+	struct cvmx_fpa_wart_status_s cn38xxp2;
+	struct cvmx_fpa_wart_status_s cn50xx;
+	struct cvmx_fpa_wart_status_s cn52xx;
+	struct cvmx_fpa_wart_status_s cn52xxp1;
+	struct cvmx_fpa_wart_status_s cn56xx;
+	struct cvmx_fpa_wart_status_s cn56xxp1;
+	struct cvmx_fpa_wart_status_s cn58xx;
+	struct cvmx_fpa_wart_status_s cn58xxp1;
+};
+
+typedef union cvmx_fpa_wart_status cvmx_fpa_wart_status_t;
+
+/**
+ * cvmx_fpa_wqe_threshold
+ *
+ * "When the value of FPA_QUE#_AVAILABLE[QUE_SIZ] (\# is determined by the value of
+ * IPD_WQE_FPA_QUEUE) is Less than the value of this
+ * register a low pool count signal is sent to the PCIe packet instruction engine (to make it
+ * stop reading instructions) and to the
+ * Packet-Arbiter informing it to not give grants to packets MAC with the exception of the PCIe
+ * MAC."
+ */
+union cvmx_fpa_wqe_threshold {
+	u64 u64;
+	struct cvmx_fpa_wqe_threshold_s {
+		u64 reserved_32_63 : 32;
+		u64 thresh : 32;
+	} s;
+	struct cvmx_fpa_wqe_threshold_s cn61xx;
+	struct cvmx_fpa_wqe_threshold_s cn63xx;
+	struct cvmx_fpa_wqe_threshold_s cn66xx;
+	struct cvmx_fpa_wqe_threshold_s cn68xx;
+	struct cvmx_fpa_wqe_threshold_s cn68xxp1;
+	struct cvmx_fpa_wqe_threshold_s cn70xx;
+	struct cvmx_fpa_wqe_threshold_s cn70xxp1;
+	struct cvmx_fpa_wqe_threshold_s cnf71xx;
+};
+
+typedef union cvmx_fpa_wqe_threshold cvmx_fpa_wqe_threshold_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 11/50] mips: octeon: Add cvmx-gmxx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (9 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 10/50] mips: octeon: Add cvmx-fpa-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 12/50] mips: octeon: Add cvmx-gserx-defs.h " Stefan Roese
                   ` (41 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-gmxx-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later in this series to support PCIe
and networking on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-gmxx-defs.h | 6378 +++++++++++++++++
 1 file changed, 6378 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmxx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmxx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-gmxx-defs.h
new file mode 100644
index 0000000000..8231fef220
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-gmxx-defs.h
@@ -0,0 +1,6378 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon gmxx.
+ */
+
+#ifndef __CVMX_GMXX_DEFS_H__
+#define __CVMX_GMXX_DEFS_H__
+
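+/*
+ * Note on the address helpers below: each CVMX_GMXX_*() function or
+ * macro computes a CSR address from its offset/block_id arguments
+ * (per-port and per-interface indices). The per-interface stride is
+ * family dependent: most families use an 0x8000000 stride, while
+ * CN68XX uses a denser 0x1000000 stride. The return statement after
+ * each switch is the fallback for families not matched explicitly.
+ */
+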
+static inline u64 CVMX_GMXX_BAD_REG(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000518ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000518ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000518ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000518ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_BIST(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000400ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000400ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000400ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000400ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_BPID_MAPX(offset, block_id)                                                      \
+	(0x0001180008000680ull + (((offset) & 15) + ((block_id) & 7) * 0x200000ull) * 8)
+#define CVMX_GMXX_BPID_MSK(offset) (0x0001180008000700ull + ((offset) & 7) * 0x1000000ull)
+static inline u64 CVMX_GMXX_CLK_EN(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F0ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F0ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F0ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080007F0ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_EBP_DIS(offset) (0x0001180008000608ull + ((offset) & 7) * 0x1000000ull)
+#define CVMX_GMXX_EBP_MSK(offset) (0x0001180008000600ull + ((offset) & 7) * 0x1000000ull)
+static inline u64 CVMX_GMXX_HG2_CONTROL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000550ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000550ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000550ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000550ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_INF_MODE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007F8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080007F8ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_NXA_ADR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000510ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000510ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000510ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000510ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_PIPE_STATUS(offset) (0x0001180008000760ull + ((offset) & 7) * 0x1000000ull)
+static inline u64 CVMX_GMXX_PRTX_CBFC_CTL(unsigned long __attribute__((unused)) offset,
+					  unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000580ull + (block_id) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000580ull + (block_id) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000580ull + (block_id) * 0x1000000ull;
+	}
+	return 0x0001180008000580ull + (block_id) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_PRTX_CFG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000010ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000010ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000010ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_QSGMII_CTL(offset) (0x0001180008000760ull + ((offset) & 1) * 0x8000000ull)
+static inline u64 CVMX_GMXX_RXAUI_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000740ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000740ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000740ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM0(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000180ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000180ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000180ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM1(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000188ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000188ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000188ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM2(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000190ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000190ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000190ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM3(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000198ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000198ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000198ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM4(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080001A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080001A0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080001A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM5(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080001A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080001A8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080001A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM_ALL_EN(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000110ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000110ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000110ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000110ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CAM_EN(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000108ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000108ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000108ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_ADR_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000100ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000100ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000100ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_DECISION(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000040ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000040ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000040ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_FRM_CHK(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000020ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000020ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000020ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_FRM_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000018ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000018ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000018ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_RXX_FRM_MAX(offset, block_id)                                                    \
+	(0x0001180008000030ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+#define CVMX_GMXX_RXX_FRM_MIN(offset, block_id)                                                    \
+	(0x0001180008000028ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+static inline u64 CVMX_GMXX_RXX_IFG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000058ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000058ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000058ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_INT_EN(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000008ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000008ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000008ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_INT_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000000ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000000ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000000ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_JABBER(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000038ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000038ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000038ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_PAUSE_DROP_TIME(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000068ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000068ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000068ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000068ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000068ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_RXX_RX_INBND(offset, block_id)                                                   \
+	(0x0001180008000060ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+static inline u64 CVMX_GMXX_RXX_STATS_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000050ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000050ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000050ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_OCTS(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000088ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000088ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000088ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_OCTS_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000098ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000098ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000098ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_OCTS_DMAC(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000A8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080000A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_OCTS_DRP(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000B8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000B8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080000B8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_PKTS(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000080ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000080ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000080ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_PKTS_BAD(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000C0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000C0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080000C0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_PKTS_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000090ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000090ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000090ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_PKTS_DMAC(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000A0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080000A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_STATS_PKTS_DRP(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000B0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080000B0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080000B0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RXX_UDD_SKP(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000048ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000048ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000048ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_RX_BP_DROPX(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000420ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000420ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	}
+	return 0x0001180008000420ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+}
+
+static inline u64 CVMX_GMXX_RX_BP_OFFX(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000460ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000460ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	}
+	return 0x0001180008000460ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+}
+
+static inline u64 CVMX_GMXX_RX_BP_ONX(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000440ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000440ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	}
+	return 0x0001180008000440ull + ((offset) + (block_id) * 0x1000000ull) * 8;
+}
+
+static inline u64 CVMX_GMXX_RX_HG2_STATUS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000548ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000548ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000548ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000548ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_RX_PASS_EN(offset) (0x00011800080005F8ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_GMXX_RX_PASS_MAPX(offset, block_id)                                                   \
+	(0x0001180008000600ull + (((offset) & 15) + ((block_id) & 1) * 0x1000000ull) * 8)
+static inline u64 CVMX_GMXX_RX_PRTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000410ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000410ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000410ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000410ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_RX_PRT_INFO(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004E8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004E8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004E8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004E8ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_RX_TX_STATUS(offset) (0x00011800080007E8ull)
+static inline u64 CVMX_GMXX_RX_XAUI_BAD_COL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000538ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000538ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000538ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000538ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_RX_XAUI_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000530ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000530ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000530ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000530ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_SMACX(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000230ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000230ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000230ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_SOFT_BIST(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080007E8ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_GMXX_STAT_BP(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000520ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000520ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000520ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000520ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TB_REG(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E0ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E0ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080007E0ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080007E0ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TXX_APPEND(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000218ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000218ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000218ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_TXX_BCK_CRDT(offset, block_id)                                                   \
+	(0x0001180008000388ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+static inline u64 CVMX_GMXX_TXX_BURST(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000228ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000228ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000228ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_CBFC_XOFF(unsigned long __attribute__((unused)) offset,
+					  unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005A0ull + (block_id) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005A0ull + (block_id) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005A0ull + (block_id) * 0x1000000ull;
+	}
+	return 0x00011800080005A0ull + (block_id) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TXX_CBFC_XON(unsigned long __attribute__((unused)) offset,
+					 unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005C0ull + (block_id) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005C0ull + (block_id) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080005C0ull + (block_id) * 0x1000000ull;
+	}
+	return 0x00011800080005C0ull + (block_id) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_TXX_CLK(offset, block_id)                                                        \
+	(0x0001180008000208ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+static inline u64 CVMX_GMXX_TXX_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000270ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000270ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000270ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_TXX_JAM_MODE(offset, block_id)                                                   \
+	(0x0001180008000380ull + (((offset) & 3) + ((block_id) & 1) * 0x10000ull) * 2048)
+static inline u64 CVMX_GMXX_TXX_MIN_PKT(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000240ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000240ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000240ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_PAUSE_PKT_INTERVAL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000248ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000248ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000248ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_PAUSE_PKT_TIME(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000238ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000238ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000238ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_PAUSE_TOGO(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000258ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000258ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000258ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_PAUSE_ZERO(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000260ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000260ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000260ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+#define CVMX_GMXX_TXX_PIPE(offset, block_id)                                                       \
+	(0x0001180008000310ull + (((offset) & 3) + ((block_id) & 7) * 0x2000ull) * 2048)
+static inline u64 CVMX_GMXX_TXX_SGMII_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000300ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000300ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000300ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000300ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_SLOT(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000220ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000220ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000220ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_SOFT_PAUSE(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000250ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000250ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000250ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT0(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000280ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000280ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000280ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT1(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000288ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000288ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000288ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT2(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000290ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000290ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000290ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT3(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000298ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000298ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000298ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT4(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002A0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002A0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT5(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002A8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002A8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT6(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002B0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002B0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002B0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT7(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002B8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002B8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002B8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT8(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002C0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002C0ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002C0ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STAT9(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002C8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080002C8ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x00011800080002C8ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_STATS_CTL(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000268ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000268ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000268ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TXX_THRESH(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000210ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000210ull + ((offset) + (block_id) * 0x2000ull) * 2048;
+	}
+	return 0x0001180008000210ull + ((offset) + (block_id) * 0x10000ull) * 2048;
+}
+
+static inline u64 CVMX_GMXX_TX_BP(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D0ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D0ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D0ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004D0ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_TX_CLK_MSKX(offset, block_id)                                                    \
+	(0x0001180008000780ull + (((offset) & 1) + ((block_id) & 0) * 0x0ull) * 8)
+static inline u64 CVMX_GMXX_TX_COL_ATTEMPT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000498ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000498ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000498ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000498ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_CORRUPT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004D8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004D8ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_HG2_REG1(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000558ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000558ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000558ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000558ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_HG2_REG2(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000560ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000560ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000560ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000560ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_IFG(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000488ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000488ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000488ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000488ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_INT_EN(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000508ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000508ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000508ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000508ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_INT_REG(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000500ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000500ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000500ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000500ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_JAM(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000490ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000490ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000490ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000490ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_LFSR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004F8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004F8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004F8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004F8ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_OVR_BP(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004C8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004C8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004C8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004C8ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_PAUSE_PKT_DMAC(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A0ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A0ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A0ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004A0ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_PAUSE_PKT_TYPE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A8ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A8ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800080004A8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800080004A8ull + (offset) * 0x8000000ull;
+}
+
+static inline u64 CVMX_GMXX_TX_PRTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000480ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000480ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000480ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000480ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_TX_SPI_CTL(offset)   (0x00011800080004C0ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_GMXX_TX_SPI_DRAIN(offset) (0x00011800080004E0ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_GMXX_TX_SPI_MAX(offset)   (0x00011800080004B0ull + ((offset) & 1) * 0x8000000ull)
+#define CVMX_GMXX_TX_SPI_ROUNDX(offset, block_id)                                                  \
+	(0x0001180008000680ull + (((offset) & 31) + ((block_id) & 1) * 0x1000000ull) * 8)
+#define CVMX_GMXX_TX_SPI_THRESH(offset) (0x00011800080004B8ull + ((offset) & 1) * 0x8000000ull)
+static inline u64 CVMX_GMXX_TX_XAUI_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000528ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000528ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000528ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000528ull + (offset) * 0x8000000ull;
+}
+
+#define CVMX_GMXX_WOL_CTL(offset) (0x0001180008000780ull + ((offset) & 1) * 0x8000000ull)
+static inline u64 CVMX_GMXX_XAUI_EXT_LOOPBACK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000540ull + (offset) * 0x8000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000540ull + (offset) * 0x8000000ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180008000540ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180008000540ull + (offset) * 0x8000000ull;
+}
+
+/**
+ * cvmx_gmx#_bad_reg
+ *
+ * GMX_BAD_REG = A collection of things that have gone very, very wrong
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of INB_NXA, LOSTSTAT and OUT_OVR is used.
+ *
+ */
+union cvmx_gmxx_bad_reg {
+	u64 u64;
+	struct cvmx_gmxx_bad_reg_s {
+		u64 reserved_31_63 : 33;
+		u64 inb_nxa : 4;
+		u64 statovr : 1;
+		u64 loststat : 4;
+		u64 reserved_18_21 : 4;
+		u64 out_ovr : 16;
+		u64 ncb_ovr : 1;
+		u64 out_col : 1;
+	} s;
+	struct cvmx_gmxx_bad_reg_cn30xx {
+		u64 reserved_31_63 : 33;
+		u64 inb_nxa : 4;
+		u64 statovr : 1;
+		u64 reserved_25_25 : 1;
+		u64 loststat : 3;
+		u64 reserved_5_21 : 17;
+		u64 out_ovr : 3;
+		u64 reserved_0_1 : 2;
+	} cn30xx;
+	struct cvmx_gmxx_bad_reg_cn30xx cn31xx;
+	struct cvmx_gmxx_bad_reg_s cn38xx;
+	struct cvmx_gmxx_bad_reg_s cn38xxp2;
+	struct cvmx_gmxx_bad_reg_cn30xx cn50xx;
+	struct cvmx_gmxx_bad_reg_cn52xx {
+		u64 reserved_31_63 : 33;
+		u64 inb_nxa : 4;
+		u64 statovr : 1;
+		u64 loststat : 4;
+		u64 reserved_6_21 : 16;
+		u64 out_ovr : 4;
+		u64 reserved_0_1 : 2;
+	} cn52xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn52xxp1;
+	struct cvmx_gmxx_bad_reg_cn52xx cn56xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn56xxp1;
+	struct cvmx_gmxx_bad_reg_s cn58xx;
+	struct cvmx_gmxx_bad_reg_s cn58xxp1;
+	struct cvmx_gmxx_bad_reg_cn52xx cn61xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn63xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn63xxp1;
+	struct cvmx_gmxx_bad_reg_cn52xx cn66xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn68xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn68xxp1;
+	struct cvmx_gmxx_bad_reg_cn52xx cn70xx;
+	struct cvmx_gmxx_bad_reg_cn52xx cn70xxp1;
+	struct cvmx_gmxx_bad_reg_cn52xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_bad_reg cvmx_gmxx_bad_reg_t;
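+
+/*
+ * Each register is modeled as a union of per-chip bit-field layouts over a
+ * raw u64.  A minimal decode sketch, assuming cvmx_read_csr() as the CSR
+ * read accessor:
+ *
+ *    @verbatim
+ *    cvmx_gmxx_bad_reg_t bad;
+ *
+ *    bad.u64 = cvmx_read_csr(CVMX_GMXX_BAD_REG(interface));
+ *    if (bad.s.out_ovr)          // TX FIFO overflow on some port
+ *        handle_gmx_overflow();  // hypothetical handler
+ *    @endverbatim
+ */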
+
+/**
+ * cvmx_gmx#_bist
+ *
+ * GMX_BIST = GMX BIST Results
+ *
+ */
+union cvmx_gmxx_bist {
+	u64 u64;
+	struct cvmx_gmxx_bist_s {
+		u64 reserved_25_63 : 39;
+		u64 status : 25;
+	} s;
+	struct cvmx_gmxx_bist_cn30xx {
+		u64 reserved_10_63 : 54;
+		u64 status : 10;
+	} cn30xx;
+	struct cvmx_gmxx_bist_cn30xx cn31xx;
+	struct cvmx_gmxx_bist_cn30xx cn38xx;
+	struct cvmx_gmxx_bist_cn30xx cn38xxp2;
+	struct cvmx_gmxx_bist_cn50xx {
+		u64 reserved_12_63 : 52;
+		u64 status : 12;
+	} cn50xx;
+	struct cvmx_gmxx_bist_cn52xx {
+		u64 reserved_16_63 : 48;
+		u64 status : 16;
+	} cn52xx;
+	struct cvmx_gmxx_bist_cn52xx cn52xxp1;
+	struct cvmx_gmxx_bist_cn52xx cn56xx;
+	struct cvmx_gmxx_bist_cn52xx cn56xxp1;
+	struct cvmx_gmxx_bist_cn58xx {
+		u64 reserved_17_63 : 47;
+		u64 status : 17;
+	} cn58xx;
+	struct cvmx_gmxx_bist_cn58xx cn58xxp1;
+	struct cvmx_gmxx_bist_s cn61xx;
+	struct cvmx_gmxx_bist_s cn63xx;
+	struct cvmx_gmxx_bist_s cn63xxp1;
+	struct cvmx_gmxx_bist_s cn66xx;
+	struct cvmx_gmxx_bist_s cn68xx;
+	struct cvmx_gmxx_bist_s cn68xxp1;
+	struct cvmx_gmxx_bist_s cn70xx;
+	struct cvmx_gmxx_bist_s cn70xxp1;
+	struct cvmx_gmxx_bist_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_bist cvmx_gmxx_bist_t;
+
+/**
+ * cvmx_gmx#_bpid_map#
+ *
+ * Notes:
+ * GMX will build BPID_VECTOR<15:0> using the 16 GMX_BPID_MAP entries and the BPID
+ * state from IPD.  In XAUI/RXAUI mode when PFC/CBFC/HiGig2 is used, the
+ * BPID_VECTOR becomes the logical backpressure.  In XAUI/RXAUI mode when
+ * PFC/CBFC/HiGig2 is not used or when in 4xSGMII mode, the BPID_VECTOR can be used
+ * with the GMX_BPID_MSK register to determine the physical backpressure.
+ *
+ * In XAUI/RXAUI mode, the entire BPID_VECTOR<15:0> is available determining physical
+ * backpressure for the single XAUI/RXAUI interface.
+ *
+ * In SGMII mode, BPID_VECTOR is broken up as follows:
+ *    SGMII interface0 uses BPID_VECTOR<3:0>
+ *    SGMII interface1 uses BPID_VECTOR<7:4>
+ *    SGMII interface2 uses BPID_VECTOR<11:8>
+ *    SGMII interface3 uses BPID_VECTOR<15:12>
+ *
+ * In all SGMII configurations, and in some XAUI/RXAUI configurations, the
+ * interface protocols only support physical backpressure. In these cases, a
+ * single BPID will commonly drive the physical backpressure for the physical
+ * interface. Example programming sequences for these simple cases are shown
+ * below.
+ *
+ * In XAUI/RXAUI mode where PFC/CBFC/HiGig2 is not used, an example programming
+ * would be as follows:
+ *
+ *    @verbatim
+ *    GMX_BPID_MAP0[VAL]    = 1;
+ *    GMX_BPID_MAP0[BPID]   = xaui_bpid;
+ *    GMX_BPID_MSK[MSK_OR]  = 1;
+ *    GMX_BPID_MSK[MSK_AND] = 0;
+ *    @endverbatim
+ *
+ * In SGMII mode, an example programming would be as follows:
+ *
+ *    @verbatim
+ *    for (i=0; i<4; i++) {
+ *       if (GMX_PRTi_CFG[EN]) {
+ *          GMX_BPID_MAP(i*4)[VAL]    = 1;
+ *          GMX_BPID_MAP(i*4)[BPID]   = sgmii_bpid(i);
+ *          GMX_BPID_MSK[MSK_OR]      = (1 << (i*4)) | GMX_BPID_MSK[MSK_OR];
+ *       }
+ *    }
+ *    GMX_BPID_MSK[MSK_AND] = 0;
+ *    @endverbatim
+ */
+union cvmx_gmxx_bpid_mapx {
+	u64 u64;
+	struct cvmx_gmxx_bpid_mapx_s {
+		u64 reserved_17_63 : 47;
+		u64 status : 1;
+		u64 reserved_9_15 : 7;
+		u64 val : 1;
+		u64 reserved_6_7 : 2;
+		u64 bpid : 6;
+	} s;
+	struct cvmx_gmxx_bpid_mapx_s cn68xx;
+	struct cvmx_gmxx_bpid_mapx_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_bpid_mapx cvmx_gmxx_bpid_mapx_t;
+
+/**
+ * cvmx_gmx#_bpid_msk
+ */
+union cvmx_gmxx_bpid_msk {
+	u64 u64;
+	struct cvmx_gmxx_bpid_msk_s {
+		u64 reserved_48_63 : 16;
+		u64 msk_or : 16;
+		u64 reserved_16_31 : 16;
+		u64 msk_and : 16;
+	} s;
+	struct cvmx_gmxx_bpid_msk_s cn68xx;
+	struct cvmx_gmxx_bpid_msk_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_bpid_msk cvmx_gmxx_bpid_msk_t;
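+
+/*
+ * A C rendering of the SGMII example programming from the GMX_BPID_MAP
+ * notes above.  This is a sketch only: cvmx_read_csr()/cvmx_write_csr(),
+ * the CVMX_GMXX_BPID_MAPX()/CVMX_GMXX_BPID_MSK() accessor names and the
+ * sgmii_bpid() helper are assumptions, and the per-port enable check is
+ * omitted for brevity:
+ *
+ *    @verbatim
+ *    cvmx_gmxx_bpid_mapx_t map;
+ *    cvmx_gmxx_bpid_msk_t msk;
+ *    int i;
+ *
+ *    msk.u64 = cvmx_read_csr(CVMX_GMXX_BPID_MSK(interface));
+ *    for (i = 0; i < 4; i++) {
+ *        map.u64 = 0;
+ *        map.s.val = 1;
+ *        map.s.bpid = sgmii_bpid(i);
+ *        cvmx_write_csr(CVMX_GMXX_BPID_MAPX(i * 4, interface), map.u64);
+ *        msk.s.msk_or |= 1 << (i * 4);
+ *    }
+ *    msk.s.msk_and = 0;
+ *    cvmx_write_csr(CVMX_GMXX_BPID_MSK(interface), msk.u64);
+ *    @endverbatim
+ */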
+
+/**
+ * cvmx_gmx#_clk_en
+ *
+ * DON'T PUT IN HRM*
+ *
+ */
+union cvmx_gmxx_clk_en {
+	u64 u64;
+	struct cvmx_gmxx_clk_en_s {
+		u64 reserved_1_63 : 63;
+		u64 clk_en : 1;
+	} s;
+	struct cvmx_gmxx_clk_en_s cn52xx;
+	struct cvmx_gmxx_clk_en_s cn52xxp1;
+	struct cvmx_gmxx_clk_en_s cn56xx;
+	struct cvmx_gmxx_clk_en_s cn56xxp1;
+	struct cvmx_gmxx_clk_en_s cn61xx;
+	struct cvmx_gmxx_clk_en_s cn63xx;
+	struct cvmx_gmxx_clk_en_s cn63xxp1;
+	struct cvmx_gmxx_clk_en_s cn66xx;
+	struct cvmx_gmxx_clk_en_s cn68xx;
+	struct cvmx_gmxx_clk_en_s cn68xxp1;
+	struct cvmx_gmxx_clk_en_s cn70xx;
+	struct cvmx_gmxx_clk_en_s cn70xxp1;
+	struct cvmx_gmxx_clk_en_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_clk_en cvmx_gmxx_clk_en_t;
+
+/**
+ * cvmx_gmx#_ebp_dis
+ */
+union cvmx_gmxx_ebp_dis {
+	u64 u64;
+	struct cvmx_gmxx_ebp_dis_s {
+		u64 reserved_16_63 : 48;
+		u64 dis : 16;
+	} s;
+	struct cvmx_gmxx_ebp_dis_s cn68xx;
+	struct cvmx_gmxx_ebp_dis_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_ebp_dis cvmx_gmxx_ebp_dis_t;
+
+/**
+ * cvmx_gmx#_ebp_msk
+ */
+union cvmx_gmxx_ebp_msk {
+	u64 u64;
+	struct cvmx_gmxx_ebp_msk_s {
+		u64 reserved_16_63 : 48;
+		u64 msk : 16;
+	} s;
+	struct cvmx_gmxx_ebp_msk_s cn68xx;
+	struct cvmx_gmxx_ebp_msk_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_ebp_msk cvmx_gmxx_ebp_msk_t;
+
+/**
+ * cvmx_gmx#_hg2_control
+ *
+ * Notes:
+ * The HiGig2 TX and RX enables would normally both be set together for HiGig2
+ * messaging. Setting just the TX or the RX bit results in only the HG2 message
+ * transmit or receive capability, respectively.
+ * When set to 1, the PHYS_EN and LOGL_EN bits allow link pause or back pressure
+ * to PKO as per the received HiGig2 message. When 0, link pause and back
+ * pressure to PKO in response to received messages are disabled.
+ *
+ * GMX*_TX_XAUI_CTL[HG_EN] must be set to one (to enable HiGig) whenever either HG2TX_EN or HG2RX_EN
+ * are set.
+ *
+ * GMX*_RX0_UDD_SKP[LEN] must be set to 16 (to select HiGig2) whenever either HG2TX_EN or HG2RX_EN
+ * are set.
+ *
+ * GMX*_TX_OVR_BP[EN<0>] must be set to one and GMX*_TX_OVR_BP[BP<0>] must be cleared to zero
+ * (to forcibly disable HW-automatic 802.3 pause packet generation) with the HiGig2 Protocol when
+ * GMX*_HG2_CONTROL[HG2TX_EN]=0. (The HiGig2 protocol is indicated by GMX*_TX_XAUI_CTL[HG_EN]=1
+ * and GMX*_RX0_UDD_SKP[LEN]=16.) The HW can only auto-generate backpressure via HiGig2 messages
+ * (optionally, when HG2TX_EN=1) with the HiGig2 protocol.
+ */
+union cvmx_gmxx_hg2_control {
+	u64 u64;
+	struct cvmx_gmxx_hg2_control_s {
+		u64 reserved_19_63 : 45;
+		u64 hg2tx_en : 1;
+		u64 hg2rx_en : 1;
+		u64 phys_en : 1;
+		u64 logl_en : 16;
+	} s;
+	struct cvmx_gmxx_hg2_control_s cn52xx;
+	struct cvmx_gmxx_hg2_control_s cn52xxp1;
+	struct cvmx_gmxx_hg2_control_s cn56xx;
+	struct cvmx_gmxx_hg2_control_s cn61xx;
+	struct cvmx_gmxx_hg2_control_s cn63xx;
+	struct cvmx_gmxx_hg2_control_s cn63xxp1;
+	struct cvmx_gmxx_hg2_control_s cn66xx;
+	struct cvmx_gmxx_hg2_control_s cn68xx;
+	struct cvmx_gmxx_hg2_control_s cn68xxp1;
+	struct cvmx_gmxx_hg2_control_s cn70xx;
+	struct cvmx_gmxx_hg2_control_s cn70xxp1;
+	struct cvmx_gmxx_hg2_control_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_hg2_control cvmx_gmxx_hg2_control_t;
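+
+/*
+ * Sketch of the HiGig2 enable sequence implied by the notes above (not a
+ * complete bring-up; accessor names are assumed):
+ *
+ *    @verbatim
+ *    cvmx_gmxx_hg2_control_t hg2;
+ *
+ *    // GMX_TX_XAUI_CTL[HG_EN] = 1 and GMX_RX0_UDD_SKP[LEN] = 16 must
+ *    // already be programmed, per the notes above.
+ *    hg2.u64 = cvmx_read_csr(CVMX_GMXX_HG2_CONTROL(interface));
+ *    hg2.s.hg2tx_en = 1;
+ *    hg2.s.hg2rx_en = 1;
+ *    cvmx_write_csr(CVMX_GMXX_HG2_CONTROL(interface), hg2.u64);
+ *    @endverbatim
+ */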
+
+/**
+ * cvmx_gmx#_inf_mode
+ *
+ * GMX_INF_MODE = Interface Mode
+ *
+ */
+union cvmx_gmxx_inf_mode {
+	u64 u64;
+	struct cvmx_gmxx_inf_mode_s {
+		u64 reserved_20_63 : 44;
+		u64 rate : 4;
+		u64 reserved_12_15 : 4;
+		u64 speed : 4;
+		u64 reserved_7_7 : 1;
+		u64 mode : 3;
+		u64 reserved_3_3 : 1;
+		u64 p0mii : 1;
+		u64 en : 1;
+		u64 type : 1;
+	} s;
+	struct cvmx_gmxx_inf_mode_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 p0mii : 1;
+		u64 en : 1;
+		u64 type : 1;
+	} cn30xx;
+	struct cvmx_gmxx_inf_mode_cn31xx {
+		u64 reserved_2_63 : 62;
+		u64 en : 1;
+		u64 type : 1;
+	} cn31xx;
+	struct cvmx_gmxx_inf_mode_cn31xx cn38xx;
+	struct cvmx_gmxx_inf_mode_cn31xx cn38xxp2;
+	struct cvmx_gmxx_inf_mode_cn30xx cn50xx;
+	struct cvmx_gmxx_inf_mode_cn52xx {
+		u64 reserved_10_63 : 54;
+		u64 speed : 2;
+		u64 reserved_6_7 : 2;
+		u64 mode : 2;
+		u64 reserved_2_3 : 2;
+		u64 en : 1;
+		u64 type : 1;
+	} cn52xx;
+	struct cvmx_gmxx_inf_mode_cn52xx cn52xxp1;
+	struct cvmx_gmxx_inf_mode_cn52xx cn56xx;
+	struct cvmx_gmxx_inf_mode_cn52xx cn56xxp1;
+	struct cvmx_gmxx_inf_mode_cn31xx cn58xx;
+	struct cvmx_gmxx_inf_mode_cn31xx cn58xxp1;
+	struct cvmx_gmxx_inf_mode_cn61xx {
+		u64 reserved_12_63 : 52;
+		u64 speed : 4;
+		u64 reserved_5_7 : 3;
+		u64 mode : 1;
+		u64 reserved_2_3 : 2;
+		u64 en : 1;
+		u64 type : 1;
+	} cn61xx;
+	struct cvmx_gmxx_inf_mode_cn61xx cn63xx;
+	struct cvmx_gmxx_inf_mode_cn61xx cn63xxp1;
+	struct cvmx_gmxx_inf_mode_cn66xx {
+		u64 reserved_20_63 : 44;
+		u64 rate : 4;
+		u64 reserved_12_15 : 4;
+		u64 speed : 4;
+		u64 reserved_5_7 : 3;
+		u64 mode : 1;
+		u64 reserved_2_3 : 2;
+		u64 en : 1;
+		u64 type : 1;
+	} cn66xx;
+	struct cvmx_gmxx_inf_mode_cn68xx {
+		u64 reserved_12_63 : 52;
+		u64 speed : 4;
+		u64 reserved_7_7 : 1;
+		u64 mode : 3;
+		u64 reserved_2_3 : 2;
+		u64 en : 1;
+		u64 type : 1;
+	} cn68xx;
+	struct cvmx_gmxx_inf_mode_cn68xx cn68xxp1;
+	struct cvmx_gmxx_inf_mode_cn70xx {
+		u64 reserved_6_63 : 58;
+		u64 mode : 2;
+		u64 reserved_2_3 : 2;
+		u64 en : 1;
+		u64 reserved_0_0 : 1;
+	} cn70xx;
+	struct cvmx_gmxx_inf_mode_cn70xx cn70xxp1;
+	struct cvmx_gmxx_inf_mode_cn61xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_inf_mode cvmx_gmxx_inf_mode_t;
+
+/**
+ * cvmx_gmx#_nxa_adr
+ *
+ * GMX_NXA_ADR = NXA Port Address
+ *
+ */
+union cvmx_gmxx_nxa_adr {
+	u64 u64;
+	struct cvmx_gmxx_nxa_adr_s {
+		u64 reserved_23_63 : 41;
+		u64 pipe : 7;
+		u64 reserved_6_15 : 10;
+		u64 prt : 6;
+	} s;
+	struct cvmx_gmxx_nxa_adr_cn30xx {
+		u64 reserved_6_63 : 58;
+		u64 prt : 6;
+	} cn30xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn31xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn38xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn38xxp2;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn50xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn52xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn52xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn56xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn56xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn58xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn58xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn61xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn63xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn63xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn66xx;
+	struct cvmx_gmxx_nxa_adr_s cn68xx;
+	struct cvmx_gmxx_nxa_adr_s cn68xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn70xx;
+	struct cvmx_gmxx_nxa_adr_cn30xx cn70xxp1;
+	struct cvmx_gmxx_nxa_adr_cn30xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_nxa_adr cvmx_gmxx_nxa_adr_t;
+
+/**
+ * cvmx_gmx#_pipe_status
+ *
+ * DON'T PUT IN HRM*
+ *
+ */
+union cvmx_gmxx_pipe_status {
+	u64 u64;
+	struct cvmx_gmxx_pipe_status_s {
+		u64 reserved_20_63 : 44;
+		u64 ovr : 4;
+		u64 reserved_12_15 : 4;
+		u64 bp : 4;
+		u64 reserved_4_7 : 4;
+		u64 stop : 4;
+	} s;
+	struct cvmx_gmxx_pipe_status_s cn68xx;
+	struct cvmx_gmxx_pipe_status_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_pipe_status cvmx_gmxx_pipe_status_t;
+
+/**
+ * cvmx_gmx#_prt#_cbfc_ctl
+ *
+ * ** HG2 message CSRs end
+ *
+ */
+union cvmx_gmxx_prtx_cbfc_ctl {
+	u64 u64;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s {
+		u64 phys_en : 16;
+		u64 logl_en : 16;
+		u64 phys_bp : 16;
+		u64 reserved_4_15 : 12;
+		u64 bck_en : 1;
+		u64 drp_en : 1;
+		u64 tx_en : 1;
+		u64 rx_en : 1;
+	} s;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn52xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn56xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn61xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn63xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn63xxp1;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn66xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn68xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn68xxp1;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn70xx;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cn70xxp1;
+	struct cvmx_gmxx_prtx_cbfc_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_prtx_cbfc_ctl cvmx_gmxx_prtx_cbfc_ctl_t;
+
+/**
+ * cvmx_gmx#_prt#_cfg
+ *
+ * GMX_PRT_CFG = Port description
+ *
+ */
+union cvmx_gmxx_prtx_cfg {
+	u64 u64;
+	struct cvmx_gmxx_prtx_cfg_s {
+		u64 reserved_22_63 : 42;
+		u64 pknd : 6;
+		u64 reserved_14_15 : 2;
+		u64 tx_idle : 1;
+		u64 rx_idle : 1;
+		u64 reserved_9_11 : 3;
+		u64 speed_msb : 1;
+		u64 reserved_4_7 : 4;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 en : 1;
+	} s;
+	struct cvmx_gmxx_prtx_cfg_cn30xx {
+		u64 reserved_4_63 : 60;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 en : 1;
+	} cn30xx;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn31xx;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn38xx;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn38xxp2;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn50xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx {
+		u64 reserved_14_63 : 50;
+		u64 tx_idle : 1;
+		u64 rx_idle : 1;
+		u64 reserved_9_11 : 3;
+		u64 speed_msb : 1;
+		u64 reserved_4_7 : 4;
+		u64 slottime : 1;
+		u64 duplex : 1;
+		u64 speed : 1;
+		u64 en : 1;
+	} cn52xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn52xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn56xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn56xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn58xx;
+	struct cvmx_gmxx_prtx_cfg_cn30xx cn58xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn61xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn63xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn63xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn66xx;
+	struct cvmx_gmxx_prtx_cfg_s cn68xx;
+	struct cvmx_gmxx_prtx_cfg_s cn68xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn70xx;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cn70xxp1;
+	struct cvmx_gmxx_prtx_cfg_cn52xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_prtx_cfg cvmx_gmxx_prtx_cfg_t;
+
+/**
+ * cvmx_gmx#_qsgmii_ctl
+ */
+union cvmx_gmxx_qsgmii_ctl {
+	u64 u64;
+	struct cvmx_gmxx_qsgmii_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 disparity : 1;
+	} s;
+	struct cvmx_gmxx_qsgmii_ctl_s cn70xx;
+	struct cvmx_gmxx_qsgmii_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_gmxx_qsgmii_ctl cvmx_gmxx_qsgmii_ctl_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam0
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam0 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam0_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam0_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam0_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam0 cvmx_gmxx_rxx_adr_cam0_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam1
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam1 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam1_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam1_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam1_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam1 cvmx_gmxx_rxx_adr_cam1_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam2
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam2 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam2_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam2_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam2_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam2 cvmx_gmxx_rxx_adr_cam2_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam3
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam3 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam3_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam3_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam3_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam3 cvmx_gmxx_rxx_adr_cam3_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam4
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam4 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam4_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam4_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam4_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam4 cvmx_gmxx_rxx_adr_cam4_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam5
+ *
+ * GMX_RX_ADR_CAM = Address Filtering Control
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam5 {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam5_s {
+		u64 adr : 64;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam5_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam5_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam5 cvmx_gmxx_rxx_adr_cam5_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam_all_en
+ *
+ * GMX_RX_ADR_CAM_ALL_EN = Address Filtering Control Enable
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam_all_en {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s {
+		u64 reserved_32_63 : 32;
+		u64 en : 32;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_all_en_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam_all_en cvmx_gmxx_rxx_adr_cam_all_en_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_cam_en
+ *
+ * GMX_RX_ADR_CAM_EN = Address Filtering Control Enable
+ *
+ */
+union cvmx_gmxx_rxx_adr_cam_en {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_cam_en_s {
+		u64 reserved_8_63 : 56;
+		u64 en : 8;
+	} s;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_cam_en_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_cam_en cvmx_gmxx_rxx_adr_cam_en_t;
+
+/**
+ * cvmx_gmx#_rx#_adr_ctl
+ *
+ * GMX_RX_ADR_CTL = Address Filtering Control
+ *
+ *
+ * Notes:
+ * * ALGORITHM
+ *   Here is some pseudo code that represents the address filter behavior.
+ *
+ *      @verbatim
+ *      bool dmac_addr_filter(uint8 prt, uint48 dmac) {
+ *        ASSERT(prt >= 0 && prt <= 3);
+ *        if (is_bcst(dmac))                               // broadcast accept
+ *          return (GMX_RX[prt]_ADR_CTL[BCST] ? ACCEPT : REJECT);
+ *        if (is_mcst(dmac) && GMX_RX[prt]_ADR_CTL[MCST] == 1)   // multicast reject
+ *          return REJECT;
+ *        if (is_mcst(dmac) && GMX_RX[prt]_ADR_CTL[MCST] == 2)   // multicast accept
+ *          return ACCEPT;
+ *
+ *        cam_hit = 0;
+ *
+ *        for (i=0; i<32; i++) {
+ *          if (GMX_RX[prt]_ADR_CAM_ALL_EN[EN<i>] == 0)
+ *            continue;
+ *          uint48 unswizzled_mac_adr = 0x0;
+ *          for (j=5; j>=0; j--) {
+ *             unswizzled_mac_adr = (unswizzled_mac_adr << 8) | GMX_RX[i>>3]_ADR_CAM[j][ADR<(i&7)*8+7:(i&7)*8>];
+ *          }
+ *          if (unswizzled_mac_adr == dmac) {
+ *            cam_hit = 1;
+ *            break;
+ *          }
+ *        }
+ *
+ *        if (cam_hit)
+ *          return (GMX_RX[prt]_ADR_CTL[CAM_MODE] ? ACCEPT : REJECT);
+ *        else
+ *          return (GMX_RX[prt]_ADR_CTL[CAM_MODE] ? REJECT : ACCEPT);
+ *      }
+ *      @endverbatim
+ *
+ * * XAUI Mode
+ *
+ *   In XAUI mode, only GMX_RX0_ADR_CTL is used.  GMX_RX[1,2,3]_ADR_CTL should not be used.
+ */
+union cvmx_gmxx_rxx_adr_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxx_adr_ctl_s {
+		u64 reserved_4_63 : 60;
+		u64 cam_mode : 1;
+		u64 mcst : 2;
+		u64 bcst : 1;
+	} s;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn30xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn31xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn38xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn38xxp2;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn50xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn52xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn52xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn56xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn56xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn58xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn58xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn61xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn63xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn63xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn66xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn68xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn70xx;
+	struct cvmx_gmxx_rxx_adr_ctl_s cn70xxp1;
+	struct cvmx_gmxx_rxx_adr_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_adr_ctl cvmx_gmxx_rxx_adr_ctl_t;
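+
+/*
+ * Configuration matching the ALGORITHM pseudo code above: accept broadcast,
+ * reject all multicast, and accept unicast only on a CAM hit (a sketch; the
+ * CVMX_GMXX_RXX_ADR_CTL() accessor name is assumed):
+ *
+ *    @verbatim
+ *    cvmx_gmxx_rxx_adr_ctl_t ctl;
+ *
+ *    ctl.u64 = 0;
+ *    ctl.s.bcst = 1;       // accept broadcast
+ *    ctl.s.mcst = 1;       // reject multicast
+ *    ctl.s.cam_mode = 1;   // accept on CAM hit
+ *    cvmx_write_csr(CVMX_GMXX_RXX_ADR_CTL(port, interface), ctl.u64);
+ *    @endverbatim
+ */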
+
+/**
+ * cvmx_gmx#_rx#_decision
+ *
+ * GMX_RX_DECISION = The byte count to decide when to accept or filter a packet
+ *
+ *
+ * Notes:
+ * As each byte in a packet is received by GMX, the L2 byte count is compared
+ * against the GMX_RX_DECISION[CNT].  The L2 byte count is the number of bytes
+ * from the beginning of the L2 header (DMAC).  In normal operation, the L2
+ * header begins after the PREAMBLE+SFD (GMX_RX_FRM_CTL[PRE_CHK]=1) and any
+ * optional UDD skip data (GMX_RX_UDD_SKP[LEN]).
+ *
+ * When GMX_RX_FRM_CTL[PRE_CHK] is clear, PREAMBLE+SFD are prepended to the
+ * packet and would require UDD skip length to account for them.
+ *
+ *                                                 L2 Size
+ * Port Mode             <GMX_RX_DECISION bytes (default=24)       >=GMX_RX_DECISION bytes (default=24)
+ *
+ * Full Duplex           accept packet                             apply filters
+ *                       no filtering is applied                   accept packet based on DMAC and PAUSE packet filters
+ *
+ * Half Duplex           drop packet                               apply filters
+ *                       packet is unconditionally dropped         accept packet based on DMAC
+ *
+ * where l2_size = MAX(0, total_packet_size - GMX_RX_UDD_SKP[LEN] - ((GMX_RX_FRM_CTL[PRE_CHK]==1)*8))
+ */
+union cvmx_gmxx_rxx_decision {
+	u64 u64;
+	struct cvmx_gmxx_rxx_decision_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_gmxx_rxx_decision_s cn30xx;
+	struct cvmx_gmxx_rxx_decision_s cn31xx;
+	struct cvmx_gmxx_rxx_decision_s cn38xx;
+	struct cvmx_gmxx_rxx_decision_s cn38xxp2;
+	struct cvmx_gmxx_rxx_decision_s cn50xx;
+	struct cvmx_gmxx_rxx_decision_s cn52xx;
+	struct cvmx_gmxx_rxx_decision_s cn52xxp1;
+	struct cvmx_gmxx_rxx_decision_s cn56xx;
+	struct cvmx_gmxx_rxx_decision_s cn56xxp1;
+	struct cvmx_gmxx_rxx_decision_s cn58xx;
+	struct cvmx_gmxx_rxx_decision_s cn58xxp1;
+	struct cvmx_gmxx_rxx_decision_s cn61xx;
+	struct cvmx_gmxx_rxx_decision_s cn63xx;
+	struct cvmx_gmxx_rxx_decision_s cn63xxp1;
+	struct cvmx_gmxx_rxx_decision_s cn66xx;
+	struct cvmx_gmxx_rxx_decision_s cn68xx;
+	struct cvmx_gmxx_rxx_decision_s cn68xxp1;
+	struct cvmx_gmxx_rxx_decision_s cn70xx;
+	struct cvmx_gmxx_rxx_decision_s cn70xxp1;
+	struct cvmx_gmxx_rxx_decision_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_decision cvmx_gmxx_rxx_decision_t;
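+
+/*
+ * Worked instance of the l2_size formula above: for a frame with
+ * total_packet_size = 72, GMX_RX_UDD_SKP[LEN] = 0 and PRE_CHK = 1,
+ * l2_size = MAX(0, 72 - 0 - 8) = 64, which is >= the default CNT of 24,
+ * so the DMAC/PAUSE filters apply rather than the unconditional
+ * accept/drop path.
+ */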
+
+/**
+ * cvmx_gmx#_rx#_frm_chk
+ *
+ * GMX_RX_FRM_CHK = Which frame errors will set the ERR bit of the frame
+ *
+ *
+ * Notes:
+ * If GMX_RX_UDD_SKP[LEN] != 0, then LENERR will be forced to zero in HW.
+ *
+ * In XAUI mode prt0 is used for checking.
+ */
+union cvmx_gmxx_rxx_frm_chk {
+	u64 u64;
+	struct cvmx_gmxx_rxx_frm_chk_s {
+		u64 reserved_10_63 : 54;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_gmxx_rxx_frm_chk_s cn30xx;
+	struct cvmx_gmxx_rxx_frm_chk_s cn31xx;
+	struct cvmx_gmxx_rxx_frm_chk_s cn38xx;
+	struct cvmx_gmxx_rxx_frm_chk_s cn38xxp2;
+	struct cvmx_gmxx_rxx_frm_chk_cn50xx {
+		u64 reserved_10_63 : 54;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_6_6 : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn50xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn52xx {
+		u64 reserved_9_63 : 55;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn52xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn52xx cn52xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_cn52xx cn56xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn52xx cn56xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_s cn58xx;
+	struct cvmx_gmxx_rxx_frm_chk_s cn58xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx {
+		u64 reserved_9_63 : 55;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn61xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn63xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn63xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn66xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn68xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn68xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn70xx;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cn70xxp1;
+	struct cvmx_gmxx_rxx_frm_chk_cn61xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_frm_chk cvmx_gmxx_rxx_frm_chk_t;
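+
+/*
+ * Sketch selecting which frame errors mark the received packet as errored
+ * (the CVMX_GMXX_RXX_FRM_CHK() accessor name is assumed; per the notes,
+ * LENERR is forced to zero in HW when GMX_RX_UDD_SKP[LEN] != 0):
+ *
+ *    @verbatim
+ *    cvmx_gmxx_rxx_frm_chk_t chk;
+ *
+ *    chk.u64 = 0;
+ *    chk.s.fcserr = 1;
+ *    chk.s.jabber = 1;
+ *    cvmx_write_csr(CVMX_GMXX_RXX_FRM_CHK(port, interface), chk.u64);
+ *    @endverbatim
+ */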
+
+/**
+ * cvmx_gmx#_rx#_frm_ctl
+ *
+ * GMX_RX_FRM_CTL = Frame Control
+ *
+ *
+ * Notes:
+ * * PRE_STRP
+ *   When PRE_CHK is set (indicating that the PREAMBLE will be sent), PRE_STRP
+ *   determines if the PREAMBLE+SFD bytes are thrown away or sent to the Octane
+ *   core as part of the packet.
+ *
+ *   In either mode, the PREAMBLE+SFD bytes are not counted toward the packet
+ *   size when checking against the MIN and MAX bounds.  Furthermore, the bytes
+ *   are skipped when locating the start of the L2 header for DMAC and Control
+ *   frame recognition.
+ *
+ * * CTL_BCK/CTL_DRP
+ *   These bits control how the HW handles incoming PAUSE packets.  Here are
+ *   the most common modes of operation:
+ *     CTL_BCK=1,CTL_DRP=1   - HW does it all
+ *     CTL_BCK=0,CTL_DRP=0   - SW sees all pause frames
+ *     CTL_BCK=0,CTL_DRP=1   - all pause frames are completely ignored
+ *
+ *   These control bits should be set to CTL_BCK=0,CTL_DRP=0 in halfdup mode.
+ *   Since PAUSE packets only apply to fulldup operation, any PAUSE packet
+ *   would constitute an exception which should be handled by the processing
+ *   cores.  PAUSE packets should not be forwarded.
+ */
+union cvmx_gmxx_rxx_frm_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxx_frm_ctl_s {
+		u64 reserved_13_63 : 51;
+		u64 ptp_mode : 1;
+		u64 reserved_11_11 : 1;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} s;
+	struct cvmx_gmxx_rxx_frm_ctl_cn30xx {
+		u64 reserved_9_63 : 55;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn30xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn31xx {
+		u64 reserved_8_63 : 56;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn31xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn30xx cn38xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn31xx cn38xxp2;
+	struct cvmx_gmxx_rxx_frm_ctl_cn50xx {
+		u64 reserved_11_63 : 53;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 reserved_7_8 : 2;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn50xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn50xx cn52xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn50xx cn52xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn50xx cn56xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn56xxp1 {
+		u64 reserved_10_63 : 54;
+		u64 pre_align : 1;
+		u64 reserved_7_8 : 2;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn56xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn58xx {
+		u64 reserved_11_63 : 53;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn58xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn30xx cn58xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx {
+		u64 reserved_13_63 : 51;
+		u64 ptp_mode : 1;
+		u64 reserved_11_11 : 1;
+		u64 null_dis : 1;
+		u64 pre_align : 1;
+		u64 reserved_7_8 : 2;
+		u64 pre_free : 1;
+		u64 ctl_smac : 1;
+		u64 ctl_mcst : 1;
+		u64 ctl_bck : 1;
+		u64 ctl_drp : 1;
+		u64 pre_strp : 1;
+		u64 pre_chk : 1;
+	} cn61xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn63xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn63xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn66xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn68xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn68xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn70xx;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cn70xxp1;
+	struct cvmx_gmxx_rxx_frm_ctl_cn61xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_frm_ctl cvmx_gmxx_rxx_frm_ctl_t;
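+
+/*
+ * Sketch of the "HW does it all" PAUSE handling mode from the
+ * CTL_BCK/CTL_DRP notes above (the CVMX_GMXX_RXX_FRM_CTL() accessor name
+ * is assumed):
+ *
+ *    @verbatim
+ *    cvmx_gmxx_rxx_frm_ctl_t frm_ctl;
+ *
+ *    frm_ctl.u64 = cvmx_read_csr(CVMX_GMXX_RXX_FRM_CTL(port, interface));
+ *    frm_ctl.s.ctl_bck = 1;   // HW reacts to PAUSE frames
+ *    frm_ctl.s.ctl_drp = 1;   // ...and drops them instead of forwarding
+ *    cvmx_write_csr(CVMX_GMXX_RXX_FRM_CTL(port, interface), frm_ctl.u64);
+ *    @endverbatim
+ */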
+
+/**
+ * cvmx_gmx#_rx#_frm_max
+ *
+ * GMX_RX_FRM_MAX = Frame Max length
+ *
+ *
+ * Notes:
+ * In spi4 mode, all spi4 ports use prt0 for checking.
+ *
+ * When changing the LEN field, be sure that LEN does not exceed
+ * GMX_RX_JABBER[CNT]. Failure to meet this constraint will cause packets that
+ * are within the maximum length parameter to be rejected because they exceed
+ * the GMX_RX_JABBER[CNT] limit.
+ */
+union cvmx_gmxx_rxx_frm_max {
+	u64 u64;
+	struct cvmx_gmxx_rxx_frm_max_s {
+		u64 reserved_16_63 : 48;
+		u64 len : 16;
+	} s;
+	struct cvmx_gmxx_rxx_frm_max_s cn30xx;
+	struct cvmx_gmxx_rxx_frm_max_s cn31xx;
+	struct cvmx_gmxx_rxx_frm_max_s cn38xx;
+	struct cvmx_gmxx_rxx_frm_max_s cn38xxp2;
+	struct cvmx_gmxx_rxx_frm_max_s cn58xx;
+	struct cvmx_gmxx_rxx_frm_max_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_rxx_frm_max cvmx_gmxx_rxx_frm_max_t;
+
+/**
+ * cvmx_gmx#_rx#_frm_min
+ *
+ * GMX_RX_FRM_MIN = Frame Min length
+ *
+ *
+ * Notes:
+ * In spi4 mode, all spi4 ports use prt0 for checking.
+ *
+ */
+union cvmx_gmxx_rxx_frm_min {
+	u64 u64;
+	struct cvmx_gmxx_rxx_frm_min_s {
+		u64 reserved_16_63 : 48;
+		u64 len : 16;
+	} s;
+	struct cvmx_gmxx_rxx_frm_min_s cn30xx;
+	struct cvmx_gmxx_rxx_frm_min_s cn31xx;
+	struct cvmx_gmxx_rxx_frm_min_s cn38xx;
+	struct cvmx_gmxx_rxx_frm_min_s cn38xxp2;
+	struct cvmx_gmxx_rxx_frm_min_s cn58xx;
+	struct cvmx_gmxx_rxx_frm_min_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_rxx_frm_min cvmx_gmxx_rxx_frm_min_t;
+
+/**
+ * cvmx_gmx#_rx#_ifg
+ *
+ * GMX_RX_IFG = RX Min IFG
+ *
+ */
+union cvmx_gmxx_rxx_ifg {
+	u64 u64;
+	struct cvmx_gmxx_rxx_ifg_s {
+		u64 reserved_4_63 : 60;
+		u64 ifg : 4;
+	} s;
+	struct cvmx_gmxx_rxx_ifg_s cn30xx;
+	struct cvmx_gmxx_rxx_ifg_s cn31xx;
+	struct cvmx_gmxx_rxx_ifg_s cn38xx;
+	struct cvmx_gmxx_rxx_ifg_s cn38xxp2;
+	struct cvmx_gmxx_rxx_ifg_s cn50xx;
+	struct cvmx_gmxx_rxx_ifg_s cn52xx;
+	struct cvmx_gmxx_rxx_ifg_s cn52xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cn56xx;
+	struct cvmx_gmxx_rxx_ifg_s cn56xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cn58xx;
+	struct cvmx_gmxx_rxx_ifg_s cn58xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cn61xx;
+	struct cvmx_gmxx_rxx_ifg_s cn63xx;
+	struct cvmx_gmxx_rxx_ifg_s cn63xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cn66xx;
+	struct cvmx_gmxx_rxx_ifg_s cn68xx;
+	struct cvmx_gmxx_rxx_ifg_s cn68xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cn70xx;
+	struct cvmx_gmxx_rxx_ifg_s cn70xxp1;
+	struct cvmx_gmxx_rxx_ifg_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_ifg cvmx_gmxx_rxx_ifg_t;
+
+/**
+ * cvmx_gmx#_rx#_int_en
+ *
+ * GMX_RX_INT_EN = Interrupt Enable
+ *
+ *
+ * Notes:
+ * In XAUI mode prt0 is used for checking.
+ *
+ */
+union cvmx_gmxx_rxx_int_en {
+	u64 u64;
+	struct cvmx_gmxx_rxx_int_en_s {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_gmxx_rxx_int_en_cn30xx {
+		u64 reserved_19_63 : 45;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn30xx;
+	struct cvmx_gmxx_rxx_int_en_cn30xx cn31xx;
+	struct cvmx_gmxx_rxx_int_en_cn30xx cn38xx;
+	struct cvmx_gmxx_rxx_int_en_cn30xx cn38xxp2;
+	struct cvmx_gmxx_rxx_int_en_cn50xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_6_6 : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn50xx;
+	struct cvmx_gmxx_rxx_int_en_cn52xx {
+		u64 reserved_29_63 : 35;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn52xx;
+	struct cvmx_gmxx_rxx_int_en_cn52xx cn52xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn52xx cn56xx;
+	struct cvmx_gmxx_rxx_int_en_cn56xxp1 {
+		u64 reserved_27_63 : 37;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn56xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn58xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn58xx;
+	struct cvmx_gmxx_rxx_int_en_cn58xx cn58xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn61xx {
+		u64 reserved_29_63 : 35;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn61xx;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cn63xx;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cn63xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cn66xx;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cn68xx;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cn68xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn70xx {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn70xx;
+	struct cvmx_gmxx_rxx_int_en_cn70xx cn70xxp1;
+	struct cvmx_gmxx_rxx_int_en_cn61xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_int_en cvmx_gmxx_rxx_int_en_t;
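+
+/*
+ * Sketch enabling a subset of the RX exceptions (the
+ * CVMX_GMXX_RXX_INT_EN() accessor name is assumed; per the notes for
+ * GMX_RX_INT_REG, prt0 is used in XAUI mode):
+ *
+ *    @verbatim
+ *    cvmx_gmxx_rxx_int_en_t en;
+ *
+ *    en.u64 = cvmx_read_csr(CVMX_GMXX_RXX_INT_EN(port, interface));
+ *    en.s.jabber = 1;
+ *    en.s.fcserr = 1;
+ *    cvmx_write_csr(CVMX_GMXX_RXX_INT_EN(port, interface), en.u64);
+ *    @endverbatim
+ */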
+
+/**
+ * cvmx_gmx#_rx#_int_reg
+ *
+ * GMX_RX_INT_REG = Interrupt Register
+ *
+ *
+ * Notes:
+ * (1) exceptions will only be raised to the control processor if the
+ *     corresponding bit in the GMX_RX_INT_EN register is set.
+ *
+ * (2) exception conditions 10:0 can also set the rcv/opcode in the received
+ *     packet's workQ entry.  The GMX_RX_FRM_CHK register provides a bit mask
+ *     for configuring which conditions set the error.
+ *
+ * (3) in half duplex operation, the expectation is that collisions will appear
+ *     as either MINERR or CAREXT errors.
+ *
+ * (4) JABBER - An RX Jabber error indicates that a packet was received which
+ *              is longer than the maximum allowed packet as defined by the
+ *              system.  GMX will truncate the packet at the JABBER count.
+ *              Failure to do so could lead to system instability.
+ *
+ * (5) NIBERR - This error is illegal at 1000Mb/s speeds
+ *              (GMX_RX_PRT_CFG[SPEED]==0) and will never assert.
+ *
+ * (6) MAXERR - for untagged frames, the total frame DA+SA+TL+DATA+PAD+FCS >
+ *              GMX_RX_FRM_MAX.  For tagged frames, DA+SA+VLAN+TL+DATA+PAD+FCS
+ *              > GMX_RX_FRM_MAX + 4*VLAN_VAL + 4*VLAN_STACKED.
+ *
+ * (7) MINERR - total frame DA+SA+TL+DATA+PAD+FCS < 64
+ *
+ * (8) ALNERR - Indicates that the packet received was not an integer number of
+ *              bytes.  If FCS checking is enabled, ALNERR will only assert if
+ *              the FCS is bad.  If FCS checking is disabled, ALNERR will
+ *              assert in all non-integer frame cases.
+ *
+ * (9) Collisions - Collisions can only occur in half-duplex mode.  A collision
+ *                  is assumed by the receiver when the slottime
+ *                  (GMX_PRT_CFG[SLOTTIME]) is not satisfied.  In 10/100 mode,
+ *                  this will result in a frame < SLOTTIME.  In 1000 mode, it
+ *                  could result either in frame < SLOTTIME or a carrier extend
+ *                  error with the SLOTTIME.  These conditions are visible by...
+ *
+ *                  . transfer ended before slottime - COLDET
+ *                  . carrier extend error           - CAREXT
+ *
+ * (A) LENERR - Length errors occur when the received packet does not match the
+ *              length field.  LENERR is only checked for packets between 64
+ *              and 1500 bytes.  For untagged frames, the length must match
+ *              exactly.  For tagged frames, the length or length+4 must match.
+ *
+ * (B) PCTERR - checks that the frame begins with a valid PREAMBLE sequence.
+ *              Does not check the number of PREAMBLE cycles.
+ *
+ * (C) OVRERR -
+ *
+ *              OVRERR is an architectural assertion check internal to GMX to
+ *              make sure no assumption was violated.  In a correctly operating
+ *              system, this interrupt can never fire.
+ *
+ *              GMX has an internal arbiter which selects which of 4 ports to
+ *              buffer in the main RX FIFO.  If we normally buffer 8 bytes,
+ *              then each port will typically push a tick every 8 cycles - if
+ *              the packet interface is going as fast as possible.  If there
+ *              are four ports, they push every two cycles.  The assumption is
+ *              that the inbound module will always be able to consume one tick
+ *              before another is produced.  OVRERR asserts when that
+ *              assumption is violated.
+ *
+ * (D) In XAUI mode prt0 is used for interrupt logging.
+ */
+union cvmx_gmxx_rxx_int_reg {
+	u64 u64;
+	struct cvmx_gmxx_rxx_int_reg_s {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} s;
+	struct cvmx_gmxx_rxx_int_reg_cn30xx {
+		u64 reserved_19_63 : 45;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn30xx;
+	struct cvmx_gmxx_rxx_int_reg_cn30xx cn31xx;
+	struct cvmx_gmxx_rxx_int_reg_cn30xx cn38xx;
+	struct cvmx_gmxx_rxx_int_reg_cn30xx cn38xxp2;
+	struct cvmx_gmxx_rxx_int_reg_cn50xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_6_6 : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn50xx;
+	struct cvmx_gmxx_rxx_int_reg_cn52xx {
+		u64 reserved_29_63 : 35;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn52xx;
+	struct cvmx_gmxx_rxx_int_reg_cn52xx cn52xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn52xx cn56xx;
+	struct cvmx_gmxx_rxx_int_reg_cn56xxp1 {
+		u64 reserved_27_63 : 37;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 reserved_0_0 : 1;
+	} cn56xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn58xx {
+		u64 reserved_20_63 : 44;
+		u64 pause_drp : 1;
+		u64 phy_dupx : 1;
+		u64 phy_spd : 1;
+		u64 phy_link : 1;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 niberr : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 lenerr : 1;
+		u64 alnerr : 1;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 maxerr : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn58xx;
+	struct cvmx_gmxx_rxx_int_reg_cn58xx cn58xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx {
+		u64 reserved_29_63 : 35;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn61xx;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cn63xx;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cn63xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cn66xx;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cn68xx;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cn68xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn70xx {
+		u64 reserved_30_63 : 34;
+		u64 wol : 1;
+		u64 hg2cc : 1;
+		u64 hg2fld : 1;
+		u64 undat : 1;
+		u64 uneop : 1;
+		u64 unsop : 1;
+		u64 bad_term : 1;
+		u64 bad_seq : 1;
+		u64 rem_fault : 1;
+		u64 loc_fault : 1;
+		u64 pause_drp : 1;
+		u64 reserved_16_18 : 3;
+		u64 ifgerr : 1;
+		u64 coldet : 1;
+		u64 falerr : 1;
+		u64 rsverr : 1;
+		u64 pcterr : 1;
+		u64 ovrerr : 1;
+		u64 reserved_9_9 : 1;
+		u64 skperr : 1;
+		u64 rcverr : 1;
+		u64 reserved_5_6 : 2;
+		u64 fcserr : 1;
+		u64 jabber : 1;
+		u64 reserved_2_2 : 1;
+		u64 carext : 1;
+		u64 minerr : 1;
+	} cn70xx;
+	struct cvmx_gmxx_rxx_int_reg_cn70xx cn70xxp1;
+	struct cvmx_gmxx_rxx_int_reg_cn61xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_int_reg cvmx_gmxx_rxx_int_reg_t;
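+
+/*
+ * Illustrative sketch, not part of the original Cavium sources: decoding a
+ * port's RX interrupt summary through the union above.  The csr_rd()/csr_wr()
+ * helpers and the CVMX_GMXX_RXX_INT_REG() address macro are assumed here, as
+ * is the write-one-to-clear acknowledge semantic.
+ *
+ *	cvmx_gmxx_rxx_int_reg_t int_reg;
+ *
+ *	int_reg.u64 = csr_rd(CVMX_GMXX_RXX_INT_REG(port, interface));
+ *	if (int_reg.s.ovrerr)
+ *		printf("port %d: RX FIFO overrun\n", port);
+ *	if (int_reg.s.jabber)
+ *		printf("port %d: jabbered packet\n", port);
+ *	csr_wr(CVMX_GMXX_RXX_INT_REG(port, interface), int_reg.u64);
+ */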
+
+/**
+ * cvmx_gmx#_rx#_jabber
+ *
+ * GMX_RX_JABBER = The max size packet after which GMX will truncate
+ *
+ *
+ * Notes:
+ * CNT must be 8-byte aligned such that CNT[2:0] == 0
+ *
+ * The packet that will be sent to the packet input logic will have an
+ * additional 8 bytes if GMX_RX_FRM_CTL[PRE_CHK] is set and
+ * GMX_RX_FRM_CTL[PRE_STRP] is clear.  The max packet that will be sent is
+ * defined as...
+ *
+ *      max_sized_packet = GMX_RX_JABBER[CNT]+((GMX_RX_FRM_CTL[PRE_CHK] & !GMX_RX_FRM_CTL[PRE_STRP])*8)
+ *
+ * In XAUI mode prt0 is used for checking.
+ */
+union cvmx_gmxx_rxx_jabber {
+	u64 u64;
+	struct cvmx_gmxx_rxx_jabber_s {
+		u64 reserved_16_63 : 48;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_gmxx_rxx_jabber_s cn30xx;
+	struct cvmx_gmxx_rxx_jabber_s cn31xx;
+	struct cvmx_gmxx_rxx_jabber_s cn38xx;
+	struct cvmx_gmxx_rxx_jabber_s cn38xxp2;
+	struct cvmx_gmxx_rxx_jabber_s cn50xx;
+	struct cvmx_gmxx_rxx_jabber_s cn52xx;
+	struct cvmx_gmxx_rxx_jabber_s cn52xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cn56xx;
+	struct cvmx_gmxx_rxx_jabber_s cn56xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cn58xx;
+	struct cvmx_gmxx_rxx_jabber_s cn58xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cn61xx;
+	struct cvmx_gmxx_rxx_jabber_s cn63xx;
+	struct cvmx_gmxx_rxx_jabber_s cn63xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cn66xx;
+	struct cvmx_gmxx_rxx_jabber_s cn68xx;
+	struct cvmx_gmxx_rxx_jabber_s cn68xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cn70xx;
+	struct cvmx_gmxx_rxx_jabber_s cn70xxp1;
+	struct cvmx_gmxx_rxx_jabber_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_jabber cvmx_gmxx_rxx_jabber_t;
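+
+/*
+ * Illustrative sketch, not part of the original sources: evaluating the
+ * max_sized_packet formula above in C.  The frm_ctl value and the csr_rd()
+ * helper are assumptions.
+ *
+ *	cvmx_gmxx_rxx_jabber_t jabber;
+ *	u64 max_sized_packet;
+ *
+ *	jabber.u64 = csr_rd(CVMX_GMXX_RXX_JABBER(port, interface));
+ *	max_sized_packet = jabber.s.cnt +
+ *		((frm_ctl.s.pre_chk && !frm_ctl.s.pre_strp) ? 8 : 0);
+ */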
+
+/**
+ * cvmx_gmx#_rx#_pause_drop_time
+ *
+ * GMX_RX_PAUSE_DROP_TIME = The TIME field in a PAUSE Packet which was dropped due to a GMX RX FIFO-full condition
+ *
+ */
+union cvmx_gmxx_rxx_pause_drop_time {
+	u64 u64;
+	struct cvmx_gmxx_rxx_pause_drop_time_s {
+		u64 reserved_16_63 : 48;
+		u64 status : 16;
+	} s;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn50xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn52xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn52xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn56xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn56xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn58xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn58xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn61xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn63xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn63xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn66xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn68xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn68xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn70xx;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cn70xxp1;
+	struct cvmx_gmxx_rxx_pause_drop_time_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_pause_drop_time cvmx_gmxx_rxx_pause_drop_time_t;
+
+/**
+ * cvmx_gmx#_rx#_rx_inbnd
+ *
+ * GMX_RX_INBND = RGMII InBand Link Status
+ *
+ *
+ * Notes:
+ * These fields are only valid if the attached PHY is operating in RGMII mode
+ * and supports the optional in-band status (see section 3.4.1 of the RGMII
+ * specification, version 1.3 for more information).
+ */
+union cvmx_gmxx_rxx_rx_inbnd {
+	u64 u64;
+	struct cvmx_gmxx_rxx_rx_inbnd_s {
+		u64 reserved_4_63 : 60;
+		u64 duplex : 1;
+		u64 speed : 2;
+		u64 status : 1;
+	} s;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn30xx;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn31xx;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn38xx;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn38xxp2;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn50xx;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn58xx;
+	struct cvmx_gmxx_rxx_rx_inbnd_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_rxx_rx_inbnd cvmx_gmxx_rxx_rx_inbnd_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_ctl
+ *
+ * GMX_RX_STATS_CTL = RX Stats Control register
+ *
+ */
+union cvmx_gmxx_rxx_stats_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 rd_clr : 1;
+	} s;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_ctl_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_ctl cvmx_gmxx_rxx_stats_ctl_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_octs
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_rxx_stats_octs {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_octs_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_gmxx_rxx_stats_octs_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_octs_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_octs_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_octs cvmx_gmxx_rxx_stats_octs_t;
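+
+/*
+ * Illustrative sketch, not part of the original sources: enabling
+ * read-to-clear and sampling the octet counter through the two unions
+ * above.  csr_rd()/csr_wr() and the address macros are assumed helpers.
+ *
+ *	cvmx_gmxx_rxx_stats_ctl_t ctl;
+ *	cvmx_gmxx_rxx_stats_octs_t octs;
+ *
+ *	ctl.u64 = 0;
+ *	ctl.s.rd_clr = 1;
+ *	csr_wr(CVMX_GMXX_RXX_STATS_CTL(port, interface), ctl.u64);
+ *
+ *	octs.u64 = csr_rd(CVMX_GMXX_RXX_STATS_OCTS(port, interface));
+ *	total_octets += octs.s.cnt;
+ *
+ * With RD_CLR set, each read clears the 48-bit counter, so accumulating in
+ * software sidesteps the wrap noted above.
+ */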
+
+/**
+ * cvmx_gmx#_rx#_stats_octs_ctl
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_rxx_stats_octs_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_octs_ctl cvmx_gmxx_rxx_stats_octs_ctl_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_octs_dmac
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_rxx_stats_octs_dmac {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_dmac_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_octs_dmac cvmx_gmxx_rxx_stats_octs_dmac_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_octs_drp
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_RX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_rxx_stats_octs_drp {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s {
+		u64 reserved_48_63 : 16;
+		u64 cnt : 48;
+	} s;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_octs_drp_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_octs_drp cvmx_gmxx_rxx_stats_octs_drp_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_pkts
+ *
+ * Count of good received packets - packets that were not recognized as
+ * PAUSE packets, dropped due to the DMAC filter, dropped due to a FIFO-full
+ * condition, or marked with any other error opcode (FCS, length, etc.).
+ */
+union cvmx_gmxx_rxx_stats_pkts {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_pkts_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_pkts_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_pkts cvmx_gmxx_rxx_stats_pkts_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_pkts_bad
+ *
+ * Count of all packets received with some error that were not dropped
+ * either due to the dmac filter or lack of room in the receive FIFO.
+ */
+union cvmx_gmxx_rxx_stats_pkts_bad {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_bad_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_pkts_bad cvmx_gmxx_rxx_stats_pkts_bad_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_pkts_ctl
+ *
+ * Count of all packets received that were recognized as Flow Control or
+ * PAUSE packets.  PAUSE packets with any kind of error are counted in
+ * GMX_RX_STATS_PKTS_BAD.  Pause packets can be optionally dropped or
+ * forwarded based on the GMX_RX_FRM_CTL[CTL_DRP] bit.  This count
+ * increments regardless of whether the packet is dropped.  Pause packets
+ * will never be counted in GMX_RX_STATS_PKTS.  Packets dropped due to the dmac
+ * filter will be counted in GMX_RX_STATS_PKTS_DMAC and not here.
+ */
+union cvmx_gmxx_rxx_stats_pkts_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_pkts_ctl cvmx_gmxx_rxx_stats_pkts_ctl_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_pkts_dmac
+ *
+ * Count of all packets received that were dropped by the dmac filter.
+ * Packets that match the DMAC will be dropped and counted here regardless
+ * of whether they were bad packets.  These packets will never be counted in
+ * GMX_RX_STATS_PKTS.
+ * Some packets that were not able to satisfy the DECISION_CNT may not
+ * actually be dropped by Octeon, but they will be counted here as if they
+ * were dropped.
+ */
+union cvmx_gmxx_rxx_stats_pkts_dmac {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_dmac_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_pkts_dmac cvmx_gmxx_rxx_stats_pkts_dmac_t;
+
+/**
+ * cvmx_gmx#_rx#_stats_pkts_drp
+ *
+ * Count of all packets received that were dropped due to a full receive FIFO.
+ * This counts both partial packets, in which there was enough space in the RX
+ * FIFO to begin buffering the packet, and total drops, in which no packet was
+ * sent to PKI.  This counts good and bad packets received - all packets dropped
+ * by the FIFO.  It does not count packets dropped by the dmac or pause packet
+ * filters.
+ */
+union cvmx_gmxx_rxx_stats_pkts_drp {
+	u64 u64;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn30xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn31xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn38xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn38xxp2;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn50xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn52xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn52xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn56xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn56xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn58xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn58xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn61xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn63xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn63xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn66xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn68xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn68xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn70xx;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cn70xxp1;
+	struct cvmx_gmxx_rxx_stats_pkts_drp_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_stats_pkts_drp cvmx_gmxx_rxx_stats_pkts_drp_t;
+
+/**
+ * cvmx_gmx#_rx#_udd_skp
+ *
+ * GMX_RX_UDD_SKP = Amount of User-defined data before the start of the L2 data
+ *
+ *
+ * Notes:
+ * (1) The skip bytes are part of the packet and will be sent down the NCB
+ *     packet interface and will be handled by PKI.
+ *
+ * (2) The system can determine if the UDD bytes are included in the FCS check
+ *     by using the FCSSEL field - if the FCS check is enabled.
+ *
+ * (3) Assume that the preamble/sfd is always at the start of the frame -
+ *     even before UDD bytes.  In most cases there will be no preamble,
+ *     since this is a packet interface in direct communication with
+ *     another packet interface (MAC to MAC) without a PHY involved.
+ *
+ * (4) We can still do address filtering and control packet filtering if
+ *     the user desires.
+ *
+ * (5) UDD_SKP must be 0 in half-duplex operation unless
+ *     GMX_RX_FRM_CTL[PRE_CHK] is clear.  If GMX_RX_FRM_CTL[PRE_CHK] is clear,
+ *     then UDD_SKP will normally be 8.
+ *
+ * (6) In all cases, the UDD bytes will be sent down the packet interface as
+ *     part of the packet.  The UDD bytes are never stripped from the actual
+ *     packet.
+ *
+ * (7) If LEN != 0, then GMX_RX_FRM_CHK[LENERR] will be disabled and GMX_RX_INT_REG[LENERR] will be zero
+ */
+union cvmx_gmxx_rxx_udd_skp {
+	u64 u64;
+	struct cvmx_gmxx_rxx_udd_skp_s {
+		u64 reserved_9_63 : 55;
+		u64 fcssel : 1;
+		u64 reserved_7_7 : 1;
+		u64 len : 7;
+	} s;
+	struct cvmx_gmxx_rxx_udd_skp_s cn30xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn31xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn38xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn38xxp2;
+	struct cvmx_gmxx_rxx_udd_skp_s cn50xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn52xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn52xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cn56xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn56xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cn58xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn58xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cn61xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn63xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn63xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cn66xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn68xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn68xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cn70xx;
+	struct cvmx_gmxx_rxx_udd_skp_s cn70xxp1;
+	struct cvmx_gmxx_rxx_udd_skp_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rxx_udd_skp cvmx_gmxx_rxx_udd_skp_t;
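+
+/*
+ * Illustrative sketch, not part of the original sources: skipping 8 bytes
+ * of user-defined data before the L2 header.  Per note (7), the non-zero
+ * LEN also disables the LENERR check.  The FCSSEL polarity and the
+ * csr_wr() helper are assumptions.
+ *
+ *	cvmx_gmxx_rxx_udd_skp_t udd;
+ *
+ *	udd.u64 = 0;
+ *	udd.s.len = 8;
+ *	udd.s.fcssel = 1;
+ *	csr_wr(CVMX_GMXX_RXX_UDD_SKP(port, interface), udd.u64);
+ */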
+
+/**
+ * cvmx_gmx#_rx_bp_drop#
+ *
+ * GMX_RX_BP_DROP = FIFO mark for packet drop
+ *
+ *
+ * Notes:
+ * The actual watermark is dynamic with respect to the GMX_RX_PRTS
+ * register.  GMX_RX_PRTS controls the depth of the port's FIFO,
+ * so as ports are added or removed the drop point may change.
+ *
+ * In XAUI mode prt0 is used for checking.
+ */
+union cvmx_gmxx_rx_bp_dropx {
+	u64 u64;
+	struct cvmx_gmxx_rx_bp_dropx_s {
+		u64 reserved_6_63 : 58;
+		u64 mark : 6;
+	} s;
+	struct cvmx_gmxx_rx_bp_dropx_s cn30xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn31xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn38xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn38xxp2;
+	struct cvmx_gmxx_rx_bp_dropx_s cn50xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn52xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn52xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cn56xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn56xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cn58xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn58xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cn61xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn63xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn63xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cn66xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn68xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn68xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cn70xx;
+	struct cvmx_gmxx_rx_bp_dropx_s cn70xxp1;
+	struct cvmx_gmxx_rx_bp_dropx_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_bp_dropx cvmx_gmxx_rx_bp_dropx_t;
+
+/**
+ * cvmx_gmx#_rx_bp_off#
+ *
+ * GMX_RX_BP_OFF = Lowater mark for packet drop
+ *
+ *
+ * Notes:
+ * In XAUI mode, prt0 is used for checking.
+ *
+ */
+union cvmx_gmxx_rx_bp_offx {
+	u64 u64;
+	struct cvmx_gmxx_rx_bp_offx_s {
+		u64 reserved_6_63 : 58;
+		u64 mark : 6;
+	} s;
+	struct cvmx_gmxx_rx_bp_offx_s cn30xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn31xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn38xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn38xxp2;
+	struct cvmx_gmxx_rx_bp_offx_s cn50xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn52xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn52xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cn56xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn56xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cn58xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn58xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cn61xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn63xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn63xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cn66xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn68xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn68xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cn70xx;
+	struct cvmx_gmxx_rx_bp_offx_s cn70xxp1;
+	struct cvmx_gmxx_rx_bp_offx_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_bp_offx cvmx_gmxx_rx_bp_offx_t;
+
+/**
+ * cvmx_gmx#_rx_bp_on#
+ *
+ * GMX_RX_BP_ON = Hiwater mark for port/interface backpressure
+ *
+ *
+ * Notes:
+ * In XAUI mode, prt0 is used for checking.
+ *
+ */
+union cvmx_gmxx_rx_bp_onx {
+	u64 u64;
+	struct cvmx_gmxx_rx_bp_onx_s {
+		u64 reserved_11_63 : 53;
+		u64 mark : 11;
+	} s;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx {
+		u64 reserved_9_63 : 55;
+		u64 mark : 9;
+	} cn30xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn31xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn38xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn38xxp2;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn50xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn52xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn52xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn56xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn56xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn58xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn58xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn61xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn63xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn63xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn66xx;
+	struct cvmx_gmxx_rx_bp_onx_s cn68xx;
+	struct cvmx_gmxx_rx_bp_onx_s cn68xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn70xx;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cn70xxp1;
+	struct cvmx_gmxx_rx_bp_onx_cn30xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_bp_onx cvmx_gmxx_rx_bp_onx_t;
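+
+/*
+ * Illustrative sketch, not part of the original sources: programming the
+ * backpressure hysteresis for a port.  That the BP_ON hiwater must sit
+ * above the BP_OFF lowater is an assumption based on the register names;
+ * the mark values and the csr_wr() helper are likewise assumptions.
+ *
+ *	cvmx_gmxx_rx_bp_onx_t on;
+ *	cvmx_gmxx_rx_bp_offx_t off;
+ *
+ *	on.u64 = 0;
+ *	on.s.mark = 96;
+ *	off.u64 = 0;
+ *	off.s.mark = 32;
+ *	csr_wr(CVMX_GMXX_RX_BP_ONX(port, interface), on.u64);
+ *	csr_wr(CVMX_GMXX_RX_BP_OFFX(port, interface), off.u64);
+ */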
+
+/**
+ * cvmx_gmx#_rx_hg2_status
+ *
+ * ** HG2 message CSRs
+ *
+ */
+union cvmx_gmxx_rx_hg2_status {
+	u64 u64;
+	struct cvmx_gmxx_rx_hg2_status_s {
+		u64 reserved_48_63 : 16;
+		u64 phtim2go : 16;
+		u64 xof : 16;
+		u64 lgtim2go : 16;
+	} s;
+	struct cvmx_gmxx_rx_hg2_status_s cn52xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn52xxp1;
+	struct cvmx_gmxx_rx_hg2_status_s cn56xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn61xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn63xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn63xxp1;
+	struct cvmx_gmxx_rx_hg2_status_s cn66xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn68xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn68xxp1;
+	struct cvmx_gmxx_rx_hg2_status_s cn70xx;
+	struct cvmx_gmxx_rx_hg2_status_s cn70xxp1;
+	struct cvmx_gmxx_rx_hg2_status_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_hg2_status cvmx_gmxx_rx_hg2_status_t;
+
+/**
+ * cvmx_gmx#_rx_pass_en
+ *
+ * GMX_RX_PASS_EN = Packet pass through mode enable
+ *
+ * When both Octane ports are running in Spi4 mode, packets can be directly
+ * passed from one SPX interface to the other without being processed by the
+ * core or PP's.  The register has one bit for each port to enable the pass
+ * through feature.
+ *
+ * Notes:
+ * (1) Can only be used in dual Spi4 configs
+ *
+ * (2) The mapped pass through output port cannot be the destination port for
+ *     any Octane core traffic.
+ */
+union cvmx_gmxx_rx_pass_en {
+	u64 u64;
+	struct cvmx_gmxx_rx_pass_en_s {
+		u64 reserved_16_63 : 48;
+		u64 en : 16;
+	} s;
+	struct cvmx_gmxx_rx_pass_en_s cn38xx;
+	struct cvmx_gmxx_rx_pass_en_s cn38xxp2;
+	struct cvmx_gmxx_rx_pass_en_s cn58xx;
+	struct cvmx_gmxx_rx_pass_en_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_rx_pass_en cvmx_gmxx_rx_pass_en_t;
+
+/**
+ * cvmx_gmx#_rx_pass_map#
+ *
+ * GMX_RX_PASS_MAP = Packet pass through port map
+ *
+ */
+union cvmx_gmxx_rx_pass_mapx {
+	u64 u64;
+	struct cvmx_gmxx_rx_pass_mapx_s {
+		u64 reserved_4_63 : 60;
+		u64 dprt : 4;
+	} s;
+	struct cvmx_gmxx_rx_pass_mapx_s cn38xx;
+	struct cvmx_gmxx_rx_pass_mapx_s cn38xxp2;
+	struct cvmx_gmxx_rx_pass_mapx_s cn58xx;
+	struct cvmx_gmxx_rx_pass_mapx_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_rx_pass_mapx cvmx_gmxx_rx_pass_mapx_t;
+
+/**
+ * cvmx_gmx#_rx_prt_info
+ *
+ * GMX_RX_PRT_INFO = Report the RX status for port
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of DROP and COMMIT is used.
+ *
+ */
+union cvmx_gmxx_rx_prt_info {
+	u64 u64;
+	struct cvmx_gmxx_rx_prt_info_s {
+		u64 reserved_32_63 : 32;
+		u64 drop : 16;
+		u64 commit : 16;
+	} s;
+	struct cvmx_gmxx_rx_prt_info_cn30xx {
+		u64 reserved_19_63 : 45;
+		u64 drop : 3;
+		u64 reserved_3_15 : 13;
+		u64 commit : 3;
+	} cn30xx;
+	struct cvmx_gmxx_rx_prt_info_cn30xx cn31xx;
+	struct cvmx_gmxx_rx_prt_info_s cn38xx;
+	struct cvmx_gmxx_rx_prt_info_cn30xx cn50xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 drop : 4;
+		u64 reserved_4_15 : 12;
+		u64 commit : 4;
+	} cn52xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn52xxp1;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn56xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn56xxp1;
+	struct cvmx_gmxx_rx_prt_info_s cn58xx;
+	struct cvmx_gmxx_rx_prt_info_s cn58xxp1;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn61xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn63xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn63xxp1;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn66xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn68xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn68xxp1;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn70xx;
+	struct cvmx_gmxx_rx_prt_info_cn52xx cn70xxp1;
+	struct cvmx_gmxx_rx_prt_info_cnf71xx {
+		u64 reserved_18_63 : 46;
+		u64 drop : 2;
+		u64 reserved_2_15 : 14;
+		u64 commit : 2;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_prt_info cvmx_gmxx_rx_prt_info_t;
+
+/**
+ * cvmx_gmx#_rx_prts
+ *
+ * GMX_RX_PRTS = Number of FIFOs to carve the RX buffer into
+ *
+ *
+ * Notes:
+ * GMX_RX_PRTS[PRTS] must be set to '1' in XAUI mode.
+ *
+ */
+union cvmx_gmxx_rx_prts {
+	u64 u64;
+	struct cvmx_gmxx_rx_prts_s {
+		u64 reserved_3_63 : 61;
+		u64 prts : 3;
+	} s;
+	struct cvmx_gmxx_rx_prts_s cn30xx;
+	struct cvmx_gmxx_rx_prts_s cn31xx;
+	struct cvmx_gmxx_rx_prts_s cn38xx;
+	struct cvmx_gmxx_rx_prts_s cn38xxp2;
+	struct cvmx_gmxx_rx_prts_s cn50xx;
+	struct cvmx_gmxx_rx_prts_s cn52xx;
+	struct cvmx_gmxx_rx_prts_s cn52xxp1;
+	struct cvmx_gmxx_rx_prts_s cn56xx;
+	struct cvmx_gmxx_rx_prts_s cn56xxp1;
+	struct cvmx_gmxx_rx_prts_s cn58xx;
+	struct cvmx_gmxx_rx_prts_s cn58xxp1;
+	struct cvmx_gmxx_rx_prts_s cn61xx;
+	struct cvmx_gmxx_rx_prts_s cn63xx;
+	struct cvmx_gmxx_rx_prts_s cn63xxp1;
+	struct cvmx_gmxx_rx_prts_s cn66xx;
+	struct cvmx_gmxx_rx_prts_s cn68xx;
+	struct cvmx_gmxx_rx_prts_s cn68xxp1;
+	struct cvmx_gmxx_rx_prts_s cn70xx;
+	struct cvmx_gmxx_rx_prts_s cn70xxp1;
+	struct cvmx_gmxx_rx_prts_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_prts cvmx_gmxx_rx_prts_t;
+
+/**
+ * cvmx_gmx#_rx_tx_status
+ *
+ * GMX_RX_TX_STATUS = GMX RX/TX Status
+ *
+ */
+union cvmx_gmxx_rx_tx_status {
+	u64 u64;
+	struct cvmx_gmxx_rx_tx_status_s {
+		u64 reserved_7_63 : 57;
+		u64 tx : 3;
+		u64 reserved_3_3 : 1;
+		u64 rx : 3;
+	} s;
+	struct cvmx_gmxx_rx_tx_status_s cn30xx;
+	struct cvmx_gmxx_rx_tx_status_s cn31xx;
+	struct cvmx_gmxx_rx_tx_status_s cn50xx;
+};
+
+typedef union cvmx_gmxx_rx_tx_status cvmx_gmxx_rx_tx_status_t;
+
+/**
+ * cvmx_gmx#_rx_xaui_bad_col
+ */
+union cvmx_gmxx_rx_xaui_bad_col {
+	u64 u64;
+	struct cvmx_gmxx_rx_xaui_bad_col_s {
+		u64 reserved_40_63 : 24;
+		u64 val : 1;
+		u64 state : 3;
+		u64 lane_rxc : 4;
+		u64 lane_rxd : 32;
+	} s;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn52xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn52xxp1;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn56xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn56xxp1;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn61xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn63xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn63xxp1;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn66xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn68xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn68xxp1;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn70xx;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cn70xxp1;
+	struct cvmx_gmxx_rx_xaui_bad_col_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_xaui_bad_col cvmx_gmxx_rx_xaui_bad_col_t;
+
+/**
+ * cvmx_gmx#_rx_xaui_ctl
+ */
+union cvmx_gmxx_rx_xaui_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rx_xaui_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 status : 2;
+	} s;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn52xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn52xxp1;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn56xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn56xxp1;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn61xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn63xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn63xxp1;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn66xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn68xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn70xx;
+	struct cvmx_gmxx_rx_xaui_ctl_s cn70xxp1;
+	struct cvmx_gmxx_rx_xaui_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_rx_xaui_ctl cvmx_gmxx_rx_xaui_ctl_t;
+
+/**
+ * cvmx_gmx#_rxaui_ctl
+ */
+union cvmx_gmxx_rxaui_ctl {
+	u64 u64;
+	struct cvmx_gmxx_rxaui_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 disparity : 1;
+	} s;
+	struct cvmx_gmxx_rxaui_ctl_s cn68xx;
+	struct cvmx_gmxx_rxaui_ctl_s cn68xxp1;
+	struct cvmx_gmxx_rxaui_ctl_s cn70xx;
+	struct cvmx_gmxx_rxaui_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_gmxx_rxaui_ctl cvmx_gmxx_rxaui_ctl_t;
+
+/**
+ * cvmx_gmx#_smac#
+ *
+ * GMX_SMAC = Packet SMAC
+ *
+ */
+union cvmx_gmxx_smacx {
+	u64 u64;
+	struct cvmx_gmxx_smacx_s {
+		u64 reserved_48_63 : 16;
+		u64 smac : 48;
+	} s;
+	struct cvmx_gmxx_smacx_s cn30xx;
+	struct cvmx_gmxx_smacx_s cn31xx;
+	struct cvmx_gmxx_smacx_s cn38xx;
+	struct cvmx_gmxx_smacx_s cn38xxp2;
+	struct cvmx_gmxx_smacx_s cn50xx;
+	struct cvmx_gmxx_smacx_s cn52xx;
+	struct cvmx_gmxx_smacx_s cn52xxp1;
+	struct cvmx_gmxx_smacx_s cn56xx;
+	struct cvmx_gmxx_smacx_s cn56xxp1;
+	struct cvmx_gmxx_smacx_s cn58xx;
+	struct cvmx_gmxx_smacx_s cn58xxp1;
+	struct cvmx_gmxx_smacx_s cn61xx;
+	struct cvmx_gmxx_smacx_s cn63xx;
+	struct cvmx_gmxx_smacx_s cn63xxp1;
+	struct cvmx_gmxx_smacx_s cn66xx;
+	struct cvmx_gmxx_smacx_s cn68xx;
+	struct cvmx_gmxx_smacx_s cn68xxp1;
+	struct cvmx_gmxx_smacx_s cn70xx;
+	struct cvmx_gmxx_smacx_s cn70xxp1;
+	struct cvmx_gmxx_smacx_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_smacx cvmx_gmxx_smacx_t;
+
+/**
+ * cvmx_gmx#_soft_bist
+ *
+ * GMX_SOFT_BIST = Software BIST Control
+ *
+ */
+union cvmx_gmxx_soft_bist {
+	u64 u64;
+	struct cvmx_gmxx_soft_bist_s {
+		u64 reserved_2_63 : 62;
+		u64 start_bist : 1;
+		u64 clear_bist : 1;
+	} s;
+	struct cvmx_gmxx_soft_bist_s cn63xx;
+	struct cvmx_gmxx_soft_bist_s cn63xxp1;
+	struct cvmx_gmxx_soft_bist_s cn66xx;
+	struct cvmx_gmxx_soft_bist_s cn68xx;
+	struct cvmx_gmxx_soft_bist_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_soft_bist cvmx_gmxx_soft_bist_t;
+
+/**
+ * cvmx_gmx#_stat_bp
+ *
+ * GMX_STAT_BP = Number of cycles that the TX/Stats block has held up operation
+ *
+ *
+ * Notes:
+ * It has no relationship with the TX FIFO per se.  The TX engine sends packets
+ * from PKO and upon completion, sends a command to the TX stats block for an
+ * update based on the packet size.  The stats operation can take a few cycles -
+ * normally not enough to be visible considering the 64B min packet size that is
+ * ethernet convention.
+ *
+ * In the rare case in which SW attempts to schedule really, really small
+ * packets, or the sclk (6xxx) is running very slowly, the stats updates may
+ * not happen in real time and can back up the TX engine.
+ *
+ * This counter is the number of cycles in which the TX engine was stalled.  In
+ * normal operation, it should always be zero.
+ */
+union cvmx_gmxx_stat_bp {
+	u64 u64;
+	struct cvmx_gmxx_stat_bp_s {
+		u64 reserved_17_63 : 47;
+		u64 bp : 1;
+		u64 cnt : 16;
+	} s;
+	struct cvmx_gmxx_stat_bp_s cn30xx;
+	struct cvmx_gmxx_stat_bp_s cn31xx;
+	struct cvmx_gmxx_stat_bp_s cn38xx;
+	struct cvmx_gmxx_stat_bp_s cn38xxp2;
+	struct cvmx_gmxx_stat_bp_s cn50xx;
+	struct cvmx_gmxx_stat_bp_s cn52xx;
+	struct cvmx_gmxx_stat_bp_s cn52xxp1;
+	struct cvmx_gmxx_stat_bp_s cn56xx;
+	struct cvmx_gmxx_stat_bp_s cn56xxp1;
+	struct cvmx_gmxx_stat_bp_s cn58xx;
+	struct cvmx_gmxx_stat_bp_s cn58xxp1;
+	struct cvmx_gmxx_stat_bp_s cn61xx;
+	struct cvmx_gmxx_stat_bp_s cn63xx;
+	struct cvmx_gmxx_stat_bp_s cn63xxp1;
+	struct cvmx_gmxx_stat_bp_s cn66xx;
+	struct cvmx_gmxx_stat_bp_s cn68xx;
+	struct cvmx_gmxx_stat_bp_s cn68xxp1;
+	struct cvmx_gmxx_stat_bp_s cn70xx;
+	struct cvmx_gmxx_stat_bp_s cn70xxp1;
+	struct cvmx_gmxx_stat_bp_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_stat_bp cvmx_gmxx_stat_bp_t;
+
+/**
+ * cvmx_gmx#_tb_reg
+ *
+ * DON'T PUT IN HRM*
+ *
+ */
+union cvmx_gmxx_tb_reg {
+	u64 u64;
+	struct cvmx_gmxx_tb_reg_s {
+		u64 reserved_1_63 : 63;
+		u64 wr_magic : 1;
+	} s;
+	struct cvmx_gmxx_tb_reg_s cn61xx;
+	struct cvmx_gmxx_tb_reg_s cn66xx;
+	struct cvmx_gmxx_tb_reg_s cn68xx;
+	struct cvmx_gmxx_tb_reg_s cn70xx;
+	struct cvmx_gmxx_tb_reg_s cn70xxp1;
+	struct cvmx_gmxx_tb_reg_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tb_reg cvmx_gmxx_tb_reg_t;
+
+/**
+ * cvmx_gmx#_tx#_append
+ *
+ * GMX_TX_APPEND = Packet TX Append Control
+ *
+ */
+union cvmx_gmxx_txx_append {
+	u64 u64;
+	struct cvmx_gmxx_txx_append_s {
+		u64 reserved_4_63 : 60;
+		u64 force_fcs : 1;
+		u64 fcs : 1;
+		u64 pad : 1;
+		u64 preamble : 1;
+	} s;
+	struct cvmx_gmxx_txx_append_s cn30xx;
+	struct cvmx_gmxx_txx_append_s cn31xx;
+	struct cvmx_gmxx_txx_append_s cn38xx;
+	struct cvmx_gmxx_txx_append_s cn38xxp2;
+	struct cvmx_gmxx_txx_append_s cn50xx;
+	struct cvmx_gmxx_txx_append_s cn52xx;
+	struct cvmx_gmxx_txx_append_s cn52xxp1;
+	struct cvmx_gmxx_txx_append_s cn56xx;
+	struct cvmx_gmxx_txx_append_s cn56xxp1;
+	struct cvmx_gmxx_txx_append_s cn58xx;
+	struct cvmx_gmxx_txx_append_s cn58xxp1;
+	struct cvmx_gmxx_txx_append_s cn61xx;
+	struct cvmx_gmxx_txx_append_s cn63xx;
+	struct cvmx_gmxx_txx_append_s cn63xxp1;
+	struct cvmx_gmxx_txx_append_s cn66xx;
+	struct cvmx_gmxx_txx_append_s cn68xx;
+	struct cvmx_gmxx_txx_append_s cn68xxp1;
+	struct cvmx_gmxx_txx_append_s cn70xx;
+	struct cvmx_gmxx_txx_append_s cn70xxp1;
+	struct cvmx_gmxx_txx_append_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_append cvmx_gmxx_txx_append_t;
+
+/**
+ * cvmx_gmx#_tx#_bck_crdt
+ *
+ * gmi_tx_bck to gmi_tx_out credit count register
+ *
+ */
+union cvmx_gmxx_txx_bck_crdt {
+	u64 u64;
+	struct cvmx_gmxx_txx_bck_crdt_s {
+		u64 reserved_4_63 : 60;
+		u64 cnt : 4;
+	} s;
+	struct cvmx_gmxx_txx_bck_crdt_s cn70xx;
+	struct cvmx_gmxx_txx_bck_crdt_s cn70xxp1;
+};
+
+typedef union cvmx_gmxx_txx_bck_crdt cvmx_gmxx_txx_bck_crdt_t;
+
+/**
+ * cvmx_gmx#_tx#_burst
+ *
+ * GMX_TX_BURST = Packet TX Burst Counter
+ *
+ */
+union cvmx_gmxx_txx_burst {
+	u64 u64;
+	struct cvmx_gmxx_txx_burst_s {
+		u64 reserved_16_63 : 48;
+		u64 burst : 16;
+	} s;
+	struct cvmx_gmxx_txx_burst_s cn30xx;
+	struct cvmx_gmxx_txx_burst_s cn31xx;
+	struct cvmx_gmxx_txx_burst_s cn38xx;
+	struct cvmx_gmxx_txx_burst_s cn38xxp2;
+	struct cvmx_gmxx_txx_burst_s cn50xx;
+	struct cvmx_gmxx_txx_burst_s cn52xx;
+	struct cvmx_gmxx_txx_burst_s cn52xxp1;
+	struct cvmx_gmxx_txx_burst_s cn56xx;
+	struct cvmx_gmxx_txx_burst_s cn56xxp1;
+	struct cvmx_gmxx_txx_burst_s cn58xx;
+	struct cvmx_gmxx_txx_burst_s cn58xxp1;
+	struct cvmx_gmxx_txx_burst_s cn61xx;
+	struct cvmx_gmxx_txx_burst_s cn63xx;
+	struct cvmx_gmxx_txx_burst_s cn63xxp1;
+	struct cvmx_gmxx_txx_burst_s cn66xx;
+	struct cvmx_gmxx_txx_burst_s cn68xx;
+	struct cvmx_gmxx_txx_burst_s cn68xxp1;
+	struct cvmx_gmxx_txx_burst_s cn70xx;
+	struct cvmx_gmxx_txx_burst_s cn70xxp1;
+	struct cvmx_gmxx_txx_burst_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_burst cvmx_gmxx_txx_burst_t;
+
+/**
+ * cvmx_gmx#_tx#_cbfc_xoff
+ */
+union cvmx_gmxx_txx_cbfc_xoff {
+	u64 u64;
+	struct cvmx_gmxx_txx_cbfc_xoff_s {
+		u64 reserved_16_63 : 48;
+		u64 xoff : 16;
+	} s;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn52xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn56xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn61xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn63xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn63xxp1;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn66xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn68xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn68xxp1;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn70xx;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cn70xxp1;
+	struct cvmx_gmxx_txx_cbfc_xoff_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_cbfc_xoff cvmx_gmxx_txx_cbfc_xoff_t;
+
+/**
+ * cvmx_gmx#_tx#_cbfc_xon
+ */
+union cvmx_gmxx_txx_cbfc_xon {
+	u64 u64;
+	struct cvmx_gmxx_txx_cbfc_xon_s {
+		u64 reserved_16_63 : 48;
+		u64 xon : 16;
+	} s;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn52xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn56xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn61xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn63xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn63xxp1;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn66xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn68xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn68xxp1;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn70xx;
+	struct cvmx_gmxx_txx_cbfc_xon_s cn70xxp1;
+	struct cvmx_gmxx_txx_cbfc_xon_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_cbfc_xon cvmx_gmxx_txx_cbfc_xon_t;
+
+/**
+ * cvmx_gmx#_tx#_clk
+ *
+ * Per Port
+ *
+ *
+ * GMX_TX_CLK = RGMII TX Clock Generation Register
+ *
+ * Notes:
+ * Programming Restrictions:
+ *  (1) In RGMII mode, if GMX_PRT_CFG[SPEED]==0, then CLK_CNT must be > 1.
+ *  (2) In MII mode, CLK_CNT == 1
+ *  (3) In RGMII or GMII mode, if CLK_CNT==0, Octeon will not generate a tx clock.
+ *
+ * RGMII Example:
+ *  Given a 125MHz PLL reference clock...
+ *   CLK_CNT ==  1 ==> 125.0MHz TXC clock period (8ns* 1)
+ *   CLK_CNT ==  5 ==>  25.0MHz TXC clock period (8ns* 5)
+ *   CLK_CNT == 50 ==>   2.5MHz TXC clock period (8ns*50)
+ */
+union cvmx_gmxx_txx_clk {
+	u64 u64;
+	struct cvmx_gmxx_txx_clk_s {
+		u64 reserved_6_63 : 58;
+		u64 clk_cnt : 6;
+	} s;
+	struct cvmx_gmxx_txx_clk_s cn30xx;
+	struct cvmx_gmxx_txx_clk_s cn31xx;
+	struct cvmx_gmxx_txx_clk_s cn38xx;
+	struct cvmx_gmxx_txx_clk_s cn38xxp2;
+	struct cvmx_gmxx_txx_clk_s cn50xx;
+	struct cvmx_gmxx_txx_clk_s cn58xx;
+	struct cvmx_gmxx_txx_clk_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_txx_clk cvmx_gmxx_txx_clk_t;
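+
+/*
+ * Illustrative sketch, not part of the original sources: deriving CLK_CNT
+ * from the 125MHz PLL reference for a target RGMII TXC rate, matching the
+ * table above (e.g. txc_hz == 25000000 gives CLK_CNT == 5).  The txc_hz
+ * variable and the csr_wr() helper are assumptions.
+ *
+ *	cvmx_gmxx_txx_clk_t clk;
+ *
+ *	clk.u64 = 0;
+ *	clk.s.clk_cnt = 125000000 / txc_hz;
+ *	csr_wr(CVMX_GMXX_TXX_CLK(port, interface), clk.u64);
+ */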
+
+/**
+ * cvmx_gmx#_tx#_ctl
+ *
+ * GMX_TX_CTL = TX Control register
+ *
+ */
+union cvmx_gmxx_txx_ctl {
+	u64 u64;
+	struct cvmx_gmxx_txx_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 xsdef_en : 1;
+		u64 xscol_en : 1;
+	} s;
+	struct cvmx_gmxx_txx_ctl_s cn30xx;
+	struct cvmx_gmxx_txx_ctl_s cn31xx;
+	struct cvmx_gmxx_txx_ctl_s cn38xx;
+	struct cvmx_gmxx_txx_ctl_s cn38xxp2;
+	struct cvmx_gmxx_txx_ctl_s cn50xx;
+	struct cvmx_gmxx_txx_ctl_s cn52xx;
+	struct cvmx_gmxx_txx_ctl_s cn52xxp1;
+	struct cvmx_gmxx_txx_ctl_s cn56xx;
+	struct cvmx_gmxx_txx_ctl_s cn56xxp1;
+	struct cvmx_gmxx_txx_ctl_s cn58xx;
+	struct cvmx_gmxx_txx_ctl_s cn58xxp1;
+	struct cvmx_gmxx_txx_ctl_s cn61xx;
+	struct cvmx_gmxx_txx_ctl_s cn63xx;
+	struct cvmx_gmxx_txx_ctl_s cn63xxp1;
+	struct cvmx_gmxx_txx_ctl_s cn66xx;
+	struct cvmx_gmxx_txx_ctl_s cn68xx;
+	struct cvmx_gmxx_txx_ctl_s cn68xxp1;
+	struct cvmx_gmxx_txx_ctl_s cn70xx;
+	struct cvmx_gmxx_txx_ctl_s cn70xxp1;
+	struct cvmx_gmxx_txx_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_ctl cvmx_gmxx_txx_ctl_t;
+
+/**
+ * cvmx_gmx#_tx#_jam_mode
+ */
+union cvmx_gmxx_txx_jam_mode {
+	u64 u64;
+	struct cvmx_gmxx_txx_jam_mode_s {
+		u64 reserved_1_63 : 63;
+		u64 mode : 1;
+	} s;
+	struct cvmx_gmxx_txx_jam_mode_s cn70xx;
+	struct cvmx_gmxx_txx_jam_mode_s cn70xxp1;
+};
+
+typedef union cvmx_gmxx_txx_jam_mode cvmx_gmxx_txx_jam_mode_t;
+
+/**
+ * cvmx_gmx#_tx#_min_pkt
+ *
+ * GMX_TX_MIN_PKT = Packet TX Min Size Packet (PAD up to min size)
+ *
+ */
+union cvmx_gmxx_txx_min_pkt {
+	u64 u64;
+	struct cvmx_gmxx_txx_min_pkt_s {
+		u64 reserved_8_63 : 56;
+		u64 min_size : 8;
+	} s;
+	struct cvmx_gmxx_txx_min_pkt_s cn30xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn31xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn38xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn38xxp2;
+	struct cvmx_gmxx_txx_min_pkt_s cn50xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn52xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn52xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cn56xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn56xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cn58xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn58xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cn61xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn63xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn63xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cn66xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn68xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn68xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cn70xx;
+	struct cvmx_gmxx_txx_min_pkt_s cn70xxp1;
+	struct cvmx_gmxx_txx_min_pkt_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_min_pkt cvmx_gmxx_txx_min_pkt_t;
+
+/**
+ * cvmx_gmx#_tx#_pause_pkt_interval
+ *
+ * GMX_TX_PAUSE_PKT_INTERVAL = Packet TX Pause Packet transmission interval - how often PAUSE packets will be sent
+ *
+ *
+ * Notes:
+ * Choosing proper values of GMX_TX_PAUSE_PKT_TIME[TIME] and
+ * GMX_TX_PAUSE_PKT_INTERVAL[INTERVAL] can be challenging to the system
+ * designer.  It is suggested that TIME be much greater than INTERVAL and
+ * GMX_TX_PAUSE_ZERO[SEND] be set.  This allows a periodic refresh of the PAUSE
+ * count and then when the backpressure condition is lifted, a PAUSE packet
+ * with TIME==0 will be sent indicating that Octane is ready for additional
+ * data.
+ *
+ * If the system chooses to not set GMX_TX_PAUSE_ZERO[SEND], then it is
+ * suggested that TIME and INTERVAL are programmed such that they satisfy the
+ * following rule...
+ *
+ *    INTERVAL <= TIME - (largest_pkt_size + IFG + pause_pkt_size)
+ *
+ * where largest_pkt_size is the largest packet that the system can send
+ * (normally 1518B), IFG is the interframe gap and pause_pkt_size is the size
+ * of the PAUSE packet (normally 64B).
+ */
+union cvmx_gmxx_txx_pause_pkt_interval {
+	u64 u64;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s {
+		u64 reserved_16_63 : 48;
+		u64 interval : 16;
+	} s;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn30xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn31xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn38xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn38xxp2;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn50xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn52xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn52xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn56xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn56xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn58xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn58xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn61xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn63xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn63xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn66xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn68xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn68xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn70xx;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cn70xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_interval_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_pause_pkt_interval cvmx_gmxx_txx_pause_pkt_interval_t;
+
+/**
+ * cvmx_gmx#_tx#_pause_pkt_time
+ *
+ * GMX_TX_PAUSE_PKT_TIME = Packet TX Pause Packet pause_time field
+ *
+ *
+ * Notes:
+ * Choosing proper values of GMX_TX_PAUSE_PKT_TIME[TIME] and
+ * GMX_TX_PAUSE_PKT_INTERVAL[INTERVAL] can be challenging to the system
+ * designer.  It is suggested that TIME be much greater than INTERVAL and
+ * GMX_TX_PAUSE_ZERO[SEND] be set.  This allows a periodic refresh of the PAUSE
+ * count and then when the backpressure condition is lifted, a PAUSE packet
+ * with TIME==0 will be sent indicating that Octane is ready for additional
+ * data.
+ *
+ * If the system chooses to not set GMX_TX_PAUSE_ZERO[SEND], then it is
+ * suggested that TIME and INTERVAL are programmed such that they satisfy the
+ * following rule...
+ *
+ *    INTERVAL <= TIME - (largest_pkt_size + IFG + pause_pkt_size)
+ *
+ * where largest_pkt_size is the largest packet that the system can send
+ * (normally 1518B), IFG is the interframe gap and pause_pkt_size is the size
+ * of the PAUSE packet (normally 64B).
+ */
+union cvmx_gmxx_txx_pause_pkt_time {
+	u64 u64;
+	struct cvmx_gmxx_txx_pause_pkt_time_s {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} s;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn30xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn31xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn38xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn38xxp2;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn50xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn52xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn52xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn56xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn56xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn58xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn58xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn61xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn63xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn63xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn66xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn68xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn68xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn70xx;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cn70xxp1;
+	struct cvmx_gmxx_txx_pause_pkt_time_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_pause_pkt_time cvmx_gmxx_txx_pause_pkt_time_t;
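+
+/*
+ * Illustrative worked example, not part of the original sources, of the
+ * INTERVAL <= TIME - (largest_pkt_size + IFG + pause_pkt_size) rule in the
+ * notes above, using the values the notes call typical (1518B packets, 64B
+ * PAUSE packets) plus a standard Ethernet 12B IFG; the TIME value and the
+ * csr_wr() helper are assumptions.
+ *
+ *	cvmx_gmxx_txx_pause_pkt_time_t time;
+ *	cvmx_gmxx_txx_pause_pkt_interval_t interval;
+ *
+ *	time.u64 = 0;
+ *	time.s.time = 33000;
+ *	interval.u64 = 0;
+ *	interval.s.interval = 33000 - (1518 + 12 + 64);
+ *	csr_wr(CVMX_GMXX_TXX_PAUSE_PKT_TIME(port, interface), time.u64);
+ *	csr_wr(CVMX_GMXX_TXX_PAUSE_PKT_INTERVAL(port, interface),
+ *	       interval.u64);
+ */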
+
+/**
+ * cvmx_gmx#_tx#_pause_togo
+ *
+ * GMX_TX_PAUSE_TOGO = Packet TX Amount of time remaining to backpressure
+ *
+ */
+union cvmx_gmxx_txx_pause_togo {
+	u64 u64;
+	struct cvmx_gmxx_txx_pause_togo_s {
+		u64 reserved_32_63 : 32;
+		u64 msg_time : 16;
+		u64 time : 16;
+	} s;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} cn30xx;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn31xx;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn38xx;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn38xxp2;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn50xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn52xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn52xxp1;
+	struct cvmx_gmxx_txx_pause_togo_s cn56xx;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn56xxp1;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn58xx;
+	struct cvmx_gmxx_txx_pause_togo_cn30xx cn58xxp1;
+	struct cvmx_gmxx_txx_pause_togo_s cn61xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn63xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn63xxp1;
+	struct cvmx_gmxx_txx_pause_togo_s cn66xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn68xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn68xxp1;
+	struct cvmx_gmxx_txx_pause_togo_s cn70xx;
+	struct cvmx_gmxx_txx_pause_togo_s cn70xxp1;
+	struct cvmx_gmxx_txx_pause_togo_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_pause_togo cvmx_gmxx_txx_pause_togo_t;
+
+/**
+ * cvmx_gmx#_tx#_pause_zero
+ *
+ * GMX_TX_PAUSE_ZERO = Packet TX Send PAUSE-zero enable
+ *
+ */
+union cvmx_gmxx_txx_pause_zero {
+	u64 u64;
+	struct cvmx_gmxx_txx_pause_zero_s {
+		u64 reserved_1_63 : 63;
+		u64 send : 1;
+	} s;
+	struct cvmx_gmxx_txx_pause_zero_s cn30xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn31xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn38xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn38xxp2;
+	struct cvmx_gmxx_txx_pause_zero_s cn50xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn52xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn52xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cn56xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn56xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cn58xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn58xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cn61xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn63xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn63xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cn66xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn68xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn68xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cn70xx;
+	struct cvmx_gmxx_txx_pause_zero_s cn70xxp1;
+	struct cvmx_gmxx_txx_pause_zero_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_pause_zero cvmx_gmxx_txx_pause_zero_t;
+
+/**
+ * cvmx_gmx#_tx#_pipe
+ */
+union cvmx_gmxx_txx_pipe {
+	u64 u64;
+	struct cvmx_gmxx_txx_pipe_s {
+		u64 reserved_33_63 : 31;
+		u64 ign_bp : 1;
+		u64 reserved_21_31 : 11;
+		u64 nump : 5;
+		u64 reserved_7_15 : 9;
+		u64 base : 7;
+	} s;
+	struct cvmx_gmxx_txx_pipe_s cn68xx;
+	struct cvmx_gmxx_txx_pipe_s cn68xxp1;
+};
+
+typedef union cvmx_gmxx_txx_pipe cvmx_gmxx_txx_pipe_t;
+
+/**
+ * cvmx_gmx#_tx#_sgmii_ctl
+ */
+union cvmx_gmxx_txx_sgmii_ctl {
+	u64 u64;
+	struct cvmx_gmxx_txx_sgmii_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 align : 1;
+	} s;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn52xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn52xxp1;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn56xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn56xxp1;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn61xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn63xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn63xxp1;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn66xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn68xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn68xxp1;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn70xx;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cn70xxp1;
+	struct cvmx_gmxx_txx_sgmii_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_sgmii_ctl cvmx_gmxx_txx_sgmii_ctl_t;
+
+/**
+ * cvmx_gmx#_tx#_slot
+ *
+ * GMX_TX_SLOT = Packet TX Slottime Counter
+ *
+ */
+union cvmx_gmxx_txx_slot {
+	u64 u64;
+	struct cvmx_gmxx_txx_slot_s {
+		u64 reserved_10_63 : 54;
+		u64 slot : 10;
+	} s;
+	struct cvmx_gmxx_txx_slot_s cn30xx;
+	struct cvmx_gmxx_txx_slot_s cn31xx;
+	struct cvmx_gmxx_txx_slot_s cn38xx;
+	struct cvmx_gmxx_txx_slot_s cn38xxp2;
+	struct cvmx_gmxx_txx_slot_s cn50xx;
+	struct cvmx_gmxx_txx_slot_s cn52xx;
+	struct cvmx_gmxx_txx_slot_s cn52xxp1;
+	struct cvmx_gmxx_txx_slot_s cn56xx;
+	struct cvmx_gmxx_txx_slot_s cn56xxp1;
+	struct cvmx_gmxx_txx_slot_s cn58xx;
+	struct cvmx_gmxx_txx_slot_s cn58xxp1;
+	struct cvmx_gmxx_txx_slot_s cn61xx;
+	struct cvmx_gmxx_txx_slot_s cn63xx;
+	struct cvmx_gmxx_txx_slot_s cn63xxp1;
+	struct cvmx_gmxx_txx_slot_s cn66xx;
+	struct cvmx_gmxx_txx_slot_s cn68xx;
+	struct cvmx_gmxx_txx_slot_s cn68xxp1;
+	struct cvmx_gmxx_txx_slot_s cn70xx;
+	struct cvmx_gmxx_txx_slot_s cn70xxp1;
+	struct cvmx_gmxx_txx_slot_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_slot cvmx_gmxx_txx_slot_t;
+
+/**
+ * cvmx_gmx#_tx#_soft_pause
+ *
+ * GMX_TX_SOFT_PAUSE = Packet TX Software Pause
+ *
+ */
+union cvmx_gmxx_txx_soft_pause {
+	u64 u64;
+	struct cvmx_gmxx_txx_soft_pause_s {
+		u64 reserved_16_63 : 48;
+		u64 time : 16;
+	} s;
+	struct cvmx_gmxx_txx_soft_pause_s cn30xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn31xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn38xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn38xxp2;
+	struct cvmx_gmxx_txx_soft_pause_s cn50xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn52xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn52xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cn56xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn56xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cn58xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn58xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cn61xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn63xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn63xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cn66xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn68xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn68xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cn70xx;
+	struct cvmx_gmxx_txx_soft_pause_s cn70xxp1;
+	struct cvmx_gmxx_txx_soft_pause_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_soft_pause cvmx_gmxx_txx_soft_pause_t;
+
+/**
+ * cvmx_gmx#_tx#_stat0
+ *
+ * GMX_TX_STAT0 = GMX_TX_STATS_XSDEF / GMX_TX_STATS_XSCOL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat0 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat0_s {
+		u64 xsdef : 32;
+		u64 xscol : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat0_s cn30xx;
+	struct cvmx_gmxx_txx_stat0_s cn31xx;
+	struct cvmx_gmxx_txx_stat0_s cn38xx;
+	struct cvmx_gmxx_txx_stat0_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat0_s cn50xx;
+	struct cvmx_gmxx_txx_stat0_s cn52xx;
+	struct cvmx_gmxx_txx_stat0_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat0_s cn56xx;
+	struct cvmx_gmxx_txx_stat0_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat0_s cn58xx;
+	struct cvmx_gmxx_txx_stat0_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat0_s cn61xx;
+	struct cvmx_gmxx_txx_stat0_s cn63xx;
+	struct cvmx_gmxx_txx_stat0_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat0_s cn66xx;
+	struct cvmx_gmxx_txx_stat0_s cn68xx;
+	struct cvmx_gmxx_txx_stat0_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat0_s cn70xx;
+	struct cvmx_gmxx_txx_stat0_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat0_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat0 cvmx_gmxx_txx_stat0_t;
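+
+/*
+ * Usage sketch (illustrative only, not part of the imported header): all
+ * chip variants above share the cvmx_gmxx_txx_stat0_s layout, so both
+ * counters can be read through the common "s" view. This assumes the
+ * CVMX_GMXX_TXX_STAT0() address macro defined earlier in this file and a
+ * csr_rd() accessor (cvmx_read_csr() in the original SDK).
+ */
+static inline void example_tx_stat0(int port, int interface, u32 *xsdef, u32 *xscol)
+{
+	cvmx_gmxx_txx_stat0_t stat0;
+
+	/* This read also clears the counters if GMX_TX_STATS_CTL[RD_CLR] is set */
+	stat0.u64 = csr_rd(CVMX_GMXX_TXX_STAT0(port, interface));
+	*xsdef = stat0.s.xsdef;
+	*xscol = stat0.s.xscol;
+}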
+
+/**
+ * cvmx_gmx#_tx#_stat1
+ *
+ * GMX_TX_STAT1 = GMX_TX_STATS_SCOL  / GMX_TX_STATS_MCOL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat1 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat1_s {
+		u64 scol : 32;
+		u64 mcol : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat1_s cn30xx;
+	struct cvmx_gmxx_txx_stat1_s cn31xx;
+	struct cvmx_gmxx_txx_stat1_s cn38xx;
+	struct cvmx_gmxx_txx_stat1_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat1_s cn50xx;
+	struct cvmx_gmxx_txx_stat1_s cn52xx;
+	struct cvmx_gmxx_txx_stat1_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat1_s cn56xx;
+	struct cvmx_gmxx_txx_stat1_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat1_s cn58xx;
+	struct cvmx_gmxx_txx_stat1_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat1_s cn61xx;
+	struct cvmx_gmxx_txx_stat1_s cn63xx;
+	struct cvmx_gmxx_txx_stat1_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat1_s cn66xx;
+	struct cvmx_gmxx_txx_stat1_s cn68xx;
+	struct cvmx_gmxx_txx_stat1_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat1_s cn70xx;
+	struct cvmx_gmxx_txx_stat1_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat1_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat1 cvmx_gmxx_txx_stat1_t;
+
+/**
+ * cvmx_gmx#_tx#_stat2
+ *
+ * GMX_TX_STAT2 = GMX_TX_STATS_OCTS
+ *
+ *
+ * Notes:
+ * - Octet counts are the sum of all data transmitted on the wire including
+ *   packet data, pad bytes, fcs bytes, pause bytes, and jam bytes.  The octet
+ *   counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat2 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_gmxx_txx_stat2_s cn30xx;
+	struct cvmx_gmxx_txx_stat2_s cn31xx;
+	struct cvmx_gmxx_txx_stat2_s cn38xx;
+	struct cvmx_gmxx_txx_stat2_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat2_s cn50xx;
+	struct cvmx_gmxx_txx_stat2_s cn52xx;
+	struct cvmx_gmxx_txx_stat2_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat2_s cn56xx;
+	struct cvmx_gmxx_txx_stat2_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat2_s cn58xx;
+	struct cvmx_gmxx_txx_stat2_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat2_s cn61xx;
+	struct cvmx_gmxx_txx_stat2_s cn63xx;
+	struct cvmx_gmxx_txx_stat2_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat2_s cn66xx;
+	struct cvmx_gmxx_txx_stat2_s cn68xx;
+	struct cvmx_gmxx_txx_stat2_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat2_s cn70xx;
+	struct cvmx_gmxx_txx_stat2_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat2_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat2 cvmx_gmxx_txx_stat2_t;
+
+/**
+ * cvmx_gmx#_tx#_stat3
+ *
+ * GMX_TX_STAT3 = GMX_TX_STATS_PKTS
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat3 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat3_s {
+		u64 reserved_32_63 : 32;
+		u64 pkts : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat3_s cn30xx;
+	struct cvmx_gmxx_txx_stat3_s cn31xx;
+	struct cvmx_gmxx_txx_stat3_s cn38xx;
+	struct cvmx_gmxx_txx_stat3_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat3_s cn50xx;
+	struct cvmx_gmxx_txx_stat3_s cn52xx;
+	struct cvmx_gmxx_txx_stat3_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat3_s cn56xx;
+	struct cvmx_gmxx_txx_stat3_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat3_s cn58xx;
+	struct cvmx_gmxx_txx_stat3_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat3_s cn61xx;
+	struct cvmx_gmxx_txx_stat3_s cn63xx;
+	struct cvmx_gmxx_txx_stat3_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat3_s cn66xx;
+	struct cvmx_gmxx_txx_stat3_s cn68xx;
+	struct cvmx_gmxx_txx_stat3_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat3_s cn70xx;
+	struct cvmx_gmxx_txx_stat3_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat3_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat3 cvmx_gmxx_txx_stat3_t;
+
+/**
+ * cvmx_gmx#_tx#_stat4
+ *
+ * GMX_TX_STAT4 = GMX_TX_STATS_HIST1 (64) / GMX_TX_STATS_HIST0 (<64)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat4 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat4_s {
+		u64 hist1 : 32;
+		u64 hist0 : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat4_s cn30xx;
+	struct cvmx_gmxx_txx_stat4_s cn31xx;
+	struct cvmx_gmxx_txx_stat4_s cn38xx;
+	struct cvmx_gmxx_txx_stat4_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat4_s cn50xx;
+	struct cvmx_gmxx_txx_stat4_s cn52xx;
+	struct cvmx_gmxx_txx_stat4_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat4_s cn56xx;
+	struct cvmx_gmxx_txx_stat4_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat4_s cn58xx;
+	struct cvmx_gmxx_txx_stat4_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat4_s cn61xx;
+	struct cvmx_gmxx_txx_stat4_s cn63xx;
+	struct cvmx_gmxx_txx_stat4_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat4_s cn66xx;
+	struct cvmx_gmxx_txx_stat4_s cn68xx;
+	struct cvmx_gmxx_txx_stat4_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat4_s cn70xx;
+	struct cvmx_gmxx_txx_stat4_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat4_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat4 cvmx_gmxx_txx_stat4_t;
+
+/**
+ * cvmx_gmx#_tx#_stat5
+ *
+ * GMX_TX_STAT5 = GMX_TX_STATS_HIST3 (128- 255) / GMX_TX_STATS_HIST2 (65- 127)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat5 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat5_s {
+		u64 hist3 : 32;
+		u64 hist2 : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat5_s cn30xx;
+	struct cvmx_gmxx_txx_stat5_s cn31xx;
+	struct cvmx_gmxx_txx_stat5_s cn38xx;
+	struct cvmx_gmxx_txx_stat5_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat5_s cn50xx;
+	struct cvmx_gmxx_txx_stat5_s cn52xx;
+	struct cvmx_gmxx_txx_stat5_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat5_s cn56xx;
+	struct cvmx_gmxx_txx_stat5_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat5_s cn58xx;
+	struct cvmx_gmxx_txx_stat5_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat5_s cn61xx;
+	struct cvmx_gmxx_txx_stat5_s cn63xx;
+	struct cvmx_gmxx_txx_stat5_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat5_s cn66xx;
+	struct cvmx_gmxx_txx_stat5_s cn68xx;
+	struct cvmx_gmxx_txx_stat5_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat5_s cn70xx;
+	struct cvmx_gmxx_txx_stat5_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat5_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat5 cvmx_gmxx_txx_stat5_t;
+
+/**
+ * cvmx_gmx#_tx#_stat6
+ *
+ * GMX_TX_STAT6 = GMX_TX_STATS_HIST5 (512-1023) / GMX_TX_STATS_HIST4 (256-511)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat6 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat6_s {
+		u64 hist5 : 32;
+		u64 hist4 : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat6_s cn30xx;
+	struct cvmx_gmxx_txx_stat6_s cn31xx;
+	struct cvmx_gmxx_txx_stat6_s cn38xx;
+	struct cvmx_gmxx_txx_stat6_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat6_s cn50xx;
+	struct cvmx_gmxx_txx_stat6_s cn52xx;
+	struct cvmx_gmxx_txx_stat6_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat6_s cn56xx;
+	struct cvmx_gmxx_txx_stat6_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat6_s cn58xx;
+	struct cvmx_gmxx_txx_stat6_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat6_s cn61xx;
+	struct cvmx_gmxx_txx_stat6_s cn63xx;
+	struct cvmx_gmxx_txx_stat6_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat6_s cn66xx;
+	struct cvmx_gmxx_txx_stat6_s cn68xx;
+	struct cvmx_gmxx_txx_stat6_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat6_s cn70xx;
+	struct cvmx_gmxx_txx_stat6_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat6_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat6 cvmx_gmxx_txx_stat6_t;
+
+/**
+ * cvmx_gmx#_tx#_stat7
+ *
+ * GMX_TX_STAT7 = GMX_TX_STATS_HIST7 (1024-1518) / GMX_TX_STATS_HIST6 (>1518)
+ *
+ *
+ * Notes:
+ * - Packet length is the sum of all data transmitted on the wire for the given
+ *   packet including packet data, pad bytes, fcs bytes, pause bytes, and jam
+ *   bytes.  The octet counts do not include PREAMBLE byte or EXTEND cycles.
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat7 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat7_s {
+		u64 hist7 : 32;
+		u64 hist6 : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat7_s cn30xx;
+	struct cvmx_gmxx_txx_stat7_s cn31xx;
+	struct cvmx_gmxx_txx_stat7_s cn38xx;
+	struct cvmx_gmxx_txx_stat7_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat7_s cn50xx;
+	struct cvmx_gmxx_txx_stat7_s cn52xx;
+	struct cvmx_gmxx_txx_stat7_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat7_s cn56xx;
+	struct cvmx_gmxx_txx_stat7_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat7_s cn58xx;
+	struct cvmx_gmxx_txx_stat7_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat7_s cn61xx;
+	struct cvmx_gmxx_txx_stat7_s cn63xx;
+	struct cvmx_gmxx_txx_stat7_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat7_s cn66xx;
+	struct cvmx_gmxx_txx_stat7_s cn68xx;
+	struct cvmx_gmxx_txx_stat7_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat7_s cn70xx;
+	struct cvmx_gmxx_txx_stat7_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat7_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat7 cvmx_gmxx_txx_stat7_t;
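+
+/*
+ * Usage sketch (illustrative only, not part of the imported header):
+ * STAT4..STAT7 together hold the eight TX packet-size histogram bins
+ * described in the notes above. Assumes the CVMX_GMXX_TXX_STAT4..7()
+ * address macros defined earlier in this file and a csr_rd() accessor
+ * (cvmx_read_csr() in the original SDK).
+ */
+static inline void example_tx_size_histogram(int port, int interface, u32 hist[8])
+{
+	cvmx_gmxx_txx_stat4_t s4;
+	cvmx_gmxx_txx_stat5_t s5;
+	cvmx_gmxx_txx_stat6_t s6;
+	cvmx_gmxx_txx_stat7_t s7;
+
+	s4.u64 = csr_rd(CVMX_GMXX_TXX_STAT4(port, interface));
+	s5.u64 = csr_rd(CVMX_GMXX_TXX_STAT5(port, interface));
+	s6.u64 = csr_rd(CVMX_GMXX_TXX_STAT6(port, interface));
+	s7.u64 = csr_rd(CVMX_GMXX_TXX_STAT7(port, interface));
+
+	hist[0] = s4.s.hist0;	/* packets < 64 bytes */
+	hist[1] = s4.s.hist1;	/* packets of exactly 64 bytes */
+	hist[2] = s5.s.hist2;	/* 65-127 bytes */
+	hist[3] = s5.s.hist3;	/* 128-255 bytes */
+	hist[4] = s6.s.hist4;	/* 256-511 bytes */
+	hist[5] = s6.s.hist5;	/* 512-1023 bytes */
+	hist[6] = s7.s.hist6;	/* > 1518 bytes, per the register notes */
+	hist[7] = s7.s.hist7;	/* 1024-1518 bytes, per the register notes */
+}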
+
+/**
+ * cvmx_gmx#_tx#_stat8
+ *
+ * GMX_TX_STAT8 = GMX_TX_STATS_MCST  / GMX_TX_STATS_BCST
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ * - Note, GMX determines if the packet is MCST or BCST from the DMAC of the
+ *   packet.  GMX assumes that the DMAC lies in the first 6 bytes of the packet
+ *   as per the 802.3 frame definition.  If the system requires additional data
+ *   before the L2 header, then the MCST and BCST counters may not reflect
+ *   reality and should be ignored by software.
+ */
+union cvmx_gmxx_txx_stat8 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat8_s {
+		u64 mcst : 32;
+		u64 bcst : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat8_s cn30xx;
+	struct cvmx_gmxx_txx_stat8_s cn31xx;
+	struct cvmx_gmxx_txx_stat8_s cn38xx;
+	struct cvmx_gmxx_txx_stat8_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat8_s cn50xx;
+	struct cvmx_gmxx_txx_stat8_s cn52xx;
+	struct cvmx_gmxx_txx_stat8_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat8_s cn56xx;
+	struct cvmx_gmxx_txx_stat8_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat8_s cn58xx;
+	struct cvmx_gmxx_txx_stat8_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat8_s cn61xx;
+	struct cvmx_gmxx_txx_stat8_s cn63xx;
+	struct cvmx_gmxx_txx_stat8_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat8_s cn66xx;
+	struct cvmx_gmxx_txx_stat8_s cn68xx;
+	struct cvmx_gmxx_txx_stat8_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat8_s cn70xx;
+	struct cvmx_gmxx_txx_stat8_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat8_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat8 cvmx_gmxx_txx_stat8_t;
+
+/**
+ * cvmx_gmx#_tx#_stat9
+ *
+ * GMX_TX_STAT9 = GMX_TX_STATS_UNDFLW / GMX_TX_STATS_CTL
+ *
+ *
+ * Notes:
+ * - Cleared either by a write (of any value) or a read when GMX_TX_STATS_CTL[RD_CLR] is set
+ * - Counters will wrap
+ */
+union cvmx_gmxx_txx_stat9 {
+	u64 u64;
+	struct cvmx_gmxx_txx_stat9_s {
+		u64 undflw : 32;
+		u64 ctl : 32;
+	} s;
+	struct cvmx_gmxx_txx_stat9_s cn30xx;
+	struct cvmx_gmxx_txx_stat9_s cn31xx;
+	struct cvmx_gmxx_txx_stat9_s cn38xx;
+	struct cvmx_gmxx_txx_stat9_s cn38xxp2;
+	struct cvmx_gmxx_txx_stat9_s cn50xx;
+	struct cvmx_gmxx_txx_stat9_s cn52xx;
+	struct cvmx_gmxx_txx_stat9_s cn52xxp1;
+	struct cvmx_gmxx_txx_stat9_s cn56xx;
+	struct cvmx_gmxx_txx_stat9_s cn56xxp1;
+	struct cvmx_gmxx_txx_stat9_s cn58xx;
+	struct cvmx_gmxx_txx_stat9_s cn58xxp1;
+	struct cvmx_gmxx_txx_stat9_s cn61xx;
+	struct cvmx_gmxx_txx_stat9_s cn63xx;
+	struct cvmx_gmxx_txx_stat9_s cn63xxp1;
+	struct cvmx_gmxx_txx_stat9_s cn66xx;
+	struct cvmx_gmxx_txx_stat9_s cn68xx;
+	struct cvmx_gmxx_txx_stat9_s cn68xxp1;
+	struct cvmx_gmxx_txx_stat9_s cn70xx;
+	struct cvmx_gmxx_txx_stat9_s cn70xxp1;
+	struct cvmx_gmxx_txx_stat9_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stat9 cvmx_gmxx_txx_stat9_t;
+
+/**
+ * cvmx_gmx#_tx#_stats_ctl
+ *
+ * GMX_TX_STATS_CTL = TX Stats Control register
+ *
+ */
+union cvmx_gmxx_txx_stats_ctl {
+	u64 u64;
+	struct cvmx_gmxx_txx_stats_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 rd_clr : 1;
+	} s;
+	struct cvmx_gmxx_txx_stats_ctl_s cn30xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn31xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn38xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn38xxp2;
+	struct cvmx_gmxx_txx_stats_ctl_s cn50xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn52xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn52xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cn56xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn56xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cn58xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn58xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cn61xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn63xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn63xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cn66xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn68xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn68xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cn70xx;
+	struct cvmx_gmxx_txx_stats_ctl_s cn70xxp1;
+	struct cvmx_gmxx_txx_stats_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_stats_ctl cvmx_gmxx_txx_stats_ctl_t;
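+
+/*
+ * Usage sketch (illustrative only, not part of the imported header):
+ * setting RD_CLR makes every GMX_TX_STAT* read above clear the counter it
+ * returns. Assumes the CVMX_GMXX_TXX_STATS_CTL() address macro defined
+ * earlier in this file and csr_rd()/csr_wr() accessors
+ * (cvmx_read_csr()/cvmx_write_csr() in the original SDK).
+ */
+static inline void example_enable_read_to_clear(int port, int interface)
+{
+	cvmx_gmxx_txx_stats_ctl_t ctl;
+
+	ctl.u64 = csr_rd(CVMX_GMXX_TXX_STATS_CTL(port, interface));
+	ctl.s.rd_clr = 1;	/* stats now clear on read */
+	csr_wr(CVMX_GMXX_TXX_STATS_CTL(port, interface), ctl.u64);
+}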
+
+/**
+ * cvmx_gmx#_tx#_thresh
+ *
+ * Per Port
+ * GMX_TX_THRESH = Packet TX Threshold
+ */
+union cvmx_gmxx_txx_thresh {
+	u64 u64;
+	struct cvmx_gmxx_txx_thresh_s {
+		u64 reserved_10_63 : 54;
+		u64 cnt : 10;
+	} s;
+	struct cvmx_gmxx_txx_thresh_cn30xx {
+		u64 reserved_7_63 : 57;
+		u64 cnt : 7;
+	} cn30xx;
+	struct cvmx_gmxx_txx_thresh_cn30xx cn31xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx {
+		u64 reserved_9_63 : 55;
+		u64 cnt : 9;
+	} cn38xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn38xxp2;
+	struct cvmx_gmxx_txx_thresh_cn30xx cn50xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn52xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn52xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn56xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn56xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn58xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn58xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn61xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn63xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn63xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn66xx;
+	struct cvmx_gmxx_txx_thresh_s cn68xx;
+	struct cvmx_gmxx_txx_thresh_s cn68xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn70xx;
+	struct cvmx_gmxx_txx_thresh_cn38xx cn70xxp1;
+	struct cvmx_gmxx_txx_thresh_cn38xx cnf71xx;
+};
+
+typedef union cvmx_gmxx_txx_thresh cvmx_gmxx_txx_thresh_t;
+
+/**
+ * cvmx_gmx#_tx_bp
+ *
+ * GMX_TX_BP = Packet Interface TX BackPressure Register
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of BP is used.
+ *
+ */
+union cvmx_gmxx_tx_bp {
+	u64 u64;
+	struct cvmx_gmxx_tx_bp_s {
+		u64 reserved_4_63 : 60;
+		u64 bp : 4;
+	} s;
+	struct cvmx_gmxx_tx_bp_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 bp : 3;
+	} cn30xx;
+	struct cvmx_gmxx_tx_bp_cn30xx cn31xx;
+	struct cvmx_gmxx_tx_bp_s cn38xx;
+	struct cvmx_gmxx_tx_bp_s cn38xxp2;
+	struct cvmx_gmxx_tx_bp_cn30xx cn50xx;
+	struct cvmx_gmxx_tx_bp_s cn52xx;
+	struct cvmx_gmxx_tx_bp_s cn52xxp1;
+	struct cvmx_gmxx_tx_bp_s cn56xx;
+	struct cvmx_gmxx_tx_bp_s cn56xxp1;
+	struct cvmx_gmxx_tx_bp_s cn58xx;
+	struct cvmx_gmxx_tx_bp_s cn58xxp1;
+	struct cvmx_gmxx_tx_bp_s cn61xx;
+	struct cvmx_gmxx_tx_bp_s cn63xx;
+	struct cvmx_gmxx_tx_bp_s cn63xxp1;
+	struct cvmx_gmxx_tx_bp_s cn66xx;
+	struct cvmx_gmxx_tx_bp_s cn68xx;
+	struct cvmx_gmxx_tx_bp_s cn68xxp1;
+	struct cvmx_gmxx_tx_bp_s cn70xx;
+	struct cvmx_gmxx_tx_bp_s cn70xxp1;
+	struct cvmx_gmxx_tx_bp_cnf71xx {
+		u64 reserved_2_63 : 62;
+		u64 bp : 2;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_bp cvmx_gmxx_tx_bp_t;
+
+/**
+ * cvmx_gmx#_tx_clk_msk#
+ *
+ * GMX_TX_CLK_MSK = GMX Clock Select
+ *
+ */
+union cvmx_gmxx_tx_clk_mskx {
+	u64 u64;
+	struct cvmx_gmxx_tx_clk_mskx_s {
+		u64 reserved_1_63 : 63;
+		u64 msk : 1;
+	} s;
+	struct cvmx_gmxx_tx_clk_mskx_s cn30xx;
+	struct cvmx_gmxx_tx_clk_mskx_s cn50xx;
+};
+
+typedef union cvmx_gmxx_tx_clk_mskx cvmx_gmxx_tx_clk_mskx_t;
+
+/**
+ * cvmx_gmx#_tx_col_attempt
+ *
+ * GMX_TX_COL_ATTEMPT = Packet TX collision attempts before dropping frame
+ *
+ */
+union cvmx_gmxx_tx_col_attempt {
+	u64 u64;
+	struct cvmx_gmxx_tx_col_attempt_s {
+		u64 reserved_5_63 : 59;
+		u64 limit : 5;
+	} s;
+	struct cvmx_gmxx_tx_col_attempt_s cn30xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn31xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn38xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn38xxp2;
+	struct cvmx_gmxx_tx_col_attempt_s cn50xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn52xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn52xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cn56xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn56xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cn58xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn58xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cn61xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn63xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn63xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cn66xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn68xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn68xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cn70xx;
+	struct cvmx_gmxx_tx_col_attempt_s cn70xxp1;
+	struct cvmx_gmxx_tx_col_attempt_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_col_attempt cvmx_gmxx_tx_col_attempt_t;
+
+/**
+ * cvmx_gmx#_tx_corrupt
+ *
+ * GMX_TX_CORRUPT = TX - Corrupt TX packets with the ERR bit set
+ *
+ *
+ * Notes:
+ * Packets sent from PKO with the ERR wire asserted will be corrupted by
+ * the transmitter if CORRUPT[prt] is set (XAUI uses prt==0).
+ *
+ * Corruption means that GMX will send a bad FCS value.  If GMX_TX_APPEND[FCS]
+ * is clear then no FCS is sent and the GMX cannot corrupt it.  The corrupt FCS
+ * value is 0xeeeeeeee for SGMII/1000Base-X and 4 bytes of the error
+ * propagation code in XAUI mode.
+ */
+union cvmx_gmxx_tx_corrupt {
+	u64 u64;
+	struct cvmx_gmxx_tx_corrupt_s {
+		u64 reserved_4_63 : 60;
+		u64 corrupt : 4;
+	} s;
+	struct cvmx_gmxx_tx_corrupt_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 corrupt : 3;
+	} cn30xx;
+	struct cvmx_gmxx_tx_corrupt_cn30xx cn31xx;
+	struct cvmx_gmxx_tx_corrupt_s cn38xx;
+	struct cvmx_gmxx_tx_corrupt_s cn38xxp2;
+	struct cvmx_gmxx_tx_corrupt_cn30xx cn50xx;
+	struct cvmx_gmxx_tx_corrupt_s cn52xx;
+	struct cvmx_gmxx_tx_corrupt_s cn52xxp1;
+	struct cvmx_gmxx_tx_corrupt_s cn56xx;
+	struct cvmx_gmxx_tx_corrupt_s cn56xxp1;
+	struct cvmx_gmxx_tx_corrupt_s cn58xx;
+	struct cvmx_gmxx_tx_corrupt_s cn58xxp1;
+	struct cvmx_gmxx_tx_corrupt_s cn61xx;
+	struct cvmx_gmxx_tx_corrupt_s cn63xx;
+	struct cvmx_gmxx_tx_corrupt_s cn63xxp1;
+	struct cvmx_gmxx_tx_corrupt_s cn66xx;
+	struct cvmx_gmxx_tx_corrupt_s cn68xx;
+	struct cvmx_gmxx_tx_corrupt_s cn68xxp1;
+	struct cvmx_gmxx_tx_corrupt_s cn70xx;
+	struct cvmx_gmxx_tx_corrupt_s cn70xxp1;
+	struct cvmx_gmxx_tx_corrupt_cnf71xx {
+		u64 reserved_2_63 : 62;
+		u64 corrupt : 2;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_corrupt cvmx_gmxx_tx_corrupt_t;
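+
+/*
+ * Usage sketch (illustrative only, not part of the imported header): per
+ * the notes above, setting CORRUPT[prt] makes GMX transmit a bad FCS for
+ * packets that PKO hands over with the ERR wire asserted. Assumes the
+ * CVMX_GMXX_TX_CORRUPT() address macro defined earlier in this file and
+ * csr_rd()/csr_wr() accessors.
+ */
+static inline void example_corrupt_errored_pkts(int interface, int port)
+{
+	cvmx_gmxx_tx_corrupt_t corrupt;
+
+	corrupt.u64 = csr_rd(CVMX_GMXX_TX_CORRUPT(interface));
+	corrupt.s.corrupt |= 1ull << port;	/* XAUI uses port 0 only */
+	csr_wr(CVMX_GMXX_TX_CORRUPT(interface), corrupt.u64);
+}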
+
+/**
+ * cvmx_gmx#_tx_hg2_reg1
+ *
+ * Notes:
+ * The TX_XOF[15:0] field in GMX(0)_TX_HG2_REG1 and the TX_XON[15:0] field in
+ * GMX(0)_TX_HG2_REG2 register map to the same 16 physical flops. When written with address of
+ * GMX(0)_TX_HG2_REG1, it will exhibit write 1 to set behavior and when written with address of
+ * GMX(0)_TX_HG2_REG2, it will exhibit write 1 to clear behavior.
+ * For reads, either address will return the GMX(0)_TX_HG2_REG1 values.
+ */
+union cvmx_gmxx_tx_hg2_reg1 {
+	u64 u64;
+	struct cvmx_gmxx_tx_hg2_reg1_s {
+		u64 reserved_16_63 : 48;
+		u64 tx_xof : 16;
+	} s;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn52xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn52xxp1;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn56xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn61xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn63xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn63xxp1;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn66xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn68xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn68xxp1;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn70xx;
+	struct cvmx_gmxx_tx_hg2_reg1_s cn70xxp1;
+	struct cvmx_gmxx_tx_hg2_reg1_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_hg2_reg1 cvmx_gmxx_tx_hg2_reg1_t;
+
+/**
+ * cvmx_gmx#_tx_hg2_reg2
+ *
+ * Notes:
+ * The TX_XOF[15:0] field in GMX(0)_TX_HG2_REG1 and the TX_XON[15:0] field in
+ * GMX(0)_TX_HG2_REG2 register map to the same 16 physical flops. When written with address of
+ * GMX(0)_TX_HG2_REG1, it will exhibit write 1 to set behavior and when written with address of
+ * GMX(0)_TX_HG2_REG2, it will exhibit write 1 to clear behavior.
+ * For reads, either address will return the GMX(0)_TX_HG2_REG1 values.
+ */
+union cvmx_gmxx_tx_hg2_reg2 {
+	u64 u64;
+	struct cvmx_gmxx_tx_hg2_reg2_s {
+		u64 reserved_16_63 : 48;
+		u64 tx_xon : 16;
+	} s;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn52xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn52xxp1;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn56xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn61xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn63xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn63xxp1;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn66xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn68xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn68xxp1;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn70xx;
+	struct cvmx_gmxx_tx_hg2_reg2_s cn70xxp1;
+	struct cvmx_gmxx_tx_hg2_reg2_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_hg2_reg2 cvmx_gmxx_tx_hg2_reg2_t;
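+
+/*
+ * Usage sketch (illustrative only, not part of the imported header): per
+ * the notes above, the same 16 flops are set through the REG1 address and
+ * cleared through the REG2 address, and a read of either address returns
+ * the current state. Assumes the CVMX_GMXX_TX_HG2_REG1/2() address macros
+ * defined earlier in this file and csr_rd()/csr_wr() accessors.
+ */
+static inline u16 example_hg2_xof(int interface, u16 set_mask, u16 clr_mask)
+{
+	cvmx_gmxx_tx_hg2_reg1_t reg1;
+
+	csr_wr(CVMX_GMXX_TX_HG2_REG1(interface), set_mask); /* write 1 to set */
+	csr_wr(CVMX_GMXX_TX_HG2_REG2(interface), clr_mask); /* write 1 to clear */
+	reg1.u64 = csr_rd(CVMX_GMXX_TX_HG2_REG1(interface));
+	return reg1.s.tx_xof;	/* resulting flop state */
+}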
+
+/**
+ * cvmx_gmx#_tx_ifg
+ *
+ * GMX_TX_IFG = Packet TX Interframe Gap
+ *
+ *
+ * Notes:
+ * * Programming IFG1 and IFG2.
+ *
+ * For 10/100/1000Mbs half-duplex systems that require IEEE 802.3
+ * compatibility, IFG1 must be in the range of 1-8, IFG2 must be in the range
+ * of 4-12, and the IFG1+IFG2 sum must be 12.
+ *
+ * For 10/100/1000Mbs full-duplex systems that require IEEE 802.3
+ * compatibility, IFG1 must be in the range of 1-11, IFG2 must be in the range
+ * of 1-11, and the IFG1+IFG2 sum must be 12.
+ *
+ * For XAUI/10Gbs systems that require IEEE 802.3 compatibility, the
+ * IFG1+IFG2 sum must be 12.  IFG1[1:0] and IFG2[1:0] must be zero.
+ *
+ * For all other systems, IFG1 and IFG2 can be any value in the range of
+ * 1-15, allowing for a total possible IFG sum of 2-30.
+ */
+union cvmx_gmxx_tx_ifg {
+	u64 u64;
+	struct cvmx_gmxx_tx_ifg_s {
+		u64 reserved_8_63 : 56;
+		u64 ifg2 : 4;
+		u64 ifg1 : 4;
+	} s;
+	struct cvmx_gmxx_tx_ifg_s cn30xx;
+	struct cvmx_gmxx_tx_ifg_s cn31xx;
+	struct cvmx_gmxx_tx_ifg_s cn38xx;
+	struct cvmx_gmxx_tx_ifg_s cn38xxp2;
+	struct cvmx_gmxx_tx_ifg_s cn50xx;
+	struct cvmx_gmxx_tx_ifg_s cn52xx;
+	struct cvmx_gmxx_tx_ifg_s cn52xxp1;
+	struct cvmx_gmxx_tx_ifg_s cn56xx;
+	struct cvmx_gmxx_tx_ifg_s cn56xxp1;
+	struct cvmx_gmxx_tx_ifg_s cn58xx;
+	struct cvmx_gmxx_tx_ifg_s cn58xxp1;
+	struct cvmx_gmxx_tx_ifg_s cn61xx;
+	struct cvmx_gmxx_tx_ifg_s cn63xx;
+	struct cvmx_gmxx_tx_ifg_s cn63xxp1;
+	struct cvmx_gmxx_tx_ifg_s cn66xx;
+	struct cvmx_gmxx_tx_ifg_s cn68xx;
+	struct cvmx_gmxx_tx_ifg_s cn68xxp1;
+	struct cvmx_gmxx_tx_ifg_s cn70xx;
+	struct cvmx_gmxx_tx_ifg_s cn70xxp1;
+	struct cvmx_gmxx_tx_ifg_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_ifg cvmx_gmxx_tx_ifg_t;
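+
+/*
+ * Usage sketch (illustrative only, not part of the imported header): a
+ * minimal IEEE 802.3 compliant setting that satisfies the IFG1+IFG2=12
+ * rule from the notes above (8+4 is legal for both half and full duplex).
+ * Assumes the CVMX_GMXX_TX_IFG() address macro defined earlier in this
+ * file and csr_rd()/csr_wr() accessors.
+ */
+static inline void example_set_ieee_ifg(int interface)
+{
+	cvmx_gmxx_tx_ifg_t ifg;
+
+	ifg.u64 = csr_rd(CVMX_GMXX_TX_IFG(interface));
+	ifg.s.ifg1 = 8;		/* first portion of the interframe gap */
+	ifg.s.ifg2 = 4;		/* second portion; 8 + 4 = 12 as required */
+	csr_wr(CVMX_GMXX_TX_IFG(interface), ifg.u64);
+}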
+
+/**
+ * cvmx_gmx#_tx_int_en
+ *
+ * GMX_TX_INT_EN = Interrupt Enable
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of UNDFLW is used.
+ *
+ */
+union cvmx_gmxx_tx_int_en {
+	u64 u64;
+	struct cvmx_gmxx_tx_int_en_s {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} s;
+	struct cvmx_gmxx_tx_int_en_cn30xx {
+		u64 reserved_19_63 : 45;
+		u64 late_col : 3;
+		u64 reserved_15_15 : 1;
+		u64 xsdef : 3;
+		u64 reserved_11_11 : 1;
+		u64 xscol : 3;
+		u64 reserved_5_7 : 3;
+		u64 undflw : 3;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn30xx;
+	struct cvmx_gmxx_tx_int_en_cn31xx {
+		u64 reserved_15_63 : 49;
+		u64 xsdef : 3;
+		u64 reserved_11_11 : 1;
+		u64 xscol : 3;
+		u64 reserved_5_7 : 3;
+		u64 undflw : 3;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn31xx;
+	struct cvmx_gmxx_tx_int_en_cn38xx {
+		u64 reserved_20_63 : 44;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 ncb_nxa : 1;
+		u64 pko_nxa : 1;
+	} cn38xx;
+	struct cvmx_gmxx_tx_int_en_cn38xxp2 {
+		u64 reserved_16_63 : 48;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 ncb_nxa : 1;
+		u64 pko_nxa : 1;
+	} cn38xxp2;
+	struct cvmx_gmxx_tx_int_en_cn30xx cn50xx;
+	struct cvmx_gmxx_tx_int_en_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn52xx;
+	struct cvmx_gmxx_tx_int_en_cn52xx cn52xxp1;
+	struct cvmx_gmxx_tx_int_en_cn52xx cn56xx;
+	struct cvmx_gmxx_tx_int_en_cn52xx cn56xxp1;
+	struct cvmx_gmxx_tx_int_en_cn38xx cn58xx;
+	struct cvmx_gmxx_tx_int_en_cn38xx cn58xxp1;
+	struct cvmx_gmxx_tx_int_en_s cn61xx;
+	struct cvmx_gmxx_tx_int_en_cn63xx {
+		u64 reserved_24_63 : 40;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn63xx;
+	struct cvmx_gmxx_tx_int_en_cn63xx cn63xxp1;
+	struct cvmx_gmxx_tx_int_en_s cn66xx;
+	struct cvmx_gmxx_tx_int_en_cn68xx {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 pko_nxp : 1;
+		u64 pko_nxa : 1;
+	} cn68xx;
+	struct cvmx_gmxx_tx_int_en_cn68xx cn68xxp1;
+	struct cvmx_gmxx_tx_int_en_s cn70xx;
+	struct cvmx_gmxx_tx_int_en_s cn70xxp1;
+	struct cvmx_gmxx_tx_int_en_cnf71xx {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 reserved_22_23 : 2;
+		u64 ptp_lost : 2;
+		u64 reserved_18_19 : 2;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_int_en cvmx_gmxx_tx_int_en_t;
+
+/**
+ * cvmx_gmx#_tx_int_reg
+ *
+ * GMX_TX_INT_REG = Interrupt Register
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of UNDFLW is used.
+ *
+ */
+union cvmx_gmxx_tx_int_reg {
+	u64 u64;
+	struct cvmx_gmxx_tx_int_reg_s {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} s;
+	struct cvmx_gmxx_tx_int_reg_cn30xx {
+		u64 reserved_19_63 : 45;
+		u64 late_col : 3;
+		u64 reserved_15_15 : 1;
+		u64 xsdef : 3;
+		u64 reserved_11_11 : 1;
+		u64 xscol : 3;
+		u64 reserved_5_7 : 3;
+		u64 undflw : 3;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn30xx;
+	struct cvmx_gmxx_tx_int_reg_cn31xx {
+		u64 reserved_15_63 : 49;
+		u64 xsdef : 3;
+		u64 reserved_11_11 : 1;
+		u64 xscol : 3;
+		u64 reserved_5_7 : 3;
+		u64 undflw : 3;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn31xx;
+	struct cvmx_gmxx_tx_int_reg_cn38xx {
+		u64 reserved_20_63 : 44;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 ncb_nxa : 1;
+		u64 pko_nxa : 1;
+	} cn38xx;
+	struct cvmx_gmxx_tx_int_reg_cn38xxp2 {
+		u64 reserved_16_63 : 48;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 ncb_nxa : 1;
+		u64 pko_nxa : 1;
+	} cn38xxp2;
+	struct cvmx_gmxx_tx_int_reg_cn30xx cn50xx;
+	struct cvmx_gmxx_tx_int_reg_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn52xx;
+	struct cvmx_gmxx_tx_int_reg_cn52xx cn52xxp1;
+	struct cvmx_gmxx_tx_int_reg_cn52xx cn56xx;
+	struct cvmx_gmxx_tx_int_reg_cn52xx cn56xxp1;
+	struct cvmx_gmxx_tx_int_reg_cn38xx cn58xx;
+	struct cvmx_gmxx_tx_int_reg_cn38xx cn58xxp1;
+	struct cvmx_gmxx_tx_int_reg_s cn61xx;
+	struct cvmx_gmxx_tx_int_reg_cn63xx {
+		u64 reserved_24_63 : 40;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cn63xx;
+	struct cvmx_gmxx_tx_int_reg_cn63xx cn63xxp1;
+	struct cvmx_gmxx_tx_int_reg_s cn66xx;
+	struct cvmx_gmxx_tx_int_reg_cn68xx {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 ptp_lost : 4;
+		u64 late_col : 4;
+		u64 xsdef : 4;
+		u64 xscol : 4;
+		u64 reserved_6_7 : 2;
+		u64 undflw : 4;
+		u64 pko_nxp : 1;
+		u64 pko_nxa : 1;
+	} cn68xx;
+	struct cvmx_gmxx_tx_int_reg_cn68xx cn68xxp1;
+	struct cvmx_gmxx_tx_int_reg_s cn70xx;
+	struct cvmx_gmxx_tx_int_reg_s cn70xxp1;
+	struct cvmx_gmxx_tx_int_reg_cnf71xx {
+		u64 reserved_25_63 : 39;
+		u64 xchange : 1;
+		u64 reserved_22_23 : 2;
+		u64 ptp_lost : 2;
+		u64 reserved_18_19 : 2;
+		u64 late_col : 2;
+		u64 reserved_14_15 : 2;
+		u64 xsdef : 2;
+		u64 reserved_10_11 : 2;
+		u64 xscol : 2;
+		u64 reserved_4_7 : 4;
+		u64 undflw : 2;
+		u64 reserved_1_1 : 1;
+		u64 pko_nxa : 1;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_int_reg cvmx_gmxx_tx_int_reg_t;
+
+/**
+ * cvmx_gmx#_tx_jam
+ *
+ * GMX_TX_JAM = Packet TX Jam Pattern
+ *
+ */
+union cvmx_gmxx_tx_jam {
+	u64 u64;
+	struct cvmx_gmxx_tx_jam_s {
+		u64 reserved_8_63 : 56;
+		u64 jam : 8;
+	} s;
+	struct cvmx_gmxx_tx_jam_s cn30xx;
+	struct cvmx_gmxx_tx_jam_s cn31xx;
+	struct cvmx_gmxx_tx_jam_s cn38xx;
+	struct cvmx_gmxx_tx_jam_s cn38xxp2;
+	struct cvmx_gmxx_tx_jam_s cn50xx;
+	struct cvmx_gmxx_tx_jam_s cn52xx;
+	struct cvmx_gmxx_tx_jam_s cn52xxp1;
+	struct cvmx_gmxx_tx_jam_s cn56xx;
+	struct cvmx_gmxx_tx_jam_s cn56xxp1;
+	struct cvmx_gmxx_tx_jam_s cn58xx;
+	struct cvmx_gmxx_tx_jam_s cn58xxp1;
+	struct cvmx_gmxx_tx_jam_s cn61xx;
+	struct cvmx_gmxx_tx_jam_s cn63xx;
+	struct cvmx_gmxx_tx_jam_s cn63xxp1;
+	struct cvmx_gmxx_tx_jam_s cn66xx;
+	struct cvmx_gmxx_tx_jam_s cn68xx;
+	struct cvmx_gmxx_tx_jam_s cn68xxp1;
+	struct cvmx_gmxx_tx_jam_s cn70xx;
+	struct cvmx_gmxx_tx_jam_s cn70xxp1;
+	struct cvmx_gmxx_tx_jam_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_jam cvmx_gmxx_tx_jam_t;
+
+/**
+ * cvmx_gmx#_tx_lfsr
+ *
+ * GMX_TX_LFSR = LFSR used to implement truncated binary exponential backoff
+ *
+ */
+union cvmx_gmxx_tx_lfsr {
+	u64 u64;
+	struct cvmx_gmxx_tx_lfsr_s {
+		u64 reserved_16_63 : 48;
+		u64 lfsr : 16;
+	} s;
+	struct cvmx_gmxx_tx_lfsr_s cn30xx;
+	struct cvmx_gmxx_tx_lfsr_s cn31xx;
+	struct cvmx_gmxx_tx_lfsr_s cn38xx;
+	struct cvmx_gmxx_tx_lfsr_s cn38xxp2;
+	struct cvmx_gmxx_tx_lfsr_s cn50xx;
+	struct cvmx_gmxx_tx_lfsr_s cn52xx;
+	struct cvmx_gmxx_tx_lfsr_s cn52xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cn56xx;
+	struct cvmx_gmxx_tx_lfsr_s cn56xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cn58xx;
+	struct cvmx_gmxx_tx_lfsr_s cn58xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cn61xx;
+	struct cvmx_gmxx_tx_lfsr_s cn63xx;
+	struct cvmx_gmxx_tx_lfsr_s cn63xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cn66xx;
+	struct cvmx_gmxx_tx_lfsr_s cn68xx;
+	struct cvmx_gmxx_tx_lfsr_s cn68xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cn70xx;
+	struct cvmx_gmxx_tx_lfsr_s cn70xxp1;
+	struct cvmx_gmxx_tx_lfsr_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_lfsr cvmx_gmxx_tx_lfsr_t;
+
+/**
+ * cvmx_gmx#_tx_ovr_bp
+ *
+ * GMX_TX_OVR_BP = Packet Interface TX Override BackPressure
+ *
+ *
+ * Notes:
+ * In XAUI mode, only the lsb (corresponding to port0) of EN, BP, and IGN_FULL is used.
+ *
+ * GMX*_TX_OVR_BP[EN<0>] must be set to one and GMX*_TX_OVR_BP[BP<0>] must be cleared to zero
+ * (to forcibly disable HW-automatic 802.3 pause packet generation) with the HiGig2 Protocol
+ * when GMX*_HG2_CONTROL[HG2TX_EN]=0. (The HiGig2 protocol is indicated by
+ * GMX*_TX_XAUI_CTL[HG_EN]=1 and GMX*_RX0_UDD_SKP[LEN]=16.) HW can only auto-generate backpressure
+ * through HiGig2 messages (optionally, when GMX*_HG2_CONTROL[HG2TX_EN]=1) with the HiGig2
+ * protocol.
+ */
+union cvmx_gmxx_tx_ovr_bp {
+	u64 u64;
+	struct cvmx_gmxx_tx_ovr_bp_s {
+		u64 reserved_48_63 : 16;
+		u64 tx_prt_bp : 16;
+		u64 reserved_12_31 : 20;
+		u64 en : 4;
+		u64 bp : 4;
+		u64 ign_full : 4;
+	} s;
+	struct cvmx_gmxx_tx_ovr_bp_cn30xx {
+		u64 reserved_11_63 : 53;
+		u64 en : 3;
+		u64 reserved_7_7 : 1;
+		u64 bp : 3;
+		u64 reserved_3_3 : 1;
+		u64 ign_full : 3;
+	} cn30xx;
+	struct cvmx_gmxx_tx_ovr_bp_cn30xx cn31xx;
+	struct cvmx_gmxx_tx_ovr_bp_cn38xx {
+		u64 reserved_12_63 : 52;
+		u64 en : 4;
+		u64 bp : 4;
+		u64 ign_full : 4;
+	} cn38xx;
+	struct cvmx_gmxx_tx_ovr_bp_cn38xx cn38xxp2;
+	struct cvmx_gmxx_tx_ovr_bp_cn30xx cn50xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn52xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn52xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_s cn56xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn56xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_cn38xx cn58xx;
+	struct cvmx_gmxx_tx_ovr_bp_cn38xx cn58xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_s cn61xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn63xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn63xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_s cn66xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn68xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn68xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_s cn70xx;
+	struct cvmx_gmxx_tx_ovr_bp_s cn70xxp1;
+	struct cvmx_gmxx_tx_ovr_bp_cnf71xx {
+		u64 reserved_48_63 : 16;
+		u64 tx_prt_bp : 16;
+		u64 reserved_10_31 : 22;
+		u64 en : 2;
+		u64 reserved_6_7 : 2;
+		u64 bp : 2;
+		u64 reserved_2_3 : 2;
+		u64 ign_full : 2;
+	} cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_ovr_bp cvmx_gmxx_tx_ovr_bp_t;
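+
+/*
+ * Usage sketch (illustrative only, not part of the imported header): the
+ * HiGig2 rule from the notes above: force EN<0>=1 with BP<0>=0 to disable
+ * HW-automatic 802.3 pause packet generation while
+ * GMX*_HG2_CONTROL[HG2TX_EN]=0. Assumes the CVMX_GMXX_TX_OVR_BP() address
+ * macro defined earlier in this file and csr_rd()/csr_wr() accessors.
+ */
+static inline void example_hg2_disable_pause(int interface)
+{
+	cvmx_gmxx_tx_ovr_bp_t ovr;
+
+	ovr.u64 = csr_rd(CVMX_GMXX_TX_OVR_BP(interface));
+	ovr.s.en |= 1;		/* EN<0>: override pause generation on port 0 */
+	ovr.s.bp &= ~1;		/* BP<0>: do not force backpressure */
+	csr_wr(CVMX_GMXX_TX_OVR_BP(interface), ovr.u64);
+}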
+
+/**
+ * cvmx_gmx#_tx_pause_pkt_dmac
+ *
+ * GMX_TX_PAUSE_PKT_DMAC = Packet TX Pause Packet DMAC field
+ *
+ */
+union cvmx_gmxx_tx_pause_pkt_dmac {
+	u64 u64;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s {
+		u64 reserved_48_63 : 16;
+		u64 dmac : 48;
+	} s;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn30xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn31xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn38xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn38xxp2;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn50xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn52xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn52xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn56xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn56xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn58xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn58xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn61xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn63xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn63xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn66xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn68xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn68xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn70xx;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cn70xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_dmac_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_pause_pkt_dmac cvmx_gmxx_tx_pause_pkt_dmac_t;
+
+/**
+ * cvmx_gmx#_tx_pause_pkt_type
+ *
+ * GMX_TX_PAUSE_PKT_TYPE = Packet Interface TX Pause Packet TYPE field
+ *
+ */
+union cvmx_gmxx_tx_pause_pkt_type {
+	u64 u64;
+	struct cvmx_gmxx_tx_pause_pkt_type_s {
+		u64 reserved_16_63 : 48;
+		u64 type : 16;
+	} s;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn30xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn31xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn38xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn38xxp2;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn50xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn52xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn52xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn56xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn56xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn58xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn58xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn61xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn63xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn63xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn66xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn68xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn68xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn70xx;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cn70xxp1;
+	struct cvmx_gmxx_tx_pause_pkt_type_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_pause_pkt_type cvmx_gmxx_tx_pause_pkt_type_t;
+
+/**
+ * cvmx_gmx#_tx_prts
+ *
+ * Common
+ * GMX_TX_PRTS = TX Ports
+ */
+union cvmx_gmxx_tx_prts {
+	u64 u64;
+	struct cvmx_gmxx_tx_prts_s {
+		u64 reserved_5_63 : 59;
+		u64 prts : 5;
+	} s;
+	struct cvmx_gmxx_tx_prts_s cn30xx;
+	struct cvmx_gmxx_tx_prts_s cn31xx;
+	struct cvmx_gmxx_tx_prts_s cn38xx;
+	struct cvmx_gmxx_tx_prts_s cn38xxp2;
+	struct cvmx_gmxx_tx_prts_s cn50xx;
+	struct cvmx_gmxx_tx_prts_s cn52xx;
+	struct cvmx_gmxx_tx_prts_s cn52xxp1;
+	struct cvmx_gmxx_tx_prts_s cn56xx;
+	struct cvmx_gmxx_tx_prts_s cn56xxp1;
+	struct cvmx_gmxx_tx_prts_s cn58xx;
+	struct cvmx_gmxx_tx_prts_s cn58xxp1;
+	struct cvmx_gmxx_tx_prts_s cn61xx;
+	struct cvmx_gmxx_tx_prts_s cn63xx;
+	struct cvmx_gmxx_tx_prts_s cn63xxp1;
+	struct cvmx_gmxx_tx_prts_s cn66xx;
+	struct cvmx_gmxx_tx_prts_s cn68xx;
+	struct cvmx_gmxx_tx_prts_s cn68xxp1;
+	struct cvmx_gmxx_tx_prts_s cn70xx;
+	struct cvmx_gmxx_tx_prts_s cn70xxp1;
+	struct cvmx_gmxx_tx_prts_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_prts cvmx_gmxx_tx_prts_t;
+
+/**
+ * cvmx_gmx#_tx_spi_ctl
+ *
+ * GMX_TX_SPI_CTL = Spi4 TX Modes
+ *
+ */
+union cvmx_gmxx_tx_spi_ctl {
+	u64 u64;
+	struct cvmx_gmxx_tx_spi_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 tpa_clr : 1;
+		u64 cont_pkt : 1;
+	} s;
+	struct cvmx_gmxx_tx_spi_ctl_s cn38xx;
+	struct cvmx_gmxx_tx_spi_ctl_s cn38xxp2;
+	struct cvmx_gmxx_tx_spi_ctl_s cn58xx;
+	struct cvmx_gmxx_tx_spi_ctl_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_tx_spi_ctl cvmx_gmxx_tx_spi_ctl_t;
+
+/**
+ * cvmx_gmx#_tx_spi_drain
+ *
+ * GMX_TX_SPI_DRAIN = Drain out Spi TX FIFO
+ *
+ */
+union cvmx_gmxx_tx_spi_drain {
+	u64 u64;
+	struct cvmx_gmxx_tx_spi_drain_s {
+		u64 reserved_16_63 : 48;
+		u64 drain : 16;
+	} s;
+	struct cvmx_gmxx_tx_spi_drain_s cn38xx;
+	struct cvmx_gmxx_tx_spi_drain_s cn58xx;
+	struct cvmx_gmxx_tx_spi_drain_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_tx_spi_drain cvmx_gmxx_tx_spi_drain_t;
+
+/**
+ * cvmx_gmx#_tx_spi_max
+ *
+ * GMX_TX_SPI_MAX = RGMII TX Spi4 MAX
+ *
+ */
+union cvmx_gmxx_tx_spi_max {
+	u64 u64;
+	struct cvmx_gmxx_tx_spi_max_s {
+		u64 reserved_23_63 : 41;
+		u64 slice : 7;
+		u64 max2 : 8;
+		u64 max1 : 8;
+	} s;
+	struct cvmx_gmxx_tx_spi_max_cn38xx {
+		u64 reserved_16_63 : 48;
+		u64 max2 : 8;
+		u64 max1 : 8;
+	} cn38xx;
+	struct cvmx_gmxx_tx_spi_max_cn38xx cn38xxp2;
+	struct cvmx_gmxx_tx_spi_max_s cn58xx;
+	struct cvmx_gmxx_tx_spi_max_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_tx_spi_max cvmx_gmxx_tx_spi_max_t;
+
+/**
+ * cvmx_gmx#_tx_spi_round#
+ *
+ * GMX_TX_SPI_ROUND = Controls SPI4 TX Arbitration
+ *
+ */
+union cvmx_gmxx_tx_spi_roundx {
+	u64 u64;
+	struct cvmx_gmxx_tx_spi_roundx_s {
+		u64 reserved_16_63 : 48;
+		u64 round : 16;
+	} s;
+	struct cvmx_gmxx_tx_spi_roundx_s cn58xx;
+	struct cvmx_gmxx_tx_spi_roundx_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_tx_spi_roundx cvmx_gmxx_tx_spi_roundx_t;
+
+/**
+ * cvmx_gmx#_tx_spi_thresh
+ *
+ * GMX_TX_SPI_THRESH = RGMII TX Spi4 Transmit Threshold
+ *
+ *
+ * Notes:
+ * Note: zero will map to 0x20
+ *
+ * This will normally create Spi4 traffic bursts at least THRESH in length.
+ * If dclk > eclk, then this rule may not always hold and Octeon may split
+ * transfers into smaller bursts - some of which could be as short as 16B.
+ * Octeon will never violate the Spi4.2 spec and send a non-EOP burst that is
+ * not a multiple of 16B.
+ */
+union cvmx_gmxx_tx_spi_thresh {
+	u64 u64;
+	struct cvmx_gmxx_tx_spi_thresh_s {
+		u64 reserved_6_63 : 58;
+		u64 thresh : 6;
+	} s;
+	struct cvmx_gmxx_tx_spi_thresh_s cn38xx;
+	struct cvmx_gmxx_tx_spi_thresh_s cn38xxp2;
+	struct cvmx_gmxx_tx_spi_thresh_s cn58xx;
+	struct cvmx_gmxx_tx_spi_thresh_s cn58xxp1;
+};
+
+typedef union cvmx_gmxx_tx_spi_thresh cvmx_gmxx_tx_spi_thresh_t;
+
+/**
+ * cvmx_gmx#_tx_xaui_ctl
+ */
+union cvmx_gmxx_tx_xaui_ctl {
+	u64 u64;
+	struct cvmx_gmxx_tx_xaui_ctl_s {
+		u64 reserved_11_63 : 53;
+		u64 hg_pause_hgi : 2;
+		u64 hg_en : 1;
+		u64 reserved_7_7 : 1;
+		u64 ls_byp : 1;
+		u64 ls : 2;
+		u64 reserved_2_3 : 2;
+		u64 uni_en : 1;
+		u64 dic_en : 1;
+	} s;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn52xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn52xxp1;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn56xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn56xxp1;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn61xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn63xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn63xxp1;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn66xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn68xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn68xxp1;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn70xx;
+	struct cvmx_gmxx_tx_xaui_ctl_s cn70xxp1;
+	struct cvmx_gmxx_tx_xaui_ctl_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_tx_xaui_ctl cvmx_gmxx_tx_xaui_ctl_t;
+
+/**
+ * cvmx_gmx#_wol_ctl
+ */
+union cvmx_gmxx_wol_ctl {
+	u64 u64;
+	struct cvmx_gmxx_wol_ctl_s {
+		u64 reserved_36_63 : 28;
+		u64 magic_en : 4;
+		u64 reserved_20_31 : 12;
+		u64 direct_en : 4;
+		u64 reserved_1_15 : 15;
+		u64 en : 1;
+	} s;
+	struct cvmx_gmxx_wol_ctl_s cn70xx;
+	struct cvmx_gmxx_wol_ctl_s cn70xxp1;
+};
+
+typedef union cvmx_gmxx_wol_ctl cvmx_gmxx_wol_ctl_t;
+
+/**
+ * cvmx_gmx#_xaui_ext_loopback
+ */
+union cvmx_gmxx_xaui_ext_loopback {
+	u64 u64;
+	struct cvmx_gmxx_xaui_ext_loopback_s {
+		u64 reserved_5_63 : 59;
+		u64 en : 1;
+		u64 thresh : 4;
+	} s;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn52xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn52xxp1;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn56xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn56xxp1;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn61xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn63xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn63xxp1;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn66xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn68xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn68xxp1;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn70xx;
+	struct cvmx_gmxx_xaui_ext_loopback_s cn70xxp1;
+	struct cvmx_gmxx_xaui_ext_loopback_s cnf71xx;
+};
+
+typedef union cvmx_gmxx_xaui_ext_loopback cvmx_gmxx_xaui_ext_loopback_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 12/50] mips: octeon: Add cvmx-gserx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (10 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 11/50] mips: octeon: Add cvmx-gmxx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 13/50] mips: octeon: Add cvmx-ipd-defs.h " Stefan Roese
                   ` (40 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-gserx-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later to support PCIe and networking on
the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../include/mach/cvmx-gserx-defs.h            | 2191 +++++++++++++++++
 1 file changed, 2191 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gserx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gserx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-gserx-defs.h
new file mode 100644
index 0000000000..832a592dba
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-gserx-defs.h
@@ -0,0 +1,2191 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_GSERX_DEFS_H__
+#define __CVMX_GSERX_DEFS_H__
+
+#define CVMX_GSERX_DLMX_TX_AMPLITUDE(offset, block_id) (0x0001180090003008ull)
+#define CVMX_GSERX_DLMX_TX_PREEMPH(offset, block_id)   (0x0001180090003028ull)
+#define CVMX_GSERX_DLMX_MPLL_EN(offset, block_id)      (0x0001180090001020ull)
+#define CVMX_GSERX_DLMX_REF_SSP_EN(offset, block_id)   (0x0001180090001048ull)
+#define CVMX_GSERX_DLMX_TX_RATE(offset, block_id)      (0x0001180090003030ull)
+#define CVMX_GSERX_DLMX_TX_EN(offset, block_id)	       (0x0001180090003020ull)
+#define CVMX_GSERX_DLMX_TX_CM_EN(offset, block_id)     (0x0001180090003010ull)
+#define CVMX_GSERX_DLMX_TX_RESET(offset, block_id)     (0x0001180090003038ull)
+#define CVMX_GSERX_DLMX_TX_DATA_EN(offset, block_id)   (0x0001180090003018ull)
+#define CVMX_GSERX_DLMX_RX_RATE(offset, block_id)      (0x0001180090002028ull)
+#define CVMX_GSERX_DLMX_RX_PLL_EN(offset, block_id)    (0x0001180090002020ull)
+#define CVMX_GSERX_DLMX_RX_DATA_EN(offset, block_id)   (0x0001180090002008ull)
+#define CVMX_GSERX_DLMX_RX_RESET(offset, block_id)     (0x0001180090002030ull)
+
+#define CVMX_GSERX_DLMX_TX_STATUS(offset, block_id)                                                \
+	(0x0001180090003000ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_RX_STATUS(offset, block_id)                                                \
+	(0x0001180090002000ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+
+static inline u64 CVMX_GSERX_SATA_STATUS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090100200ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090100900ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090100900ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180090100900ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_GSERX_DLMX_RX_EQ(offset, block_id)                                                    \
+	(0x0001180090002010ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_REF_USE_PAD(offset, block_id)                                              \
+	(0x0001180090001050ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_REFCLK_SEL(offset, block_id)                                               \
+	(0x0001180090000008ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_PHY_RESET(offset, block_id)                                                \
+	(0x0001180090001038ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_TEST_POWERDOWN(offset, block_id)                                           \
+	(0x0001180090001060ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_REF_CLKDIV2(offset, block_id)                                              \
+	(0x0001180090001040ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_MPLL_MULTIPLIER(offset, block_id)                                          \
+	(0x0001180090001030ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+#define CVMX_GSERX_DLMX_MPLL_STATUS(offset, block_id)                                              \
+	(0x0001180090001000ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+
+#define CVMX_GSERX_BR_RXX_CTL(offset, block_id)                                                    \
+	(0x0001180090000400ull + (((offset) & 3) + ((block_id) & 15) * 0x20000ull) * 128)
+#define CVMX_GSERX_BR_RXX_EER(offset, block_id)                                                    \
+	(0x0001180090000418ull + (((offset) & 3) + ((block_id) & 15) * 0x20000ull) * 128)
+
+#define CVMX_GSERX_PCIE_PIPE_PORT_SEL(offset) (0x0001180090080460ull)
+#define CVMX_GSERX_PCIE_PIPE_RST(offset)      (0x0001180090080448ull)
+
+#define CVMX_GSERX_SATA_CFG(offset)	   (0x0001180090100208ull)
+#define CVMX_GSERX_SATA_REF_SSP_EN(offset) (0x0001180090100600ull)
+
+static inline u64 CVMX_GSERX_SATA_LANE_RST(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090100210ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090000908ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180090000908ull + (offset) * 0x1000000ull;
+	}
+	return 0x0001180090000908ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_GSERX_EQ_WAIT_TIME(offset) (0x00011800904E0000ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_GLBL_MISC_CONFIG_1(offset) (0x0001180090460030ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_GLBL_PLL_CFG_3(offset)     (0x0001180090460018ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_PHYX_OVRD_IN_LO(offset, block_id)                                               \
+	(0x0001180090400088ull + (((offset) & 3) + ((block_id) & 0) * 0x0ull) * 524288)
+
+#define CVMX_GSERX_RX_PWR_CTRL_P1(offset) (0x00011800904600B0ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_RX_PWR_CTRL_P2(offset) (0x00011800904600B8ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_RX_EIE_DETSTS(offset)  (0x0001180090000150ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_LANE_MODE(offset) (0x0001180090000118ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_LANE_VMA_FINE_CTRL_0(offset)                                                    \
+	(0x00011800904E01C8ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_LANEX_LBERT_CFG(offset, block_id)                                               \
+	(0x00011800904C0020ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANEX_MISC_CFG_0(offset, block_id)                                              \
+	(0x00011800904C0000ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANE_PX_MODE_0(offset, block_id)                                                \
+	(0x00011800904E0040ull + (((offset) & 15) + ((block_id) & 15) * 0x80000ull) * 32)
+#define CVMX_GSERX_LANE_PX_MODE_1(offset, block_id)                                                \
+	(0x00011800904E0048ull + (((offset) & 15) + ((block_id) & 15) * 0x80000ull) * 32)
+
+#define CVMX_GSERX_LANEX_RX_CFG_0(offset, block_id)                                                \
+	(0x0001180090440000ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CFG_1(offset, block_id)                                                \
+	(0x0001180090440008ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CFG_2(offset, block_id)                                                \
+	(0x0001180090440010ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CFG_3(offset, block_id)                                                \
+	(0x0001180090440018ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CFG_4(offset, block_id)                                                \
+	(0x0001180090440020ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CFG_5(offset, block_id)                                                \
+	(0x0001180090440028ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_CTLE_CTRL(offset, block_id)                                            \
+	(0x0001180090440058ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANEX_RX_LOOP_CTRL(offset, block_id)                                            \
+	(0x0001180090440048ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(offset, block_id)                                        \
+	(0x0001180090440240ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_VALBBD_CTRL_1(offset, block_id)                                        \
+	(0x0001180090440248ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_VALBBD_CTRL_2(offset, block_id)                                        \
+	(0x0001180090440250ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_RX_MISC_OVRRD(offset, block_id)                                           \
+	(0x0001180090440258ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANEX_TX_CFG_0(offset, block_id)                                                \
+	(0x00011800904400A8ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_TX_CFG_1(offset, block_id)                                                \
+	(0x00011800904400B0ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_TX_CFG_2(offset, block_id)                                                \
+	(0x00011800904400B8ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_TX_CFG_3(offset, block_id)                                                \
+	(0x00011800904400C0ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_TX_PRE_EMPHASIS(offset, block_id)                                         \
+	(0x00011800904400C8ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_RX_TXDIR_CTRL_0(offset) (0x00011800904600E8ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_RX_TXDIR_CTRL_1(offset) (0x00011800904600F0ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_RX_TXDIR_CTRL_2(offset) (0x00011800904600F8ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_LANEX_PCS_CTLIFC_0(offset, block_id)                                            \
+	(0x00011800904C0060ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_PCS_CTLIFC_1(offset, block_id)                                            \
+	(0x00011800904C0068ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+#define CVMX_GSERX_LANEX_PCS_CTLIFC_2(offset, block_id)                                            \
+	(0x00011800904C0070ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANEX_PWR_CTRL(offset, block_id)                                                \
+	(0x00011800904400D8ull + (((offset) & 3) + ((block_id) & 15) * 0x10ull) * 1048576)
+
+#define CVMX_GSERX_LANE_VMA_FINE_CTRL_2(offset)                                                    \
+	(0x00011800904E01D8ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_PLL_STAT(offset) (0x0001180090000010ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_QLM_STAT(offset) (0x00011800900000A0ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_PLL_PX_MODE_0(offset, block_id)                                                 \
+	(0x00011800904E0030ull + (((offset) & 15) + ((block_id) & 15) * 0x80000ull) * 32)
+#define CVMX_GSERX_PLL_PX_MODE_1(offset, block_id)                                                 \
+	(0x00011800904E0038ull + (((offset) & 15) + ((block_id) & 15) * 0x80000ull) * 32)
+
+#define CVMX_GSERX_SLICE_CFG(offset) (0x0001180090460060ull + ((offset) & 15) * 0x1000000ull)
+
+#define CVMX_GSERX_SLICEX_PCIE1_MODE(offset, block_id)                                             \
+	(0x0001180090460228ull + (((offset) & 1) + ((block_id) & 15) * 0x8ull) * 2097152)
+#define CVMX_GSERX_SLICEX_PCIE2_MODE(offset, block_id)                                             \
+	(0x0001180090460230ull + (((offset) & 1) + ((block_id) & 15) * 0x8ull) * 2097152)
+#define CVMX_GSERX_SLICEX_PCIE3_MODE(offset, block_id)                                             \
+	(0x0001180090460238ull + (((offset) & 1) + ((block_id) & 15) * 0x8ull) * 2097152)
+#define CVMX_GSERX_SLICEX_RX_SDLL_CTRL(offset, block_id)                                           \
+	(0x0001180090460220ull + (((offset) & 1) + ((block_id) & 15) * 0x8ull) * 2097152)
+
+#define CVMX_GSERX_REFCLK_SEL(offset) (0x0001180090000008ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_PHY_CTL(offset)    (0x0001180090000000ull + ((offset) & 15) * 0x1000000ull)
+#define CVMX_GSERX_CFG(offset)	      (0x0001180090000080ull + ((offset) & 15) * 0x1000000ull)
+
+/**
+ * cvmx_gser#_cfg
+ */
+union cvmx_gserx_cfg {
+	u64 u64;
+	struct cvmx_gserx_cfg_s {
+		u64 reserved_9_63 : 55;
+		u64 rmac_pipe : 1;
+		u64 rmac : 1;
+		u64 srio : 1;
+		u64 sata : 1;
+		u64 bgx_quad : 1;
+		u64 bgx_dual : 1;
+		u64 bgx : 1;
+		u64 ila : 1;
+		u64 pcie : 1;
+	} s;
+	struct cvmx_gserx_cfg_cn73xx {
+		u64 reserved_6_63 : 58;
+		u64 sata : 1;
+		u64 bgx_quad : 1;
+		u64 bgx_dual : 1;
+		u64 bgx : 1;
+		u64 ila : 1;
+		u64 pcie : 1;
+	} cn73xx;
+	struct cvmx_gserx_cfg_cn78xx {
+		u64 reserved_5_63 : 59;
+		u64 bgx_quad : 1;
+		u64 bgx_dual : 1;
+		u64 bgx : 1;
+		u64 ila : 1;
+		u64 pcie : 1;
+	} cn78xx;
+	struct cvmx_gserx_cfg_cn78xx cn78xxp1;
+	struct cvmx_gserx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_gserx_cfg cvmx_gserx_cfg_t;
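+
+/*
+ * Usage sketch (illustrative only, not part of the imported register
+ * definitions): every CSR in this file follows the same union pattern --
+ * read the raw 64-bit value, modify the named bitfields, write it back.
+ * The csr_rd()/csr_wr() accessors are assumed to be provided elsewhere
+ * by the mach-octeon support code.
+ *
+ *	cvmx_gserx_cfg_t gserx_cfg;
+ *
+ *	gserx_cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+ *	gserx_cfg.s.pcie = 1;	// configure this QLM for PCIe
+ *	csr_wr(CVMX_GSERX_CFG(qlm), gserx_cfg.u64);
+ */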
+
+/**
+ * cvmx_gser#_eq_wait_time
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_eq_wait_time {
+	u64 u64;
+	struct cvmx_gserx_eq_wait_time_s {
+		u64 reserved_8_63 : 56;
+		u64 rxeq_wait_cnt : 4;
+		u64 txeq_wait_cnt : 4;
+	} s;
+	struct cvmx_gserx_eq_wait_time_s cn73xx;
+	struct cvmx_gserx_eq_wait_time_s cn78xx;
+	struct cvmx_gserx_eq_wait_time_s cn78xxp1;
+	struct cvmx_gserx_eq_wait_time_s cnf75xx;
+};
+
+typedef union cvmx_gserx_eq_wait_time cvmx_gserx_eq_wait_time_t;
+
+/**
+ * cvmx_gser#_glbl_misc_config_1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_glbl_misc_config_1 {
+	u64 u64;
+	struct cvmx_gserx_glbl_misc_config_1_s {
+		u64 reserved_10_63 : 54;
+		u64 pcs_sds_vref_tr : 4;
+		u64 pcs_sds_trim_chp_reg : 2;
+		u64 pcs_sds_vco_reg_tr : 2;
+		u64 pcs_sds_cvbg_en : 1;
+		u64 pcs_sds_extvbg_en : 1;
+	} s;
+	struct cvmx_gserx_glbl_misc_config_1_s cn73xx;
+	struct cvmx_gserx_glbl_misc_config_1_s cn78xx;
+	struct cvmx_gserx_glbl_misc_config_1_s cn78xxp1;
+	struct cvmx_gserx_glbl_misc_config_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_glbl_misc_config_1 cvmx_gserx_glbl_misc_config_1_t;
+
+/**
+ * cvmx_gser#_glbl_pll_cfg_3
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_glbl_pll_cfg_3 {
+	u64 u64;
+	struct cvmx_gserx_glbl_pll_cfg_3_s {
+		u64 reserved_10_63 : 54;
+		u64 pcs_sds_pll_vco_amp : 2;
+		u64 pll_bypass_uq : 1;
+		u64 pll_vctrl_sel_ovrrd_en : 1;
+		u64 pll_vctrl_sel_ovrrd_val : 2;
+		u64 pll_vctrl_sel_lcvco_val : 2;
+		u64 pll_vctrl_sel_rovco_val : 2;
+	} s;
+	struct cvmx_gserx_glbl_pll_cfg_3_s cn73xx;
+	struct cvmx_gserx_glbl_pll_cfg_3_s cn78xx;
+	struct cvmx_gserx_glbl_pll_cfg_3_s cn78xxp1;
+	struct cvmx_gserx_glbl_pll_cfg_3_s cnf75xx;
+};
+
+typedef union cvmx_gserx_glbl_pll_cfg_3 cvmx_gserx_glbl_pll_cfg_3_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_data_en
+ *
+ * DLM Receiver Enable.
+ *
+ */
+union cvmx_gserx_dlmx_rx_data_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_data_en_s {
+		u64 reserved_9_63 : 55;
+		u64 rx1_data_en : 1;
+		u64 reserved_1_7 : 7;
+		u64 rx0_data_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_rx_data_en_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_data_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_data_en cvmx_gserx_dlmx_rx_data_en_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_pll_en
+ *
+ * DLM0 DPLL Enable.
+ *
+ */
+union cvmx_gserx_dlmx_rx_pll_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_pll_en_s {
+		u64 reserved_9_63 : 55;
+		u64 rx1_pll_en : 1;
+		u64 reserved_1_7 : 7;
+		u64 rx0_pll_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_rx_pll_en_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_pll_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_pll_en cvmx_gserx_dlmx_rx_pll_en_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_rate
+ *
+ * DLM0 Rx Data Rate.
+ *
+ */
+union cvmx_gserx_dlmx_rx_rate {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_rate_s {
+		u64 reserved_10_63 : 54;
+		u64 rx1_rate : 2;
+		u64 reserved_2_7 : 6;
+		u64 rx0_rate : 2;
+	} s;
+	struct cvmx_gserx_dlmx_rx_rate_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_rate_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_rate cvmx_gserx_dlmx_rx_rate_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_reset
+ *
+ * DLM0 Receiver Reset.
+ *
+ */
+union cvmx_gserx_dlmx_rx_reset {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_reset_s {
+		u64 reserved_9_63 : 55;
+		u64 rx1_reset : 1;
+		u64 reserved_1_7 : 7;
+		u64 rx0_reset : 1;
+	} s;
+	struct cvmx_gserx_dlmx_rx_reset_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_reset_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_reset cvmx_gserx_dlmx_rx_reset_t;
+
+/**
+ * cvmx_gser#_dlm#_test_powerdown
+ *
+ * DLM Test Powerdown.
+ *
+ */
+union cvmx_gserx_dlmx_test_powerdown {
+	u64 u64;
+	struct cvmx_gserx_dlmx_test_powerdown_s {
+		u64 reserved_1_63 : 63;
+		u64 test_powerdown : 1;
+	} s;
+	struct cvmx_gserx_dlmx_test_powerdown_s cn70xx;
+	struct cvmx_gserx_dlmx_test_powerdown_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_test_powerdown cvmx_gserx_dlmx_test_powerdown_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_amplitude
+ *
+ * DLM0 Tx Amplitude (Full Swing Mode).
+ *
+ */
+union cvmx_gserx_dlmx_tx_amplitude {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_amplitude_s {
+		u64 reserved_15_63 : 49;
+		u64 tx1_amplitude : 7;
+		u64 reserved_7_7 : 1;
+		u64 tx0_amplitude : 7;
+	} s;
+	struct cvmx_gserx_dlmx_tx_amplitude_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_amplitude_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_amplitude cvmx_gserx_dlmx_tx_amplitude_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_en
+ *
+ * DLM Transmit Clocking and Data Sampling Enable.
+ *
+ */
+union cvmx_gserx_dlmx_tx_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_en_s {
+		u64 reserved_9_63 : 55;
+		u64 tx1_en : 1;
+		u64 reserved_1_7 : 7;
+		u64 tx0_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_tx_en_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_en cvmx_gserx_dlmx_tx_en_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_preemph
+ *
+ * DLM0 Tx Deemphasis.
+ *
+ */
+union cvmx_gserx_dlmx_tx_preemph {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_preemph_s {
+		u64 reserved_15_63 : 49;
+		u64 tx1_preemph : 7;
+		u64 reserved_7_7 : 1;
+		u64 tx0_preemph : 7;
+	} s;
+	struct cvmx_gserx_dlmx_tx_preemph_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_preemph_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_preemph cvmx_gserx_dlmx_tx_preemph_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_status
+ *
+ * DLM Transmit Common Mode Control Status.
+ *
+ */
+union cvmx_gserx_dlmx_tx_status {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_status_s {
+		u64 reserved_10_63 : 54;
+		u64 tx1_cm_status : 1;
+		u64 tx1_status : 1;
+		u64 reserved_2_7 : 6;
+		u64 tx0_cm_status : 1;
+		u64 tx0_status : 1;
+	} s;
+	struct cvmx_gserx_dlmx_tx_status_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_status_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_status cvmx_gserx_dlmx_tx_status_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_status
+ *
+ * DLM Receive DPLL State Indicator.
+ *
+ */
+union cvmx_gserx_dlmx_rx_status {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_status_s {
+		u64 reserved_9_63 : 55;
+		u64 rx1_status : 1;
+		u64 reserved_1_7 : 7;
+		u64 rx0_status : 1;
+	} s;
+	struct cvmx_gserx_dlmx_rx_status_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_status_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_status cvmx_gserx_dlmx_rx_status_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_rate
+ *
+ * DLM0 Tx Data Rate.
+ *
+ */
+union cvmx_gserx_dlmx_tx_rate {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_rate_s {
+		u64 reserved_10_63 : 54;
+		u64 tx1_rate : 2;
+		u64 reserved_2_7 : 6;
+		u64 tx0_rate : 2;
+	} s;
+	struct cvmx_gserx_dlmx_tx_rate_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_rate_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_rate cvmx_gserx_dlmx_tx_rate_t;
+
+/**
+ * cvmx_gser#_sata_status
+ *
+ * SATA PHY Ready Status.
+ *
+ */
+union cvmx_gserx_sata_status {
+	u64 u64;
+	struct cvmx_gserx_sata_status_s {
+		u64 reserved_2_63 : 62;
+		u64 p1_rdy : 1;
+		u64 p0_rdy : 1;
+	} s;
+	struct cvmx_gserx_sata_status_s cn70xx;
+	struct cvmx_gserx_sata_status_s cn70xxp1;
+	struct cvmx_gserx_sata_status_s cn73xx;
+	struct cvmx_gserx_sata_status_s cnf75xx;
+};
+
+typedef union cvmx_gserx_sata_status cvmx_gserx_sata_status_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_data_en
+ *
+ * DLM0 Transmit Driver Enable.
+ *
+ */
+union cvmx_gserx_dlmx_tx_data_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_data_en_s {
+		u64 reserved_9_63 : 55;
+		u64 tx1_data_en : 1;
+		u64 reserved_1_7 : 7;
+		u64 tx0_data_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_tx_data_en_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_data_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_data_en cvmx_gserx_dlmx_tx_data_en_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_cm_en
+ *
+ * DLM0 Transmit Common-Mode Control Enable.
+ *
+ */
+union cvmx_gserx_dlmx_tx_cm_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_cm_en_s {
+		u64 reserved_9_63 : 55;
+		u64 tx1_cm_en : 1;
+		u64 reserved_1_7 : 7;
+		u64 tx0_cm_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_tx_cm_en_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_cm_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_cm_en cvmx_gserx_dlmx_tx_cm_en_t;
+
+/**
+ * cvmx_gser#_dlm#_tx_reset
+ *
+ * DLM0 Tx Reset.
+ *
+ */
+union cvmx_gserx_dlmx_tx_reset {
+	u64 u64;
+	struct cvmx_gserx_dlmx_tx_reset_s {
+		u64 reserved_9_63 : 55;
+		u64 tx1_reset : 1;
+		u64 reserved_1_7 : 7;
+		u64 tx0_reset : 1;
+	} s;
+	struct cvmx_gserx_dlmx_tx_reset_s cn70xx;
+	struct cvmx_gserx_dlmx_tx_reset_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_tx_reset cvmx_gserx_dlmx_tx_reset_t;
+
+/**
+ * cvmx_gser#_dlm#_mpll_status
+ *
+ * DLM PLL Lock Status.
+ *
+ */
+union cvmx_gserx_dlmx_mpll_status {
+	u64 u64;
+	struct cvmx_gserx_dlmx_mpll_status_s {
+		u64 reserved_1_63 : 63;
+		u64 mpll_status : 1;
+	} s;
+	struct cvmx_gserx_dlmx_mpll_status_s cn70xx;
+	struct cvmx_gserx_dlmx_mpll_status_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_mpll_status cvmx_gserx_dlmx_mpll_status_t;
+
+/**
+ * cvmx_gser#_dlm#_phy_reset
+ *
+ * DLM Core and State Machine Reset.
+ *
+ */
+union cvmx_gserx_dlmx_phy_reset {
+	u64 u64;
+	struct cvmx_gserx_dlmx_phy_reset_s {
+		u64 reserved_1_63 : 63;
+		u64 phy_reset : 1;
+	} s;
+	struct cvmx_gserx_dlmx_phy_reset_s cn70xx;
+	struct cvmx_gserx_dlmx_phy_reset_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_phy_reset cvmx_gserx_dlmx_phy_reset_t;
+
+/**
+ * cvmx_gser#_dlm#_ref_clkdiv2
+ *
+ * DLM Input Reference Clock Divider Control.
+ *
+ */
+union cvmx_gserx_dlmx_ref_clkdiv2 {
+	u64 u64;
+	struct cvmx_gserx_dlmx_ref_clkdiv2_s {
+		u64 reserved_1_63 : 63;
+		u64 ref_clkdiv2 : 1;
+	} s;
+	struct cvmx_gserx_dlmx_ref_clkdiv2_s cn70xx;
+	struct cvmx_gserx_dlmx_ref_clkdiv2_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_ref_clkdiv2 cvmx_gserx_dlmx_ref_clkdiv2_t;
+
+/**
+ * cvmx_gser#_dlm#_ref_ssp_en
+ *
+ * DLM0 Reference Clock Enable for the PHY.
+ *
+ */
+union cvmx_gserx_dlmx_ref_ssp_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_ref_ssp_en_s {
+		u64 reserved_1_63 : 63;
+		u64 ref_ssp_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_ref_ssp_en_s cn70xx;
+	struct cvmx_gserx_dlmx_ref_ssp_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_ref_ssp_en cvmx_gserx_dlmx_ref_ssp_en_t;
+
+union cvmx_gserx_dlmx_mpll_en {
+	u64 u64;
+	struct cvmx_gserx_dlmx_mpll_en_s {
+		u64 reserved_1_63 : 63;
+		u64 mpll_en : 1;
+	} s;
+	struct cvmx_gserx_dlmx_mpll_en_s cn70xx;
+	struct cvmx_gserx_dlmx_mpll_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_mpll_en cvmx_gserx_dlmx_mpll_en_t;
+
+/**
+ * cvmx_gser#_dlm#_rx_eq
+ *
+ * DLM Receiver Equalization Setting.
+ *
+ */
+union cvmx_gserx_dlmx_rx_eq {
+	u64 u64;
+	struct cvmx_gserx_dlmx_rx_eq_s {
+		u64 reserved_11_63 : 53;
+		u64 rx1_eq : 3;
+		u64 reserved_3_7 : 5;
+		u64 rx0_eq : 3;
+	} s;
+	struct cvmx_gserx_dlmx_rx_eq_s cn70xx;
+	struct cvmx_gserx_dlmx_rx_eq_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_rx_eq cvmx_gserx_dlmx_rx_eq_t;
+
+/**
+ * cvmx_gser#_dlm#_mpll_multiplier
+ *
+ * DLM MPLL Frequency Multiplier Control.
+ *
+ */
+union cvmx_gserx_dlmx_mpll_multiplier {
+	u64 u64;
+	struct cvmx_gserx_dlmx_mpll_multiplier_s {
+		u64 reserved_7_63 : 57;
+		u64 mpll_multiplier : 7;
+	} s;
+	struct cvmx_gserx_dlmx_mpll_multiplier_s cn70xx;
+	struct cvmx_gserx_dlmx_mpll_multiplier_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_dlmx_mpll_multiplier cvmx_gserx_dlmx_mpll_multiplier_t;
+
+/**
+ * cvmx_gser#_br_rx#_ctl
+ */
+union cvmx_gserx_br_rxx_ctl {
+	u64 u64;
+	struct cvmx_gserx_br_rxx_ctl_s {
+		u64 reserved_4_63 : 60;
+		u64 rxt_adtmout_disable : 1;
+		u64 rxt_swm : 1;
+		u64 rxt_preset : 1;
+		u64 rxt_initialize : 1;
+	} s;
+	struct cvmx_gserx_br_rxx_ctl_s cn73xx;
+	struct cvmx_gserx_br_rxx_ctl_s cn78xx;
+	struct cvmx_gserx_br_rxx_ctl_cn78xxp1 {
+		u64 reserved_3_63 : 61;
+		u64 rxt_swm : 1;
+		u64 rxt_preset : 1;
+		u64 rxt_initialize : 1;
+	} cn78xxp1;
+	struct cvmx_gserx_br_rxx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_gserx_br_rxx_ctl cvmx_gserx_br_rxx_ctl_t;
+
+/**
+ * cvmx_gser#_br_rx#_eer
+ *
+ * GSER software BASE-R RX link training equalization evaluation request (EER). A write to
+ * [RXT_EER] initiates an equalization request to the RAW PCS. A read of this register returns the
+ * equalization status message and a valid bit indicating it was updated. These registers are for
+ * diagnostic use only.
+ */
+union cvmx_gserx_br_rxx_eer {
+	u64 u64;
+	struct cvmx_gserx_br_rxx_eer_s {
+		u64 reserved_16_63 : 48;
+		u64 rxt_eer : 1;
+		u64 rxt_esv : 1;
+		u64 rxt_esm : 14;
+	} s;
+	struct cvmx_gserx_br_rxx_eer_s cn73xx;
+	struct cvmx_gserx_br_rxx_eer_s cn78xx;
+	struct cvmx_gserx_br_rxx_eer_s cn78xxp1;
+	struct cvmx_gserx_br_rxx_eer_s cnf75xx;
+};
+
+typedef union cvmx_gserx_br_rxx_eer cvmx_gserx_br_rxx_eer_t;
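+
+/*
+ * Illustrative EER handshake (a sketch, under the assumption that the
+ * CVMX_GSERX_BR_RXX_EER() address macro is defined earlier in this file
+ * and that the csr_rd()/csr_wr() accessors exist): request equalization
+ * by writing RXT_EER, then poll until RXT_ESV flags a valid status
+ * message in RXT_ESM.
+ *
+ *	cvmx_gserx_br_rxx_eer_t eer;
+ *
+ *	eer.u64 = 0;
+ *	eer.s.rxt_eer = 1;
+ *	csr_wr(CVMX_GSERX_BR_RXX_EER(lane, qlm), eer.u64);
+ *	do {
+ *		eer.u64 = csr_rd(CVMX_GSERX_BR_RXX_EER(lane, qlm));
+ *	} while (!eer.s.rxt_esv);
+ */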
+
+/**
+ * cvmx_gser#_pcie_pipe_port_sel
+ *
+ * PCIE PIPE Enable Request.
+ *
+ */
+union cvmx_gserx_pcie_pipe_port_sel {
+	u64 u64;
+	struct cvmx_gserx_pcie_pipe_port_sel_s {
+		u64 reserved_3_63 : 61;
+		u64 cfg_pem1_dlm2 : 1;
+		u64 pipe_port_sel : 2;
+	} s;
+	struct cvmx_gserx_pcie_pipe_port_sel_s cn70xx;
+	struct cvmx_gserx_pcie_pipe_port_sel_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_pcie_pipe_port_sel cvmx_gserx_pcie_pipe_port_sel_t;
+
+/**
+ * cvmx_gser#_pcie_pipe_rst
+ *
+ * PCIE PIPE Reset.
+ *
+ */
+union cvmx_gserx_pcie_pipe_rst {
+	u64 u64;
+	struct cvmx_gserx_pcie_pipe_rst_s {
+		u64 reserved_4_63 : 60;
+		u64 pipe3_rst : 1;
+		u64 pipe2_rst : 1;
+		u64 pipe1_rst : 1;
+		u64 pipe0_rst : 1;
+	} s;
+	struct cvmx_gserx_pcie_pipe_rst_s cn70xx;
+	struct cvmx_gserx_pcie_pipe_rst_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_pcie_pipe_rst cvmx_gserx_pcie_pipe_rst_t;
+
+/**
+ * cvmx_gser#_sata_cfg
+ *
+ * SATA Config Enable.
+ *
+ */
+union cvmx_gserx_sata_cfg {
+	u64 u64;
+	struct cvmx_gserx_sata_cfg_s {
+		u64 reserved_1_63 : 63;
+		u64 sata_en : 1;
+	} s;
+	struct cvmx_gserx_sata_cfg_s cn70xx;
+	struct cvmx_gserx_sata_cfg_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_sata_cfg cvmx_gserx_sata_cfg_t;
+
+/**
+ * cvmx_gser#_sata_lane_rst
+ *
+ * Lane Reset Control.
+ *
+ */
+union cvmx_gserx_sata_lane_rst {
+	u64 u64;
+	struct cvmx_gserx_sata_lane_rst_s {
+		u64 reserved_2_63 : 62;
+		u64 l1_rst : 1;
+		u64 l0_rst : 1;
+	} s;
+	struct cvmx_gserx_sata_lane_rst_s cn70xx;
+	struct cvmx_gserx_sata_lane_rst_s cn70xxp1;
+	struct cvmx_gserx_sata_lane_rst_s cn73xx;
+	struct cvmx_gserx_sata_lane_rst_s cnf75xx;
+};
+
+typedef union cvmx_gserx_sata_lane_rst cvmx_gserx_sata_lane_rst_t;
+
+/**
+ * cvmx_gser#_sata_ref_ssp_en
+ *
+ * SATA Reference Clock Enable for the PHY.
+ *
+ */
+union cvmx_gserx_sata_ref_ssp_en {
+	u64 u64;
+	struct cvmx_gserx_sata_ref_ssp_en_s {
+		u64 reserved_1_63 : 63;
+		u64 ref_ssp_en : 1;
+	} s;
+	struct cvmx_gserx_sata_ref_ssp_en_s cn70xx;
+	struct cvmx_gserx_sata_ref_ssp_en_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_sata_ref_ssp_en cvmx_gserx_sata_ref_ssp_en_t;
+
+/**
+ * cvmx_gser#_phy#_ovrd_in_lo
+ *
+ * PHY Override Input Low Register.
+ *
+ */
+union cvmx_gserx_phyx_ovrd_in_lo {
+	u64 u64;
+	struct cvmx_gserx_phyx_ovrd_in_lo_s {
+		u64 reserved_16_63 : 48;
+		u64 res_ack_in_ovrd : 1;
+		u64 res_ack_in : 1;
+		u64 res_req_in_ovrd : 1;
+		u64 res_req_in : 1;
+		u64 rtune_req_ovrd : 1;
+		u64 rtune_req : 1;
+		u64 mpll_multiplier_ovrd : 1;
+		u64 mpll_multiplier : 7;
+		u64 mpll_en_ovrd : 1;
+		u64 mpll_en : 1;
+	} s;
+	struct cvmx_gserx_phyx_ovrd_in_lo_s cn70xx;
+	struct cvmx_gserx_phyx_ovrd_in_lo_s cn70xxp1;
+};
+
+typedef union cvmx_gserx_phyx_ovrd_in_lo cvmx_gserx_phyx_ovrd_in_lo_t;
+
+/**
+ * cvmx_gser#_phy_ctl
+ *
+ * This register contains general PHY/PLL control of the RAW PCS.
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_phy_ctl {
+	u64 u64;
+	struct cvmx_gserx_phy_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 phy_reset : 1;
+		u64 phy_pd : 1;
+	} s;
+	struct cvmx_gserx_phy_ctl_s cn73xx;
+	struct cvmx_gserx_phy_ctl_s cn78xx;
+	struct cvmx_gserx_phy_ctl_s cn78xxp1;
+	struct cvmx_gserx_phy_ctl_s cnf75xx;
+};
+
+typedef union cvmx_gserx_phy_ctl cvmx_gserx_phy_ctl_t;
+
+/**
+ * cvmx_gser#_rx_pwr_ctrl_p1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_rx_pwr_ctrl_p1 {
+	u64 u64;
+	struct cvmx_gserx_rx_pwr_ctrl_p1_s {
+		u64 reserved_14_63 : 50;
+		u64 p1_rx_resetn : 1;
+		u64 pq_rx_allow_pll_pd : 1;
+		u64 pq_rx_pcs_reset : 1;
+		u64 p1_rx_agc_en : 1;
+		u64 p1_rx_dfe_en : 1;
+		u64 p1_rx_cdr_en : 1;
+		u64 p1_rx_cdr_coast : 1;
+		u64 p1_rx_cdr_clr : 1;
+		u64 p1_rx_subblk_pd : 5;
+		u64 p1_rx_chpd : 1;
+	} s;
+	struct cvmx_gserx_rx_pwr_ctrl_p1_s cn73xx;
+	struct cvmx_gserx_rx_pwr_ctrl_p1_s cn78xx;
+	struct cvmx_gserx_rx_pwr_ctrl_p1_s cn78xxp1;
+	struct cvmx_gserx_rx_pwr_ctrl_p1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_pwr_ctrl_p1 cvmx_gserx_rx_pwr_ctrl_p1_t;
+
+/**
+ * cvmx_gser#_rx_pwr_ctrl_p2
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_rx_pwr_ctrl_p2 {
+	u64 u64;
+	struct cvmx_gserx_rx_pwr_ctrl_p2_s {
+		u64 reserved_14_63 : 50;
+		u64 p2_rx_resetn : 1;
+		u64 p2_rx_allow_pll_pd : 1;
+		u64 p2_rx_pcs_reset : 1;
+		u64 p2_rx_agc_en : 1;
+		u64 p2_rx_dfe_en : 1;
+		u64 p2_rx_cdr_en : 1;
+		u64 p2_rx_cdr_coast : 1;
+		u64 p2_rx_cdr_clr : 1;
+		u64 p2_rx_subblk_pd : 5;
+		u64 p2_rx_chpd : 1;
+	} s;
+	struct cvmx_gserx_rx_pwr_ctrl_p2_s cn73xx;
+	struct cvmx_gserx_rx_pwr_ctrl_p2_s cn78xx;
+	struct cvmx_gserx_rx_pwr_ctrl_p2_s cn78xxp1;
+	struct cvmx_gserx_rx_pwr_ctrl_p2_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_pwr_ctrl_p2 cvmx_gserx_rx_pwr_ctrl_p2_t;
+
+/**
+ * cvmx_gser#_rx_txdir_ctrl_0
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_rx_txdir_ctrl_0 {
+	u64 u64;
+	struct cvmx_gserx_rx_txdir_ctrl_0_s {
+		u64 reserved_13_63 : 51;
+		u64 rx_boost_hi_thrs : 4;
+		u64 rx_boost_lo_thrs : 4;
+		u64 rx_boost_hi_val : 5;
+	} s;
+	struct cvmx_gserx_rx_txdir_ctrl_0_s cn73xx;
+	struct cvmx_gserx_rx_txdir_ctrl_0_s cn78xx;
+	struct cvmx_gserx_rx_txdir_ctrl_0_s cn78xxp1;
+	struct cvmx_gserx_rx_txdir_ctrl_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_txdir_ctrl_0 cvmx_gserx_rx_txdir_ctrl_0_t;
+
+/**
+ * cvmx_gser#_rx_txdir_ctrl_1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_rx_txdir_ctrl_1 {
+	u64 u64;
+	struct cvmx_gserx_rx_txdir_ctrl_1_s {
+		u64 reserved_12_63 : 52;
+		u64 rx_precorr_chg_dir : 1;
+		u64 rx_tap1_chg_dir : 1;
+		u64 rx_tap1_hi_thrs : 5;
+		u64 rx_tap1_lo_thrs : 5;
+	} s;
+	struct cvmx_gserx_rx_txdir_ctrl_1_s cn73xx;
+	struct cvmx_gserx_rx_txdir_ctrl_1_s cn78xx;
+	struct cvmx_gserx_rx_txdir_ctrl_1_s cn78xxp1;
+	struct cvmx_gserx_rx_txdir_ctrl_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_txdir_ctrl_1 cvmx_gserx_rx_txdir_ctrl_1_t;
+
+/**
+ * cvmx_gser#_rx_txdir_ctrl_2
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_rx_txdir_ctrl_2 {
+	u64 u64;
+	struct cvmx_gserx_rx_txdir_ctrl_2_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_precorr_hi_thrs : 8;
+		u64 rx_precorr_lo_thrs : 8;
+	} s;
+	struct cvmx_gserx_rx_txdir_ctrl_2_s cn73xx;
+	struct cvmx_gserx_rx_txdir_ctrl_2_s cn78xx;
+	struct cvmx_gserx_rx_txdir_ctrl_2_s cn78xxp1;
+	struct cvmx_gserx_rx_txdir_ctrl_2_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_txdir_ctrl_2 cvmx_gserx_rx_txdir_ctrl_2_t;
+
+/**
+ * cvmx_gser#_rx_eie_detsts
+ */
+union cvmx_gserx_rx_eie_detsts {
+	u64 u64;
+	struct cvmx_gserx_rx_eie_detsts_s {
+		u64 reserved_12_63 : 52;
+		u64 cdrlock : 4;
+		u64 eiests : 4;
+		u64 eieltch : 4;
+	} s;
+	struct cvmx_gserx_rx_eie_detsts_s cn73xx;
+	struct cvmx_gserx_rx_eie_detsts_s cn78xx;
+	struct cvmx_gserx_rx_eie_detsts_s cn78xxp1;
+	struct cvmx_gserx_rx_eie_detsts_s cnf75xx;
+};
+
+typedef union cvmx_gserx_rx_eie_detsts cvmx_gserx_rx_eie_detsts_t;
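+
+/*
+ * Sketch of a per-lane lock check (illustrative; assumes csr_rd() and a
+ * CVMX_GSERX_RX_EIE_DETSTS() address macro defined earlier in this file):
+ * CDRLOCK holds one bit per lane, so a caller typically masks the bits
+ * of the lanes it cares about.
+ *
+ *	cvmx_gserx_rx_eie_detsts_t detsts;
+ *
+ *	detsts.u64 = csr_rd(CVMX_GSERX_RX_EIE_DETSTS(qlm));
+ *	if ((detsts.s.cdrlock & lane_mask) == lane_mask)
+ *		;	// CDR locked on all lanes in lane_mask
+ */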
+
+/**
+ * cvmx_gser#_refclk_sel
+ *
+ * This register selects the reference clock.
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ *
+ * Not used with GSER6, GSER7, and GSER8.
+ */
+union cvmx_gserx_refclk_sel {
+	u64 u64;
+	struct cvmx_gserx_refclk_sel_s {
+		u64 reserved_3_63 : 61;
+		u64 pcie_refclk125 : 1;
+		u64 com_clk_sel : 1;
+		u64 use_com1 : 1;
+	} s;
+	struct cvmx_gserx_refclk_sel_s cn73xx;
+	struct cvmx_gserx_refclk_sel_s cn78xx;
+	struct cvmx_gserx_refclk_sel_s cn78xxp1;
+	struct cvmx_gserx_refclk_sel_s cnf75xx;
+};
+
+typedef union cvmx_gserx_refclk_sel cvmx_gserx_refclk_sel_t;
+
+/**
+ * cvmx_gser#_lane#_lbert_cfg
+ *
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_lbert_cfg {
+	u64 u64;
+	struct cvmx_gserx_lanex_lbert_cfg_s {
+		u64 reserved_16_63 : 48;
+		u64 lbert_pg_err_insert : 1;
+		u64 lbert_pm_sync_start : 1;
+		u64 lbert_pg_en : 1;
+		u64 lbert_pg_width : 2;
+		u64 lbert_pg_mode : 4;
+		u64 lbert_pm_en : 1;
+		u64 lbert_pm_width : 2;
+		u64 lbert_pm_mode : 4;
+	} s;
+	struct cvmx_gserx_lanex_lbert_cfg_s cn73xx;
+	struct cvmx_gserx_lanex_lbert_cfg_s cn78xx;
+	struct cvmx_gserx_lanex_lbert_cfg_s cn78xxp1;
+	struct cvmx_gserx_lanex_lbert_cfg_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_lbert_cfg cvmx_gserx_lanex_lbert_cfg_t;
+
+/**
+ * cvmx_gser#_lane#_misc_cfg_0
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_misc_cfg_0 {
+	u64 u64;
+	struct cvmx_gserx_lanex_misc_cfg_0_s {
+		u64 reserved_16_63 : 48;
+		u64 use_pma_polarity : 1;
+		u64 cfg_pcs_loopback : 1;
+		u64 pcs_tx_mode_ovrrd_en : 1;
+		u64 pcs_rx_mode_ovrrd_en : 1;
+		u64 cfg_eie_det_cnt : 4;
+		u64 eie_det_stl_on_time : 3;
+		u64 eie_det_stl_off_time : 3;
+		u64 tx_bit_order : 1;
+		u64 rx_bit_order : 1;
+	} s;
+	struct cvmx_gserx_lanex_misc_cfg_0_s cn73xx;
+	struct cvmx_gserx_lanex_misc_cfg_0_s cn78xx;
+	struct cvmx_gserx_lanex_misc_cfg_0_s cn78xxp1;
+	struct cvmx_gserx_lanex_misc_cfg_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_misc_cfg_0 cvmx_gserx_lanex_misc_cfg_0_t;
+
+/**
+ * cvmx_gser#_lane_p#_mode_0
+ *
+ * These are the RAW PCS lane settings mode 0 registers. There is one register per
+ * 4 lanes per GSER per GSER_LMODE_E value (0..11). Only one entry is used at any given time in a
+ * given GSER lane - the one selected by the corresponding GSER()_LANE_MODE[LMODE].
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_px_mode_0 {
+	u64 u64;
+	struct cvmx_gserx_lane_px_mode_0_s {
+		u64 reserved_15_63 : 49;
+		u64 ctle : 2;
+		u64 pcie : 1;
+		u64 tx_ldiv : 2;
+		u64 rx_ldiv : 2;
+		u64 srate : 3;
+		u64 reserved_4_4 : 1;
+		u64 tx_mode : 2;
+		u64 rx_mode : 2;
+	} s;
+	struct cvmx_gserx_lane_px_mode_0_s cn73xx;
+	struct cvmx_gserx_lane_px_mode_0_s cn78xx;
+	struct cvmx_gserx_lane_px_mode_0_s cn78xxp1;
+	struct cvmx_gserx_lane_px_mode_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_px_mode_0 cvmx_gserx_lane_px_mode_0_t;
+
+/**
+ * cvmx_gser#_lane_p#_mode_1
+ *
+ * These are the RAW PCS lane settings mode 1 registers. There is one register per 4 lanes
+ * (0..3) per GSER per GSER_LMODE_E value (0..11). Only one entry is used at any given time
+ * in a given GSER lane - the one selected by the corresponding GSER()_LANE_MODE[LMODE].
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_px_mode_1 {
+	u64 u64;
+	struct cvmx_gserx_lane_px_mode_1_s {
+		u64 reserved_16_63 : 48;
+		u64 vma_fine_cfg_sel : 1;
+		u64 vma_mm : 1;
+		u64 cdr_fgain : 4;
+		u64 ph_acc_adj : 10;
+	} s;
+	struct cvmx_gserx_lane_px_mode_1_s cn73xx;
+	struct cvmx_gserx_lane_px_mode_1_s cn78xx;
+	struct cvmx_gserx_lane_px_mode_1_s cn78xxp1;
+	struct cvmx_gserx_lane_px_mode_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_px_mode_1 cvmx_gserx_lane_px_mode_1_t;
+
+/**
+ * cvmx_gser#_lane#_rx_loop_ctrl
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_loop_ctrl {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_loop_ctrl_s {
+		u64 reserved_12_63 : 52;
+		u64 fast_dll_lock : 1;
+		u64 fast_ofst_cncl : 1;
+		u64 cfg_rx_lctrl : 10;
+	} s;
+	struct cvmx_gserx_lanex_rx_loop_ctrl_s cn73xx;
+	struct cvmx_gserx_lanex_rx_loop_ctrl_s cn78xx;
+	struct cvmx_gserx_lanex_rx_loop_ctrl_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_loop_ctrl_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_loop_ctrl cvmx_gserx_lanex_rx_loop_ctrl_t;
+
+/**
+ * cvmx_gser#_lane#_rx_valbbd_ctrl_0
+ *
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_valbbd_ctrl_0 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_0_s {
+		u64 reserved_14_63 : 50;
+		u64 agc_gain : 2;
+		u64 dfe_gain : 2;
+		u64 dfe_c5_mval : 4;
+		u64 dfe_c5_msgn : 1;
+		u64 dfe_c4_mval : 4;
+		u64 dfe_c4_msgn : 1;
+	} s;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_0_s cn73xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_0_s cn78xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_0_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_valbbd_ctrl_0 cvmx_gserx_lanex_rx_valbbd_ctrl_0_t;
+
+/**
+ * cvmx_gser#_lane#_rx_valbbd_ctrl_1
+ *
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_valbbd_ctrl_1 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_1_s {
+		u64 reserved_15_63 : 49;
+		u64 dfe_c3_mval : 4;
+		u64 dfe_c3_msgn : 1;
+		u64 dfe_c2_mval : 4;
+		u64 dfe_c2_msgn : 1;
+		u64 dfe_c1_mval : 4;
+		u64 dfe_c1_msgn : 1;
+	} s;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_1_s cn73xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_1_s cn78xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_1_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_valbbd_ctrl_1 cvmx_gserx_lanex_rx_valbbd_ctrl_1_t;
+
+/**
+ * cvmx_gser#_lane#_rx_valbbd_ctrl_2
+ *
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_valbbd_ctrl_2 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_2_s {
+		u64 reserved_6_63 : 58;
+		u64 dfe_ovrd_en : 1;
+		u64 dfe_c5_ovrd_val : 1;
+		u64 dfe_c4_ovrd_val : 1;
+		u64 dfe_c3_ovrd_val : 1;
+		u64 dfe_c2_ovrd_val : 1;
+		u64 dfe_c1_ovrd_val : 1;
+	} s;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_2_s cn73xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_2_s cn78xx;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_2_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_valbbd_ctrl_2_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_valbbd_ctrl_2 cvmx_gserx_lanex_rx_valbbd_ctrl_2_t;
+
+/**
+ * cvmx_gser#_lane_vma_fine_ctrl_0
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_vma_fine_ctrl_0 {
+	u64 u64;
+	struct cvmx_gserx_lane_vma_fine_ctrl_0_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_sdll_iq_max_fine : 4;
+		u64 rx_sdll_iq_min_fine : 4;
+		u64 rx_sdll_iq_step_fine : 2;
+		u64 vma_window_wait_fine : 3;
+		u64 lms_wait_time_fine : 3;
+	} s;
+	struct cvmx_gserx_lane_vma_fine_ctrl_0_s cn73xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_0_s cn78xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_0_s cn78xxp1;
+	struct cvmx_gserx_lane_vma_fine_ctrl_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_vma_fine_ctrl_0 cvmx_gserx_lane_vma_fine_ctrl_0_t;
+
+/**
+ * cvmx_gser#_lane_vma_fine_ctrl_1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_vma_fine_ctrl_1 {
+	u64 u64;
+	struct cvmx_gserx_lane_vma_fine_ctrl_1_s {
+		u64 reserved_10_63 : 54;
+		u64 rx_ctle_peak_max_fine : 4;
+		u64 rx_ctle_peak_min_fine : 4;
+		u64 rx_ctle_peak_step_fine : 2;
+	} s;
+	struct cvmx_gserx_lane_vma_fine_ctrl_1_s cn73xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_1_s cn78xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_1_s cn78xxp1;
+	struct cvmx_gserx_lane_vma_fine_ctrl_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_vma_fine_ctrl_1 cvmx_gserx_lane_vma_fine_ctrl_1_t;
+
+/**
+ * cvmx_gser#_lane_vma_fine_ctrl_2
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_vma_fine_ctrl_2 {
+	u64 u64;
+	struct cvmx_gserx_lane_vma_fine_ctrl_2_s {
+		u64 reserved_10_63 : 54;
+		u64 rx_prectle_gain_max_fine : 4;
+		u64 rx_prectle_gain_min_fine : 4;
+		u64 rx_prectle_gain_step_fine : 2;
+	} s;
+	struct cvmx_gserx_lane_vma_fine_ctrl_2_s cn73xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_2_s cn78xx;
+	struct cvmx_gserx_lane_vma_fine_ctrl_2_s cn78xxp1;
+	struct cvmx_gserx_lane_vma_fine_ctrl_2_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_vma_fine_ctrl_2 cvmx_gserx_lane_vma_fine_ctrl_2_t;
+
+/**
+ * cvmx_gser#_lane#_pwr_ctrl
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_pwr_ctrl {
+	u64 u64;
+	struct cvmx_gserx_lanex_pwr_ctrl_s {
+		u64 reserved_15_63 : 49;
+		u64 tx_sds_fifo_reset_ovrrd_en : 1;
+		u64 tx_sds_fifo_reset_ovrrd_val : 1;
+		u64 tx_pcs_reset_ovrrd_val : 1;
+		u64 rx_pcs_reset_ovrrd_val : 1;
+		u64 reserved_9_10 : 2;
+		u64 rx_resetn_ovrrd_en : 1;
+		u64 rx_resetn_ovrrd_val : 1;
+		u64 rx_lctrl_ovrrd_en : 1;
+		u64 rx_lctrl_ovrrd_val : 1;
+		u64 tx_tristate_en_ovrrd_en : 1;
+		u64 tx_pcs_reset_ovrrd_en : 1;
+		u64 tx_elec_idle_ovrrd_en : 1;
+		u64 tx_pd_ovrrd_en : 1;
+		u64 tx_p2s_resetn_ovrrd_en : 1;
+	} s;
+	struct cvmx_gserx_lanex_pwr_ctrl_cn73xx {
+		u64 reserved_15_63 : 49;
+		u64 tx_sds_fifo_reset_ovrrd_en : 1;
+		u64 tx_sds_fifo_reset_ovrrd_val : 1;
+		u64 tx_pcs_reset_ovrrd_val : 1;
+		u64 rx_pcs_reset_ovrrd_val : 1;
+		u64 reserved_10_9 : 2;
+		u64 rx_resetn_ovrrd_en : 1;
+		u64 rx_resetn_ovrrd_val : 1;
+		u64 rx_lctrl_ovrrd_en : 1;
+		u64 rx_lctrl_ovrrd_val : 1;
+		u64 tx_tristate_en_ovrrd_en : 1;
+		u64 tx_pcs_reset_ovrrd_en : 1;
+		u64 tx_elec_idle_ovrrd_en : 1;
+		u64 tx_pd_ovrrd_en : 1;
+		u64 tx_p2s_resetn_ovrrd_en : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_pwr_ctrl_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_pwr_ctrl_s cn78xxp1;
+	struct cvmx_gserx_lanex_pwr_ctrl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_pwr_ctrl cvmx_gserx_lanex_pwr_ctrl_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_0
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_0 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_0_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_datarate_ovrrd_en : 1;
+		u64 reserved_14_14 : 1;
+		u64 rx_resetn_ovrrd_val : 1;
+		u64 pcs_sds_rx_eyemon_en : 1;
+		u64 pcs_sds_rx_pcm_ctrl : 4;
+		u64 rx_datarate_ovrrd_val : 2;
+		u64 cfg_rx_pol_invert : 1;
+		u64 rx_subblk_pd_ovrrd_val : 5;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_0_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 rx_datarate_ovrrd_en : 1;
+		u64 pcs_rx_tristate_enable : 1;
+		u64 rx_resetn_ovrrd_val : 1;
+		u64 pcs_sds_rx_eyemon_en : 1;
+		u64 pcs_sds_rx_pcm_ctrl : 4;
+		u64 rx_datarate_ovrrd_val : 2;
+		u64 cfg_rx_pol_invert : 1;
+		u64 rx_subblk_pd_ovrrd_val : 5;
+	} cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_0_cn78xx {
+		u64 reserved_16_63 : 48;
+		u64 rx_datarate_ovrrd_en : 1;
+		u64 pcs_sds_rx_tristate_enable : 1;
+		u64 rx_resetn_ovrrd_val : 1;
+		u64 pcs_sds_rx_eyemon_en : 1;
+		u64 pcs_sds_rx_pcm_ctrl : 4;
+		u64 rx_datarate_ovrrd_val : 2;
+		u64 cfg_rx_pol_invert : 1;
+		u64 rx_subblk_pd_ovrrd_val : 5;
+	} cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_0_cn78xx cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_0 cvmx_gserx_lanex_rx_cfg_0_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_1 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_1_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_chpd_ovrrd_val : 1;
+		u64 pcs_sds_rx_os_men : 1;
+		u64 eie_en_ovrrd_en : 1;
+		u64 eie_en_ovrrd_val : 1;
+		u64 reserved_11_11 : 1;
+		u64 rx_pcie_mode_ovrrd_en : 1;
+		u64 rx_pcie_mode_ovrrd_val : 1;
+		u64 cfg_rx_dll_locken : 1;
+		u64 pcs_sds_rx_cdr_ssc_mode : 8;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_1_s cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_1_s cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_1_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_1 cvmx_gserx_lanex_rx_cfg_1_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_2
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_2 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_2_s {
+		u64 reserved_15_63 : 49;
+		u64 pcs_sds_rx_terminate_to_vdda : 1;
+		u64 pcs_sds_rx_sampler_boost : 2;
+		u64 pcs_sds_rx_sampler_boost_en : 1;
+		u64 reserved_10_10 : 1;
+		u64 rx_sds_rx_agc_mval : 10;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_2_s cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_2_s cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_2_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_2_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_2 cvmx_gserx_lanex_rx_cfg_2_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_3
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_3 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_3_s {
+		u64 reserved_16_63 : 48;
+		u64 cfg_rx_errdet_ctrl : 16;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_3_s cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_3_s cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_3_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_3_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_3 cvmx_gserx_lanex_rx_cfg_3_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_4
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_4 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_4_s {
+		u64 reserved_16_63 : 48;
+		u64 cfg_rx_errdet_ctrl : 16;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_4_s cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_4_s cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_4_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_4_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_4 cvmx_gserx_lanex_rx_cfg_4_t;
+
+/**
+ * cvmx_gser#_lane#_rx_cfg_5
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_cfg_5 {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_cfg_5_s {
+		u64 reserved_5_63 : 59;
+		u64 rx_agc_men_ovrrd_en : 1;
+		u64 rx_agc_men_ovrrd_val : 1;
+		u64 rx_widthsel_ovrrd_en : 1;
+		u64 rx_widthsel_ovrrd_val : 2;
+	} s;
+	struct cvmx_gserx_lanex_rx_cfg_5_s cn73xx;
+	struct cvmx_gserx_lanex_rx_cfg_5_s cn78xx;
+	struct cvmx_gserx_lanex_rx_cfg_5_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_cfg_5_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_cfg_5 cvmx_gserx_lanex_rx_cfg_5_t;
+
+/**
+ * cvmx_gser#_lane#_rx_ctle_ctrl
+ *
+ * These are the RAW PCS per-lane RX CTLE control registers.
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_ctle_ctrl {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_ctle_ctrl_s {
+		u64 reserved_16_63 : 48;
+		u64 pcs_sds_rx_ctle_bias_ctrl : 2;
+		u64 pcs_sds_rx_ctle_zero : 4;
+		u64 rx_ctle_pole_ovrrd_en : 1;
+		u64 rx_ctle_pole_ovrrd_val : 4;
+		u64 pcs_sds_rx_ctle_pole_max : 2;
+		u64 pcs_sds_rx_ctle_pole_min : 2;
+		u64 pcs_sds_rx_ctle_pole_step : 1;
+	} s;
+	struct cvmx_gserx_lanex_rx_ctle_ctrl_s cn73xx;
+	struct cvmx_gserx_lanex_rx_ctle_ctrl_s cn78xx;
+	struct cvmx_gserx_lanex_rx_ctle_ctrl_s cn78xxp1;
+	struct cvmx_gserx_lanex_rx_ctle_ctrl_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_ctle_ctrl cvmx_gserx_lanex_rx_ctle_ctrl_t;
+
+/**
+ * cvmx_gser#_lane#_rx_misc_ovrrd
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_rx_misc_ovrrd {
+	u64 u64;
+	struct cvmx_gserx_lanex_rx_misc_ovrrd_s {
+		u64 reserved_14_63 : 50;
+		u64 cfg_rx_oob_clk_en_ovrrd_val : 1;
+		u64 cfg_rx_oob_clk_en_ovrrd_en : 1;
+		u64 cfg_rx_eie_det_ovrrd_val : 1;
+		u64 cfg_rx_eie_det_ovrrd_en : 1;
+		u64 cfg_rx_cdr_ctrl_ovrrd_en : 1;
+		u64 cfg_rx_eq_eval_ovrrd_val : 1;
+		u64 cfg_rx_eq_eval_ovrrd_en : 1;
+		u64 reserved_6_6 : 1;
+		u64 cfg_rx_dll_locken_ovrrd_en : 1;
+		u64 cfg_rx_errdet_ctrl_ovrrd_en : 1;
+		u64 reserved_1_3 : 3;
+		u64 cfg_rxeq_eval_restore_en : 1;
+	} s;
+	struct cvmx_gserx_lanex_rx_misc_ovrrd_cn73xx {
+		u64 reserved_14_63 : 50;
+		u64 cfg_rx_oob_clk_en_ovrrd_val : 1;
+		u64 cfg_rx_oob_clk_en_ovrrd_en : 1;
+		u64 cfg_rx_eie_det_ovrrd_val : 1;
+		u64 cfg_rx_eie_det_ovrrd_en : 1;
+		u64 cfg_rx_cdr_ctrl_ovrrd_en : 1;
+		u64 cfg_rx_eq_eval_ovrrd_val : 1;
+		u64 cfg_rx_eq_eval_ovrrd_en : 1;
+		u64 reserved_6_6 : 1;
+		u64 cfg_rx_dll_locken_ovrrd_en : 1;
+		u64 cfg_rx_errdet_ctrl_ovrrd_en : 1;
+		u64 reserved_3_1 : 3;
+		u64 cfg_rxeq_eval_restore_en : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_rx_misc_ovrrd_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_rx_misc_ovrrd_cn78xxp1 {
+		u64 reserved_14_63 : 50;
+		u64 cfg_rx_oob_clk_en_ovrrd_val : 1;
+		u64 cfg_rx_oob_clk_en_ovrrd_en : 1;
+		u64 cfg_rx_eie_det_ovrrd_val : 1;
+		u64 cfg_rx_eie_det_ovrrd_en : 1;
+		u64 cfg_rx_cdr_ctrl_ovrrd_en : 1;
+		u64 cfg_rx_eq_eval_ovrrd_val : 1;
+		u64 cfg_rx_eq_eval_ovrrd_en : 1;
+		u64 reserved_6_6 : 1;
+		u64 cfg_rx_dll_locken_ovrrd_en : 1;
+		u64 cfg_rx_errdet_ctrl_ovrrd_en : 1;
+		u64 reserved_0_3 : 4;
+	} cn78xxp1;
+	struct cvmx_gserx_lanex_rx_misc_ovrrd_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_rx_misc_ovrrd cvmx_gserx_lanex_rx_misc_ovrrd_t;
+
+/**
+ * cvmx_gser#_lane#_tx_cfg_0
+ *
+ * These registers are reset by hardware only during chip cold reset. The
+ * values of the CSR fields in these registers do not change during chip
+ * warm or soft resets.
+ */
+union cvmx_gserx_lanex_tx_cfg_0 {
+	u64 u64;
+	struct cvmx_gserx_lanex_tx_cfg_0_s {
+		u64 reserved_16_63 : 48;
+		u64 tx_tristate_en_ovrrd_val : 1;
+		u64 tx_chpd_ovrrd_val : 1;
+		u64 reserved_10_13 : 4;
+		u64 tx_resetn_ovrrd_val : 1;
+		u64 tx_cm_mode : 1;
+		u64 cfg_tx_swing : 5;
+		u64 fast_rdet_mode : 1;
+		u64 fast_tristate_mode : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_gserx_lanex_tx_cfg_0_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 tx_tristate_en_ovrrd_val : 1;
+		u64 tx_chpd_ovrrd_val : 1;
+		u64 reserved_13_10 : 4;
+		u64 tx_resetn_ovrrd_val : 1;
+		u64 tx_cm_mode : 1;
+		u64 cfg_tx_swing : 5;
+		u64 fast_rdet_mode : 1;
+		u64 fast_tristate_mode : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_tx_cfg_0_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_tx_cfg_0_s cn78xxp1;
+	struct cvmx_gserx_lanex_tx_cfg_0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_tx_cfg_0 cvmx_gserx_lanex_tx_cfg_0_t;
+
+/**
+ * cvmx_gser#_lane#_tx_cfg_1
+ *
+ * These registers are reset by hardware only during chip cold reset. The
+ * values of the CSR fields in these registers do not change during chip
+ * warm or soft resets.
+ */
+union cvmx_gserx_lanex_tx_cfg_1 {
+	u64 u64;
+	struct cvmx_gserx_lanex_tx_cfg_1_s {
+		u64 reserved_15_63 : 49;
+		u64 tx_widthsel_ovrrd_en : 1;
+		u64 tx_widthsel_ovrrd_val : 2;
+		u64 tx_vboost_en_ovrrd_en : 1;
+		u64 tx_turbo_en_ovrrd_en : 1;
+		u64 tx_swing_ovrrd_en : 1;
+		u64 tx_premptap_ovrrd_val : 1;
+		u64 tx_elec_idle_ovrrd_en : 1;
+		u64 smpl_rate_ovrrd_en : 1;
+		u64 smpl_rate_ovrrd_val : 3;
+		u64 tx_datarate_ovrrd_en : 1;
+		u64 tx_datarate_ovrrd_val : 2;
+	} s;
+	struct cvmx_gserx_lanex_tx_cfg_1_s cn73xx;
+	struct cvmx_gserx_lanex_tx_cfg_1_s cn78xx;
+	struct cvmx_gserx_lanex_tx_cfg_1_s cn78xxp1;
+	struct cvmx_gserx_lanex_tx_cfg_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_tx_cfg_1 cvmx_gserx_lanex_tx_cfg_1_t;
+
+/**
+ * cvmx_gser#_lane#_tx_cfg_2
+ *
+ * These registers are for diagnostic use only. These registers are reset by hardware only during
+ * chip cold reset. The values of the CSR fields in these registers do not change during chip
+ * warm or soft resets.
+ */
+union cvmx_gserx_lanex_tx_cfg_2 {
+	u64 u64;
+	struct cvmx_gserx_lanex_tx_cfg_2_s {
+		u64 reserved_16_63 : 48;
+		u64 pcs_sds_tx_dcc_en : 1;
+		u64 reserved_3_14 : 12;
+		u64 rcvr_test_ovrrd_en : 1;
+		u64 rcvr_test_ovrrd_val : 1;
+		u64 tx_rx_detect_dis_ovrrd_val : 1;
+	} s;
+	struct cvmx_gserx_lanex_tx_cfg_2_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 pcs_sds_tx_dcc_en : 1;
+		u64 reserved_14_3 : 12;
+		u64 rcvr_test_ovrrd_en : 1;
+		u64 rcvr_test_ovrrd_val : 1;
+		u64 tx_rx_detect_dis_ovrrd_val : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_tx_cfg_2_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_tx_cfg_2_s cn78xxp1;
+	struct cvmx_gserx_lanex_tx_cfg_2_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_tx_cfg_2 cvmx_gserx_lanex_tx_cfg_2_t;
+
+/**
+ * cvmx_gser#_lane#_tx_cfg_3
+ *
+ * These registers are for diagnostic use only. These registers are reset by hardware only during
+ * chip cold reset. The values of the CSR fields in these registers do not change during chip
+ * warm or soft resets.
+ */
+union cvmx_gserx_lanex_tx_cfg_3 {
+	u64 u64;
+	struct cvmx_gserx_lanex_tx_cfg_3_s {
+		u64 reserved_15_63 : 49;
+		u64 cfg_tx_vboost_en : 1;
+		u64 reserved_7_13 : 7;
+		u64 pcs_sds_tx_gain : 3;
+		u64 pcs_sds_tx_srate_sel : 3;
+		u64 cfg_tx_turbo_en : 1;
+	} s;
+	struct cvmx_gserx_lanex_tx_cfg_3_cn73xx {
+		u64 reserved_15_63 : 49;
+		u64 cfg_tx_vboost_en : 1;
+		u64 reserved_13_7 : 7;
+		u64 pcs_sds_tx_gain : 3;
+		u64 pcs_sds_tx_srate_sel : 3;
+		u64 cfg_tx_turbo_en : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_tx_cfg_3_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_tx_cfg_3_s cn78xxp1;
+	struct cvmx_gserx_lanex_tx_cfg_3_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_tx_cfg_3 cvmx_gserx_lanex_tx_cfg_3_t;
+
+/**
+ * cvmx_gser#_lane#_tx_pre_emphasis
+ *
+ * These registers are reset by hardware only during chip cold reset. The
+ * values of the CSR fields in these registers do not change during chip
+ * warm or soft resets.
+ */
+union cvmx_gserx_lanex_tx_pre_emphasis {
+	u64 u64;
+	struct cvmx_gserx_lanex_tx_pre_emphasis_s {
+		u64 reserved_9_63 : 55;
+		u64 cfg_tx_premptap : 9;
+	} s;
+	struct cvmx_gserx_lanex_tx_pre_emphasis_s cn73xx;
+	struct cvmx_gserx_lanex_tx_pre_emphasis_s cn78xx;
+	struct cvmx_gserx_lanex_tx_pre_emphasis_s cn78xxp1;
+	struct cvmx_gserx_lanex_tx_pre_emphasis_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_tx_pre_emphasis cvmx_gserx_lanex_tx_pre_emphasis_t;
+
+/**
+ * cvmx_gser#_lane#_pcs_ctlifc_0
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_pcs_ctlifc_0 {
+	u64 u64;
+	struct cvmx_gserx_lanex_pcs_ctlifc_0_s {
+		u64 reserved_14_63 : 50;
+		u64 cfg_tx_vboost_en_ovrrd_val : 1;
+		u64 cfg_tx_coeff_req_ovrrd_val : 1;
+		u64 cfg_rx_cdr_coast_req_ovrrd_val : 1;
+		u64 cfg_tx_detrx_en_req_ovrrd_val : 1;
+		u64 cfg_soft_reset_req_ovrrd_val : 1;
+		u64 cfg_lane_pwr_off_ovrrd_val : 1;
+		u64 cfg_tx_mode_ovrrd_val : 2;
+		u64 cfg_tx_pstate_req_ovrrd_val : 2;
+		u64 cfg_lane_mode_req_ovrrd_val : 4;
+	} s;
+	struct cvmx_gserx_lanex_pcs_ctlifc_0_s cn73xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_0_s cn78xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_0_s cn78xxp1;
+	struct cvmx_gserx_lanex_pcs_ctlifc_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_pcs_ctlifc_0 cvmx_gserx_lanex_pcs_ctlifc_0_t;
+
+/**
+ * cvmx_gser#_lane#_pcs_ctlifc_1
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_pcs_ctlifc_1 {
+	u64 u64;
+	struct cvmx_gserx_lanex_pcs_ctlifc_1_s {
+		u64 reserved_9_63 : 55;
+		u64 cfg_rx_pstate_req_ovrrd_val : 2;
+		u64 reserved_2_6 : 5;
+		u64 cfg_rx_mode_ovrrd_val : 2;
+	} s;
+	struct cvmx_gserx_lanex_pcs_ctlifc_1_cn73xx {
+		u64 reserved_9_63 : 55;
+		u64 cfg_rx_pstate_req_ovrrd_val : 2;
+		u64 reserved_6_2 : 5;
+		u64 cfg_rx_mode_ovrrd_val : 2;
+	} cn73xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_1_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_1_s cn78xxp1;
+	struct cvmx_gserx_lanex_pcs_ctlifc_1_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_pcs_ctlifc_1 cvmx_gserx_lanex_pcs_ctlifc_1_t;
+
+/**
+ * cvmx_gser#_lane#_pcs_ctlifc_2
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lanex_pcs_ctlifc_2 {
+	u64 u64;
+	struct cvmx_gserx_lanex_pcs_ctlifc_2_s {
+		u64 reserved_16_63 : 48;
+		u64 ctlifc_ovrrd_req : 1;
+		u64 reserved_9_14 : 6;
+		u64 cfg_tx_vboost_en_ovrrd_en : 1;
+		u64 cfg_tx_coeff_req_ovrrd_en : 1;
+		u64 cfg_rx_cdr_coast_req_ovrrd_en : 1;
+		u64 cfg_tx_detrx_en_req_ovrrd_en : 1;
+		u64 cfg_soft_reset_req_ovrrd_en : 1;
+		u64 cfg_lane_pwr_off_ovrrd_en : 1;
+		u64 cfg_tx_pstate_req_ovrrd_en : 1;
+		u64 cfg_rx_pstate_req_ovrrd_en : 1;
+		u64 cfg_lane_mode_req_ovrrd_en : 1;
+	} s;
+	struct cvmx_gserx_lanex_pcs_ctlifc_2_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 ctlifc_ovrrd_req : 1;
+		u64 reserved_14_9 : 6;
+		u64 cfg_tx_vboost_en_ovrrd_en : 1;
+		u64 cfg_tx_coeff_req_ovrrd_en : 1;
+		u64 cfg_rx_cdr_coast_req_ovrrd_en : 1;
+		u64 cfg_tx_detrx_en_req_ovrrd_en : 1;
+		u64 cfg_soft_reset_req_ovrrd_en : 1;
+		u64 cfg_lane_pwr_off_ovrrd_en : 1;
+		u64 cfg_tx_pstate_req_ovrrd_en : 1;
+		u64 cfg_rx_pstate_req_ovrrd_en : 1;
+		u64 cfg_lane_mode_req_ovrrd_en : 1;
+	} cn73xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_2_cn73xx cn78xx;
+	struct cvmx_gserx_lanex_pcs_ctlifc_2_s cn78xxp1;
+	struct cvmx_gserx_lanex_pcs_ctlifc_2_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_lanex_pcs_ctlifc_2 cvmx_gserx_lanex_pcs_ctlifc_2_t;
+
+/**
+ * cvmx_gser#_lane_mode
+ *
+ * These registers are reset by hardware only during chip cold reset. The values of the CSR
+ * fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_lane_mode {
+	u64 u64;
+	struct cvmx_gserx_lane_mode_s {
+		u64 reserved_4_63 : 60;
+		u64 lmode : 4;
+	} s;
+	struct cvmx_gserx_lane_mode_s cn73xx;
+	struct cvmx_gserx_lane_mode_s cn78xx;
+	struct cvmx_gserx_lane_mode_s cn78xxp1;
+	struct cvmx_gserx_lane_mode_s cnf75xx;
+};
+
+typedef union cvmx_gserx_lane_mode cvmx_gserx_lane_mode_t;
+
+/**
+ * cvmx_gser#_pll_p#_mode_0
+ *
+ * These are the RAW PCS PLL global settings mode 0 registers. There is one register per GSER per
+ * GSER_LMODE_E value (0..11). Only one entry is used at any given time in a given GSER - the one
+ * selected by the corresponding GSER()_LANE_MODE[LMODE].
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during subsequent chip warm or
+ * soft resets.
+ */
+union cvmx_gserx_pll_px_mode_0 {
+	u64 u64;
+	struct cvmx_gserx_pll_px_mode_0_s {
+		u64 reserved_16_63 : 48;
+		u64 pll_icp : 4;
+		u64 pll_rloop : 3;
+		u64 pll_pcs_div : 9;
+	} s;
+	struct cvmx_gserx_pll_px_mode_0_s cn73xx;
+	struct cvmx_gserx_pll_px_mode_0_s cn78xx;
+	struct cvmx_gserx_pll_px_mode_0_s cn78xxp1;
+	struct cvmx_gserx_pll_px_mode_0_s cnf75xx;
+};
+
+typedef union cvmx_gserx_pll_px_mode_0 cvmx_gserx_pll_px_mode_0_t;
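+
+/*
+ * Illustrative access pattern (a sketch; assumes csr_rd() and a
+ * CVMX_GSERX_LANE_MODE() address macro defined earlier in this file):
+ * the per-LMODE entries above are indexed by GSER()_LANE_MODE[LMODE],
+ * so code reads the current lane mode first and uses it as the first
+ * macro argument.
+ *
+ *	cvmx_gserx_lane_mode_t lmode;
+ *	cvmx_gserx_pll_px_mode_0_t pmode0;
+ *
+ *	lmode.u64 = csr_rd(CVMX_GSERX_LANE_MODE(qlm));
+ *	pmode0.u64 = csr_rd(CVMX_GSERX_PLL_PX_MODE_0(lmode.s.lmode, qlm));
+ */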
+
+/**
+ * cvmx_gser#_pll_p#_mode_1
+ *
+ * These are the RAW PCS PLL global settings mode 1 registers. There is one register per GSER per
+ * GSER_LMODE_E value (0..11). Only one entry is used at any given time in a given GSER - the one
+ * selected by the corresponding GSER()_LANE_MODE[LMODE].
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in this register do not change during subsequent chip warm or
+ * soft resets.
+ */
+union cvmx_gserx_pll_px_mode_1 {
+	u64 u64;
+	struct cvmx_gserx_pll_px_mode_1_s {
+		u64 reserved_14_63 : 50;
+		u64 pll_16p5en : 1;
+		u64 pll_cpadj : 2;
+		u64 pll_pcie3en : 1;
+		u64 pll_opr : 1;
+		u64 pll_div : 9;
+	} s;
+	struct cvmx_gserx_pll_px_mode_1_s cn73xx;
+	struct cvmx_gserx_pll_px_mode_1_s cn78xx;
+	struct cvmx_gserx_pll_px_mode_1_s cn78xxp1;
+	struct cvmx_gserx_pll_px_mode_1_s cnf75xx;
+};
+
+typedef union cvmx_gserx_pll_px_mode_1 cvmx_gserx_pll_px_mode_1_t;
+
+/**
+ * cvmx_gser#_pll_stat
+ */
+union cvmx_gserx_pll_stat {
+	u64 u64;
+	struct cvmx_gserx_pll_stat_s {
+		u64 reserved_1_63 : 63;
+		u64 pll_lock : 1;
+	} s;
+	struct cvmx_gserx_pll_stat_s cn73xx;
+	struct cvmx_gserx_pll_stat_s cn78xx;
+	struct cvmx_gserx_pll_stat_s cn78xxp1;
+	struct cvmx_gserx_pll_stat_s cnf75xx;
+};
+
+typedef union cvmx_gserx_pll_stat cvmx_gserx_pll_stat_t;
+
+/**
+ * cvmx_gser#_qlm_stat
+ */
+union cvmx_gserx_qlm_stat {
+	u64 u64;
+	struct cvmx_gserx_qlm_stat_s {
+		u64 reserved_2_63 : 62;
+		u64 rst_rdy : 1;
+		u64 dcok : 1;
+	} s;
+	struct cvmx_gserx_qlm_stat_s cn73xx;
+	struct cvmx_gserx_qlm_stat_s cn78xx;
+	struct cvmx_gserx_qlm_stat_s cn78xxp1;
+	struct cvmx_gserx_qlm_stat_s cnf75xx;
+};
+
+typedef union cvmx_gserx_qlm_stat cvmx_gserx_qlm_stat_t;
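+
+/*
+ * Illustrative readiness poll (a sketch; assumes csr_rd()): before a QLM
+ * is reconfigured, its reset-ready and PLL-lock status are commonly
+ * checked via GSER()_QLM_STAT and GSER()_PLL_STAT.
+ *
+ *	cvmx_gserx_qlm_stat_t qlm_stat;
+ *	cvmx_gserx_pll_stat_t pll_stat;
+ *
+ *	do {
+ *		qlm_stat.u64 = csr_rd(CVMX_GSERX_QLM_STAT(qlm));
+ *	} while (!qlm_stat.s.rst_rdy);
+ *	pll_stat.u64 = csr_rd(CVMX_GSERX_PLL_STAT(qlm));
+ *	// pll_stat.s.pll_lock == 1 once the PLL has locked
+ */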
+
+/**
+ * cvmx_gser#_slice_cfg
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ */
+union cvmx_gserx_slice_cfg {
+	u64 u64;
+	struct cvmx_gserx_slice_cfg_s {
+		u64 reserved_12_63 : 52;
+		u64 tx_rx_detect_lvl_enc : 4;
+		u64 reserved_6_7 : 2;
+		u64 pcs_sds_rx_pcie_pterm : 2;
+		u64 pcs_sds_rx_pcie_nterm : 2;
+		u64 pcs_sds_tx_stress_eye : 2;
+	} s;
+	struct cvmx_gserx_slice_cfg_cn73xx {
+		u64 reserved_12_63 : 52;
+		u64 tx_rx_detect_lvl_enc : 4;
+		u64 reserved_7_6 : 2;
+		u64 pcs_sds_rx_pcie_pterm : 2;
+		u64 pcs_sds_rx_pcie_nterm : 2;
+		u64 pcs_sds_tx_stress_eye : 2;
+	} cn73xx;
+	struct cvmx_gserx_slice_cfg_cn73xx cn78xx;
+	struct cvmx_gserx_slice_cfg_s cn78xxp1;
+	struct cvmx_gserx_slice_cfg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_slice_cfg cvmx_gserx_slice_cfg_t;
+
+/**
+ * cvmx_gser#_slice#_pcie1_mode
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ *
+ * Slice 1 does not exist on GSER0, GSER1, GSER4, GSER5, GSER6, GSER7, and GSER8.
+ */
+union cvmx_gserx_slicex_pcie1_mode {
+	u64 u64;
+	struct cvmx_gserx_slicex_pcie1_mode_s {
+		u64 reserved_15_63 : 49;
+		u64 slice_spare_1_0 : 2;
+		u64 rx_ldll_isel : 2;
+		u64 rx_sdll_isel : 2;
+		u64 rx_pi_bwsel : 3;
+		u64 rx_ldll_bwsel : 3;
+		u64 rx_sdll_bwsel : 3;
+	} s;
+	struct cvmx_gserx_slicex_pcie1_mode_s cn73xx;
+	struct cvmx_gserx_slicex_pcie1_mode_s cn78xx;
+	struct cvmx_gserx_slicex_pcie1_mode_s cn78xxp1;
+	struct cvmx_gserx_slicex_pcie1_mode_s cnf75xx;
+};
+
+typedef union cvmx_gserx_slicex_pcie1_mode cvmx_gserx_slicex_pcie1_mode_t;
+
+/**
+ * cvmx_gser#_slice#_pcie2_mode
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ *
+ * Slice 1 does not exist on GSER0, GSER1, GSER4, GSER5, GSER6, GSER7, and GSER8.
+ */
+union cvmx_gserx_slicex_pcie2_mode {
+	u64 u64;
+	struct cvmx_gserx_slicex_pcie2_mode_s {
+		u64 reserved_15_63 : 49;
+		u64 slice_spare_1_0 : 2;
+		u64 rx_ldll_isel : 2;
+		u64 rx_sdll_isel : 2;
+		u64 rx_pi_bwsel : 3;
+		u64 rx_ldll_bwsel : 3;
+		u64 rx_sdll_bwsel : 3;
+	} s;
+	struct cvmx_gserx_slicex_pcie2_mode_s cn73xx;
+	struct cvmx_gserx_slicex_pcie2_mode_s cn78xx;
+	struct cvmx_gserx_slicex_pcie2_mode_s cn78xxp1;
+	struct cvmx_gserx_slicex_pcie2_mode_s cnf75xx;
+};
+
+typedef union cvmx_gserx_slicex_pcie2_mode cvmx_gserx_slicex_pcie2_mode_t;
+
+/**
+ * cvmx_gser#_slice#_pcie3_mode
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ *
+ * Slice 1 does not exist on GSER0, GSER1, GSER4, GSER5, GSER6, GSER7, and GSER8.
+ */
+union cvmx_gserx_slicex_pcie3_mode {
+	u64 u64;
+	struct cvmx_gserx_slicex_pcie3_mode_s {
+		u64 reserved_15_63 : 49;
+		u64 slice_spare_1_0 : 2;
+		u64 rx_ldll_isel : 2;
+		u64 rx_sdll_isel : 2;
+		u64 rx_pi_bwsel : 3;
+		u64 rx_ldll_bwsel : 3;
+		u64 rx_sdll_bwsel : 3;
+	} s;
+	struct cvmx_gserx_slicex_pcie3_mode_s cn73xx;
+	struct cvmx_gserx_slicex_pcie3_mode_s cn78xx;
+	struct cvmx_gserx_slicex_pcie3_mode_s cn78xxp1;
+	struct cvmx_gserx_slicex_pcie3_mode_s cnf75xx;
+};
+
+typedef union cvmx_gserx_slicex_pcie3_mode cvmx_gserx_slicex_pcie3_mode_t;
+
+/**
+ * cvmx_gser#_slice#_rx_sdll_ctrl
+ *
+ * These registers are for diagnostic use only.
+ * These registers are reset by hardware only during chip cold reset.
+ * The values of the CSR fields in these registers do not change during chip warm or soft resets.
+ *
+ * Slice 1 does not exist on GSER0, GSER1, GSER4, GSER5, GSER6, GSER7, and GSER8.
+ */
+union cvmx_gserx_slicex_rx_sdll_ctrl {
+	u64 u64;
+	struct cvmx_gserx_slicex_rx_sdll_ctrl_s {
+		u64 reserved_16_63 : 48;
+		u64 pcs_sds_oob_clk_ctrl : 2;
+		u64 reserved_7_13 : 7;
+		u64 pcs_sds_rx_sdll_tune : 3;
+		u64 pcs_sds_rx_sdll_swsel : 4;
+	} s;
+	struct cvmx_gserx_slicex_rx_sdll_ctrl_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 pcs_sds_oob_clk_ctrl : 2;
+		u64 reserved_13_7 : 7;
+		u64 pcs_sds_rx_sdll_tune : 3;
+		u64 pcs_sds_rx_sdll_swsel : 4;
+	} cn73xx;
+	struct cvmx_gserx_slicex_rx_sdll_ctrl_cn73xx cn78xx;
+	struct cvmx_gserx_slicex_rx_sdll_ctrl_s cn78xxp1;
+	struct cvmx_gserx_slicex_rx_sdll_ctrl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_gserx_slicex_rx_sdll_ctrl cvmx_gserx_slicex_rx_sdll_ctrl_t;
+
+#endif
-- 
2.29.2


* [PATCH v1 13/50] mips: octeon: Add cvmx-ipd-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (11 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 12/50] mips: octeon: Add cvmx-gserx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 14/50] mips: octeon: Add cvmx-l2c-defs.h " Stefan Roese
                   ` (39 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-ipd-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.
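
For reviewers unfamiliar with the cvmx CSR idiom, the unions in this
header are used in a simple read-modify-write pattern. A minimal sketch,
assuming the usual mach-octeon csr_rd()/csr_wr() accessors are available:

	cvmx_ipd_ctl_status_t ctl;

	/* Read the raw 64-bit CSR and overlay the bitfield view */
	ctl.u64 = csr_rd(CVMX_IPD_CTL_STATUS);
	/* Modify fields through the common (.s) or per-chip view */
	ctl.s.ipd_en = 1;
	/* Write the whole word back */
	csr_wr(CVMX_IPD_CTL_STATUS, ctl.u64);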

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-ipd-defs.h  | 1925 +++++++++++++++++
 1 file changed, 1925 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd-defs.h
new file mode 100644
index 0000000000..ad860fc7db
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd-defs.h
@@ -0,0 +1,1925 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon ipd.
+ */
+
+#ifndef __CVMX_IPD_DEFS_H__
+#define __CVMX_IPD_DEFS_H__
+
+#define CVMX_IPD_1ST_MBUFF_SKIP		    (0x00014F0000000000ull)
+#define CVMX_IPD_1st_NEXT_PTR_BACK	    (0x00014F0000000150ull)
+#define CVMX_IPD_2nd_NEXT_PTR_BACK	    (0x00014F0000000158ull)
+#define CVMX_IPD_BIST_STATUS		    (0x00014F00000007F8ull)
+#define CVMX_IPD_BPIDX_MBUF_TH(offset)	    (0x00014F0000002000ull + ((offset) & 63) * 8)
+#define CVMX_IPD_BPID_BP_COUNTERX(offset)   (0x00014F0000003000ull + ((offset) & 63) * 8)
+#define CVMX_IPD_BP_PRT_RED_END		    (0x00014F0000000328ull)
+#define CVMX_IPD_CLK_COUNT		    (0x00014F0000000338ull)
+#define CVMX_IPD_CREDITS		    (0x00014F0000004410ull)
+#define CVMX_IPD_CTL_STATUS		    (0x00014F0000000018ull)
+#define CVMX_IPD_ECC_CTL		    (0x00014F0000004408ull)
+#define CVMX_IPD_FREE_PTR_FIFO_CTL	    (0x00014F0000000780ull)
+#define CVMX_IPD_FREE_PTR_VALUE		    (0x00014F0000000788ull)
+#define CVMX_IPD_HOLD_PTR_FIFO_CTL	    (0x00014F0000000790ull)
+#define CVMX_IPD_INT_ENB		    (0x00014F0000000160ull)
+#define CVMX_IPD_INT_SUM		    (0x00014F0000000168ull)
+#define CVMX_IPD_NEXT_PKT_PTR		    (0x00014F00000007A0ull)
+#define CVMX_IPD_NEXT_WQE_PTR		    (0x00014F00000007A8ull)
+#define CVMX_IPD_NOT_1ST_MBUFF_SKIP	    (0x00014F0000000008ull)
+#define CVMX_IPD_ON_BP_DROP_PKTX(offset)    (0x00014F0000004100ull)
+#define CVMX_IPD_PACKET_MBUFF_SIZE	    (0x00014F0000000010ull)
+#define CVMX_IPD_PKT_ERR		    (0x00014F00000003F0ull)
+#define CVMX_IPD_PKT_PTR_VALID		    (0x00014F0000000358ull)
+#define CVMX_IPD_PORTX_BP_PAGE_CNT(offset)  (0x00014F0000000028ull + ((offset) & 63) * 8)
+#define CVMX_IPD_PORTX_BP_PAGE_CNT2(offset) (0x00014F0000000368ull + ((offset) & 63) * 8 - 8 * 36)
+#define CVMX_IPD_PORTX_BP_PAGE_CNT3(offset) (0x00014F00000003D0ull + ((offset) & 63) * 8 - 8 * 40)
+#define CVMX_IPD_PORT_BP_COUNTERS2_PAIRX(offset)                                                   \
+	(0x00014F0000000388ull + ((offset) & 63) * 8 - 8 * 36)
+#define CVMX_IPD_PORT_BP_COUNTERS3_PAIRX(offset)                                                   \
+	(0x00014F00000003B0ull + ((offset) & 63) * 8 - 8 * 40)
+#define CVMX_IPD_PORT_BP_COUNTERS4_PAIRX(offset)                                                   \
+	(0x00014F0000000410ull + ((offset) & 63) * 8 - 8 * 44)
+#define CVMX_IPD_PORT_BP_COUNTERS_PAIRX(offset) (0x00014F00000001B8ull + ((offset) & 63) * 8)
+#define CVMX_IPD_PORT_PTR_FIFO_CTL		(0x00014F0000000798ull)
+#define CVMX_IPD_PORT_QOS_INTX(offset)		(0x00014F0000000808ull + ((offset) & 7) * 8)
+#define CVMX_IPD_PORT_QOS_INT_ENBX(offset)	(0x00014F0000000848ull + ((offset) & 7) * 8)
+#define CVMX_IPD_PORT_QOS_X_CNT(offset)		(0x00014F0000000888ull + ((offset) & 511) * 8)
+#define CVMX_IPD_PORT_SOPX(offset)		(0x00014F0000004400ull)
+#define CVMX_IPD_PRC_HOLD_PTR_FIFO_CTL		(0x00014F0000000348ull)
+#define CVMX_IPD_PRC_PORT_PTR_FIFO_CTL		(0x00014F0000000350ull)
+#define CVMX_IPD_PTR_COUNT			(0x00014F0000000320ull)
+#define CVMX_IPD_PWP_PTR_FIFO_CTL		(0x00014F0000000340ull)
+#define CVMX_IPD_QOS0_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(0)
+#define CVMX_IPD_QOS1_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(1)
+#define CVMX_IPD_QOS2_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(2)
+#define CVMX_IPD_QOS3_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(3)
+#define CVMX_IPD_QOS4_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(4)
+#define CVMX_IPD_QOS5_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(5)
+#define CVMX_IPD_QOS6_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(6)
+#define CVMX_IPD_QOS7_RED_MARKS			CVMX_IPD_QOSX_RED_MARKS(7)
+#define CVMX_IPD_QOSX_RED_MARKS(offset)		(0x00014F0000000178ull + ((offset) & 7) * 8)
+#define CVMX_IPD_QUE0_FREE_PAGE_CNT		(0x00014F0000000330ull)
+#define CVMX_IPD_RED_BPID_ENABLEX(offset)	(0x00014F0000004200ull)
+#define CVMX_IPD_RED_DELAY			(0x00014F0000004300ull)
+#define CVMX_IPD_RED_PORT_ENABLE		(0x00014F00000002D8ull)
+#define CVMX_IPD_RED_PORT_ENABLE2		(0x00014F00000003A8ull)
+#define CVMX_IPD_RED_QUE0_PARAM			CVMX_IPD_RED_QUEX_PARAM(0)
+#define CVMX_IPD_RED_QUE1_PARAM			CVMX_IPD_RED_QUEX_PARAM(1)
+#define CVMX_IPD_RED_QUE2_PARAM			CVMX_IPD_RED_QUEX_PARAM(2)
+#define CVMX_IPD_RED_QUE3_PARAM			CVMX_IPD_RED_QUEX_PARAM(3)
+#define CVMX_IPD_RED_QUE4_PARAM			CVMX_IPD_RED_QUEX_PARAM(4)
+#define CVMX_IPD_RED_QUE5_PARAM			CVMX_IPD_RED_QUEX_PARAM(5)
+#define CVMX_IPD_RED_QUE6_PARAM			CVMX_IPD_RED_QUEX_PARAM(6)
+#define CVMX_IPD_RED_QUE7_PARAM			CVMX_IPD_RED_QUEX_PARAM(7)
+#define CVMX_IPD_RED_QUEX_PARAM(offset)		(0x00014F00000002E0ull + ((offset) & 7) * 8)
+#define CVMX_IPD_REQ_WGT			(0x00014F0000004418ull)
+#define CVMX_IPD_SUB_PORT_BP_PAGE_CNT		(0x00014F0000000148ull)
+#define CVMX_IPD_SUB_PORT_FCS			(0x00014F0000000170ull)
+#define CVMX_IPD_SUB_PORT_QOS_CNT		(0x00014F0000000800ull)
+#define CVMX_IPD_WQE_FPA_QUEUE			(0x00014F0000000020ull)
+#define CVMX_IPD_WQE_PTR_VALID			(0x00014F0000000360ull)
+
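+/*
+ * Example (illustrative): the indexed macros above compute a CSR address
+ * as a base plus a stride of 8 bytes per entry, e.g.
+ *	CVMX_IPD_QOSX_RED_MARKS(3) == 0x00014F0000000178ull + 3 * 8
+ *	                           == 0x00014F0000000190ull
+ */
+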
+/**
+ * cvmx_ipd_1st_mbuff_skip
+ *
+ * The number of words that the IPD will skip when writing the first MBUFF.
+ *
+ */
+union cvmx_ipd_1st_mbuff_skip {
+	u64 u64;
+	struct cvmx_ipd_1st_mbuff_skip_s {
+		u64 reserved_6_63 : 58;
+		u64 skip_sz : 6;
+	} s;
+	struct cvmx_ipd_1st_mbuff_skip_s cn30xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn31xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn38xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn38xxp2;
+	struct cvmx_ipd_1st_mbuff_skip_s cn50xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn52xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn52xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cn56xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn56xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cn58xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn58xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cn61xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn63xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn63xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cn66xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn68xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn68xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cn70xx;
+	struct cvmx_ipd_1st_mbuff_skip_s cn70xxp1;
+	struct cvmx_ipd_1st_mbuff_skip_s cnf71xx;
+};
+
+typedef union cvmx_ipd_1st_mbuff_skip cvmx_ipd_1st_mbuff_skip_t;
+
+/**
+ * cvmx_ipd_1st_next_ptr_back
+ *
+ * IPD_1st_NEXT_PTR_BACK = IPD First Next Pointer Back Values
+ * Contains the Back Field for use in creating the Next Pointer Header for the First MBUF
+ */
+union cvmx_ipd_1st_next_ptr_back {
+	u64 u64;
+	struct cvmx_ipd_1st_next_ptr_back_s {
+		u64 reserved_4_63 : 60;
+		u64 back : 4;
+	} s;
+	struct cvmx_ipd_1st_next_ptr_back_s cn30xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn31xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn38xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn38xxp2;
+	struct cvmx_ipd_1st_next_ptr_back_s cn50xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn52xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn52xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cn56xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn56xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cn58xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn58xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cn61xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn63xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn63xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cn66xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn68xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn68xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cn70xx;
+	struct cvmx_ipd_1st_next_ptr_back_s cn70xxp1;
+	struct cvmx_ipd_1st_next_ptr_back_s cnf71xx;
+};
+
+typedef union cvmx_ipd_1st_next_ptr_back cvmx_ipd_1st_next_ptr_back_t;
+
+/**
+ * cvmx_ipd_2nd_next_ptr_back
+ *
+ * Contains the Back Field for use in creating the Next Pointer Header for the First MBUF
+ *
+ */
+union cvmx_ipd_2nd_next_ptr_back {
+	u64 u64;
+	struct cvmx_ipd_2nd_next_ptr_back_s {
+		u64 reserved_4_63 : 60;
+		u64 back : 4;
+	} s;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn30xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn31xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn38xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn38xxp2;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn50xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn52xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn52xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn56xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn56xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn58xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn58xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn61xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn63xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn63xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn66xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn68xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn68xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn70xx;
+	struct cvmx_ipd_2nd_next_ptr_back_s cn70xxp1;
+	struct cvmx_ipd_2nd_next_ptr_back_s cnf71xx;
+};
+
+typedef union cvmx_ipd_2nd_next_ptr_back cvmx_ipd_2nd_next_ptr_back_t;
+
+/**
+ * cvmx_ipd_bist_status
+ *
+ * BIST Status for IPD's Memories.
+ *
+ */
+union cvmx_ipd_bist_status {
+	u64 u64;
+	struct cvmx_ipd_bist_status_s {
+		u64 reserved_23_63 : 41;
+		u64 iiwo1 : 1;
+		u64 iiwo0 : 1;
+		u64 iio1 : 1;
+		u64 iio0 : 1;
+		u64 pbm4 : 1;
+		u64 csr_mem : 1;
+		u64 csr_ncmd : 1;
+		u64 pwq_wqed : 1;
+		u64 pwq_wp1 : 1;
+		u64 pwq_pow : 1;
+		u64 ipq_pbe1 : 1;
+		u64 ipq_pbe0 : 1;
+		u64 pbm3 : 1;
+		u64 pbm2 : 1;
+		u64 pbm1 : 1;
+		u64 pbm0 : 1;
+		u64 pbm_word : 1;
+		u64 pwq1 : 1;
+		u64 pwq0 : 1;
+		u64 prc_off : 1;
+		u64 ipd_old : 1;
+		u64 ipd_new : 1;
+		u64 pwp : 1;
+	} s;
+	struct cvmx_ipd_bist_status_cn30xx {
+		u64 reserved_16_63 : 48;
+		u64 pwq_wqed : 1;
+		u64 pwq_wp1 : 1;
+		u64 pwq_pow : 1;
+		u64 ipq_pbe1 : 1;
+		u64 ipq_pbe0 : 1;
+		u64 pbm3 : 1;
+		u64 pbm2 : 1;
+		u64 pbm1 : 1;
+		u64 pbm0 : 1;
+		u64 pbm_word : 1;
+		u64 pwq1 : 1;
+		u64 pwq0 : 1;
+		u64 prc_off : 1;
+		u64 ipd_old : 1;
+		u64 ipd_new : 1;
+		u64 pwp : 1;
+	} cn30xx;
+	struct cvmx_ipd_bist_status_cn30xx cn31xx;
+	struct cvmx_ipd_bist_status_cn30xx cn38xx;
+	struct cvmx_ipd_bist_status_cn30xx cn38xxp2;
+	struct cvmx_ipd_bist_status_cn30xx cn50xx;
+	struct cvmx_ipd_bist_status_cn52xx {
+		u64 reserved_18_63 : 46;
+		u64 csr_mem : 1;
+		u64 csr_ncmd : 1;
+		u64 pwq_wqed : 1;
+		u64 pwq_wp1 : 1;
+		u64 pwq_pow : 1;
+		u64 ipq_pbe1 : 1;
+		u64 ipq_pbe0 : 1;
+		u64 pbm3 : 1;
+		u64 pbm2 : 1;
+		u64 pbm1 : 1;
+		u64 pbm0 : 1;
+		u64 pbm_word : 1;
+		u64 pwq1 : 1;
+		u64 pwq0 : 1;
+		u64 prc_off : 1;
+		u64 ipd_old : 1;
+		u64 ipd_new : 1;
+		u64 pwp : 1;
+	} cn52xx;
+	struct cvmx_ipd_bist_status_cn52xx cn52xxp1;
+	struct cvmx_ipd_bist_status_cn52xx cn56xx;
+	struct cvmx_ipd_bist_status_cn52xx cn56xxp1;
+	struct cvmx_ipd_bist_status_cn30xx cn58xx;
+	struct cvmx_ipd_bist_status_cn30xx cn58xxp1;
+	struct cvmx_ipd_bist_status_cn52xx cn61xx;
+	struct cvmx_ipd_bist_status_cn52xx cn63xx;
+	struct cvmx_ipd_bist_status_cn52xx cn63xxp1;
+	struct cvmx_ipd_bist_status_cn52xx cn66xx;
+	struct cvmx_ipd_bist_status_s cn68xx;
+	struct cvmx_ipd_bist_status_s cn68xxp1;
+	struct cvmx_ipd_bist_status_cn52xx cn70xx;
+	struct cvmx_ipd_bist_status_cn52xx cn70xxp1;
+	struct cvmx_ipd_bist_status_cn52xx cnf71xx;
+};
+
+typedef union cvmx_ipd_bist_status cvmx_ipd_bist_status_t;
+
+/**
+ * cvmx_ipd_bp_prt_red_end
+ *
+ * When IPD applies backpressure to a PORT and the corresponding bit in this register is set,
+ * the RED Unit will drop packets for that port.
+ */
+union cvmx_ipd_bp_prt_red_end {
+	u64 u64;
+	struct cvmx_ipd_bp_prt_red_end_s {
+		u64 reserved_48_63 : 16;
+		u64 prt_enb : 48;
+	} s;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx {
+		u64 reserved_36_63 : 28;
+		u64 prt_enb : 36;
+	} cn30xx;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn31xx;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn38xx;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn38xxp2;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn50xx;
+	struct cvmx_ipd_bp_prt_red_end_cn52xx {
+		u64 reserved_40_63 : 24;
+		u64 prt_enb : 40;
+	} cn52xx;
+	struct cvmx_ipd_bp_prt_red_end_cn52xx cn52xxp1;
+	struct cvmx_ipd_bp_prt_red_end_cn52xx cn56xx;
+	struct cvmx_ipd_bp_prt_red_end_cn52xx cn56xxp1;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn58xx;
+	struct cvmx_ipd_bp_prt_red_end_cn30xx cn58xxp1;
+	struct cvmx_ipd_bp_prt_red_end_s cn61xx;
+	struct cvmx_ipd_bp_prt_red_end_cn63xx {
+		u64 reserved_44_63 : 20;
+		u64 prt_enb : 44;
+	} cn63xx;
+	struct cvmx_ipd_bp_prt_red_end_cn63xx cn63xxp1;
+	struct cvmx_ipd_bp_prt_red_end_s cn66xx;
+	struct cvmx_ipd_bp_prt_red_end_s cn70xx;
+	struct cvmx_ipd_bp_prt_red_end_s cn70xxp1;
+	struct cvmx_ipd_bp_prt_red_end_s cnf71xx;
+};
+
+typedef union cvmx_ipd_bp_prt_red_end cvmx_ipd_bp_prt_red_end_t;
+
+/**
+ * cvmx_ipd_bpid#_mbuf_th
+ *
+ * 0x2000-0x2FFF
+ *
+ *                  IPD_BPIDX_MBUF_TH = IPD BPID MBUFF Threshold
+ *
+ * The number of MBUFFs in use by the BPID that, when exceeded, causes backpressure to be applied to the BPID.
+ */
+union cvmx_ipd_bpidx_mbuf_th {
+	u64 u64;
+	struct cvmx_ipd_bpidx_mbuf_th_s {
+		u64 reserved_18_63 : 46;
+		u64 bp_enb : 1;
+		u64 page_cnt : 17;
+	} s;
+	struct cvmx_ipd_bpidx_mbuf_th_s cn68xx;
+	struct cvmx_ipd_bpidx_mbuf_th_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_bpidx_mbuf_th cvmx_ipd_bpidx_mbuf_th_t;
+
+/**
+ * cvmx_ipd_bpid_bp_counter#
+ *
+ * RESERVE SPACE UP TO 0x2FFF
+ *
+ * 0x3000-0x3FFF
+ *
+ * IPD_BPID_BP_COUNTERX = MBUF BPID Counters used to generate Back Pressure Per BPID.
+ */
+union cvmx_ipd_bpid_bp_counterx {
+	u64 u64;
+	struct cvmx_ipd_bpid_bp_counterx_s {
+		u64 reserved_25_63 : 39;
+		u64 cnt_val : 25;
+	} s;
+	struct cvmx_ipd_bpid_bp_counterx_s cn68xx;
+	struct cvmx_ipd_bpid_bp_counterx_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_bpid_bp_counterx cvmx_ipd_bpid_bp_counterx_t;
+
+/**
+ * cvmx_ipd_clk_count
+ *
+ * Counts the number of core clock periods since the de-assertion of reset.
+ *
+ */
+union cvmx_ipd_clk_count {
+	u64 u64;
+	struct cvmx_ipd_clk_count_s {
+		u64 clk_cnt : 64;
+	} s;
+	struct cvmx_ipd_clk_count_s cn30xx;
+	struct cvmx_ipd_clk_count_s cn31xx;
+	struct cvmx_ipd_clk_count_s cn38xx;
+	struct cvmx_ipd_clk_count_s cn38xxp2;
+	struct cvmx_ipd_clk_count_s cn50xx;
+	struct cvmx_ipd_clk_count_s cn52xx;
+	struct cvmx_ipd_clk_count_s cn52xxp1;
+	struct cvmx_ipd_clk_count_s cn56xx;
+	struct cvmx_ipd_clk_count_s cn56xxp1;
+	struct cvmx_ipd_clk_count_s cn58xx;
+	struct cvmx_ipd_clk_count_s cn58xxp1;
+	struct cvmx_ipd_clk_count_s cn61xx;
+	struct cvmx_ipd_clk_count_s cn63xx;
+	struct cvmx_ipd_clk_count_s cn63xxp1;
+	struct cvmx_ipd_clk_count_s cn66xx;
+	struct cvmx_ipd_clk_count_s cn68xx;
+	struct cvmx_ipd_clk_count_s cn68xxp1;
+	struct cvmx_ipd_clk_count_s cn70xx;
+	struct cvmx_ipd_clk_count_s cn70xxp1;
+	struct cvmx_ipd_clk_count_s cnf71xx;
+};
+
+typedef union cvmx_ipd_clk_count cvmx_ipd_clk_count_t;
+
+/**
+ * cvmx_ipd_credits
+ *
+ * IPD_CREDITS = IPD Credits
+ *
+ * The credits allowed for IPD.
+ */
+union cvmx_ipd_credits {
+	u64 u64;
+	struct cvmx_ipd_credits_s {
+		u64 reserved_16_63 : 48;
+		u64 iob_wrc : 8;
+		u64 iob_wr : 8;
+	} s;
+	struct cvmx_ipd_credits_s cn68xx;
+	struct cvmx_ipd_credits_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_credits cvmx_ipd_credits_t;
+
+/**
+ * cvmx_ipd_ctl_status
+ *
+ * IPD control and status register.
+ *
+ */
+union cvmx_ipd_ctl_status {
+	u64 u64;
+	struct cvmx_ipd_ctl_status_s {
+		u64 reserved_18_63 : 46;
+		u64 use_sop : 1;
+		u64 rst_done : 1;
+		u64 clken : 1;
+		u64 no_wptr : 1;
+		u64 pq_apkt : 1;
+		u64 pq_nabuf : 1;
+		u64 ipd_full : 1;
+		u64 pkt_off : 1;
+		u64 len_m8 : 1;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} s;
+	struct cvmx_ipd_ctl_status_cn30xx {
+		u64 reserved_10_63 : 54;
+		u64 len_m8 : 1;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} cn30xx;
+	struct cvmx_ipd_ctl_status_cn30xx cn31xx;
+	struct cvmx_ipd_ctl_status_cn30xx cn38xx;
+	struct cvmx_ipd_ctl_status_cn38xxp2 {
+		u64 reserved_9_63 : 55;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} cn38xxp2;
+	struct cvmx_ipd_ctl_status_cn50xx {
+		u64 reserved_15_63 : 49;
+		u64 no_wptr : 1;
+		u64 pq_apkt : 1;
+		u64 pq_nabuf : 1;
+		u64 ipd_full : 1;
+		u64 pkt_off : 1;
+		u64 len_m8 : 1;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} cn50xx;
+	struct cvmx_ipd_ctl_status_cn50xx cn52xx;
+	struct cvmx_ipd_ctl_status_cn50xx cn52xxp1;
+	struct cvmx_ipd_ctl_status_cn50xx cn56xx;
+	struct cvmx_ipd_ctl_status_cn50xx cn56xxp1;
+	struct cvmx_ipd_ctl_status_cn58xx {
+		u64 reserved_12_63 : 52;
+		u64 ipd_full : 1;
+		u64 pkt_off : 1;
+		u64 len_m8 : 1;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} cn58xx;
+	struct cvmx_ipd_ctl_status_cn58xx cn58xxp1;
+	struct cvmx_ipd_ctl_status_s cn61xx;
+	struct cvmx_ipd_ctl_status_s cn63xx;
+	struct cvmx_ipd_ctl_status_cn63xxp1 {
+		u64 reserved_16_63 : 48;
+		u64 clken : 1;
+		u64 no_wptr : 1;
+		u64 pq_apkt : 1;
+		u64 pq_nabuf : 1;
+		u64 ipd_full : 1;
+		u64 pkt_off : 1;
+		u64 len_m8 : 1;
+		u64 reset : 1;
+		u64 addpkt : 1;
+		u64 naddbuf : 1;
+		u64 pkt_lend : 1;
+		u64 wqe_lend : 1;
+		u64 pbp_en : 1;
+		cvmx_ipd_mode_t opc_mode : 2;
+		u64 ipd_en : 1;
+	} cn63xxp1;
+	struct cvmx_ipd_ctl_status_s cn66xx;
+	struct cvmx_ipd_ctl_status_s cn68xx;
+	struct cvmx_ipd_ctl_status_s cn68xxp1;
+	struct cvmx_ipd_ctl_status_s cn70xx;
+	struct cvmx_ipd_ctl_status_s cn70xxp1;
+	struct cvmx_ipd_ctl_status_s cnf71xx;
+};
+
+typedef union cvmx_ipd_ctl_status cvmx_ipd_ctl_status_t;
+
+/**
+ * cvmx_ipd_ecc_ctl
+ *
+ * IPD_ECC_CTL = IPD ECC Control
+ *
+ * Allows inserting ECC errors for testing.
+ */
+union cvmx_ipd_ecc_ctl {
+	u64 u64;
+	struct cvmx_ipd_ecc_ctl_s {
+		u64 reserved_8_63 : 56;
+		u64 pm3_syn : 2;
+		u64 pm2_syn : 2;
+		u64 pm1_syn : 2;
+		u64 pm0_syn : 2;
+	} s;
+	struct cvmx_ipd_ecc_ctl_s cn68xx;
+	struct cvmx_ipd_ecc_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_ecc_ctl cvmx_ipd_ecc_ctl_t;
+
+/**
+ * cvmx_ipd_free_ptr_fifo_ctl
+ *
+ * IPD_FREE_PTR_FIFO_CTL = IPD's FREE Pointer FIFO Control
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's FREE Fifo.
+ * See also the IPD_FREE_PTR_VALUE
+ */
+union cvmx_ipd_free_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_free_ptr_fifo_ctl_s {
+		u64 reserved_32_63 : 32;
+		u64 max_cnts : 7;
+		u64 wraddr : 8;
+		u64 praddr : 8;
+		u64 cena : 1;
+		u64 raddr : 8;
+	} s;
+	struct cvmx_ipd_free_ptr_fifo_ctl_s cn68xx;
+	struct cvmx_ipd_free_ptr_fifo_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_free_ptr_fifo_ctl cvmx_ipd_free_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_free_ptr_value
+ *
+ * IPD_FREE_PTR_VALUE = IPD's FREE Pointer Value
+ *
+ * The value of the pointer selected through the IPD_FREE_PTR_FIFO_CTL
+ */
+union cvmx_ipd_free_ptr_value {
+	u64 u64;
+	struct cvmx_ipd_free_ptr_value_s {
+		u64 reserved_33_63 : 31;
+		u64 ptr : 33;
+	} s;
+	struct cvmx_ipd_free_ptr_value_s cn68xx;
+	struct cvmx_ipd_free_ptr_value_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_free_ptr_value cvmx_ipd_free_ptr_value_t;
+
+/**
+ * cvmx_ipd_hold_ptr_fifo_ctl
+ *
+ * IPD_HOLD_PTR_FIFO_CTL = IPD's Holding Pointer FIFO Control
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's Holding Fifo.
+ */
+union cvmx_ipd_hold_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_hold_ptr_fifo_ctl_s {
+		u64 reserved_43_63 : 21;
+		u64 ptr : 33;
+		u64 max_pkt : 3;
+		u64 praddr : 3;
+		u64 cena : 1;
+		u64 raddr : 3;
+	} s;
+	struct cvmx_ipd_hold_ptr_fifo_ctl_s cn68xx;
+	struct cvmx_ipd_hold_ptr_fifo_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_hold_ptr_fifo_ctl cvmx_ipd_hold_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_int_enb
+ *
+ * IPD_INTERRUPT_ENB = IPD Interrupt Enable Register
+ * Used to enable the various interrupting conditions of IPD
+ */
+union cvmx_ipd_int_enb {
+	u64 u64;
+	struct cvmx_ipd_int_enb_s {
+		u64 reserved_23_63 : 41;
+		u64 pw3_dbe : 1;
+		u64 pw3_sbe : 1;
+		u64 pw2_dbe : 1;
+		u64 pw2_sbe : 1;
+		u64 pw1_dbe : 1;
+		u64 pw1_sbe : 1;
+		u64 pw0_dbe : 1;
+		u64 pw0_sbe : 1;
+		u64 dat : 1;
+		u64 eop : 1;
+		u64 sop : 1;
+		u64 pq_sub : 1;
+		u64 pq_add : 1;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} s;
+	struct cvmx_ipd_int_enb_cn30xx {
+		u64 reserved_5_63 : 59;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn30xx;
+	struct cvmx_ipd_int_enb_cn30xx cn31xx;
+	struct cvmx_ipd_int_enb_cn38xx {
+		u64 reserved_10_63 : 54;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn38xx;
+	struct cvmx_ipd_int_enb_cn30xx cn38xxp2;
+	struct cvmx_ipd_int_enb_cn38xx cn50xx;
+	struct cvmx_ipd_int_enb_cn52xx {
+		u64 reserved_12_63 : 52;
+		u64 pq_sub : 1;
+		u64 pq_add : 1;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn52xx;
+	struct cvmx_ipd_int_enb_cn52xx cn52xxp1;
+	struct cvmx_ipd_int_enb_cn52xx cn56xx;
+	struct cvmx_ipd_int_enb_cn52xx cn56xxp1;
+	struct cvmx_ipd_int_enb_cn38xx cn58xx;
+	struct cvmx_ipd_int_enb_cn38xx cn58xxp1;
+	struct cvmx_ipd_int_enb_cn52xx cn61xx;
+	struct cvmx_ipd_int_enb_cn52xx cn63xx;
+	struct cvmx_ipd_int_enb_cn52xx cn63xxp1;
+	struct cvmx_ipd_int_enb_cn52xx cn66xx;
+	struct cvmx_ipd_int_enb_s cn68xx;
+	struct cvmx_ipd_int_enb_s cn68xxp1;
+	struct cvmx_ipd_int_enb_cn52xx cn70xx;
+	struct cvmx_ipd_int_enb_cn52xx cn70xxp1;
+	struct cvmx_ipd_int_enb_cn52xx cnf71xx;
+};
+
+typedef union cvmx_ipd_int_enb cvmx_ipd_int_enb_t;
+
+/**
+ * cvmx_ipd_int_sum
+ *
+ * IPD_INTERRUPT_SUM = IPD Interrupt Summary Register
+ * Set when an interrupt condition occurs; write '1' to clear.
+ */
+union cvmx_ipd_int_sum {
+	u64 u64;
+	struct cvmx_ipd_int_sum_s {
+		u64 reserved_23_63 : 41;
+		u64 pw3_dbe : 1;
+		u64 pw3_sbe : 1;
+		u64 pw2_dbe : 1;
+		u64 pw2_sbe : 1;
+		u64 pw1_dbe : 1;
+		u64 pw1_sbe : 1;
+		u64 pw0_dbe : 1;
+		u64 pw0_sbe : 1;
+		u64 dat : 1;
+		u64 eop : 1;
+		u64 sop : 1;
+		u64 pq_sub : 1;
+		u64 pq_add : 1;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} s;
+	struct cvmx_ipd_int_sum_cn30xx {
+		u64 reserved_5_63 : 59;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn30xx;
+	struct cvmx_ipd_int_sum_cn30xx cn31xx;
+	struct cvmx_ipd_int_sum_cn38xx {
+		u64 reserved_10_63 : 54;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn38xx;
+	struct cvmx_ipd_int_sum_cn30xx cn38xxp2;
+	struct cvmx_ipd_int_sum_cn38xx cn50xx;
+	struct cvmx_ipd_int_sum_cn52xx {
+		u64 reserved_12_63 : 52;
+		u64 pq_sub : 1;
+		u64 pq_add : 1;
+		u64 bc_ovr : 1;
+		u64 d_coll : 1;
+		u64 c_coll : 1;
+		u64 cc_ovr : 1;
+		u64 dc_ovr : 1;
+		u64 bp_sub : 1;
+		u64 prc_par3 : 1;
+		u64 prc_par2 : 1;
+		u64 prc_par1 : 1;
+		u64 prc_par0 : 1;
+	} cn52xx;
+	struct cvmx_ipd_int_sum_cn52xx cn52xxp1;
+	struct cvmx_ipd_int_sum_cn52xx cn56xx;
+	struct cvmx_ipd_int_sum_cn52xx cn56xxp1;
+	struct cvmx_ipd_int_sum_cn38xx cn58xx;
+	struct cvmx_ipd_int_sum_cn38xx cn58xxp1;
+	struct cvmx_ipd_int_sum_cn52xx cn61xx;
+	struct cvmx_ipd_int_sum_cn52xx cn63xx;
+	struct cvmx_ipd_int_sum_cn52xx cn63xxp1;
+	struct cvmx_ipd_int_sum_cn52xx cn66xx;
+	struct cvmx_ipd_int_sum_s cn68xx;
+	struct cvmx_ipd_int_sum_s cn68xxp1;
+	struct cvmx_ipd_int_sum_cn52xx cn70xx;
+	struct cvmx_ipd_int_sum_cn52xx cn70xxp1;
+	struct cvmx_ipd_int_sum_cn52xx cnf71xx;
+};
+
+typedef union cvmx_ipd_int_sum cvmx_ipd_int_sum_t;
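+
+/*
+ * Usage sketch (illustrative; csr_rd()/csr_wr() are the assumed mach-octeon
+ * CSR accessors): INT_SUM bits are write-1-to-clear, so a handled condition
+ * is acknowledged by writing back only the bit that was observed:
+ *
+ *	cvmx_ipd_int_sum_t sum;
+ *
+ *	sum.u64 = csr_rd(CVMX_IPD_INT_SUM);
+ *	if (sum.s.bp_sub) {
+ *		cvmx_ipd_int_sum_t clr;
+ *
+ *		clr.u64 = 0;
+ *		clr.s.bp_sub = 1;	writing '1' clears only this bit
+ *		csr_wr(CVMX_IPD_INT_SUM, clr.u64);
+ *	}
+ */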
+
+/**
+ * cvmx_ipd_next_pkt_ptr
+ *
+ * IPD_NEXT_PKT_PTR = IPD's Next Packet Pointer
+ *
+ * The value of the packet pointer that has been fetched and is held in the valid register.
+ */
+union cvmx_ipd_next_pkt_ptr {
+	u64 u64;
+	struct cvmx_ipd_next_pkt_ptr_s {
+		u64 reserved_33_63 : 31;
+		u64 ptr : 33;
+	} s;
+	struct cvmx_ipd_next_pkt_ptr_s cn68xx;
+	struct cvmx_ipd_next_pkt_ptr_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_next_pkt_ptr cvmx_ipd_next_pkt_ptr_t;
+
+/**
+ * cvmx_ipd_next_wqe_ptr
+ *
+ * IPD_NEXT_WQE_PTR = IPD's NEXT_WQE Pointer
+ *
+ * The value of the WQE pointer that has been fetched and is held in the valid register.
+ */
+union cvmx_ipd_next_wqe_ptr {
+	u64 u64;
+	struct cvmx_ipd_next_wqe_ptr_s {
+		u64 reserved_33_63 : 31;
+		u64 ptr : 33;
+	} s;
+	struct cvmx_ipd_next_wqe_ptr_s cn68xx;
+	struct cvmx_ipd_next_wqe_ptr_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_next_wqe_ptr cvmx_ipd_next_wqe_ptr_t;
+
+/**
+ * cvmx_ipd_not_1st_mbuff_skip
+ *
+ * The number of words that the IPD will skip when writing any MBUFF that is not the first.
+ *
+ */
+union cvmx_ipd_not_1st_mbuff_skip {
+	u64 u64;
+	struct cvmx_ipd_not_1st_mbuff_skip_s {
+		u64 reserved_6_63 : 58;
+		u64 skip_sz : 6;
+	} s;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn30xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn31xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn38xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn38xxp2;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn50xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn52xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn52xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn56xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn56xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn58xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn58xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn61xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn63xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn63xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn66xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn68xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn68xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn70xx;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cn70xxp1;
+	struct cvmx_ipd_not_1st_mbuff_skip_s cnf71xx;
+};
+
+typedef union cvmx_ipd_not_1st_mbuff_skip cvmx_ipd_not_1st_mbuff_skip_t;
+
+/**
+ * cvmx_ipd_on_bp_drop_pkt#
+ *
+ * RESERVE SPACE UP TO 0x3FFF
+ *
+ *
+ * RESERVED FOR FORMER IPD_SUB_PKIND_FCS - MOVED TO PIP
+ *
+ * RESERVE 0x4008-0x40FF
+ *
+ *
+ *                  IPD_ON_BP_DROP_PKT = IPD On Backpressure Drop Packet
+ *
+ * When IPD applies backpressure to a BPID and the corresponding bit in this register is set,
+ * then previously received packets will be dropped when processed.
+ */
+union cvmx_ipd_on_bp_drop_pktx {
+	u64 u64;
+	struct cvmx_ipd_on_bp_drop_pktx_s {
+		u64 prt_enb : 64;
+	} s;
+	struct cvmx_ipd_on_bp_drop_pktx_s cn68xx;
+	struct cvmx_ipd_on_bp_drop_pktx_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_on_bp_drop_pktx cvmx_ipd_on_bp_drop_pktx_t;
+
+/**
+ * cvmx_ipd_packet_mbuff_size
+ *
+ * The number of words in a MBUFF used for packet data store.
+ *
+ */
+union cvmx_ipd_packet_mbuff_size {
+	u64 u64;
+	struct cvmx_ipd_packet_mbuff_size_s {
+		u64 reserved_12_63 : 52;
+		u64 mb_size : 12;
+	} s;
+	struct cvmx_ipd_packet_mbuff_size_s cn30xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn31xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn38xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn38xxp2;
+	struct cvmx_ipd_packet_mbuff_size_s cn50xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn52xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn52xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cn56xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn56xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cn58xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn58xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cn61xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn63xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn63xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cn66xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn68xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn68xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cn70xx;
+	struct cvmx_ipd_packet_mbuff_size_s cn70xxp1;
+	struct cvmx_ipd_packet_mbuff_size_s cnf71xx;
+};
+
+typedef union cvmx_ipd_packet_mbuff_size cvmx_ipd_packet_mbuff_size_t;
+
+/**
+ * cvmx_ipd_pkt_err
+ *
+ * IPD_PKT_ERR = IPD Packet Error Register
+ *
+ * Provides status about the failing packet receive error.
+ */
+union cvmx_ipd_pkt_err {
+	u64 u64;
+	struct cvmx_ipd_pkt_err_s {
+		u64 reserved_6_63 : 58;
+		u64 reasm : 6;
+	} s;
+	struct cvmx_ipd_pkt_err_s cn68xx;
+	struct cvmx_ipd_pkt_err_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_pkt_err cvmx_ipd_pkt_err_t;
+
+/**
+ * cvmx_ipd_pkt_ptr_valid
+ *
+ * The value of the packet pointer that has been fetched and is held in the valid register.
+ *
+ */
+union cvmx_ipd_pkt_ptr_valid {
+	u64 u64;
+	struct cvmx_ipd_pkt_ptr_valid_s {
+		u64 reserved_29_63 : 35;
+		u64 ptr : 29;
+	} s;
+	struct cvmx_ipd_pkt_ptr_valid_s cn30xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn31xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn38xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn50xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn52xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn52xxp1;
+	struct cvmx_ipd_pkt_ptr_valid_s cn56xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn56xxp1;
+	struct cvmx_ipd_pkt_ptr_valid_s cn58xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn58xxp1;
+	struct cvmx_ipd_pkt_ptr_valid_s cn61xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn63xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn63xxp1;
+	struct cvmx_ipd_pkt_ptr_valid_s cn66xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn70xx;
+	struct cvmx_ipd_pkt_ptr_valid_s cn70xxp1;
+	struct cvmx_ipd_pkt_ptr_valid_s cnf71xx;
+};
+
+typedef union cvmx_ipd_pkt_ptr_valid cvmx_ipd_pkt_ptr_valid_t;
+
+/**
+ * cvmx_ipd_port#_bp_page_cnt
+ *
+ * IPD_PORTX_BP_PAGE_CNT = IPD Port Backpressure Page Count
+ * The number of pages in use by the port that, when exceeded, causes
+ * backpressure to be applied to the port.
+ * See also IPD_PORTX_BP_PAGE_CNT2
+ * See also IPD_PORTX_BP_PAGE_CNT3
+ */
+union cvmx_ipd_portx_bp_page_cnt {
+	u64 u64;
+	struct cvmx_ipd_portx_bp_page_cnt_s {
+		u64 reserved_18_63 : 46;
+		u64 bp_enb : 1;
+		u64 page_cnt : 17;
+	} s;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn30xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn31xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn38xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn38xxp2;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn50xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn52xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn52xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn56xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn56xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn58xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn58xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn61xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn63xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn63xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn66xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn70xx;
+	struct cvmx_ipd_portx_bp_page_cnt_s cn70xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt_s cnf71xx;
+};
+
+typedef union cvmx_ipd_portx_bp_page_cnt cvmx_ipd_portx_bp_page_cnt_t;
+
+/**
+ * cvmx_ipd_port#_bp_page_cnt2
+ *
+ * IPD_PORTX_BP_PAGE_CNT2 = IPD Port Backpressure Page Count
+ * The number of pages in use by the port that, when exceeded, causes
+ * backpressure to be applied to the port.
+ * See also IPD_PORTX_BP_PAGE_CNT
+ * See also IPD_PORTX_BP_PAGE_CNT3
+ * 0x368-0x380
+ */
+union cvmx_ipd_portx_bp_page_cnt2 {
+	u64 u64;
+	struct cvmx_ipd_portx_bp_page_cnt2_s {
+		u64 reserved_18_63 : 46;
+		u64 bp_enb : 1;
+		u64 page_cnt : 17;
+	} s;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn52xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn52xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn56xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn56xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn61xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn63xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn63xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn66xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn70xx;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cn70xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt2_s cnf71xx;
+};
+
+typedef union cvmx_ipd_portx_bp_page_cnt2 cvmx_ipd_portx_bp_page_cnt2_t;
+
+/**
+ * cvmx_ipd_port#_bp_page_cnt3
+ *
+ * IPD_PORTX_BP_PAGE_CNT3 = IPD Port Backpressure Page Count
+ * The number of pages in use by the port that, when exceeded, causes
+ * backpressure to be applied to the port.
+ * See also IPD_PORTX_BP_PAGE_CNT
+ * See also IPD_PORTX_BP_PAGE_CNT2
+ * 0x3d0-0x408
+ */
+union cvmx_ipd_portx_bp_page_cnt3 {
+	u64 u64;
+	struct cvmx_ipd_portx_bp_page_cnt3_s {
+		u64 reserved_18_63 : 46;
+		u64 bp_enb : 1;
+		u64 page_cnt : 17;
+	} s;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn61xx;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn63xx;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn63xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn66xx;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn70xx;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cn70xxp1;
+	struct cvmx_ipd_portx_bp_page_cnt3_s cnf71xx;
+};
+
+typedef union cvmx_ipd_portx_bp_page_cnt3 cvmx_ipd_portx_bp_page_cnt3_t;
+
+/**
+ * cvmx_ipd_port_bp_counters2_pair#
+ *
+ * See also IPD_PORT_BP_COUNTERS_PAIRX
+ * See also IPD_PORT_BP_COUNTERS3_PAIRX
+ * 0x388-0x3a0
+ */
+union cvmx_ipd_port_bp_counters2_pairx {
+	u64 u64;
+	struct cvmx_ipd_port_bp_counters2_pairx_s {
+		u64 reserved_25_63 : 39;
+		u64 cnt_val : 25;
+	} s;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn52xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn52xxp1;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn56xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn56xxp1;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn61xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn63xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn63xxp1;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn66xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn70xx;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cn70xxp1;
+	struct cvmx_ipd_port_bp_counters2_pairx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_bp_counters2_pairx cvmx_ipd_port_bp_counters2_pairx_t;
+
+/**
+ * cvmx_ipd_port_bp_counters3_pair#
+ *
+ * See also IPD_PORT_BP_COUNTERS_PAIRX
+ * See also IPD_PORT_BP_COUNTERS2_PAIRX
+ * 0x3b0-0x3c8
+ */
+union cvmx_ipd_port_bp_counters3_pairx {
+	u64 u64;
+	struct cvmx_ipd_port_bp_counters3_pairx_s {
+		u64 reserved_25_63 : 39;
+		u64 cnt_val : 25;
+	} s;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn61xx;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn63xx;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn63xxp1;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn66xx;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn70xx;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cn70xxp1;
+	struct cvmx_ipd_port_bp_counters3_pairx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_bp_counters3_pairx cvmx_ipd_port_bp_counters3_pairx_t;
+
+/**
+ * cvmx_ipd_port_bp_counters4_pair#
+ *
+ * See also IPD_PORT_BP_COUNTERS_PAIRX
+ * See also IPD_PORT_BP_COUNTERS2_PAIRX
+ * 0x410-0x3c8
+ */
+union cvmx_ipd_port_bp_counters4_pairx {
+	u64 u64;
+	struct cvmx_ipd_port_bp_counters4_pairx_s {
+		u64 reserved_25_63 : 39;
+		u64 cnt_val : 25;
+	} s;
+	struct cvmx_ipd_port_bp_counters4_pairx_s cn61xx;
+	struct cvmx_ipd_port_bp_counters4_pairx_s cn66xx;
+	struct cvmx_ipd_port_bp_counters4_pairx_s cn70xx;
+	struct cvmx_ipd_port_bp_counters4_pairx_s cn70xxp1;
+	struct cvmx_ipd_port_bp_counters4_pairx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_bp_counters4_pairx cvmx_ipd_port_bp_counters4_pairx_t;
+
+/**
+ * cvmx_ipd_port_bp_counters_pair#
+ *
+ * See also IPD_PORT_BP_COUNTERS2_PAIRX
+ * See also IPD_PORT_BP_COUNTERS3_PAIRX
+ * 0x1b8-0x2d0
+ */
+union cvmx_ipd_port_bp_counters_pairx {
+	u64 u64;
+	struct cvmx_ipd_port_bp_counters_pairx_s {
+		u64 reserved_25_63 : 39;
+		u64 cnt_val : 25;
+	} s;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn30xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn31xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn38xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn38xxp2;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn50xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn52xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn52xxp1;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn56xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn56xxp1;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn58xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn58xxp1;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn61xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn63xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn63xxp1;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn66xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn70xx;
+	struct cvmx_ipd_port_bp_counters_pairx_s cn70xxp1;
+	struct cvmx_ipd_port_bp_counters_pairx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_bp_counters_pairx cvmx_ipd_port_bp_counters_pairx_t;
+
+/**
+ * cvmx_ipd_port_ptr_fifo_ctl
+ *
+ * IPD_PORT_PTR_FIFO_CTL = IPD's Reasm-Id Pointer FIFO Control
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's Reasm-Id Fifo.
+ */
+union cvmx_ipd_port_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_port_ptr_fifo_ctl_s {
+		u64 reserved_48_63 : 16;
+		u64 ptr : 33;
+		u64 max_pkt : 7;
+		u64 cena : 1;
+		u64 raddr : 7;
+	} s;
+	struct cvmx_ipd_port_ptr_fifo_ctl_s cn68xx;
+	struct cvmx_ipd_port_ptr_fifo_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_port_ptr_fifo_ctl cvmx_ipd_port_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_port_qos_#_cnt
+ *
+ * IPD_PORT_QOS_X_CNT = IPD PortX QOS-0 Count
+ * A counter per port/QOS pair. Counters are organized in sequence, where the
+ * first 8 counters (0-7) belong to Port-0 QOS 0-7 respectively, followed by
+ * Port-1 at (8-15), etc.
+ * Ports 0-3, 32-43
+ */
+union cvmx_ipd_port_qos_x_cnt {
+	u64 u64;
+	struct cvmx_ipd_port_qos_x_cnt_s {
+		u64 wmark : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_ipd_port_qos_x_cnt_s cn52xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn52xxp1;
+	struct cvmx_ipd_port_qos_x_cnt_s cn56xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn56xxp1;
+	struct cvmx_ipd_port_qos_x_cnt_s cn61xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn63xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn63xxp1;
+	struct cvmx_ipd_port_qos_x_cnt_s cn66xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn68xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn68xxp1;
+	struct cvmx_ipd_port_qos_x_cnt_s cn70xx;
+	struct cvmx_ipd_port_qos_x_cnt_s cn70xxp1;
+	struct cvmx_ipd_port_qos_x_cnt_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_qos_x_cnt cvmx_ipd_port_qos_x_cnt_t;
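+
+/*
+ * Example (illustrative): per the layout described above, the counter for a
+ * given port/QOS pair lives at index port * 8 + qos, so reading the counter
+ * for Port-2 QOS 5 (csr_rd() is the assumed CSR accessor) looks like:
+ *
+ *	cvmx_ipd_port_qos_x_cnt_t cnt;
+ *
+ *	cnt.u64 = csr_rd(CVMX_IPD_PORT_QOS_X_CNT(2 * 8 + 5));
+ *	cnt.s.cnt holds the count, cnt.s.wmark the watermark field
+ */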
+
+/**
+ * cvmx_ipd_port_qos_int#
+ *
+ * See the description for IPD_PORT_QOS_X_CNT
+ * 0=P0-7; 1=P8-15; 2=P16-23; 3=P24-31; 4=P32-39; 5=P40-47; 6=P48-55; 7=P56-63
+ * Only ports used are: P0-3, P16-19, P24, P32-39. Therefore only IPD_PORT_QOS_INT0 ([63:32] ==
+ * Reserved), IPD_PORT_QOS_INT2 ([63:32] == Reserved), IPD_PORT_QOS_INT3 ([63:8] == Reserved),
+ * IPD_PORT_QOS_INT4
+ */
+union cvmx_ipd_port_qos_intx {
+	u64 u64;
+	struct cvmx_ipd_port_qos_intx_s {
+		u64 intr : 64;
+	} s;
+	struct cvmx_ipd_port_qos_intx_s cn52xx;
+	struct cvmx_ipd_port_qos_intx_s cn52xxp1;
+	struct cvmx_ipd_port_qos_intx_s cn56xx;
+	struct cvmx_ipd_port_qos_intx_s cn56xxp1;
+	struct cvmx_ipd_port_qos_intx_s cn61xx;
+	struct cvmx_ipd_port_qos_intx_s cn63xx;
+	struct cvmx_ipd_port_qos_intx_s cn63xxp1;
+	struct cvmx_ipd_port_qos_intx_s cn66xx;
+	struct cvmx_ipd_port_qos_intx_s cn68xx;
+	struct cvmx_ipd_port_qos_intx_s cn68xxp1;
+	struct cvmx_ipd_port_qos_intx_s cn70xx;
+	struct cvmx_ipd_port_qos_intx_s cn70xxp1;
+	struct cvmx_ipd_port_qos_intx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_qos_intx cvmx_ipd_port_qos_intx_t;
+
+/**
+ * cvmx_ipd_port_qos_int_enb#
+ *
+ * "When the IPD_PORT_QOS_INTX[\#] is '1' and IPD_PORT_QOS_INT_ENBX[\#] is '1' a interrupt will be
+ * generated."
+ */
+union cvmx_ipd_port_qos_int_enbx {
+	u64 u64;
+	struct cvmx_ipd_port_qos_int_enbx_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_ipd_port_qos_int_enbx_s cn52xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn52xxp1;
+	struct cvmx_ipd_port_qos_int_enbx_s cn56xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn56xxp1;
+	struct cvmx_ipd_port_qos_int_enbx_s cn61xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn63xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn63xxp1;
+	struct cvmx_ipd_port_qos_int_enbx_s cn66xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn68xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn68xxp1;
+	struct cvmx_ipd_port_qos_int_enbx_s cn70xx;
+	struct cvmx_ipd_port_qos_int_enbx_s cn70xxp1;
+	struct cvmx_ipd_port_qos_int_enbx_s cnf71xx;
+};
+
+typedef union cvmx_ipd_port_qos_int_enbx cvmx_ipd_port_qos_int_enbx_t;
+
+/**
+ * cvmx_ipd_port_sop#
+ *
+ * IPD_PORT_SOP = IPD Reasm-Id SOP
+ *
+ * Set when an SOP is detected on a reasm-num; the reasm-num value selects which bit of this register is set.
+ */
+union cvmx_ipd_port_sopx {
+	u64 u64;
+	struct cvmx_ipd_port_sopx_s {
+		u64 sop : 64;
+	} s;
+	struct cvmx_ipd_port_sopx_s cn68xx;
+	struct cvmx_ipd_port_sopx_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_port_sopx cvmx_ipd_port_sopx_t;
+
+/**
+ * cvmx_ipd_prc_hold_ptr_fifo_ctl
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's PRC Holding Fifo.
+ *
+ */
+union cvmx_ipd_prc_hold_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s {
+		u64 reserved_39_63 : 25;
+		u64 max_pkt : 3;
+		u64 praddr : 3;
+		u64 ptr : 29;
+		u64 cena : 1;
+		u64 raddr : 3;
+	} s;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn30xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn31xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn38xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn50xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn52xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn52xxp1;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn56xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn56xxp1;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn58xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn58xxp1;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn61xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn63xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn63xxp1;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn66xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn70xx;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cn70xxp1;
+	struct cvmx_ipd_prc_hold_ptr_fifo_ctl_s cnf71xx;
+};
+
+typedef union cvmx_ipd_prc_hold_ptr_fifo_ctl cvmx_ipd_prc_hold_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_prc_port_ptr_fifo_ctl
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's PRC PORT Fifo.
+ *
+ */
+union cvmx_ipd_prc_port_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s {
+		u64 reserved_44_63 : 20;
+		u64 max_pkt : 7;
+		u64 ptr : 29;
+		u64 cena : 1;
+		u64 raddr : 7;
+	} s;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn30xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn31xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn38xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn50xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn52xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn52xxp1;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn56xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn56xxp1;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn58xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn58xxp1;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn61xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn63xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn63xxp1;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn66xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn70xx;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cn70xxp1;
+	struct cvmx_ipd_prc_port_ptr_fifo_ctl_s cnf71xx;
+};
+
+typedef union cvmx_ipd_prc_port_ptr_fifo_ctl cvmx_ipd_prc_port_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_ptr_count
+ *
+ * Shows the number of WQE and Packet Page Pointers stored in the IPD.
+ *
+ */
+union cvmx_ipd_ptr_count {
+	u64 u64;
+	struct cvmx_ipd_ptr_count_s {
+		u64 reserved_19_63 : 45;
+		u64 pktv_cnt : 1;
+		u64 wqev_cnt : 1;
+		u64 pfif_cnt : 3;
+		u64 pkt_pcnt : 7;
+		u64 wqe_pcnt : 7;
+	} s;
+	struct cvmx_ipd_ptr_count_s cn30xx;
+	struct cvmx_ipd_ptr_count_s cn31xx;
+	struct cvmx_ipd_ptr_count_s cn38xx;
+	struct cvmx_ipd_ptr_count_s cn38xxp2;
+	struct cvmx_ipd_ptr_count_s cn50xx;
+	struct cvmx_ipd_ptr_count_s cn52xx;
+	struct cvmx_ipd_ptr_count_s cn52xxp1;
+	struct cvmx_ipd_ptr_count_s cn56xx;
+	struct cvmx_ipd_ptr_count_s cn56xxp1;
+	struct cvmx_ipd_ptr_count_s cn58xx;
+	struct cvmx_ipd_ptr_count_s cn58xxp1;
+	struct cvmx_ipd_ptr_count_s cn61xx;
+	struct cvmx_ipd_ptr_count_s cn63xx;
+	struct cvmx_ipd_ptr_count_s cn63xxp1;
+	struct cvmx_ipd_ptr_count_s cn66xx;
+	struct cvmx_ipd_ptr_count_s cn68xx;
+	struct cvmx_ipd_ptr_count_s cn68xxp1;
+	struct cvmx_ipd_ptr_count_s cn70xx;
+	struct cvmx_ipd_ptr_count_s cn70xxp1;
+	struct cvmx_ipd_ptr_count_s cnf71xx;
+};
+
+typedef union cvmx_ipd_ptr_count cvmx_ipd_ptr_count_t;
+
+/**
+ * cvmx_ipd_pwp_ptr_fifo_ctl
+ *
+ * Allows reading of the Page-Pointers stored in the IPD's PWP Fifo.
+ *
+ */
+union cvmx_ipd_pwp_ptr_fifo_ctl {
+	u64 u64;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s {
+		u64 reserved_61_63 : 3;
+		u64 max_cnts : 7;
+		u64 wraddr : 8;
+		u64 praddr : 8;
+		u64 ptr : 29;
+		u64 cena : 1;
+		u64 raddr : 8;
+	} s;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn30xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn31xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn38xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn50xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn52xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn52xxp1;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn56xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn56xxp1;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn58xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn58xxp1;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn61xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn63xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn63xxp1;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn66xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn70xx;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cn70xxp1;
+	struct cvmx_ipd_pwp_ptr_fifo_ctl_s cnf71xx;
+};
+
+typedef union cvmx_ipd_pwp_ptr_fifo_ctl cvmx_ipd_pwp_ptr_fifo_ctl_t;
+
+/**
+ * cvmx_ipd_qos#_red_marks
+ *
+ * Sets the pass and drop marks for the QOS level.
+ *
+ */
+union cvmx_ipd_qosx_red_marks {
+	u64 u64;
+	struct cvmx_ipd_qosx_red_marks_s {
+		u64 drop : 32;
+		u64 pass : 32;
+	} s;
+	struct cvmx_ipd_qosx_red_marks_s cn30xx;
+	struct cvmx_ipd_qosx_red_marks_s cn31xx;
+	struct cvmx_ipd_qosx_red_marks_s cn38xx;
+	struct cvmx_ipd_qosx_red_marks_s cn38xxp2;
+	struct cvmx_ipd_qosx_red_marks_s cn50xx;
+	struct cvmx_ipd_qosx_red_marks_s cn52xx;
+	struct cvmx_ipd_qosx_red_marks_s cn52xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cn56xx;
+	struct cvmx_ipd_qosx_red_marks_s cn56xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cn58xx;
+	struct cvmx_ipd_qosx_red_marks_s cn58xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cn61xx;
+	struct cvmx_ipd_qosx_red_marks_s cn63xx;
+	struct cvmx_ipd_qosx_red_marks_s cn63xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cn66xx;
+	struct cvmx_ipd_qosx_red_marks_s cn68xx;
+	struct cvmx_ipd_qosx_red_marks_s cn68xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cn70xx;
+	struct cvmx_ipd_qosx_red_marks_s cn70xxp1;
+	struct cvmx_ipd_qosx_red_marks_s cnf71xx;
+};
+
+typedef union cvmx_ipd_qosx_red_marks cvmx_ipd_qosx_red_marks_t;
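+
+/*
+ * Programming sketch (illustrative; csr_wr() is the assumed CSR accessor):
+ * PASS and DROP are thresholds on the averaged free-page count, with
+ * PASS >= DROP, per the usual RED semantics. Configuring QOS level 0
+ * could look like:
+ *
+ *	cvmx_ipd_qosx_red_marks_t marks;
+ *
+ *	marks.u64 = 0;
+ *	marks.s.pass = 96;	pass threshold
+ *	marks.s.drop = 32;	drop threshold
+ *	csr_wr(CVMX_IPD_QOSX_RED_MARKS(0), marks.u64);
+ */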
+
+/**
+ * cvmx_ipd_que0_free_page_cnt
+ *
+ * Number of Free-Page Pointers available for use in the FPA for Queue-0.
+ *
+ */
+union cvmx_ipd_que0_free_page_cnt {
+	u64 u64;
+	struct cvmx_ipd_que0_free_page_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 q0_pcnt : 32;
+	} s;
+	struct cvmx_ipd_que0_free_page_cnt_s cn30xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn31xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn38xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn38xxp2;
+	struct cvmx_ipd_que0_free_page_cnt_s cn50xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn52xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn52xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cn56xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn56xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cn58xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn58xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cn61xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn63xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn63xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cn66xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn68xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn68xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cn70xx;
+	struct cvmx_ipd_que0_free_page_cnt_s cn70xxp1;
+	struct cvmx_ipd_que0_free_page_cnt_s cnf71xx;
+};
+
+typedef union cvmx_ipd_que0_free_page_cnt cvmx_ipd_que0_free_page_cnt_t;
+
+/**
+ * cvmx_ipd_red_bpid_enable#
+ *
+ * IPD_RED_BPID_ENABLE = IPD RED BPID Enable
+ *
+ * Per-BPID enable bits for the RED unit.
+ */
+union cvmx_ipd_red_bpid_enablex {
+	u64 u64;
+	struct cvmx_ipd_red_bpid_enablex_s {
+		u64 prt_enb : 64;
+	} s;
+	struct cvmx_ipd_red_bpid_enablex_s cn68xx;
+	struct cvmx_ipd_red_bpid_enablex_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_red_bpid_enablex cvmx_ipd_red_bpid_enablex_t;
+
+/**
+ * cvmx_ipd_red_delay
+ *
+ * IPD_RED_DELAY = IPD RED Delay
+ *
+ * Sets the RED probability (PRB_DLY) and averaging (AVG_DLY) delay counts.
+ */
+union cvmx_ipd_red_delay {
+	u64 u64;
+	struct cvmx_ipd_red_delay_s {
+		u64 reserved_28_63 : 36;
+		u64 prb_dly : 14;
+		u64 avg_dly : 14;
+	} s;
+	struct cvmx_ipd_red_delay_s cn68xx;
+	struct cvmx_ipd_red_delay_s cn68xxp1;
+};
+
+typedef union cvmx_ipd_red_delay cvmx_ipd_red_delay_t;
+
+/**
+ * cvmx_ipd_red_port_enable
+ *
+ * Per-port RED enable bits (PRT_ENB), plus the probability (PRB_DLY) and averaging (AVG_DLY) delay counts.
+ *
+ */
+union cvmx_ipd_red_port_enable {
+	u64 u64;
+	struct cvmx_ipd_red_port_enable_s {
+		u64 prb_dly : 14;
+		u64 avg_dly : 14;
+		u64 prt_enb : 36;
+	} s;
+	struct cvmx_ipd_red_port_enable_s cn30xx;
+	struct cvmx_ipd_red_port_enable_s cn31xx;
+	struct cvmx_ipd_red_port_enable_s cn38xx;
+	struct cvmx_ipd_red_port_enable_s cn38xxp2;
+	struct cvmx_ipd_red_port_enable_s cn50xx;
+	struct cvmx_ipd_red_port_enable_s cn52xx;
+	struct cvmx_ipd_red_port_enable_s cn52xxp1;
+	struct cvmx_ipd_red_port_enable_s cn56xx;
+	struct cvmx_ipd_red_port_enable_s cn56xxp1;
+	struct cvmx_ipd_red_port_enable_s cn58xx;
+	struct cvmx_ipd_red_port_enable_s cn58xxp1;
+	struct cvmx_ipd_red_port_enable_s cn61xx;
+	struct cvmx_ipd_red_port_enable_s cn63xx;
+	struct cvmx_ipd_red_port_enable_s cn63xxp1;
+	struct cvmx_ipd_red_port_enable_s cn66xx;
+	struct cvmx_ipd_red_port_enable_s cn70xx;
+	struct cvmx_ipd_red_port_enable_s cn70xxp1;
+	struct cvmx_ipd_red_port_enable_s cnf71xx;
+};
+
+typedef union cvmx_ipd_red_port_enable cvmx_ipd_red_port_enable_t;
+
+/**
+ * cvmx_ipd_red_port_enable2
+ *
+ * RED enable bits for the additional ports (PRT_ENB).
+ *
+ */
+union cvmx_ipd_red_port_enable2 {
+	u64 u64;
+	struct cvmx_ipd_red_port_enable2_s {
+		u64 reserved_12_63 : 52;
+		u64 prt_enb : 12;
+	} s;
+	struct cvmx_ipd_red_port_enable2_cn52xx {
+		u64 reserved_4_63 : 60;
+		u64 prt_enb : 4;
+	} cn52xx;
+	struct cvmx_ipd_red_port_enable2_cn52xx cn52xxp1;
+	struct cvmx_ipd_red_port_enable2_cn52xx cn56xx;
+	struct cvmx_ipd_red_port_enable2_cn52xx cn56xxp1;
+	struct cvmx_ipd_red_port_enable2_s cn61xx;
+	struct cvmx_ipd_red_port_enable2_cn63xx {
+		u64 reserved_8_63 : 56;
+		u64 prt_enb : 8;
+	} cn63xx;
+	struct cvmx_ipd_red_port_enable2_cn63xx cn63xxp1;
+	struct cvmx_ipd_red_port_enable2_s cn66xx;
+	struct cvmx_ipd_red_port_enable2_s cn70xx;
+	struct cvmx_ipd_red_port_enable2_s cn70xxp1;
+	struct cvmx_ipd_red_port_enable2_s cnf71xx;
+};
+
+typedef union cvmx_ipd_red_port_enable2 cvmx_ipd_red_port_enable2_t;
+
+/**
+ * cvmx_ipd_red_que#_param
+ *
+ * Values that control the passing and dropping of packets by the RED engine for the QOS level.
+ *
+ */
+union cvmx_ipd_red_quex_param {
+	u64 u64;
+	struct cvmx_ipd_red_quex_param_s {
+		u64 reserved_49_63 : 15;
+		u64 use_pcnt : 1;
+		u64 new_con : 8;
+		u64 avg_con : 8;
+		u64 prb_con : 32;
+	} s;
+	struct cvmx_ipd_red_quex_param_s cn30xx;
+	struct cvmx_ipd_red_quex_param_s cn31xx;
+	struct cvmx_ipd_red_quex_param_s cn38xx;
+	struct cvmx_ipd_red_quex_param_s cn38xxp2;
+	struct cvmx_ipd_red_quex_param_s cn50xx;
+	struct cvmx_ipd_red_quex_param_s cn52xx;
+	struct cvmx_ipd_red_quex_param_s cn52xxp1;
+	struct cvmx_ipd_red_quex_param_s cn56xx;
+	struct cvmx_ipd_red_quex_param_s cn56xxp1;
+	struct cvmx_ipd_red_quex_param_s cn58xx;
+	struct cvmx_ipd_red_quex_param_s cn58xxp1;
+	struct cvmx_ipd_red_quex_param_s cn61xx;
+	struct cvmx_ipd_red_quex_param_s cn63xx;
+	struct cvmx_ipd_red_quex_param_s cn63xxp1;
+	struct cvmx_ipd_red_quex_param_s cn66xx;
+	struct cvmx_ipd_red_quex_param_s cn68xx;
+	struct cvmx_ipd_red_quex_param_s cn68xxp1;
+	struct cvmx_ipd_red_quex_param_s cn70xx;
+	struct cvmx_ipd_red_quex_param_s cn70xxp1;
+	struct cvmx_ipd_red_quex_param_s cnf71xx;
+};
+
+typedef union cvmx_ipd_red_quex_param cvmx_ipd_red_quex_param_t;
+
+/**
+ * cvmx_ipd_req_wgt
+ *
+ * IPD_REQ_WGT = IPD REQ weights
+ *
+ * There are 8 devices that can request to send packet traffic to the IPD. These weights are used for the Weighted Round Robin
+ * grant generated by the IPD to requestors.
+ */
+union cvmx_ipd_req_wgt {
+	u64 u64;
+	struct cvmx_ipd_req_wgt_s {
+		u64 wgt7 : 8;
+		u64 wgt6 : 8;
+		u64 wgt5 : 8;
+		u64 wgt4 : 8;
+		u64 wgt3 : 8;
+		u64 wgt2 : 8;
+		u64 wgt1 : 8;
+		u64 wgt0 : 8;
+	} s;
+	struct cvmx_ipd_req_wgt_s cn68xx;
+};
+
+typedef union cvmx_ipd_req_wgt cvmx_ipd_req_wgt_t;
+
+/**
+ * cvmx_ipd_sub_port_bp_page_cnt
+ *
+ * Adds the supplied value (the number of pages) to the indicated port count
+ * register. The value added should be the 2's complement of the value that
+ * needs to be subtracted. Users add 2's complement values to the
+ * port-mbuf-count register to return (lower the count of) mbufs to the
+ * counter in order to avoid port-level backpressure being applied to the
+ * port. Backpressure is applied when the MBUF used count of a port exceeds
+ * the value in IPD_PORTX_BP_PAGE_CNT, IPD_PORTX_BP_PAGE_CNT2, and
+ * IPD_PORTX_BP_PAGE_CNT3.
+ * This register can't be written from the PCI via a window write.
+ */
+union cvmx_ipd_sub_port_bp_page_cnt {
+	u64 u64;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s {
+		u64 reserved_31_63 : 33;
+		u64 port : 6;
+		u64 page_cnt : 25;
+	} s;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn30xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn31xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn38xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn38xxp2;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn50xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn52xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn52xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn56xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn56xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn58xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn58xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn61xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn63xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn63xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn66xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn68xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn68xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn70xx;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cn70xxp1;
+	struct cvmx_ipd_sub_port_bp_page_cnt_s cnf71xx;
+};
+
+typedef union cvmx_ipd_sub_port_bp_page_cnt cvmx_ipd_sub_port_bp_page_cnt_t;
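+
+/*
+ * Usage sketch (editor's illustration, not part of the original header):
+ * to return e.g. 32 pages to port 5's backpressure counter, software
+ * writes the 25-bit 2's complement of 32 into PAGE_CNT. csr_wr() is
+ * assumed to be the usual U-Boot Octeon CSR write helper, and the
+ * CVMX_IPD_SUB_PORT_BP_PAGE_CNT address macro is assumed to be defined
+ * earlier in this header:
+ *
+ *	cvmx_ipd_sub_port_bp_page_cnt_t sub;
+ *
+ *	sub.u64 = 0;
+ *	sub.s.port = 5;
+ *	sub.s.page_cnt = (-32) & 0x1ffffff;
+ *	csr_wr(CVMX_IPD_SUB_PORT_BP_PAGE_CNT, sub.u64);
+ */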
+
+/**
+ * cvmx_ipd_sub_port_fcs
+ *
+ * When a bit is set to '1', the corresponding port will subtract 4 bytes
+ * (the FCS) from the end of the packet.
+ */
+union cvmx_ipd_sub_port_fcs {
+	u64 u64;
+	struct cvmx_ipd_sub_port_fcs_s {
+		u64 reserved_40_63 : 24;
+		u64 port_bit2 : 4;
+		u64 reserved_32_35 : 4;
+		u64 port_bit : 32;
+	} s;
+	struct cvmx_ipd_sub_port_fcs_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 port_bit : 3;
+	} cn30xx;
+	struct cvmx_ipd_sub_port_fcs_cn30xx cn31xx;
+	struct cvmx_ipd_sub_port_fcs_cn38xx {
+		u64 reserved_32_63 : 32;
+		u64 port_bit : 32;
+	} cn38xx;
+	struct cvmx_ipd_sub_port_fcs_cn38xx cn38xxp2;
+	struct cvmx_ipd_sub_port_fcs_cn30xx cn50xx;
+	struct cvmx_ipd_sub_port_fcs_s cn52xx;
+	struct cvmx_ipd_sub_port_fcs_s cn52xxp1;
+	struct cvmx_ipd_sub_port_fcs_s cn56xx;
+	struct cvmx_ipd_sub_port_fcs_s cn56xxp1;
+	struct cvmx_ipd_sub_port_fcs_cn38xx cn58xx;
+	struct cvmx_ipd_sub_port_fcs_cn38xx cn58xxp1;
+	struct cvmx_ipd_sub_port_fcs_s cn61xx;
+	struct cvmx_ipd_sub_port_fcs_s cn63xx;
+	struct cvmx_ipd_sub_port_fcs_s cn63xxp1;
+	struct cvmx_ipd_sub_port_fcs_s cn66xx;
+	struct cvmx_ipd_sub_port_fcs_s cn70xx;
+	struct cvmx_ipd_sub_port_fcs_s cn70xxp1;
+	struct cvmx_ipd_sub_port_fcs_s cnf71xx;
+};
+
+typedef union cvmx_ipd_sub_port_fcs cvmx_ipd_sub_port_fcs_t;
+
+/**
+ * cvmx_ipd_sub_port_qos_cnt
+ *
+ * Adds the value (CNT) to the indicated Port-QOS register (PORT_QOS). The
+ * value added must be the 2's complement of the value that needs to be
+ * subtracted.
+ */
+union cvmx_ipd_sub_port_qos_cnt {
+	u64 u64;
+	struct cvmx_ipd_sub_port_qos_cnt_s {
+		u64 reserved_41_63 : 23;
+		u64 port_qos : 9;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn52xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn52xxp1;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn56xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn56xxp1;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn61xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn63xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn63xxp1;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn66xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn68xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn68xxp1;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn70xx;
+	struct cvmx_ipd_sub_port_qos_cnt_s cn70xxp1;
+	struct cvmx_ipd_sub_port_qos_cnt_s cnf71xx;
+};
+
+typedef union cvmx_ipd_sub_port_qos_cnt cvmx_ipd_sub_port_qos_cnt_t;
+
+/**
+ * cvmx_ipd_wqe_fpa_queue
+ *
+ * Specifies which FPA queue (0-7) to fetch page pointers from for WQEs.
+ *
+ */
+union cvmx_ipd_wqe_fpa_queue {
+	u64 u64;
+	struct cvmx_ipd_wqe_fpa_queue_s {
+		u64 reserved_3_63 : 61;
+		u64 wqe_pool : 3;
+	} s;
+	struct cvmx_ipd_wqe_fpa_queue_s cn30xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn31xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn38xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn38xxp2;
+	struct cvmx_ipd_wqe_fpa_queue_s cn50xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn52xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn52xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cn56xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn56xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cn58xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn58xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cn61xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn63xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn63xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cn66xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn68xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn68xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cn70xx;
+	struct cvmx_ipd_wqe_fpa_queue_s cn70xxp1;
+	struct cvmx_ipd_wqe_fpa_queue_s cnf71xx;
+};
+
+typedef union cvmx_ipd_wqe_fpa_queue cvmx_ipd_wqe_fpa_queue_t;
+
+/**
+ * cvmx_ipd_wqe_ptr_valid
+ *
+ * The value of the WQE pointer that has been fetched and is valid in this register.
+ *
+ */
+union cvmx_ipd_wqe_ptr_valid {
+	u64 u64;
+	struct cvmx_ipd_wqe_ptr_valid_s {
+		u64 reserved_29_63 : 35;
+		u64 ptr : 29;
+	} s;
+	struct cvmx_ipd_wqe_ptr_valid_s cn30xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn31xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn38xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn50xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn52xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn52xxp1;
+	struct cvmx_ipd_wqe_ptr_valid_s cn56xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn56xxp1;
+	struct cvmx_ipd_wqe_ptr_valid_s cn58xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn58xxp1;
+	struct cvmx_ipd_wqe_ptr_valid_s cn61xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn63xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn63xxp1;
+	struct cvmx_ipd_wqe_ptr_valid_s cn66xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn70xx;
+	struct cvmx_ipd_wqe_ptr_valid_s cn70xxp1;
+	struct cvmx_ipd_wqe_ptr_valid_s cnf71xx;
+};
+
+typedef union cvmx_ipd_wqe_ptr_valid cvmx_ipd_wqe_ptr_valid_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 14/50] mips: octeon: Add cvmx-l2c-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (12 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 13/50] mips: octeon: Add cvmx-ipd-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 15/50] mips: octeon: Add cvmx-mio-defs.h " Stefan Roese
                   ` (38 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-l2c-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.
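
A typical consumer of these register definitions reads a CSR into the
union type and then accesses fields through the common or per-model view.
A minimal sketch (editor's illustration; csr_rd() is assumed to be the
usual U-Boot Octeon CSR read helper):

	union cvmx_l2c_cfg cfg;

	cfg.u64 = csr_rd(CVMX_L2C_CFG);
	if (cfg.s.idxalias)
		printf("L2C index aliasing is enabled\n");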

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-l2c-defs.h  | 172 ++++++++++++++++++
 1 file changed, 172 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-l2c-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-l2c-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-l2c-defs.h
new file mode 100644
index 0000000000..7fddcd6dfb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-l2c-defs.h
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_L2C_DEFS_H__
+#define __CVMX_L2C_DEFS_H__
+
+#define CVMX_L2C_CFG 0x0001180080000000ull
+#define CVMX_L2C_CTL 0x0001180080800000ull
+
+/*
+ * Mapping is done starting from 0x11800.80000000
+ * Use _REL for relative mapping
+ */
+#define CVMX_L2C_CTL_REL	 0x00800000
+#define CVMX_L2C_BIG_CTL_REL	 0x00800030
+#define CVMX_L2C_TADX_INT_REL(i) (0x00a00028 + (((i) & 7) * 0x40000))
+#define CVMX_L2C_MCIX_INT_REL(i) (0x00c00028 + (((i) & 3) * 0x40000))
+
+/**
+ * cvmx_l2c_cfg
+ *
+ * Specifies the RSL base addresses for the block.
+ *
+ * L2C_CFG = L2C Configuration
+ */
+union cvmx_l2c_cfg {
+	u64 u64;
+	struct cvmx_l2c_cfg_s {
+		u64 reserved_20_63 : 44;
+		u64 bstrun : 1;
+		u64 lbist : 1;
+		u64 xor_bank : 1;
+		u64 dpres1 : 1;
+		u64 dpres0 : 1;
+		u64 dfill_dis : 1;
+		u64 fpexp : 4;
+		u64 fpempty : 1;
+		u64 fpen : 1;
+		u64 idxalias : 1;
+		u64 mwf_crd : 4;
+		u64 rsp_arb_mode : 1;
+		u64 rfb_arb_mode : 1;
+		u64 lrf_arb_mode : 1;
+	} s;
+};
+
+/**
+ * cvmx_l2c_ctl
+ *
+ * L2C_CTL = L2C Control
+ *
+ * Notes:
+ * (1) If MAXVAB is != 0, VAB_THRESH should be less than MAXVAB.
+ *
+ * (2) L2DFDBE and L2DFSBE allows software to generate L2DSBE, L2DDBE, VBFSBE,
+ * and VBFDBE errors for the purposes of testing error handling code.  When
+ * one (or both) of these bits are set a PL2 which misses in the L2 will fill
+ * with the appropriate error in the first 2 OWs of the fill. Software can
+ * determine which OW pair gets the error by choosing the desired fill order
+ * (address<6:5>).  A PL2 which hits in the L2 will not inject any errors.
+ * Therefore sending a WBIL2 prior to the PL2 is recommended to make a miss
+ * likely (if multiple processors are involved software must be careful to be
+ * sure no other processor or IO device can bring the block into the L2).
+ *
+ * To generate a VBFSBE or VBFDBE, software must first get the cache block
+ * into the cache with an error using a PL2 which misses the L2.  Then a
+ * store partial to a portion of the cache block without the error must
+ * change the block to dirty.  Then, a subsequent WBL2/WBIL2/victim will
+ * trigger the VBFSBE/VBFDBE error.
+ */
+union cvmx_l2c_ctl {
+	u64 u64;
+	struct cvmx_l2c_ctl_s {
+		u64 reserved_29_63 : 35;
+		u64 rdf_fast : 1;
+		u64 disstgl2i : 1;
+		u64 l2dfsbe : 1;
+		u64 l2dfdbe : 1;
+		u64 discclk : 1;
+		u64 maxvab : 4;
+		u64 maxlfb : 4;
+		u64 rsp_arb_mode : 1;
+		u64 xmc_arb_mode : 1;
+		u64 reserved_2_13 : 12;
+		u64 disecc : 1;
+		u64 disidxalias : 1;
+	} s;
+
+	struct cvmx_l2c_ctl_cn73xx {
+		u64 reserved_32_63 : 32;
+		u64 ocla_qos : 3;
+		u64 reserved_28_28 : 1;
+		u64 disstgl2i : 1;
+		u64 reserved_25_26 : 2;
+		u64 discclk : 1;
+		u64 reserved_16_23 : 8;
+		u64 rsp_arb_mode : 1;
+		u64 xmc_arb_mode : 1;
+		u64 rdf_cnt : 8;
+		u64 reserved_4_5 : 2;
+		u64 disldwb : 1;
+		u64 dissblkdty : 1;
+		u64 disecc : 1;
+		u64 disidxalias : 1;
+	} cn73xx;
+
+	struct cvmx_l2c_ctl_cn73xx cn78xx;
+};
+
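+/*
+ * Illustrative sketch (editor's addition, not from the 2013 sources):
+ * following note (2) above, error handling code can be exercised by
+ * setting L2DFSBE before a PL2 that misses in the L2. csr_rd()/csr_wr()
+ * are assumed to be the usual U-Boot Octeon CSR helpers:
+ *
+ *	union cvmx_l2c_ctl ctl;
+ *
+ *	ctl.u64 = csr_rd(CVMX_L2C_CTL);
+ *	ctl.s.l2dfsbe = 1;
+ *	csr_wr(CVMX_L2C_CTL, ctl.u64);
+ */
+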
+/**
+ * cvmx_l2c_big_ctl
+ *
+ * L2C_BIG_CTL = L2C Big memory control register
+ *
+ * Notes:
+ * (1) BIGRD interrupts can occur during normal operation as the PP's are
+ * allowed to prefetch to non-existent memory locations.  Therefore,
+ * BIGRD is for informational purposes only.
+ *
+ * (2) When HOLEWR/BIGWR blocks a store L2C_VER_ID, L2C_VER_PP, L2C_VER_IOB,
+ * and L2C_VER_MSC will be loaded just like a store which is blocked by VRTWR.
+ * Additionally, L2C_ERR_XMC will be loaded.
+ */
+union cvmx_l2c_big_ctl {
+	u64 u64;
+	struct cvmx_l2c_big_ctl_s {
+		u64 reserved_8_63 : 56;
+		u64 maxdram : 4;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_l2c_big_ctl_cn61xx {
+		u64 reserved_8_63 : 56;
+		u64 maxdram : 4;
+		u64 reserved_1_3 : 3;
+		u64 disable : 1;
+	} cn61xx;
+	struct cvmx_l2c_big_ctl_cn61xx cn63xx;
+	struct cvmx_l2c_big_ctl_cn61xx cn66xx;
+	struct cvmx_l2c_big_ctl_cn61xx cn68xx;
+	struct cvmx_l2c_big_ctl_cn61xx cn68xxp1;
+	struct cvmx_l2c_big_ctl_cn70xx {
+		u64 reserved_8_63 : 56;
+		u64 maxdram : 4;
+		u64 reserved_1_3 : 3;
+		u64 disbig : 1;
+	} cn70xx;
+	struct cvmx_l2c_big_ctl_cn70xx cn70xxp1;
+	struct cvmx_l2c_big_ctl_cn70xx cn73xx;
+	struct cvmx_l2c_big_ctl_cn70xx cn78xx;
+	struct cvmx_l2c_big_ctl_cn70xx cn78xxp1;
+	struct cvmx_l2c_big_ctl_cn61xx cnf71xx;
+	struct cvmx_l2c_big_ctl_cn70xx cnf75xx;
+};
+
+struct rlevel_byte_data {
+	int delay;
+	int loop_total;
+	int loop_count;
+	int best;
+	u64 bm;
+	int bmerrs;
+	int sqerrs;
+	int bestsq;
+};
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 15/50] mips: octeon: Add cvmx-mio-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (13 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 14/50] mips: octeon: Add cvmx-l2c-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 16/50] mips: octeon: Add cvmx-npi-defs.h " Stefan Roese
                   ` (37 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-mio-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-mio-defs.h  | 353 ++++++++++++++++++
 1 file changed, 353 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-mio-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-mio-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-mio-defs.h
new file mode 100644
index 0000000000..23a18be54e
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-mio-defs.h
@@ -0,0 +1,353 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_MIO_DEFS_H__
+#define __CVMX_MIO_DEFS_H__
+
+#define CVMX_MIO_PTP_CLOCK_CFG	  (0x0001070000000F00ull)
+#define CVMX_MIO_PTP_EVT_CNT	  (0x0001070000000F28ull)
+#define CVMX_MIO_RST_BOOT	  (0x0001180000001600ull)
+#define CVMX_MIO_RST_CTLX(offset) (0x0001180000001618ull + ((offset) & 1))
+#define CVMX_MIO_QLMX_CFG(offset) (0x0001180000001590ull + ((offset) & 7) * 8)
+
+/**
+ * cvmx_mio_ptp_clock_cfg
+ *
+ * This register configures the timestamp architecture.
+ *
+ */
+union cvmx_mio_ptp_clock_cfg {
+	u64 u64;
+	struct cvmx_mio_ptp_clock_cfg_s {
+		u64 reserved_40_63 : 24;
+		u64 ext_clk_edge : 2;
+		u64 ckout_out4 : 1;
+		u64 pps_out : 5;
+		u64 pps_inv : 1;
+		u64 pps_en : 1;
+		u64 ckout_out : 4;
+		u64 ckout_inv : 1;
+		u64 ckout_en : 1;
+		u64 evcnt_in : 6;
+		u64 evcnt_edge : 1;
+		u64 evcnt_en : 1;
+		u64 tstmp_in : 6;
+		u64 tstmp_edge : 1;
+		u64 tstmp_en : 1;
+		u64 ext_clk_in : 6;
+		u64 ext_clk_en : 1;
+		u64 ptp_en : 1;
+	} s;
+	struct cvmx_mio_ptp_clock_cfg_cn61xx {
+		u64 reserved_42_63 : 22;
+		u64 pps : 1;
+		u64 ckout : 1;
+		u64 ext_clk_edge : 2;
+		u64 ckout_out4 : 1;
+		u64 pps_out : 5;
+		u64 pps_inv : 1;
+		u64 pps_en : 1;
+		u64 ckout_out : 4;
+		u64 ckout_inv : 1;
+		u64 ckout_en : 1;
+		u64 evcnt_in : 6;
+		u64 evcnt_edge : 1;
+		u64 evcnt_en : 1;
+		u64 tstmp_in : 6;
+		u64 tstmp_edge : 1;
+		u64 tstmp_en : 1;
+		u64 ext_clk_in : 6;
+		u64 ext_clk_en : 1;
+		u64 ptp_en : 1;
+	} cn61xx;
+	struct cvmx_mio_ptp_clock_cfg_cn63xx {
+		u64 reserved_24_63 : 40;
+		u64 evcnt_in : 6;
+		u64 evcnt_edge : 1;
+		u64 evcnt_en : 1;
+		u64 tstmp_in : 6;
+		u64 tstmp_edge : 1;
+		u64 tstmp_en : 1;
+		u64 ext_clk_in : 6;
+		u64 ext_clk_en : 1;
+		u64 ptp_en : 1;
+	} cn63xx;
+	struct cvmx_mio_ptp_clock_cfg_cn63xx cn63xxp1;
+	struct cvmx_mio_ptp_clock_cfg_s cn66xx;
+	struct cvmx_mio_ptp_clock_cfg_cn61xx cn68xx;
+	struct cvmx_mio_ptp_clock_cfg_cn63xx cn68xxp1;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx {
+		u64 reserved_42_63 : 22;
+		u64 ckout : 1;
+		u64 pps : 1;
+		u64 ext_clk_edge : 2;
+		u64 reserved_32_37 : 6;
+		u64 pps_inv : 1;
+		u64 pps_en : 1;
+		u64 reserved_26_29 : 4;
+		u64 ckout_inv : 1;
+		u64 ckout_en : 1;
+		u64 evcnt_in : 6;
+		u64 evcnt_edge : 1;
+		u64 evcnt_en : 1;
+		u64 tstmp_in : 6;
+		u64 tstmp_edge : 1;
+		u64 tstmp_en : 1;
+		u64 ext_clk_in : 6;
+		u64 ext_clk_en : 1;
+		u64 ptp_en : 1;
+	} cn70xx;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx cn70xxp1;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx cn73xx;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx cn78xx;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx cn78xxp1;
+	struct cvmx_mio_ptp_clock_cfg_cn61xx cnf71xx;
+	struct cvmx_mio_ptp_clock_cfg_cn70xx cnf75xx;
+};
+
+typedef union cvmx_mio_ptp_clock_cfg cvmx_mio_ptp_clock_cfg_t;
+
+/**
+ * cvmx_mio_ptp_evt_cnt
+ *
+ * This register contains the PTP event counter.
+ *
+ */
+union cvmx_mio_ptp_evt_cnt {
+	u64 u64;
+	struct cvmx_mio_ptp_evt_cnt_s {
+		u64 cntr : 64;
+	} s;
+	struct cvmx_mio_ptp_evt_cnt_s cn61xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn63xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn63xxp1;
+	struct cvmx_mio_ptp_evt_cnt_s cn66xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn68xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn68xxp1;
+	struct cvmx_mio_ptp_evt_cnt_s cn70xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn70xxp1;
+	struct cvmx_mio_ptp_evt_cnt_s cn73xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn78xx;
+	struct cvmx_mio_ptp_evt_cnt_s cn78xxp1;
+	struct cvmx_mio_ptp_evt_cnt_s cnf71xx;
+	struct cvmx_mio_ptp_evt_cnt_s cnf75xx;
+};
+
+typedef union cvmx_mio_ptp_evt_cnt cvmx_mio_ptp_evt_cnt_t;
+
+/**
+ * cvmx_mio_rst_boot
+ *
+ * Notes:
+ * JTCSRDIS, EJTAGDIS, ROMEN reset to 1 in authentik mode; in all other modes they reset to 0.
+ *
+ */
+union cvmx_mio_rst_boot {
+	u64 u64;
+	struct cvmx_mio_rst_boot_s {
+		u64 chipkill : 1;
+		u64 jtcsrdis : 1;
+		u64 ejtagdis : 1;
+		u64 romen : 1;
+		u64 ckill_ppdis : 1;
+		u64 jt_tstmode : 1;
+		u64 reserved_50_57 : 8;
+		u64 lboot_ext : 2;
+		u64 reserved_44_47 : 4;
+		u64 qlm4_spd : 4;
+		u64 qlm3_spd : 4;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} s;
+	struct cvmx_mio_rst_boot_cn61xx {
+		u64 chipkill : 1;
+		u64 jtcsrdis : 1;
+		u64 ejtagdis : 1;
+		u64 romen : 1;
+		u64 ckill_ppdis : 1;
+		u64 jt_tstmode : 1;
+		u64 reserved_50_57 : 8;
+		u64 lboot_ext : 2;
+		u64 reserved_36_47 : 12;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} cn61xx;
+	struct cvmx_mio_rst_boot_cn63xx {
+		u64 reserved_36_63 : 28;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} cn63xx;
+	struct cvmx_mio_rst_boot_cn63xx cn63xxp1;
+	struct cvmx_mio_rst_boot_cn66xx {
+		u64 chipkill : 1;
+		u64 jtcsrdis : 1;
+		u64 ejtagdis : 1;
+		u64 romen : 1;
+		u64 ckill_ppdis : 1;
+		u64 reserved_50_58 : 9;
+		u64 lboot_ext : 2;
+		u64 reserved_36_47 : 12;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} cn66xx;
+	struct cvmx_mio_rst_boot_cn68xx {
+		u64 reserved_59_63 : 5;
+		u64 jt_tstmode : 1;
+		u64 reserved_44_57 : 14;
+		u64 qlm4_spd : 4;
+		u64 qlm3_spd : 4;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} cn68xx;
+	struct cvmx_mio_rst_boot_cn68xxp1 {
+		u64 reserved_44_63 : 20;
+		u64 qlm4_spd : 4;
+		u64 qlm3_spd : 4;
+		u64 c_mul : 6;
+		u64 pnr_mul : 6;
+		u64 qlm2_spd : 4;
+		u64 qlm1_spd : 4;
+		u64 qlm0_spd : 4;
+		u64 lboot : 10;
+		u64 rboot : 1;
+		u64 rboot_pin : 1;
+	} cn68xxp1;
+	struct cvmx_mio_rst_boot_cn61xx cnf71xx;
+};
+
+typedef union cvmx_mio_rst_boot cvmx_mio_rst_boot_t;
+
+/**
+ * cvmx_mio_rst_ctl#
+ *
+ * Notes:
+ * GEN1_Only mode is enabled for PEM0 when QLM1_SPD[0] is set or when sclk < 550 MHz.
+ * GEN1_Only mode is enabled for PEM1 when QLM1_SPD[1] is set or when sclk < 550 MHz.
+ */
+union cvmx_mio_rst_ctlx {
+	u64 u64;
+	struct cvmx_mio_rst_ctlx_s {
+		u64 reserved_13_63 : 51;
+		u64 in_rev_ln : 1;
+		u64 rev_lanes : 1;
+		u64 gen1_only : 1;
+		u64 prst_link : 1;
+		u64 rst_done : 1;
+		u64 rst_link : 1;
+		u64 host_mode : 1;
+		u64 prtmode : 2;
+		u64 rst_drv : 1;
+		u64 rst_rcv : 1;
+		u64 rst_chip : 1;
+		u64 rst_val : 1;
+	} s;
+	struct cvmx_mio_rst_ctlx_s cn61xx;
+	struct cvmx_mio_rst_ctlx_cn63xx {
+		u64 reserved_10_63 : 54;
+		u64 prst_link : 1;
+		u64 rst_done : 1;
+		u64 rst_link : 1;
+		u64 host_mode : 1;
+		u64 prtmode : 2;
+		u64 rst_drv : 1;
+		u64 rst_rcv : 1;
+		u64 rst_chip : 1;
+		u64 rst_val : 1;
+	} cn63xx;
+	struct cvmx_mio_rst_ctlx_cn63xxp1 {
+		u64 reserved_9_63 : 55;
+		u64 rst_done : 1;
+		u64 rst_link : 1;
+		u64 host_mode : 1;
+		u64 prtmode : 2;
+		u64 rst_drv : 1;
+		u64 rst_rcv : 1;
+		u64 rst_chip : 1;
+		u64 rst_val : 1;
+	} cn63xxp1;
+	struct cvmx_mio_rst_ctlx_cn63xx cn66xx;
+	struct cvmx_mio_rst_ctlx_cn63xx cn68xx;
+	struct cvmx_mio_rst_ctlx_cn63xx cn68xxp1;
+	struct cvmx_mio_rst_ctlx_s cnf71xx;
+};
+
+typedef union cvmx_mio_rst_ctlx cvmx_mio_rst_ctlx_t;
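+
+/*
+ * Usage sketch (editor's addition): the indexed CVMX_MIO_RST_CTLX()
+ * macro selects the per-PEM instance. For example, to check whether
+ * PEM0 runs in host (root complex) mode, assuming the usual U-Boot
+ * Octeon csr_rd() helper:
+ *
+ *	cvmx_mio_rst_ctlx_t rst_ctl;
+ *
+ *	rst_ctl.u64 = csr_rd(CVMX_MIO_RST_CTLX(0));
+ *	if (rst_ctl.s.host_mode)
+ *		printf("PEM0 is in host (RC) mode\n");
+ */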
+
+/**
+ * cvmx_mio_qlm#_cfg
+ *
+ * Notes:
+ * Certain QLM_SPD values are valid only for certain QLM_CFG configurations;
+ * refer to the HRM for valid combinations. These CSRs are reset only on
+ * COLD_RESET. The reset values for QLM_SPD and QLM_CFG are as follows:
+ *
+ *	MIO_QLM0_CFG  SPD=F, CFG=2 SGMII (AGX0)
+ *	MIO_QLM1_CFG  SPD=0, CFG=1 PCIE 2x1 (PEM0/PEM1)
+ */
+union cvmx_mio_qlmx_cfg {
+	u64 u64;
+	struct cvmx_mio_qlmx_cfg_s {
+		u64 reserved_15_63 : 49;
+		u64 prtmode : 1;
+		u64 reserved_12_13 : 2;
+		u64 qlm_spd : 4;
+		u64 reserved_4_7 : 4;
+		u64 qlm_cfg : 4;
+	} s;
+	struct cvmx_mio_qlmx_cfg_cn61xx {
+		u64 reserved_15_63 : 49;
+		u64 prtmode : 1;
+		u64 reserved_12_13 : 2;
+		u64 qlm_spd : 4;
+		u64 reserved_2_7 : 6;
+		u64 qlm_cfg : 2;
+	} cn61xx;
+	struct cvmx_mio_qlmx_cfg_cn66xx {
+		u64 reserved_12_63 : 52;
+		u64 qlm_spd : 4;
+		u64 reserved_4_7 : 4;
+		u64 qlm_cfg : 4;
+	} cn66xx;
+	struct cvmx_mio_qlmx_cfg_cn68xx {
+		u64 reserved_12_63 : 52;
+		u64 qlm_spd : 4;
+		u64 reserved_3_7 : 5;
+		u64 qlm_cfg : 3;
+	} cn68xx;
+	struct cvmx_mio_qlmx_cfg_cn68xx cn68xxp1;
+	struct cvmx_mio_qlmx_cfg_cn61xx cnf71xx;
+};
+
+typedef union cvmx_mio_qlmx_cfg cvmx_mio_qlmx_cfg_t;
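+
+/*
+ * Usage sketch (editor's addition): reading back the QLM mode and speed
+ * for QLM1 through the CN61XX view, assuming the usual U-Boot Octeon
+ * csr_rd() helper:
+ *
+ *	cvmx_mio_qlmx_cfg_t qlm_cfg;
+ *
+ *	qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(1));
+ *	printf("QLM1: cfg=%d spd=%d\n",
+ *	       (int)qlm_cfg.cn61xx.qlm_cfg, (int)qlm_cfg.cn61xx.qlm_spd);
+ */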
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 16/50] mips: octeon: Add cvmx-npi-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (14 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 15/50] mips: octeon: Add cvmx-mio-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 17/50] mips: octeon: Add cvmx-pcieepx-defs.h " Stefan Roese
                   ` (36 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-npi-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-npi-defs.h  | 1953 +++++++++++++++++
 1 file changed, 1953 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-npi-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-npi-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-npi-defs.h
new file mode 100644
index 0000000000..f23ed78ee4
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-npi-defs.h
@@ -0,0 +1,1953 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon npi.
+ */
+
+#ifndef __CVMX_NPI_DEFS_H__
+#define __CVMX_NPI_DEFS_H__
+
+#define CVMX_NPI_BASE_ADDR_INPUT0	   CVMX_NPI_BASE_ADDR_INPUTX(0)
+#define CVMX_NPI_BASE_ADDR_INPUT1	   CVMX_NPI_BASE_ADDR_INPUTX(1)
+#define CVMX_NPI_BASE_ADDR_INPUT2	   CVMX_NPI_BASE_ADDR_INPUTX(2)
+#define CVMX_NPI_BASE_ADDR_INPUT3	   CVMX_NPI_BASE_ADDR_INPUTX(3)
+#define CVMX_NPI_BASE_ADDR_INPUTX(offset)  (0x00011F0000000070ull + ((offset) & 3) * 16)
+#define CVMX_NPI_BASE_ADDR_OUTPUT0	   CVMX_NPI_BASE_ADDR_OUTPUTX(0)
+#define CVMX_NPI_BASE_ADDR_OUTPUT1	   CVMX_NPI_BASE_ADDR_OUTPUTX(1)
+#define CVMX_NPI_BASE_ADDR_OUTPUT2	   CVMX_NPI_BASE_ADDR_OUTPUTX(2)
+#define CVMX_NPI_BASE_ADDR_OUTPUT3	   CVMX_NPI_BASE_ADDR_OUTPUTX(3)
+#define CVMX_NPI_BASE_ADDR_OUTPUTX(offset) (0x00011F00000000B8ull + ((offset) & 3) * 8)
+#define CVMX_NPI_BIST_STATUS		   (0x00011F00000003F8ull)
+#define CVMX_NPI_BUFF_SIZE_OUTPUT0	   CVMX_NPI_BUFF_SIZE_OUTPUTX(0)
+#define CVMX_NPI_BUFF_SIZE_OUTPUT1	   CVMX_NPI_BUFF_SIZE_OUTPUTX(1)
+#define CVMX_NPI_BUFF_SIZE_OUTPUT2	   CVMX_NPI_BUFF_SIZE_OUTPUTX(2)
+#define CVMX_NPI_BUFF_SIZE_OUTPUT3	   CVMX_NPI_BUFF_SIZE_OUTPUTX(3)
+#define CVMX_NPI_BUFF_SIZE_OUTPUTX(offset) (0x00011F00000000E0ull + ((offset) & 3) * 8)
+#define CVMX_NPI_COMP_CTL		   (0x00011F0000000218ull)
+#define CVMX_NPI_CTL_STATUS		   (0x00011F0000000010ull)
+#define CVMX_NPI_DBG_SELECT		   (0x00011F0000000008ull)
+#define CVMX_NPI_DMA_CONTROL		   (0x00011F0000000128ull)
+#define CVMX_NPI_DMA_HIGHP_COUNTS	   (0x00011F0000000148ull)
+#define CVMX_NPI_DMA_HIGHP_NADDR	   (0x00011F0000000158ull)
+#define CVMX_NPI_DMA_LOWP_COUNTS	   (0x00011F0000000140ull)
+#define CVMX_NPI_DMA_LOWP_NADDR		   (0x00011F0000000150ull)
+#define CVMX_NPI_HIGHP_DBELL		   (0x00011F0000000120ull)
+#define CVMX_NPI_HIGHP_IBUFF_SADDR	   (0x00011F0000000110ull)
+#define CVMX_NPI_INPUT_CONTROL		   (0x00011F0000000138ull)
+#define CVMX_NPI_INT_ENB		   (0x00011F0000000020ull)
+#define CVMX_NPI_INT_SUM		   (0x00011F0000000018ull)
+#define CVMX_NPI_LOWP_DBELL		   (0x00011F0000000118ull)
+#define CVMX_NPI_LOWP_IBUFF_SADDR	   (0x00011F0000000108ull)
+#define CVMX_NPI_MEM_ACCESS_SUBID3	   CVMX_NPI_MEM_ACCESS_SUBIDX(3)
+#define CVMX_NPI_MEM_ACCESS_SUBID4	   CVMX_NPI_MEM_ACCESS_SUBIDX(4)
+#define CVMX_NPI_MEM_ACCESS_SUBID5	   CVMX_NPI_MEM_ACCESS_SUBIDX(5)
+#define CVMX_NPI_MEM_ACCESS_SUBID6	   CVMX_NPI_MEM_ACCESS_SUBIDX(6)
+#define CVMX_NPI_MEM_ACCESS_SUBIDX(offset) (0x00011F0000000028ull + ((offset) & 7) * 8 - 8 * 3)
+#define CVMX_NPI_MSI_RCV		   (0x0000000000000190ull)
+#define CVMX_NPI_NPI_MSI_RCV		   (0x00011F0000001190ull)
+#define CVMX_NPI_NUM_DESC_OUTPUT0	   CVMX_NPI_NUM_DESC_OUTPUTX(0)
+#define CVMX_NPI_NUM_DESC_OUTPUT1	   CVMX_NPI_NUM_DESC_OUTPUTX(1)
+#define CVMX_NPI_NUM_DESC_OUTPUT2	   CVMX_NPI_NUM_DESC_OUTPUTX(2)
+#define CVMX_NPI_NUM_DESC_OUTPUT3	   CVMX_NPI_NUM_DESC_OUTPUTX(3)
+#define CVMX_NPI_NUM_DESC_OUTPUTX(offset)  (0x00011F0000000050ull + ((offset) & 3) * 8)
+#define CVMX_NPI_OUTPUT_CONTROL		   (0x00011F0000000100ull)
+#define CVMX_NPI_P0_DBPAIR_ADDR		   CVMX_NPI_PX_DBPAIR_ADDR(0)
+#define CVMX_NPI_P0_INSTR_ADDR		   CVMX_NPI_PX_INSTR_ADDR(0)
+#define CVMX_NPI_P0_INSTR_CNTS		   CVMX_NPI_PX_INSTR_CNTS(0)
+#define CVMX_NPI_P0_PAIR_CNTS		   CVMX_NPI_PX_PAIR_CNTS(0)
+#define CVMX_NPI_P1_DBPAIR_ADDR		   CVMX_NPI_PX_DBPAIR_ADDR(1)
+#define CVMX_NPI_P1_INSTR_ADDR		   CVMX_NPI_PX_INSTR_ADDR(1)
+#define CVMX_NPI_P1_INSTR_CNTS		   CVMX_NPI_PX_INSTR_CNTS(1)
+#define CVMX_NPI_P1_PAIR_CNTS		   CVMX_NPI_PX_PAIR_CNTS(1)
+#define CVMX_NPI_P2_DBPAIR_ADDR		   CVMX_NPI_PX_DBPAIR_ADDR(2)
+#define CVMX_NPI_P2_INSTR_ADDR		   CVMX_NPI_PX_INSTR_ADDR(2)
+#define CVMX_NPI_P2_INSTR_CNTS		   CVMX_NPI_PX_INSTR_CNTS(2)
+#define CVMX_NPI_P2_PAIR_CNTS		   CVMX_NPI_PX_PAIR_CNTS(2)
+#define CVMX_NPI_P3_DBPAIR_ADDR		   CVMX_NPI_PX_DBPAIR_ADDR(3)
+#define CVMX_NPI_P3_INSTR_ADDR		   CVMX_NPI_PX_INSTR_ADDR(3)
+#define CVMX_NPI_P3_INSTR_CNTS		   CVMX_NPI_PX_INSTR_CNTS(3)
+#define CVMX_NPI_P3_PAIR_CNTS		   CVMX_NPI_PX_PAIR_CNTS(3)
+#define CVMX_NPI_PCI_BAR1_INDEXX(offset)   (0x00011F0000001100ull + ((offset) & 31) * 4)
+#define CVMX_NPI_PCI_BIST_REG		   (0x00011F00000011C0ull)
+#define CVMX_NPI_PCI_BURST_SIZE		   (0x00011F00000000D8ull)
+#define CVMX_NPI_PCI_CFG00		   (0x00011F0000001800ull)
+#define CVMX_NPI_PCI_CFG01		   (0x00011F0000001804ull)
+#define CVMX_NPI_PCI_CFG02		   (0x00011F0000001808ull)
+#define CVMX_NPI_PCI_CFG03		   (0x00011F000000180Cull)
+#define CVMX_NPI_PCI_CFG04		   (0x00011F0000001810ull)
+#define CVMX_NPI_PCI_CFG05		   (0x00011F0000001814ull)
+#define CVMX_NPI_PCI_CFG06		   (0x00011F0000001818ull)
+#define CVMX_NPI_PCI_CFG07		   (0x00011F000000181Cull)
+#define CVMX_NPI_PCI_CFG08		   (0x00011F0000001820ull)
+#define CVMX_NPI_PCI_CFG09		   (0x00011F0000001824ull)
+#define CVMX_NPI_PCI_CFG10		   (0x00011F0000001828ull)
+#define CVMX_NPI_PCI_CFG11		   (0x00011F000000182Cull)
+#define CVMX_NPI_PCI_CFG12		   (0x00011F0000001830ull)
+#define CVMX_NPI_PCI_CFG13		   (0x00011F0000001834ull)
+#define CVMX_NPI_PCI_CFG15		   (0x00011F000000183Cull)
+#define CVMX_NPI_PCI_CFG16		   (0x00011F0000001840ull)
+#define CVMX_NPI_PCI_CFG17		   (0x00011F0000001844ull)
+#define CVMX_NPI_PCI_CFG18		   (0x00011F0000001848ull)
+#define CVMX_NPI_PCI_CFG19		   (0x00011F000000184Cull)
+#define CVMX_NPI_PCI_CFG20		   (0x00011F0000001850ull)
+#define CVMX_NPI_PCI_CFG21		   (0x00011F0000001854ull)
+#define CVMX_NPI_PCI_CFG22		   (0x00011F0000001858ull)
+#define CVMX_NPI_PCI_CFG56		   (0x00011F00000018E0ull)
+#define CVMX_NPI_PCI_CFG57		   (0x00011F00000018E4ull)
+#define CVMX_NPI_PCI_CFG58		   (0x00011F00000018E8ull)
+#define CVMX_NPI_PCI_CFG59		   (0x00011F00000018ECull)
+#define CVMX_NPI_PCI_CFG60		   (0x00011F00000018F0ull)
+#define CVMX_NPI_PCI_CFG61		   (0x00011F00000018F4ull)
+#define CVMX_NPI_PCI_CFG62		   (0x00011F00000018F8ull)
+#define CVMX_NPI_PCI_CFG63		   (0x00011F00000018FCull)
+#define CVMX_NPI_PCI_CNT_REG		   (0x00011F00000011B8ull)
+#define CVMX_NPI_PCI_CTL_STATUS_2	   (0x00011F000000118Cull)
+#define CVMX_NPI_PCI_INT_ARB_CFG	   (0x00011F0000000130ull)
+#define CVMX_NPI_PCI_INT_ENB2		   (0x00011F00000011A0ull)
+#define CVMX_NPI_PCI_INT_SUM2		   (0x00011F0000001198ull)
+#define CVMX_NPI_PCI_READ_CMD		   (0x00011F0000000048ull)
+#define CVMX_NPI_PCI_READ_CMD_6		   (0x00011F0000001180ull)
+#define CVMX_NPI_PCI_READ_CMD_C		   (0x00011F0000001184ull)
+#define CVMX_NPI_PCI_READ_CMD_E		   (0x00011F0000001188ull)
+#define CVMX_NPI_PCI_SCM_REG		   (0x00011F00000011A8ull)
+#define CVMX_NPI_PCI_TSR_REG		   (0x00011F00000011B0ull)
+#define CVMX_NPI_PORT32_INSTR_HDR	   (0x00011F00000001F8ull)
+#define CVMX_NPI_PORT33_INSTR_HDR	   (0x00011F0000000200ull)
+#define CVMX_NPI_PORT34_INSTR_HDR	   (0x00011F0000000208ull)
+#define CVMX_NPI_PORT35_INSTR_HDR	   (0x00011F0000000210ull)
+#define CVMX_NPI_PORT_BP_CONTROL	   (0x00011F00000001F0ull)
+#define CVMX_NPI_PX_DBPAIR_ADDR(offset)	   (0x00011F0000000180ull + ((offset) & 3) * 8)
+#define CVMX_NPI_PX_INSTR_ADDR(offset)	   (0x00011F00000001C0ull + ((offset) & 3) * 8)
+#define CVMX_NPI_PX_INSTR_CNTS(offset)	   (0x00011F00000001A0ull + ((offset) & 3) * 8)
+#define CVMX_NPI_PX_PAIR_CNTS(offset)	   (0x00011F0000000160ull + ((offset) & 3) * 8)
+#define CVMX_NPI_RSL_INT_BLOCKS		   (0x00011F0000000000ull)
+#define CVMX_NPI_SIZE_INPUT0		   CVMX_NPI_SIZE_INPUTX(0)
+#define CVMX_NPI_SIZE_INPUT1		   CVMX_NPI_SIZE_INPUTX(1)
+#define CVMX_NPI_SIZE_INPUT2		   CVMX_NPI_SIZE_INPUTX(2)
+#define CVMX_NPI_SIZE_INPUT3		   CVMX_NPI_SIZE_INPUTX(3)
+#define CVMX_NPI_SIZE_INPUTX(offset)	   (0x00011F0000000078ull + ((offset) & 3) * 16)
+#define CVMX_NPI_WIN_READ_TO		   (0x00011F00000001E0ull)
+
+/**
+ * cvmx_npi_base_addr_input#
+ *
+ * NPI_BASE_ADDR_INPUT0 = NPI's Base Address Input 0 Register
+ *
+ * The address to start reading Instructions from for Input-0.
+ */
+union cvmx_npi_base_addr_inputx {
+	u64 u64;
+	struct cvmx_npi_base_addr_inputx_s {
+		u64 baddr : 61;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_npi_base_addr_inputx_s cn30xx;
+	struct cvmx_npi_base_addr_inputx_s cn31xx;
+	struct cvmx_npi_base_addr_inputx_s cn38xx;
+	struct cvmx_npi_base_addr_inputx_s cn38xxp2;
+	struct cvmx_npi_base_addr_inputx_s cn50xx;
+	struct cvmx_npi_base_addr_inputx_s cn58xx;
+	struct cvmx_npi_base_addr_inputx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_base_addr_inputx cvmx_npi_base_addr_inputx_t;
+
+/**
+ * cvmx_npi_base_addr_output#
+ *
+ * NPI_BASE_ADDR_OUTPUT0 = NPI's Base Address Output 0 Register
+ *
+ * The address to start reading Instructions from for Output-0.
+ */
+union cvmx_npi_base_addr_outputx {
+	u64 u64;
+	struct cvmx_npi_base_addr_outputx_s {
+		u64 baddr : 61;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_npi_base_addr_outputx_s cn30xx;
+	struct cvmx_npi_base_addr_outputx_s cn31xx;
+	struct cvmx_npi_base_addr_outputx_s cn38xx;
+	struct cvmx_npi_base_addr_outputx_s cn38xxp2;
+	struct cvmx_npi_base_addr_outputx_s cn50xx;
+	struct cvmx_npi_base_addr_outputx_s cn58xx;
+	struct cvmx_npi_base_addr_outputx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_base_addr_outputx cvmx_npi_base_addr_outputx_t;
+
+/**
+ * cvmx_npi_bist_status
+ *
+ * NPI_BIST_STATUS = NPI's BIST Status Register
+ *
+ * Results from BIST runs of NPI's memories.
+ */
+union cvmx_npi_bist_status {
+	u64 u64;
+	struct cvmx_npi_bist_status_s {
+		u64 reserved_20_63 : 44;
+		u64 csr_bs : 1;
+		u64 dif_bs : 1;
+		u64 rdp_bs : 1;
+		u64 pcnc_bs : 1;
+		u64 pcn_bs : 1;
+		u64 rdn_bs : 1;
+		u64 pcac_bs : 1;
+		u64 pcad_bs : 1;
+		u64 rdnl_bs : 1;
+		u64 pgf_bs : 1;
+		u64 pig_bs : 1;
+		u64 pof0_bs : 1;
+		u64 pof1_bs : 1;
+		u64 pof2_bs : 1;
+		u64 pof3_bs : 1;
+		u64 pos_bs : 1;
+		u64 nus_bs : 1;
+		u64 dob_bs : 1;
+		u64 pdf_bs : 1;
+		u64 dpi_bs : 1;
+	} s;
+	struct cvmx_npi_bist_status_cn30xx {
+		u64 reserved_20_63 : 44;
+		u64 csr_bs : 1;
+		u64 dif_bs : 1;
+		u64 rdp_bs : 1;
+		u64 pcnc_bs : 1;
+		u64 pcn_bs : 1;
+		u64 rdn_bs : 1;
+		u64 pcac_bs : 1;
+		u64 pcad_bs : 1;
+		u64 rdnl_bs : 1;
+		u64 pgf_bs : 1;
+		u64 pig_bs : 1;
+		u64 pof0_bs : 1;
+		u64 reserved_5_7 : 3;
+		u64 pos_bs : 1;
+		u64 nus_bs : 1;
+		u64 dob_bs : 1;
+		u64 pdf_bs : 1;
+		u64 dpi_bs : 1;
+	} cn30xx;
+	struct cvmx_npi_bist_status_s cn31xx;
+	struct cvmx_npi_bist_status_s cn38xx;
+	struct cvmx_npi_bist_status_s cn38xxp2;
+	struct cvmx_npi_bist_status_cn50xx {
+		u64 reserved_20_63 : 44;
+		u64 csr_bs : 1;
+		u64 dif_bs : 1;
+		u64 rdp_bs : 1;
+		u64 pcnc_bs : 1;
+		u64 pcn_bs : 1;
+		u64 rdn_bs : 1;
+		u64 pcac_bs : 1;
+		u64 pcad_bs : 1;
+		u64 rdnl_bs : 1;
+		u64 pgf_bs : 1;
+		u64 pig_bs : 1;
+		u64 pof0_bs : 1;
+		u64 pof1_bs : 1;
+		u64 reserved_5_6 : 2;
+		u64 pos_bs : 1;
+		u64 nus_bs : 1;
+		u64 dob_bs : 1;
+		u64 pdf_bs : 1;
+		u64 dpi_bs : 1;
+	} cn50xx;
+	struct cvmx_npi_bist_status_s cn58xx;
+	struct cvmx_npi_bist_status_s cn58xxp1;
+};
+
+typedef union cvmx_npi_bist_status cvmx_npi_bist_status_t;
+
+/**
+ * cvmx_npi_buff_size_output#
+ *
+ * NPI_BUFF_SIZE_OUTPUT0 = NPI's D/I Buffer Sizes For Output 0
+ *
+ * The size in bytes of the Data Buffer and Information Buffer for output 0.
+ */
+union cvmx_npi_buff_size_outputx {
+	u64 u64;
+	struct cvmx_npi_buff_size_outputx_s {
+		u64 reserved_23_63 : 41;
+		u64 isize : 7;
+		u64 bsize : 16;
+	} s;
+	struct cvmx_npi_buff_size_outputx_s cn30xx;
+	struct cvmx_npi_buff_size_outputx_s cn31xx;
+	struct cvmx_npi_buff_size_outputx_s cn38xx;
+	struct cvmx_npi_buff_size_outputx_s cn38xxp2;
+	struct cvmx_npi_buff_size_outputx_s cn50xx;
+	struct cvmx_npi_buff_size_outputx_s cn58xx;
+	struct cvmx_npi_buff_size_outputx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_buff_size_outputx cvmx_npi_buff_size_outputx_t;
+
+/**
+ * cvmx_npi_comp_ctl
+ *
+ * NPI_COMP_CTL = PCI Compensation Control
+ */
+union cvmx_npi_comp_ctl {
+	u64 u64;
+	struct cvmx_npi_comp_ctl_s {
+		u64 reserved_10_63 : 54;
+		u64 pctl : 5;
+		u64 nctl : 5;
+	} s;
+	struct cvmx_npi_comp_ctl_s cn50xx;
+	struct cvmx_npi_comp_ctl_s cn58xx;
+	struct cvmx_npi_comp_ctl_s cn58xxp1;
+};
+
+typedef union cvmx_npi_comp_ctl cvmx_npi_comp_ctl_t;
+
+/**
+ * cvmx_npi_ctl_status
+ *
+ * NPI_CTL_STATUS = NPI's Control Status Register
+ *
+ * Contains control and status for NPI.
+ * Writes to this register are not ordered with writes/reads to the PCI memory space.
+ * To ensure that a write has completed, the user must read the register before
+ * making an access (i.e. to PCI memory space) that requires the value of this register to be updated.
+ */
+union cvmx_npi_ctl_status {
+	u64 u64;
+	struct cvmx_npi_ctl_status_s {
+		u64 reserved_63_63 : 1;
+		u64 chip_rev : 8;
+		u64 dis_pniw : 1;
+		u64 out3_enb : 1;
+		u64 out2_enb : 1;
+		u64 out1_enb : 1;
+		u64 out0_enb : 1;
+		u64 ins3_enb : 1;
+		u64 ins2_enb : 1;
+		u64 ins1_enb : 1;
+		u64 ins0_enb : 1;
+		u64 ins3_64b : 1;
+		u64 ins2_64b : 1;
+		u64 ins1_64b : 1;
+		u64 ins0_64b : 1;
+		u64 pci_wdis : 1;
+		u64 wait_com : 1;
+		u64 reserved_37_39 : 3;
+		u64 max_word : 5;
+		u64 reserved_10_31 : 22;
+		u64 timer : 10;
+	} s;
+	struct cvmx_npi_ctl_status_cn30xx {
+		u64 reserved_63_63 : 1;
+		u64 chip_rev : 8;
+		u64 dis_pniw : 1;
+		u64 reserved_51_53 : 3;
+		u64 out0_enb : 1;
+		u64 reserved_47_49 : 3;
+		u64 ins0_enb : 1;
+		u64 reserved_43_45 : 3;
+		u64 ins0_64b : 1;
+		u64 pci_wdis : 1;
+		u64 wait_com : 1;
+		u64 reserved_37_39 : 3;
+		u64 max_word : 5;
+		u64 reserved_10_31 : 22;
+		u64 timer : 10;
+	} cn30xx;
+	struct cvmx_npi_ctl_status_cn31xx {
+		u64 reserved_63_63 : 1;
+		u64 chip_rev : 8;
+		u64 dis_pniw : 1;
+		u64 reserved_52_53 : 2;
+		u64 out1_enb : 1;
+		u64 out0_enb : 1;
+		u64 reserved_48_49 : 2;
+		u64 ins1_enb : 1;
+		u64 ins0_enb : 1;
+		u64 reserved_44_45 : 2;
+		u64 ins1_64b : 1;
+		u64 ins0_64b : 1;
+		u64 pci_wdis : 1;
+		u64 wait_com : 1;
+		u64 reserved_37_39 : 3;
+		u64 max_word : 5;
+		u64 reserved_10_31 : 22;
+		u64 timer : 10;
+	} cn31xx;
+	struct cvmx_npi_ctl_status_s cn38xx;
+	struct cvmx_npi_ctl_status_s cn38xxp2;
+	struct cvmx_npi_ctl_status_cn31xx cn50xx;
+	struct cvmx_npi_ctl_status_s cn58xx;
+	struct cvmx_npi_ctl_status_s cn58xxp1;
+};
+
+typedef union cvmx_npi_ctl_status cvmx_npi_ctl_status_t;
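+
+/*
+ * Usage sketch (editor's addition): per the ordering note above, a write
+ * to NPI_CTL_STATUS should be read back before issuing a PCI memory
+ * access that depends on the new value. csr_rd()/csr_wr() are assumed to
+ * be the usual U-Boot Octeon CSR helpers:
+ *
+ *	cvmx_npi_ctl_status_t ctl;
+ *
+ *	ctl.u64 = csr_rd(CVMX_NPI_CTL_STATUS);
+ *	ctl.s.out0_enb = 1;
+ *	csr_wr(CVMX_NPI_CTL_STATUS, ctl.u64);
+ *	csr_rd(CVMX_NPI_CTL_STATUS);
+ *
+ * The final read ensures the write has completed before any dependent
+ * PCI memory space access is issued.
+ */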
+
+/**
+ * cvmx_npi_dbg_select
+ *
+ * NPI_DBG_SELECT = Debug Select Register
+ *
+ * Contains the debug select value last written to the RSLs.
+ */
+union cvmx_npi_dbg_select {
+	u64 u64;
+	struct cvmx_npi_dbg_select_s {
+		u64 reserved_16_63 : 48;
+		u64 dbg_sel : 16;
+	} s;
+	struct cvmx_npi_dbg_select_s cn30xx;
+	struct cvmx_npi_dbg_select_s cn31xx;
+	struct cvmx_npi_dbg_select_s cn38xx;
+	struct cvmx_npi_dbg_select_s cn38xxp2;
+	struct cvmx_npi_dbg_select_s cn50xx;
+	struct cvmx_npi_dbg_select_s cn58xx;
+	struct cvmx_npi_dbg_select_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dbg_select cvmx_npi_dbg_select_t;
+
+/**
+ * cvmx_npi_dma_control
+ *
+ * NPI_DMA_CONTROL = DMA Control Register
+ *
+ * Controls operation of the DMA IN/OUT of the NPI.
+ */
+union cvmx_npi_dma_control {
+	u64 u64;
+	struct cvmx_npi_dma_control_s {
+		u64 reserved_36_63 : 28;
+		u64 b0_lend : 1;
+		u64 dwb_denb : 1;
+		u64 dwb_ichk : 9;
+		u64 fpa_que : 3;
+		u64 o_add1 : 1;
+		u64 o_ro : 1;
+		u64 o_ns : 1;
+		u64 o_es : 2;
+		u64 o_mode : 1;
+		u64 hp_enb : 1;
+		u64 lp_enb : 1;
+		u64 csize : 14;
+	} s;
+	struct cvmx_npi_dma_control_s cn30xx;
+	struct cvmx_npi_dma_control_s cn31xx;
+	struct cvmx_npi_dma_control_s cn38xx;
+	struct cvmx_npi_dma_control_s cn38xxp2;
+	struct cvmx_npi_dma_control_s cn50xx;
+	struct cvmx_npi_dma_control_s cn58xx;
+	struct cvmx_npi_dma_control_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dma_control cvmx_npi_dma_control_t;
+
+/**
+ * cvmx_npi_dma_highp_counts
+ *
+ * NPI_DMA_HIGHP_COUNTS = NPI's High Priority DMA Counts
+ *
+ * Values for determining the number of instructions for High Priority DMA in the NPI.
+ */
+union cvmx_npi_dma_highp_counts {
+	u64 u64;
+	struct cvmx_npi_dma_highp_counts_s {
+		u64 reserved_39_63 : 25;
+		u64 fcnt : 7;
+		u64 dbell : 32;
+	} s;
+	struct cvmx_npi_dma_highp_counts_s cn30xx;
+	struct cvmx_npi_dma_highp_counts_s cn31xx;
+	struct cvmx_npi_dma_highp_counts_s cn38xx;
+	struct cvmx_npi_dma_highp_counts_s cn38xxp2;
+	struct cvmx_npi_dma_highp_counts_s cn50xx;
+	struct cvmx_npi_dma_highp_counts_s cn58xx;
+	struct cvmx_npi_dma_highp_counts_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dma_highp_counts cvmx_npi_dma_highp_counts_t;
+
+/**
+ * cvmx_npi_dma_highp_naddr
+ *
+ * NPI_DMA_HIGHP_NADDR = NPI's High Priority DMA Next Ichunk Address
+ *
+ * The address from which NPI will read the next Ichunk data. This is valid when STATE is 0.
+ */
+union cvmx_npi_dma_highp_naddr {
+	u64 u64;
+	struct cvmx_npi_dma_highp_naddr_s {
+		u64 reserved_40_63 : 24;
+		u64 state : 4;
+		u64 addr : 36;
+	} s;
+	struct cvmx_npi_dma_highp_naddr_s cn30xx;
+	struct cvmx_npi_dma_highp_naddr_s cn31xx;
+	struct cvmx_npi_dma_highp_naddr_s cn38xx;
+	struct cvmx_npi_dma_highp_naddr_s cn38xxp2;
+	struct cvmx_npi_dma_highp_naddr_s cn50xx;
+	struct cvmx_npi_dma_highp_naddr_s cn58xx;
+	struct cvmx_npi_dma_highp_naddr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dma_highp_naddr cvmx_npi_dma_highp_naddr_t;
+
+/**
+ * cvmx_npi_dma_lowp_counts
+ *
+ * NPI_DMA_LOWP_COUNTS = NPI's Low Priority DMA Counts
+ *
+ * Values for determining the number of instructions for Low Priority DMA in the NPI.
+ */
+union cvmx_npi_dma_lowp_counts {
+	u64 u64;
+	struct cvmx_npi_dma_lowp_counts_s {
+		u64 reserved_39_63 : 25;
+		u64 fcnt : 7;
+		u64 dbell : 32;
+	} s;
+	struct cvmx_npi_dma_lowp_counts_s cn30xx;
+	struct cvmx_npi_dma_lowp_counts_s cn31xx;
+	struct cvmx_npi_dma_lowp_counts_s cn38xx;
+	struct cvmx_npi_dma_lowp_counts_s cn38xxp2;
+	struct cvmx_npi_dma_lowp_counts_s cn50xx;
+	struct cvmx_npi_dma_lowp_counts_s cn58xx;
+	struct cvmx_npi_dma_lowp_counts_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dma_lowp_counts cvmx_npi_dma_lowp_counts_t;
+
+/**
+ * cvmx_npi_dma_lowp_naddr
+ *
+ * NPI_DMA_LOWP_NADDR = NPI's Low Priority DMA Next Ichunk Address
+ *
+ * The address from which NPI will read the next Ichunk data. This is valid when STATE is 0.
+ */
+union cvmx_npi_dma_lowp_naddr {
+	u64 u64;
+	struct cvmx_npi_dma_lowp_naddr_s {
+		u64 reserved_40_63 : 24;
+		u64 state : 4;
+		u64 addr : 36;
+	} s;
+	struct cvmx_npi_dma_lowp_naddr_s cn30xx;
+	struct cvmx_npi_dma_lowp_naddr_s cn31xx;
+	struct cvmx_npi_dma_lowp_naddr_s cn38xx;
+	struct cvmx_npi_dma_lowp_naddr_s cn38xxp2;
+	struct cvmx_npi_dma_lowp_naddr_s cn50xx;
+	struct cvmx_npi_dma_lowp_naddr_s cn58xx;
+	struct cvmx_npi_dma_lowp_naddr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_dma_lowp_naddr cvmx_npi_dma_lowp_naddr_t;
+
+/**
+ * cvmx_npi_highp_dbell
+ *
+ * NPI_HIGHP_DBELL = High Priority Door Bell
+ *
+ * The door bell register for the high priority DMA queue.
+ */
+union cvmx_npi_highp_dbell {
+	u64 u64;
+	struct cvmx_npi_highp_dbell_s {
+		u64 reserved_16_63 : 48;
+		u64 dbell : 16;
+	} s;
+	struct cvmx_npi_highp_dbell_s cn30xx;
+	struct cvmx_npi_highp_dbell_s cn31xx;
+	struct cvmx_npi_highp_dbell_s cn38xx;
+	struct cvmx_npi_highp_dbell_s cn38xxp2;
+	struct cvmx_npi_highp_dbell_s cn50xx;
+	struct cvmx_npi_highp_dbell_s cn58xx;
+	struct cvmx_npi_highp_dbell_s cn58xxp1;
+};
+
+typedef union cvmx_npi_highp_dbell cvmx_npi_highp_dbell_t;
+
+/**
+ * cvmx_npi_highp_ibuff_saddr
+ *
+ * NPI_HIGHP_IBUFF_SADDR = DMA High Priority Instruction Buffer Starting Address
+ *
+ * The address to start reading Instructions from for HIGHP.
+ */
+union cvmx_npi_highp_ibuff_saddr {
+	u64 u64;
+	struct cvmx_npi_highp_ibuff_saddr_s {
+		u64 reserved_36_63 : 28;
+		u64 saddr : 36;
+	} s;
+	struct cvmx_npi_highp_ibuff_saddr_s cn30xx;
+	struct cvmx_npi_highp_ibuff_saddr_s cn31xx;
+	struct cvmx_npi_highp_ibuff_saddr_s cn38xx;
+	struct cvmx_npi_highp_ibuff_saddr_s cn38xxp2;
+	struct cvmx_npi_highp_ibuff_saddr_s cn50xx;
+	struct cvmx_npi_highp_ibuff_saddr_s cn58xx;
+	struct cvmx_npi_highp_ibuff_saddr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_highp_ibuff_saddr cvmx_npi_highp_ibuff_saddr_t;
+
+/**
+ * cvmx_npi_input_control
+ *
+ * NPI_INPUT_CONTROL = NPI's Input Control Register
+ *
+ * Controls reads for the gather list and instructions.
+ */
+union cvmx_npi_input_control {
+	u64 u64;
+	struct cvmx_npi_input_control_s {
+		u64 reserved_23_63 : 41;
+		u64 pkt_rr : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} s;
+	struct cvmx_npi_input_control_cn30xx {
+		u64 reserved_22_63 : 42;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} cn30xx;
+	struct cvmx_npi_input_control_cn30xx cn31xx;
+	struct cvmx_npi_input_control_s cn38xx;
+	struct cvmx_npi_input_control_cn30xx cn38xxp2;
+	struct cvmx_npi_input_control_s cn50xx;
+	struct cvmx_npi_input_control_s cn58xx;
+	struct cvmx_npi_input_control_s cn58xxp1;
+};
+
+typedef union cvmx_npi_input_control cvmx_npi_input_control_t;
+
+/**
+ * cvmx_npi_int_enb
+ *
+ * NPI_INTERRUPT_ENB = NPI's Interrupt Enable Register
+ *
+ * Used to enable the various interrupting conditions of NPI.
+ */
+union cvmx_npi_int_enb {
+	u64 u64;
+	struct cvmx_npi_int_enb_s {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 i3_pperr : 1;
+		u64 i2_pperr : 1;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 p3_ptout : 1;
+		u64 p2_ptout : 1;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 p3_pperr : 1;
+		u64 p2_pperr : 1;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 g3_rtout : 1;
+		u64 g2_rtout : 1;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 p3_perr : 1;
+		u64 p2_perr : 1;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 p3_rtout : 1;
+		u64 p2_rtout : 1;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 i3_overf : 1;
+		u64 i2_overf : 1;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 i3_rtout : 1;
+		u64 i2_rtout : 1;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 po3_2sml : 1;
+		u64 po2_2sml : 1;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} s;
+	struct cvmx_npi_int_enb_cn30xx {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 reserved_36_38 : 3;
+		u64 i0_pperr : 1;
+		u64 reserved_32_34 : 3;
+		u64 p0_ptout : 1;
+		u64 reserved_28_30 : 3;
+		u64 p0_pperr : 1;
+		u64 reserved_24_26 : 3;
+		u64 g0_rtout : 1;
+		u64 reserved_20_22 : 3;
+		u64 p0_perr : 1;
+		u64 reserved_16_18 : 3;
+		u64 p0_rtout : 1;
+		u64 reserved_12_14 : 3;
+		u64 i0_overf : 1;
+		u64 reserved_8_10 : 3;
+		u64 i0_rtout : 1;
+		u64 reserved_4_6 : 3;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn30xx;
+	struct cvmx_npi_int_enb_cn31xx {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 reserved_37_38 : 2;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 reserved_33_34 : 2;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 reserved_29_30 : 2;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 reserved_25_26 : 2;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 reserved_21_22 : 2;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 reserved_17_18 : 2;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 reserved_13_14 : 2;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 reserved_9_10 : 2;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 reserved_5_6 : 2;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn31xx;
+	struct cvmx_npi_int_enb_s cn38xx;
+	struct cvmx_npi_int_enb_cn38xxp2 {
+		u64 reserved_42_63 : 22;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 i3_pperr : 1;
+		u64 i2_pperr : 1;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 p3_ptout : 1;
+		u64 p2_ptout : 1;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 p3_pperr : 1;
+		u64 p2_pperr : 1;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 g3_rtout : 1;
+		u64 g2_rtout : 1;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 p3_perr : 1;
+		u64 p2_perr : 1;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 p3_rtout : 1;
+		u64 p2_rtout : 1;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 i3_overf : 1;
+		u64 i2_overf : 1;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 i3_rtout : 1;
+		u64 i2_rtout : 1;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 po3_2sml : 1;
+		u64 po2_2sml : 1;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn38xxp2;
+	struct cvmx_npi_int_enb_cn31xx cn50xx;
+	struct cvmx_npi_int_enb_s cn58xx;
+	struct cvmx_npi_int_enb_s cn58xxp1;
+};
+
+typedef union cvmx_npi_int_enb cvmx_npi_int_enb_t;
+
+/**
+ * cvmx_npi_int_sum
+ *
+ * NPI_INTERRUPT_SUM = NPI Interrupt Summary Register
+ *
+ * Set when an interrupt condition occurs; write '1' to clear.
+ */
+union cvmx_npi_int_sum {
+	u64 u64;
+	struct cvmx_npi_int_sum_s {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 i3_pperr : 1;
+		u64 i2_pperr : 1;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 p3_ptout : 1;
+		u64 p2_ptout : 1;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 p3_pperr : 1;
+		u64 p2_pperr : 1;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 g3_rtout : 1;
+		u64 g2_rtout : 1;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 p3_perr : 1;
+		u64 p2_perr : 1;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 p3_rtout : 1;
+		u64 p2_rtout : 1;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 i3_overf : 1;
+		u64 i2_overf : 1;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 i3_rtout : 1;
+		u64 i2_rtout : 1;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 po3_2sml : 1;
+		u64 po2_2sml : 1;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} s;
+	struct cvmx_npi_int_sum_cn30xx {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 reserved_36_38 : 3;
+		u64 i0_pperr : 1;
+		u64 reserved_32_34 : 3;
+		u64 p0_ptout : 1;
+		u64 reserved_28_30 : 3;
+		u64 p0_pperr : 1;
+		u64 reserved_24_26 : 3;
+		u64 g0_rtout : 1;
+		u64 reserved_20_22 : 3;
+		u64 p0_perr : 1;
+		u64 reserved_16_18 : 3;
+		u64 p0_rtout : 1;
+		u64 reserved_12_14 : 3;
+		u64 i0_overf : 1;
+		u64 reserved_8_10 : 3;
+		u64 i0_rtout : 1;
+		u64 reserved_4_6 : 3;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn30xx;
+	struct cvmx_npi_int_sum_cn31xx {
+		u64 reserved_62_63 : 2;
+		u64 q1_a_f : 1;
+		u64 q1_s_e : 1;
+		u64 pdf_p_f : 1;
+		u64 pdf_p_e : 1;
+		u64 pcf_p_f : 1;
+		u64 pcf_p_e : 1;
+		u64 rdx_s_e : 1;
+		u64 rwx_s_e : 1;
+		u64 pnc_a_f : 1;
+		u64 pnc_s_e : 1;
+		u64 com_a_f : 1;
+		u64 com_s_e : 1;
+		u64 q3_a_f : 1;
+		u64 q3_s_e : 1;
+		u64 q2_a_f : 1;
+		u64 q2_s_e : 1;
+		u64 pcr_a_f : 1;
+		u64 pcr_s_e : 1;
+		u64 fcr_a_f : 1;
+		u64 fcr_s_e : 1;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 reserved_37_38 : 2;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 reserved_33_34 : 2;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 reserved_29_30 : 2;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 reserved_25_26 : 2;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 reserved_21_22 : 2;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 reserved_17_18 : 2;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 reserved_13_14 : 2;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 reserved_9_10 : 2;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 reserved_5_6 : 2;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn31xx;
+	struct cvmx_npi_int_sum_s cn38xx;
+	struct cvmx_npi_int_sum_cn38xxp2 {
+		u64 reserved_42_63 : 22;
+		u64 iobdma : 1;
+		u64 p_dperr : 1;
+		u64 win_rto : 1;
+		u64 i3_pperr : 1;
+		u64 i2_pperr : 1;
+		u64 i1_pperr : 1;
+		u64 i0_pperr : 1;
+		u64 p3_ptout : 1;
+		u64 p2_ptout : 1;
+		u64 p1_ptout : 1;
+		u64 p0_ptout : 1;
+		u64 p3_pperr : 1;
+		u64 p2_pperr : 1;
+		u64 p1_pperr : 1;
+		u64 p0_pperr : 1;
+		u64 g3_rtout : 1;
+		u64 g2_rtout : 1;
+		u64 g1_rtout : 1;
+		u64 g0_rtout : 1;
+		u64 p3_perr : 1;
+		u64 p2_perr : 1;
+		u64 p1_perr : 1;
+		u64 p0_perr : 1;
+		u64 p3_rtout : 1;
+		u64 p2_rtout : 1;
+		u64 p1_rtout : 1;
+		u64 p0_rtout : 1;
+		u64 i3_overf : 1;
+		u64 i2_overf : 1;
+		u64 i1_overf : 1;
+		u64 i0_overf : 1;
+		u64 i3_rtout : 1;
+		u64 i2_rtout : 1;
+		u64 i1_rtout : 1;
+		u64 i0_rtout : 1;
+		u64 po3_2sml : 1;
+		u64 po2_2sml : 1;
+		u64 po1_2sml : 1;
+		u64 po0_2sml : 1;
+		u64 pci_rsl : 1;
+		u64 rml_wto : 1;
+		u64 rml_rto : 1;
+	} cn38xxp2;
+	struct cvmx_npi_int_sum_cn31xx cn50xx;
+	struct cvmx_npi_int_sum_s cn58xx;
+	struct cvmx_npi_int_sum_s cn58xxp1;
+};
+
+typedef union cvmx_npi_int_sum cvmx_npi_int_sum_t;
+
+/**
+ * cvmx_npi_lowp_dbell
+ *
+ * NPI_LOWP_DBELL = Low Priority Door Bell
+ *
+ * The door bell register for the low priority DMA queue.
+ */
+union cvmx_npi_lowp_dbell {
+	u64 u64;
+	struct cvmx_npi_lowp_dbell_s {
+		u64 reserved_16_63 : 48;
+		u64 dbell : 16;
+	} s;
+	struct cvmx_npi_lowp_dbell_s cn30xx;
+	struct cvmx_npi_lowp_dbell_s cn31xx;
+	struct cvmx_npi_lowp_dbell_s cn38xx;
+	struct cvmx_npi_lowp_dbell_s cn38xxp2;
+	struct cvmx_npi_lowp_dbell_s cn50xx;
+	struct cvmx_npi_lowp_dbell_s cn58xx;
+	struct cvmx_npi_lowp_dbell_s cn58xxp1;
+};
+
+typedef union cvmx_npi_lowp_dbell cvmx_npi_lowp_dbell_t;
+
+/**
+ * cvmx_npi_lowp_ibuff_saddr
+ *
+ * NPI_LOWP_IBUFF_SADDR = DMA Low Priority's Instruction Buffer Starting Address
+ *
+ * The address to start reading Instructions from for LOWP.
+ */
+union cvmx_npi_lowp_ibuff_saddr {
+	u64 u64;
+	struct cvmx_npi_lowp_ibuff_saddr_s {
+		u64 reserved_36_63 : 28;
+		u64 saddr : 36;
+	} s;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn30xx;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn31xx;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn38xx;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn38xxp2;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn50xx;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn58xx;
+	struct cvmx_npi_lowp_ibuff_saddr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_lowp_ibuff_saddr cvmx_npi_lowp_ibuff_saddr_t;
+
+/**
+ * cvmx_npi_mem_access_subid#
+ *
+ * NPI_MEM_ACCESS_SUBID3 = Memory Access SubId 3 Register
+ *
+ * Carries read/write parameters for PP accesses to PCI memory that use NPI SubId3.
+ * Writes to this register are not ordered with writes/reads to the PCI memory space.
+ * To ensure that a write has completed, the user must read the register before
+ * making an access (i.e. to PCI memory space) that requires the value of this register to be updated.
+ */
+union cvmx_npi_mem_access_subidx {
+	u64 u64;
+	struct cvmx_npi_mem_access_subidx_s {
+		u64 reserved_38_63 : 26;
+		u64 shortl : 1;
+		u64 nmerge : 1;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 nsr : 1;
+		u64 nsw : 1;
+		u64 ror : 1;
+		u64 row : 1;
+		u64 ba : 28;
+	} s;
+	struct cvmx_npi_mem_access_subidx_s cn30xx;
+	struct cvmx_npi_mem_access_subidx_cn31xx {
+		u64 reserved_36_63 : 28;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 nsr : 1;
+		u64 nsw : 1;
+		u64 ror : 1;
+		u64 row : 1;
+		u64 ba : 28;
+	} cn31xx;
+	struct cvmx_npi_mem_access_subidx_s cn38xx;
+	struct cvmx_npi_mem_access_subidx_cn31xx cn38xxp2;
+	struct cvmx_npi_mem_access_subidx_s cn50xx;
+	struct cvmx_npi_mem_access_subidx_s cn58xx;
+	struct cvmx_npi_mem_access_subidx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_mem_access_subidx cvmx_npi_mem_access_subidx_t;
+
+/**
+ * cvmx_npi_msi_rcv
+ *
+ * NPI_MSI_RCV = NPI MSI Receive Vector Register
+ *
+ * A bit is set in this register corresponding to the vector received during an MSI, and is cleared by writing a '1' to it.
+ */
+union cvmx_npi_msi_rcv {
+	u64 u64;
+	struct cvmx_npi_msi_rcv_s {
+		u64 int_vec : 64;
+	} s;
+	struct cvmx_npi_msi_rcv_s cn30xx;
+	struct cvmx_npi_msi_rcv_s cn31xx;
+	struct cvmx_npi_msi_rcv_s cn38xx;
+	struct cvmx_npi_msi_rcv_s cn38xxp2;
+	struct cvmx_npi_msi_rcv_s cn50xx;
+	struct cvmx_npi_msi_rcv_s cn58xx;
+	struct cvmx_npi_msi_rcv_s cn58xxp1;
+};
+
+typedef union cvmx_npi_msi_rcv cvmx_npi_msi_rcv_t;
+
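+/*
+ * Illustrative sketch (not from the original header): since NPI_MSI_RCV is
+ * write-1-to-clear, an MSI handler would typically read the pending vector
+ * bits and write the same value back to acknowledge them. Assuming the
+ * CVMX_NPI_MSI_RCV address macro defined earlier in this file:
+ *
+ *	u64 pending = cvmx_read_csr(CVMX_NPI_MSI_RCV);
+ *	// ...dispatch on the bits set in "pending"...
+ *	cvmx_write_csr(CVMX_NPI_MSI_RCV, pending);	// W1C acknowledge
+ */
+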
+/**
+ * cvmx_npi_num_desc_output#
+ *
+ * NUM_DESC_OUTPUT0 = Number Of Descriptors Available For Output 0
+ *
+ * The size of the Buffer/Info Pointer Pair ring for Output-0.
+ */
+union cvmx_npi_num_desc_outputx {
+	u64 u64;
+	struct cvmx_npi_num_desc_outputx_s {
+		u64 reserved_32_63 : 32;
+		u64 size : 32;
+	} s;
+	struct cvmx_npi_num_desc_outputx_s cn30xx;
+	struct cvmx_npi_num_desc_outputx_s cn31xx;
+	struct cvmx_npi_num_desc_outputx_s cn38xx;
+	struct cvmx_npi_num_desc_outputx_s cn38xxp2;
+	struct cvmx_npi_num_desc_outputx_s cn50xx;
+	struct cvmx_npi_num_desc_outputx_s cn58xx;
+	struct cvmx_npi_num_desc_outputx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_num_desc_outputx cvmx_npi_num_desc_outputx_t;
+
+/**
+ * cvmx_npi_output_control
+ *
+ * NPI_OUTPUT_CONTROL = NPI's Output Control Register
+ *
+ * Controls the operation of the NPI packet output ports.
+ */
+union cvmx_npi_output_control {
+	u64 u64;
+	struct cvmx_npi_output_control_s {
+		u64 reserved_49_63 : 15;
+		u64 pkt_rr : 1;
+		u64 p3_bmode : 1;
+		u64 p2_bmode : 1;
+		u64 p1_bmode : 1;
+		u64 p0_bmode : 1;
+		u64 o3_es : 2;
+		u64 o3_ns : 1;
+		u64 o3_ro : 1;
+		u64 o2_es : 2;
+		u64 o2_ns : 1;
+		u64 o2_ro : 1;
+		u64 o1_es : 2;
+		u64 o1_ns : 1;
+		u64 o1_ro : 1;
+		u64 o0_es : 2;
+		u64 o0_ns : 1;
+		u64 o0_ro : 1;
+		u64 o3_csrm : 1;
+		u64 o2_csrm : 1;
+		u64 o1_csrm : 1;
+		u64 o0_csrm : 1;
+		u64 reserved_20_23 : 4;
+		u64 iptr_o3 : 1;
+		u64 iptr_o2 : 1;
+		u64 iptr_o1 : 1;
+		u64 iptr_o0 : 1;
+		u64 esr_sl3 : 2;
+		u64 nsr_sl3 : 1;
+		u64 ror_sl3 : 1;
+		u64 esr_sl2 : 2;
+		u64 nsr_sl2 : 1;
+		u64 ror_sl2 : 1;
+		u64 esr_sl1 : 2;
+		u64 nsr_sl1 : 1;
+		u64 ror_sl1 : 1;
+		u64 esr_sl0 : 2;
+		u64 nsr_sl0 : 1;
+		u64 ror_sl0 : 1;
+	} s;
+	struct cvmx_npi_output_control_cn30xx {
+		u64 reserved_45_63 : 19;
+		u64 p0_bmode : 1;
+		u64 reserved_32_43 : 12;
+		u64 o0_es : 2;
+		u64 o0_ns : 1;
+		u64 o0_ro : 1;
+		u64 reserved_25_27 : 3;
+		u64 o0_csrm : 1;
+		u64 reserved_17_23 : 7;
+		u64 iptr_o0 : 1;
+		u64 reserved_4_15 : 12;
+		u64 esr_sl0 : 2;
+		u64 nsr_sl0 : 1;
+		u64 ror_sl0 : 1;
+	} cn30xx;
+	struct cvmx_npi_output_control_cn31xx {
+		u64 reserved_46_63 : 18;
+		u64 p1_bmode : 1;
+		u64 p0_bmode : 1;
+		u64 reserved_36_43 : 8;
+		u64 o1_es : 2;
+		u64 o1_ns : 1;
+		u64 o1_ro : 1;
+		u64 o0_es : 2;
+		u64 o0_ns : 1;
+		u64 o0_ro : 1;
+		u64 reserved_26_27 : 2;
+		u64 o1_csrm : 1;
+		u64 o0_csrm : 1;
+		u64 reserved_18_23 : 6;
+		u64 iptr_o1 : 1;
+		u64 iptr_o0 : 1;
+		u64 reserved_8_15 : 8;
+		u64 esr_sl1 : 2;
+		u64 nsr_sl1 : 1;
+		u64 ror_sl1 : 1;
+		u64 esr_sl0 : 2;
+		u64 nsr_sl0 : 1;
+		u64 ror_sl0 : 1;
+	} cn31xx;
+	struct cvmx_npi_output_control_s cn38xx;
+	struct cvmx_npi_output_control_cn38xxp2 {
+		u64 reserved_48_63 : 16;
+		u64 p3_bmode : 1;
+		u64 p2_bmode : 1;
+		u64 p1_bmode : 1;
+		u64 p0_bmode : 1;
+		u64 o3_es : 2;
+		u64 o3_ns : 1;
+		u64 o3_ro : 1;
+		u64 o2_es : 2;
+		u64 o2_ns : 1;
+		u64 o2_ro : 1;
+		u64 o1_es : 2;
+		u64 o1_ns : 1;
+		u64 o1_ro : 1;
+		u64 o0_es : 2;
+		u64 o0_ns : 1;
+		u64 o0_ro : 1;
+		u64 o3_csrm : 1;
+		u64 o2_csrm : 1;
+		u64 o1_csrm : 1;
+		u64 o0_csrm : 1;
+		u64 reserved_20_23 : 4;
+		u64 iptr_o3 : 1;
+		u64 iptr_o2 : 1;
+		u64 iptr_o1 : 1;
+		u64 iptr_o0 : 1;
+		u64 esr_sl3 : 2;
+		u64 nsr_sl3 : 1;
+		u64 ror_sl3 : 1;
+		u64 esr_sl2 : 2;
+		u64 nsr_sl2 : 1;
+		u64 ror_sl2 : 1;
+		u64 esr_sl1 : 2;
+		u64 nsr_sl1 : 1;
+		u64 ror_sl1 : 1;
+		u64 esr_sl0 : 2;
+		u64 nsr_sl0 : 1;
+		u64 ror_sl0 : 1;
+	} cn38xxp2;
+	struct cvmx_npi_output_control_cn50xx {
+		u64 reserved_49_63 : 15;
+		u64 pkt_rr : 1;
+		u64 reserved_46_47 : 2;
+		u64 p1_bmode : 1;
+		u64 p0_bmode : 1;
+		u64 reserved_36_43 : 8;
+		u64 o1_es : 2;
+		u64 o1_ns : 1;
+		u64 o1_ro : 1;
+		u64 o0_es : 2;
+		u64 o0_ns : 1;
+		u64 o0_ro : 1;
+		u64 reserved_26_27 : 2;
+		u64 o1_csrm : 1;
+		u64 o0_csrm : 1;
+		u64 reserved_18_23 : 6;
+		u64 iptr_o1 : 1;
+		u64 iptr_o0 : 1;
+		u64 reserved_8_15 : 8;
+		u64 esr_sl1 : 2;
+		u64 nsr_sl1 : 1;
+		u64 ror_sl1 : 1;
+		u64 esr_sl0 : 2;
+		u64 nsr_sl0 : 1;
+		u64 ror_sl0 : 1;
+	} cn50xx;
+	struct cvmx_npi_output_control_s cn58xx;
+	struct cvmx_npi_output_control_s cn58xxp1;
+};
+
+typedef union cvmx_npi_output_control cvmx_npi_output_control_t;
+
+/**
+ * cvmx_npi_p#_dbpair_addr
+ *
+ * NPI_P0_DBPAIR_ADDR = NPI's Port-0 DATA-BUFFER Pair Next Read Address.
+ *
+ * Contains the next address to read for Port-0's Data/Buffer Pair.
+ */
+union cvmx_npi_px_dbpair_addr {
+	u64 u64;
+	struct cvmx_npi_px_dbpair_addr_s {
+		u64 reserved_63_63 : 1;
+		u64 state : 2;
+		u64 naddr : 61;
+	} s;
+	struct cvmx_npi_px_dbpair_addr_s cn30xx;
+	struct cvmx_npi_px_dbpair_addr_s cn31xx;
+	struct cvmx_npi_px_dbpair_addr_s cn38xx;
+	struct cvmx_npi_px_dbpair_addr_s cn38xxp2;
+	struct cvmx_npi_px_dbpair_addr_s cn50xx;
+	struct cvmx_npi_px_dbpair_addr_s cn58xx;
+	struct cvmx_npi_px_dbpair_addr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_px_dbpair_addr cvmx_npi_px_dbpair_addr_t;
+
+/**
+ * cvmx_npi_p#_instr_addr
+ *
+ * NPI_P0_INSTR_ADDR = NPI's Port-0 Instruction Next Read Address.
+ *
+ * Contains the next address to read for Port-0's Instructions.
+ */
+union cvmx_npi_px_instr_addr {
+	u64 u64;
+	struct cvmx_npi_px_instr_addr_s {
+		u64 state : 3;
+		u64 naddr : 61;
+	} s;
+	struct cvmx_npi_px_instr_addr_s cn30xx;
+	struct cvmx_npi_px_instr_addr_s cn31xx;
+	struct cvmx_npi_px_instr_addr_s cn38xx;
+	struct cvmx_npi_px_instr_addr_s cn38xxp2;
+	struct cvmx_npi_px_instr_addr_s cn50xx;
+	struct cvmx_npi_px_instr_addr_s cn58xx;
+	struct cvmx_npi_px_instr_addr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_px_instr_addr cvmx_npi_px_instr_addr_t;
+
+/**
+ * cvmx_npi_p#_instr_cnts
+ *
+ * NPI_P0_INSTR_CNTS = NPI's Port-0 Instruction Counts For Packets In.
+ *
+ * Used to determine the number of instructions in the NPI and still to be fetched for Input-Packets.
+ */
+union cvmx_npi_px_instr_cnts {
+	u64 u64;
+	struct cvmx_npi_px_instr_cnts_s {
+		u64 reserved_38_63 : 26;
+		u64 fcnt : 6;
+		u64 avail : 32;
+	} s;
+	struct cvmx_npi_px_instr_cnts_s cn30xx;
+	struct cvmx_npi_px_instr_cnts_s cn31xx;
+	struct cvmx_npi_px_instr_cnts_s cn38xx;
+	struct cvmx_npi_px_instr_cnts_s cn38xxp2;
+	struct cvmx_npi_px_instr_cnts_s cn50xx;
+	struct cvmx_npi_px_instr_cnts_s cn58xx;
+	struct cvmx_npi_px_instr_cnts_s cn58xxp1;
+};
+
+typedef union cvmx_npi_px_instr_cnts cvmx_npi_px_instr_cnts_t;
+
+/**
+ * cvmx_npi_p#_pair_cnts
+ *
+ * NPI_P0_PAIR_CNTS = NPI's Port-0 Instruction Counts For Packets Out.
+ *
+ * Used to determine the number of instructions in the NPI and still to be fetched for Output-Packets.
+ */
+union cvmx_npi_px_pair_cnts {
+	u64 u64;
+	struct cvmx_npi_px_pair_cnts_s {
+		u64 reserved_37_63 : 27;
+		u64 fcnt : 5;
+		u64 avail : 32;
+	} s;
+	struct cvmx_npi_px_pair_cnts_s cn30xx;
+	struct cvmx_npi_px_pair_cnts_s cn31xx;
+	struct cvmx_npi_px_pair_cnts_s cn38xx;
+	struct cvmx_npi_px_pair_cnts_s cn38xxp2;
+	struct cvmx_npi_px_pair_cnts_s cn50xx;
+	struct cvmx_npi_px_pair_cnts_s cn58xx;
+	struct cvmx_npi_px_pair_cnts_s cn58xxp1;
+};
+
+typedef union cvmx_npi_px_pair_cnts cvmx_npi_px_pair_cnts_t;
+
+/**
+ * cvmx_npi_pci_burst_size
+ *
+ * NPI_PCI_BURST_SIZE = NPI PCI Burst Size Register
+ *
+ * Controls the number of words the NPI will attempt to read from / write to the PCI bus.
+ */
+union cvmx_npi_pci_burst_size {
+	u64 u64;
+	struct cvmx_npi_pci_burst_size_s {
+		u64 reserved_14_63 : 50;
+		u64 wr_brst : 7;
+		u64 rd_brst : 7;
+	} s;
+	struct cvmx_npi_pci_burst_size_s cn30xx;
+	struct cvmx_npi_pci_burst_size_s cn31xx;
+	struct cvmx_npi_pci_burst_size_s cn38xx;
+	struct cvmx_npi_pci_burst_size_s cn38xxp2;
+	struct cvmx_npi_pci_burst_size_s cn50xx;
+	struct cvmx_npi_pci_burst_size_s cn58xx;
+	struct cvmx_npi_pci_burst_size_s cn58xxp1;
+};
+
+typedef union cvmx_npi_pci_burst_size cvmx_npi_pci_burst_size_t;
+
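+/*
+ * Illustrative sketch (not from the original header), assuming the
+ * CVMX_NPI_PCI_BURST_SIZE address macro defined earlier in this file; the
+ * burst values below are arbitrary example word counts:
+ *
+ *	cvmx_npi_pci_burst_size_t burst;
+ *
+ *	burst.u64 = cvmx_read_csr(CVMX_NPI_PCI_BURST_SIZE);
+ *	burst.s.wr_brst = 0x20;		// example write burst, in words
+ *	burst.s.rd_brst = 0x20;		// example read burst, in words
+ *	cvmx_write_csr(CVMX_NPI_PCI_BURST_SIZE, burst.u64);
+ */
+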
+/**
+ * cvmx_npi_pci_int_arb_cfg
+ *
+ * NPI_PCI_INT_ARB_CFG = Configuration For PCI Arbiter
+ *
+ * Controls operation of the Internal PCI Arbiter.  This register should
+ * only be written when PRST# is asserted.  NPI_PCI_INT_ARB_CFG[EN] should
+ * only be set when Octeon is a host.
+ */
+union cvmx_npi_pci_int_arb_cfg {
+	u64 u64;
+	struct cvmx_npi_pci_int_arb_cfg_s {
+		u64 reserved_13_63 : 51;
+		u64 hostmode : 1;
+		u64 pci_ovr : 4;
+		u64 reserved_5_7 : 3;
+		u64 en : 1;
+		u64 park_mod : 1;
+		u64 park_dev : 3;
+	} s;
+	struct cvmx_npi_pci_int_arb_cfg_cn30xx {
+		u64 reserved_5_63 : 59;
+		u64 en : 1;
+		u64 park_mod : 1;
+		u64 park_dev : 3;
+	} cn30xx;
+	struct cvmx_npi_pci_int_arb_cfg_cn30xx cn31xx;
+	struct cvmx_npi_pci_int_arb_cfg_cn30xx cn38xx;
+	struct cvmx_npi_pci_int_arb_cfg_cn30xx cn38xxp2;
+	struct cvmx_npi_pci_int_arb_cfg_s cn50xx;
+	struct cvmx_npi_pci_int_arb_cfg_s cn58xx;
+	struct cvmx_npi_pci_int_arb_cfg_s cn58xxp1;
+};
+
+typedef union cvmx_npi_pci_int_arb_cfg cvmx_npi_pci_int_arb_cfg_t;
+
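+/*
+ * Illustrative sketch (not from the original header): per the note above,
+ * this register must only be written while PRST# is asserted, and EN only
+ * set when Octeon is the PCI host. Assuming the CVMX_NPI_PCI_INT_ARB_CFG
+ * address macro defined earlier in this file:
+ *
+ *	cvmx_npi_pci_int_arb_cfg_t arb;
+ *
+ *	arb.u64 = 0;
+ *	arb.s.en = 1;		// internal arbiter on (host mode only)
+ *	cvmx_write_csr(CVMX_NPI_PCI_INT_ARB_CFG, arb.u64);
+ */
+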
+/**
+ * cvmx_npi_pci_read_cmd
+ *
+ * NPI_PCI_READ_CMD = NPI PCI Read Command Register
+ *
+ * Controls the type of read command sent.
+ * Writes to this register are not ordered with writes/reads to the PCI Memory space.
+ * To ensure that a write has completed, the user must read the register before
+ * making an access (i.e. to PCI memory space) that requires the value of this register to be updated.
+ * Also, any previously issued reads/writes to PCI memory space still stored in the outbound
+ * FIFO will use the value of this register after it has been updated.
+ */
+union cvmx_npi_pci_read_cmd {
+	u64 u64;
+	struct cvmx_npi_pci_read_cmd_s {
+		u64 reserved_11_63 : 53;
+		u64 cmd_size : 11;
+	} s;
+	struct cvmx_npi_pci_read_cmd_s cn30xx;
+	struct cvmx_npi_pci_read_cmd_s cn31xx;
+	struct cvmx_npi_pci_read_cmd_s cn38xx;
+	struct cvmx_npi_pci_read_cmd_s cn38xxp2;
+	struct cvmx_npi_pci_read_cmd_s cn50xx;
+	struct cvmx_npi_pci_read_cmd_s cn58xx;
+	struct cvmx_npi_pci_read_cmd_s cn58xxp1;
+};
+
+typedef union cvmx_npi_pci_read_cmd cvmx_npi_pci_read_cmd_t;
+
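+/*
+ * Illustrative sketch (not from the original header): as with
+ * NPI_MEM_ACCESS_SUBID3 above, a write to this register should be flushed
+ * by a read-back before issuing PCI reads that depend on the new command
+ * size. Assuming the CVMX_NPI_PCI_READ_CMD address macro defined earlier
+ * in this file; the size value is an arbitrary example:
+ *
+ *	cvmx_npi_pci_read_cmd_t cmd;
+ *
+ *	cmd.u64 = cvmx_read_csr(CVMX_NPI_PCI_READ_CMD);
+ *	cmd.s.cmd_size = 0x40;		// example read command size
+ *	cvmx_write_csr(CVMX_NPI_PCI_READ_CMD, cmd.u64);
+ *	cmd.u64 = cvmx_read_csr(CVMX_NPI_PCI_READ_CMD);	// flush
+ */
+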
+/**
+ * cvmx_npi_port32_instr_hdr
+ *
+ * NPI_PORT32_INSTR_HDR = NPI Port 32 Instruction Header
+ *
+ * Contains bits [62:42] of the Instruction Header for port 32.
+ */
+union cvmx_npi_port32_instr_hdr {
+	u64 u64;
+	struct cvmx_npi_port32_instr_hdr_s {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 rsv_f : 5;
+		u64 rparmode : 2;
+		u64 rsv_e : 1;
+		u64 rskp_len : 7;
+		u64 rsv_d : 6;
+		u64 use_ihdr : 1;
+		u64 rsv_c : 5;
+		u64 par_mode : 2;
+		u64 rsv_b : 1;
+		u64 skp_len : 7;
+		u64 rsv_a : 6;
+	} s;
+	struct cvmx_npi_port32_instr_hdr_s cn30xx;
+	struct cvmx_npi_port32_instr_hdr_s cn31xx;
+	struct cvmx_npi_port32_instr_hdr_s cn38xx;
+	struct cvmx_npi_port32_instr_hdr_s cn38xxp2;
+	struct cvmx_npi_port32_instr_hdr_s cn50xx;
+	struct cvmx_npi_port32_instr_hdr_s cn58xx;
+	struct cvmx_npi_port32_instr_hdr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_port32_instr_hdr cvmx_npi_port32_instr_hdr_t;
+
+/**
+ * cvmx_npi_port33_instr_hdr
+ *
+ * NPI_PORT33_INSTR_HDR = NPI Port 33 Instruction Header
+ *
+ * Contains bits [62:42] of the Instruction Header for port 33.
+ */
+union cvmx_npi_port33_instr_hdr {
+	u64 u64;
+	struct cvmx_npi_port33_instr_hdr_s {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 rsv_f : 5;
+		u64 rparmode : 2;
+		u64 rsv_e : 1;
+		u64 rskp_len : 7;
+		u64 rsv_d : 6;
+		u64 use_ihdr : 1;
+		u64 rsv_c : 5;
+		u64 par_mode : 2;
+		u64 rsv_b : 1;
+		u64 skp_len : 7;
+		u64 rsv_a : 6;
+	} s;
+	struct cvmx_npi_port33_instr_hdr_s cn31xx;
+	struct cvmx_npi_port33_instr_hdr_s cn38xx;
+	struct cvmx_npi_port33_instr_hdr_s cn38xxp2;
+	struct cvmx_npi_port33_instr_hdr_s cn50xx;
+	struct cvmx_npi_port33_instr_hdr_s cn58xx;
+	struct cvmx_npi_port33_instr_hdr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_port33_instr_hdr cvmx_npi_port33_instr_hdr_t;
+
+/**
+ * cvmx_npi_port34_instr_hdr
+ *
+ * NPI_PORT34_INSTR_HDR = NPI Port 34 Instruction Header
+ *
+ * Contains bits [62:42] of the Instruction Header for port 34. Added for PASS-2.
+ */
+union cvmx_npi_port34_instr_hdr {
+	u64 u64;
+	struct cvmx_npi_port34_instr_hdr_s {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 rsv_f : 5;
+		u64 rparmode : 2;
+		u64 rsv_e : 1;
+		u64 rskp_len : 7;
+		u64 rsv_d : 6;
+		u64 use_ihdr : 1;
+		u64 rsv_c : 5;
+		u64 par_mode : 2;
+		u64 rsv_b : 1;
+		u64 skp_len : 7;
+		u64 rsv_a : 6;
+	} s;
+	struct cvmx_npi_port34_instr_hdr_s cn38xx;
+	struct cvmx_npi_port34_instr_hdr_s cn38xxp2;
+	struct cvmx_npi_port34_instr_hdr_s cn58xx;
+	struct cvmx_npi_port34_instr_hdr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_port34_instr_hdr cvmx_npi_port34_instr_hdr_t;
+
+/**
+ * cvmx_npi_port35_instr_hdr
+ *
+ * NPI_PORT35_INSTR_HDR = NPI Port 35 Instruction Header
+ *
+ * Contains bits [62:42] of the Instruction Header for port 35. Added for PASS-2.
+ */
+union cvmx_npi_port35_instr_hdr {
+	u64 u64;
+	struct cvmx_npi_port35_instr_hdr_s {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 rsv_f : 5;
+		u64 rparmode : 2;
+		u64 rsv_e : 1;
+		u64 rskp_len : 7;
+		u64 rsv_d : 6;
+		u64 use_ihdr : 1;
+		u64 rsv_c : 5;
+		u64 par_mode : 2;
+		u64 rsv_b : 1;
+		u64 skp_len : 7;
+		u64 rsv_a : 6;
+	} s;
+	struct cvmx_npi_port35_instr_hdr_s cn38xx;
+	struct cvmx_npi_port35_instr_hdr_s cn38xxp2;
+	struct cvmx_npi_port35_instr_hdr_s cn58xx;
+	struct cvmx_npi_port35_instr_hdr_s cn58xxp1;
+};
+
+typedef union cvmx_npi_port35_instr_hdr cvmx_npi_port35_instr_hdr_t;
+
+/**
+ * cvmx_npi_port_bp_control
+ *
+ * NPI_PORT_BP_CONTROL = Port Backpressure Control
+ *
+ * Enables Port Level Backpressure
+ */
+union cvmx_npi_port_bp_control {
+	u64 u64;
+	struct cvmx_npi_port_bp_control_s {
+		u64 reserved_8_63 : 56;
+		u64 bp_on : 4;
+		u64 enb : 4;
+	} s;
+	struct cvmx_npi_port_bp_control_s cn30xx;
+	struct cvmx_npi_port_bp_control_s cn31xx;
+	struct cvmx_npi_port_bp_control_s cn38xx;
+	struct cvmx_npi_port_bp_control_s cn38xxp2;
+	struct cvmx_npi_port_bp_control_s cn50xx;
+	struct cvmx_npi_port_bp_control_s cn58xx;
+	struct cvmx_npi_port_bp_control_s cn58xxp1;
+};
+
+typedef union cvmx_npi_port_bp_control cvmx_npi_port_bp_control_t;
+
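+/*
+ * Illustrative sketch (not from the original header), assuming the
+ * CVMX_NPI_PORT_BP_CONTROL address macro defined earlier in this file.
+ * ENB is a per-port enable mask for port-level backpressure:
+ *
+ *	cvmx_npi_port_bp_control_t bp;
+ *
+ *	bp.u64 = cvmx_read_csr(CVMX_NPI_PORT_BP_CONTROL);
+ *	bp.s.enb = 0xf;		// enable backpressure on ports 0-3
+ *	cvmx_write_csr(CVMX_NPI_PORT_BP_CONTROL, bp.u64);
+ */
+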
+/**
+ * cvmx_npi_rsl_int_blocks
+ *
+ * RSL_INT_BLOCKS = RSL Interrupt Blocks Register
+ *
+ * Reading this register returns a vector with a bit set to '1' for each RSL block
+ * that presently has an interrupt pending. The field description below supplies the name of the
+ * register that software should read to find out why that interrupt bit is set.
+ */
+union cvmx_npi_rsl_int_blocks {
+	u64 u64;
+	struct cvmx_npi_rsl_int_blocks_s {
+		u64 reserved_32_63 : 32;
+		u64 rint_31 : 1;
+		u64 iob : 1;
+		u64 reserved_28_29 : 2;
+		u64 rint_27 : 1;
+		u64 rint_26 : 1;
+		u64 rint_25 : 1;
+		u64 rint_24 : 1;
+		u64 asx1 : 1;
+		u64 asx0 : 1;
+		u64 rint_21 : 1;
+		u64 pip : 1;
+		u64 spx1 : 1;
+		u64 spx0 : 1;
+		u64 lmc : 1;
+		u64 l2c : 1;
+		u64 rint_15 : 1;
+		u64 reserved_13_14 : 2;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 rint_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 npi : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} s;
+	struct cvmx_npi_rsl_int_blocks_cn30xx {
+		u64 reserved_32_63 : 32;
+		u64 rint_31 : 1;
+		u64 iob : 1;
+		u64 rint_29 : 1;
+		u64 rint_28 : 1;
+		u64 rint_27 : 1;
+		u64 rint_26 : 1;
+		u64 rint_25 : 1;
+		u64 rint_24 : 1;
+		u64 asx1 : 1;
+		u64 asx0 : 1;
+		u64 rint_21 : 1;
+		u64 pip : 1;
+		u64 spx1 : 1;
+		u64 spx0 : 1;
+		u64 lmc : 1;
+		u64 l2c : 1;
+		u64 rint_15 : 1;
+		u64 rint_14 : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 rint_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 npi : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn30xx;
+	struct cvmx_npi_rsl_int_blocks_cn30xx cn31xx;
+	struct cvmx_npi_rsl_int_blocks_cn38xx {
+		u64 reserved_32_63 : 32;
+		u64 rint_31 : 1;
+		u64 iob : 1;
+		u64 rint_29 : 1;
+		u64 rint_28 : 1;
+		u64 rint_27 : 1;
+		u64 rint_26 : 1;
+		u64 rint_25 : 1;
+		u64 rint_24 : 1;
+		u64 asx1 : 1;
+		u64 asx0 : 1;
+		u64 rint_21 : 1;
+		u64 pip : 1;
+		u64 spx1 : 1;
+		u64 spx0 : 1;
+		u64 lmc : 1;
+		u64 l2c : 1;
+		u64 rint_15 : 1;
+		u64 rint_14 : 1;
+		u64 rint_13 : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 rint_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 npi : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn38xx;
+	struct cvmx_npi_rsl_int_blocks_cn38xx cn38xxp2;
+	struct cvmx_npi_rsl_int_blocks_cn50xx {
+		u64 reserved_31_63 : 33;
+		u64 iob : 1;
+		u64 lmc1 : 1;
+		u64 agl : 1;
+		u64 reserved_24_27 : 4;
+		u64 asx1 : 1;
+		u64 asx0 : 1;
+		u64 reserved_21_21 : 1;
+		u64 pip : 1;
+		u64 spx1 : 1;
+		u64 spx0 : 1;
+		u64 lmc : 1;
+		u64 l2c : 1;
+		u64 reserved_15_15 : 1;
+		u64 rad : 1;
+		u64 usb : 1;
+		u64 pow : 1;
+		u64 tim : 1;
+		u64 pko : 1;
+		u64 ipd : 1;
+		u64 reserved_8_8 : 1;
+		u64 zip : 1;
+		u64 dfa : 1;
+		u64 fpa : 1;
+		u64 key : 1;
+		u64 npi : 1;
+		u64 gmx1 : 1;
+		u64 gmx0 : 1;
+		u64 mio : 1;
+	} cn50xx;
+	struct cvmx_npi_rsl_int_blocks_cn38xx cn58xx;
+	struct cvmx_npi_rsl_int_blocks_cn38xx cn58xxp1;
+};
+
+typedef union cvmx_npi_rsl_int_blocks cvmx_npi_rsl_int_blocks_t;
+
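+/*
+ * Illustrative sketch (not from the original header): a top-level RSL
+ * interrupt handler can read this register once and then service only the
+ * blocks whose bits are set. Assuming the CVMX_NPI_RSL_INT_BLOCKS address
+ * macro defined earlier in this file; the per-block handlers are
+ * hypothetical:
+ *
+ *	cvmx_npi_rsl_int_blocks_t blocks;
+ *
+ *	blocks.u64 = cvmx_read_csr(CVMX_NPI_RSL_INT_BLOCKS);
+ *	if (blocks.s.npi)
+ *		handle_npi_interrupt();	// hypothetical handler
+ *	if (blocks.s.pko)
+ *		handle_pko_interrupt();	// hypothetical handler
+ */
+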
+/**
+ * cvmx_npi_size_input#
+ *
+ * NPI_SIZE_INPUT0 = NPI's Size for Input 0 Register
+ *
+ * The size (in instructions) of Instruction Queue-0.
+ */
+union cvmx_npi_size_inputx {
+	u64 u64;
+	struct cvmx_npi_size_inputx_s {
+		u64 reserved_32_63 : 32;
+		u64 size : 32;
+	} s;
+	struct cvmx_npi_size_inputx_s cn30xx;
+	struct cvmx_npi_size_inputx_s cn31xx;
+	struct cvmx_npi_size_inputx_s cn38xx;
+	struct cvmx_npi_size_inputx_s cn38xxp2;
+	struct cvmx_npi_size_inputx_s cn50xx;
+	struct cvmx_npi_size_inputx_s cn58xx;
+	struct cvmx_npi_size_inputx_s cn58xxp1;
+};
+
+typedef union cvmx_npi_size_inputx cvmx_npi_size_inputx_t;
+
+/**
+ * cvmx_npi_win_read_to
+ *
+ * NPI_WIN_READ_TO = NPI WINDOW READ Timeout Register
+ *
+ * Number of core clocks to wait before timing out on a WINDOW-READ to the NCB.
+ */
+union cvmx_npi_win_read_to {
+	u64 u64;
+	struct cvmx_npi_win_read_to_s {
+		u64 reserved_32_63 : 32;
+		u64 time : 32;
+	} s;
+	struct cvmx_npi_win_read_to_s cn30xx;
+	struct cvmx_npi_win_read_to_s cn31xx;
+	struct cvmx_npi_win_read_to_s cn38xx;
+	struct cvmx_npi_win_read_to_s cn38xxp2;
+	struct cvmx_npi_win_read_to_s cn50xx;
+	struct cvmx_npi_win_read_to_s cn58xx;
+	struct cvmx_npi_win_read_to_s cn58xxp1;
+};
+
+typedef union cvmx_npi_win_read_to cvmx_npi_win_read_to_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 17/50] mips: octeon: Add cvmx-pcieepx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (15 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 16/50] mips: octeon: Add cvmx-npi-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 18/50] mips: octeon: Add cvmx-pciercx-defs.h " Stefan Roese
                   ` (35 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-pcieepx-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../include/mach/cvmx-pcieepx-defs.h          | 6848 +++++++++++++++++
 1 file changed, 6848 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcieepx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcieepx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pcieepx-defs.h
new file mode 100644
index 0000000000..13ef599a92
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pcieepx-defs.h
@@ -0,0 +1,6848 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pcieepx.
+ */
+
+#ifndef __CVMX_PCIEEPX_DEFS_H__
+#define __CVMX_PCIEEPX_DEFS_H__
+
+static inline u64 CVMX_PCIEEPX_CFG000(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000000ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000000ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000000ull;
+	}
+	return 0x0000030000000000ull;
+}
+
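+/*
+ * Illustrative usage sketch (not from the original header): the
+ * CVMX_PCIEEPX_CFGxxx() functions only compute a model-dependent config
+ * register offset; the actual access goes through a PCIe config-space
+ * helper. Assuming the cvmx_pcie_cfgx_read() helper from the Octeon PCIe
+ * code:
+ *
+ *	u32 id = cvmx_pcie_cfgx_read(pcie_port,
+ *				     CVMX_PCIEEPX_CFG000(pcie_port));
+ */
+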
+static inline u64 CVMX_PCIEEPX_CFG001(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000004ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000004ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000004ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000004ull;
+	}
+	return 0x0000030000000004ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG002(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000008ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000008ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000008ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000008ull;
+	}
+	return 0x0000030000000008ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG003(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000000Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000000Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000000Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000000Cull;
+	}
+	return 0x000003000000000Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG004(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000010ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000010ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000010ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000010ull;
+	}
+	return 0x0000030000000010ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG004_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000010ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000010ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000010ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000010ull;
+	}
+	return 0x0000030080000010ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG005(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000014ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000014ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000014ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000014ull;
+	}
+	return 0x0000030000000014ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG005_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000014ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000014ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000014ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000014ull;
+	}
+	return 0x0000030080000014ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG006(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000018ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000018ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000018ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000018ull;
+	}
+	return 0x0000030000000018ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG006_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000018ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000018ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000018ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000018ull;
+	}
+	return 0x0000030080000018ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG007(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000001Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000001Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000001Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000001Cull;
+	}
+	return 0x000003000000001Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG007_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003008000001Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003008000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003008000001Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003008000001Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003008000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000008000001Cull;
+	}
+	return 0x000003008000001Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG008(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000020ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000020ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000020ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000020ull;
+	}
+	return 0x0000030000000020ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG008_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000020ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000020ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000020ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000020ull;
+	}
+	return 0x0000030080000020ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG009(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000024ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000024ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000024ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000024ull;
+	}
+	return 0x0000030000000024ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG009_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000024ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000024ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000024ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000024ull;
+	}
+	return 0x0000030080000024ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG010(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000028ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000028ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000028ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000028ull;
+	}
+	return 0x0000030000000028ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG011(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000002Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000002Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000002Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000002Cull;
+	}
+	return 0x000003000000002Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG012(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000030ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000030ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000030ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000030ull;
+	}
+	return 0x0000030000000030ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG012_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000030ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030080000030ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030080000030ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030080000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000080000030ull;
+	}
+	return 0x0000030080000030ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG013(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000034ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000034ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000034ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000034ull;
+	}
+	return 0x0000030000000034ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG015(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000003Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000003Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000003Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000003Cull;
+	}
+	return 0x000003000000003Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG016(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000040ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000040ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000040ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000040ull;
+	}
+	return 0x0000030000000040ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG017(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000044ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000044ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000044ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000044ull;
+	}
+	return 0x0000030000000044ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG020(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000050ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000050ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000050ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000050ull;
+	}
+	return 0x0000030000000050ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG021(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000054ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000054ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000054ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000054ull;
+	}
+	return 0x0000030000000054ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG022(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000058ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000058ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000058ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000058ull;
+	}
+	return 0x0000030000000058ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG023(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000005Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000005Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000005Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000005Cull;
+	}
+	return 0x000003000000005Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG024(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000060ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000060ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000060ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000060ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000060ull + (offset) * 0x100000000ull;
+	}
+	return 0x0000030000000060ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG025(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000064ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000064ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000064ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000064ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000064ull + (offset) * 0x100000000ull;
+	}
+	return 0x0000030000000064ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG028(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000070ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000070ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000070ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000070ull;
+	}
+	return 0x0000030000000070ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG029(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000074ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000074ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000074ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000074ull;
+	}
+	return 0x0000030000000074ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG030(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000078ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000078ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000078ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000078ull;
+	}
+	return 0x0000030000000078ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG031(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000007Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000007Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000007Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000007Cull;
+	}
+	return 0x000003000000007Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG032(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000080ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000080ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000080ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000080ull;
+	}
+	return 0x0000030000000080ull;
+}
+
+#define CVMX_PCIEEPX_CFG033(offset) (0x0000000000000084ull)
+#define CVMX_PCIEEPX_CFG034(offset) (0x0000000000000088ull)
+static inline u64 CVMX_PCIEEPX_CFG037(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000094ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000094ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000094ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000094ull;
+	}
+	return 0x0000030000000094ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG038(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000098ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000098ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000098ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000098ull;
+	}
+	return 0x0000030000000098ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG039(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000009Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000009Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000009Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000009Cull;
+	}
+	return 0x000003000000009Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG040(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000A0ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000000A0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000000A0ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000000A0ull;
+	}
+	return 0x00000300000000A0ull;
+}
+
+#define CVMX_PCIEEPX_CFG041(offset) (0x00000000000000A4ull)
+#define CVMX_PCIEEPX_CFG042(offset) (0x00000000000000A8ull)
+static inline u64 CVMX_PCIEEPX_CFG044(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000000B0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000000B0ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B0ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B0ull;
+	}
+	return 0x00000300000000B0ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG045(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000000B4ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000000B4ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B4ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B4ull;
+	}
+	return 0x00000300000000B4ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG046(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000000B8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000000B8ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000000B8ull;
+	}
+	return 0x00000300000000B8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG064(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000100ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000100ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000100ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000100ull;
+	}
+	return 0x0000030000000100ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG065(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000104ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000104ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000104ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000104ull;
+	}
+	return 0x0000030000000104ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG066(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000108ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000108ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000108ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000108ull;
+	}
+	return 0x0000030000000108ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG067(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000010Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000010Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000010Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000010Cull;
+	}
+	return 0x000003000000010Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG068(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000110ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000110ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000110ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000110ull;
+	}
+	return 0x0000030000000110ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG069(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000114ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000114ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000114ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000114ull;
+	}
+	return 0x0000030000000114ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG070(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000118ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000118ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000118ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000118ull;
+	}
+	return 0x0000030000000118ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG071(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000011Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000011Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000011Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000011Cull;
+	}
+	return 0x000003000000011Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG072(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000120ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000120ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000120ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000120ull;
+	}
+	return 0x0000030000000120ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG073(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000124ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000124ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000124ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000124ull;
+	}
+	return 0x0000030000000124ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG074(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000128ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000128ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000128ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000128ull;
+	}
+	return 0x0000030000000128ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG078(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000138ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000138ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000138ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000138ull;
+	}
+	return 0x0000030000000138ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG082(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000148ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000148ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000148ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000148ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000148ull + (offset) * 0x100000000ull;
+	}
+	return 0x0000030000000148ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG083(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000014Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000014Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000014Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000014Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000014Cull + (offset) * 0x100000000ull;
+	}
+	return 0x000003000000014Cull;
+}
+
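+/*
+ * Note (editorial): unlike the family-switch helpers above, CFG084 below
+ * simply clamps the port index with "& 3", i.e. it assumes at most four
+ * PCIe interfaces rather than dispatching on the Octeon model at run time.
+ */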
+#define CVMX_PCIEEPX_CFG084(offset) (0x0000030000000150ull + ((offset) & 3) * 0x100000000ull)
+static inline u64 CVMX_PCIEEPX_CFG086(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000158ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000158ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000158ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000158ull;
+	}
+	return 0x0000030000000158ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG087(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000015Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000015Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000015Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000015Cull;
+	}
+	return 0x000003000000015Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG088(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000160ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000160ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000160ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000160ull;
+	}
+	return 0x0000030000000160ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG089(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000164ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000164ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000164ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000164ull;
+	}
+	return 0x0000030000000164ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG090(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000168ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000168ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000168ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000168ull;
+	}
+	return 0x0000030000000168ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG091(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000016Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000016Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000016Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000016Cull;
+	}
+	return 0x000003000000016Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG092(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000170ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000170ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000170ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000170ull;
+	}
+	return 0x0000030000000170ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG094(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000178ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000178ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000178ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000178ull;
+	}
+	return 0x0000030000000178ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG095(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000017Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000017Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000017Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000017Cull;
+	}
+	return 0x000003000000017Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG096(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000180ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000180ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000180ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000180ull;
+	}
+	return 0x0000030000000180ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG097(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000184ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000184ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000184ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000184ull;
+	}
+	return 0x0000030000000184ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG098(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000188ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000188ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000188ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000188ull;
+	}
+	return 0x0000030000000188ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG099(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000018Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000018Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000018Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000018Cull;
+	}
+	return 0x000003000000018Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG100(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000190ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000190ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000190ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000190ull;
+	}
+	return 0x0000030000000190ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG101(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000194ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000194ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000194ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000194ull;
+	}
+	return 0x0000030000000194ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG102(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000198ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000198ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000198ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000198ull;
+	}
+	return 0x0000030000000198ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG103(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000019Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000019Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000019Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000019Cull;
+	}
+	return 0x000003000000019Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG104(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001A0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001A0ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A0ull;
+	}
+	return 0x00000300000001A0ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG105(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001A4ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001A4ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A4ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A4ull;
+	}
+	return 0x00000300000001A4ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG106(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001A8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001A8ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001A8ull;
+	}
+	return 0x00000300000001A8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG107(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001ACull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001ACull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001ACull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001ACull;
+	}
+	return 0x00000300000001ACull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG108(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001B0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001B0ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B0ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B0ull;
+	}
+	return 0x00000300000001B0ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG109(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001B4ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001B4ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B4ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B4ull;
+	}
+	return 0x00000300000001B4ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG110(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001B8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001B8ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001B8ull;
+	}
+	return 0x00000300000001B8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG111(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001BCull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001BCull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001BCull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001BCull;
+	}
+	return 0x00000300000001BCull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG112(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000001C0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000001C0ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001C0ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000001C0ull;
+	}
+	return 0x00000300000001C0ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG448(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000700ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000700ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000700ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000700ull;
+	}
+	return 0x0000030000000700ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG449(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000704ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000704ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000704ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000704ull;
+	}
+	return 0x0000030000000704ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG450(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000708ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000708ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000708ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000708ull;
+	}
+	return 0x0000030000000708ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG451(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000070Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000070Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000070Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000070Cull;
+	}
+	return 0x000003000000070Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG452(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000710ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000710ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000710ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000710ull;
+	}
+	return 0x0000030000000710ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG453(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000714ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000714ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000714ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000714ull;
+	}
+	return 0x0000030000000714ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG454(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000718ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000718ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000718ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000718ull;
+	}
+	return 0x0000030000000718ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG455(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000071Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000071Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000071Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000071Cull;
+	}
+	return 0x000003000000071Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG456(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000720ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000720ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000720ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000720ull;
+	}
+	return 0x0000030000000720ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG458(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000728ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000728ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000728ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000728ull;
+	}
+	return 0x0000030000000728ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG459(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000072Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000072Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000072Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000072Cull;
+	}
+	return 0x000003000000072Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG460(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000730ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000730ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000730ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000730ull;
+	}
+	return 0x0000030000000730ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG461(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000734ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000734ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000734ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000734ull;
+	}
+	return 0x0000030000000734ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG462(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000738ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000738ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000738ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000738ull;
+	}
+	return 0x0000030000000738ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG463(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000073Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000073Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000073Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000073Cull;
+	}
+	return 0x000003000000073Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG464(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000740ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000740ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000740ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000740ull;
+	}
+	return 0x0000030000000740ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG465(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000744ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000744ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000744ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000744ull;
+	}
+	return 0x0000030000000744ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG466(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000748ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000748ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000748ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000748ull;
+	}
+	return 0x0000030000000748ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG467(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000074Cull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000074Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000074Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000074Cull;
+	}
+	return 0x000003000000074Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG468(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000750ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000750ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000750ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000750ull;
+	}
+	return 0x0000030000000750ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG490(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007A8ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000007A8ull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007A8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG491(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007ACull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000007ACull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007ACull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG492(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007B0ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000007B0ull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007B0ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG515(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000003000000080Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000003000000080Cull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000003000000080Cull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000080Cull;
+	}
+	return 0x000003000000080Cull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG516(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000810ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000810ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000810ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000810ull;
+	}
+	return 0x0000030000000810ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG517(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000814ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000814ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000814ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000814ull;
+	}
+	return 0x0000030000000814ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG548(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000030000000890ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000030000000890ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000890ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000030000000890ull;
+	}
+	return 0x0000030000000890ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG554(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000008A8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000008A8ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008A8ull;
+	}
+	return 0x00000300000008A8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG558(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000008B8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000008B8ull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008B8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008B8ull;
+	}
+	return 0x00000300000008B8ull;
+}
+
+static inline u64 CVMX_PCIEEPX_CFG559(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000300000008BCull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000300000008BCull + (offset) * 0x100000000ull;
+
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008BCull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000300000008BCull;
+	}
+	return 0x00000300000008BCull;
+}
+
+/**
+ * cvmx_pcieep#_cfg000
+ *
+ * This register contains the first 32 bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg000 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg000_s {
+		u32 devid : 16;
+		u32 vendid : 16;
+	} s;
+	struct cvmx_pcieepx_cfg000_s cn52xx;
+	struct cvmx_pcieepx_cfg000_s cn52xxp1;
+	struct cvmx_pcieepx_cfg000_s cn56xx;
+	struct cvmx_pcieepx_cfg000_s cn56xxp1;
+	struct cvmx_pcieepx_cfg000_s cn61xx;
+	struct cvmx_pcieepx_cfg000_s cn63xx;
+	struct cvmx_pcieepx_cfg000_s cn63xxp1;
+	struct cvmx_pcieepx_cfg000_s cn66xx;
+	struct cvmx_pcieepx_cfg000_s cn68xx;
+	struct cvmx_pcieepx_cfg000_s cn68xxp1;
+	struct cvmx_pcieepx_cfg000_s cn70xx;
+	struct cvmx_pcieepx_cfg000_s cn70xxp1;
+	struct cvmx_pcieepx_cfg000_s cn73xx;
+	struct cvmx_pcieepx_cfg000_s cn78xx;
+	struct cvmx_pcieepx_cfg000_s cn78xxp1;
+	struct cvmx_pcieepx_cfg000_s cnf71xx;
+	struct cvmx_pcieepx_cfg000_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg000 cvmx_pcieepx_cfg000_t;
+
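+/*
+ * Editor's sketch, not part of the original header: the per-model structs
+ * alias one 32-bit word, so a register value is decoded by assigning it to
+ * .u32 and reading the bit-fields through .s. Illustrative only; the
+ * accessor name follows the SDK convention:
+ *
+ *	cvmx_pcieepx_cfg000_t cfg000;
+ *
+ *	cfg000.u32 = cvmx_pcie_cfgx_read(pcie_port,
+ *					 CVMX_PCIEEPX_CFG000(pcie_port));
+ *	printf("vendor %04x device %04x\n", cfg000.s.vendid, cfg000.s.devid);
+ */
+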
+/**
+ * cvmx_pcieep#_cfg001
+ *
+ * This register contains the second 32 bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg001 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg001_s {
+		u32 dpe : 1;
+		u32 sse : 1;
+		u32 rma : 1;
+		u32 rta : 1;
+		u32 sta : 1;
+		u32 devt : 2;
+		u32 mdpe : 1;
+		u32 fbb : 1;
+		u32 reserved_22_22 : 1;
+		u32 m66 : 1;
+		u32 cl : 1;
+		u32 i_stat : 1;
+		u32 reserved_11_18 : 8;
+		u32 i_dis : 1;
+		u32 fbbe : 1;
+		u32 see : 1;
+		u32 ids_wcc : 1;
+		u32 per : 1;
+		u32 vps : 1;
+		u32 mwice : 1;
+		u32 scse : 1;
+		u32 me : 1;
+		u32 msae : 1;
+		u32 isae : 1;
+	} s;
+	struct cvmx_pcieepx_cfg001_s cn52xx;
+	struct cvmx_pcieepx_cfg001_s cn52xxp1;
+	struct cvmx_pcieepx_cfg001_s cn56xx;
+	struct cvmx_pcieepx_cfg001_s cn56xxp1;
+	struct cvmx_pcieepx_cfg001_s cn61xx;
+	struct cvmx_pcieepx_cfg001_s cn63xx;
+	struct cvmx_pcieepx_cfg001_s cn63xxp1;
+	struct cvmx_pcieepx_cfg001_s cn66xx;
+	struct cvmx_pcieepx_cfg001_s cn68xx;
+	struct cvmx_pcieepx_cfg001_s cn68xxp1;
+	struct cvmx_pcieepx_cfg001_s cn70xx;
+	struct cvmx_pcieepx_cfg001_s cn70xxp1;
+	struct cvmx_pcieepx_cfg001_s cn73xx;
+	struct cvmx_pcieepx_cfg001_s cn78xx;
+	struct cvmx_pcieepx_cfg001_s cn78xxp1;
+	struct cvmx_pcieepx_cfg001_s cnf71xx;
+	struct cvmx_pcieepx_cfg001_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg001 cvmx_pcieepx_cfg001_t;
+
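+/*
+ * Editor's sketch: CFG001 packs the standard PCI command word (low half)
+ * and status word (high half). An illustrative error check, assuming the
+ * same hypothetical read accessor as above:
+ *
+ *	cvmx_pcieepx_cfg001_t cfg001;
+ *
+ *	cfg001.u32 = cvmx_pcie_cfgx_read(pcie_port,
+ *					 CVMX_PCIEEPX_CFG001(pcie_port));
+ *	if (cfg001.s.dpe)
+ *		printf("PCIe EP%d: detected parity error\n", pcie_port);
+ */
+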
+/**
+ * cvmx_pcieep#_cfg002
+ *
+ * This register contains the third 32 bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg002 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg002_s {
+		u32 bcc : 8;
+		u32 sc : 8;
+		u32 pi : 8;
+		u32 rid : 8;
+	} s;
+	struct cvmx_pcieepx_cfg002_s cn52xx;
+	struct cvmx_pcieepx_cfg002_s cn52xxp1;
+	struct cvmx_pcieepx_cfg002_s cn56xx;
+	struct cvmx_pcieepx_cfg002_s cn56xxp1;
+	struct cvmx_pcieepx_cfg002_s cn61xx;
+	struct cvmx_pcieepx_cfg002_s cn63xx;
+	struct cvmx_pcieepx_cfg002_s cn63xxp1;
+	struct cvmx_pcieepx_cfg002_s cn66xx;
+	struct cvmx_pcieepx_cfg002_s cn68xx;
+	struct cvmx_pcieepx_cfg002_s cn68xxp1;
+	struct cvmx_pcieepx_cfg002_s cn70xx;
+	struct cvmx_pcieepx_cfg002_s cn70xxp1;
+	struct cvmx_pcieepx_cfg002_s cn73xx;
+	struct cvmx_pcieepx_cfg002_s cn78xx;
+	struct cvmx_pcieepx_cfg002_s cn78xxp1;
+	struct cvmx_pcieepx_cfg002_s cnf71xx;
+	struct cvmx_pcieepx_cfg002_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg002 cvmx_pcieepx_cfg002_t;
+
+/**
+ * cvmx_pcieep#_cfg003
+ *
+ * This register contains the fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg003 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg003_s {
+		u32 bist : 8;
+		u32 mfd : 1;
+		u32 chf : 7;
+		u32 lt : 8;
+		u32 cls : 8;
+	} s;
+	struct cvmx_pcieepx_cfg003_s cn52xx;
+	struct cvmx_pcieepx_cfg003_s cn52xxp1;
+	struct cvmx_pcieepx_cfg003_s cn56xx;
+	struct cvmx_pcieepx_cfg003_s cn56xxp1;
+	struct cvmx_pcieepx_cfg003_s cn61xx;
+	struct cvmx_pcieepx_cfg003_s cn63xx;
+	struct cvmx_pcieepx_cfg003_s cn63xxp1;
+	struct cvmx_pcieepx_cfg003_s cn66xx;
+	struct cvmx_pcieepx_cfg003_s cn68xx;
+	struct cvmx_pcieepx_cfg003_s cn68xxp1;
+	struct cvmx_pcieepx_cfg003_s cn70xx;
+	struct cvmx_pcieepx_cfg003_s cn70xxp1;
+	struct cvmx_pcieepx_cfg003_s cn73xx;
+	struct cvmx_pcieepx_cfg003_s cn78xx;
+	struct cvmx_pcieepx_cfg003_s cn78xxp1;
+	struct cvmx_pcieepx_cfg003_s cnf71xx;
+	struct cvmx_pcieepx_cfg003_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg003 cvmx_pcieepx_cfg003_t;
+
+/**
+ * cvmx_pcieep#_cfg004
+ *
+ * This register contains the fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg004 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg004_s {
+		u32 reserved_4_31 : 28;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg004_cn52xx {
+		u32 lbab : 18;
+		u32 reserved_4_13 : 10;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg004_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg004_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg004_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg004_cn52xx cn70xx;
+	struct cvmx_pcieepx_cfg004_cn52xx cn70xxp1;
+	struct cvmx_pcieepx_cfg004_cn73xx {
+		u32 lbab : 9;
+		u32 reserved_4_22 : 19;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg004_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg004_cn78xxp1 {
+		u32 lbab : 17;
+		u32 reserved_4_14 : 11;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn78xxp1;
+	struct cvmx_pcieepx_cfg004_cn52xx cnf71xx;
+	struct cvmx_pcieepx_cfg004_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg004 cvmx_pcieepx_cfg004_t;
+
+/**
+ * cvmx_pcieep#_cfg004_mask
+ *
+ * The BAR 0 mask register is invisible to host software and not readable from the application.
+ * The BAR 0 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg004_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg004_mask_s {
+		u32 lmask : 31;
+		u32 enb : 1;
+	} s;
+	struct cvmx_pcieepx_cfg004_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg004_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg004_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg004_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg004_mask cvmx_pcieepx_cfg004_mask_t;
+
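+/*
+ * Usage sketch (illustrative only): the BAR mask registers are reachable
+ * solely through the indirect PEM()_CFG_WR interface, so a write looks
+ * roughly as follows -- assuming the cvmx_pemx_cfg_wr_t union from
+ * cvmx-pemx-defs.h (part of this series) and the generic cvmx_write_csr()
+ * accessor; `cfg_byte_addr' and `new_mask' are placeholders supplied by
+ * the caller:
+ *
+ *	cvmx_pemx_cfg_wr_t cfg_wr;
+ *
+ *	cfg_wr.u64 = 0;
+ *	cfg_wr.s.addr = cfg_byte_addr;	// config-space address of the mask
+ *	cfg_wr.s.data = new_mask;
+ *	cvmx_write_csr(CVMX_PEMX_CFG_WR(pem), cfg_wr.u64);
+ */
+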
+/**
+ * cvmx_pcieep#_cfg005
+ *
+ * This register contains the sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg005 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg005_s {
+		u32 ubab : 32;
+	} s;
+	struct cvmx_pcieepx_cfg005_s cn52xx;
+	struct cvmx_pcieepx_cfg005_s cn52xxp1;
+	struct cvmx_pcieepx_cfg005_s cn56xx;
+	struct cvmx_pcieepx_cfg005_s cn56xxp1;
+	struct cvmx_pcieepx_cfg005_s cn61xx;
+	struct cvmx_pcieepx_cfg005_s cn63xx;
+	struct cvmx_pcieepx_cfg005_s cn63xxp1;
+	struct cvmx_pcieepx_cfg005_s cn66xx;
+	struct cvmx_pcieepx_cfg005_s cn68xx;
+	struct cvmx_pcieepx_cfg005_s cn68xxp1;
+	struct cvmx_pcieepx_cfg005_s cn70xx;
+	struct cvmx_pcieepx_cfg005_s cn70xxp1;
+	struct cvmx_pcieepx_cfg005_s cn73xx;
+	struct cvmx_pcieepx_cfg005_s cn78xx;
+	struct cvmx_pcieepx_cfg005_s cn78xxp1;
+	struct cvmx_pcieepx_cfg005_s cnf71xx;
+	struct cvmx_pcieepx_cfg005_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg005 cvmx_pcieepx_cfg005_t;
+
+/**
+ * cvmx_pcieep#_cfg005_mask
+ *
+ * The BAR 0 mask register is invisible to host software and not readable from the application.
+ * The BAR 0 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg005_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg005_mask_s {
+		u32 umask : 32;
+	} s;
+	struct cvmx_pcieepx_cfg005_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg005_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg005_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg005_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg005_mask cvmx_pcieepx_cfg005_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg006
+ *
+ * This register contains the seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg006 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg006_s {
+		u32 lbab : 6;
+		u32 reserved_4_25 : 22;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg006_s cn52xx;
+	struct cvmx_pcieepx_cfg006_s cn52xxp1;
+	struct cvmx_pcieepx_cfg006_s cn56xx;
+	struct cvmx_pcieepx_cfg006_s cn56xxp1;
+	struct cvmx_pcieepx_cfg006_s cn61xx;
+	struct cvmx_pcieepx_cfg006_s cn63xx;
+	struct cvmx_pcieepx_cfg006_s cn63xxp1;
+	struct cvmx_pcieepx_cfg006_s cn66xx;
+	struct cvmx_pcieepx_cfg006_s cn68xx;
+	struct cvmx_pcieepx_cfg006_s cn68xxp1;
+	struct cvmx_pcieepx_cfg006_s cn70xx;
+	struct cvmx_pcieepx_cfg006_s cn70xxp1;
+	struct cvmx_pcieepx_cfg006_s cn73xx;
+	struct cvmx_pcieepx_cfg006_s cn78xx;
+	struct cvmx_pcieepx_cfg006_s cn78xxp1;
+	struct cvmx_pcieepx_cfg006_s cnf71xx;
+	struct cvmx_pcieepx_cfg006_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg006 cvmx_pcieepx_cfg006_t;
+
+/**
+ * cvmx_pcieep#_cfg006_mask
+ *
+ * The BAR 1 mask register is invisible to host software and not readable from the application.
+ * The BAR 1 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg006_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg006_mask_s {
+		u32 lmask : 31;
+		u32 enb : 1;
+	} s;
+	struct cvmx_pcieepx_cfg006_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg006_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg006_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg006_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg006_mask cvmx_pcieepx_cfg006_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg007
+ *
+ * This register contains the eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg007 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg007_s {
+		u32 ubab : 32;
+	} s;
+	struct cvmx_pcieepx_cfg007_s cn52xx;
+	struct cvmx_pcieepx_cfg007_s cn52xxp1;
+	struct cvmx_pcieepx_cfg007_s cn56xx;
+	struct cvmx_pcieepx_cfg007_s cn56xxp1;
+	struct cvmx_pcieepx_cfg007_s cn61xx;
+	struct cvmx_pcieepx_cfg007_s cn63xx;
+	struct cvmx_pcieepx_cfg007_s cn63xxp1;
+	struct cvmx_pcieepx_cfg007_s cn66xx;
+	struct cvmx_pcieepx_cfg007_s cn68xx;
+	struct cvmx_pcieepx_cfg007_s cn68xxp1;
+	struct cvmx_pcieepx_cfg007_s cn70xx;
+	struct cvmx_pcieepx_cfg007_s cn70xxp1;
+	struct cvmx_pcieepx_cfg007_s cn73xx;
+	struct cvmx_pcieepx_cfg007_s cn78xx;
+	struct cvmx_pcieepx_cfg007_s cn78xxp1;
+	struct cvmx_pcieepx_cfg007_s cnf71xx;
+	struct cvmx_pcieepx_cfg007_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg007 cvmx_pcieepx_cfg007_t;
+
+/**
+ * cvmx_pcieep#_cfg007_mask
+ *
+ * The BAR 1 mask register is invisible to host software and not readable from the application.
+ * The BAR 1 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg007_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg007_mask_s {
+		u32 umask : 32;
+	} s;
+	struct cvmx_pcieepx_cfg007_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg007_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg007_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg007_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg007_mask cvmx_pcieepx_cfg007_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg008
+ *
+ * This register contains the ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg008 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg008_s {
+		u32 lbab : 12;
+		u32 reserved_4_19 : 16;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg008_cn52xx {
+		u32 reserved_4_31 : 28;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg008_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg008_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg008_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg008_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg008_s cn70xx;
+	struct cvmx_pcieepx_cfg008_s cn70xxp1;
+	struct cvmx_pcieepx_cfg008_s cn73xx;
+	struct cvmx_pcieepx_cfg008_s cn78xx;
+	struct cvmx_pcieepx_cfg008_s cn78xxp1;
+	struct cvmx_pcieepx_cfg008_cn52xx cnf71xx;
+	struct cvmx_pcieepx_cfg008_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg008 cvmx_pcieepx_cfg008_t;
+
+/**
+ * cvmx_pcieep#_cfg008_mask
+ *
+ * The BAR 2 mask register is invisible to host software and not readable from the application.
+ * The BAR 2 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg008_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg008_mask_s {
+		u32 lmask : 31;
+		u32 enb : 1;
+	} s;
+	struct cvmx_pcieepx_cfg008_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg008_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg008_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg008_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg008_mask cvmx_pcieepx_cfg008_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg009
+ *
+ * This register contains the tenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg009 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg009_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg009_cn52xx {
+		u32 ubab : 25;
+		u32 reserved_0_6 : 7;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg009_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg009_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg009_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg009_cn61xx {
+		u32 ubab : 23;
+		u32 reserved_0_8 : 9;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg009_cn61xx cn63xx;
+	struct cvmx_pcieepx_cfg009_cn61xx cn63xxp1;
+	struct cvmx_pcieepx_cfg009_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg009_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg009_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg009_cn70xx {
+		u32 ubab : 32;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg009_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg009_cn70xx cn73xx;
+	struct cvmx_pcieepx_cfg009_cn70xx cn78xx;
+	struct cvmx_pcieepx_cfg009_cn70xx cn78xxp1;
+	struct cvmx_pcieepx_cfg009_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg009_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg009 cvmx_pcieepx_cfg009_t;
+
+/**
+ * cvmx_pcieep#_cfg009_mask
+ *
+ * The BAR 2 mask register is invisible to host software and not readable from the application.
+ * The BAR 2 mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg009_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg009_mask_s {
+		u32 umask : 32;
+	} s;
+	struct cvmx_pcieepx_cfg009_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg009_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg009_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg009_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg009_mask cvmx_pcieepx_cfg009_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg010
+ *
+ * This register contains the eleventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg010 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg010_s {
+		u32 cisp : 32;
+	} s;
+	struct cvmx_pcieepx_cfg010_s cn52xx;
+	struct cvmx_pcieepx_cfg010_s cn52xxp1;
+	struct cvmx_pcieepx_cfg010_s cn56xx;
+	struct cvmx_pcieepx_cfg010_s cn56xxp1;
+	struct cvmx_pcieepx_cfg010_s cn61xx;
+	struct cvmx_pcieepx_cfg010_s cn63xx;
+	struct cvmx_pcieepx_cfg010_s cn63xxp1;
+	struct cvmx_pcieepx_cfg010_s cn66xx;
+	struct cvmx_pcieepx_cfg010_s cn68xx;
+	struct cvmx_pcieepx_cfg010_s cn68xxp1;
+	struct cvmx_pcieepx_cfg010_s cn70xx;
+	struct cvmx_pcieepx_cfg010_s cn70xxp1;
+	struct cvmx_pcieepx_cfg010_s cn73xx;
+	struct cvmx_pcieepx_cfg010_s cn78xx;
+	struct cvmx_pcieepx_cfg010_s cn78xxp1;
+	struct cvmx_pcieepx_cfg010_s cnf71xx;
+	struct cvmx_pcieepx_cfg010_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg010 cvmx_pcieepx_cfg010_t;
+
+/**
+ * cvmx_pcieep#_cfg011
+ *
+ * This register contains the twelfth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg011 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg011_s {
+		u32 ssid : 16;
+		u32 ssvid : 16;
+	} s;
+	struct cvmx_pcieepx_cfg011_s cn52xx;
+	struct cvmx_pcieepx_cfg011_s cn52xxp1;
+	struct cvmx_pcieepx_cfg011_s cn56xx;
+	struct cvmx_pcieepx_cfg011_s cn56xxp1;
+	struct cvmx_pcieepx_cfg011_s cn61xx;
+	struct cvmx_pcieepx_cfg011_s cn63xx;
+	struct cvmx_pcieepx_cfg011_s cn63xxp1;
+	struct cvmx_pcieepx_cfg011_s cn66xx;
+	struct cvmx_pcieepx_cfg011_s cn68xx;
+	struct cvmx_pcieepx_cfg011_s cn68xxp1;
+	struct cvmx_pcieepx_cfg011_s cn70xx;
+	struct cvmx_pcieepx_cfg011_s cn70xxp1;
+	struct cvmx_pcieepx_cfg011_s cn73xx;
+	struct cvmx_pcieepx_cfg011_s cn78xx;
+	struct cvmx_pcieepx_cfg011_s cn78xxp1;
+	struct cvmx_pcieepx_cfg011_s cnf71xx;
+	struct cvmx_pcieepx_cfg011_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg011 cvmx_pcieepx_cfg011_t;
+
+/**
+ * cvmx_pcieep#_cfg012
+ *
+ * This register contains the thirteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg012 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg012_s {
+		u32 eraddr : 16;
+		u32 reserved_1_15 : 15;
+		u32 er_en : 1;
+	} s;
+	struct cvmx_pcieepx_cfg012_s cn52xx;
+	struct cvmx_pcieepx_cfg012_s cn52xxp1;
+	struct cvmx_pcieepx_cfg012_s cn56xx;
+	struct cvmx_pcieepx_cfg012_s cn56xxp1;
+	struct cvmx_pcieepx_cfg012_s cn61xx;
+	struct cvmx_pcieepx_cfg012_s cn63xx;
+	struct cvmx_pcieepx_cfg012_s cn63xxp1;
+	struct cvmx_pcieepx_cfg012_s cn66xx;
+	struct cvmx_pcieepx_cfg012_s cn68xx;
+	struct cvmx_pcieepx_cfg012_s cn68xxp1;
+	struct cvmx_pcieepx_cfg012_s cn70xx;
+	struct cvmx_pcieepx_cfg012_s cn70xxp1;
+	struct cvmx_pcieepx_cfg012_s cn73xx;
+	struct cvmx_pcieepx_cfg012_s cn78xx;
+	struct cvmx_pcieepx_cfg012_s cn78xxp1;
+	struct cvmx_pcieepx_cfg012_s cnf71xx;
+	struct cvmx_pcieepx_cfg012_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg012 cvmx_pcieepx_cfg012_t;
+
+/**
+ * cvmx_pcieep#_cfg012_mask
+ *
+ * The ROM mask register is invisible to host software and not readable from the application. The
+ * ROM mask register is only writable through PEM()_CFG_WR.
+ */
+union cvmx_pcieepx_cfg012_mask {
+	u32 u32;
+	struct cvmx_pcieepx_cfg012_mask_s {
+		u32 mask : 31;
+		u32 enb : 1;
+	} s;
+	struct cvmx_pcieepx_cfg012_mask_s cn52xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn52xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cn56xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn56xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cn61xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn63xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn63xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cn66xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn68xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn68xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cn70xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn70xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cn73xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn78xx;
+	struct cvmx_pcieepx_cfg012_mask_s cn78xxp1;
+	struct cvmx_pcieepx_cfg012_mask_s cnf71xx;
+	struct cvmx_pcieepx_cfg012_mask_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg012_mask cvmx_pcieepx_cfg012_mask_t;
+
+/**
+ * cvmx_pcieep#_cfg013
+ *
+ * This register contains the fourteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg013 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg013_s {
+		u32 reserved_8_31 : 24;
+		u32 cp : 8;
+	} s;
+	struct cvmx_pcieepx_cfg013_s cn52xx;
+	struct cvmx_pcieepx_cfg013_s cn52xxp1;
+	struct cvmx_pcieepx_cfg013_s cn56xx;
+	struct cvmx_pcieepx_cfg013_s cn56xxp1;
+	struct cvmx_pcieepx_cfg013_s cn61xx;
+	struct cvmx_pcieepx_cfg013_s cn63xx;
+	struct cvmx_pcieepx_cfg013_s cn63xxp1;
+	struct cvmx_pcieepx_cfg013_s cn66xx;
+	struct cvmx_pcieepx_cfg013_s cn68xx;
+	struct cvmx_pcieepx_cfg013_s cn68xxp1;
+	struct cvmx_pcieepx_cfg013_s cn70xx;
+	struct cvmx_pcieepx_cfg013_s cn70xxp1;
+	struct cvmx_pcieepx_cfg013_s cn73xx;
+	struct cvmx_pcieepx_cfg013_s cn78xx;
+	struct cvmx_pcieepx_cfg013_s cn78xxp1;
+	struct cvmx_pcieepx_cfg013_s cnf71xx;
+	struct cvmx_pcieepx_cfg013_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg013 cvmx_pcieepx_cfg013_t;
+
+/**
+ * cvmx_pcieep#_cfg015
+ *
+ * This register contains the sixteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg015 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg015_s {
+		u32 ml : 8;
+		u32 mg : 8;
+		u32 inta : 8;
+		u32 il : 8;
+	} s;
+	struct cvmx_pcieepx_cfg015_s cn52xx;
+	struct cvmx_pcieepx_cfg015_s cn52xxp1;
+	struct cvmx_pcieepx_cfg015_s cn56xx;
+	struct cvmx_pcieepx_cfg015_s cn56xxp1;
+	struct cvmx_pcieepx_cfg015_s cn61xx;
+	struct cvmx_pcieepx_cfg015_s cn63xx;
+	struct cvmx_pcieepx_cfg015_s cn63xxp1;
+	struct cvmx_pcieepx_cfg015_s cn66xx;
+	struct cvmx_pcieepx_cfg015_s cn68xx;
+	struct cvmx_pcieepx_cfg015_s cn68xxp1;
+	struct cvmx_pcieepx_cfg015_s cn70xx;
+	struct cvmx_pcieepx_cfg015_s cn70xxp1;
+	struct cvmx_pcieepx_cfg015_s cn73xx;
+	struct cvmx_pcieepx_cfg015_s cn78xx;
+	struct cvmx_pcieepx_cfg015_s cn78xxp1;
+	struct cvmx_pcieepx_cfg015_s cnf71xx;
+	struct cvmx_pcieepx_cfg015_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg015 cvmx_pcieepx_cfg015_t;
+
+/**
+ * cvmx_pcieep#_cfg016
+ *
+ * This register contains the seventeenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg016 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg016_s {
+		u32 pmes : 5;
+		u32 d2s : 1;
+		u32 d1s : 1;
+		u32 auxc : 3;
+		u32 dsi : 1;
+		u32 reserved_20_20 : 1;
+		u32 pme_clock : 1;
+		u32 pmsv : 3;
+		u32 ncp : 8;
+		u32 pmcid : 8;
+	} s;
+	struct cvmx_pcieepx_cfg016_s cn52xx;
+	struct cvmx_pcieepx_cfg016_s cn52xxp1;
+	struct cvmx_pcieepx_cfg016_s cn56xx;
+	struct cvmx_pcieepx_cfg016_s cn56xxp1;
+	struct cvmx_pcieepx_cfg016_s cn61xx;
+	struct cvmx_pcieepx_cfg016_s cn63xx;
+	struct cvmx_pcieepx_cfg016_s cn63xxp1;
+	struct cvmx_pcieepx_cfg016_s cn66xx;
+	struct cvmx_pcieepx_cfg016_s cn68xx;
+	struct cvmx_pcieepx_cfg016_s cn68xxp1;
+	struct cvmx_pcieepx_cfg016_s cn70xx;
+	struct cvmx_pcieepx_cfg016_s cn70xxp1;
+	struct cvmx_pcieepx_cfg016_s cn73xx;
+	struct cvmx_pcieepx_cfg016_s cn78xx;
+	struct cvmx_pcieepx_cfg016_s cn78xxp1;
+	struct cvmx_pcieepx_cfg016_s cnf71xx;
+	struct cvmx_pcieepx_cfg016_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg016 cvmx_pcieepx_cfg016_t;
+
+/**
+ * cvmx_pcieep#_cfg017
+ *
+ * This register contains the eighteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg017 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg017_s {
+		u32 pmdia : 8;
+		u32 bpccee : 1;
+		u32 bd3h : 1;
+		u32 reserved_16_21 : 6;
+		u32 pmess : 1;
+		u32 pmedsia : 2;
+		u32 pmds : 4;
+		u32 pmeens : 1;
+		u32 reserved_4_7 : 4;
+		u32 nsr : 1;
+		u32 reserved_2_2 : 1;
+		u32 ps : 2;
+	} s;
+	struct cvmx_pcieepx_cfg017_s cn52xx;
+	struct cvmx_pcieepx_cfg017_s cn52xxp1;
+	struct cvmx_pcieepx_cfg017_s cn56xx;
+	struct cvmx_pcieepx_cfg017_s cn56xxp1;
+	struct cvmx_pcieepx_cfg017_s cn61xx;
+	struct cvmx_pcieepx_cfg017_s cn63xx;
+	struct cvmx_pcieepx_cfg017_s cn63xxp1;
+	struct cvmx_pcieepx_cfg017_s cn66xx;
+	struct cvmx_pcieepx_cfg017_s cn68xx;
+	struct cvmx_pcieepx_cfg017_s cn68xxp1;
+	struct cvmx_pcieepx_cfg017_s cn70xx;
+	struct cvmx_pcieepx_cfg017_s cn70xxp1;
+	struct cvmx_pcieepx_cfg017_s cn73xx;
+	struct cvmx_pcieepx_cfg017_s cn78xx;
+	struct cvmx_pcieepx_cfg017_s cn78xxp1;
+	struct cvmx_pcieepx_cfg017_s cnf71xx;
+	struct cvmx_pcieepx_cfg017_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg017 cvmx_pcieepx_cfg017_t;
+
+/**
+ * cvmx_pcieep#_cfg020
+ *
+ * This register contains the twenty-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg020 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg020_s {
+		u32 reserved_25_31 : 7;
+		u32 pvm : 1;
+		u32 m64 : 1;
+		u32 mme : 3;
+		u32 mmc : 3;
+		u32 msien : 1;
+		u32 ncp : 8;
+		u32 msicid : 8;
+	} s;
+	struct cvmx_pcieepx_cfg020_cn52xx {
+		u32 reserved_24_31 : 8;
+		u32 m64 : 1;
+		u32 mme : 3;
+		u32 mmc : 3;
+		u32 msien : 1;
+		u32 ncp : 8;
+		u32 msicid : 8;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg020_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg020_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg020_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg020_s cn61xx;
+	struct cvmx_pcieepx_cfg020_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg020_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg020_s cn66xx;
+	struct cvmx_pcieepx_cfg020_s cn68xx;
+	struct cvmx_pcieepx_cfg020_s cn68xxp1;
+	struct cvmx_pcieepx_cfg020_s cn70xx;
+	struct cvmx_pcieepx_cfg020_s cn70xxp1;
+	struct cvmx_pcieepx_cfg020_s cn73xx;
+	struct cvmx_pcieepx_cfg020_s cn78xx;
+	struct cvmx_pcieepx_cfg020_s cn78xxp1;
+	struct cvmx_pcieepx_cfg020_s cnf71xx;
+	struct cvmx_pcieepx_cfg020_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg020 cvmx_pcieepx_cfg020_t;
+
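+/*
+ * Usage sketch (illustrative only): cfg020 carries the MSI capability
+ * header, so the enable bit and the multiple-message encoding of a raw
+ * read decode as follows (`raw' is assumed; per the MSI spec, MME holds
+ * log2 of the granted vector count):
+ *
+ *	cvmx_pcieepx_cfg020_t msi;
+ *
+ *	msi.u32 = raw;
+ *	if (msi.s.msien)
+ *		printf("MSI enabled, %d vector(s)\n", 1 << msi.s.mme);
+ */
+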
+/**
+ * cvmx_pcieep#_cfg021
+ *
+ * This register contains the twenty-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg021 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg021_s {
+		u32 lmsi : 30;
+		u32 reserved_0_1 : 2;
+	} s;
+	struct cvmx_pcieepx_cfg021_s cn52xx;
+	struct cvmx_pcieepx_cfg021_s cn52xxp1;
+	struct cvmx_pcieepx_cfg021_s cn56xx;
+	struct cvmx_pcieepx_cfg021_s cn56xxp1;
+	struct cvmx_pcieepx_cfg021_s cn61xx;
+	struct cvmx_pcieepx_cfg021_s cn63xx;
+	struct cvmx_pcieepx_cfg021_s cn63xxp1;
+	struct cvmx_pcieepx_cfg021_s cn66xx;
+	struct cvmx_pcieepx_cfg021_s cn68xx;
+	struct cvmx_pcieepx_cfg021_s cn68xxp1;
+	struct cvmx_pcieepx_cfg021_s cn70xx;
+	struct cvmx_pcieepx_cfg021_s cn70xxp1;
+	struct cvmx_pcieepx_cfg021_s cn73xx;
+	struct cvmx_pcieepx_cfg021_s cn78xx;
+	struct cvmx_pcieepx_cfg021_s cn78xxp1;
+	struct cvmx_pcieepx_cfg021_s cnf71xx;
+	struct cvmx_pcieepx_cfg021_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg021 cvmx_pcieepx_cfg021_t;
+
+/**
+ * cvmx_pcieep#_cfg022
+ *
+ * This register contains the twenty-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg022 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg022_s {
+		u32 umsi : 32;
+	} s;
+	struct cvmx_pcieepx_cfg022_s cn52xx;
+	struct cvmx_pcieepx_cfg022_s cn52xxp1;
+	struct cvmx_pcieepx_cfg022_s cn56xx;
+	struct cvmx_pcieepx_cfg022_s cn56xxp1;
+	struct cvmx_pcieepx_cfg022_s cn61xx;
+	struct cvmx_pcieepx_cfg022_s cn63xx;
+	struct cvmx_pcieepx_cfg022_s cn63xxp1;
+	struct cvmx_pcieepx_cfg022_s cn66xx;
+	struct cvmx_pcieepx_cfg022_s cn68xx;
+	struct cvmx_pcieepx_cfg022_s cn68xxp1;
+	struct cvmx_pcieepx_cfg022_s cn70xx;
+	struct cvmx_pcieepx_cfg022_s cn70xxp1;
+	struct cvmx_pcieepx_cfg022_s cn73xx;
+	struct cvmx_pcieepx_cfg022_s cn78xx;
+	struct cvmx_pcieepx_cfg022_s cn78xxp1;
+	struct cvmx_pcieepx_cfg022_s cnf71xx;
+	struct cvmx_pcieepx_cfg022_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg022 cvmx_pcieepx_cfg022_t;
+
+/**
+ * cvmx_pcieep#_cfg023
+ *
+ * This register contains the twenty-fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg023 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg023_s {
+		u32 reserved_16_31 : 16;
+		u32 msimd : 16;
+	} s;
+	struct cvmx_pcieepx_cfg023_s cn52xx;
+	struct cvmx_pcieepx_cfg023_s cn52xxp1;
+	struct cvmx_pcieepx_cfg023_s cn56xx;
+	struct cvmx_pcieepx_cfg023_s cn56xxp1;
+	struct cvmx_pcieepx_cfg023_s cn61xx;
+	struct cvmx_pcieepx_cfg023_s cn63xx;
+	struct cvmx_pcieepx_cfg023_s cn63xxp1;
+	struct cvmx_pcieepx_cfg023_s cn66xx;
+	struct cvmx_pcieepx_cfg023_s cn68xx;
+	struct cvmx_pcieepx_cfg023_s cn68xxp1;
+	struct cvmx_pcieepx_cfg023_s cn70xx;
+	struct cvmx_pcieepx_cfg023_s cn70xxp1;
+	struct cvmx_pcieepx_cfg023_s cn73xx;
+	struct cvmx_pcieepx_cfg023_s cn78xx;
+	struct cvmx_pcieepx_cfg023_s cn78xxp1;
+	struct cvmx_pcieepx_cfg023_s cnf71xx;
+	struct cvmx_pcieepx_cfg023_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg023 cvmx_pcieepx_cfg023_t;
+
+/**
+ * cvmx_pcieep#_cfg024
+ *
+ * This register contains the twenty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg024 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg024_s {
+		u32 msimm : 32;
+	} s;
+	struct cvmx_pcieepx_cfg024_s cn70xx;
+	struct cvmx_pcieepx_cfg024_s cn70xxp1;
+	struct cvmx_pcieepx_cfg024_s cn73xx;
+	struct cvmx_pcieepx_cfg024_s cn78xx;
+	struct cvmx_pcieepx_cfg024_s cn78xxp1;
+	struct cvmx_pcieepx_cfg024_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg024 cvmx_pcieepx_cfg024_t;
+
+/**
+ * cvmx_pcieep#_cfg025
+ *
+ * This register contains the twenty-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg025 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg025_s {
+		u32 msimp : 32;
+	} s;
+	struct cvmx_pcieepx_cfg025_s cn70xx;
+	struct cvmx_pcieepx_cfg025_s cn70xxp1;
+	struct cvmx_pcieepx_cfg025_s cn73xx;
+	struct cvmx_pcieepx_cfg025_s cn78xx;
+	struct cvmx_pcieepx_cfg025_s cn78xxp1;
+	struct cvmx_pcieepx_cfg025_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg025 cvmx_pcieepx_cfg025_t;
+
+/**
+ * cvmx_pcieep#_cfg028
+ *
+ * This register contains the twenty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg028 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg028_s {
+		u32 reserved_30_31 : 2;
+		u32 imn : 5;
+		u32 si : 1;
+		u32 dpt : 4;
+		u32 pciecv : 4;
+		u32 ncp : 8;
+		u32 pcieid : 8;
+	} s;
+	struct cvmx_pcieepx_cfg028_s cn52xx;
+	struct cvmx_pcieepx_cfg028_s cn52xxp1;
+	struct cvmx_pcieepx_cfg028_s cn56xx;
+	struct cvmx_pcieepx_cfg028_s cn56xxp1;
+	struct cvmx_pcieepx_cfg028_s cn61xx;
+	struct cvmx_pcieepx_cfg028_s cn63xx;
+	struct cvmx_pcieepx_cfg028_s cn63xxp1;
+	struct cvmx_pcieepx_cfg028_s cn66xx;
+	struct cvmx_pcieepx_cfg028_s cn68xx;
+	struct cvmx_pcieepx_cfg028_s cn68xxp1;
+	struct cvmx_pcieepx_cfg028_s cn70xx;
+	struct cvmx_pcieepx_cfg028_s cn70xxp1;
+	struct cvmx_pcieepx_cfg028_s cn73xx;
+	struct cvmx_pcieepx_cfg028_s cn78xx;
+	struct cvmx_pcieepx_cfg028_s cn78xxp1;
+	struct cvmx_pcieepx_cfg028_s cnf71xx;
+	struct cvmx_pcieepx_cfg028_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg028 cvmx_pcieepx_cfg028_t;
+
+/**
+ * cvmx_pcieep#_cfg029
+ *
+ * This register contains the thirtieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg029 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg029_s {
+		u32 reserved_28_31 : 4;
+		u32 cspls : 2;
+		u32 csplv : 8;
+		u32 reserved_16_17 : 2;
+		u32 rber : 1;
+		u32 reserved_12_14 : 3;
+		u32 el1al : 3;
+		u32 el0al : 3;
+		u32 etfs : 1;
+		u32 pfs : 2;
+		u32 mpss : 3;
+	} s;
+	struct cvmx_pcieepx_cfg029_s cn52xx;
+	struct cvmx_pcieepx_cfg029_s cn52xxp1;
+	struct cvmx_pcieepx_cfg029_s cn56xx;
+	struct cvmx_pcieepx_cfg029_s cn56xxp1;
+	struct cvmx_pcieepx_cfg029_cn61xx {
+		u32 reserved_29_31 : 3;
+		u32 flr_cap : 1;
+		u32 cspls : 2;
+		u32 csplv : 8;
+		u32 reserved_16_17 : 2;
+		u32 rber : 1;
+		u32 reserved_12_14 : 3;
+		u32 el1al : 3;
+		u32 el0al : 3;
+		u32 etfs : 1;
+		u32 pfs : 2;
+		u32 mpss : 3;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg029_s cn63xx;
+	struct cvmx_pcieepx_cfg029_s cn63xxp1;
+	struct cvmx_pcieepx_cfg029_cn66xx {
+		u32 reserved_29_31 : 3;
+		u32 flr : 1;
+		u32 cspls : 2;
+		u32 csplv : 8;
+		u32 reserved_16_17 : 2;
+		u32 rber : 1;
+		u32 reserved_12_14 : 3;
+		u32 el1al : 3;
+		u32 el0al : 3;
+		u32 etfs : 1;
+		u32 pfs : 2;
+		u32 mpss : 3;
+	} cn66xx;
+	struct cvmx_pcieepx_cfg029_cn66xx cn68xx;
+	struct cvmx_pcieepx_cfg029_cn66xx cn68xxp1;
+	struct cvmx_pcieepx_cfg029_cn61xx cn70xx;
+	struct cvmx_pcieepx_cfg029_cn61xx cn70xxp1;
+	struct cvmx_pcieepx_cfg029_cn61xx cn73xx;
+	struct cvmx_pcieepx_cfg029_cn61xx cn78xx;
+	struct cvmx_pcieepx_cfg029_cn61xx cn78xxp1;
+	struct cvmx_pcieepx_cfg029_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg029_cn61xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg029 cvmx_pcieepx_cfg029_t;
+
+/**
+ * cvmx_pcieep#_cfg030
+ *
+ * This register contains the thirty-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg030 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg030_s {
+		u32 reserved_22_31 : 10;
+		u32 tp : 1;
+		u32 ap_d : 1;
+		u32 ur_d : 1;
+		u32 fe_d : 1;
+		u32 nfe_d : 1;
+		u32 ce_d : 1;
+		u32 i_flr : 1;
+		u32 mrrs : 3;
+		u32 ns_en : 1;
+		u32 ap_en : 1;
+		u32 pf_en : 1;
+		u32 etf_en : 1;
+		u32 mps : 3;
+		u32 ro_en : 1;
+		u32 ur_en : 1;
+		u32 fe_en : 1;
+		u32 nfe_en : 1;
+		u32 ce_en : 1;
+	} s;
+	struct cvmx_pcieepx_cfg030_cn52xx {
+		u32 reserved_22_31 : 10;
+		u32 tp : 1;
+		u32 ap_d : 1;
+		u32 ur_d : 1;
+		u32 fe_d : 1;
+		u32 nfe_d : 1;
+		u32 ce_d : 1;
+		u32 reserved_15_15 : 1;
+		u32 mrrs : 3;
+		u32 ns_en : 1;
+		u32 ap_en : 1;
+		u32 pf_en : 1;
+		u32 etf_en : 1;
+		u32 mps : 3;
+		u32 ro_en : 1;
+		u32 ur_en : 1;
+		u32 fe_en : 1;
+		u32 nfe_en : 1;
+		u32 ce_en : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg030_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg030_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg030_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg030_s cn61xx;
+	struct cvmx_pcieepx_cfg030_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg030_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg030_s cn66xx;
+	struct cvmx_pcieepx_cfg030_s cn68xx;
+	struct cvmx_pcieepx_cfg030_s cn68xxp1;
+	struct cvmx_pcieepx_cfg030_s cn70xx;
+	struct cvmx_pcieepx_cfg030_s cn70xxp1;
+	struct cvmx_pcieepx_cfg030_s cn73xx;
+	struct cvmx_pcieepx_cfg030_s cn78xx;
+	struct cvmx_pcieepx_cfg030_s cn78xxp1;
+	struct cvmx_pcieepx_cfg030_s cnf71xx;
+	struct cvmx_pcieepx_cfg030_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg030 cvmx_pcieepx_cfg030_t;
+
+/**
+ * cvmx_pcieep#_cfg031
+ *
+ * This register contains the thirty-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg031 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg031_s {
+		u32 pnum : 8;
+		u32 reserved_23_23 : 1;
+		u32 aspm : 1;
+		u32 lbnc : 1;
+		u32 dllarc : 1;
+		u32 sderc : 1;
+		u32 cpm : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 aslpms : 2;
+		u32 mlw : 6;
+		u32 mls : 4;
+	} s;
+	struct cvmx_pcieepx_cfg031_cn52xx {
+		u32 pnum : 8;
+		u32 reserved_22_23 : 2;
+		u32 lbnc : 1;
+		u32 dllarc : 1;
+		u32 sderc : 1;
+		u32 cpm : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 aslpms : 2;
+		u32 mlw : 6;
+		u32 mls : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg031_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg031_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg031_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg031_s cn61xx;
+	struct cvmx_pcieepx_cfg031_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg031_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg031_s cn66xx;
+	struct cvmx_pcieepx_cfg031_s cn68xx;
+	struct cvmx_pcieepx_cfg031_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg031_s cn70xx;
+	struct cvmx_pcieepx_cfg031_s cn70xxp1;
+	struct cvmx_pcieepx_cfg031_s cn73xx;
+	struct cvmx_pcieepx_cfg031_s cn78xx;
+	struct cvmx_pcieepx_cfg031_s cn78xxp1;
+	struct cvmx_pcieepx_cfg031_s cnf71xx;
+	struct cvmx_pcieepx_cfg031_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg031 cvmx_pcieepx_cfg031_t;
+
+/**
+ * cvmx_pcieep#_cfg032
+ *
+ * This register contains the thirty-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg032 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg032_s {
+		u32 lab : 1;
+		u32 lbm : 1;
+		u32 dlla : 1;
+		u32 scc : 1;
+		u32 lt : 1;
+		u32 reserved_26_26 : 1;
+		u32 nlw : 6;
+		u32 ls : 4;
+		u32 reserved_12_15 : 4;
+		u32 lab_int_enb : 1;
+		u32 lbm_int_enb : 1;
+		u32 hawd : 1;
+		u32 ecpm : 1;
+		u32 es : 1;
+		u32 ccc : 1;
+		u32 rl : 1;
+		u32 ld : 1;
+		u32 rcb : 1;
+		u32 reserved_2_2 : 1;
+		u32 aslpc : 2;
+	} s;
+	struct cvmx_pcieepx_cfg032_cn52xx {
+		u32 reserved_30_31 : 2;
+		u32 dlla : 1;
+		u32 scc : 1;
+		u32 lt : 1;
+		u32 reserved_26_26 : 1;
+		u32 nlw : 6;
+		u32 ls : 4;
+		u32 reserved_10_15 : 6;
+		u32 hawd : 1;
+		u32 ecpm : 1;
+		u32 es : 1;
+		u32 ccc : 1;
+		u32 rl : 1;
+		u32 ld : 1;
+		u32 rcb : 1;
+		u32 reserved_2_2 : 1;
+		u32 aslpc : 2;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg032_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg032_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg032_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg032_s cn61xx;
+	struct cvmx_pcieepx_cfg032_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg032_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg032_s cn66xx;
+	struct cvmx_pcieepx_cfg032_s cn68xx;
+	struct cvmx_pcieepx_cfg032_cn68xxp1 {
+		u32 reserved_30_31 : 2;
+		u32 dlla : 1;
+		u32 scc : 1;
+		u32 lt : 1;
+		u32 reserved_26_26 : 1;
+		u32 nlw : 6;
+		u32 ls : 4;
+		u32 reserved_12_15 : 4;
+		u32 lab_int_enb : 1;
+		u32 lbm_int_enb : 1;
+		u32 hawd : 1;
+		u32 ecpm : 1;
+		u32 es : 1;
+		u32 ccc : 1;
+		u32 rl : 1;
+		u32 ld : 1;
+		u32 rcb : 1;
+		u32 reserved_2_2 : 1;
+		u32 aslpc : 2;
+	} cn68xxp1;
+	struct cvmx_pcieepx_cfg032_s cn70xx;
+	struct cvmx_pcieepx_cfg032_s cn70xxp1;
+	struct cvmx_pcieepx_cfg032_s cn73xx;
+	struct cvmx_pcieepx_cfg032_s cn78xx;
+	struct cvmx_pcieepx_cfg032_s cn78xxp1;
+	struct cvmx_pcieepx_cfg032_s cnf71xx;
+	struct cvmx_pcieepx_cfg032_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg032 cvmx_pcieepx_cfg032_t;
+
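+/*
+ * Usage sketch (illustrative only): cfg032 is the link control/status
+ * register, so the negotiated width and speed of a trained link can be
+ * read back like this (`raw' is assumed; LS is the usual PCIe speed
+ * encoding, 1 = 2.5 GT/s, 2 = 5 GT/s, ...):
+ *
+ *	cvmx_pcieepx_cfg032_t link;
+ *
+ *	link.u32 = raw;
+ *	printf("link x%d gen%d%s\n", link.s.nlw, link.s.ls,
+ *	       link.s.lt ? " (training)" : "");
+ */
+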
+/**
+ * cvmx_pcieep#_cfg033
+ *
+ * PCIE_CFG033 = Thirty-fourth 32-bits of PCIE type 0 config space
+ * (Slot Capabilities Register)
+ */
+union cvmx_pcieepx_cfg033 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg033_s {
+		u32 ps_num : 13;
+		u32 nccs : 1;
+		u32 emip : 1;
+		u32 sp_ls : 2;
+		u32 sp_lv : 8;
+		u32 hp_c : 1;
+		u32 hp_s : 1;
+		u32 pip : 1;
+		u32 aip : 1;
+		u32 mrlsp : 1;
+		u32 pcp : 1;
+		u32 abp : 1;
+	} s;
+	struct cvmx_pcieepx_cfg033_s cn52xx;
+	struct cvmx_pcieepx_cfg033_s cn52xxp1;
+	struct cvmx_pcieepx_cfg033_s cn56xx;
+	struct cvmx_pcieepx_cfg033_s cn56xxp1;
+	struct cvmx_pcieepx_cfg033_s cn63xx;
+	struct cvmx_pcieepx_cfg033_s cn63xxp1;
+};
+
+typedef union cvmx_pcieepx_cfg033 cvmx_pcieepx_cfg033_t;
+
+/**
+ * cvmx_pcieep#_cfg034
+ *
+ * PCIE_CFG034 = Thirty-fifth 32-bits of PCIE type 0 config space
+ * (Slot Control Register/Slot Status Register)
+ */
+union cvmx_pcieepx_cfg034 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg034_s {
+		u32 reserved_25_31 : 7;
+		u32 dlls_c : 1;
+		u32 emis : 1;
+		u32 pds : 1;
+		u32 mrlss : 1;
+		u32 ccint_d : 1;
+		u32 pd_c : 1;
+		u32 mrls_c : 1;
+		u32 pf_d : 1;
+		u32 abp_d : 1;
+		u32 reserved_13_15 : 3;
+		u32 dlls_en : 1;
+		u32 emic : 1;
+		u32 pcc : 1;
+		u32 pic : 2;
+		u32 aic : 2;
+		u32 hpint_en : 1;
+		u32 ccint_en : 1;
+		u32 pd_en : 1;
+		u32 mrls_en : 1;
+		u32 pf_en : 1;
+		u32 abp_en : 1;
+	} s;
+	struct cvmx_pcieepx_cfg034_s cn52xx;
+	struct cvmx_pcieepx_cfg034_s cn52xxp1;
+	struct cvmx_pcieepx_cfg034_s cn56xx;
+	struct cvmx_pcieepx_cfg034_s cn56xxp1;
+	struct cvmx_pcieepx_cfg034_s cn63xx;
+	struct cvmx_pcieepx_cfg034_s cn63xxp1;
+};
+
+typedef union cvmx_pcieepx_cfg034 cvmx_pcieepx_cfg034_t;
+
+/**
+ * cvmx_pcieep#_cfg037
+ *
+ * This register contains the thirty-eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg037 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg037_s {
+		u32 reserved_24_31 : 8;
+		u32 meetp : 2;
+		u32 eetps : 1;
+		u32 effs : 1;
+		u32 obffs : 2;
+		u32 reserved_12_17 : 6;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} s;
+	struct cvmx_pcieepx_cfg037_cn52xx {
+		u32 reserved_5_31 : 27;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg037_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg037_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg037_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg037_cn61xx {
+		u32 reserved_14_31 : 18;
+		u32 tph : 2;
+		u32 reserved_11_11 : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg037_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg037_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg037_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg037_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg037_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg037_cn61xx cn70xx;
+	struct cvmx_pcieepx_cfg037_cn61xx cn70xxp1;
+	struct cvmx_pcieepx_cfg037_cn73xx {
+		u32 reserved_24_31 : 8;
+		u32 meetp : 2;
+		u32 eetps : 1;
+		u32 effs : 1;
+		u32 obffs : 2;
+		u32 reserved_14_17 : 4;
+		u32 tphs : 2;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg037_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg037_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg037_cnf71xx {
+		u32 reserved_20_31 : 12;
+		u32 obffs : 2;
+		u32 reserved_14_17 : 4;
+		u32 tphs : 2;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cnf71xx;
+	struct cvmx_pcieepx_cfg037_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg037 cvmx_pcieepx_cfg037_t;
+
+/**
+ * cvmx_pcieep#_cfg038
+ *
+ * This register contains the thirty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg038 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg038_s {
+		u32 reserved_16_31 : 16;
+		u32 eetpb : 1;
+		u32 obffe : 2;
+		u32 reserved_11_12 : 2;
+		u32 ltre : 1;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} s;
+	struct cvmx_pcieepx_cfg038_cn52xx {
+		u32 reserved_5_31 : 27;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg038_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg038_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg038_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg038_cn61xx {
+		u32 reserved_10_31 : 22;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg038_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg038_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg038_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg038_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg038_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg038_cn61xx cn70xx;
+	struct cvmx_pcieepx_cfg038_cn61xx cn70xxp1;
+	struct cvmx_pcieepx_cfg038_s cn73xx;
+	struct cvmx_pcieepx_cfg038_s cn78xx;
+	struct cvmx_pcieepx_cfg038_s cn78xxp1;
+	struct cvmx_pcieepx_cfg038_cnf71xx {
+		u32 reserved_15_31 : 17;
+		u32 obffe : 2;
+		u32 reserved_11_12 : 2;
+		u32 ltre : 1;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cnf71xx;
+	struct cvmx_pcieepx_cfg038_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg038 cvmx_pcieepx_cfg038_t;
+
+/**
+ * cvmx_pcieep#_cfg039
+ *
+ * This register contains the fortieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg039 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg039_s {
+		u32 reserved_9_31 : 23;
+		u32 cls : 1;
+		u32 slsv : 7;
+		u32 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pcieepx_cfg039_cn52xx {
+		u32 reserved_0_31 : 32;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg039_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg039_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg039_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg039_s cn61xx;
+	struct cvmx_pcieepx_cfg039_s cn63xx;
+	struct cvmx_pcieepx_cfg039_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg039_s cn66xx;
+	struct cvmx_pcieepx_cfg039_s cn68xx;
+	struct cvmx_pcieepx_cfg039_s cn68xxp1;
+	struct cvmx_pcieepx_cfg039_s cn70xx;
+	struct cvmx_pcieepx_cfg039_s cn70xxp1;
+	struct cvmx_pcieepx_cfg039_s cn73xx;
+	struct cvmx_pcieepx_cfg039_s cn78xx;
+	struct cvmx_pcieepx_cfg039_s cn78xxp1;
+	struct cvmx_pcieepx_cfg039_s cnf71xx;
+	struct cvmx_pcieepx_cfg039_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg039 cvmx_pcieepx_cfg039_t;
+
+/**
+ * cvmx_pcieep#_cfg040
+ *
+ * This register contains the forty-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg040 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg040_s {
+		u32 reserved_22_31 : 10;
+		u32 ler : 1;
+		u32 ep3s : 1;
+		u32 ep2s : 1;
+		u32 ep1s : 1;
+		u32 eqc : 1;
+		u32 cdl : 1;
+		u32 cde : 4;
+		u32 csos : 1;
+		u32 emc : 1;
+		u32 tm : 3;
+		u32 sde : 1;
+		u32 hasd : 1;
+		u32 ec : 1;
+		u32 tls : 4;
+	} s;
+	struct cvmx_pcieepx_cfg040_cn52xx {
+		u32 reserved_0_31 : 32;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg040_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg040_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg040_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg040_cn61xx {
+		u32 reserved_17_31 : 15;
+		u32 cdl : 1;
+		u32 reserved_13_15 : 3;
+		u32 cde : 1;
+		u32 csos : 1;
+		u32 emc : 1;
+		u32 tm : 3;
+		u32 sde : 1;
+		u32 hasd : 1;
+		u32 ec : 1;
+		u32 tls : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg040_cn61xx cn63xx;
+	struct cvmx_pcieepx_cfg040_cn61xx cn63xxp1;
+	struct cvmx_pcieepx_cfg040_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg040_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg040_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg040_cn61xx cn70xx;
+	struct cvmx_pcieepx_cfg040_cn61xx cn70xxp1;
+	struct cvmx_pcieepx_cfg040_s cn73xx;
+	struct cvmx_pcieepx_cfg040_s cn78xx;
+	struct cvmx_pcieepx_cfg040_s cn78xxp1;
+	struct cvmx_pcieepx_cfg040_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg040_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg040 cvmx_pcieepx_cfg040_t;
+
+/**
+ * cvmx_pcieep#_cfg041
+ *
+ * PCIE_CFG041 = Forty-second 32-bits of PCIE type 0 config space
+ * (Slot Capabilities 2 Register)
+ */
+union cvmx_pcieepx_cfg041 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg041_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg041_s cn52xx;
+	struct cvmx_pcieepx_cfg041_s cn52xxp1;
+	struct cvmx_pcieepx_cfg041_s cn56xx;
+	struct cvmx_pcieepx_cfg041_s cn56xxp1;
+	struct cvmx_pcieepx_cfg041_s cn63xx;
+	struct cvmx_pcieepx_cfg041_s cn63xxp1;
+};
+
+typedef union cvmx_pcieepx_cfg041 cvmx_pcieepx_cfg041_t;
+
+/**
+ * cvmx_pcieep#_cfg042
+ *
+ * PCIE_CFG042 = Forty-third 32-bits of PCIE type 0 config space
+ * (Slot Control 2 Register/Slot Status 2 Register)
+ */
+union cvmx_pcieepx_cfg042 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg042_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg042_s cn52xx;
+	struct cvmx_pcieepx_cfg042_s cn52xxp1;
+	struct cvmx_pcieepx_cfg042_s cn56xx;
+	struct cvmx_pcieepx_cfg042_s cn56xxp1;
+	struct cvmx_pcieepx_cfg042_s cn63xx;
+	struct cvmx_pcieepx_cfg042_s cn63xxp1;
+};
+
+typedef union cvmx_pcieepx_cfg042 cvmx_pcieepx_cfg042_t;
+
+/**
+ * cvmx_pcieep#_cfg044
+ *
+ * This register contains the forty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg044 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg044_s {
+		u32 msixen : 1;
+		u32 funm : 1;
+		u32 reserved_27_29 : 3;
+		u32 msixts : 11;
+		u32 ncp : 8;
+		u32 msixcid : 8;
+	} s;
+	struct cvmx_pcieepx_cfg044_s cn73xx;
+	struct cvmx_pcieepx_cfg044_s cn78xx;
+	struct cvmx_pcieepx_cfg044_s cn78xxp1;
+	struct cvmx_pcieepx_cfg044_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg044 cvmx_pcieepx_cfg044_t;
+
+/**
+ * cvmx_pcieep#_cfg045
+ *
+ * This register contains the forty-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg045 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg045_s {
+		u32 msixtoffs : 29;
+		u32 msixtbir : 3;
+	} s;
+	struct cvmx_pcieepx_cfg045_s cn73xx;
+	struct cvmx_pcieepx_cfg045_s cn78xx;
+	struct cvmx_pcieepx_cfg045_s cn78xxp1;
+	struct cvmx_pcieepx_cfg045_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg045 cvmx_pcieepx_cfg045_t;
+
+/**
+ * cvmx_pcieep#_cfg046
+ *
+ * This register contains the forty-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg046 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg046_s {
+		u32 msixpoffs : 29;
+		u32 msixpbir : 3;
+	} s;
+	struct cvmx_pcieepx_cfg046_s cn73xx;
+	struct cvmx_pcieepx_cfg046_s cn78xx;
+	struct cvmx_pcieepx_cfg046_s cn78xxp1;
+	struct cvmx_pcieepx_cfg046_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg046 cvmx_pcieepx_cfg046_t;
+
+/**
+ * cvmx_pcieep#_cfg064
+ *
+ * This register contains the sixty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg064 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg064_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pcieepx_cfg064_s cn52xx;
+	struct cvmx_pcieepx_cfg064_s cn52xxp1;
+	struct cvmx_pcieepx_cfg064_s cn56xx;
+	struct cvmx_pcieepx_cfg064_s cn56xxp1;
+	struct cvmx_pcieepx_cfg064_s cn61xx;
+	struct cvmx_pcieepx_cfg064_s cn63xx;
+	struct cvmx_pcieepx_cfg064_s cn63xxp1;
+	struct cvmx_pcieepx_cfg064_s cn66xx;
+	struct cvmx_pcieepx_cfg064_s cn68xx;
+	struct cvmx_pcieepx_cfg064_s cn68xxp1;
+	struct cvmx_pcieepx_cfg064_s cn70xx;
+	struct cvmx_pcieepx_cfg064_s cn70xxp1;
+	struct cvmx_pcieepx_cfg064_s cn73xx;
+	struct cvmx_pcieepx_cfg064_s cn78xx;
+	struct cvmx_pcieepx_cfg064_s cn78xxp1;
+	struct cvmx_pcieepx_cfg064_s cnf71xx;
+	struct cvmx_pcieepx_cfg064_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg064 cvmx_pcieepx_cfg064_t;
+
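+/*
+ * Usage sketch (illustrative only): cfg064 is a PCIe extended capability
+ * header (capability ID, version, next-capability offset), so the extended
+ * capability chain can be walked generically; `read_cfg' stands in for
+ * whatever config-space read primitive the caller uses:
+ *
+ *	cvmx_pcieepx_cfg064_t hdr;
+ *	u32 pos = 0x100;		// extended caps start at 0x100
+ *
+ *	do {
+ *		hdr.u32 = read_cfg(pos);
+ *		printf("ext cap 0x%04x v%d @ 0x%03x\n",
+ *		       hdr.s.pcieec, hdr.s.cv, pos);
+ *		pos = hdr.s.nco;
+ *	} while (pos);
+ */
+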
+/**
+ * cvmx_pcieep#_cfg065
+ *
+ * This register contains the sixty-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg065 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg065_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pcieepx_cfg065_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg065_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg065_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg065_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg065_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_21_23 : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg065_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg065_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg065_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg065_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg065_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg065_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg065_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg065_cn73xx {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg065_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg065_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg065_cnf71xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cnf71xx;
+	struct cvmx_pcieepx_cfg065_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg065 cvmx_pcieepx_cfg065_t;
+
+/**
+ * cvmx_pcieep#_cfg066
+ *
+ * This register contains the sixty-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg066 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg066_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbem : 1;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pcieepx_cfg066_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg066_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg066_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg066_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg066_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombm : 1;
+		u32 reserved_21_23 : 3;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg066_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg066_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg066_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg066_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg066_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg066_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg066_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg066_cn73xx {
+		u32 reserved_26_31 : 6;
+		u32 tpbem : 1;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg066_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg066_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg066_cnf71xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cnf71xx;
+	struct cvmx_pcieepx_cfg066_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg066 cvmx_pcieepx_cfg066_t;
+
+/**
+ * cvmx_pcieep#_cfg067
+ *
+ * This register contains the sixty-eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg067 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg067_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pcieepx_cfg067_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg067_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg067_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg067_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg067_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_21_23 : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg067_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg067_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg067_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg067_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg067_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg067_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg067_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg067_cn73xx {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg067_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg067_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg067_cnf71xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_5_11 : 7;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cnf71xx;
+	struct cvmx_pcieepx_cfg067_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg067 cvmx_pcieepx_cfg067_t;
+
+/**
+ * cvmx_pcieep#_cfg068
+ *
+ * This register contains the sixty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg068 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg068_s {
+		u32 reserved_15_31 : 17;
+		u32 cies : 1;
+		u32 anfes : 1;
+		u32 rtts : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrs : 1;
+		u32 bdllps : 1;
+		u32 btlps : 1;
+		u32 reserved_1_5 : 5;
+		u32 res : 1;
+	} s;
+	struct cvmx_pcieepx_cfg068_cn52xx {
+		u32 reserved_14_31 : 18;
+		u32 anfes : 1;
+		u32 rtts : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrs : 1;
+		u32 bdllps : 1;
+		u32 btlps : 1;
+		u32 reserved_1_5 : 5;
+		u32 res : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg068_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg068_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg068_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg068_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg068_s cn70xx;
+	struct cvmx_pcieepx_cfg068_s cn70xxp1;
+	struct cvmx_pcieepx_cfg068_s cn73xx;
+	struct cvmx_pcieepx_cfg068_s cn78xx;
+	struct cvmx_pcieepx_cfg068_s cn78xxp1;
+	struct cvmx_pcieepx_cfg068_s cnf71xx;
+	struct cvmx_pcieepx_cfg068_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg068 cvmx_pcieepx_cfg068_t;
+
+/**
+ * cvmx_pcieep#_cfg069
+ *
+ * This register contains the seventieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg069 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg069_s {
+		u32 reserved_15_31 : 17;
+		u32 ciem : 1;
+		u32 anfem : 1;
+		u32 rttm : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrm : 1;
+		u32 bdllpm : 1;
+		u32 btlpm : 1;
+		u32 reserved_1_5 : 5;
+		u32 rem : 1;
+	} s;
+	struct cvmx_pcieepx_cfg069_cn52xx {
+		u32 reserved_14_31 : 18;
+		u32 anfem : 1;
+		u32 rttm : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrm : 1;
+		u32 bdllpm : 1;
+		u32 btlpm : 1;
+		u32 reserved_1_5 : 5;
+		u32 rem : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg069_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg069_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg069_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg069_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg069_s cn70xx;
+	struct cvmx_pcieepx_cfg069_s cn70xxp1;
+	struct cvmx_pcieepx_cfg069_s cn73xx;
+	struct cvmx_pcieepx_cfg069_s cn78xx;
+	struct cvmx_pcieepx_cfg069_s cn78xxp1;
+	struct cvmx_pcieepx_cfg069_s cnf71xx;
+	struct cvmx_pcieepx_cfg069_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg069 cvmx_pcieepx_cfg069_t;
+
+/**
+ * cvmx_pcieep#_cfg070
+ *
+ * This register contains the seventy-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg070 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg070_s {
+		u32 reserved_12_31 : 20;
+		u32 tlp_plp : 1;
+		u32 reserved_9_10 : 2;
+		u32 ce : 1;
+		u32 cc : 1;
+		u32 ge : 1;
+		u32 gc : 1;
+		u32 fep : 5;
+	} s;
+	struct cvmx_pcieepx_cfg070_cn52xx {
+		u32 reserved_9_31 : 23;
+		u32 ce : 1;
+		u32 cc : 1;
+		u32 ge : 1;
+		u32 gc : 1;
+		u32 fep : 5;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg070_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg070_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg070_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg070_cn52xx cn70xx;
+	struct cvmx_pcieepx_cfg070_cn52xx cn70xxp1;
+	struct cvmx_pcieepx_cfg070_s cn73xx;
+	struct cvmx_pcieepx_cfg070_s cn78xx;
+	struct cvmx_pcieepx_cfg070_s cn78xxp1;
+	struct cvmx_pcieepx_cfg070_cn52xx cnf71xx;
+	struct cvmx_pcieepx_cfg070_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg070 cvmx_pcieepx_cfg070_t;
+
+/**
+ * cvmx_pcieep#_cfg071
+ *
+ * This register contains the seventy-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg071 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg071_s {
+		u32 dword1 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg071_s cn52xx;
+	struct cvmx_pcieepx_cfg071_s cn52xxp1;
+	struct cvmx_pcieepx_cfg071_s cn56xx;
+	struct cvmx_pcieepx_cfg071_s cn56xxp1;
+	struct cvmx_pcieepx_cfg071_s cn61xx;
+	struct cvmx_pcieepx_cfg071_s cn63xx;
+	struct cvmx_pcieepx_cfg071_s cn63xxp1;
+	struct cvmx_pcieepx_cfg071_s cn66xx;
+	struct cvmx_pcieepx_cfg071_s cn68xx;
+	struct cvmx_pcieepx_cfg071_s cn68xxp1;
+	struct cvmx_pcieepx_cfg071_s cn70xx;
+	struct cvmx_pcieepx_cfg071_s cn70xxp1;
+	struct cvmx_pcieepx_cfg071_s cn73xx;
+	struct cvmx_pcieepx_cfg071_s cn78xx;
+	struct cvmx_pcieepx_cfg071_s cn78xxp1;
+	struct cvmx_pcieepx_cfg071_s cnf71xx;
+	struct cvmx_pcieepx_cfg071_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg071 cvmx_pcieepx_cfg071_t;
+
+/**
+ * cvmx_pcieep#_cfg072
+ *
+ * This register contains the seventy-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg072 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg072_s {
+		u32 dword2 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg072_s cn52xx;
+	struct cvmx_pcieepx_cfg072_s cn52xxp1;
+	struct cvmx_pcieepx_cfg072_s cn56xx;
+	struct cvmx_pcieepx_cfg072_s cn56xxp1;
+	struct cvmx_pcieepx_cfg072_s cn61xx;
+	struct cvmx_pcieepx_cfg072_s cn63xx;
+	struct cvmx_pcieepx_cfg072_s cn63xxp1;
+	struct cvmx_pcieepx_cfg072_s cn66xx;
+	struct cvmx_pcieepx_cfg072_s cn68xx;
+	struct cvmx_pcieepx_cfg072_s cn68xxp1;
+	struct cvmx_pcieepx_cfg072_s cn70xx;
+	struct cvmx_pcieepx_cfg072_s cn70xxp1;
+	struct cvmx_pcieepx_cfg072_s cn73xx;
+	struct cvmx_pcieepx_cfg072_s cn78xx;
+	struct cvmx_pcieepx_cfg072_s cn78xxp1;
+	struct cvmx_pcieepx_cfg072_s cnf71xx;
+	struct cvmx_pcieepx_cfg072_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg072 cvmx_pcieepx_cfg072_t;
+
+/**
+ * cvmx_pcieep#_cfg073
+ *
+ * This register contains the seventy-fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg073 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg073_s {
+		u32 dword3 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg073_s cn52xx;
+	struct cvmx_pcieepx_cfg073_s cn52xxp1;
+	struct cvmx_pcieepx_cfg073_s cn56xx;
+	struct cvmx_pcieepx_cfg073_s cn56xxp1;
+	struct cvmx_pcieepx_cfg073_s cn61xx;
+	struct cvmx_pcieepx_cfg073_s cn63xx;
+	struct cvmx_pcieepx_cfg073_s cn63xxp1;
+	struct cvmx_pcieepx_cfg073_s cn66xx;
+	struct cvmx_pcieepx_cfg073_s cn68xx;
+	struct cvmx_pcieepx_cfg073_s cn68xxp1;
+	struct cvmx_pcieepx_cfg073_s cn70xx;
+	struct cvmx_pcieepx_cfg073_s cn70xxp1;
+	struct cvmx_pcieepx_cfg073_s cn73xx;
+	struct cvmx_pcieepx_cfg073_s cn78xx;
+	struct cvmx_pcieepx_cfg073_s cn78xxp1;
+	struct cvmx_pcieepx_cfg073_s cnf71xx;
+	struct cvmx_pcieepx_cfg073_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg073 cvmx_pcieepx_cfg073_t;
+
+/**
+ * cvmx_pcieep#_cfg074
+ *
+ * This register contains the seventy-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg074 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg074_s {
+		u32 dword4 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg074_s cn52xx;
+	struct cvmx_pcieepx_cfg074_s cn52xxp1;
+	struct cvmx_pcieepx_cfg074_s cn56xx;
+	struct cvmx_pcieepx_cfg074_s cn56xxp1;
+	struct cvmx_pcieepx_cfg074_s cn61xx;
+	struct cvmx_pcieepx_cfg074_s cn63xx;
+	struct cvmx_pcieepx_cfg074_s cn63xxp1;
+	struct cvmx_pcieepx_cfg074_s cn66xx;
+	struct cvmx_pcieepx_cfg074_s cn68xx;
+	struct cvmx_pcieepx_cfg074_s cn68xxp1;
+	struct cvmx_pcieepx_cfg074_s cn70xx;
+	struct cvmx_pcieepx_cfg074_s cn70xxp1;
+	struct cvmx_pcieepx_cfg074_s cn73xx;
+	struct cvmx_pcieepx_cfg074_s cn78xx;
+	struct cvmx_pcieepx_cfg074_s cn78xxp1;
+	struct cvmx_pcieepx_cfg074_s cnf71xx;
+	struct cvmx_pcieepx_cfg074_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg074 cvmx_pcieepx_cfg074_t;
+
+/**
+ * cvmx_pcieep#_cfg078
+ *
+ * This register contains the seventy-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg078 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg078_s {
+		u32 tlp_pfx_log : 32;
+	} s;
+	struct cvmx_pcieepx_cfg078_s cn73xx;
+	struct cvmx_pcieepx_cfg078_s cn78xx;
+	struct cvmx_pcieepx_cfg078_s cn78xxp1;
+	struct cvmx_pcieepx_cfg078_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg078 cvmx_pcieepx_cfg078_t;
+
+/**
+ * cvmx_pcieep#_cfg082
+ *
+ * This register contains the eighty-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg082 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg082_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 reserved_0_15 : 16;
+	} s;
+	struct cvmx_pcieepx_cfg082_cn70xx {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg082_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg082_cn73xx {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 ariid : 16;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg082_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg082_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg082_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg082 cvmx_pcieepx_cfg082_t;
+
+/**
+ * cvmx_pcieep#_cfg083
+ *
+ * This register contains the eighty-fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg083 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg083_s {
+		u32 reserved_2_31 : 30;
+		u32 acsfgc : 1;
+		u32 mfvcfgc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg083_cn70xx {
+		u32 reserved_26_31 : 6;
+		u32 srs : 22;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg083_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg083_cn73xx {
+		u32 reserved_23_31 : 9;
+		u32 fg : 3;
+		u32 reserved_18_19 : 2;
+		u32 acsfge : 1;
+		u32 mfvcfge : 1;
+		u32 nfn : 8;
+		u32 reserved_2_7 : 6;
+		u32 acsfgc : 1;
+		u32 mfvcfgc : 1;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg083_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg083_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg083_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg083 cvmx_pcieepx_cfg083_t;
+
+/**
+ * cvmx_pcieep#_cfg084
+ *
+ * This register contains the eighty-fifth 32-bits of PCIe type 0 configuration space
+ * (PCI Express Resizable BAR (RBAR) Control Register).
+ *
+ */
+union cvmx_pcieepx_cfg084 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg084_s {
+		u32 reserved_13_31 : 19;
+		u32 rbars : 5;
+		u32 nrbar : 3;
+		u32 reserved_3_4 : 2;
+		u32 rbari : 3;
+	} s;
+	struct cvmx_pcieepx_cfg084_s cn70xx;
+	struct cvmx_pcieepx_cfg084_s cn70xxp1;
+};
+
+typedef union cvmx_pcieepx_cfg084 cvmx_pcieepx_cfg084_t;
+
+/**
+ * cvmx_pcieep#_cfg086
+ *
+ * This register contains the eighty-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg086 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg086_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pcieepx_cfg086_s cn73xx;
+	struct cvmx_pcieepx_cfg086_s cn78xx;
+	struct cvmx_pcieepx_cfg086_s cn78xxp1;
+	struct cvmx_pcieepx_cfg086_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg086 cvmx_pcieepx_cfg086_t;
+
+/**
+ * cvmx_pcieep#_cfg087
+ *
+ * This register contains the eighty-eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg087 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg087_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg087_s cn73xx;
+	struct cvmx_pcieepx_cfg087_s cn78xx;
+	struct cvmx_pcieepx_cfg087_s cn78xxp1;
+	struct cvmx_pcieepx_cfg087_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg087 cvmx_pcieepx_cfg087_t;
+
+/**
+ * cvmx_pcieep#_cfg088
+ *
+ * This register contains the eighty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg088 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg088_s {
+		u32 reserved_8_31 : 24;
+		u32 les : 8;
+	} s;
+	struct cvmx_pcieepx_cfg088_s cn73xx;
+	struct cvmx_pcieepx_cfg088_s cn78xx;
+	struct cvmx_pcieepx_cfg088_s cn78xxp1;
+	struct cvmx_pcieepx_cfg088_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg088 cvmx_pcieepx_cfg088_t;
+
+/**
+ * cvmx_pcieep#_cfg089
+ *
+ * This register contains the ninetieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg089 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg089_s {
+		u32 reserved_31_31 : 1;
+		u32 l1dph : 3;
+		u32 l1dtp : 4;
+		u32 reserved_15_23 : 9;
+		u32 l0dph : 3;
+		u32 l0dtp : 4;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg089_s cn73xx;
+	struct cvmx_pcieepx_cfg089_s cn78xx;
+	struct cvmx_pcieepx_cfg089_s cn78xxp1;
+	struct cvmx_pcieepx_cfg089_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg089 cvmx_pcieepx_cfg089_t;
+
+/**
+ * cvmx_pcieep#_cfg090
+ *
+ * This register contains the ninety-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg090 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg090_s {
+		u32 reserved_31_31 : 1;
+		u32 l3dph : 3;
+		u32 l3dtp : 4;
+		u32 reserved_15_23 : 9;
+		u32 l2dph : 3;
+		u32 l2dtp : 4;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg090_s cn73xx;
+	struct cvmx_pcieepx_cfg090_s cn78xx;
+	struct cvmx_pcieepx_cfg090_s cn78xxp1;
+	struct cvmx_pcieepx_cfg090_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg090 cvmx_pcieepx_cfg090_t;
+
+/**
+ * cvmx_pcieep#_cfg091
+ *
+ * This register contains the ninety-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg091 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg091_s {
+		u32 reserved_31_31 : 1;
+		u32 l5dph : 3;
+		u32 l5dtp : 4;
+		u32 reserved_15_23 : 9;
+		u32 l4dph : 3;
+		u32 l4dtp : 4;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg091_s cn73xx;
+	struct cvmx_pcieepx_cfg091_s cn78xx;
+	struct cvmx_pcieepx_cfg091_s cn78xxp1;
+	struct cvmx_pcieepx_cfg091_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg091 cvmx_pcieepx_cfg091_t;
+
+/**
+ * cvmx_pcieep#_cfg092
+ *
+ * This register contains the ninety-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg092 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg092_s {
+		u32 reserved_31_31 : 1;
+		u32 l7dph : 3;
+		u32 l7dtp : 4;
+		u32 reserved_15_23 : 9;
+		u32 l6dph : 3;
+		u32 l6dtp : 4;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg092_s cn73xx;
+	struct cvmx_pcieepx_cfg092_s cn78xx;
+	struct cvmx_pcieepx_cfg092_s cn78xxp1;
+	struct cvmx_pcieepx_cfg092_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg092 cvmx_pcieepx_cfg092_t;
+
+/**
+ * cvmx_pcieep#_cfg094
+ *
+ * This register contains the ninety-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg094 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg094_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pcieepx_cfg094_s cn73xx;
+	struct cvmx_pcieepx_cfg094_s cn78xx;
+	struct cvmx_pcieepx_cfg094_s cn78xxp1;
+	struct cvmx_pcieepx_cfg094_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg094 cvmx_pcieepx_cfg094_t;
+
+/**
+ * cvmx_pcieep#_cfg095
+ *
+ * This register contains the ninety-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg095 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg095_s {
+		u32 vfmimn : 11;
+		u32 reserved_2_20 : 19;
+		u32 arichp : 1;
+		u32 vfmc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg095_s cn73xx;
+	struct cvmx_pcieepx_cfg095_s cn78xx;
+	struct cvmx_pcieepx_cfg095_s cn78xxp1;
+	struct cvmx_pcieepx_cfg095_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg095 cvmx_pcieepx_cfg095_t;
+
+/**
+ * cvmx_pcieep#_cfg096
+ *
+ * This register contains the ninety-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg096 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg096_s {
+		u32 reserved_17_31 : 15;
+		u32 ms : 1;
+		u32 reserved_5_15 : 11;
+		u32 ach : 1;
+		u32 mse : 1;
+		u32 mie : 1;
+		u32 me : 1;
+		u32 vfe : 1;
+	} s;
+	struct cvmx_pcieepx_cfg096_s cn73xx;
+	struct cvmx_pcieepx_cfg096_s cn78xx;
+	struct cvmx_pcieepx_cfg096_s cn78xxp1;
+	struct cvmx_pcieepx_cfg096_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg096 cvmx_pcieepx_cfg096_t;
+
+/**
+ * cvmx_pcieep#_cfg097
+ *
+ * This register contains the ninety-eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg097 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg097_s {
+		u32 tvf : 16;
+		u32 ivf : 16;
+	} s;
+	struct cvmx_pcieepx_cfg097_s cn73xx;
+	struct cvmx_pcieepx_cfg097_s cn78xx;
+	struct cvmx_pcieepx_cfg097_s cn78xxp1;
+	struct cvmx_pcieepx_cfg097_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg097 cvmx_pcieepx_cfg097_t;
+
+/**
+ * cvmx_pcieep#_cfg098
+ *
+ * This register contains the ninety-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg098 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg098_s {
+		u32 reserved_24_31 : 8;
+		u32 fdl : 8;
+		u32 nvf : 16;
+	} s;
+	struct cvmx_pcieepx_cfg098_s cn73xx;
+	struct cvmx_pcieepx_cfg098_s cn78xx;
+	struct cvmx_pcieepx_cfg098_s cn78xxp1;
+	struct cvmx_pcieepx_cfg098_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg098 cvmx_pcieepx_cfg098_t;
+
+/**
+ * cvmx_pcieep#_cfg099
+ *
+ * This register contains the one hundredth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg099 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg099_s {
+		u32 vfs : 16;
+		u32 fo : 16;
+	} s;
+	struct cvmx_pcieepx_cfg099_s cn73xx;
+	struct cvmx_pcieepx_cfg099_s cn78xx;
+	struct cvmx_pcieepx_cfg099_s cn78xxp1;
+	struct cvmx_pcieepx_cfg099_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg099 cvmx_pcieepx_cfg099_t;
+
+/**
+ * cvmx_pcieep#_cfg100
+ *
+ * This register contains the one hundred first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg100 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg100_s {
+		u32 vfdev : 16;
+		u32 reserved_0_15 : 16;
+	} s;
+	struct cvmx_pcieepx_cfg100_s cn73xx;
+	struct cvmx_pcieepx_cfg100_s cn78xx;
+	struct cvmx_pcieepx_cfg100_s cn78xxp1;
+	struct cvmx_pcieepx_cfg100_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg100 cvmx_pcieepx_cfg100_t;
+
+/**
+ * cvmx_pcieep#_cfg101
+ *
+ * This register contains the one hundred second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg101 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg101_s {
+		u32 supps : 32;
+	} s;
+	struct cvmx_pcieepx_cfg101_s cn73xx;
+	struct cvmx_pcieepx_cfg101_s cn78xx;
+	struct cvmx_pcieepx_cfg101_s cn78xxp1;
+	struct cvmx_pcieepx_cfg101_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg101 cvmx_pcieepx_cfg101_t;
+
+/**
+ * cvmx_pcieep#_cfg102
+ *
+ * This register contains the one hundred third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg102 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg102_s {
+		u32 ps : 32;
+	} s;
+	struct cvmx_pcieepx_cfg102_s cn73xx;
+	struct cvmx_pcieepx_cfg102_s cn78xx;
+	struct cvmx_pcieepx_cfg102_s cn78xxp1;
+	struct cvmx_pcieepx_cfg102_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg102 cvmx_pcieepx_cfg102_t;
+
+/**
+ * cvmx_pcieep#_cfg103
+ *
+ * This register contains the one hundred fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg103 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg103_s {
+		u32 reserved_4_31 : 28;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg103_cn73xx {
+		u32 lbab : 12;
+		u32 reserved_4_19 : 16;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg103_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg103_cn78xxp1 {
+		u32 lbab : 17;
+		u32 reserved_4_14 : 11;
+		u32 pf : 1;
+		u32 typ : 2;
+		u32 mspc : 1;
+	} cn78xxp1;
+	struct cvmx_pcieepx_cfg103_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg103 cvmx_pcieepx_cfg103_t;
+
+/**
+ * cvmx_pcieep#_cfg104
+ *
+ * This register contains the one hundred fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg104 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg104_s {
+		u32 ubab : 32;
+	} s;
+	struct cvmx_pcieepx_cfg104_s cn73xx;
+	struct cvmx_pcieepx_cfg104_s cn78xx;
+	struct cvmx_pcieepx_cfg104_s cn78xxp1;
+	struct cvmx_pcieepx_cfg104_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg104 cvmx_pcieepx_cfg104_t;
+
+/**
+ * cvmx_pcieep#_cfg105
+ *
+ * This register contains the one hundred sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg105 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg105_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg105_s cn73xx;
+	struct cvmx_pcieepx_cfg105_s cn78xx;
+	struct cvmx_pcieepx_cfg105_s cn78xxp1;
+	struct cvmx_pcieepx_cfg105_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg105 cvmx_pcieepx_cfg105_t;
+
+/**
+ * cvmx_pcieep#_cfg106
+ *
+ * This register contains the one hundred seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg106 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg106_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg106_s cn73xx;
+	struct cvmx_pcieepx_cfg106_s cn78xx;
+	struct cvmx_pcieepx_cfg106_s cn78xxp1;
+	struct cvmx_pcieepx_cfg106_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg106 cvmx_pcieepx_cfg106_t;
+
+/**
+ * cvmx_pcieep#_cfg107
+ *
+ * This register contains the one hundred eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg107 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg107_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg107_s cn73xx;
+	struct cvmx_pcieepx_cfg107_s cn78xx;
+	struct cvmx_pcieepx_cfg107_s cn78xxp1;
+	struct cvmx_pcieepx_cfg107_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg107 cvmx_pcieepx_cfg107_t;
+
+/**
+ * cvmx_pcieep#_cfg108
+ *
+ * This register contains the one hundred ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg108 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg108_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg108_s cn73xx;
+	struct cvmx_pcieepx_cfg108_s cn78xx;
+	struct cvmx_pcieepx_cfg108_s cn78xxp1;
+	struct cvmx_pcieepx_cfg108_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg108 cvmx_pcieepx_cfg108_t;
+
+/**
+ * cvmx_pcieep#_cfg109
+ *
+ * This register contains the one hundred tenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg109 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg109_s {
+		u32 mso : 29;
+		u32 msbir : 3;
+	} s;
+	struct cvmx_pcieepx_cfg109_s cn73xx;
+	struct cvmx_pcieepx_cfg109_s cn78xx;
+	struct cvmx_pcieepx_cfg109_s cn78xxp1;
+	struct cvmx_pcieepx_cfg109_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg109 cvmx_pcieepx_cfg109_t;
+
+/**
+ * cvmx_pcieep#_cfg110
+ *
+ * This register contains the one hundred eleventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg110 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg110_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pcieepx_cfg110_s cn73xx;
+	struct cvmx_pcieepx_cfg110_s cn78xx;
+	struct cvmx_pcieepx_cfg110_s cn78xxp1;
+	struct cvmx_pcieepx_cfg110_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg110 cvmx_pcieepx_cfg110_t;
+
+/**
+ * cvmx_pcieep#_cfg111
+ *
+ * This register contains the one hundred twelfth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg111 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg111_s {
+		u32 reserved_30_31 : 2;
+		u32 srs : 26;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pcieepx_cfg111_s cn73xx;
+	struct cvmx_pcieepx_cfg111_s cn78xx;
+	struct cvmx_pcieepx_cfg111_s cn78xxp1;
+	struct cvmx_pcieepx_cfg111_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg111 cvmx_pcieepx_cfg111_t;
+
+/**
+ * cvmx_pcieep#_cfg112
+ *
+ * This register contains the one hundred thirteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg112 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg112_s {
+		u32 reserved_13_31 : 19;
+		u32 rbars : 5;
+		u32 nrbar : 3;
+		u32 reserved_3_4 : 2;
+		u32 rbari : 3;
+	} s;
+	struct cvmx_pcieepx_cfg112_s cn73xx;
+	struct cvmx_pcieepx_cfg112_s cn78xx;
+	struct cvmx_pcieepx_cfg112_s cn78xxp1;
+	struct cvmx_pcieepx_cfg112_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg112 cvmx_pcieepx_cfg112_t;
+
+/**
+ * cvmx_pcieep#_cfg448
+ *
+ * This register contains the four hundred forty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg448 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg448_s {
+		u32 rtl : 16;
+		u32 rtltl : 16;
+	} s;
+	struct cvmx_pcieepx_cfg448_s cn52xx;
+	struct cvmx_pcieepx_cfg448_s cn52xxp1;
+	struct cvmx_pcieepx_cfg448_s cn56xx;
+	struct cvmx_pcieepx_cfg448_s cn56xxp1;
+	struct cvmx_pcieepx_cfg448_s cn61xx;
+	struct cvmx_pcieepx_cfg448_s cn63xx;
+	struct cvmx_pcieepx_cfg448_s cn63xxp1;
+	struct cvmx_pcieepx_cfg448_s cn66xx;
+	struct cvmx_pcieepx_cfg448_s cn68xx;
+	struct cvmx_pcieepx_cfg448_s cn68xxp1;
+	struct cvmx_pcieepx_cfg448_s cn70xx;
+	struct cvmx_pcieepx_cfg448_s cn70xxp1;
+	struct cvmx_pcieepx_cfg448_s cn73xx;
+	struct cvmx_pcieepx_cfg448_s cn78xx;
+	struct cvmx_pcieepx_cfg448_s cn78xxp1;
+	struct cvmx_pcieepx_cfg448_s cnf71xx;
+	struct cvmx_pcieepx_cfg448_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg448 cvmx_pcieepx_cfg448_t;
+
+/**
+ * cvmx_pcieep#_cfg449
+ *
+ * This register contains the four hundred fiftieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg449 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg449_s {
+		u32 omr : 32;
+	} s;
+	struct cvmx_pcieepx_cfg449_s cn52xx;
+	struct cvmx_pcieepx_cfg449_s cn52xxp1;
+	struct cvmx_pcieepx_cfg449_s cn56xx;
+	struct cvmx_pcieepx_cfg449_s cn56xxp1;
+	struct cvmx_pcieepx_cfg449_s cn61xx;
+	struct cvmx_pcieepx_cfg449_s cn63xx;
+	struct cvmx_pcieepx_cfg449_s cn63xxp1;
+	struct cvmx_pcieepx_cfg449_s cn66xx;
+	struct cvmx_pcieepx_cfg449_s cn68xx;
+	struct cvmx_pcieepx_cfg449_s cn68xxp1;
+	struct cvmx_pcieepx_cfg449_s cn70xx;
+	struct cvmx_pcieepx_cfg449_s cn70xxp1;
+	struct cvmx_pcieepx_cfg449_s cn73xx;
+	struct cvmx_pcieepx_cfg449_s cn78xx;
+	struct cvmx_pcieepx_cfg449_s cn78xxp1;
+	struct cvmx_pcieepx_cfg449_s cnf71xx;
+	struct cvmx_pcieepx_cfg449_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg449 cvmx_pcieepx_cfg449_t;
+
+/**
+ * cvmx_pcieep#_cfg450
+ *
+ * This register contains the four hundred fifty-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg450 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg450_s {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_8_14 : 7;
+		u32 link_num : 8;
+	} s;
+	struct cvmx_pcieepx_cfg450_s cn52xx;
+	struct cvmx_pcieepx_cfg450_s cn52xxp1;
+	struct cvmx_pcieepx_cfg450_s cn56xx;
+	struct cvmx_pcieepx_cfg450_s cn56xxp1;
+	struct cvmx_pcieepx_cfg450_s cn61xx;
+	struct cvmx_pcieepx_cfg450_s cn63xx;
+	struct cvmx_pcieepx_cfg450_s cn63xxp1;
+	struct cvmx_pcieepx_cfg450_s cn66xx;
+	struct cvmx_pcieepx_cfg450_s cn68xx;
+	struct cvmx_pcieepx_cfg450_s cn68xxp1;
+	struct cvmx_pcieepx_cfg450_cn70xx {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_12_14 : 3;
+		u32 link_cmd : 4;
+		u32 link_num : 8;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg450_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg450_cn73xx {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_12_14 : 3;
+		u32 forced_ltssm : 4;
+		u32 link_num : 8;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg450_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg450_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg450_s cnf71xx;
+	struct cvmx_pcieepx_cfg450_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg450 cvmx_pcieepx_cfg450_t;
+
+/**
+ * cvmx_pcieep#_cfg451
+ *
+ * This register contains the four hundred fifty-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg451 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg451_s {
+		u32 reserved_31_31 : 1;
+		u32 easpml1 : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 n_fts_cc : 8;
+		u32 n_fts : 8;
+		u32 ack_freq : 8;
+	} s;
+	struct cvmx_pcieepx_cfg451_cn52xx {
+		u32 reserved_30_31 : 2;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 n_fts_cc : 8;
+		u32 n_fts : 8;
+		u32 ack_freq : 8;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg451_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg451_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg451_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg451_s cn61xx;
+	struct cvmx_pcieepx_cfg451_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg451_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg451_s cn66xx;
+	struct cvmx_pcieepx_cfg451_s cn68xx;
+	struct cvmx_pcieepx_cfg451_s cn68xxp1;
+	struct cvmx_pcieepx_cfg451_s cn70xx;
+	struct cvmx_pcieepx_cfg451_s cn70xxp1;
+	struct cvmx_pcieepx_cfg451_s cn73xx;
+	struct cvmx_pcieepx_cfg451_s cn78xx;
+	struct cvmx_pcieepx_cfg451_s cn78xxp1;
+	struct cvmx_pcieepx_cfg451_s cnf71xx;
+	struct cvmx_pcieepx_cfg451_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg451 cvmx_pcieepx_cfg451_t;
+
+/**
+ * cvmx_pcieep#_cfg452
+ *
+ * This register contains the four hundred fifty-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg452 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg452_s {
+		u32 reserved_26_31 : 6;
+		u32 eccrc : 1;
+		u32 reserved_22_24 : 3;
+		u32 lme : 6;
+		u32 reserved_12_15 : 4;
+		u32 link_rate : 4;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} s;
+	struct cvmx_pcieepx_cfg452_cn52xx {
+		u32 reserved_26_31 : 6;
+		u32 eccrc : 1;
+		u32 reserved_22_24 : 3;
+		u32 lme : 6;
+		u32 reserved_8_15 : 8;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg452_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg452_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg452_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg452_cn61xx {
+		u32 reserved_22_31 : 10;
+		u32 lme : 6;
+		u32 reserved_8_15 : 8;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg452_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg452_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg452_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg452_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg452_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg452_cn70xx {
+		u32 reserved_22_31 : 10;
+		u32 lme : 6;
+		u32 reserved_12_15 : 4;
+		u32 link_rate : 4;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg452_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg452_cn70xx cn73xx;
+	struct cvmx_pcieepx_cfg452_cn70xx cn78xx;
+	struct cvmx_pcieepx_cfg452_cn70xx cn78xxp1;
+	struct cvmx_pcieepx_cfg452_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg452_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg452 cvmx_pcieepx_cfg452_t;
+
+/**
+ * cvmx_pcieep#_cfg453
+ *
+ * This register contains the four hundred fifty-fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg453 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg453_s {
+		u32 dlld : 1;
+		u32 reserved_26_30 : 5;
+		u32 ack_nak : 1;
+		u32 fcd : 1;
+		u32 ilst : 24;
+	} s;
+	struct cvmx_pcieepx_cfg453_s cn52xx;
+	struct cvmx_pcieepx_cfg453_s cn52xxp1;
+	struct cvmx_pcieepx_cfg453_s cn56xx;
+	struct cvmx_pcieepx_cfg453_s cn56xxp1;
+	struct cvmx_pcieepx_cfg453_s cn61xx;
+	struct cvmx_pcieepx_cfg453_s cn63xx;
+	struct cvmx_pcieepx_cfg453_s cn63xxp1;
+	struct cvmx_pcieepx_cfg453_s cn66xx;
+	struct cvmx_pcieepx_cfg453_s cn68xx;
+	struct cvmx_pcieepx_cfg453_s cn68xxp1;
+	struct cvmx_pcieepx_cfg453_s cn70xx;
+	struct cvmx_pcieepx_cfg453_s cn70xxp1;
+	struct cvmx_pcieepx_cfg453_s cn73xx;
+	struct cvmx_pcieepx_cfg453_s cn78xx;
+	struct cvmx_pcieepx_cfg453_s cn78xxp1;
+	struct cvmx_pcieepx_cfg453_s cnf71xx;
+	struct cvmx_pcieepx_cfg453_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg453 cvmx_pcieepx_cfg453_t;
+
+/**
+ * cvmx_pcieep#_cfg454
+ *
+ * This register contains the four hundred fifty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg454 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg454_s {
+		u32 cx_nfunc : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_11_13 : 3;
+		u32 nskps : 3;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg454_cn52xx {
+		u32 reserved_29_31 : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_11_13 : 3;
+		u32 nskps : 3;
+		u32 reserved_4_7 : 4;
+		u32 ntss : 4;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg454_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg454_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg454_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg454_cn61xx {
+		u32 cx_nfunc : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg454_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg454_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg454_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg454_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg454_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg454_cn70xx {
+		u32 reserved_24_31 : 8;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn70xx;
+	struct cvmx_pcieepx_cfg454_cn70xx cn70xxp1;
+	struct cvmx_pcieepx_cfg454_cn73xx {
+		u32 reserved_29_31 : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg454_cn73xx cn78xx;
+	struct cvmx_pcieepx_cfg454_cn73xx cn78xxp1;
+	struct cvmx_pcieepx_cfg454_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg454_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg454 cvmx_pcieepx_cfg454_t;
+
+/**
+ * cvmx_pcieep#_cfg455
+ *
+ * This register contains the four hundred fifty-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg455 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg455_s {
+		u32 m_cfg0_filt : 1;
+		u32 m_io_filt : 1;
+		u32 msg_ctrl : 1;
+		u32 m_cpl_ecrc_filt : 1;
+		u32 m_ecrc_filt : 1;
+		u32 m_cpl_len_err : 1;
+		u32 m_cpl_attr_err : 1;
+		u32 m_cpl_tc_err : 1;
+		u32 m_cpl_fun_err : 1;
+		u32 m_cpl_rid_err : 1;
+		u32 m_cpl_tag_err : 1;
+		u32 m_lk_filt : 1;
+		u32 m_cfg1_filt : 1;
+		u32 m_bar_match : 1;
+		u32 m_pois_filt : 1;
+		u32 m_fun : 1;
+		u32 dfcwt : 1;
+		u32 reserved_11_14 : 4;
+		u32 skpiv : 11;
+	} s;
+	struct cvmx_pcieepx_cfg455_s cn52xx;
+	struct cvmx_pcieepx_cfg455_s cn52xxp1;
+	struct cvmx_pcieepx_cfg455_s cn56xx;
+	struct cvmx_pcieepx_cfg455_s cn56xxp1;
+	struct cvmx_pcieepx_cfg455_s cn61xx;
+	struct cvmx_pcieepx_cfg455_s cn63xx;
+	struct cvmx_pcieepx_cfg455_s cn63xxp1;
+	struct cvmx_pcieepx_cfg455_s cn66xx;
+	struct cvmx_pcieepx_cfg455_s cn68xx;
+	struct cvmx_pcieepx_cfg455_s cn68xxp1;
+	struct cvmx_pcieepx_cfg455_s cn70xx;
+	struct cvmx_pcieepx_cfg455_s cn70xxp1;
+	struct cvmx_pcieepx_cfg455_s cn73xx;
+	struct cvmx_pcieepx_cfg455_s cn78xx;
+	struct cvmx_pcieepx_cfg455_s cn78xxp1;
+	struct cvmx_pcieepx_cfg455_s cnf71xx;
+	struct cvmx_pcieepx_cfg455_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg455 cvmx_pcieepx_cfg455_t;
+
+/**
+ * cvmx_pcieep#_cfg456
+ *
+ * This register contains the four hundred fifty-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg456 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg456_s {
+		u32 reserved_4_31 : 28;
+		u32 m_handle_flush : 1;
+		u32 m_dabort_4ucpl : 1;
+		u32 m_vend1_drp : 1;
+		u32 m_vend0_drp : 1;
+	} s;
+	struct cvmx_pcieepx_cfg456_cn52xx {
+		u32 reserved_2_31 : 30;
+		u32 m_vend1_drp : 1;
+		u32 m_vend0_drp : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg456_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg456_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg456_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg456_s cn61xx;
+	struct cvmx_pcieepx_cfg456_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg456_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg456_s cn66xx;
+	struct cvmx_pcieepx_cfg456_s cn68xx;
+	struct cvmx_pcieepx_cfg456_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg456_s cn70xx;
+	struct cvmx_pcieepx_cfg456_s cn70xxp1;
+	struct cvmx_pcieepx_cfg456_s cn73xx;
+	struct cvmx_pcieepx_cfg456_s cn78xx;
+	struct cvmx_pcieepx_cfg456_s cn78xxp1;
+	struct cvmx_pcieepx_cfg456_s cnf71xx;
+	struct cvmx_pcieepx_cfg456_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg456 cvmx_pcieepx_cfg456_t;
+
+/**
+ * cvmx_pcieep#_cfg458
+ *
+ * This register contains the four hundred fifty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg458 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg458_s {
+		u32 dbg_info_l32 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg458_s cn52xx;
+	struct cvmx_pcieepx_cfg458_s cn52xxp1;
+	struct cvmx_pcieepx_cfg458_s cn56xx;
+	struct cvmx_pcieepx_cfg458_s cn56xxp1;
+	struct cvmx_pcieepx_cfg458_s cn61xx;
+	struct cvmx_pcieepx_cfg458_s cn63xx;
+	struct cvmx_pcieepx_cfg458_s cn63xxp1;
+	struct cvmx_pcieepx_cfg458_s cn66xx;
+	struct cvmx_pcieepx_cfg458_s cn68xx;
+	struct cvmx_pcieepx_cfg458_s cn68xxp1;
+	struct cvmx_pcieepx_cfg458_s cn70xx;
+	struct cvmx_pcieepx_cfg458_s cn70xxp1;
+	struct cvmx_pcieepx_cfg458_s cn73xx;
+	struct cvmx_pcieepx_cfg458_s cn78xx;
+	struct cvmx_pcieepx_cfg458_s cn78xxp1;
+	struct cvmx_pcieepx_cfg458_s cnf71xx;
+	struct cvmx_pcieepx_cfg458_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg458 cvmx_pcieepx_cfg458_t;
+
+/**
+ * cvmx_pcieep#_cfg459
+ *
+ * This register contains the four hundred sixtieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg459 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg459_s {
+		u32 dbg_info_u32 : 32;
+	} s;
+	struct cvmx_pcieepx_cfg459_s cn52xx;
+	struct cvmx_pcieepx_cfg459_s cn52xxp1;
+	struct cvmx_pcieepx_cfg459_s cn56xx;
+	struct cvmx_pcieepx_cfg459_s cn56xxp1;
+	struct cvmx_pcieepx_cfg459_s cn61xx;
+	struct cvmx_pcieepx_cfg459_s cn63xx;
+	struct cvmx_pcieepx_cfg459_s cn63xxp1;
+	struct cvmx_pcieepx_cfg459_s cn66xx;
+	struct cvmx_pcieepx_cfg459_s cn68xx;
+	struct cvmx_pcieepx_cfg459_s cn68xxp1;
+	struct cvmx_pcieepx_cfg459_s cn70xx;
+	struct cvmx_pcieepx_cfg459_s cn70xxp1;
+	struct cvmx_pcieepx_cfg459_s cn73xx;
+	struct cvmx_pcieepx_cfg459_s cn78xx;
+	struct cvmx_pcieepx_cfg459_s cn78xxp1;
+	struct cvmx_pcieepx_cfg459_s cnf71xx;
+	struct cvmx_pcieepx_cfg459_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg459 cvmx_pcieepx_cfg459_t;
+
+/**
+ * cvmx_pcieep#_cfg460
+ *
+ * This register contains the four hundred sixty-first 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg460 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg460_s {
+		u32 reserved_20_31 : 12;
+		u32 tphfcc : 8;
+		u32 tpdfcc : 12;
+	} s;
+	struct cvmx_pcieepx_cfg460_s cn52xx;
+	struct cvmx_pcieepx_cfg460_s cn52xxp1;
+	struct cvmx_pcieepx_cfg460_s cn56xx;
+	struct cvmx_pcieepx_cfg460_s cn56xxp1;
+	struct cvmx_pcieepx_cfg460_s cn61xx;
+	struct cvmx_pcieepx_cfg460_s cn63xx;
+	struct cvmx_pcieepx_cfg460_s cn63xxp1;
+	struct cvmx_pcieepx_cfg460_s cn66xx;
+	struct cvmx_pcieepx_cfg460_s cn68xx;
+	struct cvmx_pcieepx_cfg460_s cn68xxp1;
+	struct cvmx_pcieepx_cfg460_s cn70xx;
+	struct cvmx_pcieepx_cfg460_s cn70xxp1;
+	struct cvmx_pcieepx_cfg460_s cn73xx;
+	struct cvmx_pcieepx_cfg460_s cn78xx;
+	struct cvmx_pcieepx_cfg460_s cn78xxp1;
+	struct cvmx_pcieepx_cfg460_s cnf71xx;
+	struct cvmx_pcieepx_cfg460_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg460 cvmx_pcieepx_cfg460_t;
+
+/**
+ * cvmx_pcieep#_cfg461
+ *
+ * This register contains the four hundred sixty-second 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg461 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg461_s {
+		u32 reserved_20_31 : 12;
+		u32 tchfcc : 8;
+		u32 tcdfcc : 12;
+	} s;
+	struct cvmx_pcieepx_cfg461_s cn52xx;
+	struct cvmx_pcieepx_cfg461_s cn52xxp1;
+	struct cvmx_pcieepx_cfg461_s cn56xx;
+	struct cvmx_pcieepx_cfg461_s cn56xxp1;
+	struct cvmx_pcieepx_cfg461_s cn61xx;
+	struct cvmx_pcieepx_cfg461_s cn63xx;
+	struct cvmx_pcieepx_cfg461_s cn63xxp1;
+	struct cvmx_pcieepx_cfg461_s cn66xx;
+	struct cvmx_pcieepx_cfg461_s cn68xx;
+	struct cvmx_pcieepx_cfg461_s cn68xxp1;
+	struct cvmx_pcieepx_cfg461_s cn70xx;
+	struct cvmx_pcieepx_cfg461_s cn70xxp1;
+	struct cvmx_pcieepx_cfg461_s cn73xx;
+	struct cvmx_pcieepx_cfg461_s cn78xx;
+	struct cvmx_pcieepx_cfg461_s cn78xxp1;
+	struct cvmx_pcieepx_cfg461_s cnf71xx;
+	struct cvmx_pcieepx_cfg461_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg461 cvmx_pcieepx_cfg461_t;
+
+/**
+ * cvmx_pcieep#_cfg462
+ *
+ * This register contains the four hundred sixty-third 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg462 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg462_s {
+		u32 reserved_20_31 : 12;
+		u32 tchfcc : 8;
+		u32 tcdfcc : 12;
+	} s;
+	struct cvmx_pcieepx_cfg462_s cn52xx;
+	struct cvmx_pcieepx_cfg462_s cn52xxp1;
+	struct cvmx_pcieepx_cfg462_s cn56xx;
+	struct cvmx_pcieepx_cfg462_s cn56xxp1;
+	struct cvmx_pcieepx_cfg462_s cn61xx;
+	struct cvmx_pcieepx_cfg462_s cn63xx;
+	struct cvmx_pcieepx_cfg462_s cn63xxp1;
+	struct cvmx_pcieepx_cfg462_s cn66xx;
+	struct cvmx_pcieepx_cfg462_s cn68xx;
+	struct cvmx_pcieepx_cfg462_s cn68xxp1;
+	struct cvmx_pcieepx_cfg462_s cn70xx;
+	struct cvmx_pcieepx_cfg462_s cn70xxp1;
+	struct cvmx_pcieepx_cfg462_s cn73xx;
+	struct cvmx_pcieepx_cfg462_s cn78xx;
+	struct cvmx_pcieepx_cfg462_s cn78xxp1;
+	struct cvmx_pcieepx_cfg462_s cnf71xx;
+	struct cvmx_pcieepx_cfg462_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg462 cvmx_pcieepx_cfg462_t;
+
+/**
+ * cvmx_pcieep#_cfg463
+ *
+ * This register contains the four hundred sixty-fourth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg463 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg463_s {
+		u32 fcltoe : 1;
+		u32 reserved_29_30 : 2;
+		u32 fcltov : 13;
+		u32 reserved_3_15 : 13;
+		u32 rqne : 1;
+		u32 trbne : 1;
+		u32 rtlpfccnr : 1;
+	} s;
+	struct cvmx_pcieepx_cfg463_cn52xx {
+		u32 reserved_3_31 : 29;
+		u32 rqne : 1;
+		u32 trbne : 1;
+		u32 rtlpfccnr : 1;
+	} cn52xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn52xxp1;
+	struct cvmx_pcieepx_cfg463_cn52xx cn56xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn56xxp1;
+	struct cvmx_pcieepx_cfg463_cn52xx cn61xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn63xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn63xxp1;
+	struct cvmx_pcieepx_cfg463_cn52xx cn66xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn68xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn68xxp1;
+	struct cvmx_pcieepx_cfg463_cn52xx cn70xx;
+	struct cvmx_pcieepx_cfg463_cn52xx cn70xxp1;
+	struct cvmx_pcieepx_cfg463_s cn73xx;
+	struct cvmx_pcieepx_cfg463_s cn78xx;
+	struct cvmx_pcieepx_cfg463_s cn78xxp1;
+	struct cvmx_pcieepx_cfg463_cn52xx cnf71xx;
+	struct cvmx_pcieepx_cfg463_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg463 cvmx_pcieepx_cfg463_t;
+
+/**
+ * cvmx_pcieep#_cfg464
+ *
+ * This register contains the four hundred sixty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg464 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg464_s {
+		u32 wrr_vc3 : 8;
+		u32 wrr_vc2 : 8;
+		u32 wrr_vc1 : 8;
+		u32 wrr_vc0 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg464_s cn52xx;
+	struct cvmx_pcieepx_cfg464_s cn52xxp1;
+	struct cvmx_pcieepx_cfg464_s cn56xx;
+	struct cvmx_pcieepx_cfg464_s cn56xxp1;
+	struct cvmx_pcieepx_cfg464_s cn61xx;
+	struct cvmx_pcieepx_cfg464_s cn63xx;
+	struct cvmx_pcieepx_cfg464_s cn63xxp1;
+	struct cvmx_pcieepx_cfg464_s cn66xx;
+	struct cvmx_pcieepx_cfg464_s cn68xx;
+	struct cvmx_pcieepx_cfg464_s cn68xxp1;
+	struct cvmx_pcieepx_cfg464_s cn70xx;
+	struct cvmx_pcieepx_cfg464_s cn70xxp1;
+	struct cvmx_pcieepx_cfg464_s cn73xx;
+	struct cvmx_pcieepx_cfg464_s cn78xx;
+	struct cvmx_pcieepx_cfg464_s cn78xxp1;
+	struct cvmx_pcieepx_cfg464_s cnf71xx;
+	struct cvmx_pcieepx_cfg464_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg464 cvmx_pcieepx_cfg464_t;
+
+/**
+ * cvmx_pcieep#_cfg465
+ *
+ * This register contains the four hundred sixty-sixth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg465 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg465_s {
+		u32 wrr_vc7 : 8;
+		u32 wrr_vc6 : 8;
+		u32 wrr_vc5 : 8;
+		u32 wrr_vc4 : 8;
+	} s;
+	struct cvmx_pcieepx_cfg465_s cn52xx;
+	struct cvmx_pcieepx_cfg465_s cn52xxp1;
+	struct cvmx_pcieepx_cfg465_s cn56xx;
+	struct cvmx_pcieepx_cfg465_s cn56xxp1;
+	struct cvmx_pcieepx_cfg465_s cn61xx;
+	struct cvmx_pcieepx_cfg465_s cn63xx;
+	struct cvmx_pcieepx_cfg465_s cn63xxp1;
+	struct cvmx_pcieepx_cfg465_s cn66xx;
+	struct cvmx_pcieepx_cfg465_s cn68xx;
+	struct cvmx_pcieepx_cfg465_s cn68xxp1;
+	struct cvmx_pcieepx_cfg465_s cn70xx;
+	struct cvmx_pcieepx_cfg465_s cn70xxp1;
+	struct cvmx_pcieepx_cfg465_s cn73xx;
+	struct cvmx_pcieepx_cfg465_s cn78xx;
+	struct cvmx_pcieepx_cfg465_s cn78xxp1;
+	struct cvmx_pcieepx_cfg465_s cnf71xx;
+	struct cvmx_pcieepx_cfg465_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg465 cvmx_pcieepx_cfg465_t;
+
+/**
+ * cvmx_pcieep#_cfg466
+ *
+ * This register contains the four hundred sixty-seventh 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg466 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg466_s {
+		u32 rx_queue_order : 1;
+		u32 type_ordering : 1;
+		u32 reserved_24_29 : 6;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pcieepx_cfg466_s cn52xx;
+	struct cvmx_pcieepx_cfg466_s cn52xxp1;
+	struct cvmx_pcieepx_cfg466_s cn56xx;
+	struct cvmx_pcieepx_cfg466_s cn56xxp1;
+	struct cvmx_pcieepx_cfg466_s cn61xx;
+	struct cvmx_pcieepx_cfg466_s cn63xx;
+	struct cvmx_pcieepx_cfg466_s cn63xxp1;
+	struct cvmx_pcieepx_cfg466_s cn66xx;
+	struct cvmx_pcieepx_cfg466_s cn68xx;
+	struct cvmx_pcieepx_cfg466_s cn68xxp1;
+	struct cvmx_pcieepx_cfg466_s cn70xx;
+	struct cvmx_pcieepx_cfg466_s cn70xxp1;
+	struct cvmx_pcieepx_cfg466_s cn73xx;
+	struct cvmx_pcieepx_cfg466_s cn78xx;
+	struct cvmx_pcieepx_cfg466_s cn78xxp1;
+	struct cvmx_pcieepx_cfg466_s cnf71xx;
+	struct cvmx_pcieepx_cfg466_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg466 cvmx_pcieepx_cfg466_t;
+
+/**
+ * cvmx_pcieep#_cfg467
+ *
+ * This register contains the four hundred sixty-eighth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg467 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg467_s {
+		u32 reserved_24_31 : 8;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pcieepx_cfg467_s cn52xx;
+	struct cvmx_pcieepx_cfg467_s cn52xxp1;
+	struct cvmx_pcieepx_cfg467_s cn56xx;
+	struct cvmx_pcieepx_cfg467_s cn56xxp1;
+	struct cvmx_pcieepx_cfg467_s cn61xx;
+	struct cvmx_pcieepx_cfg467_s cn63xx;
+	struct cvmx_pcieepx_cfg467_s cn63xxp1;
+	struct cvmx_pcieepx_cfg467_s cn66xx;
+	struct cvmx_pcieepx_cfg467_s cn68xx;
+	struct cvmx_pcieepx_cfg467_s cn68xxp1;
+	struct cvmx_pcieepx_cfg467_s cn70xx;
+	struct cvmx_pcieepx_cfg467_s cn70xxp1;
+	struct cvmx_pcieepx_cfg467_s cn73xx;
+	struct cvmx_pcieepx_cfg467_s cn78xx;
+	struct cvmx_pcieepx_cfg467_s cn78xxp1;
+	struct cvmx_pcieepx_cfg467_s cnf71xx;
+	struct cvmx_pcieepx_cfg467_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg467 cvmx_pcieepx_cfg467_t;
+
+/**
+ * cvmx_pcieep#_cfg468
+ *
+ * This register contains the four hundred sixty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg468 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg468_s {
+		u32 reserved_24_31 : 8;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pcieepx_cfg468_s cn52xx;
+	struct cvmx_pcieepx_cfg468_s cn52xxp1;
+	struct cvmx_pcieepx_cfg468_s cn56xx;
+	struct cvmx_pcieepx_cfg468_s cn56xxp1;
+	struct cvmx_pcieepx_cfg468_s cn61xx;
+	struct cvmx_pcieepx_cfg468_s cn63xx;
+	struct cvmx_pcieepx_cfg468_s cn63xxp1;
+	struct cvmx_pcieepx_cfg468_s cn66xx;
+	struct cvmx_pcieepx_cfg468_s cn68xx;
+	struct cvmx_pcieepx_cfg468_s cn68xxp1;
+	struct cvmx_pcieepx_cfg468_s cn70xx;
+	struct cvmx_pcieepx_cfg468_s cn70xxp1;
+	struct cvmx_pcieepx_cfg468_s cn73xx;
+	struct cvmx_pcieepx_cfg468_s cn78xx;
+	struct cvmx_pcieepx_cfg468_s cn78xxp1;
+	struct cvmx_pcieepx_cfg468_s cnf71xx;
+	struct cvmx_pcieepx_cfg468_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg468 cvmx_pcieepx_cfg468_t;
+
+/**
+ * cvmx_pcieep#_cfg490
+ *
+ * This register contains the four hundred ninety-first 32-bits of PCIe type 0 configuration space
+ * (VC0 Posted Buffer Depth).
+ *
+ */
+union cvmx_pcieepx_cfg490 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg490_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pcieepx_cfg490_s cn52xx;
+	struct cvmx_pcieepx_cfg490_s cn52xxp1;
+	struct cvmx_pcieepx_cfg490_s cn56xx;
+	struct cvmx_pcieepx_cfg490_s cn56xxp1;
+	struct cvmx_pcieepx_cfg490_s cn61xx;
+	struct cvmx_pcieepx_cfg490_s cn63xx;
+	struct cvmx_pcieepx_cfg490_s cn63xxp1;
+	struct cvmx_pcieepx_cfg490_s cn66xx;
+	struct cvmx_pcieepx_cfg490_s cn68xx;
+	struct cvmx_pcieepx_cfg490_s cn68xxp1;
+	struct cvmx_pcieepx_cfg490_s cn70xx;
+	struct cvmx_pcieepx_cfg490_s cn70xxp1;
+	struct cvmx_pcieepx_cfg490_s cnf71xx;
+};
+
+typedef union cvmx_pcieepx_cfg490 cvmx_pcieepx_cfg490_t;
+
+/**
+ * cvmx_pcieep#_cfg491
+ *
+ * This register contains the four hundred ninety-second 32-bits of PCIe type 0 configuration space
+ * (VC0 Non-Posted Buffer Depth).
+ *
+ */
+union cvmx_pcieepx_cfg491 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg491_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pcieepx_cfg491_s cn52xx;
+	struct cvmx_pcieepx_cfg491_s cn52xxp1;
+	struct cvmx_pcieepx_cfg491_s cn56xx;
+	struct cvmx_pcieepx_cfg491_s cn56xxp1;
+	struct cvmx_pcieepx_cfg491_s cn61xx;
+	struct cvmx_pcieepx_cfg491_s cn63xx;
+	struct cvmx_pcieepx_cfg491_s cn63xxp1;
+	struct cvmx_pcieepx_cfg491_s cn66xx;
+	struct cvmx_pcieepx_cfg491_s cn68xx;
+	struct cvmx_pcieepx_cfg491_s cn68xxp1;
+	struct cvmx_pcieepx_cfg491_s cn70xx;
+	struct cvmx_pcieepx_cfg491_s cn70xxp1;
+	struct cvmx_pcieepx_cfg491_s cnf71xx;
+};
+
+typedef union cvmx_pcieepx_cfg491 cvmx_pcieepx_cfg491_t;
+
+/**
+ * cvmx_pcieep#_cfg492
+ *
+ * This register contains the four hundred ninety-third 32-bits of PCIe type 0 configuration space
+ * (VC0 Completion Buffer Depth).
+ *
+ */
+union cvmx_pcieepx_cfg492 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg492_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pcieepx_cfg492_s cn52xx;
+	struct cvmx_pcieepx_cfg492_s cn52xxp1;
+	struct cvmx_pcieepx_cfg492_s cn56xx;
+	struct cvmx_pcieepx_cfg492_s cn56xxp1;
+	struct cvmx_pcieepx_cfg492_s cn61xx;
+	struct cvmx_pcieepx_cfg492_s cn63xx;
+	struct cvmx_pcieepx_cfg492_s cn63xxp1;
+	struct cvmx_pcieepx_cfg492_s cn66xx;
+	struct cvmx_pcieepx_cfg492_s cn68xx;
+	struct cvmx_pcieepx_cfg492_s cn68xxp1;
+	struct cvmx_pcieepx_cfg492_s cn70xx;
+	struct cvmx_pcieepx_cfg492_s cn70xxp1;
+	struct cvmx_pcieepx_cfg492_s cnf71xx;
+};
+
+typedef union cvmx_pcieepx_cfg492 cvmx_pcieepx_cfg492_t;
+
+/**
+ * cvmx_pcieep#_cfg515
+ *
+ * This register contains the five hundred sixteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg515 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg515_s {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 reserved_8_16 : 9;
+		u32 n_fts : 8;
+	} s;
+	struct cvmx_pcieepx_cfg515_cn61xx {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 le : 9;
+		u32 n_fts : 8;
+	} cn61xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn63xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn63xxp1;
+	struct cvmx_pcieepx_cfg515_cn61xx cn66xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn68xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn68xxp1;
+	struct cvmx_pcieepx_cfg515_cn61xx cn70xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn70xxp1;
+	struct cvmx_pcieepx_cfg515_cn61xx cn73xx;
+	struct cvmx_pcieepx_cfg515_cn78xx {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 alaneflip : 1;
+		u32 pdetlane : 3;
+		u32 nlanes : 5;
+		u32 n_fts : 8;
+	} cn78xx;
+	struct cvmx_pcieepx_cfg515_cn61xx cn78xxp1;
+	struct cvmx_pcieepx_cfg515_cn61xx cnf71xx;
+	struct cvmx_pcieepx_cfg515_cn78xx cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg515 cvmx_pcieepx_cfg515_t;
+
+/**
+ * cvmx_pcieep#_cfg516
+ *
+ * This register contains the five hundred seventeenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg516 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg516_s {
+		u32 phy_stat : 32;
+	} s;
+	struct cvmx_pcieepx_cfg516_s cn52xx;
+	struct cvmx_pcieepx_cfg516_s cn52xxp1;
+	struct cvmx_pcieepx_cfg516_s cn56xx;
+	struct cvmx_pcieepx_cfg516_s cn56xxp1;
+	struct cvmx_pcieepx_cfg516_s cn61xx;
+	struct cvmx_pcieepx_cfg516_s cn63xx;
+	struct cvmx_pcieepx_cfg516_s cn63xxp1;
+	struct cvmx_pcieepx_cfg516_s cn66xx;
+	struct cvmx_pcieepx_cfg516_s cn68xx;
+	struct cvmx_pcieepx_cfg516_s cn68xxp1;
+	struct cvmx_pcieepx_cfg516_s cn70xx;
+	struct cvmx_pcieepx_cfg516_s cn70xxp1;
+	struct cvmx_pcieepx_cfg516_s cn73xx;
+	struct cvmx_pcieepx_cfg516_s cn78xx;
+	struct cvmx_pcieepx_cfg516_s cn78xxp1;
+	struct cvmx_pcieepx_cfg516_s cnf71xx;
+	struct cvmx_pcieepx_cfg516_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg516 cvmx_pcieepx_cfg516_t;
+
+/**
+ * cvmx_pcieep#_cfg517
+ *
+ * This register contains the five hundred eighteenth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg517 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg517_s {
+		u32 phy_ctrl : 32;
+	} s;
+	struct cvmx_pcieepx_cfg517_s cn52xx;
+	struct cvmx_pcieepx_cfg517_s cn52xxp1;
+	struct cvmx_pcieepx_cfg517_s cn56xx;
+	struct cvmx_pcieepx_cfg517_s cn56xxp1;
+	struct cvmx_pcieepx_cfg517_s cn61xx;
+	struct cvmx_pcieepx_cfg517_s cn63xx;
+	struct cvmx_pcieepx_cfg517_s cn63xxp1;
+	struct cvmx_pcieepx_cfg517_s cn66xx;
+	struct cvmx_pcieepx_cfg517_s cn68xx;
+	struct cvmx_pcieepx_cfg517_s cn68xxp1;
+	struct cvmx_pcieepx_cfg517_s cn70xx;
+	struct cvmx_pcieepx_cfg517_s cn70xxp1;
+	struct cvmx_pcieepx_cfg517_s cn73xx;
+	struct cvmx_pcieepx_cfg517_s cn78xx;
+	struct cvmx_pcieepx_cfg517_s cn78xxp1;
+	struct cvmx_pcieepx_cfg517_s cnf71xx;
+	struct cvmx_pcieepx_cfg517_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg517 cvmx_pcieepx_cfg517_t;
+
+/**
+ * cvmx_pcieep#_cfg548
+ *
+ * This register contains the five hundred forty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg548 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg548_s {
+		u32 reserved_26_31 : 6;
+		u32 rss : 2;
+		u32 eiedd : 1;
+		u32 reserved_19_22 : 4;
+		u32 dcbd : 1;
+		u32 dtdd : 1;
+		u32 ed : 1;
+		u32 reserved_13_15 : 3;
+		u32 rxeq_ph01_en : 1;
+		u32 erd : 1;
+		u32 ecrd : 1;
+		u32 ep2p3d : 1;
+		u32 dsg3 : 1;
+		u32 reserved_1_7 : 7;
+		u32 grizdnc : 1;
+	} s;
+	struct cvmx_pcieepx_cfg548_s cn73xx;
+	struct cvmx_pcieepx_cfg548_s cn78xx;
+	struct cvmx_pcieepx_cfg548_s cn78xxp1;
+	struct cvmx_pcieepx_cfg548_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg548 cvmx_pcieepx_cfg548_t;
+
+/**
+ * cvmx_pcieep#_cfg554
+ *
+ * This register contains the five hundred fifty-fifth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg554 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg554_s {
+		u32 reserved_27_31 : 5;
+		u32 scefpm : 1;
+		u32 reserved_25_25 : 1;
+		u32 iif : 1;
+		u32 prv : 16;
+		u32 reserved_6_7 : 2;
+		u32 p23td : 1;
+		u32 bt : 1;
+		u32 fm : 4;
+	} s;
+	struct cvmx_pcieepx_cfg554_cn73xx {
+		u32 reserved_25_31 : 7;
+		u32 iif : 1;
+		u32 prv : 16;
+		u32 reserved_6_7 : 2;
+		u32 p23td : 1;
+		u32 bt : 1;
+		u32 fm : 4;
+	} cn73xx;
+	struct cvmx_pcieepx_cfg554_s cn78xx;
+	struct cvmx_pcieepx_cfg554_s cn78xxp1;
+	struct cvmx_pcieepx_cfg554_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg554 cvmx_pcieepx_cfg554_t;
+
+/**
+ * cvmx_pcieep#_cfg558
+ *
+ * This register contains the five hundred fifty-ninth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg558 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg558_s {
+		u32 ple : 1;
+		u32 rxstatus : 31;
+	} s;
+	struct cvmx_pcieepx_cfg558_s cn73xx;
+	struct cvmx_pcieepx_cfg558_s cn78xx;
+	struct cvmx_pcieepx_cfg558_s cn78xxp1;
+	struct cvmx_pcieepx_cfg558_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg558 cvmx_pcieepx_cfg558_t;
+
+/**
+ * cvmx_pcieep#_cfg559
+ *
+ * This register contains the five hundred sixtieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pcieepx_cfg559 {
+	u32 u32;
+	struct cvmx_pcieepx_cfg559_s {
+		u32 reserved_1_31 : 31;
+		u32 dbi_ro_wr_en : 1;
+	} s;
+	struct cvmx_pcieepx_cfg559_s cn73xx;
+	struct cvmx_pcieepx_cfg559_s cn78xx;
+	struct cvmx_pcieepx_cfg559_s cn78xxp1;
+	struct cvmx_pcieepx_cfg559_s cnf75xx;
+};
+
+typedef union cvmx_pcieepx_cfg559 cvmx_pcieepx_cfg559_t;
+
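+/*
+ * Usage sketch (illustrative only, not part of the imported definitions):
+ * CFG559[DBI_RO_WR_EN] is the usual Synopsys DesignWare knob that must be
+ * set before otherwise read-only config registers can be patched through
+ * the DBI. Assuming the CVMX_PCIEEPX_CFG559() address helper defined
+ * earlier in this header and the cvmx_pcie_cfgx_read()/_write()
+ * accessors:
+ *
+ *	cvmx_pcieepx_cfg559_t cfg559;
+ *
+ *	cfg559.u32 = cvmx_pcie_cfgx_read(pcie_port,
+ *					 CVMX_PCIEEPX_CFG559(pcie_port));
+ *	cfg559.s.dbi_ro_wr_en = 1;
+ *	cvmx_pcie_cfgx_write(pcie_port, CVMX_PCIEEPX_CFG559(pcie_port),
+ *			     cfg559.u32);
+ */
+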
+#endif
-- 
2.29.2


* [PATCH v1 18/50] mips: octeon: Add cvmx-pciercx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (16 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 17/50] mips: octeon: Add cvmx-pcieepx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 19/50] mips: octeon: Add cvmx-pcsx-defs.h " Stefan Roese
                   ` (34 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-pciercx-defs.h header file from the 2013 Cavium U-Boot
version. It will be used by the drivers added later to support PCIe and
networking on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---
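
A usage sketch for reference (illustrative only, not part of the patch):
the per-port address helpers in this header are normally paired with the
indirect config-space accessors of the cvmx-pcie code. For example,
checking the link state of PCIe port 0, assuming the
cvmx_pcie_cfgx_read() accessor and the cvmx_pciercx_cfg032 union defined
further down in this header:

	cvmx_pciercx_cfg032_t cfg032;

	/* CFG032 is the port's PCIe link control/status register */
	cfg032.u32 = cvmx_pcie_cfgx_read(0, CVMX_PCIERCX_CFG032(0));
	if (cfg032.s.dlla)	/* data link layer active */
		printf("PCIe port 0: link is up\n");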

 .../include/mach/cvmx-pciercx-defs.h          | 5586 +++++++++++++++++
 1 file changed, 5586 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pciercx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pciercx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pciercx-defs.h
new file mode 100644
index 0000000000..1e3e045cbe
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pciercx-defs.h
@@ -0,0 +1,5586 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pciercx.
+ */
+
+#ifndef __CVMX_PCIERCX_DEFS_H__
+#define __CVMX_PCIERCX_DEFS_H__
+
+static inline u64 CVMX_PCIERCX_CFG000(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000000ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000000ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000000ull;
+	}
+	return 0x0000020000000000ull + (offset) * 0x100000000ull;
+}
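+
+/*
+ * Note on the CVMX_PCIERCX_CFG* address helpers: on the CN7xxx families
+ * each PCIe port's root-complex config space sits in its own 4 GiB
+ * window, so the helpers return base + port * 0x100000000; on the older
+ * CN6xxx/CNF71XX parts only the bare config-space byte offset is
+ * returned, presumably for use with the indirect PEM config read/write
+ * interface. The fall-through out of the CN78XX case is harmless: one
+ * of its two model checks always matches, and the CN73XX address is
+ * identical anyway.
+ */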
+
+static inline u64 CVMX_PCIERCX_CFG001(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000004ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000004ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000004ull;
+	}
+	return 0x0000020000000004ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG002(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000008ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000008ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000008ull;
+	}
+	return 0x0000020000000008ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG003(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000000Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000000Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000000Cull;
+	}
+	return 0x000002000000000Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG004(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000010ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000010ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000010ull;
+	}
+	return 0x0000020000000010ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG005(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000014ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000014ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000014ull;
+	}
+	return 0x0000020000000014ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG006(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000018ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000018ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000018ull;
+	}
+	return 0x0000020000000018ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG007(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000001Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000001Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000001Cull;
+	}
+	return 0x000002000000001Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG008(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000020ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000020ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000020ull;
+	}
+	return 0x0000020000000020ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG009(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000024ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000024ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000024ull;
+	}
+	return 0x0000020000000024ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG010(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000028ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000028ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000028ull;
+	}
+	return 0x0000020000000028ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG011(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000002Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000002Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000002Cull;
+	}
+	return 0x000002000000002Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG012(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000030ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000030ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000030ull;
+	}
+	return 0x0000020000000030ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG013(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000034ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000034ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000034ull;
+	}
+	return 0x0000020000000034ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG014(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000038ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000038ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000038ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000038ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000038ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000038ull;
+	}
+	return 0x0000020000000038ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG015(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000003Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000003Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000003Cull;
+	}
+	return 0x000002000000003Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG016(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000040ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000040ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000040ull;
+	}
+	return 0x0000020000000040ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG017(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000044ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000044ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000044ull;
+	}
+	return 0x0000020000000044ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG020(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000050ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000050ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000050ull;
+	}
+	return 0x0000020000000050ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG021(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000054ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000054ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000054ull;
+	}
+	return 0x0000020000000054ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG022(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000058ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000058ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000058ull;
+	}
+	return 0x0000020000000058ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG023(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000005Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000005Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000005Cull;
+	}
+	return 0x000002000000005Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG028(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000070ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000070ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000070ull;
+	}
+	return 0x0000020000000070ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG029(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000074ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000074ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000074ull;
+	}
+	return 0x0000020000000074ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG030(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000078ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000078ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000078ull;
+	}
+	return 0x0000020000000078ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG031(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000007Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000007Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000007Cull;
+	}
+	return 0x000002000000007Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG032(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000080ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000080ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000080ull;
+	}
+	return 0x0000020000000080ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG033(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000084ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000084ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000084ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000084ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000084ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000084ull;
+	}
+	return 0x0000020000000084ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG034(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000088ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000088ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000088ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000088ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000088ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000088ull;
+	}
+	return 0x0000020000000088ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG035(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000008Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000008Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000008Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000008Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000008Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000008Cull;
+	}
+	return 0x000002000000008Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG036(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000090ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000090ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000090ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000090ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000090ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000090ull;
+	}
+	return 0x0000020000000090ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG037(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000094ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000094ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000094ull;
+	}
+	return 0x0000020000000094ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG038(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000098ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000098ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000098ull;
+	}
+	return 0x0000020000000098ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG039(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000009Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000009Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000009Cull;
+	}
+	return 0x000002000000009Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG040(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000000A0ull;
+	}
+	return 0x00000200000000A0ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG041(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000000A4ull;
+	}
+	return 0x00000200000000A4ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG042(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000000A8ull;
+	}
+	return 0x00000200000000A8ull + (offset) * 0x100000000ull;
+}
+
+#define CVMX_PCIERCX_CFG044(offset) (0x00000200000000B0ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG045(offset) (0x00000200000000B4ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG046(offset) (0x00000200000000B8ull + ((offset) & 3) * 0x100000000ull)
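+
+/*
+ * CFG044..CFG046 (and CFG086..CFG092 further below) are plain macros
+ * rather than per-family helpers: their encoded address is the same on
+ * every chip that implements them, and the (offset & 3) masking bounds
+ * the PCIe port index to the four supported ports.
+ */
+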
+static inline u64 CVMX_PCIERCX_CFG064(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000100ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000100ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000100ull;
+	}
+	return 0x0000020000000100ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG065(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000104ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000104ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000104ull;
+	}
+	return 0x0000020000000104ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG066(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000108ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000108ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000108ull;
+	}
+	return 0x0000020000000108ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG067(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000010Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000010Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000010Cull;
+	}
+	return 0x000002000000010Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG068(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000110ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000110ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000110ull;
+	}
+	return 0x0000020000000110ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG069(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000114ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000114ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000114ull;
+	}
+	return 0x0000020000000114ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG070(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000118ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000118ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000118ull;
+	}
+	return 0x0000020000000118ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG071(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000011Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000011Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000011Cull;
+	}
+	return 0x000002000000011Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG072(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000120ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000120ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000120ull;
+	}
+	return 0x0000020000000120ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG073(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000124ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000124ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000124ull;
+	}
+	return 0x0000020000000124ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG074(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000128ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000128ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000128ull;
+	}
+	return 0x0000020000000128ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG075(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000012Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000012Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000012Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000012Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000012Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000012Cull;
+	}
+	return 0x000002000000012Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG076(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000130ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000130ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000130ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000130ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000130ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000130ull;
+	}
+	return 0x0000020000000130ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG077(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000134ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000134ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000134ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000134ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000134ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000134ull;
+	}
+	return 0x0000020000000134ull + (offset) * 0x100000000ull;
+}
+
+#define CVMX_PCIERCX_CFG086(offset) (0x0000020000000158ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG087(offset) (0x000002000000015Cull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG088(offset) (0x0000020000000160ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG089(offset) (0x0000020000000164ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG090(offset) (0x0000020000000168ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG091(offset) (0x000002000000016Cull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG092(offset) (0x0000020000000170ull + ((offset) & 3) * 0x100000000ull)
+static inline u64 CVMX_PCIERCX_CFG448(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000700ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000700ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000700ull;
+	}
+	return 0x0000020000000700ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG449(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000704ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000704ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000704ull;
+	}
+	return 0x0000020000000704ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG450(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000708ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000708ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000708ull;
+	}
+	return 0x0000020000000708ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG451(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000070Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000070Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000070Cull;
+	}
+	return 0x000002000000070Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG452(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000710ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000710ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000710ull;
+	}
+	return 0x0000020000000710ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG453(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000714ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000714ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000714ull;
+	}
+	return 0x0000020000000714ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG454(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000718ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000718ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000718ull;
+	}
+	return 0x0000020000000718ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG455(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000071Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000071Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000071Cull;
+	}
+	return 0x000002000000071Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG456(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000720ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000720ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000720ull;
+	}
+	return 0x0000020000000720ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG458(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000728ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000728ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000728ull;
+	}
+	return 0x0000020000000728ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG459(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000072Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000072Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000072Cull;
+	}
+	return 0x000002000000072Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG460(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000730ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000730ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000730ull;
+	}
+	return 0x0000020000000730ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG461(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000734ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000734ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000734ull;
+	}
+	return 0x0000020000000734ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG462(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000738ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000738ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000738ull;
+	}
+	return 0x0000020000000738ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG463(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000073Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000073Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000073Cull;
+	}
+	return 0x000002000000073Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG464(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000740ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000740ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000740ull;
+	}
+	return 0x0000020000000740ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG465(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000744ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000744ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000744ull;
+	}
+	return 0x0000020000000744ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG466(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000748ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000748ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000748ull;
+	}
+	return 0x0000020000000748ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG467(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000074Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000074Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000074Cull;
+	}
+	return 0x000002000000074Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG468(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000750ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000750ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000750ull;
+	}
+	return 0x0000020000000750ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG490(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007A8ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000007A8ull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007A8ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG491(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007ACull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000007ACull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007ACull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG492(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000007B0ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00000200000007B0ull + (offset) * 0x100000000ull;
+	}
+	return 0x00000000000007B0ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG515(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x000002000000080Cull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x000002000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x000002000000080Cull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x000000000000080Cull;
+	}
+	return 0x000002000000080Cull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG516(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000810ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000810ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000810ull;
+	}
+	return 0x0000020000000810ull + (offset) * 0x100000000ull;
+}
+
+static inline u64 CVMX_PCIERCX_CFG517(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000020000000814ull + (offset) * 0x100000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000020000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000020000000814ull + (offset) * 0x100000000ull;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000814ull;
+	}
+	return 0x0000020000000814ull + (offset) * 0x100000000ull;
+}
+
+#define CVMX_PCIERCX_CFG548(offset) (0x0000020000000890ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG554(offset) (0x00000200000008A8ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG558(offset) (0x00000200000008B8ull + ((offset) & 3) * 0x100000000ull)
+#define CVMX_PCIERCX_CFG559(offset) (0x00000200000008BCull + ((offset) & 3) * 0x100000000ull)
+
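
The CFG448..CFG517 helpers above all follow the same pattern: on the
direct-mapped models they return a fixed register offset plus a 4 GiB
stride per PCIe port, while the CN6xxx-class parts return only the bare
dword offset, since those chips reach the RC config space indirectly
(through the PEM config read/write registers) rather than via a flat
address map. A minimal stand-alone sketch of the arithmetic, reusing the
constants from CVMX_PCIERCX_CFG448() above:

	#include <stdint.h>
	#include <stdio.h>

	/* PCIe RC config registers sit 4 GiB apart per port on the
	 * direct-mapped Octeon models (see the helpers above). */
	#define PCIERCX_PORT_STRIDE	0x100000000ull

	/* Recompute CVMX_PCIERCX_CFG448() for a direct-mapped model. */
	static uint64_t pciercx_cfg448_addr(unsigned long port)
	{
		return 0x0000020000000700ull + port * PCIERCX_PORT_STRIDE;
	}

	int main(void)
	{
		/* Port 2, e.g. on CN73XX: prints 0x0000020200000700 */
		printf("CFG448(2) = 0x%016llx\n",
		       (unsigned long long)pciercx_cfg448_addr(2));
		return 0;
	}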
+/**
+ * cvmx_pcierc#_cfg000
+ *
+ * This register contains the first 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg000 {
+	u32 u32;
+	struct cvmx_pciercx_cfg000_s {
+		u32 devid : 16;
+		u32 vendid : 16;
+	} s;
+	struct cvmx_pciercx_cfg000_s cn52xx;
+	struct cvmx_pciercx_cfg000_s cn52xxp1;
+	struct cvmx_pciercx_cfg000_s cn56xx;
+	struct cvmx_pciercx_cfg000_s cn56xxp1;
+	struct cvmx_pciercx_cfg000_s cn61xx;
+	struct cvmx_pciercx_cfg000_s cn63xx;
+	struct cvmx_pciercx_cfg000_s cn63xxp1;
+	struct cvmx_pciercx_cfg000_s cn66xx;
+	struct cvmx_pciercx_cfg000_s cn68xx;
+	struct cvmx_pciercx_cfg000_s cn68xxp1;
+	struct cvmx_pciercx_cfg000_s cn70xx;
+	struct cvmx_pciercx_cfg000_s cn70xxp1;
+	struct cvmx_pciercx_cfg000_s cn73xx;
+	struct cvmx_pciercx_cfg000_s cn78xx;
+	struct cvmx_pciercx_cfg000_s cn78xxp1;
+	struct cvmx_pciercx_cfg000_s cnf71xx;
+	struct cvmx_pciercx_cfg000_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg000 cvmx_pciercx_cfg000_t;
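
As a usage sketch: these unions are filled from a raw 32-bit config read
and then decoded through their bitfields. cvmx_pcie_cfgx_read() below is
assumed to be the accessor from the Octeon PCIe support code (it is not
part of this header), and CVMX_PCIERCX_CFG000() is the address helper
defined earlier in this file:

	#include <stdio.h>
	#include "cvmx-pciercx-defs.h"

	/* Assumed accessor: 32-bit read of the RC config space of the
	 * given PCIe port. */
	extern uint32_t cvmx_pcie_cfgx_read(int pcie_port, uint32_t cfg_offset);

	static void print_rc_ids(int pcie_port)
	{
		cvmx_pciercx_cfg000_t cfg000;

		cfg000.u32 = cvmx_pcie_cfgx_read(pcie_port,
						 CVMX_PCIERCX_CFG000(pcie_port));
		printf("RC%d: vendor 0x%04x, device 0x%04x\n",
		       pcie_port, cfg000.s.vendid, cfg000.s.devid);
	}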
+
+/**
+ * cvmx_pcierc#_cfg001
+ *
+ * This register contains the second 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg001 {
+	u32 u32;
+	struct cvmx_pciercx_cfg001_s {
+		u32 dpe : 1;
+		u32 sse : 1;
+		u32 rma : 1;
+		u32 rta : 1;
+		u32 sta : 1;
+		u32 devt : 2;
+		u32 mdpe : 1;
+		u32 fbb : 1;
+		u32 reserved_22_22 : 1;
+		u32 m66 : 1;
+		u32 cl : 1;
+		u32 i_stat : 1;
+		u32 reserved_11_18 : 8;
+		u32 i_dis : 1;
+		u32 fbbe : 1;
+		u32 see : 1;
+		u32 ids_wcc : 1;
+		u32 per : 1;
+		u32 vps : 1;
+		u32 mwice : 1;
+		u32 scse : 1;
+		u32 me : 1;
+		u32 msae : 1;
+		u32 isae : 1;
+	} s;
+	struct cvmx_pciercx_cfg001_s cn52xx;
+	struct cvmx_pciercx_cfg001_s cn52xxp1;
+	struct cvmx_pciercx_cfg001_s cn56xx;
+	struct cvmx_pciercx_cfg001_s cn56xxp1;
+	struct cvmx_pciercx_cfg001_s cn61xx;
+	struct cvmx_pciercx_cfg001_s cn63xx;
+	struct cvmx_pciercx_cfg001_s cn63xxp1;
+	struct cvmx_pciercx_cfg001_s cn66xx;
+	struct cvmx_pciercx_cfg001_s cn68xx;
+	struct cvmx_pciercx_cfg001_s cn68xxp1;
+	struct cvmx_pciercx_cfg001_s cn70xx;
+	struct cvmx_pciercx_cfg001_s cn70xxp1;
+	struct cvmx_pciercx_cfg001_s cn73xx;
+	struct cvmx_pciercx_cfg001_s cn78xx;
+	struct cvmx_pciercx_cfg001_s cn78xxp1;
+	struct cvmx_pciercx_cfg001_s cnf71xx;
+	struct cvmx_pciercx_cfg001_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg001 cvmx_pciercx_cfg001_t;
+
+/**
+ * cvmx_pcierc#_cfg002
+ *
+ * This register contains the third 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg002 {
+	u32 u32;
+	struct cvmx_pciercx_cfg002_s {
+		u32 bcc : 8;
+		u32 sc : 8;
+		u32 pi : 8;
+		u32 rid : 8;
+	} s;
+	struct cvmx_pciercx_cfg002_s cn52xx;
+	struct cvmx_pciercx_cfg002_s cn52xxp1;
+	struct cvmx_pciercx_cfg002_s cn56xx;
+	struct cvmx_pciercx_cfg002_s cn56xxp1;
+	struct cvmx_pciercx_cfg002_s cn61xx;
+	struct cvmx_pciercx_cfg002_s cn63xx;
+	struct cvmx_pciercx_cfg002_s cn63xxp1;
+	struct cvmx_pciercx_cfg002_s cn66xx;
+	struct cvmx_pciercx_cfg002_s cn68xx;
+	struct cvmx_pciercx_cfg002_s cn68xxp1;
+	struct cvmx_pciercx_cfg002_s cn70xx;
+	struct cvmx_pciercx_cfg002_s cn70xxp1;
+	struct cvmx_pciercx_cfg002_s cn73xx;
+	struct cvmx_pciercx_cfg002_s cn78xx;
+	struct cvmx_pciercx_cfg002_s cn78xxp1;
+	struct cvmx_pciercx_cfg002_s cnf71xx;
+	struct cvmx_pciercx_cfg002_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg002 cvmx_pciercx_cfg002_t;
+
+/**
+ * cvmx_pcierc#_cfg003
+ *
+ * This register contains the fourth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg003 {
+	u32 u32;
+	struct cvmx_pciercx_cfg003_s {
+		u32 bist : 8;
+		u32 mfd : 1;
+		u32 chf : 7;
+		u32 lt : 8;
+		u32 cls : 8;
+	} s;
+	struct cvmx_pciercx_cfg003_s cn52xx;
+	struct cvmx_pciercx_cfg003_s cn52xxp1;
+	struct cvmx_pciercx_cfg003_s cn56xx;
+	struct cvmx_pciercx_cfg003_s cn56xxp1;
+	struct cvmx_pciercx_cfg003_s cn61xx;
+	struct cvmx_pciercx_cfg003_s cn63xx;
+	struct cvmx_pciercx_cfg003_s cn63xxp1;
+	struct cvmx_pciercx_cfg003_s cn66xx;
+	struct cvmx_pciercx_cfg003_s cn68xx;
+	struct cvmx_pciercx_cfg003_s cn68xxp1;
+	struct cvmx_pciercx_cfg003_s cn70xx;
+	struct cvmx_pciercx_cfg003_s cn70xxp1;
+	struct cvmx_pciercx_cfg003_s cn73xx;
+	struct cvmx_pciercx_cfg003_s cn78xx;
+	struct cvmx_pciercx_cfg003_s cn78xxp1;
+	struct cvmx_pciercx_cfg003_s cnf71xx;
+	struct cvmx_pciercx_cfg003_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg003 cvmx_pciercx_cfg003_t;
+
+/**
+ * cvmx_pcierc#_cfg004
+ *
+ * This register contains the fifth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg004 {
+	u32 u32;
+	struct cvmx_pciercx_cfg004_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pciercx_cfg004_s cn52xx;
+	struct cvmx_pciercx_cfg004_s cn52xxp1;
+	struct cvmx_pciercx_cfg004_s cn56xx;
+	struct cvmx_pciercx_cfg004_s cn56xxp1;
+	struct cvmx_pciercx_cfg004_s cn61xx;
+	struct cvmx_pciercx_cfg004_s cn63xx;
+	struct cvmx_pciercx_cfg004_s cn63xxp1;
+	struct cvmx_pciercx_cfg004_s cn66xx;
+	struct cvmx_pciercx_cfg004_s cn68xx;
+	struct cvmx_pciercx_cfg004_s cn68xxp1;
+	struct cvmx_pciercx_cfg004_cn70xx {
+		u32 reserved_31_0 : 32;
+	} cn70xx;
+	struct cvmx_pciercx_cfg004_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg004_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg004_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg004_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg004_s cnf71xx;
+	struct cvmx_pciercx_cfg004_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg004 cvmx_pciercx_cfg004_t;
+
+/**
+ * cvmx_pcierc#_cfg005
+ *
+ * This register contains the sixth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg005 {
+	u32 u32;
+	struct cvmx_pciercx_cfg005_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pciercx_cfg005_s cn52xx;
+	struct cvmx_pciercx_cfg005_s cn52xxp1;
+	struct cvmx_pciercx_cfg005_s cn56xx;
+	struct cvmx_pciercx_cfg005_s cn56xxp1;
+	struct cvmx_pciercx_cfg005_s cn61xx;
+	struct cvmx_pciercx_cfg005_s cn63xx;
+	struct cvmx_pciercx_cfg005_s cn63xxp1;
+	struct cvmx_pciercx_cfg005_s cn66xx;
+	struct cvmx_pciercx_cfg005_s cn68xx;
+	struct cvmx_pciercx_cfg005_s cn68xxp1;
+	struct cvmx_pciercx_cfg005_cn70xx {
+		u32 reserved_31_0 : 32;
+	} cn70xx;
+	struct cvmx_pciercx_cfg005_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg005_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg005_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg005_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg005_s cnf71xx;
+	struct cvmx_pciercx_cfg005_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg005 cvmx_pciercx_cfg005_t;
+
+/**
+ * cvmx_pcierc#_cfg006
+ *
+ * This register contains the seventh 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg006 {
+	u32 u32;
+	struct cvmx_pciercx_cfg006_s {
+		u32 slt : 8;
+		u32 subbnum : 8;
+		u32 sbnum : 8;
+		u32 pbnum : 8;
+	} s;
+	struct cvmx_pciercx_cfg006_s cn52xx;
+	struct cvmx_pciercx_cfg006_s cn52xxp1;
+	struct cvmx_pciercx_cfg006_s cn56xx;
+	struct cvmx_pciercx_cfg006_s cn56xxp1;
+	struct cvmx_pciercx_cfg006_s cn61xx;
+	struct cvmx_pciercx_cfg006_s cn63xx;
+	struct cvmx_pciercx_cfg006_s cn63xxp1;
+	struct cvmx_pciercx_cfg006_s cn66xx;
+	struct cvmx_pciercx_cfg006_s cn68xx;
+	struct cvmx_pciercx_cfg006_s cn68xxp1;
+	struct cvmx_pciercx_cfg006_s cn70xx;
+	struct cvmx_pciercx_cfg006_s cn70xxp1;
+	struct cvmx_pciercx_cfg006_s cn73xx;
+	struct cvmx_pciercx_cfg006_s cn78xx;
+	struct cvmx_pciercx_cfg006_s cn78xxp1;
+	struct cvmx_pciercx_cfg006_s cnf71xx;
+	struct cvmx_pciercx_cfg006_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg006 cvmx_pciercx_cfg006_t;
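
CFG006 is the standard type 1 bus-number dword, so bridge setup programs
the primary/secondary/subordinate bus numbers here. An illustrative
stand-alone decode (bit positions follow the structure above):

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the type 1 bus-number dword (CFG006 layout). */
	static void decode_bus_numbers(uint32_t cfg006)
	{
		unsigned int pbnum   = cfg006 & 0xff;         /* primary bus */
		unsigned int sbnum   = (cfg006 >> 8) & 0xff;  /* secondary bus */
		unsigned int subbnum = (cfg006 >> 16) & 0xff; /* subordinate bus */

		printf("buses: pri %u, sec %u, sub %u\n",
		       pbnum, sbnum, subbnum);
	}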
+
+/**
+ * cvmx_pcierc#_cfg007
+ *
+ * This register contains the eighth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg007 {
+	u32 u32;
+	struct cvmx_pciercx_cfg007_s {
+		u32 dpe : 1;
+		u32 sse : 1;
+		u32 rma : 1;
+		u32 rta : 1;
+		u32 sta : 1;
+		u32 devt : 2;
+		u32 mdpe : 1;
+		u32 fbb : 1;
+		u32 reserved_22_22 : 1;
+		u32 m66 : 1;
+		u32 reserved_16_20 : 5;
+		u32 lio_limi : 4;
+		u32 reserved_9_11 : 3;
+		u32 io32b : 1;
+		u32 lio_base : 4;
+		u32 reserved_1_3 : 3;
+		u32 io32a : 1;
+	} s;
+	struct cvmx_pciercx_cfg007_s cn52xx;
+	struct cvmx_pciercx_cfg007_s cn52xxp1;
+	struct cvmx_pciercx_cfg007_s cn56xx;
+	struct cvmx_pciercx_cfg007_s cn56xxp1;
+	struct cvmx_pciercx_cfg007_s cn61xx;
+	struct cvmx_pciercx_cfg007_s cn63xx;
+	struct cvmx_pciercx_cfg007_s cn63xxp1;
+	struct cvmx_pciercx_cfg007_s cn66xx;
+	struct cvmx_pciercx_cfg007_s cn68xx;
+	struct cvmx_pciercx_cfg007_s cn68xxp1;
+	struct cvmx_pciercx_cfg007_s cn70xx;
+	struct cvmx_pciercx_cfg007_s cn70xxp1;
+	struct cvmx_pciercx_cfg007_s cn73xx;
+	struct cvmx_pciercx_cfg007_s cn78xx;
+	struct cvmx_pciercx_cfg007_s cn78xxp1;
+	struct cvmx_pciercx_cfg007_s cnf71xx;
+	struct cvmx_pciercx_cfg007_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg007 cvmx_pciercx_cfg007_t;
+
+/**
+ * cvmx_pcierc#_cfg008
+ *
+ * This register contains the ninth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg008 {
+	u32 u32;
+	struct cvmx_pciercx_cfg008_s {
+		u32 ml_addr : 12;
+		u32 reserved_16_19 : 4;
+		u32 mb_addr : 12;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pciercx_cfg008_s cn52xx;
+	struct cvmx_pciercx_cfg008_s cn52xxp1;
+	struct cvmx_pciercx_cfg008_s cn56xx;
+	struct cvmx_pciercx_cfg008_s cn56xxp1;
+	struct cvmx_pciercx_cfg008_s cn61xx;
+	struct cvmx_pciercx_cfg008_s cn63xx;
+	struct cvmx_pciercx_cfg008_s cn63xxp1;
+	struct cvmx_pciercx_cfg008_s cn66xx;
+	struct cvmx_pciercx_cfg008_s cn68xx;
+	struct cvmx_pciercx_cfg008_s cn68xxp1;
+	struct cvmx_pciercx_cfg008_cn70xx {
+		u32 ml_addr : 12;
+		u32 reserved_19_16 : 4;
+		u32 mb_addr : 12;
+		u32 reserved_3_0 : 4;
+	} cn70xx;
+	struct cvmx_pciercx_cfg008_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg008_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg008_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg008_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg008_s cnf71xx;
+	struct cvmx_pciercx_cfg008_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg008 cvmx_pciercx_cfg008_t;
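
CFG008 carries the standard bridge memory base/limit pair: both 12-bit
fields hold address bits [31:20], and the limit is inclusive of the
1 MiB granule it implies. An illustrative stand-alone decode:

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the type 1 memory base/limit dword (CFG008 layout). */
	static void decode_mem_window(uint32_t cfg008)
	{
		uint32_t mb_addr = (cfg008 >> 4) & 0xfff;  /* bits [15:4]  */
		uint32_t ml_addr = (cfg008 >> 20) & 0xfff; /* bits [31:20] */
		uint32_t base  = mb_addr << 20;
		uint32_t limit = (ml_addr << 20) | 0xfffff;

		printf("bridge memory window: 0x%08x-0x%08x\n", base, limit);
	}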
+
+/**
+ * cvmx_pcierc#_cfg009
+ *
+ * This register contains the tenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg009 {
+	u32 u32;
+	struct cvmx_pciercx_cfg009_s {
+		u32 lmem_limit : 12;
+		u32 reserved_17_19 : 3;
+		u32 mem64b : 1;
+		u32 lmem_base : 12;
+		u32 reserved_1_3 : 3;
+		u32 mem64a : 1;
+	} s;
+	struct cvmx_pciercx_cfg009_s cn52xx;
+	struct cvmx_pciercx_cfg009_s cn52xxp1;
+	struct cvmx_pciercx_cfg009_s cn56xx;
+	struct cvmx_pciercx_cfg009_s cn56xxp1;
+	struct cvmx_pciercx_cfg009_s cn61xx;
+	struct cvmx_pciercx_cfg009_s cn63xx;
+	struct cvmx_pciercx_cfg009_s cn63xxp1;
+	struct cvmx_pciercx_cfg009_s cn66xx;
+	struct cvmx_pciercx_cfg009_s cn68xx;
+	struct cvmx_pciercx_cfg009_s cn68xxp1;
+	struct cvmx_pciercx_cfg009_cn70xx {
+		u32 lmem_limit : 12;
+		u32 reserved_19_17 : 3;
+		u32 mem64b : 1;
+		u32 lmem_base : 12;
+		u32 reserved_3_1 : 3;
+		u32 mem64a : 1;
+	} cn70xx;
+	struct cvmx_pciercx_cfg009_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg009_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg009_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg009_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg009_s cnf71xx;
+	struct cvmx_pciercx_cfg009_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg009 cvmx_pciercx_cfg009_t;
+
+/**
+ * cvmx_pcierc#_cfg010
+ *
+ * This register contains the eleventh 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg010 {
+	u32 u32;
+	struct cvmx_pciercx_cfg010_s {
+		u32 umem_base : 32;
+	} s;
+	struct cvmx_pciercx_cfg010_s cn52xx;
+	struct cvmx_pciercx_cfg010_s cn52xxp1;
+	struct cvmx_pciercx_cfg010_s cn56xx;
+	struct cvmx_pciercx_cfg010_s cn56xxp1;
+	struct cvmx_pciercx_cfg010_s cn61xx;
+	struct cvmx_pciercx_cfg010_s cn63xx;
+	struct cvmx_pciercx_cfg010_s cn63xxp1;
+	struct cvmx_pciercx_cfg010_s cn66xx;
+	struct cvmx_pciercx_cfg010_s cn68xx;
+	struct cvmx_pciercx_cfg010_s cn68xxp1;
+	struct cvmx_pciercx_cfg010_s cn70xx;
+	struct cvmx_pciercx_cfg010_s cn70xxp1;
+	struct cvmx_pciercx_cfg010_s cn73xx;
+	struct cvmx_pciercx_cfg010_s cn78xx;
+	struct cvmx_pciercx_cfg010_s cn78xxp1;
+	struct cvmx_pciercx_cfg010_s cnf71xx;
+	struct cvmx_pciercx_cfg010_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg010 cvmx_pciercx_cfg010_t;
+
+/**
+ * cvmx_pcierc#_cfg011
+ *
+ * This register contains the twelfth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg011 {
+	u32 u32;
+	struct cvmx_pciercx_cfg011_s {
+		u32 umem_limit : 32;
+	} s;
+	struct cvmx_pciercx_cfg011_s cn52xx;
+	struct cvmx_pciercx_cfg011_s cn52xxp1;
+	struct cvmx_pciercx_cfg011_s cn56xx;
+	struct cvmx_pciercx_cfg011_s cn56xxp1;
+	struct cvmx_pciercx_cfg011_s cn61xx;
+	struct cvmx_pciercx_cfg011_s cn63xx;
+	struct cvmx_pciercx_cfg011_s cn63xxp1;
+	struct cvmx_pciercx_cfg011_s cn66xx;
+	struct cvmx_pciercx_cfg011_s cn68xx;
+	struct cvmx_pciercx_cfg011_s cn68xxp1;
+	struct cvmx_pciercx_cfg011_s cn70xx;
+	struct cvmx_pciercx_cfg011_s cn70xxp1;
+	struct cvmx_pciercx_cfg011_s cn73xx;
+	struct cvmx_pciercx_cfg011_s cn78xx;
+	struct cvmx_pciercx_cfg011_s cn78xxp1;
+	struct cvmx_pciercx_cfg011_s cnf71xx;
+	struct cvmx_pciercx_cfg011_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg011 cvmx_pciercx_cfg011_t;
+
+/**
+ * cvmx_pcierc#_cfg012
+ *
+ * This register contains the thirteenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg012 {
+	u32 u32;
+	struct cvmx_pciercx_cfg012_s {
+		u32 uio_limit : 16;
+		u32 uio_base : 16;
+	} s;
+	struct cvmx_pciercx_cfg012_s cn52xx;
+	struct cvmx_pciercx_cfg012_s cn52xxp1;
+	struct cvmx_pciercx_cfg012_s cn56xx;
+	struct cvmx_pciercx_cfg012_s cn56xxp1;
+	struct cvmx_pciercx_cfg012_s cn61xx;
+	struct cvmx_pciercx_cfg012_s cn63xx;
+	struct cvmx_pciercx_cfg012_s cn63xxp1;
+	struct cvmx_pciercx_cfg012_s cn66xx;
+	struct cvmx_pciercx_cfg012_s cn68xx;
+	struct cvmx_pciercx_cfg012_s cn68xxp1;
+	struct cvmx_pciercx_cfg012_s cn70xx;
+	struct cvmx_pciercx_cfg012_s cn70xxp1;
+	struct cvmx_pciercx_cfg012_s cn73xx;
+	struct cvmx_pciercx_cfg012_s cn78xx;
+	struct cvmx_pciercx_cfg012_s cn78xxp1;
+	struct cvmx_pciercx_cfg012_s cnf71xx;
+	struct cvmx_pciercx_cfg012_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg012 cvmx_pciercx_cfg012_t;
+
+/**
+ * cvmx_pcierc#_cfg013
+ *
+ * This register contains the fourteenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg013 {
+	u32 u32;
+	struct cvmx_pciercx_cfg013_s {
+		u32 reserved_8_31 : 24;
+		u32 cp : 8;
+	} s;
+	struct cvmx_pciercx_cfg013_s cn52xx;
+	struct cvmx_pciercx_cfg013_s cn52xxp1;
+	struct cvmx_pciercx_cfg013_s cn56xx;
+	struct cvmx_pciercx_cfg013_s cn56xxp1;
+	struct cvmx_pciercx_cfg013_s cn61xx;
+	struct cvmx_pciercx_cfg013_s cn63xx;
+	struct cvmx_pciercx_cfg013_s cn63xxp1;
+	struct cvmx_pciercx_cfg013_s cn66xx;
+	struct cvmx_pciercx_cfg013_s cn68xx;
+	struct cvmx_pciercx_cfg013_s cn68xxp1;
+	struct cvmx_pciercx_cfg013_s cn70xx;
+	struct cvmx_pciercx_cfg013_s cn70xxp1;
+	struct cvmx_pciercx_cfg013_s cn73xx;
+	struct cvmx_pciercx_cfg013_s cn78xx;
+	struct cvmx_pciercx_cfg013_s cn78xxp1;
+	struct cvmx_pciercx_cfg013_s cnf71xx;
+	struct cvmx_pciercx_cfg013_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg013 cvmx_pciercx_cfg013_t;
+
+/**
+ * cvmx_pcierc#_cfg014
+ *
+ * This register contains the fifteenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg014 {
+	u32 u32;
+	struct cvmx_pciercx_cfg014_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pciercx_cfg014_s cn52xx;
+	struct cvmx_pciercx_cfg014_s cn52xxp1;
+	struct cvmx_pciercx_cfg014_s cn56xx;
+	struct cvmx_pciercx_cfg014_s cn56xxp1;
+	struct cvmx_pciercx_cfg014_s cn61xx;
+	struct cvmx_pciercx_cfg014_s cn63xx;
+	struct cvmx_pciercx_cfg014_s cn63xxp1;
+	struct cvmx_pciercx_cfg014_s cn66xx;
+	struct cvmx_pciercx_cfg014_s cn68xx;
+	struct cvmx_pciercx_cfg014_s cn68xxp1;
+	struct cvmx_pciercx_cfg014_s cn70xx;
+	struct cvmx_pciercx_cfg014_s cn70xxp1;
+	struct cvmx_pciercx_cfg014_s cn73xx;
+	struct cvmx_pciercx_cfg014_s cn78xx;
+	struct cvmx_pciercx_cfg014_s cn78xxp1;
+	struct cvmx_pciercx_cfg014_s cnf71xx;
+	struct cvmx_pciercx_cfg014_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg014 cvmx_pciercx_cfg014_t;
+
+/**
+ * cvmx_pcierc#_cfg015
+ *
+ * This register contains the sixteenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg015 {
+	u32 u32;
+	struct cvmx_pciercx_cfg015_s {
+		u32 reserved_28_31 : 4;
+		u32 dtsees : 1;
+		u32 dts : 1;
+		u32 sdt : 1;
+		u32 pdt : 1;
+		u32 fbbe : 1;
+		u32 sbrst : 1;
+		u32 mam : 1;
+		u32 vga16d : 1;
+		u32 vgae : 1;
+		u32 isae : 1;
+		u32 see : 1;
+		u32 pere : 1;
+		u32 inta : 8;
+		u32 il : 8;
+	} s;
+	struct cvmx_pciercx_cfg015_s cn52xx;
+	struct cvmx_pciercx_cfg015_s cn52xxp1;
+	struct cvmx_pciercx_cfg015_s cn56xx;
+	struct cvmx_pciercx_cfg015_s cn56xxp1;
+	struct cvmx_pciercx_cfg015_s cn61xx;
+	struct cvmx_pciercx_cfg015_s cn63xx;
+	struct cvmx_pciercx_cfg015_s cn63xxp1;
+	struct cvmx_pciercx_cfg015_s cn66xx;
+	struct cvmx_pciercx_cfg015_s cn68xx;
+	struct cvmx_pciercx_cfg015_s cn68xxp1;
+	struct cvmx_pciercx_cfg015_cn70xx {
+		u32 reserved_31_28 : 4;
+		u32 dtsees : 1;
+		u32 dts : 1;
+		u32 sdt : 1;
+		u32 pdt : 1;
+		u32 fbbe : 1;
+		u32 sbrst : 1;
+		u32 mam : 1;
+		u32 vga16d : 1;
+		u32 vgae : 1;
+		u32 isae : 1;
+		u32 see : 1;
+		u32 pere : 1;
+		u32 inta : 8;
+		u32 il : 8;
+	} cn70xx;
+	struct cvmx_pciercx_cfg015_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg015_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg015_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg015_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg015_s cnf71xx;
+	struct cvmx_pciercx_cfg015_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg015 cvmx_pciercx_cfg015_t;
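
The sbrst bit in CFG015 is the usual bridge-control secondary bus reset,
which firmware pulses to hot-reset the downstream link. A hedged
read-modify-write sketch, assuming the cvmx_pcie_cfgx_read()/
cvmx_pcie_cfgx_write() accessors from the Octeon PCIe code and the
CVMX_PCIERCX_CFG015() address helper defined earlier in this file:

	#include "cvmx-pciercx-defs.h"

	/* Assumed accessors from the Octeon PCIe support code. */
	extern uint32_t cvmx_pcie_cfgx_read(int pcie_port, uint32_t cfg_offset);
	extern void cvmx_pcie_cfgx_write(int pcie_port, uint32_t cfg_offset,
					 uint32_t val);

	/* Pulse secondary bus reset to hot-reset the downstream link. */
	static void rc_secondary_bus_reset(int pcie_port)
	{
		cvmx_pciercx_cfg015_t cfg015;

		cfg015.u32 = cvmx_pcie_cfgx_read(pcie_port,
						 CVMX_PCIERCX_CFG015(pcie_port));
		cfg015.s.sbrst = 1;
		cvmx_pcie_cfgx_write(pcie_port, CVMX_PCIERCX_CFG015(pcie_port),
				     cfg015.u32);
		/* ...hold the reset for a few milliseconds... */
		cfg015.s.sbrst = 0;
		cvmx_pcie_cfgx_write(pcie_port, CVMX_PCIERCX_CFG015(pcie_port),
				     cfg015.u32);
	}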
+
+/**
+ * cvmx_pcierc#_cfg016
+ *
+ * This register contains the seventeenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg016 {
+	u32 u32;
+	struct cvmx_pciercx_cfg016_s {
+		u32 pmes : 5;
+		u32 d2s : 1;
+		u32 d1s : 1;
+		u32 auxc : 3;
+		u32 dsi : 1;
+		u32 reserved_20_20 : 1;
+		u32 pme_clock : 1;
+		u32 pmsv : 3;
+		u32 ncp : 8;
+		u32 pmcid : 8;
+	} s;
+	struct cvmx_pciercx_cfg016_s cn52xx;
+	struct cvmx_pciercx_cfg016_s cn52xxp1;
+	struct cvmx_pciercx_cfg016_s cn56xx;
+	struct cvmx_pciercx_cfg016_s cn56xxp1;
+	struct cvmx_pciercx_cfg016_s cn61xx;
+	struct cvmx_pciercx_cfg016_s cn63xx;
+	struct cvmx_pciercx_cfg016_s cn63xxp1;
+	struct cvmx_pciercx_cfg016_s cn66xx;
+	struct cvmx_pciercx_cfg016_s cn68xx;
+	struct cvmx_pciercx_cfg016_s cn68xxp1;
+	struct cvmx_pciercx_cfg016_s cn70xx;
+	struct cvmx_pciercx_cfg016_s cn70xxp1;
+	struct cvmx_pciercx_cfg016_s cn73xx;
+	struct cvmx_pciercx_cfg016_s cn78xx;
+	struct cvmx_pciercx_cfg016_s cn78xxp1;
+	struct cvmx_pciercx_cfg016_s cnf71xx;
+	struct cvmx_pciercx_cfg016_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg016 cvmx_pciercx_cfg016_t;
+
+/**
+ * cvmx_pcierc#_cfg017
+ *
+ * This register contains the eighteenth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg017 {
+	u32 u32;
+	struct cvmx_pciercx_cfg017_s {
+		u32 pmdia : 8;
+		u32 bpccee : 1;
+		u32 bd3h : 1;
+		u32 reserved_16_21 : 6;
+		u32 pmess : 1;
+		u32 pmedsia : 2;
+		u32 pmds : 4;
+		u32 pmeens : 1;
+		u32 reserved_4_7 : 4;
+		u32 nsr : 1;
+		u32 reserved_2_2 : 1;
+		u32 ps : 2;
+	} s;
+	struct cvmx_pciercx_cfg017_s cn52xx;
+	struct cvmx_pciercx_cfg017_s cn52xxp1;
+	struct cvmx_pciercx_cfg017_s cn56xx;
+	struct cvmx_pciercx_cfg017_s cn56xxp1;
+	struct cvmx_pciercx_cfg017_s cn61xx;
+	struct cvmx_pciercx_cfg017_s cn63xx;
+	struct cvmx_pciercx_cfg017_s cn63xxp1;
+	struct cvmx_pciercx_cfg017_s cn66xx;
+	struct cvmx_pciercx_cfg017_s cn68xx;
+	struct cvmx_pciercx_cfg017_s cn68xxp1;
+	struct cvmx_pciercx_cfg017_s cn70xx;
+	struct cvmx_pciercx_cfg017_s cn70xxp1;
+	struct cvmx_pciercx_cfg017_s cn73xx;
+	struct cvmx_pciercx_cfg017_s cn78xx;
+	struct cvmx_pciercx_cfg017_s cn78xxp1;
+	struct cvmx_pciercx_cfg017_s cnf71xx;
+	struct cvmx_pciercx_cfg017_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg017 cvmx_pciercx_cfg017_t;
+
+/**
+ * cvmx_pcierc#_cfg020
+ *
+ * This register contains the twenty-first 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg020 {
+	u32 u32;
+	struct cvmx_pciercx_cfg020_s {
+		u32 reserved_24_31 : 8;
+		u32 m64 : 1;
+		u32 mme : 3;
+		u32 mmc : 3;
+		u32 msien : 1;
+		u32 ncp : 8;
+		u32 msicid : 8;
+	} s;
+	struct cvmx_pciercx_cfg020_s cn52xx;
+	struct cvmx_pciercx_cfg020_s cn52xxp1;
+	struct cvmx_pciercx_cfg020_s cn56xx;
+	struct cvmx_pciercx_cfg020_s cn56xxp1;
+	struct cvmx_pciercx_cfg020_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 pvm : 1;
+		u32 m64 : 1;
+		u32 mme : 3;
+		u32 mmc : 3;
+		u32 msien : 1;
+		u32 ncp : 8;
+		u32 msicid : 8;
+	} cn61xx;
+	struct cvmx_pciercx_cfg020_s cn63xx;
+	struct cvmx_pciercx_cfg020_s cn63xxp1;
+	struct cvmx_pciercx_cfg020_s cn66xx;
+	struct cvmx_pciercx_cfg020_s cn68xx;
+	struct cvmx_pciercx_cfg020_s cn68xxp1;
+	struct cvmx_pciercx_cfg020_cn61xx cn70xx;
+	struct cvmx_pciercx_cfg020_cn61xx cn70xxp1;
+	struct cvmx_pciercx_cfg020_cn73xx {
+		u32 reserved_25_31 : 7;
+		u32 pvms : 1;
+		u32 m64 : 1;
+		u32 mme : 3;
+		u32 mmc : 3;
+		u32 msien : 1;
+		u32 ncp : 8;
+		u32 msicid : 8;
+	} cn73xx;
+	struct cvmx_pciercx_cfg020_cn73xx cn78xx;
+	struct cvmx_pciercx_cfg020_cn73xx cn78xxp1;
+	struct cvmx_pciercx_cfg020_cn61xx cnf71xx;
+	struct cvmx_pciercx_cfg020_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg020 cvmx_pciercx_cfg020_t;
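
CFG020 is the MSI capability's message-control dword, so mmc (multiple
message capable) and mme (multiple message enable) use the standard
power-of-two encoding, i.e. 1 << n messages. An illustrative stand-alone
decode:

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the MSI message-control fields (CFG020 layout). */
	static void decode_msi_ctl(uint32_t cfg020)
	{
		unsigned int msien = (cfg020 >> 16) & 0x1;
		unsigned int mmc   = (cfg020 >> 17) & 0x7;
		unsigned int mme   = (cfg020 >> 20) & 0x7;

		printf("MSI %s: %u vectors requested, %u enabled\n",
		       msien ? "on" : "off", 1u << mmc, 1u << mme);
	}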
+
+/**
+ * cvmx_pcierc#_cfg021
+ *
+ * This register contains the twenty-second 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg021 {
+	u32 u32;
+	struct cvmx_pciercx_cfg021_s {
+		u32 lmsi : 30;
+		u32 reserved_0_1 : 2;
+	} s;
+	struct cvmx_pciercx_cfg021_s cn52xx;
+	struct cvmx_pciercx_cfg021_s cn52xxp1;
+	struct cvmx_pciercx_cfg021_s cn56xx;
+	struct cvmx_pciercx_cfg021_s cn56xxp1;
+	struct cvmx_pciercx_cfg021_s cn61xx;
+	struct cvmx_pciercx_cfg021_s cn63xx;
+	struct cvmx_pciercx_cfg021_s cn63xxp1;
+	struct cvmx_pciercx_cfg021_s cn66xx;
+	struct cvmx_pciercx_cfg021_s cn68xx;
+	struct cvmx_pciercx_cfg021_s cn68xxp1;
+	struct cvmx_pciercx_cfg021_s cn70xx;
+	struct cvmx_pciercx_cfg021_s cn70xxp1;
+	struct cvmx_pciercx_cfg021_s cn73xx;
+	struct cvmx_pciercx_cfg021_s cn78xx;
+	struct cvmx_pciercx_cfg021_s cn78xxp1;
+	struct cvmx_pciercx_cfg021_s cnf71xx;
+	struct cvmx_pciercx_cfg021_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg021 cvmx_pciercx_cfg021_t;
+
+/**
+ * cvmx_pcierc#_cfg022
+ *
+ * This register contains the twenty-third 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg022 {
+	u32 u32;
+	struct cvmx_pciercx_cfg022_s {
+		u32 umsi : 32;
+	} s;
+	struct cvmx_pciercx_cfg022_s cn52xx;
+	struct cvmx_pciercx_cfg022_s cn52xxp1;
+	struct cvmx_pciercx_cfg022_s cn56xx;
+	struct cvmx_pciercx_cfg022_s cn56xxp1;
+	struct cvmx_pciercx_cfg022_s cn61xx;
+	struct cvmx_pciercx_cfg022_s cn63xx;
+	struct cvmx_pciercx_cfg022_s cn63xxp1;
+	struct cvmx_pciercx_cfg022_s cn66xx;
+	struct cvmx_pciercx_cfg022_s cn68xx;
+	struct cvmx_pciercx_cfg022_s cn68xxp1;
+	struct cvmx_pciercx_cfg022_s cn70xx;
+	struct cvmx_pciercx_cfg022_s cn70xxp1;
+	struct cvmx_pciercx_cfg022_s cn73xx;
+	struct cvmx_pciercx_cfg022_s cn78xx;
+	struct cvmx_pciercx_cfg022_s cn78xxp1;
+	struct cvmx_pciercx_cfg022_s cnf71xx;
+	struct cvmx_pciercx_cfg022_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg022 cvmx_pciercx_cfg022_t;
+
+/**
+ * cvmx_pcierc#_cfg023
+ *
+ * This register contains the twenty-fourth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg023 {
+	u32 u32;
+	struct cvmx_pciercx_cfg023_s {
+		u32 reserved_16_31 : 16;
+		u32 msimd : 16;
+	} s;
+	struct cvmx_pciercx_cfg023_s cn52xx;
+	struct cvmx_pciercx_cfg023_s cn52xxp1;
+	struct cvmx_pciercx_cfg023_s cn56xx;
+	struct cvmx_pciercx_cfg023_s cn56xxp1;
+	struct cvmx_pciercx_cfg023_s cn61xx;
+	struct cvmx_pciercx_cfg023_s cn63xx;
+	struct cvmx_pciercx_cfg023_s cn63xxp1;
+	struct cvmx_pciercx_cfg023_s cn66xx;
+	struct cvmx_pciercx_cfg023_s cn68xx;
+	struct cvmx_pciercx_cfg023_s cn68xxp1;
+	struct cvmx_pciercx_cfg023_s cn70xx;
+	struct cvmx_pciercx_cfg023_s cn70xxp1;
+	struct cvmx_pciercx_cfg023_s cn73xx;
+	struct cvmx_pciercx_cfg023_s cn78xx;
+	struct cvmx_pciercx_cfg023_s cn78xxp1;
+	struct cvmx_pciercx_cfg023_s cnf71xx;
+	struct cvmx_pciercx_cfg023_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg023 cvmx_pciercx_cfg023_t;
+
+/**
+ * cvmx_pcierc#_cfg028
+ *
+ * This register contains the twenty-ninth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg028 {
+	u32 u32;
+	struct cvmx_pciercx_cfg028_s {
+		u32 reserved_30_31 : 2;
+		u32 imn : 5;
+		u32 si : 1;
+		u32 dpt : 4;
+		u32 pciecv : 4;
+		u32 ncp : 8;
+		u32 pcieid : 8;
+	} s;
+	struct cvmx_pciercx_cfg028_s cn52xx;
+	struct cvmx_pciercx_cfg028_s cn52xxp1;
+	struct cvmx_pciercx_cfg028_s cn56xx;
+	struct cvmx_pciercx_cfg028_s cn56xxp1;
+	struct cvmx_pciercx_cfg028_s cn61xx;
+	struct cvmx_pciercx_cfg028_s cn63xx;
+	struct cvmx_pciercx_cfg028_s cn63xxp1;
+	struct cvmx_pciercx_cfg028_s cn66xx;
+	struct cvmx_pciercx_cfg028_s cn68xx;
+	struct cvmx_pciercx_cfg028_s cn68xxp1;
+	struct cvmx_pciercx_cfg028_s cn70xx;
+	struct cvmx_pciercx_cfg028_s cn70xxp1;
+	struct cvmx_pciercx_cfg028_s cn73xx;
+	struct cvmx_pciercx_cfg028_s cn78xx;
+	struct cvmx_pciercx_cfg028_s cn78xxp1;
+	struct cvmx_pciercx_cfg028_s cnf71xx;
+	struct cvmx_pciercx_cfg028_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg028 cvmx_pciercx_cfg028_t;
+
+/**
+ * cvmx_pcierc#_cfg029
+ *
+ * This register contains the thirtieth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg029 {
+	u32 u32;
+	struct cvmx_pciercx_cfg029_s {
+		u32 reserved_29_31 : 3;
+		u32 flr_cap : 1;
+		u32 cspls : 2;
+		u32 csplv : 8;
+		u32 reserved_16_17 : 2;
+		u32 rber : 1;
+		u32 reserved_12_14 : 3;
+		u32 el1al : 3;
+		u32 el0al : 3;
+		u32 etfs : 1;
+		u32 pfs : 2;
+		u32 mpss : 3;
+	} s;
+	struct cvmx_pciercx_cfg029_cn52xx {
+		u32 reserved_28_31 : 4;
+		u32 cspls : 2;
+		u32 csplv : 8;
+		u32 reserved_16_17 : 2;
+		u32 rber : 1;
+		u32 reserved_12_14 : 3;
+		u32 el1al : 3;
+		u32 el0al : 3;
+		u32 etfs : 1;
+		u32 pfs : 2;
+		u32 mpss : 3;
+	} cn52xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg029_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg029_cn52xx cn61xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg029_cn52xx cn66xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn68xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg029_cn52xx cn70xx;
+	struct cvmx_pciercx_cfg029_cn52xx cn70xxp1;
+	struct cvmx_pciercx_cfg029_s cn73xx;
+	struct cvmx_pciercx_cfg029_s cn78xx;
+	struct cvmx_pciercx_cfg029_s cn78xxp1;
+	struct cvmx_pciercx_cfg029_cn52xx cnf71xx;
+	struct cvmx_pciercx_cfg029_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg029 cvmx_pciercx_cfg029_t;
+
+/**
+ * cvmx_pcierc#_cfg030
+ *
+ * This register contains the thirty-first 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg030 {
+	u32 u32;
+	struct cvmx_pciercx_cfg030_s {
+		u32 reserved_22_31 : 10;
+		u32 tp : 1;
+		u32 ap_d : 1;
+		u32 ur_d : 1;
+		u32 fe_d : 1;
+		u32 nfe_d : 1;
+		u32 ce_d : 1;
+		u32 reserved_15_15 : 1;
+		u32 mrrs : 3;
+		u32 ns_en : 1;
+		u32 ap_en : 1;
+		u32 pf_en : 1;
+		u32 etf_en : 1;
+		u32 mps : 3;
+		u32 ro_en : 1;
+		u32 ur_en : 1;
+		u32 fe_en : 1;
+		u32 nfe_en : 1;
+		u32 ce_en : 1;
+	} s;
+	struct cvmx_pciercx_cfg030_s cn52xx;
+	struct cvmx_pciercx_cfg030_s cn52xxp1;
+	struct cvmx_pciercx_cfg030_s cn56xx;
+	struct cvmx_pciercx_cfg030_s cn56xxp1;
+	struct cvmx_pciercx_cfg030_s cn61xx;
+	struct cvmx_pciercx_cfg030_s cn63xx;
+	struct cvmx_pciercx_cfg030_s cn63xxp1;
+	struct cvmx_pciercx_cfg030_s cn66xx;
+	struct cvmx_pciercx_cfg030_s cn68xx;
+	struct cvmx_pciercx_cfg030_s cn68xxp1;
+	struct cvmx_pciercx_cfg030_s cn70xx;
+	struct cvmx_pciercx_cfg030_s cn70xxp1;
+	struct cvmx_pciercx_cfg030_s cn73xx;
+	struct cvmx_pciercx_cfg030_s cn78xx;
+	struct cvmx_pciercx_cfg030_s cn78xxp1;
+	struct cvmx_pciercx_cfg030_s cnf71xx;
+	struct cvmx_pciercx_cfg030_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg030 cvmx_pciercx_cfg030_t;
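
CFG030 maps onto the PCIe device control/status register, so the 3-bit
mps (max payload size) and mrrs (max read request size) fields use the
standard 128 << n byte encoding. An illustrative stand-alone decode:

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the device-control size fields (CFG030 layout). */
	static void decode_devctl(uint32_t cfg030)
	{
		unsigned int mps  = (cfg030 >> 5) & 0x7;  /* bits [7:5]   */
		unsigned int mrrs = (cfg030 >> 12) & 0x7; /* bits [14:12] */

		printf("MPS %u bytes, MRRS %u bytes\n",
		       128u << mps, 128u << mrrs);
	}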
+
+/**
+ * cvmx_pcierc#_cfg031
+ *
+ * This register contains the thirty-second 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg031 {
+	u32 u32;
+	struct cvmx_pciercx_cfg031_s {
+		u32 pnum : 8;
+		u32 reserved_23_23 : 1;
+		u32 aspm : 1;
+		u32 lbnc : 1;
+		u32 dllarc : 1;
+		u32 sderc : 1;
+		u32 cpm : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 aslpms : 2;
+		u32 mlw : 6;
+		u32 mls : 4;
+	} s;
+	struct cvmx_pciercx_cfg031_cn52xx {
+		u32 pnum : 8;
+		u32 reserved_22_23 : 2;
+		u32 lbnc : 1;
+		u32 dllarc : 1;
+		u32 sderc : 1;
+		u32 cpm : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 aslpms : 2;
+		u32 mlw : 6;
+		u32 mls : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg031_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg031_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg031_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg031_s cn61xx;
+	struct cvmx_pciercx_cfg031_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg031_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg031_s cn66xx;
+	struct cvmx_pciercx_cfg031_s cn68xx;
+	struct cvmx_pciercx_cfg031_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg031_s cn70xx;
+	struct cvmx_pciercx_cfg031_s cn70xxp1;
+	struct cvmx_pciercx_cfg031_s cn73xx;
+	struct cvmx_pciercx_cfg031_s cn78xx;
+	struct cvmx_pciercx_cfg031_s cn78xxp1;
+	struct cvmx_pciercx_cfg031_s cnf71xx;
+	struct cvmx_pciercx_cfg031_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg031 cvmx_pciercx_cfg031_t;
+
+/**
+ * cvmx_pcierc#_cfg032
+ *
+ * This register contains the thirty-third 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg032 {
+	u32 u32;
+	struct cvmx_pciercx_cfg032_s {
+		u32 lab : 1;
+		u32 lbm : 1;
+		u32 dlla : 1;
+		u32 scc : 1;
+		u32 lt : 1;
+		u32 reserved_26_26 : 1;
+		u32 nlw : 6;
+		u32 ls : 4;
+		u32 reserved_12_15 : 4;
+		u32 lab_int_enb : 1;
+		u32 lbm_int_enb : 1;
+		u32 hawd : 1;
+		u32 ecpm : 1;
+		u32 es : 1;
+		u32 ccc : 1;
+		u32 rl : 1;
+		u32 ld : 1;
+		u32 rcb : 1;
+		u32 reserved_2_2 : 1;
+		u32 aslpc : 2;
+	} s;
+	struct cvmx_pciercx_cfg032_s cn52xx;
+	struct cvmx_pciercx_cfg032_s cn52xxp1;
+	struct cvmx_pciercx_cfg032_s cn56xx;
+	struct cvmx_pciercx_cfg032_s cn56xxp1;
+	struct cvmx_pciercx_cfg032_s cn61xx;
+	struct cvmx_pciercx_cfg032_s cn63xx;
+	struct cvmx_pciercx_cfg032_s cn63xxp1;
+	struct cvmx_pciercx_cfg032_s cn66xx;
+	struct cvmx_pciercx_cfg032_s cn68xx;
+	struct cvmx_pciercx_cfg032_s cn68xxp1;
+	struct cvmx_pciercx_cfg032_s cn70xx;
+	struct cvmx_pciercx_cfg032_s cn70xxp1;
+	struct cvmx_pciercx_cfg032_s cn73xx;
+	struct cvmx_pciercx_cfg032_s cn78xx;
+	struct cvmx_pciercx_cfg032_s cn78xxp1;
+	struct cvmx_pciercx_cfg032_s cnf71xx;
+	struct cvmx_pciercx_cfg032_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg032 cvmx_pciercx_cfg032_t;
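
CFG032 is the PCIe link control/status register: ls (bits [19:16]) is
the negotiated link speed and nlw (bits [25:20]) the negotiated width,
which is what the PCIe init code polls after releasing the link from
reset. An illustrative stand-alone decode:

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the negotiated link parameters (CFG032 layout). */
	static void decode_link_status(uint32_t cfg032)
	{
		unsigned int ls  = (cfg032 >> 16) & 0xf;
		unsigned int nlw = (cfg032 >> 20) & 0x3f;

		printf("link up: gen%u x%u\n", ls, nlw);
	}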
+
+/**
+ * cvmx_pcierc#_cfg033
+ *
+ * This register contains the thirty-fourth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg033 {
+	u32 u32;
+	struct cvmx_pciercx_cfg033_s {
+		u32 ps_num : 13;
+		u32 nccs : 1;
+		u32 emip : 1;
+		u32 sp_ls : 2;
+		u32 sp_lv : 8;
+		u32 hp_c : 1;
+		u32 hp_s : 1;
+		u32 pip : 1;
+		u32 aip : 1;
+		u32 mrlsp : 1;
+		u32 pcp : 1;
+		u32 abp : 1;
+	} s;
+	struct cvmx_pciercx_cfg033_s cn52xx;
+	struct cvmx_pciercx_cfg033_s cn52xxp1;
+	struct cvmx_pciercx_cfg033_s cn56xx;
+	struct cvmx_pciercx_cfg033_s cn56xxp1;
+	struct cvmx_pciercx_cfg033_s cn61xx;
+	struct cvmx_pciercx_cfg033_s cn63xx;
+	struct cvmx_pciercx_cfg033_s cn63xxp1;
+	struct cvmx_pciercx_cfg033_s cn66xx;
+	struct cvmx_pciercx_cfg033_s cn68xx;
+	struct cvmx_pciercx_cfg033_s cn68xxp1;
+	struct cvmx_pciercx_cfg033_s cn70xx;
+	struct cvmx_pciercx_cfg033_s cn70xxp1;
+	struct cvmx_pciercx_cfg033_s cn73xx;
+	struct cvmx_pciercx_cfg033_s cn78xx;
+	struct cvmx_pciercx_cfg033_s cn78xxp1;
+	struct cvmx_pciercx_cfg033_s cnf71xx;
+	struct cvmx_pciercx_cfg033_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg033 cvmx_pciercx_cfg033_t;
+
+/**
+ * cvmx_pcierc#_cfg034
+ *
+ * This register contains the thirty-fifth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg034 {
+	u32 u32;
+	struct cvmx_pciercx_cfg034_s {
+		u32 reserved_25_31 : 7;
+		u32 dlls_c : 1;
+		u32 emis : 1;
+		u32 pds : 1;
+		u32 mrlss : 1;
+		u32 ccint_d : 1;
+		u32 pd_c : 1;
+		u32 mrls_c : 1;
+		u32 pf_d : 1;
+		u32 abp_d : 1;
+		u32 reserved_13_15 : 3;
+		u32 dlls_en : 1;
+		u32 emic : 1;
+		u32 pcc : 1;
+		u32 pic : 2;
+		u32 aic : 2;
+		u32 hpint_en : 1;
+		u32 ccint_en : 1;
+		u32 pd_en : 1;
+		u32 mrls_en : 1;
+		u32 pf_en : 1;
+		u32 abp_en : 1;
+	} s;
+	struct cvmx_pciercx_cfg034_s cn52xx;
+	struct cvmx_pciercx_cfg034_s cn52xxp1;
+	struct cvmx_pciercx_cfg034_s cn56xx;
+	struct cvmx_pciercx_cfg034_s cn56xxp1;
+	struct cvmx_pciercx_cfg034_s cn61xx;
+	struct cvmx_pciercx_cfg034_s cn63xx;
+	struct cvmx_pciercx_cfg034_s cn63xxp1;
+	struct cvmx_pciercx_cfg034_s cn66xx;
+	struct cvmx_pciercx_cfg034_s cn68xx;
+	struct cvmx_pciercx_cfg034_s cn68xxp1;
+	struct cvmx_pciercx_cfg034_s cn70xx;
+	struct cvmx_pciercx_cfg034_s cn70xxp1;
+	struct cvmx_pciercx_cfg034_s cn73xx;
+	struct cvmx_pciercx_cfg034_s cn78xx;
+	struct cvmx_pciercx_cfg034_s cn78xxp1;
+	struct cvmx_pciercx_cfg034_s cnf71xx;
+	struct cvmx_pciercx_cfg034_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg034 cvmx_pciercx_cfg034_t;
+
+/**
+ * cvmx_pcierc#_cfg035
+ *
+ * This register contains the thirty-sixth 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg035 {
+	u32 u32;
+	struct cvmx_pciercx_cfg035_s {
+		u32 reserved_17_31 : 15;
+		u32 crssv : 1;
+		u32 reserved_5_15 : 11;
+		u32 crssve : 1;
+		u32 pmeie : 1;
+		u32 sefee : 1;
+		u32 senfee : 1;
+		u32 secee : 1;
+	} s;
+	struct cvmx_pciercx_cfg035_s cn52xx;
+	struct cvmx_pciercx_cfg035_s cn52xxp1;
+	struct cvmx_pciercx_cfg035_s cn56xx;
+	struct cvmx_pciercx_cfg035_s cn56xxp1;
+	struct cvmx_pciercx_cfg035_s cn61xx;
+	struct cvmx_pciercx_cfg035_s cn63xx;
+	struct cvmx_pciercx_cfg035_s cn63xxp1;
+	struct cvmx_pciercx_cfg035_s cn66xx;
+	struct cvmx_pciercx_cfg035_s cn68xx;
+	struct cvmx_pciercx_cfg035_s cn68xxp1;
+	struct cvmx_pciercx_cfg035_s cn70xx;
+	struct cvmx_pciercx_cfg035_s cn70xxp1;
+	struct cvmx_pciercx_cfg035_s cn73xx;
+	struct cvmx_pciercx_cfg035_s cn78xx;
+	struct cvmx_pciercx_cfg035_s cn78xxp1;
+	struct cvmx_pciercx_cfg035_s cnf71xx;
+	struct cvmx_pciercx_cfg035_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg035 cvmx_pciercx_cfg035_t;
+
+/**
+ * cvmx_pcierc#_cfg036
+ *
+ * This register contains the thirty-seventh 32 bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg036 {
+	u32 u32;
+	struct cvmx_pciercx_cfg036_s {
+		u32 reserved_18_31 : 14;
+		u32 pme_pend : 1;
+		u32 pme_stat : 1;
+		u32 pme_rid : 16;
+	} s;
+	struct cvmx_pciercx_cfg036_s cn52xx;
+	struct cvmx_pciercx_cfg036_s cn52xxp1;
+	struct cvmx_pciercx_cfg036_s cn56xx;
+	struct cvmx_pciercx_cfg036_s cn56xxp1;
+	struct cvmx_pciercx_cfg036_s cn61xx;
+	struct cvmx_pciercx_cfg036_s cn63xx;
+	struct cvmx_pciercx_cfg036_s cn63xxp1;
+	struct cvmx_pciercx_cfg036_s cn66xx;
+	struct cvmx_pciercx_cfg036_s cn68xx;
+	struct cvmx_pciercx_cfg036_s cn68xxp1;
+	struct cvmx_pciercx_cfg036_s cn70xx;
+	struct cvmx_pciercx_cfg036_s cn70xxp1;
+	struct cvmx_pciercx_cfg036_s cn73xx;
+	struct cvmx_pciercx_cfg036_s cn78xx;
+	struct cvmx_pciercx_cfg036_s cn78xxp1;
+	struct cvmx_pciercx_cfg036_s cnf71xx;
+	struct cvmx_pciercx_cfg036_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg036 cvmx_pciercx_cfg036_t;
+
+/**
+ * cvmx_pcierc#_cfg037
+ *
+ * This register contains the thirty-eighth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg037 {
+	u32 u32;
+	struct cvmx_pciercx_cfg037_s {
+		u32 reserved_24_31 : 8;
+		u32 meetp : 2;
+		u32 eetps : 1;
+		u32 effs : 1;
+		u32 obffs : 2;
+		u32 reserved_12_17 : 6;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 reserved_5_5 : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} s;
+	struct cvmx_pciercx_cfg037_cn52xx {
+		u32 reserved_5_31 : 27;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg037_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg037_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg037_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg037_cn61xx {
+		u32 reserved_14_31 : 18;
+		u32 tph : 2;
+		u32 reserved_11_11 : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari_fw : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg037_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg037_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg037_cn66xx {
+		u32 reserved_14_31 : 18;
+		u32 tph : 2;
+		u32 reserved_11_11 : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn66xx;
+	struct cvmx_pciercx_cfg037_cn66xx cn68xx;
+	struct cvmx_pciercx_cfg037_cn66xx cn68xxp1;
+	struct cvmx_pciercx_cfg037_cn61xx cn70xx;
+	struct cvmx_pciercx_cfg037_cn61xx cn70xxp1;
+	struct cvmx_pciercx_cfg037_cn73xx {
+		u32 reserved_24_31 : 8;
+		u32 meetp : 2;
+		u32 eetps : 1;
+		u32 effs : 1;
+		u32 obffs : 2;
+		u32 reserved_14_17 : 4;
+		u32 tph : 2;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari_fw : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cn73xx;
+	struct cvmx_pciercx_cfg037_cn73xx cn78xx;
+	struct cvmx_pciercx_cfg037_cn73xx cn78xxp1;
+	struct cvmx_pciercx_cfg037_cnf71xx {
+		u32 reserved_20_31 : 12;
+		u32 obffs : 2;
+		u32 reserved_14_17 : 4;
+		u32 tphs : 2;
+		u32 ltrs : 1;
+		u32 noroprpr : 1;
+		u32 atom128s : 1;
+		u32 atom64s : 1;
+		u32 atom32s : 1;
+		u32 atom_ops : 1;
+		u32 ari_fw : 1;
+		u32 ctds : 1;
+		u32 ctrs : 4;
+	} cnf71xx;
+	struct cvmx_pciercx_cfg037_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg037 cvmx_pciercx_cfg037_t;
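+
+/*
+ * Note on the chip-specific members (illustrative sketch): the common
+ * .s view describes the superset layout, while the cn52xx/cn61xx/
+ * cn66xx/cn73xx/cnf71xx views match the fields actually implemented
+ * on those parts. Code supporting several models selects the matching
+ * view at run time; OCTEON_IS_MODEL() is the usual model test macro
+ * and is assumed to be available here, as is the byte offset 0x94
+ * (cfg037 * 4):
+ *
+ *	cvmx_pciercx_cfg037_t dcap2;
+ *
+ *	dcap2.u32 = cvmx_pcie_cfgx_read(pcie_port, 0x94);
+ *	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+ *		ltr_supported = dcap2.cn73xx.ltrs;
+ *	else
+ *		timeout_ranges = dcap2.cn61xx.ctrs;
+ */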
+
+/**
+ * cvmx_pcierc#_cfg038
+ *
+ * This register contains the thirty-ninth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg038 {
+	u32 u32;
+	struct cvmx_pciercx_cfg038_s {
+		u32 reserved_16_31 : 16;
+		u32 eetpb : 1;
+		u32 obffe : 2;
+		u32 reserved_11_12 : 2;
+		u32 ltre : 1;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} s;
+	struct cvmx_pciercx_cfg038_cn52xx {
+		u32 reserved_5_31 : 27;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg038_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg038_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg038_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg038_cn61xx {
+		u32 reserved_10_31 : 22;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg038_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg038_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg038_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg038_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg038_cn61xx cn68xxp1;
+	struct cvmx_pciercx_cfg038_cn61xx cn70xx;
+	struct cvmx_pciercx_cfg038_cn61xx cn70xxp1;
+	struct cvmx_pciercx_cfg038_s cn73xx;
+	struct cvmx_pciercx_cfg038_s cn78xx;
+	struct cvmx_pciercx_cfg038_s cn78xxp1;
+	struct cvmx_pciercx_cfg038_cnf71xx {
+		u32 reserved_15_31 : 17;
+		u32 obffe : 2;
+		u32 reserved_11_12 : 2;
+		u32 ltre : 1;
+		u32 id0_cp : 1;
+		u32 id0_rq : 1;
+		u32 atom_op_eb : 1;
+		u32 atom_op : 1;
+		u32 ari : 1;
+		u32 ctd : 1;
+		u32 ctv : 4;
+	} cnf71xx;
+	struct cvmx_pciercx_cfg038_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg038 cvmx_pciercx_cfg038_t;
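+
+/*
+ * Read-modify-write sketch (illustrative only): control words such as
+ * this one are updated by reading the current value, changing a field
+ * through the bitfield view (here ctv, the completion timeout value,
+ * with ctd left at 0 so the timeout stays enabled) and writing the
+ * whole word back. The offset 0x98 (cfg038 * 4) and the config-space
+ * accessors are assumptions for illustration:
+ *
+ *	cvmx_pciercx_cfg038_t dctl2;
+ *
+ *	dctl2.u32 = cvmx_pcie_cfgx_read(pcie_port, 0x98);
+ *	dctl2.s.ctv = 0x6;
+ *	cvmx_pcie_cfgx_write(pcie_port, 0x98, dctl2.u32);
+ */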
+
+/**
+ * cvmx_pcierc#_cfg039
+ *
+ * This register contains the fortieth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg039 {
+	u32 u32;
+	struct cvmx_pciercx_cfg039_s {
+		u32 reserved_9_31 : 23;
+		u32 cls : 1;
+		u32 slsv : 7;
+		u32 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pciercx_cfg039_cn52xx {
+		u32 reserved_0_31 : 32;
+	} cn52xx;
+	struct cvmx_pciercx_cfg039_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg039_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg039_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg039_s cn61xx;
+	struct cvmx_pciercx_cfg039_s cn63xx;
+	struct cvmx_pciercx_cfg039_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg039_s cn66xx;
+	struct cvmx_pciercx_cfg039_s cn68xx;
+	struct cvmx_pciercx_cfg039_s cn68xxp1;
+	struct cvmx_pciercx_cfg039_s cn70xx;
+	struct cvmx_pciercx_cfg039_s cn70xxp1;
+	struct cvmx_pciercx_cfg039_s cn73xx;
+	struct cvmx_pciercx_cfg039_s cn78xx;
+	struct cvmx_pciercx_cfg039_s cn78xxp1;
+	struct cvmx_pciercx_cfg039_s cnf71xx;
+	struct cvmx_pciercx_cfg039_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg039 cvmx_pciercx_cfg039_t;
+
+/**
+ * cvmx_pcierc#_cfg040
+ *
+ * This register contains the forty-first 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg040 {
+	u32 u32;
+	struct cvmx_pciercx_cfg040_s {
+		u32 reserved_22_31 : 10;
+		u32 ler : 1;
+		u32 ep3s : 1;
+		u32 ep2s : 1;
+		u32 ep1s : 1;
+		u32 eqc : 1;
+		u32 cdl : 1;
+		u32 cde : 4;
+		u32 csos : 1;
+		u32 emc : 1;
+		u32 tm : 3;
+		u32 sde : 1;
+		u32 hasd : 1;
+		u32 ec : 1;
+		u32 tls : 4;
+	} s;
+	struct cvmx_pciercx_cfg040_cn52xx {
+		u32 reserved_0_31 : 32;
+	} cn52xx;
+	struct cvmx_pciercx_cfg040_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg040_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg040_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg040_cn61xx {
+		u32 reserved_17_31 : 15;
+		u32 cdl : 1;
+		u32 reserved_13_15 : 3;
+		u32 cde : 1;
+		u32 csos : 1;
+		u32 emc : 1;
+		u32 tm : 3;
+		u32 sde : 1;
+		u32 hasd : 1;
+		u32 ec : 1;
+		u32 tls : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg040_cn61xx cn63xx;
+	struct cvmx_pciercx_cfg040_cn61xx cn63xxp1;
+	struct cvmx_pciercx_cfg040_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg040_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg040_cn61xx cn68xxp1;
+	struct cvmx_pciercx_cfg040_cn61xx cn70xx;
+	struct cvmx_pciercx_cfg040_cn61xx cn70xxp1;
+	struct cvmx_pciercx_cfg040_s cn73xx;
+	struct cvmx_pciercx_cfg040_s cn78xx;
+	struct cvmx_pciercx_cfg040_s cn78xxp1;
+	struct cvmx_pciercx_cfg040_cn61xx cnf71xx;
+	struct cvmx_pciercx_cfg040_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg040 cvmx_pciercx_cfg040_t;
+
+/**
+ * cvmx_pcierc#_cfg041
+ *
+ * This register contains the forty-second 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg041 {
+	u32 u32;
+	struct cvmx_pciercx_cfg041_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pciercx_cfg041_s cn52xx;
+	struct cvmx_pciercx_cfg041_s cn52xxp1;
+	struct cvmx_pciercx_cfg041_s cn56xx;
+	struct cvmx_pciercx_cfg041_s cn56xxp1;
+	struct cvmx_pciercx_cfg041_s cn61xx;
+	struct cvmx_pciercx_cfg041_s cn63xx;
+	struct cvmx_pciercx_cfg041_s cn63xxp1;
+	struct cvmx_pciercx_cfg041_s cn66xx;
+	struct cvmx_pciercx_cfg041_s cn68xx;
+	struct cvmx_pciercx_cfg041_s cn68xxp1;
+	struct cvmx_pciercx_cfg041_s cn70xx;
+	struct cvmx_pciercx_cfg041_s cn70xxp1;
+	struct cvmx_pciercx_cfg041_s cn73xx;
+	struct cvmx_pciercx_cfg041_s cn78xx;
+	struct cvmx_pciercx_cfg041_s cn78xxp1;
+	struct cvmx_pciercx_cfg041_s cnf71xx;
+	struct cvmx_pciercx_cfg041_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg041 cvmx_pciercx_cfg041_t;
+
+/**
+ * cvmx_pcierc#_cfg042
+ *
+ * This register contains the forty-third 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg042 {
+	u32 u32;
+	struct cvmx_pciercx_cfg042_s {
+		u32 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pciercx_cfg042_s cn52xx;
+	struct cvmx_pciercx_cfg042_s cn52xxp1;
+	struct cvmx_pciercx_cfg042_s cn56xx;
+	struct cvmx_pciercx_cfg042_s cn56xxp1;
+	struct cvmx_pciercx_cfg042_s cn61xx;
+	struct cvmx_pciercx_cfg042_s cn63xx;
+	struct cvmx_pciercx_cfg042_s cn63xxp1;
+	struct cvmx_pciercx_cfg042_s cn66xx;
+	struct cvmx_pciercx_cfg042_s cn68xx;
+	struct cvmx_pciercx_cfg042_s cn68xxp1;
+	struct cvmx_pciercx_cfg042_s cn70xx;
+	struct cvmx_pciercx_cfg042_s cn70xxp1;
+	struct cvmx_pciercx_cfg042_s cn73xx;
+	struct cvmx_pciercx_cfg042_s cn78xx;
+	struct cvmx_pciercx_cfg042_s cn78xxp1;
+	struct cvmx_pciercx_cfg042_s cnf71xx;
+	struct cvmx_pciercx_cfg042_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg042 cvmx_pciercx_cfg042_t;
+
+/**
+ * cvmx_pcierc#_cfg044
+ *
+ * This register contains the forty-fifth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg044 {
+	u32 u32;
+	struct cvmx_pciercx_cfg044_s {
+		u32 msixen : 1;
+		u32 funm : 1;
+		u32 reserved_27_29 : 3;
+		u32 msixts : 11;
+		u32 ncp : 8;
+		u32 msixcid : 8;
+	} s;
+	struct cvmx_pciercx_cfg044_s cn73xx;
+	struct cvmx_pciercx_cfg044_s cn78xx;
+	struct cvmx_pciercx_cfg044_s cn78xxp1;
+	struct cvmx_pciercx_cfg044_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg044 cvmx_pciercx_cfg044_t;
+
+/**
+ * cvmx_pcierc#_cfg045
+ *
+ * This register contains the forty-sixth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg045 {
+	u32 u32;
+	struct cvmx_pciercx_cfg045_s {
+		u32 msixtoffs : 29;
+		u32 msixtbir : 3;
+	} s;
+	struct cvmx_pciercx_cfg045_s cn73xx;
+	struct cvmx_pciercx_cfg045_s cn78xx;
+	struct cvmx_pciercx_cfg045_s cn78xxp1;
+	struct cvmx_pciercx_cfg045_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg045 cvmx_pciercx_cfg045_t;
+
+/**
+ * cvmx_pcierc#_cfg046
+ *
+ * This register contains the forty-seventh 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg046 {
+	u32 u32;
+	struct cvmx_pciercx_cfg046_s {
+		u32 msixpoffs : 29;
+		u32 msixpbir : 3;
+	} s;
+	struct cvmx_pciercx_cfg046_s cn73xx;
+	struct cvmx_pciercx_cfg046_s cn78xx;
+	struct cvmx_pciercx_cfg046_s cn78xxp1;
+	struct cvmx_pciercx_cfg046_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg046 cvmx_pciercx_cfg046_t;
+
+/**
+ * cvmx_pcierc#_cfg064
+ *
+ * This register contains the sixty-fifth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg064 {
+	u32 u32;
+	struct cvmx_pciercx_cfg064_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pciercx_cfg064_s cn52xx;
+	struct cvmx_pciercx_cfg064_s cn52xxp1;
+	struct cvmx_pciercx_cfg064_s cn56xx;
+	struct cvmx_pciercx_cfg064_s cn56xxp1;
+	struct cvmx_pciercx_cfg064_s cn61xx;
+	struct cvmx_pciercx_cfg064_s cn63xx;
+	struct cvmx_pciercx_cfg064_s cn63xxp1;
+	struct cvmx_pciercx_cfg064_s cn66xx;
+	struct cvmx_pciercx_cfg064_s cn68xx;
+	struct cvmx_pciercx_cfg064_s cn68xxp1;
+	struct cvmx_pciercx_cfg064_s cn70xx;
+	struct cvmx_pciercx_cfg064_s cn70xxp1;
+	struct cvmx_pciercx_cfg064_s cn73xx;
+	struct cvmx_pciercx_cfg064_s cn78xx;
+	struct cvmx_pciercx_cfg064_s cn78xxp1;
+	struct cvmx_pciercx_cfg064_s cnf71xx;
+	struct cvmx_pciercx_cfg064_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg064 cvmx_pciercx_cfg064_t;
+
+/**
+ * cvmx_pcierc#_cfg065
+ *
+ * This register contains the sixty-sixth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg065 {
+	u32 u32;
+	struct cvmx_pciercx_cfg065_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pciercx_cfg065_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg065_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg065_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg065_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg065_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_21_23 : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg065_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg065_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg065_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg065_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg065_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg065_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pciercx_cfg065_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg065_s cn73xx;
+	struct cvmx_pciercx_cfg065_s cn78xx;
+	struct cvmx_pciercx_cfg065_s cn78xxp1;
+	struct cvmx_pciercx_cfg065_cn70xx cnf71xx;
+	struct cvmx_pciercx_cfg065_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg065 cvmx_pciercx_cfg065_t;
+
+/**
+ * cvmx_pcierc#_cfg066
+ *
+ * This register contains the sixty-seventh 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg066 {
+	u32 u32;
+	struct cvmx_pciercx_cfg066_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbem : 1;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pciercx_cfg066_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg066_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg066_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg066_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg066_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombm : 1;
+		u32 reserved_21_23 : 3;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg066_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg066_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg066_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg066_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg066_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg066_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombm : 1;
+		u32 reserved_23_23 : 1;
+		u32 uciem : 1;
+		u32 reserved_21_21 : 1;
+		u32 urem : 1;
+		u32 ecrcem : 1;
+		u32 mtlpm : 1;
+		u32 rom : 1;
+		u32 ucm : 1;
+		u32 cam : 1;
+		u32 ctm : 1;
+		u32 fcpem : 1;
+		u32 ptlpm : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdem : 1;
+		u32 dlpem : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pciercx_cfg066_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg066_s cn73xx;
+	struct cvmx_pciercx_cfg066_s cn78xx;
+	struct cvmx_pciercx_cfg066_s cn78xxp1;
+	struct cvmx_pciercx_cfg066_cn70xx cnf71xx;
+	struct cvmx_pciercx_cfg066_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg066 cvmx_pciercx_cfg066_t;
+
+/**
+ * cvmx_pcierc#_cfg067
+ *
+ * This register contains the sixty-eighth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg067 {
+	u32 u32;
+	struct cvmx_pciercx_cfg067_s {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 reserved_21_23 : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pciercx_cfg067_cn52xx {
+		u32 reserved_21_31 : 11;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg067_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg067_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg067_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg067_cn61xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_21_23 : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn61xx;
+	struct cvmx_pciercx_cfg067_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg067_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg067_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg067_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg067_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg067_cn70xx {
+		u32 reserved_25_31 : 7;
+		u32 uatombs : 1;
+		u32 reserved_23_23 : 1;
+		u32 ucies : 1;
+		u32 reserved_21_21 : 1;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn70xx;
+	struct cvmx_pciercx_cfg067_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg067_cn73xx {
+		u32 reserved_26_31 : 6;
+		u32 tpbes : 1;
+		u32 uatombs : 1;
+		u32 unsuperr : 3;
+		u32 ures : 1;
+		u32 ecrces : 1;
+		u32 mtlps : 1;
+		u32 ros : 1;
+		u32 ucs : 1;
+		u32 cas : 1;
+		u32 cts : 1;
+		u32 fcpes : 1;
+		u32 ptlps : 1;
+		u32 reserved_6_11 : 6;
+		u32 sdes : 1;
+		u32 dlpes : 1;
+		u32 reserved_0_3 : 4;
+	} cn73xx;
+	struct cvmx_pciercx_cfg067_cn73xx cn78xx;
+	struct cvmx_pciercx_cfg067_cn73xx cn78xxp1;
+	struct cvmx_pciercx_cfg067_cn70xx cnf71xx;
+	struct cvmx_pciercx_cfg067_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg067 cvmx_pciercx_cfg067_t;
+
+/**
+ * cvmx_pcierc#_cfg068
+ *
+ * This register contains the sixty-ninth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg068 {
+	u32 u32;
+	struct cvmx_pciercx_cfg068_s {
+		u32 reserved_15_31 : 17;
+		u32 cies : 1;
+		u32 anfes : 1;
+		u32 rtts : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrs : 1;
+		u32 bdllps : 1;
+		u32 btlps : 1;
+		u32 reserved_1_5 : 5;
+		u32 res : 1;
+	} s;
+	struct cvmx_pciercx_cfg068_cn52xx {
+		u32 reserved_14_31 : 18;
+		u32 anfes : 1;
+		u32 rtts : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrs : 1;
+		u32 bdllps : 1;
+		u32 btlps : 1;
+		u32 reserved_1_5 : 5;
+		u32 res : 1;
+	} cn52xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg068_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg068_cn52xx cn61xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg068_cn52xx cn66xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn68xx;
+	struct cvmx_pciercx_cfg068_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg068_s cn70xx;
+	struct cvmx_pciercx_cfg068_s cn70xxp1;
+	struct cvmx_pciercx_cfg068_s cn73xx;
+	struct cvmx_pciercx_cfg068_s cn78xx;
+	struct cvmx_pciercx_cfg068_s cn78xxp1;
+	struct cvmx_pciercx_cfg068_s cnf71xx;
+	struct cvmx_pciercx_cfg068_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg068 cvmx_pciercx_cfg068_t;
+
+/**
+ * cvmx_pcierc#_cfg069
+ *
+ * This register contains the seventieth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg069 {
+	u32 u32;
+	struct cvmx_pciercx_cfg069_s {
+		u32 reserved_15_31 : 17;
+		u32 ciem : 1;
+		u32 anfem : 1;
+		u32 rttm : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrm : 1;
+		u32 bdllpm : 1;
+		u32 btlpm : 1;
+		u32 reserved_1_5 : 5;
+		u32 rem : 1;
+	} s;
+	struct cvmx_pciercx_cfg069_cn52xx {
+		u32 reserved_14_31 : 18;
+		u32 anfem : 1;
+		u32 rttm : 1;
+		u32 reserved_9_11 : 3;
+		u32 rnrm : 1;
+		u32 bdllpm : 1;
+		u32 btlpm : 1;
+		u32 reserved_1_5 : 5;
+		u32 rem : 1;
+	} cn52xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg069_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg069_cn52xx cn61xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg069_cn52xx cn66xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn68xx;
+	struct cvmx_pciercx_cfg069_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg069_s cn70xx;
+	struct cvmx_pciercx_cfg069_s cn70xxp1;
+	struct cvmx_pciercx_cfg069_s cn73xx;
+	struct cvmx_pciercx_cfg069_s cn78xx;
+	struct cvmx_pciercx_cfg069_s cn78xxp1;
+	struct cvmx_pciercx_cfg069_s cnf71xx;
+	struct cvmx_pciercx_cfg069_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg069 cvmx_pciercx_cfg069_t;
+
+/**
+ * cvmx_pcierc#_cfg070
+ *
+ * This register contains the seventy-first 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg070 {
+	u32 u32;
+	struct cvmx_pciercx_cfg070_s {
+		u32 reserved_12_31 : 20;
+		u32 tplp : 1;
+		u32 reserved_9_10 : 2;
+		u32 ce : 1;
+		u32 cc : 1;
+		u32 ge : 1;
+		u32 gc : 1;
+		u32 fep : 5;
+	} s;
+	struct cvmx_pciercx_cfg070_cn52xx {
+		u32 reserved_9_31 : 23;
+		u32 ce : 1;
+		u32 cc : 1;
+		u32 ge : 1;
+		u32 gc : 1;
+		u32 fep : 5;
+	} cn52xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg070_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg070_cn52xx cn61xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg070_cn52xx cn66xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn68xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg070_cn52xx cn70xx;
+	struct cvmx_pciercx_cfg070_cn52xx cn70xxp1;
+	struct cvmx_pciercx_cfg070_s cn73xx;
+	struct cvmx_pciercx_cfg070_s cn78xx;
+	struct cvmx_pciercx_cfg070_s cn78xxp1;
+	struct cvmx_pciercx_cfg070_cn52xx cnf71xx;
+	struct cvmx_pciercx_cfg070_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg070 cvmx_pciercx_cfg070_t;
+
+/**
+ * cvmx_pcierc#_cfg071
+ *
+ * This register contains the seventy-second 32-bits of PCIe type 1 configuration space.  The
+ * header log registers collect the header for the TLP corresponding to a detected error.
+ */
+union cvmx_pciercx_cfg071 {
+	u32 u32;
+	struct cvmx_pciercx_cfg071_s {
+		u32 dword1 : 32;
+	} s;
+	struct cvmx_pciercx_cfg071_s cn52xx;
+	struct cvmx_pciercx_cfg071_s cn52xxp1;
+	struct cvmx_pciercx_cfg071_s cn56xx;
+	struct cvmx_pciercx_cfg071_s cn56xxp1;
+	struct cvmx_pciercx_cfg071_s cn61xx;
+	struct cvmx_pciercx_cfg071_s cn63xx;
+	struct cvmx_pciercx_cfg071_s cn63xxp1;
+	struct cvmx_pciercx_cfg071_s cn66xx;
+	struct cvmx_pciercx_cfg071_s cn68xx;
+	struct cvmx_pciercx_cfg071_s cn68xxp1;
+	struct cvmx_pciercx_cfg071_s cn70xx;
+	struct cvmx_pciercx_cfg071_s cn70xxp1;
+	struct cvmx_pciercx_cfg071_s cn73xx;
+	struct cvmx_pciercx_cfg071_s cn78xx;
+	struct cvmx_pciercx_cfg071_s cn78xxp1;
+	struct cvmx_pciercx_cfg071_s cnf71xx;
+	struct cvmx_pciercx_cfg071_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg071 cvmx_pciercx_cfg071_t;
+
+/**
+ * cvmx_pcierc#_cfg072
+ *
+ * This register contains the seventy-third 32-bits of PCIe type 1 configuration space.  The
+ * header log registers collect the header for the TLP corresponding to a detected error.
+ */
+union cvmx_pciercx_cfg072 {
+	u32 u32;
+	struct cvmx_pciercx_cfg072_s {
+		u32 dword2 : 32;
+	} s;
+	struct cvmx_pciercx_cfg072_s cn52xx;
+	struct cvmx_pciercx_cfg072_s cn52xxp1;
+	struct cvmx_pciercx_cfg072_s cn56xx;
+	struct cvmx_pciercx_cfg072_s cn56xxp1;
+	struct cvmx_pciercx_cfg072_s cn61xx;
+	struct cvmx_pciercx_cfg072_s cn63xx;
+	struct cvmx_pciercx_cfg072_s cn63xxp1;
+	struct cvmx_pciercx_cfg072_s cn66xx;
+	struct cvmx_pciercx_cfg072_s cn68xx;
+	struct cvmx_pciercx_cfg072_s cn68xxp1;
+	struct cvmx_pciercx_cfg072_s cn70xx;
+	struct cvmx_pciercx_cfg072_s cn70xxp1;
+	struct cvmx_pciercx_cfg072_s cn73xx;
+	struct cvmx_pciercx_cfg072_s cn78xx;
+	struct cvmx_pciercx_cfg072_s cn78xxp1;
+	struct cvmx_pciercx_cfg072_s cnf71xx;
+	struct cvmx_pciercx_cfg072_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg072 cvmx_pciercx_cfg072_t;
+
+/**
+ * cvmx_pcierc#_cfg073
+ *
+ * This register contains the seventy-fourth 32-bits of PCIe type 1 configuration space.  The
+ * header log registers collect the header for the TLP corresponding to a detected error.
+ */
+union cvmx_pciercx_cfg073 {
+	u32 u32;
+	struct cvmx_pciercx_cfg073_s {
+		u32 dword3 : 32;
+	} s;
+	struct cvmx_pciercx_cfg073_s cn52xx;
+	struct cvmx_pciercx_cfg073_s cn52xxp1;
+	struct cvmx_pciercx_cfg073_s cn56xx;
+	struct cvmx_pciercx_cfg073_s cn56xxp1;
+	struct cvmx_pciercx_cfg073_s cn61xx;
+	struct cvmx_pciercx_cfg073_s cn63xx;
+	struct cvmx_pciercx_cfg073_s cn63xxp1;
+	struct cvmx_pciercx_cfg073_s cn66xx;
+	struct cvmx_pciercx_cfg073_s cn68xx;
+	struct cvmx_pciercx_cfg073_s cn68xxp1;
+	struct cvmx_pciercx_cfg073_s cn70xx;
+	struct cvmx_pciercx_cfg073_s cn70xxp1;
+	struct cvmx_pciercx_cfg073_s cn73xx;
+	struct cvmx_pciercx_cfg073_s cn78xx;
+	struct cvmx_pciercx_cfg073_s cn78xxp1;
+	struct cvmx_pciercx_cfg073_s cnf71xx;
+	struct cvmx_pciercx_cfg073_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg073 cvmx_pciercx_cfg073_t;
+
+/**
+ * cvmx_pcierc#_cfg074
+ *
+ * This register contains the seventy-fifth 32-bits of PCIe type 1 configuration space.  The
+ * header log registers collect the header for the TLP corresponding to a detected error.
+ */
+union cvmx_pciercx_cfg074 {
+	u32 u32;
+	struct cvmx_pciercx_cfg074_s {
+		u32 dword4 : 32;
+	} s;
+	struct cvmx_pciercx_cfg074_s cn52xx;
+	struct cvmx_pciercx_cfg074_s cn52xxp1;
+	struct cvmx_pciercx_cfg074_s cn56xx;
+	struct cvmx_pciercx_cfg074_s cn56xxp1;
+	struct cvmx_pciercx_cfg074_s cn61xx;
+	struct cvmx_pciercx_cfg074_s cn63xx;
+	struct cvmx_pciercx_cfg074_s cn63xxp1;
+	struct cvmx_pciercx_cfg074_s cn66xx;
+	struct cvmx_pciercx_cfg074_s cn68xx;
+	struct cvmx_pciercx_cfg074_s cn68xxp1;
+	struct cvmx_pciercx_cfg074_s cn70xx;
+	struct cvmx_pciercx_cfg074_s cn70xxp1;
+	struct cvmx_pciercx_cfg074_s cn73xx;
+	struct cvmx_pciercx_cfg074_s cn78xx;
+	struct cvmx_pciercx_cfg074_s cn78xxp1;
+	struct cvmx_pciercx_cfg074_s cnf71xx;
+	struct cvmx_pciercx_cfg074_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg074 cvmx_pciercx_cfg074_t;
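+
+/*
+ * Sketch for the header log registers (illustrative only): cfg071
+ * through cfg074 hold the four dwords of the TLP header belonging to
+ * the logged error, so a diagnostic dump reads four consecutive
+ * words. The base offset 0x11c (cfg071 * 4) and the read helper are
+ * assumptions for illustration:
+ *
+ *	u32 tlp_hdr[4];
+ *	int i;
+ *
+ *	for (i = 0; i < 4; i++)
+ *		tlp_hdr[i] = cvmx_pcie_cfgx_read(pcie_port, 0x11c + 4 * i);
+ */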
+
+/**
+ * cvmx_pcierc#_cfg075
+ *
+ * This register contains the seventy-sixth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg075 {
+	u32 u32;
+	struct cvmx_pciercx_cfg075_s {
+		u32 reserved_3_31 : 29;
+		u32 fere : 1;
+		u32 nfere : 1;
+		u32 cere : 1;
+	} s;
+	struct cvmx_pciercx_cfg075_s cn52xx;
+	struct cvmx_pciercx_cfg075_s cn52xxp1;
+	struct cvmx_pciercx_cfg075_s cn56xx;
+	struct cvmx_pciercx_cfg075_s cn56xxp1;
+	struct cvmx_pciercx_cfg075_s cn61xx;
+	struct cvmx_pciercx_cfg075_s cn63xx;
+	struct cvmx_pciercx_cfg075_s cn63xxp1;
+	struct cvmx_pciercx_cfg075_s cn66xx;
+	struct cvmx_pciercx_cfg075_s cn68xx;
+	struct cvmx_pciercx_cfg075_s cn68xxp1;
+	struct cvmx_pciercx_cfg075_s cn70xx;
+	struct cvmx_pciercx_cfg075_s cn70xxp1;
+	struct cvmx_pciercx_cfg075_s cn73xx;
+	struct cvmx_pciercx_cfg075_s cn78xx;
+	struct cvmx_pciercx_cfg075_s cn78xxp1;
+	struct cvmx_pciercx_cfg075_s cnf71xx;
+	struct cvmx_pciercx_cfg075_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg075 cvmx_pciercx_cfg075_t;
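+
+/*
+ * Sketch (illustrative only): the three enable bits of this root
+ * error command word gate whether reported fatal (fere), nonfatal
+ * (nfere) and correctable (cere) errors raise an interrupt. The
+ * offset 0x12c (cfg075 * 4) and the accessors are assumptions for
+ * illustration:
+ *
+ *	cvmx_pciercx_cfg075_t rerr_cmd;
+ *
+ *	rerr_cmd.u32 = cvmx_pcie_cfgx_read(pcie_port, 0x12c);
+ *	rerr_cmd.s.fere = 1;
+ *	rerr_cmd.s.nfere = 1;
+ *	rerr_cmd.s.cere = 1;
+ *	cvmx_pcie_cfgx_write(pcie_port, 0x12c, rerr_cmd.u32);
+ */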
+
+/**
+ * cvmx_pcierc#_cfg076
+ *
+ * This register contains the seventy-seventh 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg076 {
+	u32 u32;
+	struct cvmx_pciercx_cfg076_s {
+		u32 aeimn : 5;
+		u32 reserved_7_26 : 20;
+		u32 femr : 1;
+		u32 nfemr : 1;
+		u32 fuf : 1;
+		u32 multi_efnfr : 1;
+		u32 efnfr : 1;
+		u32 multi_ecr : 1;
+		u32 ecr : 1;
+	} s;
+	struct cvmx_pciercx_cfg076_s cn52xx;
+	struct cvmx_pciercx_cfg076_s cn52xxp1;
+	struct cvmx_pciercx_cfg076_s cn56xx;
+	struct cvmx_pciercx_cfg076_s cn56xxp1;
+	struct cvmx_pciercx_cfg076_s cn61xx;
+	struct cvmx_pciercx_cfg076_s cn63xx;
+	struct cvmx_pciercx_cfg076_s cn63xxp1;
+	struct cvmx_pciercx_cfg076_s cn66xx;
+	struct cvmx_pciercx_cfg076_s cn68xx;
+	struct cvmx_pciercx_cfg076_s cn68xxp1;
+	struct cvmx_pciercx_cfg076_s cn70xx;
+	struct cvmx_pciercx_cfg076_s cn70xxp1;
+	struct cvmx_pciercx_cfg076_s cn73xx;
+	struct cvmx_pciercx_cfg076_s cn78xx;
+	struct cvmx_pciercx_cfg076_s cn78xxp1;
+	struct cvmx_pciercx_cfg076_s cnf71xx;
+	struct cvmx_pciercx_cfg076_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg076 cvmx_pciercx_cfg076_t;
+
+/**
+ * cvmx_pcierc#_cfg077
+ *
+ * This register contains the seventy-eighth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg077 {
+	u32 u32;
+	struct cvmx_pciercx_cfg077_s {
+		u32 efnfsi : 16;
+		u32 ecsi : 16;
+	} s;
+	struct cvmx_pciercx_cfg077_s cn52xx;
+	struct cvmx_pciercx_cfg077_s cn52xxp1;
+	struct cvmx_pciercx_cfg077_s cn56xx;
+	struct cvmx_pciercx_cfg077_s cn56xxp1;
+	struct cvmx_pciercx_cfg077_s cn61xx;
+	struct cvmx_pciercx_cfg077_s cn63xx;
+	struct cvmx_pciercx_cfg077_s cn63xxp1;
+	struct cvmx_pciercx_cfg077_s cn66xx;
+	struct cvmx_pciercx_cfg077_s cn68xx;
+	struct cvmx_pciercx_cfg077_s cn68xxp1;
+	struct cvmx_pciercx_cfg077_s cn70xx;
+	struct cvmx_pciercx_cfg077_s cn70xxp1;
+	struct cvmx_pciercx_cfg077_s cn73xx;
+	struct cvmx_pciercx_cfg077_s cn78xx;
+	struct cvmx_pciercx_cfg077_s cn78xxp1;
+	struct cvmx_pciercx_cfg077_s cnf71xx;
+	struct cvmx_pciercx_cfg077_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg077 cvmx_pciercx_cfg077_t;
+
+/**
+ * cvmx_pcierc#_cfg086
+ *
+ * This register contains the eighty-seventh 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg086 {
+	u32 u32;
+	struct cvmx_pciercx_cfg086_s {
+		u32 nco : 12;
+		u32 cv : 4;
+		u32 pcieec : 16;
+	} s;
+	struct cvmx_pciercx_cfg086_s cn73xx;
+	struct cvmx_pciercx_cfg086_s cn78xx;
+	struct cvmx_pciercx_cfg086_s cn78xxp1;
+	struct cvmx_pciercx_cfg086_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg086 cvmx_pciercx_cfg086_t;
+
+/**
+ * cvmx_pcierc#_cfg087
+ *
+ * This register contains the eighty-eighth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg087 {
+	u32 u32;
+	struct cvmx_pciercx_cfg087_s {
+		u32 reserved_2_31 : 30;
+		u32 ler : 1;
+		u32 pe : 1;
+	} s;
+	struct cvmx_pciercx_cfg087_s cn73xx;
+	struct cvmx_pciercx_cfg087_s cn78xx;
+	struct cvmx_pciercx_cfg087_s cn78xxp1;
+	struct cvmx_pciercx_cfg087_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg087 cvmx_pciercx_cfg087_t;
+
+/**
+ * cvmx_pcierc#_cfg088
+ *
+ * This register contains the eighty-ninth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg088 {
+	u32 u32;
+	struct cvmx_pciercx_cfg088_s {
+		u32 reserved_8_31 : 24;
+		u32 les : 8;
+	} s;
+	struct cvmx_pciercx_cfg088_s cn73xx;
+	struct cvmx_pciercx_cfg088_s cn78xx;
+	struct cvmx_pciercx_cfg088_s cn78xxp1;
+	struct cvmx_pciercx_cfg088_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg088 cvmx_pciercx_cfg088_t;
+
+/**
+ * cvmx_pcierc#_cfg089
+ *
+ * This register contains the ninetieth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg089 {
+	u32 u32;
+	struct cvmx_pciercx_cfg089_s {
+		u32 reserved_31_31 : 1;
+		u32 l1urph : 3;
+		u32 l1utp : 4;
+		u32 reserved_23_23 : 1;
+		u32 l1drph : 3;
+		u32 l1ddtp : 4;
+		u32 reserved_15_15 : 1;
+		u32 l0urph : 3;
+		u32 l0utp : 4;
+		u32 reserved_7_7 : 1;
+		u32 l0drph : 3;
+		u32 l0dtp : 4;
+	} s;
+	struct cvmx_pciercx_cfg089_s cn73xx;
+	struct cvmx_pciercx_cfg089_s cn78xx;
+	struct cvmx_pciercx_cfg089_s cn78xxp1;
+	struct cvmx_pciercx_cfg089_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg089 cvmx_pciercx_cfg089_t;
+
+/**
+ * cvmx_pcierc#_cfg090
+ *
+ * This register contains the ninety-first 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg090 {
+	u32 u32;
+	struct cvmx_pciercx_cfg090_s {
+		u32 reserved_31_31 : 1;
+		u32 l3urph : 3;
+		u32 l3utp : 4;
+		u32 reserved_23_23 : 1;
+		u32 l3drph : 3;
+		u32 l3dtp : 4;
+		u32 reserved_15_15 : 1;
+		u32 l2urph : 3;
+		u32 l2utp : 4;
+		u32 reserved_7_7 : 1;
+		u32 l2drph : 3;
+		u32 l2dtp : 4;
+	} s;
+	struct cvmx_pciercx_cfg090_s cn73xx;
+	struct cvmx_pciercx_cfg090_s cn78xx;
+	struct cvmx_pciercx_cfg090_s cn78xxp1;
+	struct cvmx_pciercx_cfg090_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg090 cvmx_pciercx_cfg090_t;
+
+/**
+ * cvmx_pcierc#_cfg091
+ *
+ * This register contains the ninety-second 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg091 {
+	u32 u32;
+	struct cvmx_pciercx_cfg091_s {
+		u32 reserved_31_31 : 1;
+		u32 l5urph : 3;
+		u32 l5utp : 4;
+		u32 reserved_23_23 : 1;
+		u32 l5drph : 3;
+		u32 l5dtp : 4;
+		u32 reserved_15_15 : 1;
+		u32 l4urph : 3;
+		u32 l4utp : 4;
+		u32 reserved_7_7 : 1;
+		u32 l4drph : 3;
+		u32 l4dtp : 4;
+	} s;
+	struct cvmx_pciercx_cfg091_s cn73xx;
+	struct cvmx_pciercx_cfg091_s cn78xx;
+	struct cvmx_pciercx_cfg091_s cn78xxp1;
+	struct cvmx_pciercx_cfg091_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg091 cvmx_pciercx_cfg091_t;
+
+/**
+ * cvmx_pcierc#_cfg092
+ *
+ * This register contains the ninety-third 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg092 {
+	u32 u32;
+	struct cvmx_pciercx_cfg092_s {
+		u32 reserved_31_31 : 1;
+		u32 l7urph : 3;
+		u32 l7utp : 4;
+		u32 reserved_23_23 : 1;
+		u32 l7drph : 3;
+		u32 l7dtp : 4;
+		u32 reserved_15_15 : 1;
+		u32 l6urph : 3;
+		u32 l6utp : 4;
+		u32 reserved_7_7 : 1;
+		u32 l6drph : 3;
+		u32 l6dtp : 4;
+	} s;
+	struct cvmx_pciercx_cfg092_s cn73xx;
+	struct cvmx_pciercx_cfg092_s cn78xx;
+	struct cvmx_pciercx_cfg092_s cn78xxp1;
+	struct cvmx_pciercx_cfg092_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg092 cvmx_pciercx_cfg092_t;
+
+/**
+ * cvmx_pcierc#_cfg448
+ *
+ * This register contains the four hundred forty-ninth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg448 {
+	u32 u32;
+	struct cvmx_pciercx_cfg448_s {
+		u32 rtl : 16;
+		u32 rtltl : 16;
+	} s;
+	struct cvmx_pciercx_cfg448_s cn52xx;
+	struct cvmx_pciercx_cfg448_s cn52xxp1;
+	struct cvmx_pciercx_cfg448_s cn56xx;
+	struct cvmx_pciercx_cfg448_s cn56xxp1;
+	struct cvmx_pciercx_cfg448_s cn61xx;
+	struct cvmx_pciercx_cfg448_s cn63xx;
+	struct cvmx_pciercx_cfg448_s cn63xxp1;
+	struct cvmx_pciercx_cfg448_s cn66xx;
+	struct cvmx_pciercx_cfg448_s cn68xx;
+	struct cvmx_pciercx_cfg448_s cn68xxp1;
+	struct cvmx_pciercx_cfg448_s cn70xx;
+	struct cvmx_pciercx_cfg448_s cn70xxp1;
+	struct cvmx_pciercx_cfg448_s cn73xx;
+	struct cvmx_pciercx_cfg448_s cn78xx;
+	struct cvmx_pciercx_cfg448_s cn78xxp1;
+	struct cvmx_pciercx_cfg448_s cnf71xx;
+	struct cvmx_pciercx_cfg448_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg448 cvmx_pciercx_cfg448_t;
+
+/**
+ * cvmx_pcierc#_cfg449
+ *
+ * This register contains the four hundred fiftieth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg449 {
+	u32 u32;
+	struct cvmx_pciercx_cfg449_s {
+		u32 omr : 32;
+	} s;
+	struct cvmx_pciercx_cfg449_s cn52xx;
+	struct cvmx_pciercx_cfg449_s cn52xxp1;
+	struct cvmx_pciercx_cfg449_s cn56xx;
+	struct cvmx_pciercx_cfg449_s cn56xxp1;
+	struct cvmx_pciercx_cfg449_s cn61xx;
+	struct cvmx_pciercx_cfg449_s cn63xx;
+	struct cvmx_pciercx_cfg449_s cn63xxp1;
+	struct cvmx_pciercx_cfg449_s cn66xx;
+	struct cvmx_pciercx_cfg449_s cn68xx;
+	struct cvmx_pciercx_cfg449_s cn68xxp1;
+	struct cvmx_pciercx_cfg449_s cn70xx;
+	struct cvmx_pciercx_cfg449_s cn70xxp1;
+	struct cvmx_pciercx_cfg449_s cn73xx;
+	struct cvmx_pciercx_cfg449_s cn78xx;
+	struct cvmx_pciercx_cfg449_s cn78xxp1;
+	struct cvmx_pciercx_cfg449_s cnf71xx;
+	struct cvmx_pciercx_cfg449_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg449 cvmx_pciercx_cfg449_t;
+
+/**
+ * cvmx_pcierc#_cfg450
+ *
+ * This register contains the four hundred fifty-first 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg450 {
+	u32 u32;
+	struct cvmx_pciercx_cfg450_s {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_8_14 : 7;
+		u32 link_num : 8;
+	} s;
+	struct cvmx_pciercx_cfg450_s cn52xx;
+	struct cvmx_pciercx_cfg450_s cn52xxp1;
+	struct cvmx_pciercx_cfg450_s cn56xx;
+	struct cvmx_pciercx_cfg450_s cn56xxp1;
+	struct cvmx_pciercx_cfg450_s cn61xx;
+	struct cvmx_pciercx_cfg450_s cn63xx;
+	struct cvmx_pciercx_cfg450_s cn63xxp1;
+	struct cvmx_pciercx_cfg450_s cn66xx;
+	struct cvmx_pciercx_cfg450_s cn68xx;
+	struct cvmx_pciercx_cfg450_s cn68xxp1;
+	struct cvmx_pciercx_cfg450_cn70xx {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_12_14 : 3;
+		u32 link_cmd : 4;
+		u32 link_num : 8;
+	} cn70xx;
+	struct cvmx_pciercx_cfg450_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg450_cn73xx {
+		u32 lpec : 8;
+		u32 reserved_22_23 : 2;
+		u32 link_state : 6;
+		u32 force_link : 1;
+		u32 reserved_12_14 : 3;
+		u32 forced_ltssm : 4;
+		u32 link_num : 8;
+	} cn73xx;
+	struct cvmx_pciercx_cfg450_cn73xx cn78xx;
+	struct cvmx_pciercx_cfg450_cn73xx cn78xxp1;
+	struct cvmx_pciercx_cfg450_s cnf71xx;
+	struct cvmx_pciercx_cfg450_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg450 cvmx_pciercx_cfg450_t;
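+
+/*
+ * Sketch (illustrative only): this port link control word is one of
+ * the registers where newer parts repurpose bits of the common layout
+ * (link_cmd on cn70xx, forced_ltssm on cn73xx and later), so the
+ * per-model views matter during link bring-up. A typical forced-link
+ * sequence, with the offset 0x708 (cfg450 * 4) and the accessors
+ * assumed for illustration:
+ *
+ *	cvmx_pciercx_cfg450_t plc;
+ *
+ *	plc.u32 = cvmx_pcie_cfgx_read(pcie_port, 0x708);
+ *	plc.s.link_num = 0;
+ *	plc.s.force_link = 1;
+ *	cvmx_pcie_cfgx_write(pcie_port, 0x708, plc.u32);
+ */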
+
+/**
+ * cvmx_pcierc#_cfg451
+ *
+ * This register contains the four hundred fifty-second 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg451 {
+	u32 u32;
+	struct cvmx_pciercx_cfg451_s {
+		u32 reserved_31_31 : 1;
+		u32 easpml1 : 1;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 n_fts_cc : 8;
+		u32 n_fts : 8;
+		u32 ack_freq : 8;
+	} s;
+	struct cvmx_pciercx_cfg451_cn52xx {
+		u32 reserved_30_31 : 2;
+		u32 l1el : 3;
+		u32 l0el : 3;
+		u32 n_fts_cc : 8;
+		u32 n_fts : 8;
+		u32 ack_freq : 8;
+	} cn52xx;
+	struct cvmx_pciercx_cfg451_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg451_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg451_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg451_s cn61xx;
+	struct cvmx_pciercx_cfg451_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg451_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg451_s cn66xx;
+	struct cvmx_pciercx_cfg451_s cn68xx;
+	struct cvmx_pciercx_cfg451_s cn68xxp1;
+	struct cvmx_pciercx_cfg451_s cn70xx;
+	struct cvmx_pciercx_cfg451_s cn70xxp1;
+	struct cvmx_pciercx_cfg451_s cn73xx;
+	struct cvmx_pciercx_cfg451_s cn78xx;
+	struct cvmx_pciercx_cfg451_s cn78xxp1;
+	struct cvmx_pciercx_cfg451_s cnf71xx;
+	struct cvmx_pciercx_cfg451_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg451 cvmx_pciercx_cfg451_t;
+
+/**
+ * cvmx_pcierc#_cfg452
+ *
+ * This register contains the four hundred fifty-third 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg452 {
+	u32 u32;
+	struct cvmx_pciercx_cfg452_s {
+		u32 reserved_26_31 : 6;
+		u32 eccrc : 1;
+		u32 reserved_22_24 : 3;
+		u32 lme : 6;
+		u32 reserved_12_15 : 4;
+		u32 link_rate : 4;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} s;
+	struct cvmx_pciercx_cfg452_cn52xx {
+		u32 reserved_26_31 : 6;
+		u32 eccrc : 1;
+		u32 reserved_22_24 : 3;
+		u32 lme : 6;
+		u32 reserved_8_15 : 8;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn52xx;
+	struct cvmx_pciercx_cfg452_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg452_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg452_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg452_cn61xx {
+		u32 reserved_22_31 : 10;
+		u32 lme : 6;
+		u32 reserved_8_15 : 8;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn61xx;
+	struct cvmx_pciercx_cfg452_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg452_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg452_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg452_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg452_cn61xx cn68xxp1;
+	struct cvmx_pciercx_cfg452_cn70xx {
+		u32 reserved_22_31 : 10;
+		u32 lme : 6;
+		u32 reserved_12_15 : 4;
+		u32 link_rate : 4;
+		u32 flm : 1;
+		u32 reserved_6_6 : 1;
+		u32 dllle : 1;
+		u32 reserved_4_4 : 1;
+		u32 ra : 1;
+		u32 le : 1;
+		u32 sd : 1;
+		u32 omr : 1;
+	} cn70xx;
+	struct cvmx_pciercx_cfg452_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg452_cn70xx cn73xx;
+	struct cvmx_pciercx_cfg452_cn70xx cn78xx;
+	struct cvmx_pciercx_cfg452_cn70xx cn78xxp1;
+	struct cvmx_pciercx_cfg452_cn61xx cnf71xx;
+	struct cvmx_pciercx_cfg452_cn70xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg452 cvmx_pciercx_cfg452_t;
+
+/**
+ * cvmx_pcierc#_cfg453
+ *
+ * This register contains the four hundred fifty-fourth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg453 {
+	u32 u32;
+	struct cvmx_pciercx_cfg453_s {
+		u32 dlld : 1;
+		u32 reserved_26_30 : 5;
+		u32 ack_nak : 1;
+		u32 fcd : 1;
+		u32 ilst : 24;
+	} s;
+	struct cvmx_pciercx_cfg453_s cn52xx;
+	struct cvmx_pciercx_cfg453_s cn52xxp1;
+	struct cvmx_pciercx_cfg453_s cn56xx;
+	struct cvmx_pciercx_cfg453_s cn56xxp1;
+	struct cvmx_pciercx_cfg453_s cn61xx;
+	struct cvmx_pciercx_cfg453_s cn63xx;
+	struct cvmx_pciercx_cfg453_s cn63xxp1;
+	struct cvmx_pciercx_cfg453_s cn66xx;
+	struct cvmx_pciercx_cfg453_s cn68xx;
+	struct cvmx_pciercx_cfg453_s cn68xxp1;
+	struct cvmx_pciercx_cfg453_s cn70xx;
+	struct cvmx_pciercx_cfg453_s cn70xxp1;
+	struct cvmx_pciercx_cfg453_s cn73xx;
+	struct cvmx_pciercx_cfg453_s cn78xx;
+	struct cvmx_pciercx_cfg453_s cn78xxp1;
+	struct cvmx_pciercx_cfg453_s cnf71xx;
+	struct cvmx_pciercx_cfg453_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg453 cvmx_pciercx_cfg453_t;
+
+/**
+ * cvmx_pcierc#_cfg454
+ *
+ * This register contains the four hundred fifty-fifth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg454 {
+	u32 u32;
+	struct cvmx_pciercx_cfg454_s {
+		u32 cx_nfunc : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_11_13 : 3;
+		u32 nskps : 3;
+		u32 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pciercx_cfg454_cn52xx {
+		u32 reserved_29_31 : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_11_13 : 3;
+		u32 nskps : 3;
+		u32 reserved_4_7 : 4;
+		u32 ntss : 4;
+	} cn52xx;
+	struct cvmx_pciercx_cfg454_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg454_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg454_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg454_cn61xx {
+		u32 cx_nfunc : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn61xx;
+	struct cvmx_pciercx_cfg454_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg454_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg454_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg454_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg454_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg454_cn70xx {
+		u32 reserved_24_31 : 8;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn70xx;
+	struct cvmx_pciercx_cfg454_cn70xx cn70xxp1;
+	struct cvmx_pciercx_cfg454_cn73xx {
+		u32 reserved_29_31 : 3;
+		u32 tmfcwt : 5;
+		u32 tmanlt : 5;
+		u32 tmrt : 5;
+		u32 reserved_8_13 : 6;
+		u32 mfuncn : 8;
+	} cn73xx;
+	struct cvmx_pciercx_cfg454_cn73xx cn78xx;
+	struct cvmx_pciercx_cfg454_cn73xx cn78xxp1;
+	struct cvmx_pciercx_cfg454_cn61xx cnf71xx;
+	struct cvmx_pciercx_cfg454_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg454 cvmx_pciercx_cfg454_t;
+
+/**
+ * cvmx_pcierc#_cfg455
+ *
+ * This register contains the four hundred fifty-sixth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg455 {
+	u32 u32;
+	struct cvmx_pciercx_cfg455_s {
+		u32 m_cfg0_filt : 1;
+		u32 m_io_filt : 1;
+		u32 msg_ctrl : 1;
+		u32 m_cpl_ecrc_filt : 1;
+		u32 m_ecrc_filt : 1;
+		u32 m_cpl_len_err : 1;
+		u32 m_cpl_attr_err : 1;
+		u32 m_cpl_tc_err : 1;
+		u32 m_cpl_fun_err : 1;
+		u32 m_cpl_rid_err : 1;
+		u32 m_cpl_tag_err : 1;
+		u32 m_lk_filt : 1;
+		u32 m_cfg1_filt : 1;
+		u32 m_bar_match : 1;
+		u32 m_pois_filt : 1;
+		u32 m_fun : 1;
+		u32 dfcwt : 1;
+		u32 reserved_11_14 : 4;
+		u32 skpiv : 11;
+	} s;
+	struct cvmx_pciercx_cfg455_s cn52xx;
+	struct cvmx_pciercx_cfg455_s cn52xxp1;
+	struct cvmx_pciercx_cfg455_s cn56xx;
+	struct cvmx_pciercx_cfg455_s cn56xxp1;
+	struct cvmx_pciercx_cfg455_s cn61xx;
+	struct cvmx_pciercx_cfg455_s cn63xx;
+	struct cvmx_pciercx_cfg455_s cn63xxp1;
+	struct cvmx_pciercx_cfg455_s cn66xx;
+	struct cvmx_pciercx_cfg455_s cn68xx;
+	struct cvmx_pciercx_cfg455_s cn68xxp1;
+	struct cvmx_pciercx_cfg455_s cn70xx;
+	struct cvmx_pciercx_cfg455_s cn70xxp1;
+	struct cvmx_pciercx_cfg455_s cn73xx;
+	struct cvmx_pciercx_cfg455_s cn78xx;
+	struct cvmx_pciercx_cfg455_s cn78xxp1;
+	struct cvmx_pciercx_cfg455_s cnf71xx;
+	struct cvmx_pciercx_cfg455_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg455 cvmx_pciercx_cfg455_t;
+
+/**
+ * cvmx_pcierc#_cfg456
+ *
+ * This register contains the four hundred fifty-seventh 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg456 {
+	u32 u32;
+	struct cvmx_pciercx_cfg456_s {
+		u32 reserved_4_31 : 28;
+		u32 m_handle_flush : 1;
+		u32 m_dabort_4ucpl : 1;
+		u32 m_vend1_drp : 1;
+		u32 m_vend0_drp : 1;
+	} s;
+	struct cvmx_pciercx_cfg456_cn52xx {
+		u32 reserved_2_31 : 30;
+		u32 m_vend1_drp : 1;
+		u32 m_vend0_drp : 1;
+	} cn52xx;
+	struct cvmx_pciercx_cfg456_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg456_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg456_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg456_s cn61xx;
+	struct cvmx_pciercx_cfg456_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg456_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg456_s cn66xx;
+	struct cvmx_pciercx_cfg456_s cn68xx;
+	struct cvmx_pciercx_cfg456_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg456_s cn70xx;
+	struct cvmx_pciercx_cfg456_s cn70xxp1;
+	struct cvmx_pciercx_cfg456_s cn73xx;
+	struct cvmx_pciercx_cfg456_s cn78xx;
+	struct cvmx_pciercx_cfg456_s cn78xxp1;
+	struct cvmx_pciercx_cfg456_s cnf71xx;
+	struct cvmx_pciercx_cfg456_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg456 cvmx_pciercx_cfg456_t;
+
+/**
+ * cvmx_pcierc#_cfg458
+ *
+ * This register contains the four hundred fifty-ninth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg458 {
+	u32 u32;
+	struct cvmx_pciercx_cfg458_s {
+		u32 dbg_info_l32 : 32;
+	} s;
+	struct cvmx_pciercx_cfg458_s cn52xx;
+	struct cvmx_pciercx_cfg458_s cn52xxp1;
+	struct cvmx_pciercx_cfg458_s cn56xx;
+	struct cvmx_pciercx_cfg458_s cn56xxp1;
+	struct cvmx_pciercx_cfg458_s cn61xx;
+	struct cvmx_pciercx_cfg458_s cn63xx;
+	struct cvmx_pciercx_cfg458_s cn63xxp1;
+	struct cvmx_pciercx_cfg458_s cn66xx;
+	struct cvmx_pciercx_cfg458_s cn68xx;
+	struct cvmx_pciercx_cfg458_s cn68xxp1;
+	struct cvmx_pciercx_cfg458_s cn70xx;
+	struct cvmx_pciercx_cfg458_s cn70xxp1;
+	struct cvmx_pciercx_cfg458_s cn73xx;
+	struct cvmx_pciercx_cfg458_s cn78xx;
+	struct cvmx_pciercx_cfg458_s cn78xxp1;
+	struct cvmx_pciercx_cfg458_s cnf71xx;
+	struct cvmx_pciercx_cfg458_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg458 cvmx_pciercx_cfg458_t;
+
+/**
+ * cvmx_pcierc#_cfg459
+ *
+ * This register contains the four hundred sixtieth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg459 {
+	u32 u32;
+	struct cvmx_pciercx_cfg459_s {
+		u32 dbg_info_u32 : 32;
+	} s;
+	struct cvmx_pciercx_cfg459_s cn52xx;
+	struct cvmx_pciercx_cfg459_s cn52xxp1;
+	struct cvmx_pciercx_cfg459_s cn56xx;
+	struct cvmx_pciercx_cfg459_s cn56xxp1;
+	struct cvmx_pciercx_cfg459_s cn61xx;
+	struct cvmx_pciercx_cfg459_s cn63xx;
+	struct cvmx_pciercx_cfg459_s cn63xxp1;
+	struct cvmx_pciercx_cfg459_s cn66xx;
+	struct cvmx_pciercx_cfg459_s cn68xx;
+	struct cvmx_pciercx_cfg459_s cn68xxp1;
+	struct cvmx_pciercx_cfg459_s cn70xx;
+	struct cvmx_pciercx_cfg459_s cn70xxp1;
+	struct cvmx_pciercx_cfg459_s cn73xx;
+	struct cvmx_pciercx_cfg459_s cn78xx;
+	struct cvmx_pciercx_cfg459_s cn78xxp1;
+	struct cvmx_pciercx_cfg459_s cnf71xx;
+	struct cvmx_pciercx_cfg459_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg459 cvmx_pciercx_cfg459_t;
+
+/**
+ * cvmx_pcierc#_cfg460
+ *
+ * This register contains the four hundred sixty-first 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg460 {
+	u32 u32;
+	struct cvmx_pciercx_cfg460_s {
+		u32 reserved_20_31 : 12;
+		u32 tphfcc : 8;
+		u32 tpdfcc : 12;
+	} s;
+	struct cvmx_pciercx_cfg460_s cn52xx;
+	struct cvmx_pciercx_cfg460_s cn52xxp1;
+	struct cvmx_pciercx_cfg460_s cn56xx;
+	struct cvmx_pciercx_cfg460_s cn56xxp1;
+	struct cvmx_pciercx_cfg460_s cn61xx;
+	struct cvmx_pciercx_cfg460_s cn63xx;
+	struct cvmx_pciercx_cfg460_s cn63xxp1;
+	struct cvmx_pciercx_cfg460_s cn66xx;
+	struct cvmx_pciercx_cfg460_s cn68xx;
+	struct cvmx_pciercx_cfg460_s cn68xxp1;
+	struct cvmx_pciercx_cfg460_s cn70xx;
+	struct cvmx_pciercx_cfg460_s cn70xxp1;
+	struct cvmx_pciercx_cfg460_s cn73xx;
+	struct cvmx_pciercx_cfg460_s cn78xx;
+	struct cvmx_pciercx_cfg460_s cn78xxp1;
+	struct cvmx_pciercx_cfg460_s cnf71xx;
+	struct cvmx_pciercx_cfg460_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg460 cvmx_pciercx_cfg460_t;
+
+/**
+ * cvmx_pcierc#_cfg461
+ *
+ * This register contains the four hundred sixty-second 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg461 {
+	u32 u32;
+	struct cvmx_pciercx_cfg461_s {
+		u32 reserved_20_31 : 12;
+		u32 tchfcc : 8;
+		u32 tcdfcc : 12;
+	} s;
+	struct cvmx_pciercx_cfg461_s cn52xx;
+	struct cvmx_pciercx_cfg461_s cn52xxp1;
+	struct cvmx_pciercx_cfg461_s cn56xx;
+	struct cvmx_pciercx_cfg461_s cn56xxp1;
+	struct cvmx_pciercx_cfg461_s cn61xx;
+	struct cvmx_pciercx_cfg461_s cn63xx;
+	struct cvmx_pciercx_cfg461_s cn63xxp1;
+	struct cvmx_pciercx_cfg461_s cn66xx;
+	struct cvmx_pciercx_cfg461_s cn68xx;
+	struct cvmx_pciercx_cfg461_s cn68xxp1;
+	struct cvmx_pciercx_cfg461_s cn70xx;
+	struct cvmx_pciercx_cfg461_s cn70xxp1;
+	struct cvmx_pciercx_cfg461_s cn73xx;
+	struct cvmx_pciercx_cfg461_s cn78xx;
+	struct cvmx_pciercx_cfg461_s cn78xxp1;
+	struct cvmx_pciercx_cfg461_s cnf71xx;
+	struct cvmx_pciercx_cfg461_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg461 cvmx_pciercx_cfg461_t;
+
+/**
+ * cvmx_pcierc#_cfg462
+ *
+ * This register contains the four hundred sixty-third 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg462 {
+	u32 u32;
+	struct cvmx_pciercx_cfg462_s {
+		u32 reserved_20_31 : 12;
+		u32 tchfcc : 8;
+		u32 tcdfcc : 12;
+	} s;
+	struct cvmx_pciercx_cfg462_s cn52xx;
+	struct cvmx_pciercx_cfg462_s cn52xxp1;
+	struct cvmx_pciercx_cfg462_s cn56xx;
+	struct cvmx_pciercx_cfg462_s cn56xxp1;
+	struct cvmx_pciercx_cfg462_s cn61xx;
+	struct cvmx_pciercx_cfg462_s cn63xx;
+	struct cvmx_pciercx_cfg462_s cn63xxp1;
+	struct cvmx_pciercx_cfg462_s cn66xx;
+	struct cvmx_pciercx_cfg462_s cn68xx;
+	struct cvmx_pciercx_cfg462_s cn68xxp1;
+	struct cvmx_pciercx_cfg462_s cn70xx;
+	struct cvmx_pciercx_cfg462_s cn70xxp1;
+	struct cvmx_pciercx_cfg462_s cn73xx;
+	struct cvmx_pciercx_cfg462_s cn78xx;
+	struct cvmx_pciercx_cfg462_s cn78xxp1;
+	struct cvmx_pciercx_cfg462_s cnf71xx;
+	struct cvmx_pciercx_cfg462_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg462 cvmx_pciercx_cfg462_t;
+
+/**
+ * cvmx_pcierc#_cfg463
+ *
+ * This register contains the four hundred sixty-fourth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg463 {
+	u32 u32;
+	struct cvmx_pciercx_cfg463_s {
+		u32 fcltoe : 1;
+		u32 reserved_29_30 : 2;
+		u32 fcltov : 13;
+		u32 reserved_3_15 : 13;
+		u32 rqne : 1;
+		u32 trbne : 1;
+		u32 rtlpfccnr : 1;
+	} s;
+	struct cvmx_pciercx_cfg463_cn52xx {
+		u32 reserved_3_31 : 29;
+		u32 rqne : 1;
+		u32 trbne : 1;
+		u32 rtlpfccnr : 1;
+	} cn52xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn52xxp1;
+	struct cvmx_pciercx_cfg463_cn52xx cn56xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn56xxp1;
+	struct cvmx_pciercx_cfg463_cn52xx cn61xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn63xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn63xxp1;
+	struct cvmx_pciercx_cfg463_cn52xx cn66xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn68xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn68xxp1;
+	struct cvmx_pciercx_cfg463_cn52xx cn70xx;
+	struct cvmx_pciercx_cfg463_cn52xx cn70xxp1;
+	struct cvmx_pciercx_cfg463_s cn73xx;
+	struct cvmx_pciercx_cfg463_s cn78xx;
+	struct cvmx_pciercx_cfg463_s cn78xxp1;
+	struct cvmx_pciercx_cfg463_cn52xx cnf71xx;
+	struct cvmx_pciercx_cfg463_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg463 cvmx_pciercx_cfg463_t;
+
+/**
+ * cvmx_pcierc#_cfg464
+ *
+ * This register contains the four hundred sixty-fifth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg464 {
+	u32 u32;
+	struct cvmx_pciercx_cfg464_s {
+		u32 wrr_vc3 : 8;
+		u32 wrr_vc2 : 8;
+		u32 wrr_vc1 : 8;
+		u32 wrr_vc0 : 8;
+	} s;
+	struct cvmx_pciercx_cfg464_s cn52xx;
+	struct cvmx_pciercx_cfg464_s cn52xxp1;
+	struct cvmx_pciercx_cfg464_s cn56xx;
+	struct cvmx_pciercx_cfg464_s cn56xxp1;
+	struct cvmx_pciercx_cfg464_s cn61xx;
+	struct cvmx_pciercx_cfg464_s cn63xx;
+	struct cvmx_pciercx_cfg464_s cn63xxp1;
+	struct cvmx_pciercx_cfg464_s cn66xx;
+	struct cvmx_pciercx_cfg464_s cn68xx;
+	struct cvmx_pciercx_cfg464_s cn68xxp1;
+	struct cvmx_pciercx_cfg464_s cn70xx;
+	struct cvmx_pciercx_cfg464_s cn70xxp1;
+	struct cvmx_pciercx_cfg464_s cn73xx;
+	struct cvmx_pciercx_cfg464_s cn78xx;
+	struct cvmx_pciercx_cfg464_s cn78xxp1;
+	struct cvmx_pciercx_cfg464_s cnf71xx;
+	struct cvmx_pciercx_cfg464_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg464 cvmx_pciercx_cfg464_t;
+
+/**
+ * cvmx_pcierc#_cfg465
+ *
+ * This register contains the four hundred sixty-sixth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg465 {
+	u32 u32;
+	struct cvmx_pciercx_cfg465_s {
+		u32 wrr_vc7 : 8;
+		u32 wrr_vc6 : 8;
+		u32 wrr_vc5 : 8;
+		u32 wrr_vc4 : 8;
+	} s;
+	struct cvmx_pciercx_cfg465_s cn52xx;
+	struct cvmx_pciercx_cfg465_s cn52xxp1;
+	struct cvmx_pciercx_cfg465_s cn56xx;
+	struct cvmx_pciercx_cfg465_s cn56xxp1;
+	struct cvmx_pciercx_cfg465_s cn61xx;
+	struct cvmx_pciercx_cfg465_s cn63xx;
+	struct cvmx_pciercx_cfg465_s cn63xxp1;
+	struct cvmx_pciercx_cfg465_s cn66xx;
+	struct cvmx_pciercx_cfg465_s cn68xx;
+	struct cvmx_pciercx_cfg465_s cn68xxp1;
+	struct cvmx_pciercx_cfg465_s cn70xx;
+	struct cvmx_pciercx_cfg465_s cn70xxp1;
+	struct cvmx_pciercx_cfg465_s cn73xx;
+	struct cvmx_pciercx_cfg465_s cn78xx;
+	struct cvmx_pciercx_cfg465_s cn78xxp1;
+	struct cvmx_pciercx_cfg465_s cnf71xx;
+	struct cvmx_pciercx_cfg465_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg465 cvmx_pciercx_cfg465_t;
+
+/**
+ * cvmx_pcierc#_cfg466
+ *
+ * This register contains the four hundred sixty-seventh 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg466 {
+	u32 u32;
+	struct cvmx_pciercx_cfg466_s {
+		u32 rx_queue_order : 1;
+		u32 type_ordering : 1;
+		u32 reserved_24_29 : 6;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pciercx_cfg466_s cn52xx;
+	struct cvmx_pciercx_cfg466_s cn52xxp1;
+	struct cvmx_pciercx_cfg466_s cn56xx;
+	struct cvmx_pciercx_cfg466_s cn56xxp1;
+	struct cvmx_pciercx_cfg466_s cn61xx;
+	struct cvmx_pciercx_cfg466_s cn63xx;
+	struct cvmx_pciercx_cfg466_s cn63xxp1;
+	struct cvmx_pciercx_cfg466_s cn66xx;
+	struct cvmx_pciercx_cfg466_s cn68xx;
+	struct cvmx_pciercx_cfg466_s cn68xxp1;
+	struct cvmx_pciercx_cfg466_s cn70xx;
+	struct cvmx_pciercx_cfg466_s cn70xxp1;
+	struct cvmx_pciercx_cfg466_s cn73xx;
+	struct cvmx_pciercx_cfg466_s cn78xx;
+	struct cvmx_pciercx_cfg466_s cn78xxp1;
+	struct cvmx_pciercx_cfg466_s cnf71xx;
+	struct cvmx_pciercx_cfg466_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg466 cvmx_pciercx_cfg466_t;
+
+/**
+ * cvmx_pcierc#_cfg467
+ *
+ * This register contains the four hundred sixty-eighth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg467 {
+	u32 u32;
+	struct cvmx_pciercx_cfg467_s {
+		u32 reserved_24_31 : 8;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pciercx_cfg467_s cn52xx;
+	struct cvmx_pciercx_cfg467_s cn52xxp1;
+	struct cvmx_pciercx_cfg467_s cn56xx;
+	struct cvmx_pciercx_cfg467_s cn56xxp1;
+	struct cvmx_pciercx_cfg467_s cn61xx;
+	struct cvmx_pciercx_cfg467_s cn63xx;
+	struct cvmx_pciercx_cfg467_s cn63xxp1;
+	struct cvmx_pciercx_cfg467_s cn66xx;
+	struct cvmx_pciercx_cfg467_s cn68xx;
+	struct cvmx_pciercx_cfg467_s cn68xxp1;
+	struct cvmx_pciercx_cfg467_s cn70xx;
+	struct cvmx_pciercx_cfg467_s cn70xxp1;
+	struct cvmx_pciercx_cfg467_s cn73xx;
+	struct cvmx_pciercx_cfg467_s cn78xx;
+	struct cvmx_pciercx_cfg467_s cn78xxp1;
+	struct cvmx_pciercx_cfg467_s cnf71xx;
+	struct cvmx_pciercx_cfg467_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg467 cvmx_pciercx_cfg467_t;
+
+/**
+ * cvmx_pcierc#_cfg468
+ *
+ * This register contains the four hundred sixty-ninth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg468 {
+	u32 u32;
+	struct cvmx_pciercx_cfg468_s {
+		u32 reserved_24_31 : 8;
+		u32 queue_mode : 3;
+		u32 reserved_20_20 : 1;
+		u32 header_credits : 8;
+		u32 data_credits : 12;
+	} s;
+	struct cvmx_pciercx_cfg468_s cn52xx;
+	struct cvmx_pciercx_cfg468_s cn52xxp1;
+	struct cvmx_pciercx_cfg468_s cn56xx;
+	struct cvmx_pciercx_cfg468_s cn56xxp1;
+	struct cvmx_pciercx_cfg468_s cn61xx;
+	struct cvmx_pciercx_cfg468_s cn63xx;
+	struct cvmx_pciercx_cfg468_s cn63xxp1;
+	struct cvmx_pciercx_cfg468_s cn66xx;
+	struct cvmx_pciercx_cfg468_s cn68xx;
+	struct cvmx_pciercx_cfg468_s cn68xxp1;
+	struct cvmx_pciercx_cfg468_s cn70xx;
+	struct cvmx_pciercx_cfg468_s cn70xxp1;
+	struct cvmx_pciercx_cfg468_s cn73xx;
+	struct cvmx_pciercx_cfg468_s cn78xx;
+	struct cvmx_pciercx_cfg468_s cn78xxp1;
+	struct cvmx_pciercx_cfg468_s cnf71xx;
+	struct cvmx_pciercx_cfg468_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg468 cvmx_pciercx_cfg468_t;
+
+/**
+ * cvmx_pcierc#_cfg490
+ *
+ * PCIE_CFG490 = Four hundred ninety-first 32-bits of PCIE type 1 config space
+ * (VC0 Posted Buffer Depth)
+ */
+union cvmx_pciercx_cfg490 {
+	u32 u32;
+	struct cvmx_pciercx_cfg490_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pciercx_cfg490_s cn52xx;
+	struct cvmx_pciercx_cfg490_s cn52xxp1;
+	struct cvmx_pciercx_cfg490_s cn56xx;
+	struct cvmx_pciercx_cfg490_s cn56xxp1;
+	struct cvmx_pciercx_cfg490_s cn61xx;
+	struct cvmx_pciercx_cfg490_s cn63xx;
+	struct cvmx_pciercx_cfg490_s cn63xxp1;
+	struct cvmx_pciercx_cfg490_s cn66xx;
+	struct cvmx_pciercx_cfg490_s cn68xx;
+	struct cvmx_pciercx_cfg490_s cn68xxp1;
+	struct cvmx_pciercx_cfg490_s cn70xx;
+	struct cvmx_pciercx_cfg490_s cn70xxp1;
+	struct cvmx_pciercx_cfg490_s cnf71xx;
+};
+
+typedef union cvmx_pciercx_cfg490 cvmx_pciercx_cfg490_t;
+
+/**
+ * cvmx_pcierc#_cfg491
+ *
+ * PCIE_CFG491 = Four hundred ninety-second 32-bits of PCIE type 1 config space
+ * (VC0 Non-Posted Buffer Depth)
+ */
+union cvmx_pciercx_cfg491 {
+	u32 u32;
+	struct cvmx_pciercx_cfg491_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pciercx_cfg491_s cn52xx;
+	struct cvmx_pciercx_cfg491_s cn52xxp1;
+	struct cvmx_pciercx_cfg491_s cn56xx;
+	struct cvmx_pciercx_cfg491_s cn56xxp1;
+	struct cvmx_pciercx_cfg491_s cn61xx;
+	struct cvmx_pciercx_cfg491_s cn63xx;
+	struct cvmx_pciercx_cfg491_s cn63xxp1;
+	struct cvmx_pciercx_cfg491_s cn66xx;
+	struct cvmx_pciercx_cfg491_s cn68xx;
+	struct cvmx_pciercx_cfg491_s cn68xxp1;
+	struct cvmx_pciercx_cfg491_s cn70xx;
+	struct cvmx_pciercx_cfg491_s cn70xxp1;
+	struct cvmx_pciercx_cfg491_s cnf71xx;
+};
+
+typedef union cvmx_pciercx_cfg491 cvmx_pciercx_cfg491_t;
+
+/**
+ * cvmx_pcierc#_cfg492
+ *
+ * PCIE_CFG492 = Four hundred ninety-third 32-bits of PCIE type 1 config space
+ * (VC0 Completion Buffer Depth)
+ */
+union cvmx_pciercx_cfg492 {
+	u32 u32;
+	struct cvmx_pciercx_cfg492_s {
+		u32 reserved_26_31 : 6;
+		u32 header_depth : 10;
+		u32 reserved_14_15 : 2;
+		u32 data_depth : 14;
+	} s;
+	struct cvmx_pciercx_cfg492_s cn52xx;
+	struct cvmx_pciercx_cfg492_s cn52xxp1;
+	struct cvmx_pciercx_cfg492_s cn56xx;
+	struct cvmx_pciercx_cfg492_s cn56xxp1;
+	struct cvmx_pciercx_cfg492_s cn61xx;
+	struct cvmx_pciercx_cfg492_s cn63xx;
+	struct cvmx_pciercx_cfg492_s cn63xxp1;
+	struct cvmx_pciercx_cfg492_s cn66xx;
+	struct cvmx_pciercx_cfg492_s cn68xx;
+	struct cvmx_pciercx_cfg492_s cn68xxp1;
+	struct cvmx_pciercx_cfg492_s cn70xx;
+	struct cvmx_pciercx_cfg492_s cn70xxp1;
+	struct cvmx_pciercx_cfg492_s cnf71xx;
+};
+
+typedef union cvmx_pciercx_cfg492 cvmx_pciercx_cfg492_t;
+
+/**
+ * cvmx_pcierc#_cfg515
+ *
+ * This register contains the five hundred sixteenth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg515 {
+	u32 u32;
+	struct cvmx_pciercx_cfg515_s {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 reserved_8_16 : 9;
+		u32 n_fts : 8;
+	} s;
+	struct cvmx_pciercx_cfg515_cn61xx {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 le : 9;
+		u32 n_fts : 8;
+	} cn61xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn63xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn63xxp1;
+	struct cvmx_pciercx_cfg515_cn61xx cn66xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn68xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn68xxp1;
+	struct cvmx_pciercx_cfg515_cn61xx cn70xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn70xxp1;
+	struct cvmx_pciercx_cfg515_cn61xx cn73xx;
+	struct cvmx_pciercx_cfg515_cn78xx {
+		u32 reserved_21_31 : 11;
+		u32 s_d_e : 1;
+		u32 ctcrb : 1;
+		u32 cpyts : 1;
+		u32 dsc : 1;
+		u32 alaneflip : 1;
+		u32 pdetlane : 3;
+		u32 nlanes : 5;
+		u32 n_fts : 8;
+	} cn78xx;
+	struct cvmx_pciercx_cfg515_cn61xx cn78xxp1;
+	struct cvmx_pciercx_cfg515_cn61xx cnf71xx;
+	struct cvmx_pciercx_cfg515_cn78xx cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg515 cvmx_pciercx_cfg515_t;
+
+/**
+ * cvmx_pcierc#_cfg516
+ *
+ * This register contains the five hundred seventeenth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg516 {
+	u32 u32;
+	struct cvmx_pciercx_cfg516_s {
+		u32 phy_stat : 32;
+	} s;
+	struct cvmx_pciercx_cfg516_s cn52xx;
+	struct cvmx_pciercx_cfg516_s cn52xxp1;
+	struct cvmx_pciercx_cfg516_s cn56xx;
+	struct cvmx_pciercx_cfg516_s cn56xxp1;
+	struct cvmx_pciercx_cfg516_s cn61xx;
+	struct cvmx_pciercx_cfg516_s cn63xx;
+	struct cvmx_pciercx_cfg516_s cn63xxp1;
+	struct cvmx_pciercx_cfg516_s cn66xx;
+	struct cvmx_pciercx_cfg516_s cn68xx;
+	struct cvmx_pciercx_cfg516_s cn68xxp1;
+	struct cvmx_pciercx_cfg516_s cn70xx;
+	struct cvmx_pciercx_cfg516_s cn70xxp1;
+	struct cvmx_pciercx_cfg516_s cn73xx;
+	struct cvmx_pciercx_cfg516_s cn78xx;
+	struct cvmx_pciercx_cfg516_s cn78xxp1;
+	struct cvmx_pciercx_cfg516_s cnf71xx;
+	struct cvmx_pciercx_cfg516_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg516 cvmx_pciercx_cfg516_t;
+
+/**
+ * cvmx_pcierc#_cfg517
+ *
+ * This register contains the five hundred eighteenth 32-bits of PCIe type 1 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg517 {
+	u32 u32;
+	struct cvmx_pciercx_cfg517_s {
+		u32 phy_ctrl : 32;
+	} s;
+	struct cvmx_pciercx_cfg517_s cn52xx;
+	struct cvmx_pciercx_cfg517_s cn52xxp1;
+	struct cvmx_pciercx_cfg517_s cn56xx;
+	struct cvmx_pciercx_cfg517_s cn56xxp1;
+	struct cvmx_pciercx_cfg517_s cn61xx;
+	struct cvmx_pciercx_cfg517_s cn63xx;
+	struct cvmx_pciercx_cfg517_s cn63xxp1;
+	struct cvmx_pciercx_cfg517_s cn66xx;
+	struct cvmx_pciercx_cfg517_s cn68xx;
+	struct cvmx_pciercx_cfg517_s cn68xxp1;
+	struct cvmx_pciercx_cfg517_s cn70xx;
+	struct cvmx_pciercx_cfg517_s cn70xxp1;
+	struct cvmx_pciercx_cfg517_s cn73xx;
+	struct cvmx_pciercx_cfg517_s cn78xx;
+	struct cvmx_pciercx_cfg517_s cn78xxp1;
+	struct cvmx_pciercx_cfg517_s cnf71xx;
+	struct cvmx_pciercx_cfg517_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg517 cvmx_pciercx_cfg517_t;
+
+/**
+ * cvmx_pcierc#_cfg548
+ *
+ * This register contains the five hundred forty-ninth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg548 {
+	u32 u32;
+	struct cvmx_pciercx_cfg548_s {
+		u32 reserved_26_31 : 6;
+		u32 rss : 2;
+		u32 eiedd : 1;
+		u32 reserved_19_22 : 4;
+		u32 dcbd : 1;
+		u32 dtdd : 1;
+		u32 ed : 1;
+		u32 reserved_13_15 : 3;
+		u32 rxeq_ph01_en : 1;
+		u32 erd : 1;
+		u32 ecrd : 1;
+		u32 ep2p3d : 1;
+		u32 dsg3 : 1;
+		u32 reserved_1_7 : 7;
+		u32 grizdnc : 1;
+	} s;
+	struct cvmx_pciercx_cfg548_s cn73xx;
+	struct cvmx_pciercx_cfg548_s cn78xx;
+	struct cvmx_pciercx_cfg548_s cn78xxp1;
+	struct cvmx_pciercx_cfg548_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg548 cvmx_pciercx_cfg548_t;
+
+/**
+ * cvmx_pcierc#_cfg554
+ *
+ * This register contains the five hundred fifty-fifth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg554 {
+	u32 u32;
+	struct cvmx_pciercx_cfg554_s {
+		u32 reserved_27_31 : 5;
+		u32 scefpm : 1;
+		u32 reserved_25_25 : 1;
+		u32 iif : 1;
+		u32 prv : 16;
+		u32 reserved_6_7 : 2;
+		u32 p23td : 1;
+		u32 bt : 1;
+		u32 fm : 4;
+	} s;
+	struct cvmx_pciercx_cfg554_cn73xx {
+		u32 reserved_25_31 : 7;
+		u32 iif : 1;
+		u32 prv : 16;
+		u32 reserved_6_7 : 2;
+		u32 p23td : 1;
+		u32 bt : 1;
+		u32 fm : 4;
+	} cn73xx;
+	struct cvmx_pciercx_cfg554_s cn78xx;
+	struct cvmx_pciercx_cfg554_s cn78xxp1;
+	struct cvmx_pciercx_cfg554_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg554 cvmx_pciercx_cfg554_t;
+
+/**
+ * cvmx_pcierc#_cfg558
+ *
+ * This register contains the five hundred fifty-ninth 32-bits of type 0 PCIe configuration space.
+ *
+ */
+union cvmx_pciercx_cfg558 {
+	u32 u32;
+	struct cvmx_pciercx_cfg558_s {
+		u32 ple : 1;
+		u32 rxstatus : 31;
+	} s;
+	struct cvmx_pciercx_cfg558_s cn73xx;
+	struct cvmx_pciercx_cfg558_s cn78xx;
+	struct cvmx_pciercx_cfg558_s cn78xxp1;
+	struct cvmx_pciercx_cfg558_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg558 cvmx_pciercx_cfg558_t;
+
+/**
+ * cvmx_pcierc#_cfg559
+ *
+ * This register contains the five hundred sixtieth 32-bits of PCIe type 0 configuration space.
+ *
+ */
+union cvmx_pciercx_cfg559 {
+	u32 u32;
+	struct cvmx_pciercx_cfg559_s {
+		u32 reserved_1_31 : 31;
+		u32 dbi_ro_wr_en : 1;
+	} s;
+	struct cvmx_pciercx_cfg559_s cn73xx;
+	struct cvmx_pciercx_cfg559_s cn78xx;
+	struct cvmx_pciercx_cfg559_s cn78xxp1;
+	struct cvmx_pciercx_cfg559_s cnf75xx;
+};
+
+typedef union cvmx_pciercx_cfg559 cvmx_pciercx_cfg559_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 19/50] mips: octeon: Add cvmx-pcsx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (17 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 18/50] mips: octeon: Add cvmx-pciercx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 20/50] mips: octeon: Add cvmx-pemx-defs.h " Stefan Roese
                   ` (33 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-pcsx-defs.h header file from the 2013 U-Boot. It will be
used by the drivers added later to support PCIe and networking on the
MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pcsx-defs.h | 1005 +++++++++++++++++
 1 file changed, 1005 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcsx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcsx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pcsx-defs.h
new file mode 100644
index 0000000000..e534b6711d
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pcsx-defs.h
@@ -0,0 +1,1005 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pcsx.
+ */
+
+#ifndef __CVMX_PCSX_DEFS_H__
+#define __CVMX_PCSX_DEFS_H__
+
+static inline u64 CVMX_PCSX_ANX_ADV_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001010ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001010ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001010ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001010ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_ANX_EXT_ST_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001028ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001028ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001028ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001028ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_ANX_LP_ABIL_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001018ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001018ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001018ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001018ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_ANX_RESULTS_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001020ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001020ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001020ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001020ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_INTX_EN_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001088ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001088ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001088ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001088ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_INTX_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001080ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001080ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001080ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001080ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_LINKX_TIMER_COUNT_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001040ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001040ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001040ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001040ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_LOG_ANLX_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001090ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001090ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001090ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001090ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+#define CVMX_PCSX_MAC_CRDT_CNTX_REG(offset, block_id)                                              \
+	(0x00011800B00010B0ull + (((offset) & 3) + ((block_id) & 1) * 0x20000ull) * 1024)
+static inline u64 CVMX_PCSX_MISCX_CTL_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001078ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001078ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001078ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001078ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_MRX_CONTROL_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001000ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001000ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001000ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001000ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_MRX_STATUS_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001008ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001008ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001008ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001008ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_RXX_STATES_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001058ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001058ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001058ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001058ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_RXX_SYNC_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001050ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001050ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001050ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001050ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+#define CVMX_PCSX_SERDES_CRDT_CNTX_REG(offset, block_id)                                           \
+	(0x00011800B00010A0ull + (((offset) & 3) + ((block_id) & 1) * 0x20000ull) * 1024)
+static inline u64 CVMX_PCSX_SGMX_AN_ADV_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001068ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001068ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001068ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001068ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_SGMX_LP_ADV_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001070ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001070ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001070ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001070ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_TXX_STATES_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001060ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001060ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001060ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001060ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+static inline u64 CVMX_PCSX_TX_RXX_POLARITY_REG(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001048ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001048ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800B0001048ull + ((offset) + (block_id) * 0x4000ull) * 1024;
+	}
+	return 0x00011800B0001048ull + ((offset) + (block_id) * 0x20000ull) * 1024;
+}
+
+/**
+ * cvmx_pcs#_an#_adv_reg
+ *
+ * Bits [15:9] in the Status Register indicate the ability to operate as
+ * per those signalling specifications when the misc ctl reg MAC_PHY bit is
+ * set to MAC mode. Bits [15:9] always read 1'b0, indicating that the chip
+ * cannot operate in the corresponding modes.
+ *
+ * Bit [4] RM_FLT is a don't care when the selected mode is SGMII.
+ *
+ * PCS_AN_ADV_REG = AN Advertisement Register 4
+ */
+union cvmx_pcsx_anx_adv_reg {
+	u64 u64;
+	struct cvmx_pcsx_anx_adv_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 np : 1;
+		u64 reserved_14_14 : 1;
+		u64 rem_flt : 2;
+		u64 reserved_9_11 : 3;
+		u64 pause : 2;
+		u64 hfd : 1;
+		u64 fd : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_pcsx_anx_adv_reg_s cn52xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn52xxp1;
+	struct cvmx_pcsx_anx_adv_reg_s cn56xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn56xxp1;
+	struct cvmx_pcsx_anx_adv_reg_s cn61xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn63xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn63xxp1;
+	struct cvmx_pcsx_anx_adv_reg_s cn66xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn68xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn68xxp1;
+	struct cvmx_pcsx_anx_adv_reg_s cn70xx;
+	struct cvmx_pcsx_anx_adv_reg_s cn70xxp1;
+	struct cvmx_pcsx_anx_adv_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_anx_adv_reg cvmx_pcsx_anx_adv_reg_t;
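+
+/*
+ * Usage sketch (illustrative only, not part of the imported header):
+ * assuming the classic cvmx_read_csr()/cvmx_write_csr() CSR accessors are
+ * available, a driver could advertise full duplex with pause support
+ * before restarting auto negotiation:
+ *
+ *	cvmx_pcsx_anx_adv_reg_t adv;
+ *
+ *	adv.u64 = cvmx_read_csr(CVMX_PCSX_ANX_ADV_REG(index, interface));
+ *	adv.s.fd = 1;		// advertise full duplex
+ *	adv.s.pause = 3;	// advertise symmetric + asymmetric pause
+ *	cvmx_write_csr(CVMX_PCSX_ANX_ADV_REG(index, interface), adv.u64);
+ */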
+
+/**
+ * cvmx_pcs#_an#_ext_st_reg
+ *
+ * as per IEEE802.3 Clause 22
+ *
+ */
+union cvmx_pcsx_anx_ext_st_reg {
+	u64 u64;
+	struct cvmx_pcsx_anx_ext_st_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 thou_xfd : 1;
+		u64 thou_xhd : 1;
+		u64 thou_tfd : 1;
+		u64 thou_thd : 1;
+		u64 reserved_0_11 : 12;
+	} s;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn52xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn52xxp1;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn56xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn56xxp1;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn61xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn63xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn63xxp1;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn66xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn68xx;
+	struct cvmx_pcsx_anx_ext_st_reg_s cn68xxp1;
+	struct cvmx_pcsx_anx_ext_st_reg_cn70xx {
+		u64 reserved_16_63 : 48;
+		u64 thou_xfd : 1;
+		u64 thou_xhd : 1;
+		u64 thou_tfd : 1;
+		u64 thou_thd : 1;
+		u64 reserved_11_0 : 12;
+	} cn70xx;
+	struct cvmx_pcsx_anx_ext_st_reg_cn70xx cn70xxp1;
+	struct cvmx_pcsx_anx_ext_st_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_anx_ext_st_reg cvmx_pcsx_anx_ext_st_reg_t;
+
+/**
+ * cvmx_pcs#_an#_lp_abil_reg
+ *
+ * as per IEEE802.3 Clause 37
+ *
+ */
+union cvmx_pcsx_anx_lp_abil_reg {
+	u64 u64;
+	struct cvmx_pcsx_anx_lp_abil_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 np : 1;
+		u64 ack : 1;
+		u64 rem_flt : 2;
+		u64 reserved_9_11 : 3;
+		u64 pause : 2;
+		u64 hfd : 1;
+		u64 fd : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn52xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn52xxp1;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn56xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn56xxp1;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn61xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn63xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn63xxp1;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn66xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn68xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn68xxp1;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn70xx;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cn70xxp1;
+	struct cvmx_pcsx_anx_lp_abil_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_anx_lp_abil_reg cvmx_pcsx_anx_lp_abil_reg_t;
+
+/**
+ * cvmx_pcs#_an#_results_reg
+ *
+ * NOTE:
+ * an_results_reg is don't care when AN_OVRD is set to 1. If AN_OVRD=0 and
+ * AN_CPT=1, the an_results_reg is valid.
+ */
+union cvmx_pcsx_anx_results_reg {
+	u64 u64;
+	struct cvmx_pcsx_anx_results_reg_s {
+		u64 reserved_7_63 : 57;
+		u64 pause : 2;
+		u64 spd : 2;
+		u64 an_cpt : 1;
+		u64 dup : 1;
+		u64 link_ok : 1;
+	} s;
+	struct cvmx_pcsx_anx_results_reg_s cn52xx;
+	struct cvmx_pcsx_anx_results_reg_s cn52xxp1;
+	struct cvmx_pcsx_anx_results_reg_s cn56xx;
+	struct cvmx_pcsx_anx_results_reg_s cn56xxp1;
+	struct cvmx_pcsx_anx_results_reg_s cn61xx;
+	struct cvmx_pcsx_anx_results_reg_s cn63xx;
+	struct cvmx_pcsx_anx_results_reg_s cn63xxp1;
+	struct cvmx_pcsx_anx_results_reg_s cn66xx;
+	struct cvmx_pcsx_anx_results_reg_s cn68xx;
+	struct cvmx_pcsx_anx_results_reg_s cn68xxp1;
+	struct cvmx_pcsx_anx_results_reg_s cn70xx;
+	struct cvmx_pcsx_anx_results_reg_s cn70xxp1;
+	struct cvmx_pcsx_anx_results_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_anx_results_reg cvmx_pcsx_anx_results_reg_t;
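+
+/*
+ * Usage sketch (illustrative only): per the note above, the results are
+ * only meaningful when AN_OVRD=0 and AN_CPT=1. The speed encoding shown
+ * here follows the SGMII tx_config convention and is an assumption:
+ *
+ *	cvmx_pcsx_anx_results_reg_t res;
+ *
+ *	res.u64 = cvmx_read_csr(CVMX_PCSX_ANX_RESULTS_REG(index, interface));
+ *	if (res.s.an_cpt && res.s.link_ok) {
+ *		// res.s.spd: 0 = 10 Mbps, 1 = 100 Mbps, 2 = 1000 Mbps
+ *		// res.s.dup: 1 = full duplex, 0 = half duplex
+ *	}
+ */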
+
+/**
+ * cvmx_pcs#_int#_en_reg
+ *
+ * PCS Interrupt Enable Register
+ *
+ */
+union cvmx_pcsx_intx_en_reg {
+	u64 u64;
+	struct cvmx_pcsx_intx_en_reg_s {
+		u64 reserved_13_63 : 51;
+		u64 dbg_sync_en : 1;
+		u64 dup : 1;
+		u64 sync_bad_en : 1;
+		u64 an_bad_en : 1;
+		u64 rxlock_en : 1;
+		u64 rxbad_en : 1;
+		u64 rxerr_en : 1;
+		u64 txbad_en : 1;
+		u64 txfifo_en : 1;
+		u64 txfifu_en : 1;
+		u64 an_err_en : 1;
+		u64 xmit_en : 1;
+		u64 lnkspd_en : 1;
+	} s;
+	struct cvmx_pcsx_intx_en_reg_cn52xx {
+		u64 reserved_12_63 : 52;
+		u64 dup : 1;
+		u64 sync_bad_en : 1;
+		u64 an_bad_en : 1;
+		u64 rxlock_en : 1;
+		u64 rxbad_en : 1;
+		u64 rxerr_en : 1;
+		u64 txbad_en : 1;
+		u64 txfifo_en : 1;
+		u64 txfifu_en : 1;
+		u64 an_err_en : 1;
+		u64 xmit_en : 1;
+		u64 lnkspd_en : 1;
+	} cn52xx;
+	struct cvmx_pcsx_intx_en_reg_cn52xx cn52xxp1;
+	struct cvmx_pcsx_intx_en_reg_cn52xx cn56xx;
+	struct cvmx_pcsx_intx_en_reg_cn52xx cn56xxp1;
+	struct cvmx_pcsx_intx_en_reg_s cn61xx;
+	struct cvmx_pcsx_intx_en_reg_s cn63xx;
+	struct cvmx_pcsx_intx_en_reg_s cn63xxp1;
+	struct cvmx_pcsx_intx_en_reg_s cn66xx;
+	struct cvmx_pcsx_intx_en_reg_s cn68xx;
+	struct cvmx_pcsx_intx_en_reg_s cn68xxp1;
+	struct cvmx_pcsx_intx_en_reg_s cn70xx;
+	struct cvmx_pcsx_intx_en_reg_s cn70xxp1;
+	struct cvmx_pcsx_intx_en_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_intx_en_reg cvmx_pcsx_intx_en_reg_t;
+
+/**
+ * cvmx_pcs#_int#_reg
+ *
+ * PCS Interrupt Register
+ * NOTE: RXERR and TXERR conditions to be discussed with Dan before finalising.
+ * The DBG_SYNC interrupt fires when the code group synchronization state
+ * machine makes a transition from the SYNC_ACQUIRED_1 state to the
+ * SYNC_ACQUIRED_2 state (see IEEE 802.3-2005 figure 37-9). It is an
+ * indication that a bad code group was received after code group
+ * synchronization was achieved. This interrupt should be disabled during
+ * normal link operation. Use it as a debug help feature only.
+ */
+union cvmx_pcsx_intx_reg {
+	u64 u64;
+	struct cvmx_pcsx_intx_reg_s {
+		u64 reserved_13_63 : 51;
+		u64 dbg_sync : 1;
+		u64 dup : 1;
+		u64 sync_bad : 1;
+		u64 an_bad : 1;
+		u64 rxlock : 1;
+		u64 rxbad : 1;
+		u64 rxerr : 1;
+		u64 txbad : 1;
+		u64 txfifo : 1;
+		u64 txfifu : 1;
+		u64 an_err : 1;
+		u64 xmit : 1;
+		u64 lnkspd : 1;
+	} s;
+	struct cvmx_pcsx_intx_reg_cn52xx {
+		u64 reserved_12_63 : 52;
+		u64 dup : 1;
+		u64 sync_bad : 1;
+		u64 an_bad : 1;
+		u64 rxlock : 1;
+		u64 rxbad : 1;
+		u64 rxerr : 1;
+		u64 txbad : 1;
+		u64 txfifo : 1;
+		u64 txfifu : 1;
+		u64 an_err : 1;
+		u64 xmit : 1;
+		u64 lnkspd : 1;
+	} cn52xx;
+	struct cvmx_pcsx_intx_reg_cn52xx cn52xxp1;
+	struct cvmx_pcsx_intx_reg_cn52xx cn56xx;
+	struct cvmx_pcsx_intx_reg_cn52xx cn56xxp1;
+	struct cvmx_pcsx_intx_reg_s cn61xx;
+	struct cvmx_pcsx_intx_reg_s cn63xx;
+	struct cvmx_pcsx_intx_reg_s cn63xxp1;
+	struct cvmx_pcsx_intx_reg_s cn66xx;
+	struct cvmx_pcsx_intx_reg_s cn68xx;
+	struct cvmx_pcsx_intx_reg_s cn68xxp1;
+	struct cvmx_pcsx_intx_reg_s cn70xx;
+	struct cvmx_pcsx_intx_reg_s cn70xxp1;
+	struct cvmx_pcsx_intx_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_intx_reg cvmx_pcsx_intx_reg_t;
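+
+/*
+ * Usage sketch (illustrative only): assuming the usual Octeon
+ * write-1-to-clear semantics for interrupt summary registers, a handler
+ * would read and acknowledge the pending bits like this:
+ *
+ *	u64 isr = cvmx_read_csr(CVMX_PCSX_INTX_REG(index, interface));
+ *
+ *	cvmx_write_csr(CVMX_PCSX_INTX_REG(index, interface), isr);
+ */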
+
+/**
+ * cvmx_pcs#_link#_timer_count_reg
+ *
+ * PCS_LINK_TIMER_COUNT_REG = 1.6ms nominal link timer register
+ *
+ */
+union cvmx_pcsx_linkx_timer_count_reg {
+	u64 u64;
+	struct cvmx_pcsx_linkx_timer_count_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 count : 16;
+	} s;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn52xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn52xxp1;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn56xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn56xxp1;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn61xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn63xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn63xxp1;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn66xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn68xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn68xxp1;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn70xx;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cn70xxp1;
+	struct cvmx_pcsx_linkx_timer_count_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_linkx_timer_count_reg cvmx_pcsx_linkx_timer_count_reg_t;
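+
+/*
+ * Usage sketch (illustrative only): the COUNT field is programmed from the
+ * core clock. Assuming one COUNT tick per 1024 core-clock cycles and the
+ * cvmx_clock_get_rate() helper, the nominal 1.6 ms SGMII link timer is:
+ *
+ *	u64 clock_mhz = cvmx_clock_get_rate(CVMX_CLOCK_CORE) / 1000000;
+ *	cvmx_pcsx_linkx_timer_count_reg_t timer;
+ *
+ *	timer.u64 = 0;
+ *	timer.s.count = (1600ull * clock_mhz) >> 10;	// 1600 us / 1024
+ *	cvmx_write_csr(CVMX_PCSX_LINKX_TIMER_COUNT_REG(index, interface),
+ *		       timer.u64);
+ */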
+
+/**
+ * cvmx_pcs#_log_anl#_reg
+ *
+ * PCS Logic Analyzer Register
+ * NOTE: The logic analyzer is enabled with LA_EN for the specified PCS lane
+ * only. PKT_SZ is effective only when LA_EN=1.
+ * For normal operation (SGMII or 1000Base-X), this bit must be 0.
+ * See pcsx.csr for the XAUI logic analyzer mode.
+ * For a full description see the document at .../rtl/pcs/readme_logic_analyzer.txt
+ */
+union cvmx_pcsx_log_anlx_reg {
+	u64 u64;
+	struct cvmx_pcsx_log_anlx_reg_s {
+		u64 reserved_4_63 : 60;
+		u64 lafifovfl : 1;
+		u64 la_en : 1;
+		u64 pkt_sz : 2;
+	} s;
+	struct cvmx_pcsx_log_anlx_reg_s cn52xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn52xxp1;
+	struct cvmx_pcsx_log_anlx_reg_s cn56xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn56xxp1;
+	struct cvmx_pcsx_log_anlx_reg_s cn61xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn63xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn63xxp1;
+	struct cvmx_pcsx_log_anlx_reg_s cn66xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn68xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn68xxp1;
+	struct cvmx_pcsx_log_anlx_reg_s cn70xx;
+	struct cvmx_pcsx_log_anlx_reg_s cn70xxp1;
+	struct cvmx_pcsx_log_anlx_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_log_anlx_reg cvmx_pcsx_log_anlx_reg_t;
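+
+/*
+ * Usage sketch (illustrative only): per the note above this is a debug
+ * feature and LA_EN must stay 0 in normal operation. The PKT_SZ encoding
+ * is not documented here, so it is left at its reset value:
+ *
+ *	cvmx_pcsx_log_anlx_reg_t la;
+ *
+ *	la.u64 = cvmx_read_csr(CVMX_PCSX_LOG_ANLX_REG(index, interface));
+ *	la.s.la_en = 1;		// capture on this PCS lane only
+ *	cvmx_write_csr(CVMX_PCSX_LOG_ANLX_REG(index, interface), la.u64);
+ */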
+
+/**
+ * cvmx_pcs#_mac_crdt_cnt#_reg
+ *
+ * PCS MAC Credit Count
+ *
+ */
+union cvmx_pcsx_mac_crdt_cntx_reg {
+	u64 u64;
+	struct cvmx_pcsx_mac_crdt_cntx_reg_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_pcsx_mac_crdt_cntx_reg_s cn70xx;
+	struct cvmx_pcsx_mac_crdt_cntx_reg_s cn70xxp1;
+};
+
+typedef union cvmx_pcsx_mac_crdt_cntx_reg cvmx_pcsx_mac_crdt_cntx_reg_t;
+
+/**
+ * cvmx_pcs#_misc#_ctl_reg
+ *
+ * SGMII Misc Control Register
+ * SGMII bit [12] is really a misnomer; it is a decode of the pi_qlm_cfg
+ * pins to indicate SGMII or 1000Base-X modes.
+ * Note on the MODE bit:
+ * When MODE=1, 1000Base-X mode is selected. Auto negotiation follows IEEE
+ * 802.3 clause 37.
+ * When MODE=0, SGMII mode is selected and the following note applies.
+ * Note repeated from the SGM_AN_ADV register:
+ * The SGMII AN Advertisement Register above will be sent during auto
+ * negotiation if the MAC_PHY mode bit in misc_ctl_reg is set (1=PHY mode).
+ * If the bit is not set (0=MAC mode), tx_config_reg[14] becomes the ACK bit
+ * and [0] is always 1. All other bits in tx_config_reg sent will be 0. The
+ * PHY dictates the auto negotiation results.
+ */
+union cvmx_pcsx_miscx_ctl_reg {
+	u64 u64;
+	struct cvmx_pcsx_miscx_ctl_reg_s {
+		u64 reserved_13_63 : 51;
+		u64 sgmii : 1;
+		u64 gmxeno : 1;
+		u64 loopbck2 : 1;
+		u64 mac_phy : 1;
+		u64 mode : 1;
+		u64 an_ovrd : 1;
+		u64 samp_pt : 7;
+	} s;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn52xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn52xxp1;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn56xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn56xxp1;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn61xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn63xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn63xxp1;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn66xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn68xx;
+	struct cvmx_pcsx_miscx_ctl_reg_s cn68xxp1;
+	struct cvmx_pcsx_miscx_ctl_reg_cn70xx {
+		u64 reserved_12_63 : 52;
+		u64 gmxeno : 1;
+		u64 loopbck2 : 1;
+		u64 mac_phy : 1;
+		u64 mode : 1;
+		u64 an_ovrd : 1;
+		u64 samp_pt : 7;
+	} cn70xx;
+	struct cvmx_pcsx_miscx_ctl_reg_cn70xx cn70xxp1;
+	struct cvmx_pcsx_miscx_ctl_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_miscx_ctl_reg cvmx_pcsx_miscx_ctl_reg_t;
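+
+/*
+ * Usage sketch (illustrative only): selecting SGMII MAC mode as described
+ * in the note above, with the link partner PHY dictating the auto
+ * negotiation results:
+ *
+ *	cvmx_pcsx_miscx_ctl_reg_t misc;
+ *
+ *	misc.u64 = cvmx_read_csr(CVMX_PCSX_MISCX_CTL_REG(index, interface));
+ *	misc.s.mode = 0;	// 0 = SGMII, 1 = 1000Base-X
+ *	misc.s.mac_phy = 0;	// 0 = MAC mode, 1 = PHY mode
+ *	cvmx_write_csr(CVMX_PCSX_MISCX_CTL_REG(index, interface), misc.u64);
+ */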
+
+/**
+ * cvmx_pcs#_mr#_control_reg
+ *
+ * NOTE:
+ * Whenever the AN_EN bit [12] is set, auto negotiation is allowed to happen.
+ * The results of the auto negotiation process set the fields in the
+ * AN_RESULTS reg. When AN_EN is not set, the AN_RESULTS reg is don't care;
+ * the effective SPD, DUP etc. get their values from the pcs_mr_ctrl reg.
+ */
+union cvmx_pcsx_mrx_control_reg {
+	u64 u64;
+	struct cvmx_pcsx_mrx_control_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 reset : 1;
+		u64 loopbck1 : 1;
+		u64 spdlsb : 1;
+		u64 an_en : 1;
+		u64 pwr_dn : 1;
+		u64 reserved_10_10 : 1;
+		u64 rst_an : 1;
+		u64 dup : 1;
+		u64 coltst : 1;
+		u64 spdmsb : 1;
+		u64 uni : 1;
+		u64 reserved_0_4 : 5;
+	} s;
+	struct cvmx_pcsx_mrx_control_reg_s cn52xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn52xxp1;
+	struct cvmx_pcsx_mrx_control_reg_s cn56xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn56xxp1;
+	struct cvmx_pcsx_mrx_control_reg_s cn61xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn63xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn63xxp1;
+	struct cvmx_pcsx_mrx_control_reg_s cn66xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn68xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn68xxp1;
+	struct cvmx_pcsx_mrx_control_reg_s cn70xx;
+	struct cvmx_pcsx_mrx_control_reg_s cn70xxp1;
+	struct cvmx_pcsx_mrx_control_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_mrx_control_reg cvmx_pcsx_mrx_control_reg_t;
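+
+/*
+ * Usage sketch (illustrative only): enabling and restarting auto
+ * negotiation as described in the note above; completion can then be
+ * polled via the AN_CPT bit in the AN_RESULTS register:
+ *
+ *	cvmx_pcsx_mrx_control_reg_t ctl;
+ *
+ *	ctl.u64 = cvmx_read_csr(CVMX_PCSX_MRX_CONTROL_REG(index, interface));
+ *	ctl.s.an_en = 1;	// allow auto negotiation to happen
+ *	ctl.s.rst_an = 1;	// restart auto negotiation
+ *	cvmx_write_csr(CVMX_PCSX_MRX_CONTROL_REG(index, interface), ctl.u64);
+ */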
+
+/**
+ * cvmx_pcs#_mr#_status_reg
+ *
+ * Bits [15:9] in the Status Register indicate the ability to operate as
+ * per those signalling specifications when the misc ctl reg MAC_PHY bit is
+ * set to MAC mode. Bits [15:9] always read 1'b0, indicating that the chip
+ * cannot operate in the corresponding modes.
+ * Bit [4] RM_FLT is a don't care when the selected mode is SGMII.
+ */
+union cvmx_pcsx_mrx_status_reg {
+	u64 u64;
+	struct cvmx_pcsx_mrx_status_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 hun_t4 : 1;
+		u64 hun_xfd : 1;
+		u64 hun_xhd : 1;
+		u64 ten_fd : 1;
+		u64 ten_hd : 1;
+		u64 hun_t2fd : 1;
+		u64 hun_t2hd : 1;
+		u64 ext_st : 1;
+		u64 reserved_7_7 : 1;
+		u64 prb_sup : 1;
+		u64 an_cpt : 1;
+		u64 rm_flt : 1;
+		u64 an_abil : 1;
+		u64 lnk_st : 1;
+		u64 reserved_1_1 : 1;
+		u64 extnd : 1;
+	} s;
+	struct cvmx_pcsx_mrx_status_reg_s cn52xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn52xxp1;
+	struct cvmx_pcsx_mrx_status_reg_s cn56xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn56xxp1;
+	struct cvmx_pcsx_mrx_status_reg_s cn61xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn63xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn63xxp1;
+	struct cvmx_pcsx_mrx_status_reg_s cn66xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn68xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn68xxp1;
+	struct cvmx_pcsx_mrx_status_reg_s cn70xx;
+	struct cvmx_pcsx_mrx_status_reg_s cn70xxp1;
+	struct cvmx_pcsx_mrx_status_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_mrx_status_reg cvmx_pcsx_mrx_status_reg_t;
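+
+/*
+ * Usage sketch (illustrative only): assuming LNK_ST latches low on link
+ * loss as in a standard IEEE 802.3 status register, read the register
+ * twice and use the second value for the current link state:
+ *
+ *	cvmx_pcsx_mrx_status_reg_t status;
+ *	int link_up;
+ *
+ *	status.u64 = cvmx_read_csr(CVMX_PCSX_MRX_STATUS_REG(index, interface));
+ *	status.u64 = cvmx_read_csr(CVMX_PCSX_MRX_STATUS_REG(index, interface));
+ *	link_up = status.s.lnk_st;
+ */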
+
+/**
+ * cvmx_pcs#_rx#_states_reg
+ *
+ * PCS_RX_STATES_REG = RX State Machines states register
+ *
+ */
+union cvmx_pcsx_rxx_states_reg {
+	u64 u64;
+	struct cvmx_pcsx_rxx_states_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 rx_bad : 1;
+		u64 rx_st : 5;
+		u64 sync_bad : 1;
+		u64 sync : 4;
+		u64 an_bad : 1;
+		u64 an_st : 4;
+	} s;
+	struct cvmx_pcsx_rxx_states_reg_s cn52xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn52xxp1;
+	struct cvmx_pcsx_rxx_states_reg_s cn56xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn56xxp1;
+	struct cvmx_pcsx_rxx_states_reg_s cn61xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn63xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn63xxp1;
+	struct cvmx_pcsx_rxx_states_reg_s cn66xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn68xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn68xxp1;
+	struct cvmx_pcsx_rxx_states_reg_s cn70xx;
+	struct cvmx_pcsx_rxx_states_reg_s cn70xxp1;
+	struct cvmx_pcsx_rxx_states_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_rxx_states_reg cvmx_pcsx_rxx_states_reg_t;
+
+/**
+ * cvmx_pcs#_rx#_sync_reg
+ *
+ * Note:
+ * r_tx_rx_polarity_reg bit [2] will show the correct polarity needed on the
+ * link receive path after code group synchronization is achieved.
+ *
+ * PCS_RX_SYNC_REG = Code Group synchronization reg
+ */
+union cvmx_pcsx_rxx_sync_reg {
+	u64 u64;
+	struct cvmx_pcsx_rxx_sync_reg_s {
+		u64 reserved_2_63 : 62;
+		u64 sync : 1;
+		u64 bit_lock : 1;
+	} s;
+	struct cvmx_pcsx_rxx_sync_reg_s cn52xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn52xxp1;
+	struct cvmx_pcsx_rxx_sync_reg_s cn56xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn56xxp1;
+	struct cvmx_pcsx_rxx_sync_reg_s cn61xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn63xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn63xxp1;
+	struct cvmx_pcsx_rxx_sync_reg_s cn66xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn68xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn68xxp1;
+	struct cvmx_pcsx_rxx_sync_reg_s cn70xx;
+	struct cvmx_pcsx_rxx_sync_reg_s cn70xxp1;
+	struct cvmx_pcsx_rxx_sync_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_rxx_sync_reg cvmx_pcsx_rxx_sync_reg_t;
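+
+/*
+ * Usage sketch (illustrative only): polling for bit lock and code group
+ * synchronization after bringing a lane up (a real driver would bound
+ * this loop with a timeout):
+ *
+ *	cvmx_pcsx_rxx_sync_reg_t sync;
+ *
+ *	do {
+ *		sync.u64 = cvmx_read_csr(CVMX_PCSX_RXX_SYNC_REG(index, interface));
+ *	} while (!sync.s.bit_lock || !sync.s.sync);
+ */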
+
+/**
+ * cvmx_pcs#_serdes_crdt_cnt#_reg
+ *
+ * PCS SERDES Credit Count
+ *
+ */
+union cvmx_pcsx_serdes_crdt_cntx_reg {
+	u64 u64;
+	struct cvmx_pcsx_serdes_crdt_cntx_reg_s {
+		u64 reserved_5_63 : 59;
+		u64 cnt : 5;
+	} s;
+	struct cvmx_pcsx_serdes_crdt_cntx_reg_s cn70xx;
+	struct cvmx_pcsx_serdes_crdt_cntx_reg_s cn70xxp1;
+};
+
+typedef union cvmx_pcsx_serdes_crdt_cntx_reg cvmx_pcsx_serdes_crdt_cntx_reg_t;
+
+/**
+ * cvmx_pcs#_sgm#_an_adv_reg
+ *
+ * SGMII AN Advertisement Register (sent out as tx_config_reg)
+ * NOTE: This register will be sent during auto negotiation if the MAC_PHY
+ * mode bit in misc_ctl_reg is set (1=PHY mode). If the bit is not set
+ * (0=MAC mode), tx_config_reg[14] becomes the ACK bit and [0] is always 1.
+ * All other bits in tx_config_reg sent will be 0. The PHY dictates the
+ * auto negotiation results.
+ */
+union cvmx_pcsx_sgmx_an_adv_reg {
+	u64 u64;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 link : 1;
+		u64 ack : 1;
+		u64 reserved_13_13 : 1;
+		u64 dup : 1;
+		u64 speed : 2;
+		u64 reserved_1_9 : 9;
+		u64 one : 1;
+	} s;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn52xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn52xxp1;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn56xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn56xxp1;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn61xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn63xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn63xxp1;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn66xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn68xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn68xxp1;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn70xx;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cn70xxp1;
+	struct cvmx_pcsx_sgmx_an_adv_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_sgmx_an_adv_reg cvmx_pcsx_sgmx_an_adv_reg_t;
+
+/**
+ * cvmx_pcs#_sgm#_lp_adv_reg
+ *
+ * SGMII LP Advertisement Register (received as rx_config_reg)
+ *
+ */
+union cvmx_pcsx_sgmx_lp_adv_reg {
+	u64 u64;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s {
+		u64 reserved_16_63 : 48;
+		u64 link : 1;
+		u64 reserved_13_14 : 2;
+		u64 dup : 1;
+		u64 speed : 2;
+		u64 reserved_1_9 : 9;
+		u64 one : 1;
+	} s;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn52xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn52xxp1;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn56xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn56xxp1;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn61xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn63xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn63xxp1;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn66xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn68xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn68xxp1;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn70xx;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cn70xxp1;
+	struct cvmx_pcsx_sgmx_lp_adv_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_sgmx_lp_adv_reg cvmx_pcsx_sgmx_lp_adv_reg_t;
+
+/**
+ * cvmx_pcs#_tx#_states_reg
+ *
+ * PCS_TX_STATES_REG = TX State Machines states register
+ *
+ */
+union cvmx_pcsx_txx_states_reg {
+	u64 u64;
+	struct cvmx_pcsx_txx_states_reg_s {
+		u64 reserved_7_63 : 57;
+		u64 xmit : 2;
+		u64 tx_bad : 1;
+		u64 ord_st : 4;
+	} s;
+	struct cvmx_pcsx_txx_states_reg_s cn52xx;
+	struct cvmx_pcsx_txx_states_reg_s cn52xxp1;
+	struct cvmx_pcsx_txx_states_reg_s cn56xx;
+	struct cvmx_pcsx_txx_states_reg_s cn56xxp1;
+	struct cvmx_pcsx_txx_states_reg_s cn61xx;
+	struct cvmx_pcsx_txx_states_reg_s cn63xx;
+	struct cvmx_pcsx_txx_states_reg_s cn63xxp1;
+	struct cvmx_pcsx_txx_states_reg_s cn66xx;
+	struct cvmx_pcsx_txx_states_reg_s cn68xx;
+	struct cvmx_pcsx_txx_states_reg_s cn68xxp1;
+	struct cvmx_pcsx_txx_states_reg_s cn70xx;
+	struct cvmx_pcsx_txx_states_reg_s cn70xxp1;
+	struct cvmx_pcsx_txx_states_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_txx_states_reg cvmx_pcsx_txx_states_reg_t;
+
+/**
+ * cvmx_pcs#_tx_rx#_polarity_reg
+ *
+ * Note:
+ * r_tx_rx_polarity_reg bit [2] will show the correct polarity needed on the
+ * link receive path after code group synchronization is achieved.
+ */
+union cvmx_pcsx_tx_rxx_polarity_reg {
+	u64 u64;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s {
+		u64 reserved_4_63 : 60;
+		u64 rxovrd : 1;
+		u64 autorxpl : 1;
+		u64 rxplrt : 1;
+		u64 txplrt : 1;
+	} s;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn52xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn52xxp1;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn56xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn56xxp1;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn61xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn63xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn63xxp1;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn66xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn68xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn68xxp1;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn70xx;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cn70xxp1;
+	struct cvmx_pcsx_tx_rxx_polarity_reg_s cnf71xx;
+};
+
+typedef union cvmx_pcsx_tx_rxx_polarity_reg cvmx_pcsx_tx_rxx_polarity_reg_t;
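+
+/*
+ * Usage sketch (illustrative only): once code group synchronization is
+ * achieved, AUTORXPL reports the receive polarity the hardware detected.
+ * Assuming RXOVRD=1 makes RXPLRT take effect, the detected polarity can
+ * be pinned in software like this:
+ *
+ *	cvmx_pcsx_tx_rxx_polarity_reg_t pol;
+ *
+ *	pol.u64 = cvmx_read_csr(CVMX_PCSX_TX_RXX_POLARITY_REG(index, interface));
+ *	pol.s.rxplrt = pol.s.autorxpl;	// adopt the detected polarity
+ *	pol.s.rxovrd = 1;		// override automatic detection
+ *	cvmx_write_csr(CVMX_PCSX_TX_RXX_POLARITY_REG(index, interface),
+ *		       pol.u64);
+ */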
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 20/50] mips: octeon: Add cvmx-pemx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (18 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 19/50] mips: octeon: Add cvmx-pcsx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 21/50] mips: octeon: Add cvmx-pepx-defs.h " Stefan Roese
                   ` (32 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-pemx-defs.h header file from the 2013 U-Boot. It will be
used by the drivers added later to support PCIe and networking on the
MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pemx-defs.h | 2028 +++++++++++++++++
 1 file changed, 2028 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pemx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pemx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pemx-defs.h
new file mode 100644
index 0000000000..9ec7a4b67c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pemx-defs.h
@@ -0,0 +1,2028 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pemx.
+ */
+
+#ifndef __CVMX_PEMX_DEFS_H__
+#define __CVMX_PEMX_DEFS_H__
+
+static inline u64 CVMX_PEMX_BAR1_INDEXX(unsigned long offset, unsigned long block_id)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000A8ull + ((offset) + (block_id) * 0x200000ull) * 8;
+	}
+	return 0x00011800C0000100ull + ((offset) + (block_id) * 0x200000ull) * 8;
+}
+
+static inline u64 CVMX_PEMX_BAR2_MASK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000B0ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000B0ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C00000B0ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C00000B0ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C10000B0ull + (offset) * 0x1000000ull - 16777216 * 1;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000130ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C00000B0ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_BAR_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000A8ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000A8ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C00000A8ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C00000A8ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C10000A8ull + (offset) * 0x1000000ull - 16777216 * 1;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000128ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C00000A8ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_BIST_STATUS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000018ull + (offset) * 0x1000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000018ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000440ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C0000440ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C0000440ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C1000440ull + (offset) * 0x1000000ull - 16777216 * 1;
+	}
+	return 0x00011800C0000440ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_BIST_STATUS2(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000420ull + (offset) * 0x1000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000440ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000420ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_PEMX_CFG(offset)		(0x00011800C0000410ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CFG_RD(offset)	(0x00011800C0000030ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CFG_WR(offset)	(0x00011800C0000028ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CLK_EN(offset)	(0x00011800C0000400ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CPL_LUT_VALID(offset) (0x00011800C0000098ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CTL_STATUS(offset)	(0x00011800C0000000ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_CTL_STATUS2(offset)	(0x00011800C0000008ull + ((offset) & 3) * 0x1000000ull)
+static inline u64 CVMX_PEMX_DBG_INFO(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000D0ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000D0ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C00000D0ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C00000D0ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C10000D0ull + (offset) * 0x1000000ull - 16777216 * 1;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000008ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C00000D0ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_PEMX_DBG_INFO_EN(offset) (0x00011800C00000A0ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_DIAG_STATUS(offset) (0x00011800C0000020ull + ((offset) & 3) * 0x1000000ull)
+static inline u64 CVMX_PEMX_ECC_ENA(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000448ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C0000448ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C0000448ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C1000448ull + (offset) * 0x1000000ull - 16777216 * 1;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000C0ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000448ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_ECC_SYND_CTRL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000450ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C0000450ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C0000450ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C1000450ull + (offset) * 0x1000000ull - 16777216 * 1;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000C8ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000450ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_PEMX_ECO(offset)		     (0x00011800C0000010ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_FLR_GLBLCNT_CTL(offset)    (0x00011800C0000210ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_FLR_PF0_VF_STOPREQ(offset) (0x00011800C0000220ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_FLR_PF_STOPREQ(offset)     (0x00011800C0000218ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_FLR_STOPREQ_CTL(offset)    (0x00011800C0000238ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_FLR_ZOMBIE_CTL(offset)     (0x00011800C0000230ull + ((offset) & 3) * 0x1000000ull)
+static inline u64 CVMX_PEMX_INB_READ_CREDITS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000B8ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C00000B8ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C00000B8ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C00000B8ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C10000B8ull + (offset) * 0x1000000ull - 0x1000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000138ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C00000B8ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_INT_ENB(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000410ull + (offset) * 0x1000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000430ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000410ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_INT_ENB_INT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000418ull + (offset) * 0x1000000ull;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000438ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000418ull + (offset) * 0x1000000ull;
+}
+
+static inline u64 CVMX_PEMX_INT_SUM(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000428ull + (offset) * 0x1000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000428ull + (offset) * 0x1000000ull;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011800C0000428ull + (offset) * 0x1000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011800C0000428ull + (offset) * 0x1000000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C1000428ull + (offset) * 0x1000000ull - 0x1000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011800C0000408ull + (offset) * 0x1000000ull;
+	}
+	return 0x00011800C0000428ull + (offset) * 0x1000000ull;
+}
+
+#define CVMX_PEMX_ON(offset)		 (0x00011800C0000420ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_P2N_BAR0_START(offset) (0x00011800C0000080ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_P2N_BAR1_START(offset) (0x00011800C0000088ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_P2N_BAR2_START(offset) (0x00011800C0000090ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_P2P_BARX_END(offset, block_id)                                                   \
+	(0x00011800C0000048ull + (((offset) & 3) + ((block_id) & 3) * 0x100000ull) * 16)
+#define CVMX_PEMX_P2P_BARX_START(offset, block_id)                                                 \
+	(0x00011800C0000040ull + (((offset) & 3) + ((block_id) & 3) * 0x100000ull) * 16)
+#define CVMX_PEMX_QLM(offset)	      (0x00011800C0000418ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_SPI_CTL(offset)     (0x00011800C0000180ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_SPI_DATA(offset)    (0x00011800C0000188ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_STRAP(offset)	      (0x00011800C0000408ull + ((offset) & 3) * 0x1000000ull)
+#define CVMX_PEMX_TLP_CREDITS(offset) (0x00011800C0000038ull + ((offset) & 3) * 0x1000000ull)
+
+/**
+ * cvmx_pem#_bar1_index#
+ *
+ * This register contains the address index and control bits for access to memory ranges of BAR1.
+ * The index is built from the supplied address bits [25:22].
+ */
+union cvmx_pemx_bar1_indexx {
+	u64 u64;
+	struct cvmx_pemx_bar1_indexx_s {
+		u64 reserved_24_63 : 40;
+		u64 addr_idx : 20;
+		u64 ca : 1;
+		u64 end_swp : 2;
+		u64 addr_v : 1;
+	} s;
+	struct cvmx_pemx_bar1_indexx_cn61xx {
+		u64 reserved_20_63 : 44;
+		u64 addr_idx : 16;
+		u64 ca : 1;
+		u64 end_swp : 2;
+		u64 addr_v : 1;
+	} cn61xx;
+	struct cvmx_pemx_bar1_indexx_cn61xx cn63xx;
+	struct cvmx_pemx_bar1_indexx_cn61xx cn63xxp1;
+	struct cvmx_pemx_bar1_indexx_cn61xx cn66xx;
+	struct cvmx_pemx_bar1_indexx_cn61xx cn68xx;
+	struct cvmx_pemx_bar1_indexx_cn61xx cn68xxp1;
+	struct cvmx_pemx_bar1_indexx_s cn70xx;
+	struct cvmx_pemx_bar1_indexx_s cn70xxp1;
+	struct cvmx_pemx_bar1_indexx_s cn73xx;
+	struct cvmx_pemx_bar1_indexx_s cn78xx;
+	struct cvmx_pemx_bar1_indexx_s cn78xxp1;
+	struct cvmx_pemx_bar1_indexx_cn61xx cnf71xx;
+	struct cvmx_pemx_bar1_indexx_s cnf75xx;
+};
+
+typedef union cvmx_pemx_bar1_indexx cvmx_pemx_bar1_indexx_t;
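+
+/*
+ * Minimal usage sketch: program one BAR1 index entry so that inbound BAR1
+ * address bits [25:22] equal to 'entry' hit the 4 MB window starting at
+ * 'base_addr'. Assumes the CVMX_PEMX_BAR1_INDEXX(offset, block_id) address
+ * macro defined earlier in this file and the generic CSR accessor
+ * cvmx_write_csr() (the U-Boot port may spell these csr_rd()/csr_wr()).
+ */
+static inline void cvmx_pemx_bar1_map_sketch(int pem, int entry, u64 base_addr)
+{
+	cvmx_pemx_bar1_indexx_t idx;
+
+	idx.u64 = 0;
+	idx.s.addr_idx = base_addr >> 22;	/* target, in 4 MB granules */
+	idx.s.ca = 0;				/* cacheable-access attribute */
+	idx.s.end_swp = 1;			/* endian-swap mode, per HRM */
+	idx.s.addr_v = 1;			/* mark the entry valid */
+	cvmx_write_csr(CVMX_PEMX_BAR1_INDEXX(entry, pem), idx.u64);
+}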
+
+/**
+ * cvmx_pem#_bar2_mask
+ *
+ * This register contains the mask pattern that is ANDed with the address from the PCIe core for
+ * BAR2 hits. This allows the effective size of RC BAR2 to be shrunk. It must
+ * not be changed from its reset value in EP mode.
+ */
+union cvmx_pemx_bar2_mask {
+	u64 u64;
+	struct cvmx_pemx_bar2_mask_s {
+		u64 reserved_45_63 : 19;
+		u64 mask : 42;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_pemx_bar2_mask_cn61xx {
+		u64 reserved_38_63 : 26;
+		u64 mask : 35;
+		u64 reserved_0_2 : 3;
+	} cn61xx;
+	struct cvmx_pemx_bar2_mask_cn61xx cn66xx;
+	struct cvmx_pemx_bar2_mask_cn61xx cn68xx;
+	struct cvmx_pemx_bar2_mask_cn61xx cn68xxp1;
+	struct cvmx_pemx_bar2_mask_cn61xx cn70xx;
+	struct cvmx_pemx_bar2_mask_cn61xx cn70xxp1;
+	struct cvmx_pemx_bar2_mask_cn73xx {
+		u64 reserved_42_63 : 22;
+		u64 mask : 39;
+		u64 reserved_0_2 : 3;
+	} cn73xx;
+	struct cvmx_pemx_bar2_mask_s cn78xx;
+	struct cvmx_pemx_bar2_mask_cn73xx cn78xxp1;
+	struct cvmx_pemx_bar2_mask_cn61xx cnf71xx;
+	struct cvmx_pemx_bar2_mask_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_bar2_mask cvmx_pemx_bar2_mask_t;
+
+/**
+ * cvmx_pem#_bar_ctl
+ *
+ * This register contains control for BAR accesses.
+ *
+ */
+union cvmx_pemx_bar_ctl {
+	u64 u64;
+	struct cvmx_pemx_bar_ctl_s {
+		u64 reserved_7_63 : 57;
+		u64 bar1_siz : 3;
+		u64 bar2_enb : 1;
+		u64 bar2_esx : 2;
+		u64 bar2_cax : 1;
+	} s;
+	struct cvmx_pemx_bar_ctl_s cn61xx;
+	struct cvmx_pemx_bar_ctl_s cn63xx;
+	struct cvmx_pemx_bar_ctl_s cn63xxp1;
+	struct cvmx_pemx_bar_ctl_s cn66xx;
+	struct cvmx_pemx_bar_ctl_s cn68xx;
+	struct cvmx_pemx_bar_ctl_s cn68xxp1;
+	struct cvmx_pemx_bar_ctl_s cn70xx;
+	struct cvmx_pemx_bar_ctl_s cn70xxp1;
+	struct cvmx_pemx_bar_ctl_s cn73xx;
+	struct cvmx_pemx_bar_ctl_s cn78xx;
+	struct cvmx_pemx_bar_ctl_s cn78xxp1;
+	struct cvmx_pemx_bar_ctl_s cnf71xx;
+	struct cvmx_pemx_bar_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pemx_bar_ctl cvmx_pemx_bar_ctl_t;
+
+/**
+ * cvmx_pem#_bist_status
+ *
+ * This register contains results from BIST runs of PEM's memories.
+ *
+ */
+union cvmx_pemx_bist_status {
+	u64 u64;
+	struct cvmx_pemx_bist_status_s {
+		u64 reserved_16_63 : 48;
+		u64 retryc : 1;
+		u64 reserved_14_14 : 1;
+		u64 rqhdrb0 : 1;
+		u64 rqhdrb1 : 1;
+		u64 rqdatab0 : 1;
+		u64 rqdatab1 : 1;
+		u64 tlpn_d0 : 1;
+		u64 tlpn_d1 : 1;
+		u64 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pemx_bist_status_cn61xx {
+		u64 reserved_8_63 : 56;
+		u64 retry : 1;
+		u64 rqdata0 : 1;
+		u64 rqdata1 : 1;
+		u64 rqdata2 : 1;
+		u64 rqdata3 : 1;
+		u64 rqhdr1 : 1;
+		u64 rqhdr0 : 1;
+		u64 sot : 1;
+	} cn61xx;
+	struct cvmx_pemx_bist_status_cn61xx cn63xx;
+	struct cvmx_pemx_bist_status_cn61xx cn63xxp1;
+	struct cvmx_pemx_bist_status_cn61xx cn66xx;
+	struct cvmx_pemx_bist_status_cn61xx cn68xx;
+	struct cvmx_pemx_bist_status_cn61xx cn68xxp1;
+	struct cvmx_pemx_bist_status_cn70xx {
+		u64 reserved_6_63 : 58;
+		u64 retry : 1;
+		u64 sot : 1;
+		u64 rqhdr0 : 1;
+		u64 rqhdr1 : 1;
+		u64 rqdata0 : 1;
+		u64 rqdata1 : 1;
+	} cn70xx;
+	struct cvmx_pemx_bist_status_cn70xx cn70xxp1;
+	struct cvmx_pemx_bist_status_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 retryc : 1;
+		u64 sot : 1;
+		u64 rqhdrb0 : 1;
+		u64 rqhdrb1 : 1;
+		u64 rqdatab0 : 1;
+		u64 rqdatab1 : 1;
+		u64 tlpn_d0 : 1;
+		u64 tlpn_d1 : 1;
+		u64 tlpn_ctl : 1;
+		u64 tlpp_d0 : 1;
+		u64 tlpp_d1 : 1;
+		u64 tlpp_ctl : 1;
+		u64 tlpc_d0 : 1;
+		u64 tlpc_d1 : 1;
+		u64 tlpc_ctl : 1;
+		u64 m2s : 1;
+	} cn73xx;
+	struct cvmx_pemx_bist_status_cn73xx cn78xx;
+	struct cvmx_pemx_bist_status_cn73xx cn78xxp1;
+	struct cvmx_pemx_bist_status_cn61xx cnf71xx;
+	struct cvmx_pemx_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_bist_status cvmx_pemx_bist_status_t;
+
+/**
+ * cvmx_pem#_bist_status2
+ *
+ * "PEM#_BIST_STATUS2 = PEM BIST Status Register
+ * Results from BIST runs of PEM's memories."
+ */
+union cvmx_pemx_bist_status2 {
+	u64 u64;
+	struct cvmx_pemx_bist_status2_s {
+		u64 reserved_13_63 : 51;
+		u64 tlpn_d : 1;
+		u64 tlpn_ctl : 1;
+		u64 tlpp_d : 1;
+		u64 reserved_0_9 : 10;
+	} s;
+	struct cvmx_pemx_bist_status2_cn61xx {
+		u64 reserved_10_63 : 54;
+		u64 e2p_cpl : 1;
+		u64 e2p_n : 1;
+		u64 e2p_p : 1;
+		u64 peai_p2e : 1;
+		u64 pef_tpf1 : 1;
+		u64 pef_tpf0 : 1;
+		u64 pef_tnf : 1;
+		u64 pef_tcf1 : 1;
+		u64 pef_tc0 : 1;
+		u64 ppf : 1;
+	} cn61xx;
+	struct cvmx_pemx_bist_status2_cn61xx cn63xx;
+	struct cvmx_pemx_bist_status2_cn61xx cn63xxp1;
+	struct cvmx_pemx_bist_status2_cn61xx cn66xx;
+	struct cvmx_pemx_bist_status2_cn61xx cn68xx;
+	struct cvmx_pemx_bist_status2_cn61xx cn68xxp1;
+	struct cvmx_pemx_bist_status2_cn70xx {
+		u64 reserved_14_63 : 50;
+		u64 peai_p2e : 1;
+		u64 tlpn_d : 1;
+		u64 tlpn_ctl : 1;
+		u64 tlpp_d : 1;
+		u64 tlpp_ctl : 1;
+		u64 tlpc_d : 1;
+		u64 tlpc_ctl : 1;
+		u64 tlpan_d : 1;
+		u64 tlpan_ctl : 1;
+		u64 tlpap_d : 1;
+		u64 tlpap_ctl : 1;
+		u64 tlpac_d : 1;
+		u64 tlpac_ctl : 1;
+		u64 m2s : 1;
+	} cn70xx;
+	struct cvmx_pemx_bist_status2_cn70xx cn70xxp1;
+	struct cvmx_pemx_bist_status2_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pemx_bist_status2 cvmx_pemx_bist_status2_t;
+
+/**
+ * cvmx_pem#_cfg
+ *
+ * Configuration of the PCIe Application.
+ *
+ */
+union cvmx_pemx_cfg {
+	u64 u64;
+	struct cvmx_pemx_cfg_s {
+		u64 reserved_5_63 : 59;
+		u64 laneswap : 1;
+		u64 reserved_2_3 : 2;
+		u64 md : 2;
+	} s;
+	struct cvmx_pemx_cfg_cn70xx {
+		u64 reserved_5_63 : 59;
+		u64 laneswap : 1;
+		u64 hostmd : 1;
+		u64 md : 3;
+	} cn70xx;
+	struct cvmx_pemx_cfg_cn70xx cn70xxp1;
+	struct cvmx_pemx_cfg_cn73xx {
+		u64 reserved_5_63 : 59;
+		u64 laneswap : 1;
+		u64 lanes8 : 1;
+		u64 hostmd : 1;
+		u64 md : 2;
+	} cn73xx;
+	struct cvmx_pemx_cfg_cn73xx cn78xx;
+	struct cvmx_pemx_cfg_cn73xx cn78xxp1;
+	struct cvmx_pemx_cfg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_cfg cvmx_pemx_cfg_t;
+
+/**
+ * cvmx_pem#_cfg_rd
+ *
+ * This register allows read access to the configuration in the PCIe core.
+ *
+ */
+union cvmx_pemx_cfg_rd {
+	u64 u64;
+	struct cvmx_pemx_cfg_rd_s {
+		u64 data : 32;
+		u64 addr : 32;
+	} s;
+	struct cvmx_pemx_cfg_rd_s cn61xx;
+	struct cvmx_pemx_cfg_rd_s cn63xx;
+	struct cvmx_pemx_cfg_rd_s cn63xxp1;
+	struct cvmx_pemx_cfg_rd_s cn66xx;
+	struct cvmx_pemx_cfg_rd_s cn68xx;
+	struct cvmx_pemx_cfg_rd_s cn68xxp1;
+	struct cvmx_pemx_cfg_rd_s cn70xx;
+	struct cvmx_pemx_cfg_rd_s cn70xxp1;
+	struct cvmx_pemx_cfg_rd_s cn73xx;
+	struct cvmx_pemx_cfg_rd_s cn78xx;
+	struct cvmx_pemx_cfg_rd_s cn78xxp1;
+	struct cvmx_pemx_cfg_rd_s cnf71xx;
+	struct cvmx_pemx_cfg_rd_s cnf75xx;
+};
+
+typedef union cvmx_pemx_cfg_rd cvmx_pemx_cfg_rd_t;
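+
+/*
+ * Usage sketch: a config-space read through this register is a two-step
+ * sequence - write the target offset into ADDR, then read the register back
+ * and pick up DATA. Assumes the CVMX_PEMX_CFG_RD() address macro defined
+ * earlier in this file and the cvmx_read_csr()/cvmx_write_csr() accessors.
+ */
+static inline u32 cvmx_pemx_cfg_read32_sketch(int pem, u32 cfg_offset)
+{
+	cvmx_pemx_cfg_rd_t cfg_rd;
+
+	cfg_rd.u64 = 0;
+	cfg_rd.s.addr = cfg_offset;	/* config register to read */
+	cvmx_write_csr(CVMX_PEMX_CFG_RD(pem), cfg_rd.u64);
+	cfg_rd.u64 = cvmx_read_csr(CVMX_PEMX_CFG_RD(pem));
+	return cfg_rd.s.data;		/* value returned by the core */
+}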
+
+/**
+ * cvmx_pem#_cfg_wr
+ *
+ * This register allows write access to the configuration in the PCIe core.
+ *
+ */
+union cvmx_pemx_cfg_wr {
+	u64 u64;
+	struct cvmx_pemx_cfg_wr_s {
+		u64 data : 32;
+		u64 addr : 32;
+	} s;
+	struct cvmx_pemx_cfg_wr_s cn61xx;
+	struct cvmx_pemx_cfg_wr_s cn63xx;
+	struct cvmx_pemx_cfg_wr_s cn63xxp1;
+	struct cvmx_pemx_cfg_wr_s cn66xx;
+	struct cvmx_pemx_cfg_wr_s cn68xx;
+	struct cvmx_pemx_cfg_wr_s cn68xxp1;
+	struct cvmx_pemx_cfg_wr_s cn70xx;
+	struct cvmx_pemx_cfg_wr_s cn70xxp1;
+	struct cvmx_pemx_cfg_wr_s cn73xx;
+	struct cvmx_pemx_cfg_wr_s cn78xx;
+	struct cvmx_pemx_cfg_wr_s cn78xxp1;
+	struct cvmx_pemx_cfg_wr_s cnf71xx;
+	struct cvmx_pemx_cfg_wr_s cnf75xx;
+};
+
+typedef union cvmx_pemx_cfg_wr cvmx_pemx_cfg_wr_t;
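+
+/*
+ * Usage sketch: a config-space write is a single CSR write carrying both the
+ * target offset and the data. Assumes the CVMX_PEMX_CFG_WR() address macro
+ * defined earlier in this file and the cvmx_write_csr() accessor.
+ */
+static inline void cvmx_pemx_cfg_write32_sketch(int pem, u32 cfg_offset, u32 val)
+{
+	cvmx_pemx_cfg_wr_t cfg_wr;
+
+	cfg_wr.u64 = 0;
+	cfg_wr.s.addr = cfg_offset;	/* config register to write */
+	cfg_wr.s.data = val;		/* value pushed into the core */
+	cvmx_write_csr(CVMX_PEMX_CFG_WR(pem), cfg_wr.u64);
+}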
+
+/**
+ * cvmx_pem#_clk_en
+ *
+ * This register contains the clock enable for ECLK and PCE_CLK.
+ *
+ */
+union cvmx_pemx_clk_en {
+	u64 u64;
+	struct cvmx_pemx_clk_en_s {
+		u64 reserved_2_63 : 62;
+		u64 pceclk_gate : 1;
+		u64 csclk_gate : 1;
+	} s;
+	struct cvmx_pemx_clk_en_s cn70xx;
+	struct cvmx_pemx_clk_en_s cn70xxp1;
+	struct cvmx_pemx_clk_en_s cn73xx;
+	struct cvmx_pemx_clk_en_s cn78xx;
+	struct cvmx_pemx_clk_en_s cn78xxp1;
+	struct cvmx_pemx_clk_en_s cnf75xx;
+};
+
+typedef union cvmx_pemx_clk_en cvmx_pemx_clk_en_t;
+
+/**
+ * cvmx_pem#_cpl_lut_valid
+ *
+ * This register specifies the bit set for an outstanding tag read.
+ *
+ */
+union cvmx_pemx_cpl_lut_valid {
+	u64 u64;
+	struct cvmx_pemx_cpl_lut_valid_s {
+		u64 tag : 64;
+	} s;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 tag : 32;
+	} cn61xx;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn63xx;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn63xxp1;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn66xx;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn68xx;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn68xxp1;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn70xx;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cn70xxp1;
+	struct cvmx_pemx_cpl_lut_valid_s cn73xx;
+	struct cvmx_pemx_cpl_lut_valid_s cn78xx;
+	struct cvmx_pemx_cpl_lut_valid_s cn78xxp1;
+	struct cvmx_pemx_cpl_lut_valid_cn61xx cnf71xx;
+	struct cvmx_pemx_cpl_lut_valid_s cnf75xx;
+};
+
+typedef union cvmx_pemx_cpl_lut_valid cvmx_pemx_cpl_lut_valid_t;
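+
+/*
+ * Usage sketch: with one valid bit per outstanding read tag, the number of
+ * in-flight reads is a population count. Assumes the
+ * CVMX_PEMX_CPL_LUT_VALID() address macro defined earlier in this file and
+ * the cvmx_read_csr() accessor.
+ */
+static inline int cvmx_pemx_outstanding_reads_sketch(int pem)
+{
+	cvmx_pemx_cpl_lut_valid_t lut;
+
+	lut.u64 = cvmx_read_csr(CVMX_PEMX_CPL_LUT_VALID(pem));
+	return __builtin_popcountll(lut.s.tag);	/* one bit per open tag */
+}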
+
+/**
+ * cvmx_pem#_ctl_status
+ *
+ * This is a general control and status register of the PEM.
+ *
+ */
+union cvmx_pemx_ctl_status {
+	u64 u64;
+	struct cvmx_pemx_ctl_status_s {
+		u64 reserved_51_63 : 13;
+		u64 inv_dpar : 1;
+		u64 inv_hpar : 1;
+		u64 inv_rpar : 1;
+		u64 auto_sd : 1;
+		u64 dnum : 5;
+		u64 pbus : 8;
+		u64 reserved_32_33 : 2;
+		u64 cfg_rtry : 16;
+		u64 reserved_12_15 : 4;
+		u64 pm_xtoff : 1;
+		u64 pm_xpme : 1;
+		u64 ob_p_cmd : 1;
+		u64 reserved_7_8 : 2;
+		u64 nf_ecrc : 1;
+		u64 dly_one : 1;
+		u64 lnk_enb : 1;
+		u64 ro_ctlp : 1;
+		u64 fast_lm : 1;
+		u64 inv_ecrc : 1;
+		u64 inv_lcrc : 1;
+	} s;
+	struct cvmx_pemx_ctl_status_cn61xx {
+		u64 reserved_48_63 : 16;
+		u64 auto_sd : 1;
+		u64 dnum : 5;
+		u64 pbus : 8;
+		u64 reserved_32_33 : 2;
+		u64 cfg_rtry : 16;
+		u64 reserved_12_15 : 4;
+		u64 pm_xtoff : 1;
+		u64 pm_xpme : 1;
+		u64 ob_p_cmd : 1;
+		u64 reserved_7_8 : 2;
+		u64 nf_ecrc : 1;
+		u64 dly_one : 1;
+		u64 lnk_enb : 1;
+		u64 ro_ctlp : 1;
+		u64 fast_lm : 1;
+		u64 inv_ecrc : 1;
+		u64 inv_lcrc : 1;
+	} cn61xx;
+	struct cvmx_pemx_ctl_status_cn61xx cn63xx;
+	struct cvmx_pemx_ctl_status_cn61xx cn63xxp1;
+	struct cvmx_pemx_ctl_status_cn61xx cn66xx;
+	struct cvmx_pemx_ctl_status_cn61xx cn68xx;
+	struct cvmx_pemx_ctl_status_cn61xx cn68xxp1;
+	struct cvmx_pemx_ctl_status_s cn70xx;
+	struct cvmx_pemx_ctl_status_s cn70xxp1;
+	struct cvmx_pemx_ctl_status_cn73xx {
+		u64 reserved_51_63 : 13;
+		u64 inv_dpar : 1;
+		u64 reserved_48_49 : 2;
+		u64 auto_sd : 1;
+		u64 dnum : 5;
+		u64 pbus : 8;
+		u64 reserved_32_33 : 2;
+		u64 cfg_rtry : 16;
+		u64 reserved_12_15 : 4;
+		u64 pm_xtoff : 1;
+		u64 pm_xpme : 1;
+		u64 ob_p_cmd : 1;
+		u64 reserved_7_8 : 2;
+		u64 nf_ecrc : 1;
+		u64 dly_one : 1;
+		u64 lnk_enb : 1;
+		u64 ro_ctlp : 1;
+		u64 fast_lm : 1;
+		u64 inv_ecrc : 1;
+		u64 inv_lcrc : 1;
+	} cn73xx;
+	struct cvmx_pemx_ctl_status_cn73xx cn78xx;
+	struct cvmx_pemx_ctl_status_cn73xx cn78xxp1;
+	struct cvmx_pemx_ctl_status_cn61xx cnf71xx;
+	struct cvmx_pemx_ctl_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_ctl_status cvmx_pemx_ctl_status_t;
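+
+/*
+ * Usage sketch: most fields here are read-modify-write; e.g. setting LNK_ENB
+ * lets link training proceed. Assumes the CVMX_PEMX_CTL_STATUS() address
+ * macro defined earlier in this file and cvmx_read_csr()/cvmx_write_csr().
+ */
+static inline void cvmx_pemx_link_enable_sketch(int pem)
+{
+	cvmx_pemx_ctl_status_t ctl;
+
+	ctl.u64 = cvmx_read_csr(CVMX_PEMX_CTL_STATUS(pem));
+	ctl.s.lnk_enb = 1;	/* allow the link to come up */
+	cvmx_write_csr(CVMX_PEMX_CTL_STATUS(pem), ctl.u64);
+}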
+
+/**
+ * cvmx_pem#_ctl_status2
+ *
+ * This register contains additional general control and status of the PEM.
+ *
+ */
+union cvmx_pemx_ctl_status2 {
+	u64 u64;
+	struct cvmx_pemx_ctl_status2_s {
+		u64 reserved_16_63 : 48;
+		u64 no_fwd_prg : 16;
+	} s;
+	struct cvmx_pemx_ctl_status2_s cn73xx;
+	struct cvmx_pemx_ctl_status2_s cn78xx;
+	struct cvmx_pemx_ctl_status2_s cn78xxp1;
+	struct cvmx_pemx_ctl_status2_s cnf75xx;
+};
+
+typedef union cvmx_pemx_ctl_status2 cvmx_pemx_ctl_status2_t;
+
+/**
+ * cvmx_pem#_dbg_info
+ *
+ * This is a debug information register of the PEM.
+ *
+ */
+union cvmx_pemx_dbg_info {
+	u64 u64;
+	struct cvmx_pemx_dbg_info_s {
+		u64 reserved_62_63 : 2;
+		u64 m2s_c_dbe : 1;
+		u64 m2s_c_sbe : 1;
+		u64 m2s_d_dbe : 1;
+		u64 m2s_d_sbe : 1;
+		u64 qhdr_b1_dbe : 1;
+		u64 qhdr_b1_sbe : 1;
+		u64 qhdr_b0_dbe : 1;
+		u64 qhdr_b0_sbe : 1;
+		u64 rtry_dbe : 1;
+		u64 rtry_sbe : 1;
+		u64 reserved_50_51 : 2;
+		u64 c_d1_dbe : 1;
+		u64 c_d1_sbe : 1;
+		u64 c_d0_dbe : 1;
+		u64 c_d0_sbe : 1;
+		u64 reserved_34_45 : 12;
+		u64 datq_pe : 1;
+		u64 reserved_31_32 : 2;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} s;
+	struct cvmx_pemx_dbg_info_cn61xx {
+		u64 reserved_31_63 : 33;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} cn61xx;
+	struct cvmx_pemx_dbg_info_cn61xx cn63xx;
+	struct cvmx_pemx_dbg_info_cn61xx cn63xxp1;
+	struct cvmx_pemx_dbg_info_cn61xx cn66xx;
+	struct cvmx_pemx_dbg_info_cn61xx cn68xx;
+	struct cvmx_pemx_dbg_info_cn61xx cn68xxp1;
+	struct cvmx_pemx_dbg_info_cn70xx {
+		u64 reserved_46_63 : 18;
+		u64 c_c_dbe : 1;
+		u64 c_c_sbe : 1;
+		u64 c_d_dbe : 1;
+		u64 c_d_sbe : 1;
+		u64 n_c_dbe : 1;
+		u64 n_c_sbe : 1;
+		u64 n_d_dbe : 1;
+		u64 n_d_sbe : 1;
+		u64 p_c_dbe : 1;
+		u64 p_c_sbe : 1;
+		u64 p_d_dbe : 1;
+		u64 p_d_sbe : 1;
+		u64 datq_pe : 1;
+		u64 hdrq_pe : 1;
+		u64 rtry_pe : 1;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} cn70xx;
+	struct cvmx_pemx_dbg_info_cn70xx cn70xxp1;
+	struct cvmx_pemx_dbg_info_cn73xx {
+		u64 reserved_62_63 : 2;
+		u64 m2s_c_dbe : 1;
+		u64 m2s_c_sbe : 1;
+		u64 m2s_d_dbe : 1;
+		u64 m2s_d_sbe : 1;
+		u64 qhdr_b1_dbe : 1;
+		u64 qhdr_b1_sbe : 1;
+		u64 qhdr_b0_dbe : 1;
+		u64 qhdr_b0_sbe : 1;
+		u64 rtry_dbe : 1;
+		u64 rtry_sbe : 1;
+		u64 c_c_dbe : 1;
+		u64 c_c_sbe : 1;
+		u64 c_d1_dbe : 1;
+		u64 c_d1_sbe : 1;
+		u64 c_d0_dbe : 1;
+		u64 c_d0_sbe : 1;
+		u64 n_c_dbe : 1;
+		u64 n_c_sbe : 1;
+		u64 n_d1_dbe : 1;
+		u64 n_d1_sbe : 1;
+		u64 n_d0_dbe : 1;
+		u64 n_d0_sbe : 1;
+		u64 p_c_dbe : 1;
+		u64 p_c_sbe : 1;
+		u64 p_d1_dbe : 1;
+		u64 p_d1_sbe : 1;
+		u64 p_d0_dbe : 1;
+		u64 p_d0_sbe : 1;
+		u64 datq_pe : 1;
+		u64 bmd_e : 1;
+		u64 lofp : 1;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} cn73xx;
+	struct cvmx_pemx_dbg_info_cn73xx cn78xx;
+	struct cvmx_pemx_dbg_info_cn78xxp1 {
+		u64 reserved_58_63 : 6;
+		u64 qhdr_b1_dbe : 1;
+		u64 qhdr_b1_sbe : 1;
+		u64 qhdr_b0_dbe : 1;
+		u64 qhdr_b0_sbe : 1;
+		u64 rtry_dbe : 1;
+		u64 rtry_sbe : 1;
+		u64 c_c_dbe : 1;
+		u64 c_c_sbe : 1;
+		u64 c_d1_dbe : 1;
+		u64 c_d1_sbe : 1;
+		u64 c_d0_dbe : 1;
+		u64 c_d0_sbe : 1;
+		u64 n_c_dbe : 1;
+		u64 n_c_sbe : 1;
+		u64 n_d1_dbe : 1;
+		u64 n_d1_sbe : 1;
+		u64 n_d0_dbe : 1;
+		u64 n_d0_sbe : 1;
+		u64 p_c_dbe : 1;
+		u64 p_c_sbe : 1;
+		u64 p_d1_dbe : 1;
+		u64 p_d1_sbe : 1;
+		u64 p_d0_dbe : 1;
+		u64 p_d0_sbe : 1;
+		u64 datq_pe : 1;
+		u64 reserved_32_32 : 1;
+		u64 lofp : 1;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} cn78xxp1;
+	struct cvmx_pemx_dbg_info_cn61xx cnf71xx;
+	struct cvmx_pemx_dbg_info_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_dbg_info cvmx_pemx_dbg_info_t;
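+
+/*
+ * Usage sketch: the debug-info bits record error events seen by the PEM; a
+ * driver can snapshot the register and test individual flags. Assumes the
+ * CVMX_PEMX_DBG_INFO() address macro defined earlier in this file and the
+ * cvmx_read_csr() accessor.
+ */
+static inline int cvmx_pemx_saw_poisoned_tlp_sketch(int pem)
+{
+	cvmx_pemx_dbg_info_t dbg;
+
+	dbg.u64 = cvmx_read_csr(CVMX_PEMX_DBG_INFO(pem));
+	return dbg.s.rpoison;	/* poisoned TLP received */
+}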
+
+/**
+ * cvmx_pem#_dbg_info_en
+ *
+ * "PEM#_DBG_INFO_EN = PEM Debug Information Enable
+ * Allows PEM_DBG_INFO to generate interrupts when the corresponding enable bit is set."
+ */
+union cvmx_pemx_dbg_info_en {
+	u64 u64;
+	struct cvmx_pemx_dbg_info_en_s {
+		u64 reserved_46_63 : 18;
+		u64 tpcdbe1 : 1;
+		u64 tpcsbe1 : 1;
+		u64 tpcdbe0 : 1;
+		u64 tpcsbe0 : 1;
+		u64 tnfdbe1 : 1;
+		u64 tnfsbe1 : 1;
+		u64 tnfdbe0 : 1;
+		u64 tnfsbe0 : 1;
+		u64 tpfdbe1 : 1;
+		u64 tpfsbe1 : 1;
+		u64 tpfdbe0 : 1;
+		u64 tpfsbe0 : 1;
+		u64 datq_pe : 1;
+		u64 hdrq_pe : 1;
+		u64 rtry_pe : 1;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} s;
+	struct cvmx_pemx_dbg_info_en_cn61xx {
+		u64 reserved_31_63 : 33;
+		u64 ecrc_e : 1;
+		u64 rawwpp : 1;
+		u64 racpp : 1;
+		u64 ramtlp : 1;
+		u64 rarwdns : 1;
+		u64 caar : 1;
+		u64 racca : 1;
+		u64 racur : 1;
+		u64 rauc : 1;
+		u64 rqo : 1;
+		u64 fcuv : 1;
+		u64 rpe : 1;
+		u64 fcpvwt : 1;
+		u64 dpeoosd : 1;
+		u64 rtwdle : 1;
+		u64 rdwdle : 1;
+		u64 mre : 1;
+		u64 rte : 1;
+		u64 acto : 1;
+		u64 rvdm : 1;
+		u64 rumep : 1;
+		u64 rptamrc : 1;
+		u64 rpmerc : 1;
+		u64 rfemrc : 1;
+		u64 rnfemrc : 1;
+		u64 rcemrc : 1;
+		u64 rpoison : 1;
+		u64 recrce : 1;
+		u64 rtlplle : 1;
+		u64 rtlpmal : 1;
+		u64 spoison : 1;
+	} cn61xx;
+	struct cvmx_pemx_dbg_info_en_cn61xx cn63xx;
+	struct cvmx_pemx_dbg_info_en_cn61xx cn63xxp1;
+	struct cvmx_pemx_dbg_info_en_cn61xx cn66xx;
+	struct cvmx_pemx_dbg_info_en_cn61xx cn68xx;
+	struct cvmx_pemx_dbg_info_en_cn61xx cn68xxp1;
+	struct cvmx_pemx_dbg_info_en_s cn70xx;
+	struct cvmx_pemx_dbg_info_en_s cn70xxp1;
+	struct cvmx_pemx_dbg_info_en_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pemx_dbg_info_en cvmx_pemx_dbg_info_en_t;
+
+/**
+ * cvmx_pem#_diag_status
+ *
+ * This register contains selection control for the core diagnostic bus.
+ *
+ */
+union cvmx_pemx_diag_status {
+	u64 u64;
+	struct cvmx_pemx_diag_status_s {
+		u64 reserved_9_63 : 55;
+		u64 pwrdwn : 3;
+		u64 pm_dst : 3;
+		u64 pm_stat : 1;
+		u64 pm_en : 1;
+		u64 aux_en : 1;
+	} s;
+	struct cvmx_pemx_diag_status_cn61xx {
+		u64 reserved_4_63 : 60;
+		u64 pm_dst : 1;
+		u64 pm_stat : 1;
+		u64 pm_en : 1;
+		u64 aux_en : 1;
+	} cn61xx;
+	struct cvmx_pemx_diag_status_cn61xx cn63xx;
+	struct cvmx_pemx_diag_status_cn61xx cn63xxp1;
+	struct cvmx_pemx_diag_status_cn61xx cn66xx;
+	struct cvmx_pemx_diag_status_cn61xx cn68xx;
+	struct cvmx_pemx_diag_status_cn61xx cn68xxp1;
+	struct cvmx_pemx_diag_status_cn70xx {
+		u64 reserved_6_63 : 58;
+		u64 pm_dst : 3;
+		u64 pm_stat : 1;
+		u64 pm_en : 1;
+		u64 aux_en : 1;
+	} cn70xx;
+	struct cvmx_pemx_diag_status_cn70xx cn70xxp1;
+	struct cvmx_pemx_diag_status_cn73xx {
+		u64 reserved_9_63 : 55;
+		u64 pwrdwn : 3;
+		u64 pm_dst : 3;
+		u64 pm_stat : 1;
+		u64 pm_en : 1;
+		u64 aux_en : 1;
+	} cn73xx;
+	struct cvmx_pemx_diag_status_cn73xx cn78xx;
+	struct cvmx_pemx_diag_status_cn73xx cn78xxp1;
+	struct cvmx_pemx_diag_status_cn61xx cnf71xx;
+	struct cvmx_pemx_diag_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_diag_status cvmx_pemx_diag_status_t;
+
+/**
+ * cvmx_pem#_ecc_ena
+ *
+ * Contains enables for TLP FIFO ECC RAMs.
+ *
+ */
+union cvmx_pemx_ecc_ena {
+	u64 u64;
+	struct cvmx_pemx_ecc_ena_s {
+		u64 reserved_35_63 : 29;
+		u64 qhdr_b1_ena : 1;
+		u64 qhdr_b0_ena : 1;
+		u64 rtry_ena : 1;
+		u64 reserved_11_31 : 21;
+		u64 m2s_c_ena : 1;
+		u64 m2s_d_ena : 1;
+		u64 c_c_ena : 1;
+		u64 c_d1_ena : 1;
+		u64 c_d0_ena : 1;
+		u64 reserved_0_5 : 6;
+	} s;
+	struct cvmx_pemx_ecc_ena_cn70xx {
+		u64 reserved_6_63 : 58;
+		u64 tlp_nc_ena : 1;
+		u64 tlp_nd_ena : 1;
+		u64 tlp_pc_ena : 1;
+		u64 tlp_pd_ena : 1;
+		u64 tlp_cc_ena : 1;
+		u64 tlp_cd_ena : 1;
+	} cn70xx;
+	struct cvmx_pemx_ecc_ena_cn70xx cn70xxp1;
+	struct cvmx_pemx_ecc_ena_cn73xx {
+		u64 reserved_35_63 : 29;
+		u64 qhdr_b1_ena : 1;
+		u64 qhdr_b0_ena : 1;
+		u64 rtry_ena : 1;
+		u64 reserved_11_31 : 21;
+		u64 m2s_c_ena : 1;
+		u64 m2s_d_ena : 1;
+		u64 c_c_ena : 1;
+		u64 c_d1_ena : 1;
+		u64 c_d0_ena : 1;
+		u64 n_c_ena : 1;
+		u64 n_d1_ena : 1;
+		u64 n_d0_ena : 1;
+		u64 p_c_ena : 1;
+		u64 p_d1_ena : 1;
+		u64 p_d0_ena : 1;
+	} cn73xx;
+	struct cvmx_pemx_ecc_ena_cn73xx cn78xx;
+	struct cvmx_pemx_ecc_ena_cn78xxp1 {
+		u64 reserved_35_63 : 29;
+		u64 qhdr_b1_ena : 1;
+		u64 qhdr_b0_ena : 1;
+		u64 rtry_ena : 1;
+		u64 reserved_9_31 : 23;
+		u64 c_c_ena : 1;
+		u64 c_d1_ena : 1;
+		u64 c_d0_ena : 1;
+		u64 n_c_ena : 1;
+		u64 n_d1_ena : 1;
+		u64 n_d0_ena : 1;
+		u64 p_c_ena : 1;
+		u64 p_d1_ena : 1;
+		u64 p_d0_ena : 1;
+	} cn78xxp1;
+	struct cvmx_pemx_ecc_ena_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_ecc_ena cvmx_pemx_ecc_ena_t;
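+
+/*
+ * Usage sketch: enable ECC on the retry buffer RAM via read-modify-write.
+ * Assumes the cvmx_read_csr()/cvmx_write_csr() accessors; the
+ * CVMX_PEMX_ECC_ENA() address helper is defined above.
+ */
+static inline void cvmx_pemx_retry_ecc_enable_sketch(int pem)
+{
+	cvmx_pemx_ecc_ena_t ena;
+
+	ena.u64 = cvmx_read_csr(CVMX_PEMX_ECC_ENA(pem));
+	ena.s.rtry_ena = 1;	/* ECC for the retry FIFO RAM */
+	cvmx_write_csr(CVMX_PEMX_ECC_ENA(pem), ena.u64);
+}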
+
+/**
+ * cvmx_pem#_ecc_synd_ctrl
+ *
+ * This register contains syndrome control for TLP FIFO ECC RAMs.
+ *
+ */
+union cvmx_pemx_ecc_synd_ctrl {
+	u64 u64;
+	struct cvmx_pemx_ecc_synd_ctrl_s {
+		u64 reserved_38_63 : 26;
+		u64 qhdr_b1_syn : 2;
+		u64 qhdr_b0_syn : 2;
+		u64 rtry_syn : 2;
+		u64 reserved_22_31 : 10;
+		u64 m2s_c_syn : 2;
+		u64 m2s_d_syn : 2;
+		u64 c_c_syn : 2;
+		u64 c_d1_syn : 2;
+		u64 c_d0_syn : 2;
+		u64 reserved_0_11 : 12;
+	} s;
+	struct cvmx_pemx_ecc_synd_ctrl_cn70xx {
+		u64 reserved_12_63 : 52;
+		u64 tlp_nc_syn : 2;
+		u64 tlp_nd_syn : 2;
+		u64 tlp_pc_syn : 2;
+		u64 tlp_pd_syn : 2;
+		u64 tlp_cc_syn : 2;
+		u64 tlp_cd_syn : 2;
+	} cn70xx;
+	struct cvmx_pemx_ecc_synd_ctrl_cn70xx cn70xxp1;
+	struct cvmx_pemx_ecc_synd_ctrl_cn73xx {
+		u64 reserved_38_63 : 26;
+		u64 qhdr_b1_syn : 2;
+		u64 qhdr_b0_syn : 2;
+		u64 rtry_syn : 2;
+		u64 reserved_22_31 : 10;
+		u64 m2s_c_syn : 2;
+		u64 m2s_d_syn : 2;
+		u64 c_c_syn : 2;
+		u64 c_d1_syn : 2;
+		u64 c_d0_syn : 2;
+		u64 n_c_syn : 2;
+		u64 n_d1_syn : 2;
+		u64 n_d0_syn : 2;
+		u64 p_c_syn : 2;
+		u64 p_d1_syn : 2;
+		u64 p_d0_syn : 2;
+	} cn73xx;
+	struct cvmx_pemx_ecc_synd_ctrl_cn73xx cn78xx;
+	struct cvmx_pemx_ecc_synd_ctrl_cn78xxp1 {
+		u64 reserved_38_63 : 26;
+		u64 qhdr_b1_syn : 2;
+		u64 qhdr_b0_syn : 2;
+		u64 rtry_syn : 2;
+		u64 reserved_18_31 : 14;
+		u64 c_c_syn : 2;
+		u64 c_d1_syn : 2;
+		u64 c_d0_syn : 2;
+		u64 n_c_syn : 2;
+		u64 n_d1_syn : 2;
+		u64 n_d0_syn : 2;
+		u64 p_c_syn : 2;
+		u64 p_d1_syn : 2;
+		u64 p_d0_syn : 2;
+	} cn78xxp1;
+	struct cvmx_pemx_ecc_synd_ctrl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_ecc_synd_ctrl cvmx_pemx_ecc_synd_ctrl_t;
+
+/**
+ * cvmx_pem#_eco
+ */
+union cvmx_pemx_eco {
+	u64 u64;
+	struct cvmx_pemx_eco_s {
+		u64 reserved_8_63 : 56;
+		u64 eco_rw : 8;
+	} s;
+	struct cvmx_pemx_eco_s cn73xx;
+	struct cvmx_pemx_eco_s cn78xx;
+	struct cvmx_pemx_eco_s cnf75xx;
+};
+
+typedef union cvmx_pemx_eco cvmx_pemx_eco_t;
+
+/**
+ * cvmx_pem#_flr_glblcnt_ctl
+ */
+union cvmx_pemx_flr_glblcnt_ctl {
+	u64 u64;
+	struct cvmx_pemx_flr_glblcnt_ctl_s {
+		u64 reserved_4_63 : 60;
+		u64 chge : 1;
+		u64 inc : 1;
+		u64 delta : 2;
+	} s;
+	struct cvmx_pemx_flr_glblcnt_ctl_s cn73xx;
+	struct cvmx_pemx_flr_glblcnt_ctl_s cn78xx;
+	struct cvmx_pemx_flr_glblcnt_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pemx_flr_glblcnt_ctl cvmx_pemx_flr_glblcnt_ctl_t;
+
+/**
+ * cvmx_pem#_flr_pf0_vf_stopreq
+ *
+ * Hardware automatically sets the STOPREQ bit for the VF when it enters a
+ * Function Level Reset (FLR).  Software is responsible for clearing the STOPREQ
+ * bit, but must not do so prior to hardware taking down the FLR, which could
+ * take as long as 100 ms. It may be appropriate for software to wait longer
+ * before clearing STOPREQ; for example, software may need to drain deep DPI
+ * queues first.
+ * Whenever PEM receives a request mastered by CNXXXX over S2M (i.e. P or NP)
+ * while STOPREQ is set for the function, PEM discards the outgoing request
+ * before sending it to the PCIe core. For an NP, PEM schedules an immediate
+ * SWI_RSP_ERROR completion for the request; no timeout is required.
+ *
+ * STOPREQ mimics the behavior of PCIEEPVF()_CFG001[ME] for outbound requests that will
+ * master the PCIe bus (P and NP).
+ *
+ * Note that STOPREQ will have no effect on completions returned by CNXXXX over the S2M,
+ * nor on M2S traffic.
+ */
+union cvmx_pemx_flr_pf0_vf_stopreq {
+	u64 u64;
+	struct cvmx_pemx_flr_pf0_vf_stopreq_s {
+		u64 vf_stopreq : 64;
+	} s;
+	struct cvmx_pemx_flr_pf0_vf_stopreq_s cn73xx;
+	struct cvmx_pemx_flr_pf0_vf_stopreq_s cn78xx;
+	struct cvmx_pemx_flr_pf0_vf_stopreq_s cnf75xx;
+};
+
+typedef union cvmx_pemx_flr_pf0_vf_stopreq cvmx_pemx_flr_pf0_vf_stopreq_t;
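+
+/*
+ * Usage sketch following the note above: wait for hardware to take down the
+ * FLR (up to 100 ms) before clearing a VF's STOPREQ bit. Plain
+ * read-modify-write semantics are assumed for the clear; if the hardware
+ * implements these bits as write-one-to-clear, write only the target bit
+ * instead. Assumes mdelay() and the cvmx_read_csr()/cvmx_write_csr()
+ * accessors.
+ */
+static inline void cvmx_pemx_vf_stopreq_clear_sketch(int pem, int vf)
+{
+	cvmx_pemx_flr_pf0_vf_stopreq_t req;
+
+	mdelay(100);	/* upper bound for hardware FLR teardown */
+	req.u64 = cvmx_read_csr(CVMX_PEMX_FLR_PF0_VF_STOPREQ(pem));
+	req.s.vf_stopreq &= ~(1ull << vf);
+	cvmx_write_csr(CVMX_PEMX_FLR_PF0_VF_STOPREQ(pem), req.u64);
+}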
+
+/**
+ * cvmx_pem#_flr_pf_stopreq
+ *
+ * Hardware automatically sets the STOPREQ bit for the PF when it enters a
+ * Function Level Reset (FLR).  Software is responsible for clearing the STOPREQ
+ * bit, but must not do so prior to hardware taking down the FLR, which could
+ * take as long as 100 ms. It may be appropriate for software to wait longer
+ * before clearing STOPREQ; for example, software may need to drain deep DPI
+ * queues first.
+ * Whenever PEM receives a PF or child VF request mastered by CNXXXX over S2M
+ * (i.e. P or NP) while STOPREQ is set for the function, PEM discards the
+ * outgoing request before sending it to the PCIe core. For an NP, PEM
+ * schedules an immediate SWI_RSP_ERROR completion for the request; no timeout
+ * is required.
+ *
+ * STOPREQ mimics the behavior of PCIEEP()_CFG001[ME] for outbound requests that will
+ * master the PCIe bus (P and NP).
+ *
+ * STOPREQ will have no effect on completions returned by CNXXXX over the S2M,
+ * nor on M2S traffic.
+ *
+ * When a PF()_STOPREQ is set, none of the associated
+ * PEM()_FLR_PF()_VF_STOPREQ[VF_STOPREQ] will be set.
+ *
+ * STOPREQ is reset when the MAC is reset, and is not reset after a chip soft reset.
+ */
+union cvmx_pemx_flr_pf_stopreq {
+	u64 u64;
+	struct cvmx_pemx_flr_pf_stopreq_s {
+		u64 reserved_1_63 : 63;
+		u64 pf0_stopreq : 1;
+	} s;
+	struct cvmx_pemx_flr_pf_stopreq_s cn73xx;
+	struct cvmx_pemx_flr_pf_stopreq_s cn78xx;
+	struct cvmx_pemx_flr_pf_stopreq_s cnf75xx;
+};
+
+typedef union cvmx_pemx_flr_pf_stopreq cvmx_pemx_flr_pf_stopreq_t;
+
+/**
+ * cvmx_pem#_flr_stopreq_ctl
+ */
+union cvmx_pemx_flr_stopreq_ctl {
+	u64 u64;
+	struct cvmx_pemx_flr_stopreq_ctl_s {
+		u64 reserved_1_63 : 63;
+		u64 stopreqclr : 1;
+	} s;
+	struct cvmx_pemx_flr_stopreq_ctl_s cn78xx;
+	struct cvmx_pemx_flr_stopreq_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pemx_flr_stopreq_ctl cvmx_pemx_flr_stopreq_ctl_t;
+
+/**
+ * cvmx_pem#_flr_zombie_ctl
+ */
+union cvmx_pemx_flr_zombie_ctl {
+	u64 u64;
+	struct cvmx_pemx_flr_zombie_ctl_s {
+		u64 reserved_10_63 : 54;
+		u64 exp : 10;
+	} s;
+	struct cvmx_pemx_flr_zombie_ctl_s cn73xx;
+	struct cvmx_pemx_flr_zombie_ctl_s cn78xx;
+	struct cvmx_pemx_flr_zombie_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pemx_flr_zombie_ctl cvmx_pemx_flr_zombie_ctl_t;
+
+/**
+ * cvmx_pem#_inb_read_credits
+ *
+ * This register contains the number of in-flight read operations from PCIe core to SLI.
+ *
+ */
+union cvmx_pemx_inb_read_credits {
+	u64 u64;
+	struct cvmx_pemx_inb_read_credits_s {
+		u64 reserved_7_63 : 57;
+		u64 num : 7;
+	} s;
+	struct cvmx_pemx_inb_read_credits_cn61xx {
+		u64 reserved_6_63 : 58;
+		u64 num : 6;
+	} cn61xx;
+	struct cvmx_pemx_inb_read_credits_cn61xx cn66xx;
+	struct cvmx_pemx_inb_read_credits_cn61xx cn68xx;
+	struct cvmx_pemx_inb_read_credits_cn61xx cn70xx;
+	struct cvmx_pemx_inb_read_credits_cn61xx cn70xxp1;
+	struct cvmx_pemx_inb_read_credits_s cn73xx;
+	struct cvmx_pemx_inb_read_credits_s cn78xx;
+	struct cvmx_pemx_inb_read_credits_s cn78xxp1;
+	struct cvmx_pemx_inb_read_credits_cn61xx cnf71xx;
+	struct cvmx_pemx_inb_read_credits_s cnf75xx;
+};
+
+typedef union cvmx_pemx_inb_read_credits cvmx_pemx_inb_read_credits_t;
+
+/**
+ * cvmx_pem#_int_enb
+ *
+ * "PEM#_INT_ENB = PEM Interrupt Enable
+ * Enables interrupt conditions for the PEM to generate an RSL interrupt."
+ */
+union cvmx_pemx_int_enb {
+	u64 u64;
+	struct cvmx_pemx_int_enb_s {
+		u64 reserved_14_63 : 50;
+		u64 crs_dr : 1;
+		u64 crs_er : 1;
+		u64 rdlk : 1;
+		u64 exc : 1;
+		u64 un_bx : 1;
+		u64 un_b2 : 1;
+		u64 un_b1 : 1;
+		u64 up_bx : 1;
+		u64 up_b2 : 1;
+		u64 up_b1 : 1;
+		u64 pmem : 1;
+		u64 pmei : 1;
+		u64 se : 1;
+		u64 aeri : 1;
+	} s;
+	struct cvmx_pemx_int_enb_s cn61xx;
+	struct cvmx_pemx_int_enb_s cn63xx;
+	struct cvmx_pemx_int_enb_s cn63xxp1;
+	struct cvmx_pemx_int_enb_s cn66xx;
+	struct cvmx_pemx_int_enb_s cn68xx;
+	struct cvmx_pemx_int_enb_s cn68xxp1;
+	struct cvmx_pemx_int_enb_s cn70xx;
+	struct cvmx_pemx_int_enb_s cn70xxp1;
+	struct cvmx_pemx_int_enb_s cnf71xx;
+};
+
+typedef union cvmx_pemx_int_enb cvmx_pemx_int_enb_t;
+
+/**
+ * cvmx_pem#_int_enb_int
+ *
+ * "PEM#_INT_ENB_INT = PEM Interrupt Enable
+ * Enables interrupt conditions for the PEM to generate an RSL interrupt."
+ */
+union cvmx_pemx_int_enb_int {
+	u64 u64;
+	struct cvmx_pemx_int_enb_int_s {
+		u64 reserved_14_63 : 50;
+		u64 crs_dr : 1;
+		u64 crs_er : 1;
+		u64 rdlk : 1;
+		u64 exc : 1;
+		u64 un_bx : 1;
+		u64 un_b2 : 1;
+		u64 un_b1 : 1;
+		u64 up_bx : 1;
+		u64 up_b2 : 1;
+		u64 up_b1 : 1;
+		u64 pmem : 1;
+		u64 pmei : 1;
+		u64 se : 1;
+		u64 aeri : 1;
+	} s;
+	struct cvmx_pemx_int_enb_int_s cn61xx;
+	struct cvmx_pemx_int_enb_int_s cn63xx;
+	struct cvmx_pemx_int_enb_int_s cn63xxp1;
+	struct cvmx_pemx_int_enb_int_s cn66xx;
+	struct cvmx_pemx_int_enb_int_s cn68xx;
+	struct cvmx_pemx_int_enb_int_s cn68xxp1;
+	struct cvmx_pemx_int_enb_int_s cn70xx;
+	struct cvmx_pemx_int_enb_int_s cn70xxp1;
+	struct cvmx_pemx_int_enb_int_s cnf71xx;
+};
+
+typedef union cvmx_pemx_int_enb_int cvmx_pemx_int_enb_int_t;
+
+/**
+ * cvmx_pem#_int_sum
+ *
+ * This register contains the different interrupt summary bits of the PEM.
+ *
+ */
+union cvmx_pemx_int_sum {
+	u64 u64;
+	struct cvmx_pemx_int_sum_s {
+		u64 intd : 1;
+		u64 intc : 1;
+		u64 intb : 1;
+		u64 inta : 1;
+		u64 reserved_14_59 : 46;
+		u64 crs_dr : 1;
+		u64 crs_er : 1;
+		u64 rdlk : 1;
+		u64 exc : 1;
+		u64 un_bx : 1;
+		u64 un_b2 : 1;
+		u64 un_b1 : 1;
+		u64 up_bx : 1;
+		u64 up_b2 : 1;
+		u64 up_b1 : 1;
+		u64 pmem : 1;
+		u64 pmei : 1;
+		u64 se : 1;
+		u64 aeri : 1;
+	} s;
+	struct cvmx_pemx_int_sum_cn61xx {
+		u64 reserved_14_63 : 50;
+		u64 crs_dr : 1;
+		u64 crs_er : 1;
+		u64 rdlk : 1;
+		u64 exc : 1;
+		u64 un_bx : 1;
+		u64 un_b2 : 1;
+		u64 un_b1 : 1;
+		u64 up_bx : 1;
+		u64 up_b2 : 1;
+		u64 up_b1 : 1;
+		u64 pmem : 1;
+		u64 pmei : 1;
+		u64 se : 1;
+		u64 aeri : 1;
+	} cn61xx;
+	struct cvmx_pemx_int_sum_cn61xx cn63xx;
+	struct cvmx_pemx_int_sum_cn61xx cn63xxp1;
+	struct cvmx_pemx_int_sum_cn61xx cn66xx;
+	struct cvmx_pemx_int_sum_cn61xx cn68xx;
+	struct cvmx_pemx_int_sum_cn61xx cn68xxp1;
+	struct cvmx_pemx_int_sum_cn61xx cn70xx;
+	struct cvmx_pemx_int_sum_cn61xx cn70xxp1;
+	struct cvmx_pemx_int_sum_cn73xx {
+		u64 intd : 1;
+		u64 intc : 1;
+		u64 intb : 1;
+		u64 inta : 1;
+		u64 reserved_14_59 : 46;
+		u64 crs_dr : 1;
+		u64 crs_er : 1;
+		u64 rdlk : 1;
+		u64 reserved_10_10 : 1;
+		u64 un_bx : 1;
+		u64 un_b2 : 1;
+		u64 un_b1 : 1;
+		u64 up_bx : 1;
+		u64 up_b2 : 1;
+		u64 up_b1 : 1;
+		u64 reserved_3_3 : 1;
+		u64 pmei : 1;
+		u64 se : 1;
+		u64 aeri : 1;
+	} cn73xx;
+	struct cvmx_pemx_int_sum_cn73xx cn78xx;
+	struct cvmx_pemx_int_sum_cn73xx cn78xxp1;
+	struct cvmx_pemx_int_sum_cn61xx cnf71xx;
+	struct cvmx_pemx_int_sum_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_int_sum cvmx_pemx_int_sum_t;
+
+/**
+ * cvmx_pem#_on
+ *
+ * This register indicates that PEM is configured and ready.
+ *
+ */
+union cvmx_pemx_on {
+	u64 u64;
+	struct cvmx_pemx_on_s {
+		u64 reserved_2_63 : 62;
+		u64 pemoor : 1;
+		u64 pemon : 1;
+	} s;
+	struct cvmx_pemx_on_s cn70xx;
+	struct cvmx_pemx_on_s cn70xxp1;
+	struct cvmx_pemx_on_s cn73xx;
+	struct cvmx_pemx_on_s cn78xx;
+	struct cvmx_pemx_on_s cn78xxp1;
+	struct cvmx_pemx_on_s cnf75xx;
+};
+
+typedef union cvmx_pemx_on cvmx_pemx_on_t;
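+
+/*
+ * Usage sketch: request PEM bring-up via PEMON, then poll PEMOOR until the
+ * block reports itself configured and ready. Assumes mdelay() and the
+ * cvmx_read_csr()/cvmx_write_csr() accessors; CVMX_PEMX_ON() is defined
+ * above.
+ */
+static inline int cvmx_pemx_wait_ready_sketch(int pem, int timeout_ms)
+{
+	cvmx_pemx_on_t pemon;
+
+	pemon.u64 = cvmx_read_csr(CVMX_PEMX_ON(pem));
+	pemon.s.pemon = 1;	/* request PEM bring-up */
+	cvmx_write_csr(CVMX_PEMX_ON(pem), pemon.u64);
+
+	while (timeout_ms--) {
+		pemon.u64 = cvmx_read_csr(CVMX_PEMX_ON(pem));
+		if (pemon.s.pemoor)	/* PEM reports ready */
+			return 0;
+		mdelay(1);
+	}
+	return -1;	/* timed out */
+}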
+
+/**
+ * cvmx_pem#_p2n_bar0_start
+ *
+ * This register specifies the starting address for memory requests that are to be forwarded to
+ * the SLI in RC mode.
+ */
+union cvmx_pemx_p2n_bar0_start {
+	u64 u64;
+	struct cvmx_pemx_p2n_bar0_start_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx {
+		u64 addr : 50;
+		u64 reserved_0_13 : 14;
+	} cn61xx;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn63xx;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn63xxp1;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn66xx;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn68xx;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn68xxp1;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn70xx;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cn70xxp1;
+	struct cvmx_pemx_p2n_bar0_start_cn73xx {
+		u64 addr : 41;
+		u64 reserved_0_22 : 23;
+	} cn73xx;
+	struct cvmx_pemx_p2n_bar0_start_cn73xx cn78xx;
+	struct cvmx_pemx_p2n_bar0_start_cn78xxp1 {
+		u64 addr : 49;
+		u64 reserved_0_14 : 15;
+	} cn78xxp1;
+	struct cvmx_pemx_p2n_bar0_start_cn61xx cnf71xx;
+	struct cvmx_pemx_p2n_bar0_start_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_p2n_bar0_start cvmx_pemx_p2n_bar0_start_t;
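+
+/*
+ * Usage sketch: the ADDR field sits in the top bits of the register, so a
+ * suitably aligned physical address can be written directly. The 16 KB
+ * (2^14) alignment shown matches the CN61XX-style layout; newer chips
+ * reserve more low bits. Assumes the cvmx_write_csr() accessor;
+ * CVMX_PEMX_P2N_BAR0_START() is defined above.
+ */
+static inline void cvmx_pemx_bar0_start_sketch(int pem, u64 base_addr)
+{
+	cvmx_write_csr(CVMX_PEMX_P2N_BAR0_START(pem), base_addr & ~0x3fffull);
+}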
+
+/**
+ * cvmx_pem#_p2n_bar1_start
+ *
+ * This register specifies the starting address for memory requests that are to be forwarded to
+ * the SLI in RC mode.
+ */
+union cvmx_pemx_p2n_bar1_start {
+	u64 u64;
+	struct cvmx_pemx_p2n_bar1_start_s {
+		u64 addr : 38;
+		u64 reserved_0_25 : 26;
+	} s;
+	struct cvmx_pemx_p2n_bar1_start_s cn61xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn63xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn63xxp1;
+	struct cvmx_pemx_p2n_bar1_start_s cn66xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn68xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn68xxp1;
+	struct cvmx_pemx_p2n_bar1_start_s cn70xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn70xxp1;
+	struct cvmx_pemx_p2n_bar1_start_s cn73xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn78xx;
+	struct cvmx_pemx_p2n_bar1_start_s cn78xxp1;
+	struct cvmx_pemx_p2n_bar1_start_s cnf71xx;
+	struct cvmx_pemx_p2n_bar1_start_s cnf75xx;
+};
+
+typedef union cvmx_pemx_p2n_bar1_start cvmx_pemx_p2n_bar1_start_t;
+
+/**
+ * cvmx_pem#_p2n_bar2_start
+ *
+ * This register specifies the starting address for memory requests that are to be forwarded to
+ * the SLI in RC mode.
+ */
+union cvmx_pemx_p2n_bar2_start {
+	u64 u64;
+	struct cvmx_pemx_p2n_bar2_start_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx {
+		u64 addr : 23;
+		u64 reserved_0_40 : 41;
+	} cn61xx;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn63xx;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn63xxp1;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn66xx;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn68xx;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn68xxp1;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn70xx;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cn70xxp1;
+	struct cvmx_pemx_p2n_bar2_start_cn73xx {
+		u64 addr : 19;
+		u64 reserved_0_44 : 45;
+	} cn73xx;
+	struct cvmx_pemx_p2n_bar2_start_cn73xx cn78xx;
+	struct cvmx_pemx_p2n_bar2_start_cn73xx cn78xxp1;
+	struct cvmx_pemx_p2n_bar2_start_cn61xx cnf71xx;
+	struct cvmx_pemx_p2n_bar2_start_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_p2n_bar2_start cvmx_pemx_p2n_bar2_start_t;
+
+/**
+ * cvmx_pem#_p2p_bar#_end
+ *
+ * This register specifies the ending address for memory requests that are to be forwarded to the
+ * PCIe peer port.
+ */
+union cvmx_pemx_p2p_barx_end {
+	u64 u64;
+	struct cvmx_pemx_p2p_barx_end_s {
+		u64 addr : 52;
+		u64 reserved_0_11 : 12;
+	} s;
+	struct cvmx_pemx_p2p_barx_end_s cn63xx;
+	struct cvmx_pemx_p2p_barx_end_s cn63xxp1;
+	struct cvmx_pemx_p2p_barx_end_s cn66xx;
+	struct cvmx_pemx_p2p_barx_end_s cn68xx;
+	struct cvmx_pemx_p2p_barx_end_s cn68xxp1;
+	struct cvmx_pemx_p2p_barx_end_s cn73xx;
+	struct cvmx_pemx_p2p_barx_end_s cn78xx;
+	struct cvmx_pemx_p2p_barx_end_s cn78xxp1;
+	struct cvmx_pemx_p2p_barx_end_s cnf75xx;
+};
+
+typedef union cvmx_pemx_p2p_barx_end cvmx_pemx_p2p_barx_end_t;
+
+/**
+ * cvmx_pem#_p2p_bar#_start
+ *
+ * This register specifies the starting address for memory requests that are to be forwarded to
+ * the PCIe peer port.
+ */
+union cvmx_pemx_p2p_barx_start {
+	u64 u64;
+	struct cvmx_pemx_p2p_barx_start_s {
+		u64 addr : 52;
+		u64 reserved_2_11 : 10;
+		u64 dst : 2;
+	} s;
+	struct cvmx_pemx_p2p_barx_start_cn63xx {
+		u64 addr : 52;
+		u64 reserved_0_11 : 12;
+	} cn63xx;
+	struct cvmx_pemx_p2p_barx_start_cn63xx cn63xxp1;
+	struct cvmx_pemx_p2p_barx_start_cn63xx cn66xx;
+	struct cvmx_pemx_p2p_barx_start_cn63xx cn68xx;
+	struct cvmx_pemx_p2p_barx_start_cn63xx cn68xxp1;
+	struct cvmx_pemx_p2p_barx_start_s cn73xx;
+	struct cvmx_pemx_p2p_barx_start_s cn78xx;
+	struct cvmx_pemx_p2p_barx_start_s cn78xxp1;
+	struct cvmx_pemx_p2p_barx_start_s cnf75xx;
+};
+
+typedef union cvmx_pemx_p2p_barx_start cvmx_pemx_p2p_barx_start_t;
+
+/**
+ * cvmx_pem#_qlm
+ *
+ * This register configures the PEM3 QLM.
+ *
+ */
+union cvmx_pemx_qlm {
+	u64 u64;
+	struct cvmx_pemx_qlm_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pemx_qlm_cn73xx {
+		u64 reserved_1_63 : 63;
+		u64 pemdlmsel : 1;
+	} cn73xx;
+	struct cvmx_pemx_qlm_cn78xx {
+		u64 reserved_1_63 : 63;
+		u64 pem3qlm : 1;
+	} cn78xx;
+	struct cvmx_pemx_qlm_cn78xx cn78xxp1;
+	struct cvmx_pemx_qlm_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_qlm cvmx_pemx_qlm_t;
+
+/**
+ * cvmx_pem#_spi_ctl
+ *
+ * PEM#_SPI_CTL register.
+ *
+ */
+union cvmx_pemx_spi_ctl {
+	u64 u64;
+	struct cvmx_pemx_spi_ctl_s {
+		u64 reserved_14_63 : 50;
+		u64 start_busy : 1;
+		u64 tvalid : 1;
+		u64 cmd : 3;
+		u64 adr : 9;
+	} s;
+	struct cvmx_pemx_spi_ctl_s cn70xx;
+	struct cvmx_pemx_spi_ctl_s cn70xxp1;
+	struct cvmx_pemx_spi_ctl_s cn73xx;
+	struct cvmx_pemx_spi_ctl_s cn78xx;
+	struct cvmx_pemx_spi_ctl_s cn78xxp1;
+	struct cvmx_pemx_spi_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pemx_spi_ctl cvmx_pemx_spi_ctl_t;
+
+/**
+ * cvmx_pem#_spi_data
+ *
+ * This register contains the most recently read or written SPI data and is
+ * unpredictable upon power-up. It is valid after a PEM()_SPI_CTL[CMD]=READ/RDSR
+ * once hardware clears PEM()_SPI_CTL[START_BUSY], and it is written out after
+ * a PEM()_SPI_CTL[CMD]=WRITE/WRSR once hardware clears
+ * PEM()_SPI_CTL[START_BUSY].
+ */
+union cvmx_pemx_spi_data {
+	u64 u64;
+	struct cvmx_pemx_spi_data_s {
+		u64 preamble : 16;
+		u64 reserved_45_47 : 3;
+		u64 cs2 : 1;
+		u64 adr : 12;
+		u64 data : 32;
+	} s;
+	struct cvmx_pemx_spi_data_s cn70xx;
+	struct cvmx_pemx_spi_data_s cn70xxp1;
+	struct cvmx_pemx_spi_data_s cn73xx;
+	struct cvmx_pemx_spi_data_s cn78xx;
+	struct cvmx_pemx_spi_data_s cn78xxp1;
+	struct cvmx_pemx_spi_data_s cnf75xx;
+};
+
+typedef union cvmx_pemx_spi_data cvmx_pemx_spi_data_t;
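+
+/*
+ * Usage sketch of the sequence described above: load CMD/ADR, set
+ * START_BUSY, poll until hardware clears START_BUSY, then fetch the data.
+ * The CMD encoding (READ/RDSR etc.) is not spelled out here and comes from
+ * the HRM, so it is passed in by the caller. Assumes the
+ * cvmx_read_csr()/cvmx_write_csr() accessors; CVMX_PEMX_SPI_CTL() and
+ * CVMX_PEMX_SPI_DATA() are defined above.
+ */
+static inline u32 cvmx_pemx_spi_read_sketch(int pem, int cmd, int adr)
+{
+	cvmx_pemx_spi_ctl_t ctl;
+	cvmx_pemx_spi_data_t data;
+
+	ctl.u64 = 0;
+	ctl.s.cmd = cmd;	/* READ/RDSR encoding per the HRM */
+	ctl.s.adr = adr;
+	ctl.s.start_busy = 1;	/* kick off the SPI transaction */
+	cvmx_write_csr(CVMX_PEMX_SPI_CTL(pem), ctl.u64);
+
+	do {	/* hardware clears START_BUSY on completion */
+		ctl.u64 = cvmx_read_csr(CVMX_PEMX_SPI_CTL(pem));
+	} while (ctl.s.start_busy);
+
+	data.u64 = cvmx_read_csr(CVMX_PEMX_SPI_DATA(pem));
+	return data.s.data;
+}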
+
+/**
+ * cvmx_pem#_strap
+ *
+ * "Below are in pesc_csr
+ * The input strapping pins"
+ */
+union cvmx_pemx_strap {
+	u64 u64;
+	struct cvmx_pemx_strap_s {
+		u64 reserved_5_63 : 59;
+		u64 miopem2dlm5sel : 1;
+		u64 pilaneswap : 1;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_pemx_strap_cn70xx {
+		u64 reserved_4_63 : 60;
+		u64 pilaneswap : 1;
+		u64 pimode : 3;
+	} cn70xx;
+	struct cvmx_pemx_strap_cn70xx cn70xxp1;
+	struct cvmx_pemx_strap_cn73xx {
+		u64 reserved_5_63 : 59;
+		u64 miopem2dlm5sel : 1;
+		u64 pilaneswap : 1;
+		u64 pilanes8 : 1;
+		u64 pimode : 2;
+	} cn73xx;
+	struct cvmx_pemx_strap_cn78xx {
+		u64 reserved_4_63 : 60;
+		u64 pilaneswap : 1;
+		u64 pilanes8 : 1;
+		u64 pimode : 2;
+	} cn78xx;
+	struct cvmx_pemx_strap_cn78xx cn78xxp1;
+	struct cvmx_pemx_strap_cnf75xx {
+		u64 reserved_5_63 : 59;
+		u64 miopem2dlm5sel : 1;
+		u64 pilaneswap : 1;
+		u64 pilanes4 : 1;
+		u64 pimode : 2;
+	} cnf75xx;
+};
+
+typedef union cvmx_pemx_strap cvmx_pemx_strap_t;
+
+/**
+ * cvmx_pem#_tlp_credits
+ *
+ * This register specifies the number of credits for use in moving TLPs. When this register is
+ * written, the credit values are reset to the register value. A write to this register should
+ * take place before traffic flow starts.
+ */
+union cvmx_pemx_tlp_credits {
+	u64 u64;
+	struct cvmx_pemx_tlp_credits_s {
+		u64 reserved_56_63 : 8;
+		u64 peai_ppf : 8;
+		u64 pem_cpl : 8;
+		u64 pem_np : 8;
+		u64 pem_p : 8;
+		u64 sli_cpl : 8;
+		u64 sli_np : 8;
+		u64 sli_p : 8;
+	} s;
+	struct cvmx_pemx_tlp_credits_cn61xx {
+		u64 reserved_56_63 : 8;
+		u64 peai_ppf : 8;
+		u64 reserved_24_47 : 24;
+		u64 sli_cpl : 8;
+		u64 sli_np : 8;
+		u64 sli_p : 8;
+	} cn61xx;
+	struct cvmx_pemx_tlp_credits_s cn63xx;
+	struct cvmx_pemx_tlp_credits_s cn63xxp1;
+	struct cvmx_pemx_tlp_credits_s cn66xx;
+	struct cvmx_pemx_tlp_credits_s cn68xx;
+	struct cvmx_pemx_tlp_credits_s cn68xxp1;
+	struct cvmx_pemx_tlp_credits_cn61xx cn70xx;
+	struct cvmx_pemx_tlp_credits_cn61xx cn70xxp1;
+	struct cvmx_pemx_tlp_credits_cn73xx {
+		u64 reserved_48_63 : 16;
+		u64 pem_cpl : 8;
+		u64 pem_np : 8;
+		u64 pem_p : 8;
+		u64 sli_cpl : 8;
+		u64 sli_np : 8;
+		u64 sli_p : 8;
+	} cn73xx;
+	struct cvmx_pemx_tlp_credits_cn73xx cn78xx;
+	struct cvmx_pemx_tlp_credits_cn73xx cn78xxp1;
+	struct cvmx_pemx_tlp_credits_cn61xx cnf71xx;
+	struct cvmx_pemx_tlp_credits_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pemx_tlp_credits cvmx_pemx_tlp_credits_t;
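+
+/*
+ * Usage sketch: per the note above, credits must be programmed before
+ * traffic flow starts, and the write resets the live credit counters to the
+ * programmed values. The credit values themselves are system-dependent and
+ * are passed in by the caller. Assumes the cvmx_read_csr()/cvmx_write_csr()
+ * accessors; CVMX_PEMX_TLP_CREDITS() is defined above.
+ */
+static inline void cvmx_pemx_tlp_credits_sketch(int pem, int p, int np, int cpl)
+{
+	cvmx_pemx_tlp_credits_t tlp;
+
+	tlp.u64 = cvmx_read_csr(CVMX_PEMX_TLP_CREDITS(pem));
+	tlp.s.sli_p = p;	/* posted credits toward SLI */
+	tlp.s.sli_np = np;	/* non-posted credits toward SLI */
+	tlp.s.sli_cpl = cpl;	/* completion credits toward SLI */
+	cvmx_write_csr(CVMX_PEMX_TLP_CREDITS(pem), tlp.u64);
+}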
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 21/50] mips: octeon: Add cvmx-pexp-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (19 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 20/50] mips: octeon: Add cvmx-pemx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 22/50] mips: octeon: Add cvmx-pip-defs.h " Stefan Roese
                   ` (31 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-pexp-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pexp-defs.h | 1382 +++++++++++++++++
 1 file changed, 1382 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pexp-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pexp-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pexp-defs.h
new file mode 100644
index 0000000000..333c2caee1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pexp-defs.h
@@ -0,0 +1,1382 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) definitions for
+ * OCTEON PEXP.
+ */
+
+#ifndef __CVMX_PEXP_DEFS_H__
+#define __CVMX_PEXP_DEFS_H__
+
+#define CVMX_PEXP_NPEI_BAR1_INDEXX(offset)	(0x00011F0000008000ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_BIST_STATUS		(0x00011F0000008580ull)
+#define CVMX_PEXP_NPEI_BIST_STATUS2		(0x00011F0000008680ull)
+#define CVMX_PEXP_NPEI_CTL_PORT0		(0x00011F0000008250ull)
+#define CVMX_PEXP_NPEI_CTL_PORT1		(0x00011F0000008260ull)
+#define CVMX_PEXP_NPEI_CTL_STATUS		(0x00011F0000008570ull)
+#define CVMX_PEXP_NPEI_CTL_STATUS2		(0x00011F000000BC00ull)
+#define CVMX_PEXP_NPEI_DATA_OUT_CNT		(0x00011F00000085F0ull)
+#define CVMX_PEXP_NPEI_DBG_DATA			(0x00011F0000008510ull)
+#define CVMX_PEXP_NPEI_DBG_SELECT		(0x00011F0000008500ull)
+#define CVMX_PEXP_NPEI_DMA0_INT_LEVEL		(0x00011F00000085C0ull)
+#define CVMX_PEXP_NPEI_DMA1_INT_LEVEL		(0x00011F00000085D0ull)
+#define CVMX_PEXP_NPEI_DMAX_COUNTS(offset)	(0x00011F0000008450ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_NPEI_DMAX_DBELL(offset)	(0x00011F00000083B0ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_NPEI_DMAX_IBUFF_SADDR(offset) (0x00011F0000008400ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_NPEI_DMAX_NADDR(offset)	(0x00011F00000084A0ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_NPEI_DMA_CNTS			(0x00011F00000085E0ull)
+#define CVMX_PEXP_NPEI_DMA_CONTROL		(0x00011F00000083A0ull)
+#define CVMX_PEXP_NPEI_DMA_PCIE_REQ_NUM		(0x00011F00000085B0ull)
+#define CVMX_PEXP_NPEI_DMA_STATE1		(0x00011F00000086C0ull)
+#define CVMX_PEXP_NPEI_DMA_STATE1_P1		(0x00011F0000008680ull)
+#define CVMX_PEXP_NPEI_DMA_STATE2		(0x00011F00000086D0ull)
+#define CVMX_PEXP_NPEI_DMA_STATE2_P1		(0x00011F0000008690ull)
+#define CVMX_PEXP_NPEI_DMA_STATE3_P1		(0x00011F00000086A0ull)
+#define CVMX_PEXP_NPEI_DMA_STATE4_P1		(0x00011F00000086B0ull)
+#define CVMX_PEXP_NPEI_DMA_STATE5_P1		(0x00011F00000086C0ull)
+#define CVMX_PEXP_NPEI_INT_A_ENB		(0x00011F0000008560ull)
+#define CVMX_PEXP_NPEI_INT_A_ENB2		(0x00011F000000BCE0ull)
+#define CVMX_PEXP_NPEI_INT_A_SUM		(0x00011F0000008550ull)
+#define CVMX_PEXP_NPEI_INT_ENB			(0x00011F0000008540ull)
+#define CVMX_PEXP_NPEI_INT_ENB2			(0x00011F000000BCD0ull)
+#define CVMX_PEXP_NPEI_INT_INFO			(0x00011F0000008590ull)
+#define CVMX_PEXP_NPEI_INT_SUM			(0x00011F0000008530ull)
+#define CVMX_PEXP_NPEI_INT_SUM2			(0x00011F000000BCC0ull)
+#define CVMX_PEXP_NPEI_LAST_WIN_RDATA0		(0x00011F0000008600ull)
+#define CVMX_PEXP_NPEI_LAST_WIN_RDATA1		(0x00011F0000008610ull)
+#define CVMX_PEXP_NPEI_MEM_ACCESS_CTL		(0x00011F00000084F0ull)
+#define CVMX_PEXP_NPEI_MEM_ACCESS_SUBIDX(offset)                                                   \
+	(0x00011F0000008280ull + ((offset) & 31) * 16 - 16 * 12)
+#define CVMX_PEXP_NPEI_MSI_ENB0			      (0x00011F000000BC50ull)
+#define CVMX_PEXP_NPEI_MSI_ENB1			      (0x00011F000000BC60ull)
+#define CVMX_PEXP_NPEI_MSI_ENB2			      (0x00011F000000BC70ull)
+#define CVMX_PEXP_NPEI_MSI_ENB3			      (0x00011F000000BC80ull)
+#define CVMX_PEXP_NPEI_MSI_RCV0			      (0x00011F000000BC10ull)
+#define CVMX_PEXP_NPEI_MSI_RCV1			      (0x00011F000000BC20ull)
+#define CVMX_PEXP_NPEI_MSI_RCV2			      (0x00011F000000BC30ull)
+#define CVMX_PEXP_NPEI_MSI_RCV3			      (0x00011F000000BC40ull)
+#define CVMX_PEXP_NPEI_MSI_RD_MAP		      (0x00011F000000BCA0ull)
+#define CVMX_PEXP_NPEI_MSI_W1C_ENB0		      (0x00011F000000BCF0ull)
+#define CVMX_PEXP_NPEI_MSI_W1C_ENB1		      (0x00011F000000BD00ull)
+#define CVMX_PEXP_NPEI_MSI_W1C_ENB2		      (0x00011F000000BD10ull)
+#define CVMX_PEXP_NPEI_MSI_W1C_ENB3		      (0x00011F000000BD20ull)
+#define CVMX_PEXP_NPEI_MSI_W1S_ENB0		      (0x00011F000000BD30ull)
+#define CVMX_PEXP_NPEI_MSI_W1S_ENB1		      (0x00011F000000BD40ull)
+#define CVMX_PEXP_NPEI_MSI_W1S_ENB2		      (0x00011F000000BD50ull)
+#define CVMX_PEXP_NPEI_MSI_W1S_ENB3		      (0x00011F000000BD60ull)
+#define CVMX_PEXP_NPEI_MSI_WR_MAP		      (0x00011F000000BC90ull)
+#define CVMX_PEXP_NPEI_PCIE_CREDIT_CNT		      (0x00011F000000BD70ull)
+#define CVMX_PEXP_NPEI_PCIE_MSI_RCV		      (0x00011F000000BCB0ull)
+#define CVMX_PEXP_NPEI_PCIE_MSI_RCV_B1		      (0x00011F0000008650ull)
+#define CVMX_PEXP_NPEI_PCIE_MSI_RCV_B2		      (0x00011F0000008660ull)
+#define CVMX_PEXP_NPEI_PCIE_MSI_RCV_B3		      (0x00011F0000008670ull)
+#define CVMX_PEXP_NPEI_PKTX_CNTS(offset)	      (0x00011F000000A400ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_INSTR_BADDR(offset)	      (0x00011F000000A800ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_INSTR_BAOFF_DBELL(offset) (0x00011F000000AC00ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_INSTR_FIFO_RSIZE(offset)  (0x00011F000000B000ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_INSTR_HEADER(offset)      (0x00011F000000B400ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_IN_BP(offset)	      (0x00011F000000B800ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_SLIST_BADDR(offset)	      (0x00011F0000009400ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_SLIST_BAOFF_DBELL(offset) (0x00011F0000009800ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKTX_SLIST_FIFO_RSIZE(offset)  (0x00011F0000009C00ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKT_CNT_INT		      (0x00011F0000009110ull)
+#define CVMX_PEXP_NPEI_PKT_CNT_INT_ENB		      (0x00011F0000009130ull)
+#define CVMX_PEXP_NPEI_PKT_DATA_OUT_ES		      (0x00011F00000090B0ull)
+#define CVMX_PEXP_NPEI_PKT_DATA_OUT_NS		      (0x00011F00000090A0ull)
+#define CVMX_PEXP_NPEI_PKT_DATA_OUT_ROR		      (0x00011F0000009090ull)
+#define CVMX_PEXP_NPEI_PKT_DPADDR		      (0x00011F0000009080ull)
+#define CVMX_PEXP_NPEI_PKT_INPUT_CONTROL	      (0x00011F0000009150ull)
+#define CVMX_PEXP_NPEI_PKT_INSTR_ENB		      (0x00011F0000009000ull)
+#define CVMX_PEXP_NPEI_PKT_INSTR_RD_SIZE	      (0x00011F0000009190ull)
+#define CVMX_PEXP_NPEI_PKT_INSTR_SIZE		      (0x00011F0000009020ull)
+#define CVMX_PEXP_NPEI_PKT_INT_LEVELS		      (0x00011F0000009100ull)
+#define CVMX_PEXP_NPEI_PKT_IN_BP		      (0x00011F00000086B0ull)
+#define CVMX_PEXP_NPEI_PKT_IN_DONEX_CNTS(offset)      (0x00011F000000A000ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_NPEI_PKT_IN_INSTR_COUNTS	      (0x00011F00000086A0ull)
+#define CVMX_PEXP_NPEI_PKT_IN_PCIE_PORT		      (0x00011F00000091A0ull)
+#define CVMX_PEXP_NPEI_PKT_IPTR			      (0x00011F0000009070ull)
+#define CVMX_PEXP_NPEI_PKT_OUTPUT_WMARK		      (0x00011F0000009160ull)
+#define CVMX_PEXP_NPEI_PKT_OUT_BMODE		      (0x00011F00000090D0ull)
+#define CVMX_PEXP_NPEI_PKT_OUT_ENB		      (0x00011F0000009010ull)
+#define CVMX_PEXP_NPEI_PKT_PCIE_PORT		      (0x00011F00000090E0ull)
+#define CVMX_PEXP_NPEI_PKT_PORT_IN_RST		      (0x00011F0000008690ull)
+#define CVMX_PEXP_NPEI_PKT_SLIST_ES		      (0x00011F0000009050ull)
+#define CVMX_PEXP_NPEI_PKT_SLIST_ID_SIZE	      (0x00011F0000009180ull)
+#define CVMX_PEXP_NPEI_PKT_SLIST_NS		      (0x00011F0000009040ull)
+#define CVMX_PEXP_NPEI_PKT_SLIST_ROR		      (0x00011F0000009030ull)
+#define CVMX_PEXP_NPEI_PKT_TIME_INT		      (0x00011F0000009120ull)
+#define CVMX_PEXP_NPEI_PKT_TIME_INT_ENB		      (0x00011F0000009140ull)
+#define CVMX_PEXP_NPEI_RSL_INT_BLOCKS		      (0x00011F0000008520ull)
+#define CVMX_PEXP_NPEI_SCRATCH_1		      (0x00011F0000008270ull)
+#define CVMX_PEXP_NPEI_STATE1			      (0x00011F0000008620ull)
+#define CVMX_PEXP_NPEI_STATE2			      (0x00011F0000008630ull)
+#define CVMX_PEXP_NPEI_STATE3			      (0x00011F0000008640ull)
+#define CVMX_PEXP_NPEI_WINDOW_CTL		      (0x00011F0000008380ull)
+#define CVMX_PEXP_NQM_VFX_ACQ(offset)		      (0x0001450000000030ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_AQA(offset)		      (0x0001450000000024ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_ASQ(offset)		      (0x0001450000000028ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_CAP(offset)		      (0x0001450000000000ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_CC(offset)		      (0x0001450000000014ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_CQX_HDBL(offset, block_id)                                               \
+	(0x0001450000001004ull + (((offset) & 31) + ((block_id) & 2047) * 0x4000ull) * 8)
+#define CVMX_PEXP_NQM_VFX_CSTS(offset)	   (0x000145000000001Cull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_INTMC(offset)	   (0x0001450000000010ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_INTMS(offset)	   (0x000145000000000Cull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_MSIX_PBA(offset) (0x0001450000010200ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_NSSR(offset)	   (0x0001450000000020ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_NQM_VFX_SQX_TDBL(offset, block_id)                                               \
+	(0x0001450000001000ull + (((offset) & 31) + ((block_id) & 2047) * 0x4000ull) * 8)
+#define CVMX_PEXP_NQM_VFX_VECX_MSIX_ADDR(offset, block_id)                                         \
+	(0x0001450000010000ull + (((offset) & 31) + ((block_id) & 2047) * 0x2000ull) * 16)
+#define CVMX_PEXP_NQM_VFX_VECX_MSIX_CTL(offset, block_id)                                          \
+	(0x0001450000010008ull + (((offset) & 31) + ((block_id) & 2047) * 0x2000ull) * 16)
+#define CVMX_PEXP_NQM_VFX_VS(offset)		 (0x0001450000000008ull + ((offset) & 2047) * 0x20000ull)
+#define CVMX_PEXP_SLITB_MSIXX_TABLE_ADDR(offset) (0x00011F0000004000ull + ((offset) & 127) * 16)
+#define CVMX_PEXP_SLITB_MSIXX_TABLE_DATA(offset) (0x00011F0000004008ull + ((offset) & 127) * 16)
+#define CVMX_PEXP_SLITB_MSIX_MACX_PFX_TABLE_ADDR(offset, block_id)                                 \
+	(0x00011F0000002000ull + ((offset) & 1) * 4096 + ((block_id) & 3) * 0x10ull)
+#define CVMX_PEXP_SLITB_MSIX_MACX_PFX_TABLE_DATA(offset, block_id)                                 \
+	(0x00011F0000002008ull + ((offset) & 1) * 4096 + ((block_id) & 3) * 0x10ull)
+#define CVMX_PEXP_SLITB_PFX_PKT_CNT_INT(offset)	 (0x00011F0000008000ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_SLITB_PFX_PKT_INT(offset)	 (0x00011F0000008300ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_SLITB_PFX_PKT_IN_INT(offset)	 (0x00011F0000008200ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_SLITB_PFX_PKT_RING_RST(offset) (0x00011F0000008400ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_SLITB_PFX_PKT_TIME_INT(offset) (0x00011F0000008100ull + ((offset) & 7) * 16)
+#define CVMX_PEXP_SLITB_PKTX_PF_VF_MBOX_SIGX(offset, block_id)                                     \
+	(0x00011F0000011000ull + (((offset) & 1) + ((block_id) & 127) * 0x4000ull) * 8)
+#define CVMX_PEXP_SLI_BIST_STATUS CVMX_PEXP_SLI_BIST_STATUS_FUNC()
+static inline u64 CVMX_PEXP_SLI_BIST_STATUS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010580ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010580ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028580ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028580ull;
+	}
+	return 0x00011F0000028580ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_CTL_PORTX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010050ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000106E0ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000286E0ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000286E0ull + (offset) * 16;
+	}
+	return 0x00011F00000286E0ull + (offset) * 16;
+}
+
+#define CVMX_PEXP_SLI_CTL_STATUS CVMX_PEXP_SLI_CTL_STATUS_FUNC()
+static inline u64 CVMX_PEXP_SLI_CTL_STATUS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010570ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010570ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028570ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028570ull;
+	}
+	return 0x00011F0000028570ull;
+}
+
+#define CVMX_PEXP_SLI_DATA_OUT_CNT CVMX_PEXP_SLI_DATA_OUT_CNT_FUNC()
+static inline u64 CVMX_PEXP_SLI_DATA_OUT_CNT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000105F0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000105F0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000285F0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000285F0ull;
+	}
+	return 0x00011F00000285F0ull;
+}
+
+#define CVMX_PEXP_SLI_DBG_DATA	 (0x00011F0000010310ull)
+#define CVMX_PEXP_SLI_DBG_SELECT (0x00011F0000010300ull)
+static inline u64 CVMX_PEXP_SLI_DMAX_CNT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010400ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028400ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028400ull + (offset) * 16;
+	}
+	return 0x00011F0000028400ull + (offset) * 16;
+}
+
+static inline u64 CVMX_PEXP_SLI_DMAX_INT_LEVEL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000103E0ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000103E0ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000283E0ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000283E0ull + (offset) * 16;
+	}
+	return 0x00011F00000283E0ull + (offset) * 16;
+}
+
+static inline u64 CVMX_PEXP_SLI_DMAX_TIM(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010420ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010420ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028420ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028420ull + (offset) * 16;
+	}
+	return 0x00011F0000028420ull + (offset) * 16;
+}
+
+#define CVMX_PEXP_SLI_INT_ENB_CIU	    (0x00011F0000013CD0ull)
+#define CVMX_PEXP_SLI_INT_ENB_PORTX(offset) (0x00011F0000010340ull + ((offset) & 3) * 16)
+#define CVMX_PEXP_SLI_INT_SUM		    (0x00011F0000010330ull)
+#define CVMX_PEXP_SLI_LAST_WIN_RDATA0	    (0x00011F0000010600ull)
+#define CVMX_PEXP_SLI_LAST_WIN_RDATA1	    (0x00011F0000010610ull)
+#define CVMX_PEXP_SLI_LAST_WIN_RDATA2	    (0x00011F00000106C0ull)
+#define CVMX_PEXP_SLI_LAST_WIN_RDATA3	    (0x00011F00000106D0ull)
+#define CVMX_PEXP_SLI_MACX_PFX_DMA_VF_INT(offset, block_id)                                        \
+	(0x00011F0000027280ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_DMA_VF_INT_ENB(offset, block_id)                                    \
+	(0x00011F0000027500ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_FLR_VF_INT(offset, block_id)                                        \
+	(0x00011F0000027400ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_INT_ENB(offset, block_id)                                           \
+	(0x00011F0000027080ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_INT_SUM(offset, block_id)                                           \
+	(0x00011F0000027000ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_MBOX_INT(offset, block_id)                                          \
+	(0x00011F0000027380ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_PKT_VF_INT(offset, block_id)                                        \
+	(0x00011F0000027300ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_PKT_VF_INT_ENB(offset, block_id)                                    \
+	(0x00011F0000027580ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_PP_VF_INT(offset, block_id)                                         \
+	(0x00011F0000027200ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MACX_PFX_PP_VF_INT_ENB(offset, block_id)                                     \
+	(0x00011F0000027480ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_MAC_CREDIT_CNT CVMX_PEXP_SLI_MAC_CREDIT_CNT_FUNC()
+static inline u64 CVMX_PEXP_SLI_MAC_CREDIT_CNT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013D70ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013D70ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023D70ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023D70ull;
+	}
+	return 0x00011F0000023D70ull;
+}
+
+#define CVMX_PEXP_SLI_MAC_CREDIT_CNT2 CVMX_PEXP_SLI_MAC_CREDIT_CNT2_FUNC()
+static inline u64 CVMX_PEXP_SLI_MAC_CREDIT_CNT2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013E10ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013E10ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023E10ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023E10ull;
+	}
+	return 0x00011F0000023E10ull;
+}
+
+#define CVMX_PEXP_SLI_MEM_ACCESS_CTL CVMX_PEXP_SLI_MEM_ACCESS_CTL_FUNC()
+static inline u64 CVMX_PEXP_SLI_MEM_ACCESS_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000102F0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000102F0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000282F0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000282F0ull;
+	}
+	return 0x00011F00000282F0ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_MEM_ACCESS_SUBIDX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000100E0ull + (offset) * 16 - 16 * 12;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000100E0ull + (offset) * 16 - 16 * 12;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000280E0ull + (offset) * 16 - 16 * 12;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000280E0ull + (offset) * 16 - 16 * 12;
+	}
+	return 0x00011F00000280E0ull + (offset) * 16 - 16 * 12;
+}
+
+#define CVMX_PEXP_SLI_MEM_CTL CVMX_PEXP_SLI_MEM_CTL_FUNC()
+static inline u64 CVMX_PEXP_SLI_MEM_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000105E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000285E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000285E0ull;
+	}
+	return 0x00011F00000285E0ull;
+}
+
+#define CVMX_PEXP_SLI_MEM_INT_SUM CVMX_PEXP_SLI_MEM_INT_SUM_FUNC()
+static inline u64 CVMX_PEXP_SLI_MEM_INT_SUM_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000105D0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000285D0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000285D0ull;
+	}
+	return 0x00011F00000285D0ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_MSIXX_TABLE_ADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000016000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000000000ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000000000ull + (offset) * 16;
+	}
+	return 0x00011F0000000000ull + (offset) * 16;
+}
+
+static inline u64 CVMX_PEXP_SLI_MSIXX_TABLE_DATA(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000016008ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000000008ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000000008ull + (offset) * 16;
+	}
+	return 0x00011F0000000008ull + (offset) * 16;
+}
+
+#define CVMX_PEXP_SLI_MSIX_MACX_PF_TABLE_ADDR(offset) (0x00011F0000017C00ull + ((offset) & 3) * 16)
+#define CVMX_PEXP_SLI_MSIX_MACX_PF_TABLE_DATA(offset) (0x00011F0000017C08ull + ((offset) & 3) * 16)
+#define CVMX_PEXP_SLI_MSIX_PBA0			      CVMX_PEXP_SLI_MSIX_PBA0_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSIX_PBA0_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000017000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000001000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000001000ull;
+	}
+	return 0x00011F0000001000ull;
+}
+
+#define CVMX_PEXP_SLI_MSIX_PBA1 CVMX_PEXP_SLI_MSIX_PBA1_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSIX_PBA1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000017010ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000001008ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000001008ull;
+	}
+	return 0x00011F0000001008ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_ENB0 (0x00011F0000013C50ull)
+#define CVMX_PEXP_SLI_MSI_ENB1 (0x00011F0000013C60ull)
+#define CVMX_PEXP_SLI_MSI_ENB2 (0x00011F0000013C70ull)
+#define CVMX_PEXP_SLI_MSI_ENB3 (0x00011F0000013C80ull)
+#define CVMX_PEXP_SLI_MSI_RCV0 CVMX_PEXP_SLI_MSI_RCV0_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_RCV0_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013C10ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013C10ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023C10ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023C10ull;
+	}
+	return 0x00011F0000023C10ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_RCV1 CVMX_PEXP_SLI_MSI_RCV1_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_RCV1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013C20ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013C20ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023C20ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023C20ull;
+	}
+	return 0x00011F0000023C20ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_RCV2 CVMX_PEXP_SLI_MSI_RCV2_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_RCV2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013C30ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013C30ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023C30ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023C30ull;
+	}
+	return 0x00011F0000023C30ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_RCV3 CVMX_PEXP_SLI_MSI_RCV3_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_RCV3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013C40ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013C40ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023C40ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023C40ull;
+	}
+	return 0x00011F0000023C40ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_RD_MAP CVMX_PEXP_SLI_MSI_RD_MAP_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_RD_MAP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013CA0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013CA0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023CA0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023CA0ull;
+	}
+	return 0x00011F0000023CA0ull;
+}
+
+#define CVMX_PEXP_SLI_MSI_W1C_ENB0 (0x00011F0000013CF0ull)
+#define CVMX_PEXP_SLI_MSI_W1C_ENB1 (0x00011F0000013D00ull)
+#define CVMX_PEXP_SLI_MSI_W1C_ENB2 (0x00011F0000013D10ull)
+#define CVMX_PEXP_SLI_MSI_W1C_ENB3 (0x00011F0000013D20ull)
+#define CVMX_PEXP_SLI_MSI_W1S_ENB0 (0x00011F0000013D30ull)
+#define CVMX_PEXP_SLI_MSI_W1S_ENB1 (0x00011F0000013D40ull)
+#define CVMX_PEXP_SLI_MSI_W1S_ENB2 (0x00011F0000013D50ull)
+#define CVMX_PEXP_SLI_MSI_W1S_ENB3 (0x00011F0000013D60ull)
+#define CVMX_PEXP_SLI_MSI_WR_MAP   CVMX_PEXP_SLI_MSI_WR_MAP_FUNC()
+static inline u64 CVMX_PEXP_SLI_MSI_WR_MAP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013C90ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013C90ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023C90ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023C90ull;
+	}
+	return 0x00011F0000023C90ull;
+}
+
+#define CVMX_PEXP_SLI_PCIE_MSI_RCV CVMX_PEXP_SLI_PCIE_MSI_RCV_FUNC()
+static inline u64 CVMX_PEXP_SLI_PCIE_MSI_RCV_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013CB0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013CB0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023CB0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023CB0ull;
+	}
+	return 0x00011F0000023CB0ull;
+}
+
+#define CVMX_PEXP_SLI_PCIE_MSI_RCV_B1 CVMX_PEXP_SLI_PCIE_MSI_RCV_B1_FUNC()
+static inline u64 CVMX_PEXP_SLI_PCIE_MSI_RCV_B1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010650ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010650ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028650ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028650ull;
+	}
+	return 0x00011F0000028650ull;
+}
+
+#define CVMX_PEXP_SLI_PCIE_MSI_RCV_B2 CVMX_PEXP_SLI_PCIE_MSI_RCV_B2_FUNC()
+static inline u64 CVMX_PEXP_SLI_PCIE_MSI_RCV_B2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010660ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010660ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028660ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028660ull;
+	}
+	return 0x00011F0000028660ull;
+}
+
+#define CVMX_PEXP_SLI_PCIE_MSI_RCV_B3 CVMX_PEXP_SLI_PCIE_MSI_RCV_B3_FUNC()
+static inline u64 CVMX_PEXP_SLI_PCIE_MSI_RCV_B3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010670ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010670ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028670ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028670ull;
+	}
+	return 0x00011F0000028670ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_CNTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000012400ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000012400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000100B0ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000100B0ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F00000100B0ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKTX_ERROR_INFO(offset) (0x00011F00000100C0ull + ((offset) & 63) * 0x20000ull)
+static inline u64 CVMX_PEXP_SLI_PKTX_INPUT_CONTROL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000014000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010000ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010000ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010000ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_INSTR_BADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000012800ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000012800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010010ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010010ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010010ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_INSTR_BAOFF_DBELL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000012C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000012C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010020ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010020ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010020ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_INSTR_FIFO_RSIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013000ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010030ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010030ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010030ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKTX_INSTR_HEADER(offset) (0x00011F0000013400ull + ((offset) & 31) * 16)
+static inline u64 CVMX_PEXP_SLI_PKTX_INT_LEVELS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000014400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000100A0ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000100A0ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F00000100A0ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKTX_IN_BP(offset)    (0x00011F0000013800ull + ((offset) & 31) * 16)
+#define CVMX_PEXP_SLI_PKTX_MBOX_INT(offset) (0x00011F0000010210ull + ((offset) & 63) * 0x20000ull)
+static inline u64 CVMX_PEXP_SLI_PKTX_OUTPUT_CONTROL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000014800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010050ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010050ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010050ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_OUT_SIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010060ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010060ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010060ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKTX_PF_VF_MBOX_SIGX(offset, block_id)                                       \
+	(0x00011F0000010200ull + (((offset) & 1) + ((block_id) & 63) * 0x4000ull) * 8)
+static inline u64 CVMX_PEXP_SLI_PKTX_SLIST_BADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011400ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010070ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010070ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010070ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_SLIST_BAOFF_DBELL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011800ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010080ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010080ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010080ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_PEXP_SLI_PKTX_SLIST_FIFO_RSIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010090ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010090ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010090ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKTX_VF_INT_SUM(offset) (0x00011F00000100D0ull + ((offset) & 63) * 0x20000ull)
+#define CVMX_PEXP_SLI_PKTX_VF_SIG(offset)     (0x00011F0000014C00ull + ((offset) & 63) * 16)
+#define CVMX_PEXP_SLI_PKT_BIST_STATUS	      (0x00011F0000029220ull)
+#define CVMX_PEXP_SLI_PKT_CNT_INT	      CVMX_PEXP_SLI_PKT_CNT_INT_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_CNT_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011130ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011130ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029130ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029130ull;
+	}
+	return 0x00011F0000029130ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_CNT_INT_ENB	(0x00011F0000011150ull)
+#define CVMX_PEXP_SLI_PKT_CTL		(0x00011F0000011220ull)
+#define CVMX_PEXP_SLI_PKT_DATA_OUT_ES	(0x00011F00000110B0ull)
+#define CVMX_PEXP_SLI_PKT_DATA_OUT_NS	(0x00011F00000110A0ull)
+#define CVMX_PEXP_SLI_PKT_DATA_OUT_ROR	(0x00011F0000011090ull)
+#define CVMX_PEXP_SLI_PKT_DPADDR	(0x00011F0000011080ull)
+#define CVMX_PEXP_SLI_PKT_GBL_CONTROL	(0x00011F0000029210ull)
+#define CVMX_PEXP_SLI_PKT_INPUT_CONTROL (0x00011F0000011170ull)
+#define CVMX_PEXP_SLI_PKT_INSTR_ENB	(0x00011F0000011000ull)
+#define CVMX_PEXP_SLI_PKT_INSTR_RD_SIZE (0x00011F00000111A0ull)
+#define CVMX_PEXP_SLI_PKT_INSTR_SIZE	(0x00011F0000011020ull)
+#define CVMX_PEXP_SLI_PKT_INT		CVMX_PEXP_SLI_PKT_INT_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011160ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029160ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029160ull;
+	}
+	return 0x00011F0000029160ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_INT_LEVELS (0x00011F0000011120ull)
+#define CVMX_PEXP_SLI_PKT_IN_BP	     (0x00011F0000011210ull)
+static inline u64 CVMX_PEXP_SLI_PKT_IN_DONEX_CNTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000012000ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000012000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000010040ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010040ull + (offset) * 0x20000ull;
+	}
+	return 0x00011F0000010040ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_IN_INSTR_COUNTS CVMX_PEXP_SLI_PKT_IN_INSTR_COUNTS_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_IN_INSTR_COUNTS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011200ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011200ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029200ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029200ull;
+	}
+	return 0x00011F0000029200ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_IN_INT CVMX_PEXP_SLI_PKT_IN_INT_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_IN_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011150ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029150ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029150ull;
+	}
+	return 0x00011F0000029150ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_IN_JABBER    (0x00011F0000029170ull)
+#define CVMX_PEXP_SLI_PKT_IN_PCIE_PORT (0x00011F00000111B0ull)
+#define CVMX_PEXP_SLI_PKT_IPTR	       (0x00011F0000011070ull)
+#define CVMX_PEXP_SLI_PKT_MAC0_SIG0    (0x00011F0000011300ull)
+#define CVMX_PEXP_SLI_PKT_MAC0_SIG1    (0x00011F0000011310ull)
+#define CVMX_PEXP_SLI_PKT_MAC1_SIG0    (0x00011F0000011320ull)
+#define CVMX_PEXP_SLI_PKT_MAC1_SIG1    (0x00011F0000011330ull)
+#define CVMX_PEXP_SLI_PKT_MACX_PFX_RINFO(offset, block_id)                                         \
+	(0x00011F0000029030ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_PEXP_SLI_PKT_MACX_RINFO(offset) (0x00011F0000011030ull + ((offset) & 3) * 16)
+#define CVMX_PEXP_SLI_PKT_MEM_CTL	     CVMX_PEXP_SLI_PKT_MEM_CTL_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_MEM_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011120ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029120ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029120ull;
+	}
+	return 0x00011F0000029120ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_OUTPUT_WMARK CVMX_PEXP_SLI_PKT_OUTPUT_WMARK_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_OUTPUT_WMARK_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011180ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011180ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029180ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029180ull;
+	}
+	return 0x00011F0000029180ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_OUT_BMODE	 (0x00011F00000110D0ull)
+#define CVMX_PEXP_SLI_PKT_OUT_BP_EN	 (0x00011F0000011240ull)
+#define CVMX_PEXP_SLI_PKT_OUT_BP_EN2_W1C (0x00011F0000029290ull)
+#define CVMX_PEXP_SLI_PKT_OUT_BP_EN2_W1S (0x00011F0000029270ull)
+#define CVMX_PEXP_SLI_PKT_OUT_BP_EN_W1C	 (0x00011F0000029280ull)
+#define CVMX_PEXP_SLI_PKT_OUT_BP_EN_W1S	 (0x00011F0000029260ull)
+#define CVMX_PEXP_SLI_PKT_OUT_ENB	 (0x00011F0000011010ull)
+#define CVMX_PEXP_SLI_PKT_PCIE_PORT	 (0x00011F00000110E0ull)
+#define CVMX_PEXP_SLI_PKT_PKIND_VALID	 (0x00011F0000029190ull)
+#define CVMX_PEXP_SLI_PKT_PORT_IN_RST	 (0x00011F00000111F0ull)
+#define CVMX_PEXP_SLI_PKT_RING_RST	 CVMX_PEXP_SLI_PKT_RING_RST_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_RING_RST_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000111E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000291E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000291E0ull;
+	}
+	return 0x00011F00000291E0ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_SLIST_ES  (0x00011F0000011050ull)
+#define CVMX_PEXP_SLI_PKT_SLIST_NS  (0x00011F0000011040ull)
+#define CVMX_PEXP_SLI_PKT_SLIST_ROR (0x00011F0000011030ull)
+#define CVMX_PEXP_SLI_PKT_TIME_INT  CVMX_PEXP_SLI_PKT_TIME_INT_FUNC()
+static inline u64 CVMX_PEXP_SLI_PKT_TIME_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000011140ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000011140ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000029140ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000029140ull;
+	}
+	return 0x00011F0000029140ull;
+}
+
+#define CVMX_PEXP_SLI_PKT_TIME_INT_ENB	  (0x00011F0000011160ull)
+#define CVMX_PEXP_SLI_PORTX_PKIND(offset) (0x00011F0000010800ull + ((offset) & 31) * 16)
+static inline u64 CVMX_PEXP_SLI_S2M_PORTX_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000013D80ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000013D80ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000023D80ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000023D80ull + (offset) * 16;
+	}
+	return 0x00011F0000023D80ull + (offset) * 16;
+}
+
+#define CVMX_PEXP_SLI_SCRATCH_1 CVMX_PEXP_SLI_SCRATCH_1_FUNC()
+static inline u64 CVMX_PEXP_SLI_SCRATCH_1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000103C0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000103C0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000283C0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000283C0ull;
+	}
+	return 0x00011F00000283C0ull;
+}
+
+#define CVMX_PEXP_SLI_SCRATCH_2 CVMX_PEXP_SLI_SCRATCH_2_FUNC()
+static inline u64 CVMX_PEXP_SLI_SCRATCH_2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000103D0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000103D0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000283D0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000283D0ull;
+	}
+	return 0x00011F00000283D0ull;
+}
+
+#define CVMX_PEXP_SLI_STATE1 CVMX_PEXP_SLI_STATE1_FUNC()
+static inline u64 CVMX_PEXP_SLI_STATE1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010620ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010620ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028620ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028620ull;
+	}
+	return 0x00011F0000028620ull;
+}
+
+#define CVMX_PEXP_SLI_STATE2 CVMX_PEXP_SLI_STATE2_FUNC()
+static inline u64 CVMX_PEXP_SLI_STATE2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010630ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010630ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028630ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028630ull;
+	}
+	return 0x00011F0000028630ull;
+}
+
+#define CVMX_PEXP_SLI_STATE3 CVMX_PEXP_SLI_STATE3_FUNC()
+static inline u64 CVMX_PEXP_SLI_STATE3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000010640ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000010640ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000028640ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000028640ull;
+	}
+	return 0x00011F0000028640ull;
+}
+
+#define CVMX_PEXP_SLI_TX_PIPE	 (0x00011F0000011230ull)
+#define CVMX_PEXP_SLI_WINDOW_CTL CVMX_PEXP_SLI_WINDOW_CTL_FUNC()
+static inline u64 CVMX_PEXP_SLI_WINDOW_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000102E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F00000102E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F00000282E0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F00000282E0ull;
+	}
+	return 0x00011F00000282E0ull;
+}
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 22/50] mips: octeon: Add cvmx-pip-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (20 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 21/50] mips: octeon: Add cvmx-pepx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 23/50] mips: octeon: Add cvmx-pki-defs.h " Stefan Roese
                   ` (30 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-pip-defs.h header file from the 2013 U-Boot version.
It will be used by the drivers added later to support PCIe and
networking on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---
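
The macros in this header only compute CSR MMIO addresses; the unions
give each register's bit layout. A minimal usage sketch of the usual
read-modify-write pattern (illustrative only, assuming the
csr_rd()/csr_wr() accessors used elsewhere in this series; the
original SDK names them cvmx_read_csr()/cvmx_write_csr(), and the
watermark values below are examples, not recommendations):

	union cvmx_pip_bck_prs bck_prs;

	/* Read-modify-write the PIP backpressure watermarks */
	bck_prs.u64 = csr_rd(CVMX_PIP_BCK_PRS);
	bck_prs.s.hiwater = 0x10;	/* example: assert threshold */
	bck_prs.s.lowater = 0x08;	/* example: release threshold */
	csr_wr(CVMX_PIP_BCK_PRS, bck_prs.u64);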

 .../mach-octeon/include/mach/cvmx-pip-defs.h  | 3040 +++++++++++++++++
 1 file changed, 3040 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pip-defs.h
new file mode 100644
index 0000000000..574e80b6f2
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pip-defs.h
@@ -0,0 +1,3040 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pip.
+ */
+
+#ifndef __CVMX_PIP_DEFS_H__
+#define __CVMX_PIP_DEFS_H__
+
+#define CVMX_PIP_ALT_SKIP_CFGX(offset)	     (0x00011800A0002A00ull + ((offset) & 3) * 8)
+#define CVMX_PIP_BCK_PRS		     (0x00011800A0000038ull)
+#define CVMX_PIP_BIST_STATUS		     (0x00011800A0000000ull)
+#define CVMX_PIP_BSEL_EXT_CFGX(offset)	     (0x00011800A0002800ull + ((offset) & 3) * 16)
+#define CVMX_PIP_BSEL_EXT_POSX(offset)	     (0x00011800A0002808ull + ((offset) & 3) * 16)
+#define CVMX_PIP_BSEL_TBL_ENTX(offset)	     (0x00011800A0003000ull + ((offset) & 511) * 8)
+#define CVMX_PIP_CLKEN			     (0x00011800A0000040ull)
+#define CVMX_PIP_CRC_CTLX(offset)	     (0x00011800A0000040ull + ((offset) & 1) * 8)
+#define CVMX_PIP_CRC_IVX(offset)	     (0x00011800A0000050ull + ((offset) & 1) * 8)
+#define CVMX_PIP_DEC_IPSECX(offset)	     (0x00011800A0000080ull + ((offset) & 3) * 8)
+#define CVMX_PIP_DSA_SRC_GRP		     (0x00011800A0000190ull)
+#define CVMX_PIP_DSA_VID_GRP		     (0x00011800A0000198ull)
+#define CVMX_PIP_FRM_LEN_CHKX(offset)	     (0x00011800A0000180ull + ((offset) & 1) * 8)
+#define CVMX_PIP_GBL_CFG		     (0x00011800A0000028ull)
+#define CVMX_PIP_GBL_CTL		     (0x00011800A0000020ull)
+#define CVMX_PIP_HG_PRI_QOS		     (0x00011800A00001A0ull)
+#define CVMX_PIP_INT_EN			     (0x00011800A0000010ull)
+#define CVMX_PIP_INT_REG		     (0x00011800A0000008ull)
+#define CVMX_PIP_IP_OFFSET		     (0x00011800A0000060ull)
+#define CVMX_PIP_PRI_TBLX(offset)	     (0x00011800A0004000ull + ((offset) & 255) * 8)
+#define CVMX_PIP_PRT_CFGBX(offset)	     (0x00011800A0008000ull + ((offset) & 63) * 8)
+#define CVMX_PIP_PRT_CFGX(offset)	     (0x00011800A0000200ull + ((offset) & 63) * 8)
+#define CVMX_PIP_PRT_TAGX(offset)	     (0x00011800A0000400ull + ((offset) & 63) * 8)
+#define CVMX_PIP_QOS_DIFFX(offset)	     (0x00011800A0000600ull + ((offset) & 63) * 8)
+#define CVMX_PIP_QOS_VLANX(offset)	     (0x00011800A00000C0ull + ((offset) & 7) * 8)
+#define CVMX_PIP_QOS_WATCHX(offset)	     (0x00011800A0000100ull + ((offset) & 7) * 8)
+#define CVMX_PIP_RAW_WORD		     (0x00011800A00000B0ull)
+#define CVMX_PIP_SFT_RST		     (0x00011800A0000030ull)
+#define CVMX_PIP_STAT0_PRTX(offset)	     (0x00011800A0000800ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT0_X(offset)	     (0x00011800A0040000ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT10_PRTX(offset)	     (0x00011800A0001480ull + ((offset) & 63) * 16)
+#define CVMX_PIP_STAT10_X(offset)	     (0x00011800A0040050ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT11_PRTX(offset)	     (0x00011800A0001488ull + ((offset) & 63) * 16)
+#define CVMX_PIP_STAT11_X(offset)	     (0x00011800A0040058ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT1_PRTX(offset)	     (0x00011800A0000808ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT1_X(offset)	     (0x00011800A0040008ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT2_PRTX(offset)	     (0x00011800A0000810ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT2_X(offset)	     (0x00011800A0040010ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT3_PRTX(offset)	     (0x00011800A0000818ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT3_X(offset)	     (0x00011800A0040018ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT4_PRTX(offset)	     (0x00011800A0000820ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT4_X(offset)	     (0x00011800A0040020ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT5_PRTX(offset)	     (0x00011800A0000828ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT5_X(offset)	     (0x00011800A0040028ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT6_PRTX(offset)	     (0x00011800A0000830ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT6_X(offset)	     (0x00011800A0040030ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT7_PRTX(offset)	     (0x00011800A0000838ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT7_X(offset)	     (0x00011800A0040038ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT8_PRTX(offset)	     (0x00011800A0000840ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT8_X(offset)	     (0x00011800A0040040ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT9_PRTX(offset)	     (0x00011800A0000848ull + ((offset) & 63) * 80)
+#define CVMX_PIP_STAT9_X(offset)	     (0x00011800A0040048ull + ((offset) & 63) * 128)
+#define CVMX_PIP_STAT_CTL		     (0x00011800A0000018ull)
+#define CVMX_PIP_STAT_INB_ERRSX(offset)	     (0x00011800A0001A10ull + ((offset) & 63) * 32)
+#define CVMX_PIP_STAT_INB_ERRS_PKNDX(offset) (0x00011800A0020010ull + ((offset) & 63) * 32)
+#define CVMX_PIP_STAT_INB_OCTSX(offset)	     (0x00011800A0001A08ull + ((offset) & 63) * 32)
+#define CVMX_PIP_STAT_INB_OCTS_PKNDX(offset) (0x00011800A0020008ull + ((offset) & 63) * 32)
+#define CVMX_PIP_STAT_INB_PKTSX(offset)	     (0x00011800A0001A00ull + ((offset) & 63) * 32)
+#define CVMX_PIP_STAT_INB_PKTS_PKNDX(offset) (0x00011800A0020000ull + ((offset) & 63) * 32)
+#define CVMX_PIP_SUB_PKIND_FCSX(offset)	     (0x00011800A0080000ull)
+#define CVMX_PIP_TAG_INCX(offset)	     (0x00011800A0001800ull + ((offset) & 63) * 8)
+#define CVMX_PIP_TAG_MASK		     (0x00011800A0000070ull)
+#define CVMX_PIP_TAG_SECRET		     (0x00011800A0000068ull)
+#define CVMX_PIP_TODO_ENTRY		     (0x00011800A0000078ull)
+#define CVMX_PIP_VLAN_ETYPESX(offset)	     (0x00011800A00001C0ull + ((offset) & 1) * 8)
+#define CVMX_PIP_XSTAT0_PRTX(offset)	     (0x00011800A0002000ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT10_PRTX(offset)	     (0x00011800A0001700ull + ((offset) & 63) * 16 - 16 * 40)
+#define CVMX_PIP_XSTAT11_PRTX(offset)	     (0x00011800A0001708ull + ((offset) & 63) * 16 - 16 * 40)
+#define CVMX_PIP_XSTAT1_PRTX(offset)	     (0x00011800A0002008ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT2_PRTX(offset)	     (0x00011800A0002010ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT3_PRTX(offset)	     (0x00011800A0002018ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT4_PRTX(offset)	     (0x00011800A0002020ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT5_PRTX(offset)	     (0x00011800A0002028ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT6_PRTX(offset)	     (0x00011800A0002030ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT7_PRTX(offset)	     (0x00011800A0002038ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT8_PRTX(offset)	     (0x00011800A0002040ull + ((offset) & 63) * 80 - 80 * 40)
+#define CVMX_PIP_XSTAT9_PRTX(offset)	     (0x00011800A0002048ull + ((offset) & 63) * 80 - 80 * 40)
+
+/**
+ * cvmx_pip_alt_skip_cfg#
+ *
+ * Notes:
+ * The actual SKIP I is determined by HW based on the packet contents.  BIT0 and
+ * BIT1 make up a two-bit value that selects the skip value as follows.
+ *
+ *    lookup_value = LEN ? ( packet_in_bits[BIT1], packet_in_bits[BIT0] ) : ( 0, packet_in_bits[BIT0] );
+ *    SKIP I       = lookup_value == 3 ? SKIP3 :
+ *                   lookup_value == 2 ? SKIP2 :
+ *                   lookup_value == 1 ? SKIP1 :
+ *                   PIP_PRT_CFG<pknd>[SKIP];
+ */
+union cvmx_pip_alt_skip_cfgx {
+	u64 u64;
+	struct cvmx_pip_alt_skip_cfgx_s {
+		u64 reserved_57_63 : 7;
+		u64 len : 1;
+		u64 reserved_46_55 : 10;
+		u64 bit1 : 6;
+		u64 reserved_38_39 : 2;
+		u64 bit0 : 6;
+		u64 reserved_23_31 : 9;
+		u64 skip3 : 7;
+		u64 reserved_15_15 : 1;
+		u64 skip2 : 7;
+		u64 reserved_7_7 : 1;
+		u64 skip1 : 7;
+	} s;
+	struct cvmx_pip_alt_skip_cfgx_s cn61xx;
+	struct cvmx_pip_alt_skip_cfgx_s cn66xx;
+	struct cvmx_pip_alt_skip_cfgx_s cn68xx;
+	struct cvmx_pip_alt_skip_cfgx_s cn70xx;
+	struct cvmx_pip_alt_skip_cfgx_s cn70xxp1;
+	struct cvmx_pip_alt_skip_cfgx_s cnf71xx;
+};
+
+typedef union cvmx_pip_alt_skip_cfgx cvmx_pip_alt_skip_cfgx_t;
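+
+/*
+ * A minimal usage sketch of the skip-selection logic described above,
+ * assuming the csr_rd() register accessor used by this port; bit_at()
+ * and default_skip (PIP_PRT_CFG<pknd>[SKIP]) are hypothetical
+ * stand-ins:
+ *
+ *   cvmx_pip_alt_skip_cfgx_t cfg;
+ *   u64 lookup_value;
+ *   u64 skip;
+ *
+ *   cfg.u64 = csr_rd(CVMX_PIP_ALT_SKIP_CFGX(0));
+ *   lookup_value = bit_at(packet, cfg.s.bit0);
+ *   if (cfg.s.len)
+ *           lookup_value |= bit_at(packet, cfg.s.bit1) << 1;
+ *   switch (lookup_value) {
+ *   case 3: skip = cfg.s.skip3; break;
+ *   case 2: skip = cfg.s.skip2; break;
+ *   case 1: skip = cfg.s.skip1; break;
+ *   default: skip = default_skip; break;
+ *   }
+ */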
+
+/**
+ * cvmx_pip_bck_prs
+ *
+ * When to assert backpressure based on the todo list filling up
+ *
+ */
+union cvmx_pip_bck_prs {
+	u64 u64;
+	struct cvmx_pip_bck_prs_s {
+		u64 bckprs : 1;
+		u64 reserved_13_62 : 50;
+		u64 hiwater : 5;
+		u64 reserved_5_7 : 3;
+		u64 lowater : 5;
+	} s;
+	struct cvmx_pip_bck_prs_s cn38xx;
+	struct cvmx_pip_bck_prs_s cn38xxp2;
+	struct cvmx_pip_bck_prs_s cn56xx;
+	struct cvmx_pip_bck_prs_s cn56xxp1;
+	struct cvmx_pip_bck_prs_s cn58xx;
+	struct cvmx_pip_bck_prs_s cn58xxp1;
+	struct cvmx_pip_bck_prs_s cn61xx;
+	struct cvmx_pip_bck_prs_s cn63xx;
+	struct cvmx_pip_bck_prs_s cn63xxp1;
+	struct cvmx_pip_bck_prs_s cn66xx;
+	struct cvmx_pip_bck_prs_s cn68xx;
+	struct cvmx_pip_bck_prs_s cn68xxp1;
+	struct cvmx_pip_bck_prs_s cn70xx;
+	struct cvmx_pip_bck_prs_s cn70xxp1;
+	struct cvmx_pip_bck_prs_s cnf71xx;
+};
+
+typedef union cvmx_pip_bck_prs cvmx_pip_bck_prs_t;
+
+/**
+ * cvmx_pip_bist_status
+ *
+ * PIP_BIST_STATUS = PIP's BIST Results
+ *
+ */
+union cvmx_pip_bist_status {
+	u64 u64;
+	struct cvmx_pip_bist_status_s {
+		u64 reserved_22_63 : 42;
+		u64 bist : 22;
+	} s;
+	struct cvmx_pip_bist_status_cn30xx {
+		u64 reserved_18_63 : 46;
+		u64 bist : 18;
+	} cn30xx;
+	struct cvmx_pip_bist_status_cn30xx cn31xx;
+	struct cvmx_pip_bist_status_cn30xx cn38xx;
+	struct cvmx_pip_bist_status_cn30xx cn38xxp2;
+	struct cvmx_pip_bist_status_cn50xx {
+		u64 reserved_17_63 : 47;
+		u64 bist : 17;
+	} cn50xx;
+	struct cvmx_pip_bist_status_cn30xx cn52xx;
+	struct cvmx_pip_bist_status_cn30xx cn52xxp1;
+	struct cvmx_pip_bist_status_cn30xx cn56xx;
+	struct cvmx_pip_bist_status_cn30xx cn56xxp1;
+	struct cvmx_pip_bist_status_cn30xx cn58xx;
+	struct cvmx_pip_bist_status_cn30xx cn58xxp1;
+	struct cvmx_pip_bist_status_cn61xx {
+		u64 reserved_20_63 : 44;
+		u64 bist : 20;
+	} cn61xx;
+	struct cvmx_pip_bist_status_cn30xx cn63xx;
+	struct cvmx_pip_bist_status_cn30xx cn63xxp1;
+	struct cvmx_pip_bist_status_cn61xx cn66xx;
+	struct cvmx_pip_bist_status_s cn68xx;
+	struct cvmx_pip_bist_status_cn61xx cn68xxp1;
+	struct cvmx_pip_bist_status_cn61xx cn70xx;
+	struct cvmx_pip_bist_status_cn61xx cn70xxp1;
+	struct cvmx_pip_bist_status_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pip_bist_status cvmx_pip_bist_status_t;
+
+/**
+ * cvmx_pip_bsel_ext_cfg#
+ *
+ * tag, offset, and skip values to be used when using the corresponding extractor.
+ *
+ */
+union cvmx_pip_bsel_ext_cfgx {
+	u64 u64;
+	struct cvmx_pip_bsel_ext_cfgx_s {
+		u64 reserved_56_63 : 8;
+		u64 upper_tag : 16;
+		u64 tag : 8;
+		u64 reserved_25_31 : 7;
+		u64 offset : 9;
+		u64 reserved_7_15 : 9;
+		u64 skip : 7;
+	} s;
+	struct cvmx_pip_bsel_ext_cfgx_s cn61xx;
+	struct cvmx_pip_bsel_ext_cfgx_s cn68xx;
+	struct cvmx_pip_bsel_ext_cfgx_s cn70xx;
+	struct cvmx_pip_bsel_ext_cfgx_s cn70xxp1;
+	struct cvmx_pip_bsel_ext_cfgx_s cnf71xx;
+};
+
+typedef union cvmx_pip_bsel_ext_cfgx cvmx_pip_bsel_ext_cfgx_t;
+
+/**
+ * cvmx_pip_bsel_ext_pos#
+ *
+ * bit positions and valids to be used when using the corresponding extractor.
+ *
+ */
+union cvmx_pip_bsel_ext_posx {
+	u64 u64;
+	struct cvmx_pip_bsel_ext_posx_s {
+		u64 pos7_val : 1;
+		u64 pos7 : 7;
+		u64 pos6_val : 1;
+		u64 pos6 : 7;
+		u64 pos5_val : 1;
+		u64 pos5 : 7;
+		u64 pos4_val : 1;
+		u64 pos4 : 7;
+		u64 pos3_val : 1;
+		u64 pos3 : 7;
+		u64 pos2_val : 1;
+		u64 pos2 : 7;
+		u64 pos1_val : 1;
+		u64 pos1 : 7;
+		u64 pos0_val : 1;
+		u64 pos0 : 7;
+	} s;
+	struct cvmx_pip_bsel_ext_posx_s cn61xx;
+	struct cvmx_pip_bsel_ext_posx_s cn68xx;
+	struct cvmx_pip_bsel_ext_posx_s cn70xx;
+	struct cvmx_pip_bsel_ext_posx_s cn70xxp1;
+	struct cvmx_pip_bsel_ext_posx_s cnf71xx;
+};
+
+typedef union cvmx_pip_bsel_ext_posx cvmx_pip_bsel_ext_posx_t;
+
+/**
+ * cvmx_pip_bsel_tbl_ent#
+ *
+ * PIP_BSEL_TBL_ENTX = Entry for the extractor table
+ *
+ */
+union cvmx_pip_bsel_tbl_entx {
+	u64 u64;
+	struct cvmx_pip_bsel_tbl_entx_s {
+		u64 tag_en : 1;
+		u64 grp_en : 1;
+		u64 tt_en : 1;
+		u64 qos_en : 1;
+		u64 reserved_40_59 : 20;
+		u64 tag : 8;
+		u64 reserved_22_31 : 10;
+		u64 grp : 6;
+		u64 reserved_10_15 : 6;
+		u64 tt : 2;
+		u64 reserved_3_7 : 5;
+		u64 qos : 3;
+	} s;
+	struct cvmx_pip_bsel_tbl_entx_cn61xx {
+		u64 tag_en : 1;
+		u64 grp_en : 1;
+		u64 tt_en : 1;
+		u64 qos_en : 1;
+		u64 reserved_40_59 : 20;
+		u64 tag : 8;
+		u64 reserved_20_31 : 12;
+		u64 grp : 4;
+		u64 reserved_10_15 : 6;
+		u64 tt : 2;
+		u64 reserved_3_7 : 5;
+		u64 qos : 3;
+	} cn61xx;
+	struct cvmx_pip_bsel_tbl_entx_s cn68xx;
+	struct cvmx_pip_bsel_tbl_entx_cn61xx cn70xx;
+	struct cvmx_pip_bsel_tbl_entx_cn61xx cn70xxp1;
+	struct cvmx_pip_bsel_tbl_entx_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pip_bsel_tbl_entx cvmx_pip_bsel_tbl_entx_t;
+
+/**
+ * cvmx_pip_clken
+ */
+union cvmx_pip_clken {
+	u64 u64;
+	struct cvmx_pip_clken_s {
+		u64 reserved_1_63 : 63;
+		u64 clken : 1;
+	} s;
+	struct cvmx_pip_clken_s cn61xx;
+	struct cvmx_pip_clken_s cn63xx;
+	struct cvmx_pip_clken_s cn63xxp1;
+	struct cvmx_pip_clken_s cn66xx;
+	struct cvmx_pip_clken_s cn68xx;
+	struct cvmx_pip_clken_s cn68xxp1;
+	struct cvmx_pip_clken_s cn70xx;
+	struct cvmx_pip_clken_s cn70xxp1;
+	struct cvmx_pip_clken_s cnf71xx;
+};
+
+typedef union cvmx_pip_clken cvmx_pip_clken_t;
+
+/**
+ * cvmx_pip_crc_ctl#
+ *
+ * PIP_CRC_CTL = PIP CRC Control Register
+ *
+ * Controls datapath reflection when calculating CRC
+ */
+union cvmx_pip_crc_ctlx {
+	u64 u64;
+	struct cvmx_pip_crc_ctlx_s {
+		u64 reserved_2_63 : 62;
+		u64 invres : 1;
+		u64 reflect : 1;
+	} s;
+	struct cvmx_pip_crc_ctlx_s cn38xx;
+	struct cvmx_pip_crc_ctlx_s cn38xxp2;
+	struct cvmx_pip_crc_ctlx_s cn58xx;
+	struct cvmx_pip_crc_ctlx_s cn58xxp1;
+};
+
+typedef union cvmx_pip_crc_ctlx cvmx_pip_crc_ctlx_t;
+
+/**
+ * cvmx_pip_crc_iv#
+ *
+ * PIP_CRC_IV = PIP CRC IV Register
+ *
+ * Determines the IV used by the CRC algorithm
+ *
+ * Notes:
+ * * PIP_CRC_IV
+ * PIP_CRC_IV controls the initial state of the CRC algorithm.  Octane can
+ * support a wide range of CRC algorithms and as such, the IV must be
+ * carefully constructed to meet the specific algorithm.  The code below
+ * determines the value to program into Octane based on the algorithm's IV
+ * and width.  In the case of Octane, the width should always be 32.
+ *
+ * PIP_CRC_IV0 sets the IV for ports 0-15 while PIP_CRC_IV1 sets the IV for
+ * ports 16-31.
+ *
+ *  unsigned octane_crc_iv(unsigned algorithm_iv, unsigned poly, unsigned w)
+ *  {
+ *    int i;
+ *    int doit;
+ *    unsigned int current_val = algorithm_iv;
+ *
+ *    for (i = 0; i < w; i++) {
+ *      doit = current_val & 0x1;
+ *
+ *      if (doit)
+ *        current_val ^= poly;
+ *      assert(!(current_val & 0x1));
+ *
+ *      current_val = (current_val >> 1) | (doit << (w - 1));
+ *    }
+ *
+ *    return current_val;
+ *  }
+ */
+union cvmx_pip_crc_ivx {
+	u64 u64;
+	struct cvmx_pip_crc_ivx_s {
+		u64 reserved_32_63 : 32;
+		u64 iv : 32;
+	} s;
+	struct cvmx_pip_crc_ivx_s cn38xx;
+	struct cvmx_pip_crc_ivx_s cn38xxp2;
+	struct cvmx_pip_crc_ivx_s cn58xx;
+	struct cvmx_pip_crc_ivx_s cn58xxp1;
+};
+
+typedef union cvmx_pip_crc_ivx cvmx_pip_crc_ivx_t;
+
+/**
+ * cvmx_pip_dec_ipsec#
+ *
+ * PIP sets the dec_ipsec based on TCP or UDP destination port.
+ *
+ */
+union cvmx_pip_dec_ipsecx {
+	u64 u64;
+	struct cvmx_pip_dec_ipsecx_s {
+		u64 reserved_18_63 : 46;
+		u64 tcp : 1;
+		u64 udp : 1;
+		u64 dprt : 16;
+	} s;
+	struct cvmx_pip_dec_ipsecx_s cn30xx;
+	struct cvmx_pip_dec_ipsecx_s cn31xx;
+	struct cvmx_pip_dec_ipsecx_s cn38xx;
+	struct cvmx_pip_dec_ipsecx_s cn38xxp2;
+	struct cvmx_pip_dec_ipsecx_s cn50xx;
+	struct cvmx_pip_dec_ipsecx_s cn52xx;
+	struct cvmx_pip_dec_ipsecx_s cn52xxp1;
+	struct cvmx_pip_dec_ipsecx_s cn56xx;
+	struct cvmx_pip_dec_ipsecx_s cn56xxp1;
+	struct cvmx_pip_dec_ipsecx_s cn58xx;
+	struct cvmx_pip_dec_ipsecx_s cn58xxp1;
+	struct cvmx_pip_dec_ipsecx_s cn61xx;
+	struct cvmx_pip_dec_ipsecx_s cn63xx;
+	struct cvmx_pip_dec_ipsecx_s cn63xxp1;
+	struct cvmx_pip_dec_ipsecx_s cn66xx;
+	struct cvmx_pip_dec_ipsecx_s cn68xx;
+	struct cvmx_pip_dec_ipsecx_s cn68xxp1;
+	struct cvmx_pip_dec_ipsecx_s cn70xx;
+	struct cvmx_pip_dec_ipsecx_s cn70xxp1;
+	struct cvmx_pip_dec_ipsecx_s cnf71xx;
+};
+
+typedef union cvmx_pip_dec_ipsecx cvmx_pip_dec_ipsecx_t;
+
+/**
+ * cvmx_pip_dsa_src_grp
+ */
+union cvmx_pip_dsa_src_grp {
+	u64 u64;
+	struct cvmx_pip_dsa_src_grp_s {
+		u64 map15 : 4;
+		u64 map14 : 4;
+		u64 map13 : 4;
+		u64 map12 : 4;
+		u64 map11 : 4;
+		u64 map10 : 4;
+		u64 map9 : 4;
+		u64 map8 : 4;
+		u64 map7 : 4;
+		u64 map6 : 4;
+		u64 map5 : 4;
+		u64 map4 : 4;
+		u64 map3 : 4;
+		u64 map2 : 4;
+		u64 map1 : 4;
+		u64 map0 : 4;
+	} s;
+	struct cvmx_pip_dsa_src_grp_s cn52xx;
+	struct cvmx_pip_dsa_src_grp_s cn52xxp1;
+	struct cvmx_pip_dsa_src_grp_s cn56xx;
+	struct cvmx_pip_dsa_src_grp_s cn61xx;
+	struct cvmx_pip_dsa_src_grp_s cn63xx;
+	struct cvmx_pip_dsa_src_grp_s cn63xxp1;
+	struct cvmx_pip_dsa_src_grp_s cn66xx;
+	struct cvmx_pip_dsa_src_grp_s cn68xx;
+	struct cvmx_pip_dsa_src_grp_s cn68xxp1;
+	struct cvmx_pip_dsa_src_grp_s cn70xx;
+	struct cvmx_pip_dsa_src_grp_s cn70xxp1;
+	struct cvmx_pip_dsa_src_grp_s cnf71xx;
+};
+
+typedef union cvmx_pip_dsa_src_grp cvmx_pip_dsa_src_grp_t;
+
+/**
+ * cvmx_pip_dsa_vid_grp
+ */
+union cvmx_pip_dsa_vid_grp {
+	u64 u64;
+	struct cvmx_pip_dsa_vid_grp_s {
+		u64 map15 : 4;
+		u64 map14 : 4;
+		u64 map13 : 4;
+		u64 map12 : 4;
+		u64 map11 : 4;
+		u64 map10 : 4;
+		u64 map9 : 4;
+		u64 map8 : 4;
+		u64 map7 : 4;
+		u64 map6 : 4;
+		u64 map5 : 4;
+		u64 map4 : 4;
+		u64 map3 : 4;
+		u64 map2 : 4;
+		u64 map1 : 4;
+		u64 map0 : 4;
+	} s;
+	struct cvmx_pip_dsa_vid_grp_s cn52xx;
+	struct cvmx_pip_dsa_vid_grp_s cn52xxp1;
+	struct cvmx_pip_dsa_vid_grp_s cn56xx;
+	struct cvmx_pip_dsa_vid_grp_s cn61xx;
+	struct cvmx_pip_dsa_vid_grp_s cn63xx;
+	struct cvmx_pip_dsa_vid_grp_s cn63xxp1;
+	struct cvmx_pip_dsa_vid_grp_s cn66xx;
+	struct cvmx_pip_dsa_vid_grp_s cn68xx;
+	struct cvmx_pip_dsa_vid_grp_s cn68xxp1;
+	struct cvmx_pip_dsa_vid_grp_s cn70xx;
+	struct cvmx_pip_dsa_vid_grp_s cn70xxp1;
+	struct cvmx_pip_dsa_vid_grp_s cnf71xx;
+};
+
+typedef union cvmx_pip_dsa_vid_grp cvmx_pip_dsa_vid_grp_t;
+
+/**
+ * cvmx_pip_frm_len_chk#
+ *
+ * Notes:
+ * PIP_FRM_LEN_CHK0 is used for packets on packet interface 0, PCI, PCI RAW, and PKO loopback ports.
+ * PIP_FRM_LEN_CHK1 is unused.
+ */
+union cvmx_pip_frm_len_chkx {
+	u64 u64;
+	struct cvmx_pip_frm_len_chkx_s {
+		u64 reserved_32_63 : 32;
+		u64 maxlen : 16;
+		u64 minlen : 16;
+	} s;
+	struct cvmx_pip_frm_len_chkx_s cn50xx;
+	struct cvmx_pip_frm_len_chkx_s cn52xx;
+	struct cvmx_pip_frm_len_chkx_s cn52xxp1;
+	struct cvmx_pip_frm_len_chkx_s cn56xx;
+	struct cvmx_pip_frm_len_chkx_s cn56xxp1;
+	struct cvmx_pip_frm_len_chkx_s cn61xx;
+	struct cvmx_pip_frm_len_chkx_s cn63xx;
+	struct cvmx_pip_frm_len_chkx_s cn63xxp1;
+	struct cvmx_pip_frm_len_chkx_s cn66xx;
+	struct cvmx_pip_frm_len_chkx_s cn68xx;
+	struct cvmx_pip_frm_len_chkx_s cn68xxp1;
+	struct cvmx_pip_frm_len_chkx_s cn70xx;
+	struct cvmx_pip_frm_len_chkx_s cn70xxp1;
+	struct cvmx_pip_frm_len_chkx_s cnf71xx;
+};
+
+typedef union cvmx_pip_frm_len_chkx cvmx_pip_frm_len_chkx_t;
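+
+/*
+ * A minimal sketch of programming the frame length check window, assuming
+ * the csr_wr() register accessor used by this port:
+ *
+ *   cvmx_pip_frm_len_chkx_t frm_len;
+ *
+ *   frm_len.u64 = 0;
+ *   frm_len.s.minlen = 64;
+ *   frm_len.s.maxlen = 1518;
+ *   csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
+ */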
+
+/**
+ * cvmx_pip_gbl_cfg
+ *
+ * Global config information that applies to all ports.
+ *
+ */
+union cvmx_pip_gbl_cfg {
+	u64 u64;
+	struct cvmx_pip_gbl_cfg_s {
+		u64 reserved_19_63 : 45;
+		u64 tag_syn : 1;
+		u64 ip6_udp : 1;
+		u64 max_l2 : 1;
+		u64 reserved_11_15 : 5;
+		u64 raw_shf : 3;
+		u64 reserved_3_7 : 5;
+		u64 nip_shf : 3;
+	} s;
+	struct cvmx_pip_gbl_cfg_s cn30xx;
+	struct cvmx_pip_gbl_cfg_s cn31xx;
+	struct cvmx_pip_gbl_cfg_s cn38xx;
+	struct cvmx_pip_gbl_cfg_s cn38xxp2;
+	struct cvmx_pip_gbl_cfg_s cn50xx;
+	struct cvmx_pip_gbl_cfg_s cn52xx;
+	struct cvmx_pip_gbl_cfg_s cn52xxp1;
+	struct cvmx_pip_gbl_cfg_s cn56xx;
+	struct cvmx_pip_gbl_cfg_s cn56xxp1;
+	struct cvmx_pip_gbl_cfg_s cn58xx;
+	struct cvmx_pip_gbl_cfg_s cn58xxp1;
+	struct cvmx_pip_gbl_cfg_s cn61xx;
+	struct cvmx_pip_gbl_cfg_s cn63xx;
+	struct cvmx_pip_gbl_cfg_s cn63xxp1;
+	struct cvmx_pip_gbl_cfg_s cn66xx;
+	struct cvmx_pip_gbl_cfg_s cn68xx;
+	struct cvmx_pip_gbl_cfg_s cn68xxp1;
+	struct cvmx_pip_gbl_cfg_s cn70xx;
+	struct cvmx_pip_gbl_cfg_s cn70xxp1;
+	struct cvmx_pip_gbl_cfg_s cnf71xx;
+};
+
+typedef union cvmx_pip_gbl_cfg cvmx_pip_gbl_cfg_t;
+
+/**
+ * cvmx_pip_gbl_ctl
+ *
+ * Global control information.  These are the global checker enables for
+ * IPv4/IPv6 and TCP/UDP parsing.  The enables affect all ports.
+ */
+union cvmx_pip_gbl_ctl {
+	u64 u64;
+	struct cvmx_pip_gbl_ctl_s {
+		u64 reserved_29_63 : 35;
+		u64 egrp_dis : 1;
+		u64 ihmsk_dis : 1;
+		u64 dsa_grp_tvid : 1;
+		u64 dsa_grp_scmd : 1;
+		u64 dsa_grp_sid : 1;
+		u64 reserved_21_23 : 3;
+		u64 ring_en : 1;
+		u64 reserved_17_19 : 3;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} s;
+	struct cvmx_pip_gbl_ctl_cn30xx {
+		u64 reserved_17_63 : 47;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn30xx;
+	struct cvmx_pip_gbl_ctl_cn30xx cn31xx;
+	struct cvmx_pip_gbl_ctl_cn30xx cn38xx;
+	struct cvmx_pip_gbl_ctl_cn30xx cn38xxp2;
+	struct cvmx_pip_gbl_ctl_cn30xx cn50xx;
+	struct cvmx_pip_gbl_ctl_cn52xx {
+		u64 reserved_27_63 : 37;
+		u64 dsa_grp_tvid : 1;
+		u64 dsa_grp_scmd : 1;
+		u64 dsa_grp_sid : 1;
+		u64 reserved_21_23 : 3;
+		u64 ring_en : 1;
+		u64 reserved_17_19 : 3;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn52xx;
+	struct cvmx_pip_gbl_ctl_cn52xx cn52xxp1;
+	struct cvmx_pip_gbl_ctl_cn52xx cn56xx;
+	struct cvmx_pip_gbl_ctl_cn56xxp1 {
+		u64 reserved_21_63 : 43;
+		u64 ring_en : 1;
+		u64 reserved_17_19 : 3;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn56xxp1;
+	struct cvmx_pip_gbl_ctl_cn30xx cn58xx;
+	struct cvmx_pip_gbl_ctl_cn30xx cn58xxp1;
+	struct cvmx_pip_gbl_ctl_cn61xx {
+		u64 reserved_28_63 : 36;
+		u64 ihmsk_dis : 1;
+		u64 dsa_grp_tvid : 1;
+		u64 dsa_grp_scmd : 1;
+		u64 dsa_grp_sid : 1;
+		u64 reserved_21_23 : 3;
+		u64 ring_en : 1;
+		u64 reserved_17_19 : 3;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn61xx;
+	struct cvmx_pip_gbl_ctl_cn61xx cn63xx;
+	struct cvmx_pip_gbl_ctl_cn61xx cn63xxp1;
+	struct cvmx_pip_gbl_ctl_cn61xx cn66xx;
+	struct cvmx_pip_gbl_ctl_cn68xx {
+		u64 reserved_29_63 : 35;
+		u64 egrp_dis : 1;
+		u64 ihmsk_dis : 1;
+		u64 dsa_grp_tvid : 1;
+		u64 dsa_grp_scmd : 1;
+		u64 dsa_grp_sid : 1;
+		u64 reserved_17_23 : 7;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn68xx;
+	struct cvmx_pip_gbl_ctl_cn68xxp1 {
+		u64 reserved_28_63 : 36;
+		u64 ihmsk_dis : 1;
+		u64 dsa_grp_tvid : 1;
+		u64 dsa_grp_scmd : 1;
+		u64 dsa_grp_sid : 1;
+		u64 reserved_17_23 : 7;
+		u64 ignrs : 1;
+		u64 vs_wqe : 1;
+		u64 vs_qos : 1;
+		u64 l2_mal : 1;
+		u64 tcp_flag : 1;
+		u64 l4_len : 1;
+		u64 l4_chk : 1;
+		u64 l4_prt : 1;
+		u64 l4_mal : 1;
+		u64 reserved_6_7 : 2;
+		u64 ip6_eext : 2;
+		u64 ip4_opts : 1;
+		u64 ip_hop : 1;
+		u64 ip_mal : 1;
+		u64 ip_chk : 1;
+	} cn68xxp1;
+	struct cvmx_pip_gbl_ctl_cn61xx cn70xx;
+	struct cvmx_pip_gbl_ctl_cn61xx cn70xxp1;
+	struct cvmx_pip_gbl_ctl_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pip_gbl_ctl cvmx_pip_gbl_ctl_t;
+
+/**
+ * cvmx_pip_hg_pri_qos
+ *
+ * Notes:
+ * This register controls accesses to the HG_QOS_TABLE.  To write an entry of
+ * the table, write PIP_HG_PRI_QOS with PRI=table address, QOS=priority level,
+ * UP_QOS=1.  To read an entry of the table, write PIP_HG_PRI_QOS with
+ * PRI=table address, QOS=don't care, UP_QOS=0 and then read
+ * PIP_HG_PRI_QOS.  The table data will be in PIP_HG_PRI_QOS[QOS].
+ */
+union cvmx_pip_hg_pri_qos {
+	u64 u64;
+	struct cvmx_pip_hg_pri_qos_s {
+		u64 reserved_13_63 : 51;
+		u64 up_qos : 1;
+		u64 reserved_11_11 : 1;
+		u64 qos : 3;
+		u64 reserved_6_7 : 2;
+		u64 pri : 6;
+	} s;
+	struct cvmx_pip_hg_pri_qos_s cn52xx;
+	struct cvmx_pip_hg_pri_qos_s cn52xxp1;
+	struct cvmx_pip_hg_pri_qos_s cn56xx;
+	struct cvmx_pip_hg_pri_qos_s cn61xx;
+	struct cvmx_pip_hg_pri_qos_s cn63xx;
+	struct cvmx_pip_hg_pri_qos_s cn63xxp1;
+	struct cvmx_pip_hg_pri_qos_s cn66xx;
+	struct cvmx_pip_hg_pri_qos_s cn70xx;
+	struct cvmx_pip_hg_pri_qos_s cn70xxp1;
+	struct cvmx_pip_hg_pri_qos_s cnf71xx;
+};
+
+typedef union cvmx_pip_hg_pri_qos cvmx_pip_hg_pri_qos_t;
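+
+/*
+ * A minimal sketch of the HG_QOS_TABLE access sequence from the notes
+ * above, assuming the csr_wr()/csr_rd() register accessors:
+ *
+ *   cvmx_pip_hg_pri_qos_t pri_qos;
+ *
+ *   Write priority level 'qos' to table entry 'pri':
+ *   pri_qos.u64 = 0;
+ *   pri_qos.s.pri = pri;
+ *   pri_qos.s.qos = qos;
+ *   pri_qos.s.up_qos = 1;
+ *   csr_wr(CVMX_PIP_HG_PRI_QOS, pri_qos.u64);
+ *
+ *   Read table entry 'pri' back:
+ *   pri_qos.u64 = 0;
+ *   pri_qos.s.pri = pri;
+ *   pri_qos.s.up_qos = 0;
+ *   csr_wr(CVMX_PIP_HG_PRI_QOS, pri_qos.u64);
+ *   pri_qos.u64 = csr_rd(CVMX_PIP_HG_PRI_QOS);
+ *   qos = pri_qos.s.qos;
+ */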
+
+/**
+ * cvmx_pip_int_en
+ *
+ * Determines if hardware should raise an interrupt to software
+ * when an exception event occurs.
+ */
+union cvmx_pip_int_en {
+	u64 u64;
+	struct cvmx_pip_int_en_s {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} s;
+	struct cvmx_pip_int_en_cn30xx {
+		u64 reserved_9_63 : 55;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn30xx;
+	struct cvmx_pip_int_en_cn30xx cn31xx;
+	struct cvmx_pip_int_en_cn30xx cn38xx;
+	struct cvmx_pip_int_en_cn30xx cn38xxp2;
+	struct cvmx_pip_int_en_cn50xx {
+		u64 reserved_12_63 : 52;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 reserved_1_1 : 1;
+		u64 pktdrp : 1;
+	} cn50xx;
+	struct cvmx_pip_int_en_cn52xx {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 reserved_1_1 : 1;
+		u64 pktdrp : 1;
+	} cn52xx;
+	struct cvmx_pip_int_en_cn52xx cn52xxp1;
+	struct cvmx_pip_int_en_s cn56xx;
+	struct cvmx_pip_int_en_cn56xxp1 {
+		u64 reserved_12_63 : 52;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn56xxp1;
+	struct cvmx_pip_int_en_cn58xx {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 reserved_9_11 : 3;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn58xx;
+	struct cvmx_pip_int_en_cn30xx cn58xxp1;
+	struct cvmx_pip_int_en_s cn61xx;
+	struct cvmx_pip_int_en_s cn63xx;
+	struct cvmx_pip_int_en_s cn63xxp1;
+	struct cvmx_pip_int_en_s cn66xx;
+	struct cvmx_pip_int_en_s cn68xx;
+	struct cvmx_pip_int_en_s cn68xxp1;
+	struct cvmx_pip_int_en_s cn70xx;
+	struct cvmx_pip_int_en_s cn70xxp1;
+	struct cvmx_pip_int_en_s cnf71xx;
+};
+
+typedef union cvmx_pip_int_en cvmx_pip_int_en_t;
+
+/**
+ * cvmx_pip_int_reg
+ *
+ * Any exception event that occurs is captured in the PIP_INT_REG.
+ * PIP_INT_REG will set the exception bit regardless of the value
+ * of PIP_INT_EN.  PIP_INT_EN only controls if an interrupt is
+ * raised to software.
+ */
+union cvmx_pip_int_reg {
+	u64 u64;
+	struct cvmx_pip_int_reg_s {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} s;
+	struct cvmx_pip_int_reg_cn30xx {
+		u64 reserved_9_63 : 55;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn30xx;
+	struct cvmx_pip_int_reg_cn30xx cn31xx;
+	struct cvmx_pip_int_reg_cn30xx cn38xx;
+	struct cvmx_pip_int_reg_cn30xx cn38xxp2;
+	struct cvmx_pip_int_reg_cn50xx {
+		u64 reserved_12_63 : 52;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 reserved_1_1 : 1;
+		u64 pktdrp : 1;
+	} cn50xx;
+	struct cvmx_pip_int_reg_cn52xx {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 reserved_1_1 : 1;
+		u64 pktdrp : 1;
+	} cn52xx;
+	struct cvmx_pip_int_reg_cn52xx cn52xxp1;
+	struct cvmx_pip_int_reg_s cn56xx;
+	struct cvmx_pip_int_reg_cn56xxp1 {
+		u64 reserved_12_63 : 52;
+		u64 lenerr : 1;
+		u64 maxerr : 1;
+		u64 minerr : 1;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn56xxp1;
+	struct cvmx_pip_int_reg_cn58xx {
+		u64 reserved_13_63 : 51;
+		u64 punyerr : 1;
+		u64 reserved_9_11 : 3;
+		u64 beperr : 1;
+		u64 feperr : 1;
+		u64 todoovr : 1;
+		u64 skprunt : 1;
+		u64 badtag : 1;
+		u64 prtnxa : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn58xx;
+	struct cvmx_pip_int_reg_cn30xx cn58xxp1;
+	struct cvmx_pip_int_reg_s cn61xx;
+	struct cvmx_pip_int_reg_s cn63xx;
+	struct cvmx_pip_int_reg_s cn63xxp1;
+	struct cvmx_pip_int_reg_s cn66xx;
+	struct cvmx_pip_int_reg_s cn68xx;
+	struct cvmx_pip_int_reg_s cn68xxp1;
+	struct cvmx_pip_int_reg_s cn70xx;
+	struct cvmx_pip_int_reg_s cn70xxp1;
+	struct cvmx_pip_int_reg_s cnf71xx;
+};
+
+typedef union cvmx_pip_int_reg cvmx_pip_int_reg_t;
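+
+/*
+ * A minimal sketch of checking which enabled exception events have fired,
+ * assuming the csr_rd() register accessor; how the bits are acknowledged
+ * is chip specific and not shown here:
+ *
+ *   cvmx_pip_int_reg_t int_reg;
+ *   cvmx_pip_int_en_t int_en;
+ *
+ *   int_reg.u64 = csr_rd(CVMX_PIP_INT_REG);
+ *   int_en.u64 = csr_rd(CVMX_PIP_INT_EN);
+ *   if (int_reg.s.crcerr && int_en.s.crcerr)
+ *           handle the CRC error, and so on for the other bits
+ */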
+
+/**
+ * cvmx_pip_ip_offset
+ *
+ * An 8-byte offset to find the start of the IP header in the data portion of IP workQ entries
+ *
+ */
+union cvmx_pip_ip_offset {
+	u64 u64;
+	struct cvmx_pip_ip_offset_s {
+		u64 reserved_3_63 : 61;
+		u64 offset : 3;
+	} s;
+	struct cvmx_pip_ip_offset_s cn30xx;
+	struct cvmx_pip_ip_offset_s cn31xx;
+	struct cvmx_pip_ip_offset_s cn38xx;
+	struct cvmx_pip_ip_offset_s cn38xxp2;
+	struct cvmx_pip_ip_offset_s cn50xx;
+	struct cvmx_pip_ip_offset_s cn52xx;
+	struct cvmx_pip_ip_offset_s cn52xxp1;
+	struct cvmx_pip_ip_offset_s cn56xx;
+	struct cvmx_pip_ip_offset_s cn56xxp1;
+	struct cvmx_pip_ip_offset_s cn58xx;
+	struct cvmx_pip_ip_offset_s cn58xxp1;
+	struct cvmx_pip_ip_offset_s cn61xx;
+	struct cvmx_pip_ip_offset_s cn63xx;
+	struct cvmx_pip_ip_offset_s cn63xxp1;
+	struct cvmx_pip_ip_offset_s cn66xx;
+	struct cvmx_pip_ip_offset_s cn68xx;
+	struct cvmx_pip_ip_offset_s cn68xxp1;
+	struct cvmx_pip_ip_offset_s cn70xx;
+	struct cvmx_pip_ip_offset_s cn70xxp1;
+	struct cvmx_pip_ip_offset_s cnf71xx;
+};
+
+typedef union cvmx_pip_ip_offset cvmx_pip_ip_offset_t;
+
+/**
+ * cvmx_pip_pri_tbl#
+ *
+ * Notes:
+ * The priority level from HiGig header is as follows
+ *
+ * HiGig/HiGig+ PRI = [1'b0, CNG[1:0], COS[2:0]]
+ * HiGig2       PRI = [DP[1:0], TC[3:0]]
+ *
+ * DSA          PRI = WORD0[15:13]
+ *
+ * VLAN         PRI = VLAN[15:13]
+ *
+ * DIFFSERV         = IP.TOS/CLASS<7:2>
+ */
+union cvmx_pip_pri_tblx {
+	u64 u64;
+	struct cvmx_pip_pri_tblx_s {
+		u64 diff2_padd : 8;
+		u64 hg2_padd : 8;
+		u64 vlan2_padd : 8;
+		u64 reserved_38_39 : 2;
+		u64 diff2_bpid : 6;
+		u64 reserved_30_31 : 2;
+		u64 hg2_bpid : 6;
+		u64 reserved_22_23 : 2;
+		u64 vlan2_bpid : 6;
+		u64 reserved_11_15 : 5;
+		u64 diff2_qos : 3;
+		u64 reserved_7_7 : 1;
+		u64 hg2_qos : 3;
+		u64 reserved_3_3 : 1;
+		u64 vlan2_qos : 3;
+	} s;
+	struct cvmx_pip_pri_tblx_s cn68xx;
+	struct cvmx_pip_pri_tblx_s cn68xxp1;
+};
+
+typedef union cvmx_pip_pri_tblx cvmx_pip_pri_tblx_t;
+
+/**
+ * cvmx_pip_prt_cfg#
+ *
+ * PIP_PRT_CFGX = Per port config information
+ *
+ */
+union cvmx_pip_prt_cfgx {
+	u64 u64;
+	struct cvmx_pip_prt_cfgx_s {
+		u64 reserved_55_63 : 9;
+		u64 ih_pri : 1;
+		u64 len_chk_sel : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 lenerr_en : 1;
+		u64 maxerr_en : 1;
+		u64 minerr_en : 1;
+		u64 grp_wat_47 : 4;
+		u64 qos_wat_47 : 4;
+		u64 reserved_37_39 : 3;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 hg_qos : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 qos_vsel : 1;
+		u64 qos_vod : 1;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 higig_en : 1;
+		u64 dsa_en : 1;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} s;
+	struct cvmx_pip_prt_cfgx_cn30xx {
+		u64 reserved_37_63 : 27;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 reserved_27_27 : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 reserved_18_19 : 2;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_10_15 : 6;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn30xx;
+	struct cvmx_pip_prt_cfgx_cn30xx cn31xx;
+	struct cvmx_pip_prt_cfgx_cn38xx {
+		u64 reserved_37_63 : 27;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 reserved_27_27 : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 reserved_18_19 : 2;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 reserved_10_11 : 2;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn38xx;
+	struct cvmx_pip_prt_cfgx_cn38xx cn38xxp2;
+	struct cvmx_pip_prt_cfgx_cn50xx {
+		u64 reserved_53_63 : 11;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 lenerr_en : 1;
+		u64 maxerr_en : 1;
+		u64 minerr_en : 1;
+		u64 grp_wat_47 : 4;
+		u64 qos_wat_47 : 4;
+		u64 reserved_37_39 : 3;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 reserved_27_27 : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 reserved_19_19 : 1;
+		u64 qos_vod : 1;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 reserved_10_11 : 2;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn50xx;
+	struct cvmx_pip_prt_cfgx_cn52xx {
+		u64 reserved_53_63 : 11;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 lenerr_en : 1;
+		u64 maxerr_en : 1;
+		u64 minerr_en : 1;
+		u64 grp_wat_47 : 4;
+		u64 qos_wat_47 : 4;
+		u64 reserved_37_39 : 3;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 hg_qos : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 qos_vsel : 1;
+		u64 qos_vod : 1;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 higig_en : 1;
+		u64 dsa_en : 1;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn52xx;
+	struct cvmx_pip_prt_cfgx_cn52xx cn52xxp1;
+	struct cvmx_pip_prt_cfgx_cn52xx cn56xx;
+	struct cvmx_pip_prt_cfgx_cn50xx cn56xxp1;
+	struct cvmx_pip_prt_cfgx_cn58xx {
+		u64 reserved_37_63 : 27;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 reserved_27_27 : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 reserved_19_19 : 1;
+		u64 qos_vod : 1;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 reserved_10_11 : 2;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn58xx;
+	struct cvmx_pip_prt_cfgx_cn58xx cn58xxp1;
+	struct cvmx_pip_prt_cfgx_cn52xx cn61xx;
+	struct cvmx_pip_prt_cfgx_cn52xx cn63xx;
+	struct cvmx_pip_prt_cfgx_cn52xx cn63xxp1;
+	struct cvmx_pip_prt_cfgx_cn52xx cn66xx;
+	struct cvmx_pip_prt_cfgx_cn68xx {
+		u64 reserved_55_63 : 9;
+		u64 ih_pri : 1;
+		u64 len_chk_sel : 1;
+		u64 pad_len : 1;
+		u64 vlan_len : 1;
+		u64 lenerr_en : 1;
+		u64 maxerr_en : 1;
+		u64 minerr_en : 1;
+		u64 grp_wat_47 : 4;
+		u64 qos_wat_47 : 4;
+		u64 reserved_37_39 : 3;
+		u64 rawdrp : 1;
+		u64 tag_inc : 2;
+		u64 dyn_rs : 1;
+		u64 inst_hdr : 1;
+		u64 grp_wat : 4;
+		u64 hg_qos : 1;
+		u64 qos : 3;
+		u64 qos_wat : 4;
+		u64 reserved_19_19 : 1;
+		u64 qos_vod : 1;
+		u64 qos_diff : 1;
+		u64 qos_vlan : 1;
+		u64 reserved_13_15 : 3;
+		u64 crc_en : 1;
+		u64 higig_en : 1;
+		u64 dsa_en : 1;
+		cvmx_pip_port_parse_mode_t mode : 2;
+		u64 reserved_7_7 : 1;
+		u64 skip : 7;
+	} cn68xx;
+	struct cvmx_pip_prt_cfgx_cn68xx cn68xxp1;
+	struct cvmx_pip_prt_cfgx_cn52xx cn70xx;
+	struct cvmx_pip_prt_cfgx_cn52xx cn70xxp1;
+	struct cvmx_pip_prt_cfgx_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pip_prt_cfgx cvmx_pip_prt_cfgx_t;
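+
+/*
+ * A minimal sketch of a read-modify-write of the per-port config, here
+ * enabling the VLAN based QOS override for a port, assuming the
+ * csr_rd()/csr_wr() register accessors:
+ *
+ *   cvmx_pip_prt_cfgx_t prt_cfg;
+ *
+ *   prt_cfg.u64 = csr_rd(CVMX_PIP_PRT_CFGX(port));
+ *   prt_cfg.s.qos_vlan = 1;
+ *   csr_wr(CVMX_PIP_PRT_CFGX(port), prt_cfg.u64);
+ */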
+
+/**
+ * cvmx_pip_prt_cfgb#
+ *
+ * Notes:
+ * PIP_PRT_CFGB* does not exist prior to pass 1.2.
+ *
+ */
+union cvmx_pip_prt_cfgbx {
+	u64 u64;
+	struct cvmx_pip_prt_cfgbx_s {
+		u64 reserved_39_63 : 25;
+		u64 alt_skp_sel : 2;
+		u64 alt_skp_en : 1;
+		u64 reserved_35_35 : 1;
+		u64 bsel_num : 2;
+		u64 bsel_en : 1;
+		u64 reserved_24_31 : 8;
+		u64 base : 8;
+		u64 reserved_6_15 : 10;
+		u64 bpid : 6;
+	} s;
+	struct cvmx_pip_prt_cfgbx_cn61xx {
+		u64 reserved_39_63 : 25;
+		u64 alt_skp_sel : 2;
+		u64 alt_skp_en : 1;
+		u64 reserved_35_35 : 1;
+		u64 bsel_num : 2;
+		u64 bsel_en : 1;
+		u64 reserved_0_31 : 32;
+	} cn61xx;
+	struct cvmx_pip_prt_cfgbx_cn66xx {
+		u64 reserved_39_63 : 25;
+		u64 alt_skp_sel : 2;
+		u64 alt_skp_en : 1;
+		u64 reserved_0_35 : 36;
+	} cn66xx;
+	struct cvmx_pip_prt_cfgbx_s cn68xx;
+	struct cvmx_pip_prt_cfgbx_cn68xxp1 {
+		u64 reserved_24_63 : 40;
+		u64 base : 8;
+		u64 reserved_6_15 : 10;
+		u64 bpid : 6;
+	} cn68xxp1;
+	struct cvmx_pip_prt_cfgbx_cn61xx cn70xx;
+	struct cvmx_pip_prt_cfgbx_cn61xx cn70xxp1;
+	struct cvmx_pip_prt_cfgbx_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pip_prt_cfgbx cvmx_pip_prt_cfgbx_t;
+
+/**
+ * cvmx_pip_prt_tag#
+ *
+ * PIP_PRT_TAGX = Per port config information
+ *
+ */
+union cvmx_pip_prt_tagx {
+	u64 u64;
+	struct cvmx_pip_prt_tagx_s {
+		u64 reserved_54_63 : 10;
+		u64 portadd_en : 1;
+		u64 inc_hwchk : 1;
+		u64 reserved_50_51 : 2;
+		u64 grptagbase_msb : 2;
+		u64 reserved_46_47 : 2;
+		u64 grptagmask_msb : 2;
+		u64 reserved_42_43 : 2;
+		u64 grp_msb : 2;
+		u64 grptagbase : 4;
+		u64 grptagmask : 4;
+		u64 grptag : 1;
+		u64 grptag_mskip : 1;
+		u64 tag_mode : 2;
+		u64 inc_vs : 2;
+		u64 inc_vlan : 1;
+		u64 inc_prt_flag : 1;
+		u64 ip6_dprt_flag : 1;
+		u64 ip4_dprt_flag : 1;
+		u64 ip6_sprt_flag : 1;
+		u64 ip4_sprt_flag : 1;
+		u64 ip6_nxth_flag : 1;
+		u64 ip4_pctl_flag : 1;
+		u64 ip6_dst_flag : 1;
+		u64 ip4_dst_flag : 1;
+		u64 ip6_src_flag : 1;
+		u64 ip4_src_flag : 1;
+		cvmx_pow_tag_type_t tcp6_tag_type : 2;
+		cvmx_pow_tag_type_t tcp4_tag_type : 2;
+		cvmx_pow_tag_type_t ip6_tag_type : 2;
+		cvmx_pow_tag_type_t ip4_tag_type : 2;
+		cvmx_pow_tag_type_t non_tag_type : 2;
+		u64 grp : 4;
+	} s;
+	struct cvmx_pip_prt_tagx_cn30xx {
+		u64 reserved_40_63 : 24;
+		u64 grptagbase : 4;
+		u64 grptagmask : 4;
+		u64 grptag : 1;
+		u64 reserved_30_30 : 1;
+		u64 tag_mode : 2;
+		u64 inc_vs : 2;
+		u64 inc_vlan : 1;
+		u64 inc_prt_flag : 1;
+		u64 ip6_dprt_flag : 1;
+		u64 ip4_dprt_flag : 1;
+		u64 ip6_sprt_flag : 1;
+		u64 ip4_sprt_flag : 1;
+		u64 ip6_nxth_flag : 1;
+		u64 ip4_pctl_flag : 1;
+		u64 ip6_dst_flag : 1;
+		u64 ip4_dst_flag : 1;
+		u64 ip6_src_flag : 1;
+		u64 ip4_src_flag : 1;
+		cvmx_pow_tag_type_t tcp6_tag_type : 2;
+		cvmx_pow_tag_type_t tcp4_tag_type : 2;
+		cvmx_pow_tag_type_t ip6_tag_type : 2;
+		cvmx_pow_tag_type_t ip4_tag_type : 2;
+		cvmx_pow_tag_type_t non_tag_type : 2;
+		u64 grp : 4;
+	} cn30xx;
+	struct cvmx_pip_prt_tagx_cn30xx cn31xx;
+	struct cvmx_pip_prt_tagx_cn30xx cn38xx;
+	struct cvmx_pip_prt_tagx_cn30xx cn38xxp2;
+	struct cvmx_pip_prt_tagx_cn50xx {
+		u64 reserved_40_63 : 24;
+		u64 grptagbase : 4;
+		u64 grptagmask : 4;
+		u64 grptag : 1;
+		u64 grptag_mskip : 1;
+		u64 tag_mode : 2;
+		u64 inc_vs : 2;
+		u64 inc_vlan : 1;
+		u64 inc_prt_flag : 1;
+		u64 ip6_dprt_flag : 1;
+		u64 ip4_dprt_flag : 1;
+		u64 ip6_sprt_flag : 1;
+		u64 ip4_sprt_flag : 1;
+		u64 ip6_nxth_flag : 1;
+		u64 ip4_pctl_flag : 1;
+		u64 ip6_dst_flag : 1;
+		u64 ip4_dst_flag : 1;
+		u64 ip6_src_flag : 1;
+		u64 ip4_src_flag : 1;
+		cvmx_pow_tag_type_t tcp6_tag_type : 2;
+		cvmx_pow_tag_type_t tcp4_tag_type : 2;
+		cvmx_pow_tag_type_t ip6_tag_type : 2;
+		cvmx_pow_tag_type_t ip4_tag_type : 2;
+		cvmx_pow_tag_type_t non_tag_type : 2;
+		u64 grp : 4;
+	} cn50xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn52xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn52xxp1;
+	struct cvmx_pip_prt_tagx_cn50xx cn56xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn56xxp1;
+	struct cvmx_pip_prt_tagx_cn30xx cn58xx;
+	struct cvmx_pip_prt_tagx_cn30xx cn58xxp1;
+	struct cvmx_pip_prt_tagx_cn50xx cn61xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn63xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn63xxp1;
+	struct cvmx_pip_prt_tagx_cn50xx cn66xx;
+	struct cvmx_pip_prt_tagx_s cn68xx;
+	struct cvmx_pip_prt_tagx_s cn68xxp1;
+	struct cvmx_pip_prt_tagx_cn50xx cn70xx;
+	struct cvmx_pip_prt_tagx_cn50xx cn70xxp1;
+	struct cvmx_pip_prt_tagx_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pip_prt_tagx cvmx_pip_prt_tagx_t;
+
+/**
+ * cvmx_pip_qos_diff#
+ *
+ * PIP_QOS_DIFFX = QOS Diffserv Tables
+ *
+ */
+union cvmx_pip_qos_diffx {
+	u64 u64;
+	struct cvmx_pip_qos_diffx_s {
+		u64 reserved_3_63 : 61;
+		u64 qos : 3;
+	} s;
+	struct cvmx_pip_qos_diffx_s cn30xx;
+	struct cvmx_pip_qos_diffx_s cn31xx;
+	struct cvmx_pip_qos_diffx_s cn38xx;
+	struct cvmx_pip_qos_diffx_s cn38xxp2;
+	struct cvmx_pip_qos_diffx_s cn50xx;
+	struct cvmx_pip_qos_diffx_s cn52xx;
+	struct cvmx_pip_qos_diffx_s cn52xxp1;
+	struct cvmx_pip_qos_diffx_s cn56xx;
+	struct cvmx_pip_qos_diffx_s cn56xxp1;
+	struct cvmx_pip_qos_diffx_s cn58xx;
+	struct cvmx_pip_qos_diffx_s cn58xxp1;
+	struct cvmx_pip_qos_diffx_s cn61xx;
+	struct cvmx_pip_qos_diffx_s cn63xx;
+	struct cvmx_pip_qos_diffx_s cn63xxp1;
+	struct cvmx_pip_qos_diffx_s cn66xx;
+	struct cvmx_pip_qos_diffx_s cn70xx;
+	struct cvmx_pip_qos_diffx_s cn70xxp1;
+	struct cvmx_pip_qos_diffx_s cnf71xx;
+};
+
+typedef union cvmx_pip_qos_diffx cvmx_pip_qos_diffx_t;
+
+/**
+ * cvmx_pip_qos_vlan#
+ *
+ * If the PIP identifies a packet as DSA/VLAN tagged, then the QOS
+ * can be set based on the DSA/VLAN user priority.  These eight registers
+ * comprise the QOS values for all DSA/VLAN user priority values.
+ */
+union cvmx_pip_qos_vlanx {
+	u64 u64;
+	struct cvmx_pip_qos_vlanx_s {
+		u64 reserved_7_63 : 57;
+		u64 qos1 : 3;
+		u64 reserved_3_3 : 1;
+		u64 qos : 3;
+	} s;
+	struct cvmx_pip_qos_vlanx_cn30xx {
+		u64 reserved_3_63 : 61;
+		u64 qos : 3;
+	} cn30xx;
+	struct cvmx_pip_qos_vlanx_cn30xx cn31xx;
+	struct cvmx_pip_qos_vlanx_cn30xx cn38xx;
+	struct cvmx_pip_qos_vlanx_cn30xx cn38xxp2;
+	struct cvmx_pip_qos_vlanx_cn30xx cn50xx;
+	struct cvmx_pip_qos_vlanx_s cn52xx;
+	struct cvmx_pip_qos_vlanx_s cn52xxp1;
+	struct cvmx_pip_qos_vlanx_s cn56xx;
+	struct cvmx_pip_qos_vlanx_cn30xx cn56xxp1;
+	struct cvmx_pip_qos_vlanx_cn30xx cn58xx;
+	struct cvmx_pip_qos_vlanx_cn30xx cn58xxp1;
+	struct cvmx_pip_qos_vlanx_s cn61xx;
+	struct cvmx_pip_qos_vlanx_s cn63xx;
+	struct cvmx_pip_qos_vlanx_s cn63xxp1;
+	struct cvmx_pip_qos_vlanx_s cn66xx;
+	struct cvmx_pip_qos_vlanx_s cn70xx;
+	struct cvmx_pip_qos_vlanx_s cn70xxp1;
+	struct cvmx_pip_qos_vlanx_s cnf71xx;
+};
+
+typedef union cvmx_pip_qos_vlanx cvmx_pip_qos_vlanx_t;
+
+/**
+ * cvmx_pip_qos_watch#
+ *
+ * Sets up the Configuration CSRs for the four QOS Watchers.
+ * Each Watcher can be set to look for a specific protocol,
+ * TCP/UDP destination port, or Ethertype to override the
+ * default QOS value.
+ */
+union cvmx_pip_qos_watchx {
+	u64 u64;
+	struct cvmx_pip_qos_watchx_s {
+		u64 reserved_48_63 : 16;
+		u64 mask : 16;
+		u64 reserved_30_31 : 2;
+		u64 grp : 6;
+		u64 reserved_23_23 : 1;
+		u64 qos : 3;
+		u64 reserved_16_19 : 4;
+		u64 match_value : 16;
+	} s;
+	struct cvmx_pip_qos_watchx_cn30xx {
+		u64 reserved_48_63 : 16;
+		u64 mask : 16;
+		u64 reserved_28_31 : 4;
+		u64 grp : 4;
+		u64 reserved_23_23 : 1;
+		u64 qos : 3;
+		u64 reserved_18_19 : 2;
+		cvmx_pip_qos_watch_types match_type : 2;
+		u64 match_value : 16;
+	} cn30xx;
+	struct cvmx_pip_qos_watchx_cn30xx cn31xx;
+	struct cvmx_pip_qos_watchx_cn30xx cn38xx;
+	struct cvmx_pip_qos_watchx_cn30xx cn38xxp2;
+	struct cvmx_pip_qos_watchx_cn50xx {
+		u64 reserved_48_63 : 16;
+		u64 mask : 16;
+		u64 reserved_28_31 : 4;
+		u64 grp : 4;
+		u64 reserved_23_23 : 1;
+		u64 qos : 3;
+		u64 reserved_19_19 : 1;
+		cvmx_pip_qos_watch_types match_type : 3;
+		u64 match_value : 16;
+	} cn50xx;
+	struct cvmx_pip_qos_watchx_cn50xx cn52xx;
+	struct cvmx_pip_qos_watchx_cn50xx cn52xxp1;
+	struct cvmx_pip_qos_watchx_cn50xx cn56xx;
+	struct cvmx_pip_qos_watchx_cn50xx cn56xxp1;
+	struct cvmx_pip_qos_watchx_cn30xx cn58xx;
+	struct cvmx_pip_qos_watchx_cn30xx cn58xxp1;
+	struct cvmx_pip_qos_watchx_cn50xx cn61xx;
+	struct cvmx_pip_qos_watchx_cn50xx cn63xx;
+	struct cvmx_pip_qos_watchx_cn50xx cn63xxp1;
+	struct cvmx_pip_qos_watchx_cn50xx cn66xx;
+	struct cvmx_pip_qos_watchx_cn68xx {
+		u64 reserved_48_63 : 16;
+		u64 mask : 16;
+		u64 reserved_30_31 : 2;
+		u64 grp : 6;
+		u64 reserved_23_23 : 1;
+		u64 qos : 3;
+		u64 reserved_19_19 : 1;
+		cvmx_pip_qos_watch_types match_type : 3;
+		u64 match_value : 16;
+	} cn68xx;
+	struct cvmx_pip_qos_watchx_cn68xx cn68xxp1;
+	struct cvmx_pip_qos_watchx_cn70xx {
+		u64 reserved_48_63 : 16;
+		u64 mask : 16;
+		u64 reserved_28_31 : 4;
+		u64 grp : 4;
+		u64 reserved_23_23 : 1;
+		u64 qos : 3;
+		u64 reserved_19_19 : 1;
+		u64 typ : 3;
+		u64 match_value : 16;
+	} cn70xx;
+	struct cvmx_pip_qos_watchx_cn70xx cn70xxp1;
+	struct cvmx_pip_qos_watchx_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pip_qos_watchx cvmx_pip_qos_watchx_t;
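+
+/*
+ * A minimal sketch of setting up QOS watcher 0, assuming the csr_wr()
+ * register accessor.  Note that the match type field only exists in the
+ * chip specific views (e.g. cn50xx.match_type) and must be set there;
+ * the cvmx_pip_qos_watch_types encoding is not repeated here:
+ *
+ *   cvmx_pip_qos_watchx_t watch;
+ *
+ *   watch.u64 = 0;
+ *   watch.s.match_value = 80;        TCP/UDP destination port to watch
+ *   watch.s.mask = 0xffff;           16-bit compare mask
+ *   watch.s.qos = 5;                 QOS value to apply on a match
+ *   csr_wr(CVMX_PIP_QOS_WATCHX(0), watch.u64);
+ */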
+
+/**
+ * cvmx_pip_raw_word
+ *
+ * The RAW Word2 to be inserted into the workQ entry of RAWFULL packets.
+ *
+ */
+union cvmx_pip_raw_word {
+	u64 u64;
+	struct cvmx_pip_raw_word_s {
+		u64 reserved_56_63 : 8;
+		u64 word : 56;
+	} s;
+	struct cvmx_pip_raw_word_s cn30xx;
+	struct cvmx_pip_raw_word_s cn31xx;
+	struct cvmx_pip_raw_word_s cn38xx;
+	struct cvmx_pip_raw_word_s cn38xxp2;
+	struct cvmx_pip_raw_word_s cn50xx;
+	struct cvmx_pip_raw_word_s cn52xx;
+	struct cvmx_pip_raw_word_s cn52xxp1;
+	struct cvmx_pip_raw_word_s cn56xx;
+	struct cvmx_pip_raw_word_s cn56xxp1;
+	struct cvmx_pip_raw_word_s cn58xx;
+	struct cvmx_pip_raw_word_s cn58xxp1;
+	struct cvmx_pip_raw_word_s cn61xx;
+	struct cvmx_pip_raw_word_s cn63xx;
+	struct cvmx_pip_raw_word_s cn63xxp1;
+	struct cvmx_pip_raw_word_s cn66xx;
+	struct cvmx_pip_raw_word_s cn68xx;
+	struct cvmx_pip_raw_word_s cn68xxp1;
+	struct cvmx_pip_raw_word_s cn70xx;
+	struct cvmx_pip_raw_word_s cn70xxp1;
+	struct cvmx_pip_raw_word_s cnf71xx;
+};
+
+typedef union cvmx_pip_raw_word cvmx_pip_raw_word_t;
+
+/**
+ * cvmx_pip_sft_rst
+ *
+ * When written to a '1', resets the PIP block
+ *
+ */
+union cvmx_pip_sft_rst {
+	u64 u64;
+	struct cvmx_pip_sft_rst_s {
+		u64 reserved_1_63 : 63;
+		u64 rst : 1;
+	} s;
+	struct cvmx_pip_sft_rst_s cn30xx;
+	struct cvmx_pip_sft_rst_s cn31xx;
+	struct cvmx_pip_sft_rst_s cn38xx;
+	struct cvmx_pip_sft_rst_s cn50xx;
+	struct cvmx_pip_sft_rst_s cn52xx;
+	struct cvmx_pip_sft_rst_s cn52xxp1;
+	struct cvmx_pip_sft_rst_s cn56xx;
+	struct cvmx_pip_sft_rst_s cn56xxp1;
+	struct cvmx_pip_sft_rst_s cn58xx;
+	struct cvmx_pip_sft_rst_s cn58xxp1;
+	struct cvmx_pip_sft_rst_s cn61xx;
+	struct cvmx_pip_sft_rst_s cn63xx;
+	struct cvmx_pip_sft_rst_s cn63xxp1;
+	struct cvmx_pip_sft_rst_s cn66xx;
+	struct cvmx_pip_sft_rst_s cn68xx;
+	struct cvmx_pip_sft_rst_s cn68xxp1;
+	struct cvmx_pip_sft_rst_s cn70xx;
+	struct cvmx_pip_sft_rst_s cn70xxp1;
+	struct cvmx_pip_sft_rst_s cnf71xx;
+};
+
+typedef union cvmx_pip_sft_rst cvmx_pip_sft_rst_t;
+
+/**
+ * cvmx_pip_stat0_#
+ *
+ * PIP Statistics Counters
+ *
+ * Note: special stat counter behavior
+ *
+ * 1) Read and write operations must arbitrate for the statistics resources
+ *     along with the packet engines which are incrementing the counters.
+ *     In order to not drop packet information, the packet HW is always a
+ *     higher priority and the CSR requests will only be satisfied when
+ *     there are idle cycles.  This can potentially cause long delays if the
+ *     system becomes full.
+ *
+ * 2) stat counters can be cleared in two ways.  If PIP_STAT_CTL[RDCLR] is
+ *     set, then all read accesses will clear the register.  In addition,
+ *     any write to a stats register will also reset the register to zero.
+ *     Please note that the clearing operations must obey rule \#1 above.
+ *
+ * 3) all counters are wrapping - software must ensure they are read periodically
+ *
+ * 4) The counters accumulate statistics for packets that are sent to PKI.  If
+ *    PTP_MODE is enabled, the 8B timestamp is prepended to the packet.  This
+ *    additional 8B of data is captured in the octet counts.
+ *
+ * 5) X represents either the packet's port-kind or backpressure ID as
+ *    determined by PIP_STAT_CTL[MODE]
+ * PIP_STAT0_X = PIP_STAT_DRP_PKTS / PIP_STAT_DRP_OCTS
+ */
+union cvmx_pip_stat0_x {
+	u64 u64;
+	struct cvmx_pip_stat0_x_s {
+		u64 drp_pkts : 32;
+		u64 drp_octs : 32;
+	} s;
+	struct cvmx_pip_stat0_x_s cn68xx;
+	struct cvmx_pip_stat0_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat0_x cvmx_pip_stat0_x_t;
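+
+/*
+ * A minimal sketch of reading the drop counters for a port-kind, assuming
+ * the csr_rd() register accessor.  Per rule 2 above, this read also
+ * clears the register when PIP_STAT_CTL[RDCLR] is set:
+ *
+ *   cvmx_pip_stat0_x_t stat0;
+ *
+ *   stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
+ *   drop_packets += stat0.s.drp_pkts;
+ *   drop_octets  += stat0.s.drp_octs;
+ */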
+
+/**
+ * cvmx_pip_stat0_prt#
+ *
+ * "PIP Statistics Counters
+ * Note: special stat counter behavior
+ * 1) Read and write operations must arbitrate for the statistics resources
+ * along with the packet engines which are incrementing the counters.
+ * In order to not drop packet information, the packet HW is always a
+ * higher priority and the CSR requests will only be satisified when
+ * there are idle cycles.  This can potentially cause long delays if the
+ * system becomes full.
+ * 2) stat counters can be cleared in two ways.  If PIP_STAT_CTL[RDCLR] is
+ * set, then all read accesses will clear the register.  In addition,
+ * any write to a stats register will also reset the register to zero.
+ * Please note that the clearing operations must obey rule \#1 above.
+ * 3) all counters are wrapping - software must ensure they are read periodically
+ * 4) The counters accumulate statistics for packets that are sent to PKI.  If
+ * PTP_MODE is enabled, the 8B timestamp is prepended to the packet.  This
+ * additional 8B of data is captured in the octet counts.
+ * PIP_STAT0_PRT = PIP_STAT_DRP_PKTS / PIP_STAT_DRP_OCTS"
+ */
+union cvmx_pip_stat0_prtx {
+	u64 u64;
+	struct cvmx_pip_stat0_prtx_s {
+		u64 drp_pkts : 32;
+		u64 drp_octs : 32;
+	} s;
+	struct cvmx_pip_stat0_prtx_s cn30xx;
+	struct cvmx_pip_stat0_prtx_s cn31xx;
+	struct cvmx_pip_stat0_prtx_s cn38xx;
+	struct cvmx_pip_stat0_prtx_s cn38xxp2;
+	struct cvmx_pip_stat0_prtx_s cn50xx;
+	struct cvmx_pip_stat0_prtx_s cn52xx;
+	struct cvmx_pip_stat0_prtx_s cn52xxp1;
+	struct cvmx_pip_stat0_prtx_s cn56xx;
+	struct cvmx_pip_stat0_prtx_s cn56xxp1;
+	struct cvmx_pip_stat0_prtx_s cn58xx;
+	struct cvmx_pip_stat0_prtx_s cn58xxp1;
+	struct cvmx_pip_stat0_prtx_s cn61xx;
+	struct cvmx_pip_stat0_prtx_s cn63xx;
+	struct cvmx_pip_stat0_prtx_s cn63xxp1;
+	struct cvmx_pip_stat0_prtx_s cn66xx;
+	struct cvmx_pip_stat0_prtx_s cn70xx;
+	struct cvmx_pip_stat0_prtx_s cn70xxp1;
+	struct cvmx_pip_stat0_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat0_prtx cvmx_pip_stat0_prtx_t;
+
+/**
+ * cvmx_pip_stat10_#
+ *
+ * PIP_STAT10_X = PIP_STAT_L2_MCAST / PIP_STAT_L2_BCAST
+ *
+ */
+union cvmx_pip_stat10_x {
+	u64 u64;
+	struct cvmx_pip_stat10_x_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_stat10_x_s cn68xx;
+	struct cvmx_pip_stat10_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat10_x cvmx_pip_stat10_x_t;
+
+/**
+ * cvmx_pip_stat10_prt#
+ *
+ * PIP_STAT10_PRTX = PIP_STAT_L2_MCAST / PIP_STAT_L2_BCAST
+ *
+ */
+union cvmx_pip_stat10_prtx {
+	u64 u64;
+	struct cvmx_pip_stat10_prtx_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_stat10_prtx_s cn52xx;
+	struct cvmx_pip_stat10_prtx_s cn52xxp1;
+	struct cvmx_pip_stat10_prtx_s cn56xx;
+	struct cvmx_pip_stat10_prtx_s cn56xxp1;
+	struct cvmx_pip_stat10_prtx_s cn61xx;
+	struct cvmx_pip_stat10_prtx_s cn63xx;
+	struct cvmx_pip_stat10_prtx_s cn63xxp1;
+	struct cvmx_pip_stat10_prtx_s cn66xx;
+	struct cvmx_pip_stat10_prtx_s cn70xx;
+	struct cvmx_pip_stat10_prtx_s cn70xxp1;
+	struct cvmx_pip_stat10_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat10_prtx cvmx_pip_stat10_prtx_t;
+
+/**
+ * cvmx_pip_stat11_#
+ *
+ * PIP_STAT11_X = PIP_STAT_L3_MCAST / PIP_STAT_L3_BCAST
+ *
+ */
+union cvmx_pip_stat11_x {
+	u64 u64;
+	struct cvmx_pip_stat11_x_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_stat11_x_s cn68xx;
+	struct cvmx_pip_stat11_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat11_x cvmx_pip_stat11_x_t;
+
+/**
+ * cvmx_pip_stat11_prt#
+ *
+ * PIP_STAT11_PRTX = PIP_STAT_L3_MCAST / PIP_STAT_L3_BCAST
+ *
+ */
+union cvmx_pip_stat11_prtx {
+	u64 u64;
+	struct cvmx_pip_stat11_prtx_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_stat11_prtx_s cn52xx;
+	struct cvmx_pip_stat11_prtx_s cn52xxp1;
+	struct cvmx_pip_stat11_prtx_s cn56xx;
+	struct cvmx_pip_stat11_prtx_s cn56xxp1;
+	struct cvmx_pip_stat11_prtx_s cn61xx;
+	struct cvmx_pip_stat11_prtx_s cn63xx;
+	struct cvmx_pip_stat11_prtx_s cn63xxp1;
+	struct cvmx_pip_stat11_prtx_s cn66xx;
+	struct cvmx_pip_stat11_prtx_s cn70xx;
+	struct cvmx_pip_stat11_prtx_s cn70xxp1;
+	struct cvmx_pip_stat11_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat11_prtx cvmx_pip_stat11_prtx_t;
+
+/**
+ * cvmx_pip_stat1_#
+ *
+ * PIP_STAT1_X = PIP_STAT_OCTS
+ *
+ */
+union cvmx_pip_stat1_x {
+	u64 u64;
+	struct cvmx_pip_stat1_x_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pip_stat1_x_s cn68xx;
+	struct cvmx_pip_stat1_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat1_x cvmx_pip_stat1_x_t;
+
+/**
+ * cvmx_pip_stat1_prt#
+ *
+ * PIP_STAT1_PRTX = PIP_STAT_OCTS
+ *
+ */
+union cvmx_pip_stat1_prtx {
+	u64 u64;
+	struct cvmx_pip_stat1_prtx_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pip_stat1_prtx_s cn30xx;
+	struct cvmx_pip_stat1_prtx_s cn31xx;
+	struct cvmx_pip_stat1_prtx_s cn38xx;
+	struct cvmx_pip_stat1_prtx_s cn38xxp2;
+	struct cvmx_pip_stat1_prtx_s cn50xx;
+	struct cvmx_pip_stat1_prtx_s cn52xx;
+	struct cvmx_pip_stat1_prtx_s cn52xxp1;
+	struct cvmx_pip_stat1_prtx_s cn56xx;
+	struct cvmx_pip_stat1_prtx_s cn56xxp1;
+	struct cvmx_pip_stat1_prtx_s cn58xx;
+	struct cvmx_pip_stat1_prtx_s cn58xxp1;
+	struct cvmx_pip_stat1_prtx_s cn61xx;
+	struct cvmx_pip_stat1_prtx_s cn63xx;
+	struct cvmx_pip_stat1_prtx_s cn63xxp1;
+	struct cvmx_pip_stat1_prtx_s cn66xx;
+	struct cvmx_pip_stat1_prtx_s cn70xx;
+	struct cvmx_pip_stat1_prtx_s cn70xxp1;
+	struct cvmx_pip_stat1_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat1_prtx cvmx_pip_stat1_prtx_t;
+
+/**
+ * cvmx_pip_stat2_#
+ *
+ * PIP_STAT2_X = PIP_STAT_PKTS     / PIP_STAT_RAW
+ *
+ */
+union cvmx_pip_stat2_x {
+	u64 u64;
+	struct cvmx_pip_stat2_x_s {
+		u64 pkts : 32;
+		u64 raw : 32;
+	} s;
+	struct cvmx_pip_stat2_x_s cn68xx;
+	struct cvmx_pip_stat2_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat2_x cvmx_pip_stat2_x_t;
+
+/**
+ * cvmx_pip_stat2_prt#
+ *
+ * PIP_STAT2_PRTX = PIP_STAT_PKTS     / PIP_STAT_RAW
+ *
+ */
+union cvmx_pip_stat2_prtx {
+	u64 u64;
+	struct cvmx_pip_stat2_prtx_s {
+		u64 pkts : 32;
+		u64 raw : 32;
+	} s;
+	struct cvmx_pip_stat2_prtx_s cn30xx;
+	struct cvmx_pip_stat2_prtx_s cn31xx;
+	struct cvmx_pip_stat2_prtx_s cn38xx;
+	struct cvmx_pip_stat2_prtx_s cn38xxp2;
+	struct cvmx_pip_stat2_prtx_s cn50xx;
+	struct cvmx_pip_stat2_prtx_s cn52xx;
+	struct cvmx_pip_stat2_prtx_s cn52xxp1;
+	struct cvmx_pip_stat2_prtx_s cn56xx;
+	struct cvmx_pip_stat2_prtx_s cn56xxp1;
+	struct cvmx_pip_stat2_prtx_s cn58xx;
+	struct cvmx_pip_stat2_prtx_s cn58xxp1;
+	struct cvmx_pip_stat2_prtx_s cn61xx;
+	struct cvmx_pip_stat2_prtx_s cn63xx;
+	struct cvmx_pip_stat2_prtx_s cn63xxp1;
+	struct cvmx_pip_stat2_prtx_s cn66xx;
+	struct cvmx_pip_stat2_prtx_s cn70xx;
+	struct cvmx_pip_stat2_prtx_s cn70xxp1;
+	struct cvmx_pip_stat2_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat2_prtx cvmx_pip_stat2_prtx_t;
+
+/**
+ * cvmx_pip_stat3_#
+ *
+ * PIP_STAT3_X = PIP_STAT_BCST     / PIP_STAT_MCST
+ *
+ */
+union cvmx_pip_stat3_x {
+	u64 u64;
+	struct cvmx_pip_stat3_x_s {
+		u64 bcst : 32;
+		u64 mcst : 32;
+	} s;
+	struct cvmx_pip_stat3_x_s cn68xx;
+	struct cvmx_pip_stat3_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat3_x cvmx_pip_stat3_x_t;
+
+/**
+ * cvmx_pip_stat3_prt#
+ *
+ * PIP_STAT3_PRTX = PIP_STAT_BCST     / PIP_STAT_MCST
+ *
+ */
+union cvmx_pip_stat3_prtx {
+	u64 u64;
+	struct cvmx_pip_stat3_prtx_s {
+		u64 bcst : 32;
+		u64 mcst : 32;
+	} s;
+	struct cvmx_pip_stat3_prtx_s cn30xx;
+	struct cvmx_pip_stat3_prtx_s cn31xx;
+	struct cvmx_pip_stat3_prtx_s cn38xx;
+	struct cvmx_pip_stat3_prtx_s cn38xxp2;
+	struct cvmx_pip_stat3_prtx_s cn50xx;
+	struct cvmx_pip_stat3_prtx_s cn52xx;
+	struct cvmx_pip_stat3_prtx_s cn52xxp1;
+	struct cvmx_pip_stat3_prtx_s cn56xx;
+	struct cvmx_pip_stat3_prtx_s cn56xxp1;
+	struct cvmx_pip_stat3_prtx_s cn58xx;
+	struct cvmx_pip_stat3_prtx_s cn58xxp1;
+	struct cvmx_pip_stat3_prtx_s cn61xx;
+	struct cvmx_pip_stat3_prtx_s cn63xx;
+	struct cvmx_pip_stat3_prtx_s cn63xxp1;
+	struct cvmx_pip_stat3_prtx_s cn66xx;
+	struct cvmx_pip_stat3_prtx_s cn70xx;
+	struct cvmx_pip_stat3_prtx_s cn70xxp1;
+	struct cvmx_pip_stat3_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat3_prtx cvmx_pip_stat3_prtx_t;
+
+/**
+ * cvmx_pip_stat4_#
+ *
+ * PIP_STAT4_X = PIP_STAT_HIST1    / PIP_STAT_HIST0
+ *
+ */
+union cvmx_pip_stat4_x {
+	u64 u64;
+	struct cvmx_pip_stat4_x_s {
+		u64 h65to127 : 32;
+		u64 h64 : 32;
+	} s;
+	struct cvmx_pip_stat4_x_s cn68xx;
+	struct cvmx_pip_stat4_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat4_x cvmx_pip_stat4_x_t;
+
+/**
+ * cvmx_pip_stat4_prt#
+ *
+ * PIP_STAT4_PRTX = PIP_STAT_HIST1    / PIP_STAT_HIST0
+ *
+ */
+union cvmx_pip_stat4_prtx {
+	u64 u64;
+	struct cvmx_pip_stat4_prtx_s {
+		u64 h65to127 : 32;
+		u64 h64 : 32;
+	} s;
+	struct cvmx_pip_stat4_prtx_s cn30xx;
+	struct cvmx_pip_stat4_prtx_s cn31xx;
+	struct cvmx_pip_stat4_prtx_s cn38xx;
+	struct cvmx_pip_stat4_prtx_s cn38xxp2;
+	struct cvmx_pip_stat4_prtx_s cn50xx;
+	struct cvmx_pip_stat4_prtx_s cn52xx;
+	struct cvmx_pip_stat4_prtx_s cn52xxp1;
+	struct cvmx_pip_stat4_prtx_s cn56xx;
+	struct cvmx_pip_stat4_prtx_s cn56xxp1;
+	struct cvmx_pip_stat4_prtx_s cn58xx;
+	struct cvmx_pip_stat4_prtx_s cn58xxp1;
+	struct cvmx_pip_stat4_prtx_s cn61xx;
+	struct cvmx_pip_stat4_prtx_s cn63xx;
+	struct cvmx_pip_stat4_prtx_s cn63xxp1;
+	struct cvmx_pip_stat4_prtx_s cn66xx;
+	struct cvmx_pip_stat4_prtx_s cn70xx;
+	struct cvmx_pip_stat4_prtx_s cn70xxp1;
+	struct cvmx_pip_stat4_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat4_prtx cvmx_pip_stat4_prtx_t;
+
+/**
+ * cvmx_pip_stat5_#
+ *
+ * PIP_STAT5_X = PIP_STAT_HIST3    / PIP_STAT_HIST2
+ *
+ */
+union cvmx_pip_stat5_x {
+	u64 u64;
+	struct cvmx_pip_stat5_x_s {
+		u64 h256to511 : 32;
+		u64 h128to255 : 32;
+	} s;
+	struct cvmx_pip_stat5_x_s cn68xx;
+	struct cvmx_pip_stat5_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat5_x cvmx_pip_stat5_x_t;
+
+/**
+ * cvmx_pip_stat5_prt#
+ *
+ * PIP_STAT5_PRTX = PIP_STAT_HIST3    / PIP_STAT_HIST2
+ *
+ */
+union cvmx_pip_stat5_prtx {
+	u64 u64;
+	struct cvmx_pip_stat5_prtx_s {
+		u64 h256to511 : 32;
+		u64 h128to255 : 32;
+	} s;
+	struct cvmx_pip_stat5_prtx_s cn30xx;
+	struct cvmx_pip_stat5_prtx_s cn31xx;
+	struct cvmx_pip_stat5_prtx_s cn38xx;
+	struct cvmx_pip_stat5_prtx_s cn38xxp2;
+	struct cvmx_pip_stat5_prtx_s cn50xx;
+	struct cvmx_pip_stat5_prtx_s cn52xx;
+	struct cvmx_pip_stat5_prtx_s cn52xxp1;
+	struct cvmx_pip_stat5_prtx_s cn56xx;
+	struct cvmx_pip_stat5_prtx_s cn56xxp1;
+	struct cvmx_pip_stat5_prtx_s cn58xx;
+	struct cvmx_pip_stat5_prtx_s cn58xxp1;
+	struct cvmx_pip_stat5_prtx_s cn61xx;
+	struct cvmx_pip_stat5_prtx_s cn63xx;
+	struct cvmx_pip_stat5_prtx_s cn63xxp1;
+	struct cvmx_pip_stat5_prtx_s cn66xx;
+	struct cvmx_pip_stat5_prtx_s cn70xx;
+	struct cvmx_pip_stat5_prtx_s cn70xxp1;
+	struct cvmx_pip_stat5_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat5_prtx cvmx_pip_stat5_prtx_t;
+
+/**
+ * cvmx_pip_stat6_#
+ *
+ * PIP_STAT6_X = PIP_STAT_HIST5    / PIP_STAT_HIST4
+ *
+ */
+union cvmx_pip_stat6_x {
+	u64 u64;
+	struct cvmx_pip_stat6_x_s {
+		u64 h1024to1518 : 32;
+		u64 h512to1023 : 32;
+	} s;
+	struct cvmx_pip_stat6_x_s cn68xx;
+	struct cvmx_pip_stat6_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat6_x cvmx_pip_stat6_x_t;
+
+/**
+ * cvmx_pip_stat6_prt#
+ *
+ * PIP_STAT6_PRTX = PIP_STAT_HIST5    / PIP_STAT_HIST4
+ *
+ */
+union cvmx_pip_stat6_prtx {
+	u64 u64;
+	struct cvmx_pip_stat6_prtx_s {
+		u64 h1024to1518 : 32;
+		u64 h512to1023 : 32;
+	} s;
+	struct cvmx_pip_stat6_prtx_s cn30xx;
+	struct cvmx_pip_stat6_prtx_s cn31xx;
+	struct cvmx_pip_stat6_prtx_s cn38xx;
+	struct cvmx_pip_stat6_prtx_s cn38xxp2;
+	struct cvmx_pip_stat6_prtx_s cn50xx;
+	struct cvmx_pip_stat6_prtx_s cn52xx;
+	struct cvmx_pip_stat6_prtx_s cn52xxp1;
+	struct cvmx_pip_stat6_prtx_s cn56xx;
+	struct cvmx_pip_stat6_prtx_s cn56xxp1;
+	struct cvmx_pip_stat6_prtx_s cn58xx;
+	struct cvmx_pip_stat6_prtx_s cn58xxp1;
+	struct cvmx_pip_stat6_prtx_s cn61xx;
+	struct cvmx_pip_stat6_prtx_s cn63xx;
+	struct cvmx_pip_stat6_prtx_s cn63xxp1;
+	struct cvmx_pip_stat6_prtx_s cn66xx;
+	struct cvmx_pip_stat6_prtx_s cn70xx;
+	struct cvmx_pip_stat6_prtx_s cn70xxp1;
+	struct cvmx_pip_stat6_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat6_prtx cvmx_pip_stat6_prtx_t;
+
+/**
+ * cvmx_pip_stat7_#
+ *
+ * PIP_STAT7_X = PIP_STAT_FCS      / PIP_STAT_HIST6
+ *
+ */
+union cvmx_pip_stat7_x {
+	u64 u64;
+	struct cvmx_pip_stat7_x_s {
+		u64 fcs : 32;
+		u64 h1519 : 32;
+	} s;
+	struct cvmx_pip_stat7_x_s cn68xx;
+	struct cvmx_pip_stat7_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat7_x cvmx_pip_stat7_x_t;
+
+/**
+ * cvmx_pip_stat7_prt#
+ *
+ * PIP_STAT7_PRTX = PIP_STAT_FCS      / PIP_STAT_HIST6
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore FCS will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore FCS will never increment on sRIO ports 40-47
+ */
+union cvmx_pip_stat7_prtx {
+	u64 u64;
+	struct cvmx_pip_stat7_prtx_s {
+		u64 fcs : 32;
+		u64 h1519 : 32;
+	} s;
+	struct cvmx_pip_stat7_prtx_s cn30xx;
+	struct cvmx_pip_stat7_prtx_s cn31xx;
+	struct cvmx_pip_stat7_prtx_s cn38xx;
+	struct cvmx_pip_stat7_prtx_s cn38xxp2;
+	struct cvmx_pip_stat7_prtx_s cn50xx;
+	struct cvmx_pip_stat7_prtx_s cn52xx;
+	struct cvmx_pip_stat7_prtx_s cn52xxp1;
+	struct cvmx_pip_stat7_prtx_s cn56xx;
+	struct cvmx_pip_stat7_prtx_s cn56xxp1;
+	struct cvmx_pip_stat7_prtx_s cn58xx;
+	struct cvmx_pip_stat7_prtx_s cn58xxp1;
+	struct cvmx_pip_stat7_prtx_s cn61xx;
+	struct cvmx_pip_stat7_prtx_s cn63xx;
+	struct cvmx_pip_stat7_prtx_s cn63xxp1;
+	struct cvmx_pip_stat7_prtx_s cn66xx;
+	struct cvmx_pip_stat7_prtx_s cn70xx;
+	struct cvmx_pip_stat7_prtx_s cn70xxp1;
+	struct cvmx_pip_stat7_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat7_prtx cvmx_pip_stat7_prtx_t;
+
+/**
+ * cvmx_pip_stat8_#
+ *
+ * PIP_STAT8_X = PIP_STAT_FRAG     / PIP_STAT_UNDER
+ *
+ */
+union cvmx_pip_stat8_x {
+	u64 u64;
+	struct cvmx_pip_stat8_x_s {
+		u64 frag : 32;
+		u64 undersz : 32;
+	} s;
+	struct cvmx_pip_stat8_x_s cn68xx;
+	struct cvmx_pip_stat8_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat8_x cvmx_pip_stat8_x_t;
+
+/**
+ * cvmx_pip_stat8_prt#
+ *
+ * PIP_STAT8_PRTX = PIP_STAT_FRAG     / PIP_STAT_UNDER
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore FRAG will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore FRAG will never increment on sRIO ports 40-47
+ */
+union cvmx_pip_stat8_prtx {
+	u64 u64;
+	struct cvmx_pip_stat8_prtx_s {
+		u64 frag : 32;
+		u64 undersz : 32;
+	} s;
+	struct cvmx_pip_stat8_prtx_s cn30xx;
+	struct cvmx_pip_stat8_prtx_s cn31xx;
+	struct cvmx_pip_stat8_prtx_s cn38xx;
+	struct cvmx_pip_stat8_prtx_s cn38xxp2;
+	struct cvmx_pip_stat8_prtx_s cn50xx;
+	struct cvmx_pip_stat8_prtx_s cn52xx;
+	struct cvmx_pip_stat8_prtx_s cn52xxp1;
+	struct cvmx_pip_stat8_prtx_s cn56xx;
+	struct cvmx_pip_stat8_prtx_s cn56xxp1;
+	struct cvmx_pip_stat8_prtx_s cn58xx;
+	struct cvmx_pip_stat8_prtx_s cn58xxp1;
+	struct cvmx_pip_stat8_prtx_s cn61xx;
+	struct cvmx_pip_stat8_prtx_s cn63xx;
+	struct cvmx_pip_stat8_prtx_s cn63xxp1;
+	struct cvmx_pip_stat8_prtx_s cn66xx;
+	struct cvmx_pip_stat8_prtx_s cn70xx;
+	struct cvmx_pip_stat8_prtx_s cn70xxp1;
+	struct cvmx_pip_stat8_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat8_prtx cvmx_pip_stat8_prtx_t;
+
+/**
+ * cvmx_pip_stat9_#
+ *
+ * PIP_STAT9_X = PIP_STAT_JABBER   / PIP_STAT_OVER
+ *
+ */
+union cvmx_pip_stat9_x {
+	u64 u64;
+	struct cvmx_pip_stat9_x_s {
+		u64 jabber : 32;
+		u64 oversz : 32;
+	} s;
+	struct cvmx_pip_stat9_x_s cn68xx;
+	struct cvmx_pip_stat9_x_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat9_x cvmx_pip_stat9_x_t;
+
+/**
+ * cvmx_pip_stat9_prt#
+ *
+ * PIP_STAT9_PRTX = PIP_STAT_JABBER   / PIP_STAT_OVER
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore JABBER will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore JABBER will never increment on sRIO ports 40-47 due to FCS errors
+ * sRIO does use the JABBER opcode to communicate sRIO errors, therefore JABBER can increment under sRIO error conditions
+ */
+union cvmx_pip_stat9_prtx {
+	u64 u64;
+	struct cvmx_pip_stat9_prtx_s {
+		u64 jabber : 32;
+		u64 oversz : 32;
+	} s;
+	struct cvmx_pip_stat9_prtx_s cn30xx;
+	struct cvmx_pip_stat9_prtx_s cn31xx;
+	struct cvmx_pip_stat9_prtx_s cn38xx;
+	struct cvmx_pip_stat9_prtx_s cn38xxp2;
+	struct cvmx_pip_stat9_prtx_s cn50xx;
+	struct cvmx_pip_stat9_prtx_s cn52xx;
+	struct cvmx_pip_stat9_prtx_s cn52xxp1;
+	struct cvmx_pip_stat9_prtx_s cn56xx;
+	struct cvmx_pip_stat9_prtx_s cn56xxp1;
+	struct cvmx_pip_stat9_prtx_s cn58xx;
+	struct cvmx_pip_stat9_prtx_s cn58xxp1;
+	struct cvmx_pip_stat9_prtx_s cn61xx;
+	struct cvmx_pip_stat9_prtx_s cn63xx;
+	struct cvmx_pip_stat9_prtx_s cn63xxp1;
+	struct cvmx_pip_stat9_prtx_s cn66xx;
+	struct cvmx_pip_stat9_prtx_s cn70xx;
+	struct cvmx_pip_stat9_prtx_s cn70xxp1;
+	struct cvmx_pip_stat9_prtx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat9_prtx cvmx_pip_stat9_prtx_t;
+
+/**
+ * cvmx_pip_stat_ctl
+ *
+ * Controls how the PIP statistics counters are handled.
+ *
+ */
+union cvmx_pip_stat_ctl {
+	u64 u64;
+	struct cvmx_pip_stat_ctl_s {
+		u64 reserved_9_63 : 55;
+		u64 mode : 1;
+		u64 reserved_1_7 : 7;
+		u64 rdclr : 1;
+	} s;
+	struct cvmx_pip_stat_ctl_cn30xx {
+		u64 reserved_1_63 : 63;
+		u64 rdclr : 1;
+	} cn30xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn31xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn38xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn38xxp2;
+	struct cvmx_pip_stat_ctl_cn30xx cn50xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn52xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn52xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cn56xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn56xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cn58xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn58xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cn61xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn63xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn63xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cn66xx;
+	struct cvmx_pip_stat_ctl_s cn68xx;
+	struct cvmx_pip_stat_ctl_s cn68xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cn70xx;
+	struct cvmx_pip_stat_ctl_cn30xx cn70xxp1;
+	struct cvmx_pip_stat_ctl_cn30xx cnf71xx;
+};
+
+typedef union cvmx_pip_stat_ctl cvmx_pip_stat_ctl_t;
+
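+/*
+ * Illustrative sketch (not from the original header): enabling
+ * read-to-clear statistics. csr_rd()/csr_wr() and the CVMX_PIP_STAT_CTL
+ * address macro are assumed to come from earlier in this port.
+ *
+ *	cvmx_pip_stat_ctl_t stat_ctl;
+ *
+ *	stat_ctl.u64 = csr_rd(CVMX_PIP_STAT_CTL);
+ *	stat_ctl.s.rdclr = 1;		(counters clear when read)
+ *	csr_wr(CVMX_PIP_STAT_CTL, stat_ctl.u64);
+ */
+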
+/**
+ * cvmx_pip_stat_inb_errs#
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems.
+ */
+union cvmx_pip_stat_inb_errsx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_errsx_s {
+		u64 reserved_16_63 : 48;
+		u64 errs : 16;
+	} s;
+	struct cvmx_pip_stat_inb_errsx_s cn30xx;
+	struct cvmx_pip_stat_inb_errsx_s cn31xx;
+	struct cvmx_pip_stat_inb_errsx_s cn38xx;
+	struct cvmx_pip_stat_inb_errsx_s cn38xxp2;
+	struct cvmx_pip_stat_inb_errsx_s cn50xx;
+	struct cvmx_pip_stat_inb_errsx_s cn52xx;
+	struct cvmx_pip_stat_inb_errsx_s cn52xxp1;
+	struct cvmx_pip_stat_inb_errsx_s cn56xx;
+	struct cvmx_pip_stat_inb_errsx_s cn56xxp1;
+	struct cvmx_pip_stat_inb_errsx_s cn58xx;
+	struct cvmx_pip_stat_inb_errsx_s cn58xxp1;
+	struct cvmx_pip_stat_inb_errsx_s cn61xx;
+	struct cvmx_pip_stat_inb_errsx_s cn63xx;
+	struct cvmx_pip_stat_inb_errsx_s cn63xxp1;
+	struct cvmx_pip_stat_inb_errsx_s cn66xx;
+	struct cvmx_pip_stat_inb_errsx_s cn70xx;
+	struct cvmx_pip_stat_inb_errsx_s cn70xxp1;
+	struct cvmx_pip_stat_inb_errsx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat_inb_errsx cvmx_pip_stat_inb_errsx_t;
+
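+/*
+ * Illustrative sketch (not from the original header): polling the
+ * per-port inbound error count. csr_rd() and a
+ * CVMX_PIP_STAT_INB_ERRSX(port) address macro are assumed to come from
+ * earlier in this file.
+ *
+ *	cvmx_pip_stat_inb_errsx_t inb_errs;
+ *
+ *	inb_errs.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port));
+ *	if (inb_errs.s.errs)
+ *		printf("port %d: %u inbound error packets\n",
+ *		       port, (unsigned)inb_errs.s.errs);
+ */
+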
+/**
+ * cvmx_pip_stat_inb_errs_pknd#
+ *
+ * PIP_STAT_INB_ERRS_PKNDX = Inbound error packets received by PIP per pkind
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems.
+ */
+union cvmx_pip_stat_inb_errs_pkndx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_errs_pkndx_s {
+		u64 reserved_16_63 : 48;
+		u64 errs : 16;
+	} s;
+	struct cvmx_pip_stat_inb_errs_pkndx_s cn68xx;
+	struct cvmx_pip_stat_inb_errs_pkndx_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat_inb_errs_pkndx cvmx_pip_stat_inb_errs_pkndx_t;
+
+/**
+ * cvmx_pip_stat_inb_octs#
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems. The OCTS will include the bytes from
+ * timestamp fields in PTP_MODE.
+ */
+union cvmx_pip_stat_inb_octsx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_octsx_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pip_stat_inb_octsx_s cn30xx;
+	struct cvmx_pip_stat_inb_octsx_s cn31xx;
+	struct cvmx_pip_stat_inb_octsx_s cn38xx;
+	struct cvmx_pip_stat_inb_octsx_s cn38xxp2;
+	struct cvmx_pip_stat_inb_octsx_s cn50xx;
+	struct cvmx_pip_stat_inb_octsx_s cn52xx;
+	struct cvmx_pip_stat_inb_octsx_s cn52xxp1;
+	struct cvmx_pip_stat_inb_octsx_s cn56xx;
+	struct cvmx_pip_stat_inb_octsx_s cn56xxp1;
+	struct cvmx_pip_stat_inb_octsx_s cn58xx;
+	struct cvmx_pip_stat_inb_octsx_s cn58xxp1;
+	struct cvmx_pip_stat_inb_octsx_s cn61xx;
+	struct cvmx_pip_stat_inb_octsx_s cn63xx;
+	struct cvmx_pip_stat_inb_octsx_s cn63xxp1;
+	struct cvmx_pip_stat_inb_octsx_s cn66xx;
+	struct cvmx_pip_stat_inb_octsx_s cn70xx;
+	struct cvmx_pip_stat_inb_octsx_s cn70xxp1;
+	struct cvmx_pip_stat_inb_octsx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat_inb_octsx cvmx_pip_stat_inb_octsx_t;
+
+/**
+ * cvmx_pip_stat_inb_octs_pknd#
+ *
+ * PIP_STAT_INB_OCTS_PKNDX = Inbound octets received by PIP per pkind
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems. The OCTS will include the bytes from
+ * timestamp fields in PTP_MODE.
+ */
+union cvmx_pip_stat_inb_octs_pkndx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_octs_pkndx_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pip_stat_inb_octs_pkndx_s cn68xx;
+	struct cvmx_pip_stat_inb_octs_pkndx_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat_inb_octs_pkndx cvmx_pip_stat_inb_octs_pkndx_t;
+
+/**
+ * cvmx_pip_stat_inb_pkts#
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems.
+ */
+union cvmx_pip_stat_inb_pktsx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_pktsx_s {
+		u64 reserved_32_63 : 32;
+		u64 pkts : 32;
+	} s;
+	struct cvmx_pip_stat_inb_pktsx_s cn30xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn31xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn38xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn38xxp2;
+	struct cvmx_pip_stat_inb_pktsx_s cn50xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn52xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn52xxp1;
+	struct cvmx_pip_stat_inb_pktsx_s cn56xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn56xxp1;
+	struct cvmx_pip_stat_inb_pktsx_s cn58xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn58xxp1;
+	struct cvmx_pip_stat_inb_pktsx_s cn61xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn63xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn63xxp1;
+	struct cvmx_pip_stat_inb_pktsx_s cn66xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn70xx;
+	struct cvmx_pip_stat_inb_pktsx_s cn70xxp1;
+	struct cvmx_pip_stat_inb_pktsx_s cnf71xx;
+};
+
+typedef union cvmx_pip_stat_inb_pktsx cvmx_pip_stat_inb_pktsx_t;
+
+/**
+ * cvmx_pip_stat_inb_pkts_pknd#
+ *
+ * PIP_STAT_INB_PKTS_PKNDX = Inbound packets received by PIP per pkind
+ *
+ * Inbound stats collect all data sent to PIP from all packet interfaces.
+ * It's the raw counts of everything that comes into the block.  The counts
+ * will reflect all error packets and packets dropped by the PKI RED engine.
+ * These counts are intended for system debug, but could convey useful
+ * information in production systems.
+ */
+union cvmx_pip_stat_inb_pkts_pkndx {
+	u64 u64;
+	struct cvmx_pip_stat_inb_pkts_pkndx_s {
+		u64 reserved_32_63 : 32;
+		u64 pkts : 32;
+	} s;
+	struct cvmx_pip_stat_inb_pkts_pkndx_s cn68xx;
+	struct cvmx_pip_stat_inb_pkts_pkndx_s cn68xxp1;
+};
+
+typedef union cvmx_pip_stat_inb_pkts_pkndx cvmx_pip_stat_inb_pkts_pkndx_t;
+
+/**
+ * cvmx_pip_sub_pkind_fcs#
+ */
+union cvmx_pip_sub_pkind_fcsx {
+	u64 u64;
+	struct cvmx_pip_sub_pkind_fcsx_s {
+		u64 port_bit : 64;
+	} s;
+	struct cvmx_pip_sub_pkind_fcsx_s cn68xx;
+	struct cvmx_pip_sub_pkind_fcsx_s cn68xxp1;
+};
+
+typedef union cvmx_pip_sub_pkind_fcsx cvmx_pip_sub_pkind_fcsx_t;
+
+/**
+ * cvmx_pip_tag_inc#
+ *
+ * # $PIP_TAG_INCX = 0x300+X X=(0..63) RegType=(RSL) RtlReg=(pip_tag_inc_csr_direct_TestbuilderTask)
+ *
+ */
+union cvmx_pip_tag_incx {
+	u64 u64;
+	struct cvmx_pip_tag_incx_s {
+		u64 reserved_8_63 : 56;
+		u64 en : 8;
+	} s;
+	struct cvmx_pip_tag_incx_s cn30xx;
+	struct cvmx_pip_tag_incx_s cn31xx;
+	struct cvmx_pip_tag_incx_s cn38xx;
+	struct cvmx_pip_tag_incx_s cn38xxp2;
+	struct cvmx_pip_tag_incx_s cn50xx;
+	struct cvmx_pip_tag_incx_s cn52xx;
+	struct cvmx_pip_tag_incx_s cn52xxp1;
+	struct cvmx_pip_tag_incx_s cn56xx;
+	struct cvmx_pip_tag_incx_s cn56xxp1;
+	struct cvmx_pip_tag_incx_s cn58xx;
+	struct cvmx_pip_tag_incx_s cn58xxp1;
+	struct cvmx_pip_tag_incx_s cn61xx;
+	struct cvmx_pip_tag_incx_s cn63xx;
+	struct cvmx_pip_tag_incx_s cn63xxp1;
+	struct cvmx_pip_tag_incx_s cn66xx;
+	struct cvmx_pip_tag_incx_s cn68xx;
+	struct cvmx_pip_tag_incx_s cn68xxp1;
+	struct cvmx_pip_tag_incx_s cn70xx;
+	struct cvmx_pip_tag_incx_s cn70xxp1;
+	struct cvmx_pip_tag_incx_s cnf71xx;
+};
+
+typedef union cvmx_pip_tag_incx cvmx_pip_tag_incx_t;
+
+/**
+ * cvmx_pip_tag_mask
+ *
+ * PIP_TAG_MASK = Mask bits used in tag generation
+ *
+ */
+union cvmx_pip_tag_mask {
+	u64 u64;
+	struct cvmx_pip_tag_mask_s {
+		u64 reserved_16_63 : 48;
+		u64 mask : 16;
+	} s;
+	struct cvmx_pip_tag_mask_s cn30xx;
+	struct cvmx_pip_tag_mask_s cn31xx;
+	struct cvmx_pip_tag_mask_s cn38xx;
+	struct cvmx_pip_tag_mask_s cn38xxp2;
+	struct cvmx_pip_tag_mask_s cn50xx;
+	struct cvmx_pip_tag_mask_s cn52xx;
+	struct cvmx_pip_tag_mask_s cn52xxp1;
+	struct cvmx_pip_tag_mask_s cn56xx;
+	struct cvmx_pip_tag_mask_s cn56xxp1;
+	struct cvmx_pip_tag_mask_s cn58xx;
+	struct cvmx_pip_tag_mask_s cn58xxp1;
+	struct cvmx_pip_tag_mask_s cn61xx;
+	struct cvmx_pip_tag_mask_s cn63xx;
+	struct cvmx_pip_tag_mask_s cn63xxp1;
+	struct cvmx_pip_tag_mask_s cn66xx;
+	struct cvmx_pip_tag_mask_s cn68xx;
+	struct cvmx_pip_tag_mask_s cn68xxp1;
+	struct cvmx_pip_tag_mask_s cn70xx;
+	struct cvmx_pip_tag_mask_s cn70xxp1;
+	struct cvmx_pip_tag_mask_s cnf71xx;
+};
+
+typedef union cvmx_pip_tag_mask cvmx_pip_tag_mask_t;
+
+/**
+ * cvmx_pip_tag_secret
+ *
+ * The source and destination IVs provide a mechanism for each Octeon to be unique.
+ *
+ */
+union cvmx_pip_tag_secret {
+	u64 u64;
+	struct cvmx_pip_tag_secret_s {
+		u64 reserved_32_63 : 32;
+		u64 dst : 16;
+		u64 src : 16;
+	} s;
+	struct cvmx_pip_tag_secret_s cn30xx;
+	struct cvmx_pip_tag_secret_s cn31xx;
+	struct cvmx_pip_tag_secret_s cn38xx;
+	struct cvmx_pip_tag_secret_s cn38xxp2;
+	struct cvmx_pip_tag_secret_s cn50xx;
+	struct cvmx_pip_tag_secret_s cn52xx;
+	struct cvmx_pip_tag_secret_s cn52xxp1;
+	struct cvmx_pip_tag_secret_s cn56xx;
+	struct cvmx_pip_tag_secret_s cn56xxp1;
+	struct cvmx_pip_tag_secret_s cn58xx;
+	struct cvmx_pip_tag_secret_s cn58xxp1;
+	struct cvmx_pip_tag_secret_s cn61xx;
+	struct cvmx_pip_tag_secret_s cn63xx;
+	struct cvmx_pip_tag_secret_s cn63xxp1;
+	struct cvmx_pip_tag_secret_s cn66xx;
+	struct cvmx_pip_tag_secret_s cn68xx;
+	struct cvmx_pip_tag_secret_s cn68xxp1;
+	struct cvmx_pip_tag_secret_s cn70xx;
+	struct cvmx_pip_tag_secret_s cn70xxp1;
+	struct cvmx_pip_tag_secret_s cnf71xx;
+};
+
+typedef union cvmx_pip_tag_secret cvmx_pip_tag_secret_t;
+
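+/*
+ * Illustrative sketch (not from the original header): giving this
+ * Octeon unique tag initial values. csr_wr() and the
+ * CVMX_PIP_TAG_SECRET address macro are assumed to come from earlier
+ * in this file; the IV values are placeholders.
+ *
+ *	cvmx_pip_tag_secret_t secret;
+ *
+ *	secret.u64 = 0;
+ *	secret.s.src = 0x1234;		(source IV, placeholder)
+ *	secret.s.dst = 0x5678;		(destination IV, placeholder)
+ *	csr_wr(CVMX_PIP_TAG_SECRET, secret.u64);
+ */
+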
+/**
+ * cvmx_pip_todo_entry
+ *
+ * Summary of the current packet that has completed and is waiting to be processed
+ *
+ */
+union cvmx_pip_todo_entry {
+	u64 u64;
+	struct cvmx_pip_todo_entry_s {
+		u64 val : 1;
+		u64 reserved_62_62 : 1;
+		u64 entry : 62;
+	} s;
+	struct cvmx_pip_todo_entry_s cn30xx;
+	struct cvmx_pip_todo_entry_s cn31xx;
+	struct cvmx_pip_todo_entry_s cn38xx;
+	struct cvmx_pip_todo_entry_s cn38xxp2;
+	struct cvmx_pip_todo_entry_s cn50xx;
+	struct cvmx_pip_todo_entry_s cn52xx;
+	struct cvmx_pip_todo_entry_s cn52xxp1;
+	struct cvmx_pip_todo_entry_s cn56xx;
+	struct cvmx_pip_todo_entry_s cn56xxp1;
+	struct cvmx_pip_todo_entry_s cn58xx;
+	struct cvmx_pip_todo_entry_s cn58xxp1;
+	struct cvmx_pip_todo_entry_s cn61xx;
+	struct cvmx_pip_todo_entry_s cn63xx;
+	struct cvmx_pip_todo_entry_s cn63xxp1;
+	struct cvmx_pip_todo_entry_s cn66xx;
+	struct cvmx_pip_todo_entry_s cn68xx;
+	struct cvmx_pip_todo_entry_s cn68xxp1;
+	struct cvmx_pip_todo_entry_s cn70xx;
+	struct cvmx_pip_todo_entry_s cn70xxp1;
+	struct cvmx_pip_todo_entry_s cnf71xx;
+};
+
+typedef union cvmx_pip_todo_entry cvmx_pip_todo_entry_t;
+
+/**
+ * cvmx_pip_vlan_etypes#
+ */
+union cvmx_pip_vlan_etypesx {
+	u64 u64;
+	struct cvmx_pip_vlan_etypesx_s {
+		u64 type3 : 16;
+		u64 type2 : 16;
+		u64 type1 : 16;
+		u64 type0 : 16;
+	} s;
+	struct cvmx_pip_vlan_etypesx_s cn61xx;
+	struct cvmx_pip_vlan_etypesx_s cn66xx;
+	struct cvmx_pip_vlan_etypesx_s cn68xx;
+	struct cvmx_pip_vlan_etypesx_s cn70xx;
+	struct cvmx_pip_vlan_etypesx_s cn70xxp1;
+	struct cvmx_pip_vlan_etypesx_s cnf71xx;
+};
+
+typedef union cvmx_pip_vlan_etypesx cvmx_pip_vlan_etypesx_t;
+
+/**
+ * cvmx_pip_xstat0_prt#
+ *
+ * PIP_XSTAT0_PRT = PIP_XSTAT_DRP_PKTS / PIP_XSTAT_DRP_OCTS
+ *
+ */
+union cvmx_pip_xstat0_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat0_prtx_s {
+		u64 drp_pkts : 32;
+		u64 drp_octs : 32;
+	} s;
+	struct cvmx_pip_xstat0_prtx_s cn63xx;
+	struct cvmx_pip_xstat0_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat0_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat0_prtx cvmx_pip_xstat0_prtx_t;
+
+/**
+ * cvmx_pip_xstat10_prt#
+ *
+ * PIP_XSTAT10_PRTX = PIP_XSTAT_L2_MCAST / PIP_XSTAT_L2_BCAST
+ *
+ */
+union cvmx_pip_xstat10_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat10_prtx_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_xstat10_prtx_s cn63xx;
+	struct cvmx_pip_xstat10_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat10_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat10_prtx cvmx_pip_xstat10_prtx_t;
+
+/**
+ * cvmx_pip_xstat11_prt#
+ *
+ * PIP_XSTAT11_PRTX = PIP_XSTAT_L3_MCAST / PIP_XSTAT_L3_BCAST
+ *
+ */
+union cvmx_pip_xstat11_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat11_prtx_s {
+		u64 bcast : 32;
+		u64 mcast : 32;
+	} s;
+	struct cvmx_pip_xstat11_prtx_s cn63xx;
+	struct cvmx_pip_xstat11_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat11_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat11_prtx cvmx_pip_xstat11_prtx_t;
+
+/**
+ * cvmx_pip_xstat1_prt#
+ *
+ * PIP_XSTAT1_PRTX = PIP_XSTAT_OCTS
+ *
+ */
+union cvmx_pip_xstat1_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat1_prtx_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pip_xstat1_prtx_s cn63xx;
+	struct cvmx_pip_xstat1_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat1_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat1_prtx cvmx_pip_xstat1_prtx_t;
+
+/**
+ * cvmx_pip_xstat2_prt#
+ *
+ * PIP_XSTAT2_PRTX = PIP_XSTAT_PKTS     / PIP_XSTAT_RAW
+ *
+ */
+union cvmx_pip_xstat2_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat2_prtx_s {
+		u64 pkts : 32;
+		u64 raw : 32;
+	} s;
+	struct cvmx_pip_xstat2_prtx_s cn63xx;
+	struct cvmx_pip_xstat2_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat2_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat2_prtx cvmx_pip_xstat2_prtx_t;
+
+/**
+ * cvmx_pip_xstat3_prt#
+ *
+ * PIP_XSTAT3_PRTX = PIP_XSTAT_BCST     / PIP_XSTAT_MCST
+ *
+ */
+union cvmx_pip_xstat3_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat3_prtx_s {
+		u64 bcst : 32;
+		u64 mcst : 32;
+	} s;
+	struct cvmx_pip_xstat3_prtx_s cn63xx;
+	struct cvmx_pip_xstat3_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat3_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat3_prtx cvmx_pip_xstat3_prtx_t;
+
+/**
+ * cvmx_pip_xstat4_prt#
+ *
+ * PIP_XSTAT4_PRTX = PIP_XSTAT_HIST1    / PIP_XSTAT_HIST0
+ *
+ */
+union cvmx_pip_xstat4_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat4_prtx_s {
+		u64 h65to127 : 32;
+		u64 h64 : 32;
+	} s;
+	struct cvmx_pip_xstat4_prtx_s cn63xx;
+	struct cvmx_pip_xstat4_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat4_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat4_prtx cvmx_pip_xstat4_prtx_t;
+
+/**
+ * cvmx_pip_xstat5_prt#
+ *
+ * PIP_XSTAT5_PRTX = PIP_XSTAT_HIST3    / PIP_XSTAT_HIST2
+ *
+ */
+union cvmx_pip_xstat5_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat5_prtx_s {
+		u64 h256to511 : 32;
+		u64 h128to255 : 32;
+	} s;
+	struct cvmx_pip_xstat5_prtx_s cn63xx;
+	struct cvmx_pip_xstat5_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat5_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat5_prtx cvmx_pip_xstat5_prtx_t;
+
+/**
+ * cvmx_pip_xstat6_prt#
+ *
+ * PIP_XSTAT6_PRTX = PIP_XSTAT_HIST5    / PIP_XSTAT_HIST4
+ *
+ */
+union cvmx_pip_xstat6_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat6_prtx_s {
+		u64 h1024to1518 : 32;
+		u64 h512to1023 : 32;
+	} s;
+	struct cvmx_pip_xstat6_prtx_s cn63xx;
+	struct cvmx_pip_xstat6_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat6_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat6_prtx cvmx_pip_xstat6_prtx_t;
+
+/**
+ * cvmx_pip_xstat7_prt#
+ *
+ * PIP_XSTAT7_PRTX = PIP_XSTAT_FCS      / PIP_XSTAT_HIST6
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore FCS will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore FCS will never increment on sRIO ports 40-47
+ */
+union cvmx_pip_xstat7_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat7_prtx_s {
+		u64 fcs : 32;
+		u64 h1519 : 32;
+	} s;
+	struct cvmx_pip_xstat7_prtx_s cn63xx;
+	struct cvmx_pip_xstat7_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat7_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat7_prtx cvmx_pip_xstat7_prtx_t;
+
+/**
+ * cvmx_pip_xstat8_prt#
+ *
+ * PIP_XSTAT8_PRTX = PIP_XSTAT_FRAG     / PIP_XSTAT_UNDER
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore FRAG will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore FRAG will never increment on sRIO ports 40-47
+ */
+union cvmx_pip_xstat8_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat8_prtx_s {
+		u64 frag : 32;
+		u64 undersz : 32;
+	} s;
+	struct cvmx_pip_xstat8_prtx_s cn63xx;
+	struct cvmx_pip_xstat8_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat8_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat8_prtx cvmx_pip_xstat8_prtx_t;
+
+/**
+ * cvmx_pip_xstat9_prt#
+ *
+ * PIP_XSTAT9_PRTX = PIP_XSTAT_JABBER   / PIP_XSTAT_OVER
+ *
+ *
+ * Notes:
+ * DPI does not check FCS, therefore JABBER will never increment on DPI ports 32-35
+ * sRIO does not check FCS, therefore JABBER will never increment on sRIO ports 40-47 due to FCS errors
+ * sRIO does use the JABBER opcode to communicate sRIO errors, therefore JABBER can increment under sRIO error conditions
+ */
+union cvmx_pip_xstat9_prtx {
+	u64 u64;
+	struct cvmx_pip_xstat9_prtx_s {
+		u64 jabber : 32;
+		u64 oversz : 32;
+	} s;
+	struct cvmx_pip_xstat9_prtx_s cn63xx;
+	struct cvmx_pip_xstat9_prtx_s cn63xxp1;
+	struct cvmx_pip_xstat9_prtx_s cn66xx;
+};
+
+typedef union cvmx_pip_xstat9_prtx cvmx_pip_xstat9_prtx_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 23/50] mips: octeon: Add cvmx-pki-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (21 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 22/50] mips: octeon: Add cvmx-pip-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 24/50] mips: octeon: Add cvmx-pko-defs.h " Stefan Roese
                   ` (29 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-pki-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pki-defs.h  | 2353 +++++++++++++++++
 1 file changed, 2353 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-defs.h
new file mode 100644
index 0000000000..4465872e87
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-defs.h
@@ -0,0 +1,2353 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pki.
+ */
+
+#ifndef __CVMX_PKI_DEFS_H__
+#define __CVMX_PKI_DEFS_H__
+
+#define CVMX_PKI_ACTIVE0	     (0x0001180044000220ull)
+#define CVMX_PKI_ACTIVE1	     (0x0001180044000230ull)
+#define CVMX_PKI_ACTIVE2	     (0x0001180044000240ull)
+#define CVMX_PKI_AURAX_CFG(offset)   (0x0001180044900000ull + ((offset) & 1023) * 8)
+#define CVMX_PKI_BIST_STATUS0	     (0x0001180044000080ull)
+#define CVMX_PKI_BIST_STATUS1	     (0x0001180044000088ull)
+#define CVMX_PKI_BIST_STATUS2	     (0x0001180044000090ull)
+#define CVMX_PKI_BPIDX_STATE(offset) (0x0001180044B00000ull + ((offset) & 1023) * 8)
+#define CVMX_PKI_BUF_CTL	     (0x0001180044000100ull)
+#define CVMX_PKI_CHANX_CFG(offset)   (0x0001180044A00000ull + ((offset) & 4095) * 8)
+#define CVMX_PKI_CLKEN		     (0x0001180044000410ull)
+#define CVMX_PKI_CLX_ECC_CTL(offset) (0x000118004400C020ull + ((offset) & 3) * 0x10000ull)
+#define CVMX_PKI_CLX_ECC_INT(offset) (0x000118004400C010ull + ((offset) & 3) * 0x10000ull)
+#define CVMX_PKI_CLX_INT(offset)     (0x000118004400C000ull + ((offset) & 3) * 0x10000ull)
+#define CVMX_PKI_CLX_PCAMX_ACTIONX(a, b, c)                                                        \
+	(0x0001180044708000ull + ((a) << 16) + ((b) << 12) + ((c) << 3))
+#define CVMX_PKI_CLX_PCAMX_MATCHX(a, b, c)                                                         \
+	(0x0001180044704000ull + ((a) << 16) + ((b) << 12) + ((c) << 3))
+#define CVMX_PKI_CLX_PCAMX_TERMX(a, b, c)                                                          \
+	(0x0001180044700000ull + ((a) << 16) + ((b) << 12) + ((c) << 3))
+#define CVMX_PKI_CLX_PKINDX_CFG(offset, block_id)                                                  \
+	(0x0001180044300040ull + (((offset) & 63) + ((block_id) & 3) * 0x100ull) * 256)
+#define CVMX_PKI_CLX_PKINDX_KMEMX(a, b, c)                                                         \
+	(0x0001180044200000ull + ((a) << 16) + ((b) << 8) + ((c) << 3))
+#define CVMX_PKI_CLX_PKINDX_L2_CUSTOM(offset, block_id)                                            \
+	(0x0001180044300058ull + (((offset) & 63) + ((block_id) & 3) * 0x100ull) * 256)
+#define CVMX_PKI_CLX_PKINDX_LG_CUSTOM(offset, block_id)                                            \
+	(0x0001180044300060ull + (((offset) & 63) + ((block_id) & 3) * 0x100ull) * 256)
+#define CVMX_PKI_CLX_PKINDX_SKIP(offset, block_id)                                                 \
+	(0x0001180044300050ull + (((offset) & 63) + ((block_id) & 3) * 0x100ull) * 256)
+#define CVMX_PKI_CLX_PKINDX_STYLE(offset, block_id)                                                \
+	(0x0001180044300048ull + (((offset) & 63) + ((block_id) & 3) * 0x100ull) * 256)
+#define CVMX_PKI_CLX_SMEMX(offset, block_id)                                                       \
+	(0x0001180044400000ull + (((offset) & 2047) + ((block_id) & 3) * 0x2000ull) * 8)
+#define CVMX_PKI_CLX_START(offset) (0x000118004400C030ull + ((offset) & 3) * 0x10000ull)
+#define CVMX_PKI_CLX_STYLEX_ALG(offset, block_id)                                                  \
+	(0x0001180044501000ull + (((offset) & 63) + ((block_id) & 3) * 0x2000ull) * 8)
+#define CVMX_PKI_CLX_STYLEX_CFG(offset, block_id)                                                  \
+	(0x0001180044500000ull + (((offset) & 63) + ((block_id) & 3) * 0x2000ull) * 8)
+#define CVMX_PKI_CLX_STYLEX_CFG2(offset, block_id)                                                 \
+	(0x0001180044500800ull + (((offset) & 63) + ((block_id) & 3) * 0x2000ull) * 8)
+#define CVMX_PKI_DSTATX_STAT0(offset)	 (0x0001180044C00000ull + ((offset) & 1023) * 64)
+#define CVMX_PKI_DSTATX_STAT1(offset)	 (0x0001180044C00008ull + ((offset) & 1023) * 64)
+#define CVMX_PKI_DSTATX_STAT2(offset)	 (0x0001180044C00010ull + ((offset) & 1023) * 64)
+#define CVMX_PKI_DSTATX_STAT3(offset)	 (0x0001180044C00018ull + ((offset) & 1023) * 64)
+#define CVMX_PKI_DSTATX_STAT4(offset)	 (0x0001180044C00020ull + ((offset) & 1023) * 64)
+#define CVMX_PKI_ECC_CTL0		 (0x0001180044000060ull)
+#define CVMX_PKI_ECC_CTL1		 (0x0001180044000068ull)
+#define CVMX_PKI_ECC_CTL2		 (0x0001180044000070ull)
+#define CVMX_PKI_ECC_INT0		 (0x0001180044000040ull)
+#define CVMX_PKI_ECC_INT1		 (0x0001180044000048ull)
+#define CVMX_PKI_ECC_INT2		 (0x0001180044000050ull)
+#define CVMX_PKI_FRM_LEN_CHKX(offset)	 (0x0001180044004000ull + ((offset) & 1) * 8)
+#define CVMX_PKI_GBL_PEN		 (0x0001180044000200ull)
+#define CVMX_PKI_GEN_INT		 (0x0001180044000020ull)
+#define CVMX_PKI_ICGX_CFG(offset)	 (0x000118004400A000ull)
+#define CVMX_PKI_IMEMX(offset)		 (0x0001180044100000ull + ((offset) & 2047) * 8)
+#define CVMX_PKI_LTYPEX_MAP(offset)	 (0x0001180044005000ull + ((offset) & 31) * 8)
+#define CVMX_PKI_PBE_ECO		 (0x0001180044000710ull)
+#define CVMX_PKI_PCAM_LOOKUP		 (0x0001180044000500ull)
+#define CVMX_PKI_PCAM_RESULT		 (0x0001180044000510ull)
+#define CVMX_PKI_PFE_DIAG		 (0x0001180044000560ull)
+#define CVMX_PKI_PFE_ECO		 (0x0001180044000720ull)
+#define CVMX_PKI_PIX_CLKEN		 (0x0001180044000600ull)
+#define CVMX_PKI_PIX_DIAG		 (0x0001180044000580ull)
+#define CVMX_PKI_PIX_ECO		 (0x0001180044000700ull)
+#define CVMX_PKI_PKINDX_ICGSEL(offset)	 (0x0001180044010000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_PKNDX_INB_STAT0(offset) (0x0001180044F00000ull + ((offset) & 63) * 256)
+#define CVMX_PKI_PKNDX_INB_STAT1(offset) (0x0001180044F00008ull + ((offset) & 63) * 256)
+#define CVMX_PKI_PKNDX_INB_STAT2(offset) (0x0001180044F00010ull + ((offset) & 63) * 256)
+#define CVMX_PKI_PKT_ERR		 (0x0001180044000030ull)
+#define CVMX_PKI_PTAG_AVAIL		 (0x0001180044000130ull)
+#define CVMX_PKI_QPG_TBLBX(offset)	 (0x0001180044820000ull + ((offset) & 2047) * 8)
+#define CVMX_PKI_QPG_TBLX(offset)	 (0x0001180044800000ull + ((offset) & 2047) * 8)
+#define CVMX_PKI_REASM_SOPX(offset)	 (0x0001180044006000ull + ((offset) & 1) * 8)
+#define CVMX_PKI_REQ_WGT		 (0x0001180044000120ull)
+#define CVMX_PKI_SFT_RST		 (0x0001180044000010ull)
+#define CVMX_PKI_STATX_HIST0(offset)	 (0x0001180044E00000ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST1(offset)	 (0x0001180044E00008ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST2(offset)	 (0x0001180044E00010ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST3(offset)	 (0x0001180044E00018ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST4(offset)	 (0x0001180044E00020ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST5(offset)	 (0x0001180044E00028ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_HIST6(offset)	 (0x0001180044E00030ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT0(offset)	 (0x0001180044E00038ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT1(offset)	 (0x0001180044E00040ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT10(offset)	 (0x0001180044E00088ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT11(offset)	 (0x0001180044E00090ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT12(offset)	 (0x0001180044E00098ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT13(offset)	 (0x0001180044E000A0ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT14(offset)	 (0x0001180044E000A8ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT15(offset)	 (0x0001180044E000B0ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT16(offset)	 (0x0001180044E000B8ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT17(offset)	 (0x0001180044E000C0ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT18(offset)	 (0x0001180044E000C8ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT2(offset)	 (0x0001180044E00048ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT3(offset)	 (0x0001180044E00050ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT4(offset)	 (0x0001180044E00058ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT5(offset)	 (0x0001180044E00060ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT6(offset)	 (0x0001180044E00068ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT7(offset)	 (0x0001180044E00070ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT8(offset)	 (0x0001180044E00078ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STATX_STAT9(offset)	 (0x0001180044E00080ull + ((offset) & 63) * 256)
+#define CVMX_PKI_STAT_CTL		 (0x0001180044000110ull)
+#define CVMX_PKI_STYLEX_BUF(offset)	 (0x0001180044024000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_STYLEX_TAG_MASK(offset) (0x0001180044021000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_STYLEX_TAG_SEL(offset)	 (0x0001180044020000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_STYLEX_WQ2(offset)	 (0x0001180044022000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_STYLEX_WQ4(offset)	 (0x0001180044023000ull + ((offset) & 63) * 8)
+#define CVMX_PKI_TAG_INCX_CTL(offset)	 (0x0001180044007000ull + ((offset) & 31) * 8)
+#define CVMX_PKI_TAG_INCX_MASK(offset)	 (0x0001180044008000ull + ((offset) & 31) * 8)
+#define CVMX_PKI_TAG_SECRET		 (0x0001180044000430ull)
+#define CVMX_PKI_X2P_REQ_OFL		 (0x0001180044000038ull)
+
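+/*
+ * Illustrative sketch (not from the original header): the indexed
+ * macros above fold the register number into the CSR address. For
+ * example, style 5 in cluster 1 (csr_rd() assumed from this port):
+ *
+ *	cvmx_pki_clx_stylex_cfg_t cfg;
+ *
+ *	cfg.u64 = csr_rd(CVMX_PKI_CLX_STYLEX_CFG(5, 1));
+ *	(address: 0x0001180044500000 + ((5 & 63) + (1 & 3) * 0x2000) * 8
+ *	 = 0x0001180044510028)
+ */
+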
+/**
+ * cvmx_pki_active0
+ */
+union cvmx_pki_active0 {
+	u64 u64;
+	struct cvmx_pki_active0_s {
+		u64 reserved_1_63 : 63;
+		u64 pfe_active : 1;
+	} s;
+	struct cvmx_pki_active0_s cn73xx;
+	struct cvmx_pki_active0_s cn78xx;
+	struct cvmx_pki_active0_s cn78xxp1;
+	struct cvmx_pki_active0_s cnf75xx;
+};
+
+typedef union cvmx_pki_active0 cvmx_pki_active0_t;
+
+/**
+ * cvmx_pki_active1
+ */
+union cvmx_pki_active1 {
+	u64 u64;
+	struct cvmx_pki_active1_s {
+		u64 reserved_4_63 : 60;
+		u64 fpc_active : 1;
+		u64 iobp_active : 1;
+		u64 sws_active : 1;
+		u64 pbtag_active : 1;
+	} s;
+	struct cvmx_pki_active1_s cn73xx;
+	struct cvmx_pki_active1_s cn78xx;
+	struct cvmx_pki_active1_s cn78xxp1;
+	struct cvmx_pki_active1_s cnf75xx;
+};
+
+typedef union cvmx_pki_active1 cvmx_pki_active1_t;
+
+/**
+ * cvmx_pki_active2
+ */
+union cvmx_pki_active2 {
+	u64 u64;
+	struct cvmx_pki_active2_s {
+		u64 reserved_5_63 : 59;
+		u64 pix_active : 5;
+	} s;
+	struct cvmx_pki_active2_s cn73xx;
+	struct cvmx_pki_active2_s cn78xx;
+	struct cvmx_pki_active2_s cn78xxp1;
+	struct cvmx_pki_active2_s cnf75xx;
+};
+
+typedef union cvmx_pki_active2 cvmx_pki_active2_t;
+
+/**
+ * cvmx_pki_aura#_cfg
+ *
+ * This register configures aura backpressure, etc.
+ *
+ */
+union cvmx_pki_aurax_cfg {
+	u64 u64;
+	struct cvmx_pki_aurax_cfg_s {
+		u64 reserved_32_63 : 32;
+		u64 pkt_add : 2;
+		u64 reserved_19_29 : 11;
+		u64 ena_red : 1;
+		u64 ena_drop : 1;
+		u64 ena_bp : 1;
+		u64 reserved_10_15 : 6;
+		u64 bpid : 10;
+	} s;
+	struct cvmx_pki_aurax_cfg_s cn73xx;
+	struct cvmx_pki_aurax_cfg_s cn78xx;
+	struct cvmx_pki_aurax_cfg_s cn78xxp1;
+	struct cvmx_pki_aurax_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pki_aurax_cfg cvmx_pki_aurax_cfg_t;
+
+/**
+ * cvmx_pki_bist_status0
+ *
+ * This register indicates BIST status.
+ *
+ */
+union cvmx_pki_bist_status0 {
+	u64 u64;
+	struct cvmx_pki_bist_status0_s {
+		u64 reserved_31_63 : 33;
+		u64 bist : 31;
+	} s;
+	struct cvmx_pki_bist_status0_s cn73xx;
+	struct cvmx_pki_bist_status0_s cn78xx;
+	struct cvmx_pki_bist_status0_s cn78xxp1;
+	struct cvmx_pki_bist_status0_s cnf75xx;
+};
+
+typedef union cvmx_pki_bist_status0 cvmx_pki_bist_status0_t;
+
+/**
+ * cvmx_pki_bist_status1
+ *
+ * This register indicates BIST status.
+ *
+ */
+union cvmx_pki_bist_status1 {
+	u64 u64;
+	struct cvmx_pki_bist_status1_s {
+		u64 reserved_26_63 : 38;
+		u64 bist : 26;
+	} s;
+	struct cvmx_pki_bist_status1_s cn73xx;
+	struct cvmx_pki_bist_status1_s cn78xx;
+	struct cvmx_pki_bist_status1_cn78xxp1 {
+		u64 reserved_21_63 : 43;
+		u64 bist : 21;
+	} cn78xxp1;
+	struct cvmx_pki_bist_status1_s cnf75xx;
+};
+
+typedef union cvmx_pki_bist_status1 cvmx_pki_bist_status1_t;
+
+/**
+ * cvmx_pki_bist_status2
+ *
+ * This register indicates BIST status.
+ *
+ */
+union cvmx_pki_bist_status2 {
+	u64 u64;
+	struct cvmx_pki_bist_status2_s {
+		u64 reserved_25_63 : 39;
+		u64 bist : 25;
+	} s;
+	struct cvmx_pki_bist_status2_s cn73xx;
+	struct cvmx_pki_bist_status2_s cn78xx;
+	struct cvmx_pki_bist_status2_s cn78xxp1;
+	struct cvmx_pki_bist_status2_s cnf75xx;
+};
+
+typedef union cvmx_pki_bist_status2 cvmx_pki_bist_status2_t;
+
+/**
+ * cvmx_pki_bpid#_state
+ *
+ * This register shows the current bpid state for diagnostics.
+ *
+ */
+union cvmx_pki_bpidx_state {
+	u64 u64;
+	struct cvmx_pki_bpidx_state_s {
+		u64 reserved_1_63 : 63;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pki_bpidx_state_s cn73xx;
+	struct cvmx_pki_bpidx_state_s cn78xx;
+	struct cvmx_pki_bpidx_state_s cn78xxp1;
+	struct cvmx_pki_bpidx_state_s cnf75xx;
+};
+
+typedef union cvmx_pki_bpidx_state cvmx_pki_bpidx_state_t;
+
+/**
+ * cvmx_pki_buf_ctl
+ */
+union cvmx_pki_buf_ctl {
+	u64 u64;
+	struct cvmx_pki_buf_ctl_s {
+		u64 reserved_11_63 : 53;
+		u64 fpa_wait : 1;
+		u64 fpa_cac_dis : 1;
+		u64 reserved_6_8 : 3;
+		u64 pkt_off : 1;
+		u64 reserved_3_4 : 2;
+		u64 pbp_en : 1;
+		u64 reserved_1_1 : 1;
+		u64 pki_en : 1;
+	} s;
+	struct cvmx_pki_buf_ctl_s cn73xx;
+	struct cvmx_pki_buf_ctl_s cn78xx;
+	struct cvmx_pki_buf_ctl_s cn78xxp1;
+	struct cvmx_pki_buf_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pki_buf_ctl cvmx_pki_buf_ctl_t;
+
+/**
+ * cvmx_pki_chan#_cfg
+ *
+ * This register configures each channel.
+ *
+ */
+union cvmx_pki_chanx_cfg {
+	u64 u64;
+	struct cvmx_pki_chanx_cfg_s {
+		u64 reserved_17_63 : 47;
+		u64 imp : 1;
+		u64 reserved_10_15 : 6;
+		u64 bpid : 10;
+	} s;
+	struct cvmx_pki_chanx_cfg_s cn73xx;
+	struct cvmx_pki_chanx_cfg_s cn78xx;
+	struct cvmx_pki_chanx_cfg_s cn78xxp1;
+	struct cvmx_pki_chanx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pki_chanx_cfg cvmx_pki_chanx_cfg_t;
+
+/**
+ * cvmx_pki_cl#_ecc_ctl
+ *
+ * This register configures ECC. All of PKI_CL()_ECC_CTL must be configured identically.
+ *
+ */
+union cvmx_pki_clx_ecc_ctl {
+	u64 u64;
+	struct cvmx_pki_clx_ecc_ctl_s {
+		u64 pcam_en : 1;
+		u64 reserved_24_62 : 39;
+		u64 pcam1_flip : 2;
+		u64 pcam0_flip : 2;
+		u64 smem_flip : 2;
+		u64 dmem_flip : 1;
+		u64 rf_flip : 1;
+		u64 reserved_5_15 : 11;
+		u64 pcam1_cdis : 1;
+		u64 pcam0_cdis : 1;
+		u64 smem_cdis : 1;
+		u64 dmem_cdis : 1;
+		u64 rf_cdis : 1;
+	} s;
+	struct cvmx_pki_clx_ecc_ctl_s cn73xx;
+	struct cvmx_pki_clx_ecc_ctl_s cn78xx;
+	struct cvmx_pki_clx_ecc_ctl_s cn78xxp1;
+	struct cvmx_pki_clx_ecc_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_ecc_ctl cvmx_pki_clx_ecc_ctl_t;
+
+/**
+ * cvmx_pki_cl#_ecc_int
+ */
+union cvmx_pki_clx_ecc_int {
+	u64 u64;
+	struct cvmx_pki_clx_ecc_int_s {
+		u64 reserved_8_63 : 56;
+		u64 pcam1_dbe : 1;
+		u64 pcam1_sbe : 1;
+		u64 pcam0_dbe : 1;
+		u64 pcam0_sbe : 1;
+		u64 smem_dbe : 1;
+		u64 smem_sbe : 1;
+		u64 dmem_perr : 1;
+		u64 rf_perr : 1;
+	} s;
+	struct cvmx_pki_clx_ecc_int_s cn73xx;
+	struct cvmx_pki_clx_ecc_int_s cn78xx;
+	struct cvmx_pki_clx_ecc_int_s cn78xxp1;
+	struct cvmx_pki_clx_ecc_int_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_ecc_int cvmx_pki_clx_ecc_int_t;
+
+/**
+ * cvmx_pki_cl#_int
+ */
+union cvmx_pki_clx_int {
+	u64 u64;
+	struct cvmx_pki_clx_int_s {
+		u64 reserved_4_63 : 60;
+		u64 iptint : 1;
+		u64 sched_conf : 1;
+		u64 pcam_conf : 2;
+	} s;
+	struct cvmx_pki_clx_int_s cn73xx;
+	struct cvmx_pki_clx_int_s cn78xx;
+	struct cvmx_pki_clx_int_s cn78xxp1;
+	struct cvmx_pki_clx_int_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_int cvmx_pki_clx_int_t;
+
+/**
+ * cvmx_pki_cl#_pcam#_action#
+ *
+ * This register configures the result side of the PCAM. PKI hardware is opaque as to the use
+ * of the 32 bits of CAM result.
+ *
+ * For each legal j and k, PKI_CL(i)_PCAM(j)_ACTION(k) must be configured identically for i=0..1.
+ *
+ * With the current parse engine code:
+ *
+ * Action performed based on PCAM lookup using the PKI_CL()_PCAM()_TERM() and
+ * PKI_CL()_PCAM()_MATCH() registers.
+ *
+ * If lookup data matches no PCAM entries, then no action takes place. No matches indicates
+ * normal parsing will continue.
+ *
+ * If data matches multiple PCAM entries, PKI_WQE_S[ERRLEV,OPCODE] of the processed packet may
+ * be set to PKI_ERRLEV_E::RE, PKI_OPCODE_E::RE_PKIPCAM and the PKI_CL()_INT[PCAM_CONF] error
+ * interrupt is signaled.  Once a conflict is detected, the PCAM state is unpredictable and is
+ * required to be fully reconfigured before further valid processing can take place.
+ */
+union cvmx_pki_clx_pcamx_actionx {
+	u64 u64;
+	struct cvmx_pki_clx_pcamx_actionx_s {
+		u64 reserved_31_63 : 33;
+		u64 pmc : 7;
+		u64 style_add : 8;
+		u64 pf : 3;
+		u64 setty : 5;
+		u64 advance : 8;
+	} s;
+	struct cvmx_pki_clx_pcamx_actionx_s cn73xx;
+	struct cvmx_pki_clx_pcamx_actionx_s cn78xx;
+	struct cvmx_pki_clx_pcamx_actionx_s cn78xxp1;
+	struct cvmx_pki_clx_pcamx_actionx_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pcamx_actionx cvmx_pki_clx_pcamx_actionx_t;
+
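+/*
+ * Illustrative sketch (not from the original header): per the note
+ * above, a PCAM entry must be written identically to both clusters
+ * (i = 0..1). csr_wr() is assumed from this port; the field values
+ * are placeholders.
+ *
+ *	cvmx_pki_clx_pcamx_actionx_t act;
+ *	int cl;
+ *
+ *	act.u64 = 0;
+ *	act.s.setty = setty_val;	(placeholder pointer type)
+ *	act.s.advance = 4;		(placeholder scan advance)
+ *	for (cl = 0; cl < 2; cl++)
+ *		csr_wr(CVMX_PKI_CLX_PCAMX_ACTIONX(cl, bank, entry), act.u64);
+ */
+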
+/**
+ * cvmx_pki_cl#_pcam#_match#
+ *
+ * This register configures the match side of the PCAM. PKI hardware is opaque as to the use
+ * of the 32 bits of CAM data.
+ *
+ * For each legal j and k, PKI_CL(i)_PCAM(j)_MATCH(k) must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pcamx_matchx {
+	u64 u64;
+	struct cvmx_pki_clx_pcamx_matchx_s {
+		u64 data1 : 32;
+		u64 data0 : 32;
+	} s;
+	struct cvmx_pki_clx_pcamx_matchx_s cn73xx;
+	struct cvmx_pki_clx_pcamx_matchx_s cn78xx;
+	struct cvmx_pki_clx_pcamx_matchx_s cn78xxp1;
+	struct cvmx_pki_clx_pcamx_matchx_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pcamx_matchx cvmx_pki_clx_pcamx_matchx_t;
+
+/**
+ * cvmx_pki_cl#_pcam#_term#
+ *
+ * This register configures the match side of the PCAM. PKI hardware is opaque as to the use
+ * of the 16 bits of CAM data; the split between TERM and STYLE is defined by the
+ * parse engine.
+ *
+ * For each legal j and k, PKI_CL(i)_PCAM(j)_TERM(k) must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pcamx_termx {
+	u64 u64;
+	struct cvmx_pki_clx_pcamx_termx_s {
+		u64 valid : 1;
+		u64 reserved_48_62 : 15;
+		u64 term1 : 8;
+		u64 style1 : 8;
+		u64 reserved_16_31 : 16;
+		u64 term0 : 8;
+		u64 style0 : 8;
+	} s;
+	struct cvmx_pki_clx_pcamx_termx_s cn73xx;
+	struct cvmx_pki_clx_pcamx_termx_s cn78xx;
+	struct cvmx_pki_clx_pcamx_termx_s cn78xxp1;
+	struct cvmx_pki_clx_pcamx_termx_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pcamx_termx cvmx_pki_clx_pcamx_termx_t;
+
+/**
+ * cvmx_pki_cl#_pkind#_cfg
+ *
+ * This register is inside PKI_CL()_PKIND()_KMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_PKIND(j)_CFG must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_cfg {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_cfg_s {
+		u64 reserved_11_63 : 53;
+		u64 lg_custom_layer : 3;
+		u64 fcs_pres : 1;
+		u64 mpls_en : 1;
+		u64 inst_hdr : 1;
+		u64 lg_custom : 1;
+		u64 fulc_en : 1;
+		u64 dsa_en : 1;
+		u64 hg2_en : 1;
+		u64 hg_en : 1;
+	} s;
+	struct cvmx_pki_clx_pkindx_cfg_s cn73xx;
+	struct cvmx_pki_clx_pkindx_cfg_s cn78xx;
+	struct cvmx_pki_clx_pkindx_cfg_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_cfg cvmx_pki_clx_pkindx_cfg_t;
+
+/**
+ * cvmx_pki_cl#_pkind#_kmem#
+ *
+ * This register initializes the KMEM, which initializes the parse engine state for each
+ * pkind. These CSRs are used only by the PKI parse engine.
+ *
+ * Inside the KMEM are the following parse engine registers. These registers are the
+ * preferred access method for software:
+ * * PKI_CL()_PKIND()_CFG.
+ * * PKI_CL()_PKIND()_STYLE.
+ * * PKI_CL()_PKIND()_SKIP.
+ * * PKI_CL()_PKIND()_L2_CUSTOM.
+ * * PKI_CL()_PKIND()_LG_CUSTOM.
+ *
+ * To avoid overlapping addresses, these aliases have address bit 20 set in contrast to
+ * this register; the PKI address decoder ignores bit 20 when accessing
+ * PKI_CL()_PKIND()_KMEM().
+ *
+ * Software must reload the PKI_CL()_PKIND()_KMEM() registers upon the detection of
+ * PKI_ECC_INT0[KMEM_SBE] or PKI_ECC_INT0[KMEM_DBE].
+ *
+ * For each legal j and k value, PKI_CL(i)_PKIND(j)_KMEM(k) must be configured
+ * identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_kmemx {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_kmemx_s {
+		u64 reserved_16_63 : 48;
+		u64 data : 16;
+	} s;
+	struct cvmx_pki_clx_pkindx_kmemx_s cn73xx;
+	struct cvmx_pki_clx_pkindx_kmemx_s cn78xx;
+	struct cvmx_pki_clx_pkindx_kmemx_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_kmemx_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_kmemx cvmx_pki_clx_pkindx_kmemx_t;
+
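+/*
+ * Illustrative note (not from the original header): the bit-20
+ * aliasing above can be checked against the macros at the top of this
+ * file. CVMX_PKI_CLX_PKINDX_CFG starts at 0x...300040, which is KMEM
+ * word 8 (0x...200040) with address bit 20 (0x100000) set. A reload
+ * after PKI_ECC_INT0[KMEM_SBE/KMEM_DBE], sketched with an assumed
+ * saved_kmem[] shadow copy:
+ *
+ *	for (k = 0; k < kmem_words; k++)
+ *		csr_wr(CVMX_PKI_CLX_PKINDX_KMEMX(cl, pkind, k),
+ *		       saved_kmem[k]);
+ */
+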
+/**
+ * cvmx_pki_cl#_pkind#_l2_custom
+ *
+ * This register is inside PKI_CL()_PKIND()_KMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_PKIND(j)_L2_CUSTOM must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_l2_custom {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_l2_custom_s {
+		u64 reserved_16_63 : 48;
+		u64 valid : 1;
+		u64 reserved_8_14 : 7;
+		u64 offset : 8;
+	} s;
+	struct cvmx_pki_clx_pkindx_l2_custom_s cn73xx;
+	struct cvmx_pki_clx_pkindx_l2_custom_s cn78xx;
+	struct cvmx_pki_clx_pkindx_l2_custom_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_l2_custom_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_l2_custom cvmx_pki_clx_pkindx_l2_custom_t;
+
+/**
+ * cvmx_pki_cl#_pkind#_lg_custom
+ *
+ * This register is inside PKI_CL()_PKIND()_KMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_PKIND(j)_LG_CUSTOM must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_lg_custom {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_lg_custom_s {
+		u64 reserved_8_63 : 56;
+		u64 offset : 8;
+	} s;
+	struct cvmx_pki_clx_pkindx_lg_custom_s cn73xx;
+	struct cvmx_pki_clx_pkindx_lg_custom_s cn78xx;
+	struct cvmx_pki_clx_pkindx_lg_custom_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_lg_custom_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_lg_custom cvmx_pki_clx_pkindx_lg_custom_t;
+
+/**
+ * cvmx_pki_cl#_pkind#_skip
+ *
+ * This register is inside PKI_CL()_PKIND()_KMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_PKIND(j)_SKIP must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_skip {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_skip_s {
+		u64 reserved_16_63 : 48;
+		u64 fcs_skip : 8;
+		u64 inst_skip : 8;
+	} s;
+	struct cvmx_pki_clx_pkindx_skip_s cn73xx;
+	struct cvmx_pki_clx_pkindx_skip_s cn78xx;
+	struct cvmx_pki_clx_pkindx_skip_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_skip_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_skip cvmx_pki_clx_pkindx_skip_t;
+
+/**
+ * cvmx_pki_cl#_pkind#_style
+ *
+ * This register is inside PKI_CL()_PKIND()_KMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_PKIND(j)_STYLE must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_pkindx_style {
+	u64 u64;
+	struct cvmx_pki_clx_pkindx_style_s {
+		u64 reserved_15_63 : 49;
+		u64 pm : 7;
+		u64 style : 8;
+	} s;
+	struct cvmx_pki_clx_pkindx_style_s cn73xx;
+	struct cvmx_pki_clx_pkindx_style_s cn78xx;
+	struct cvmx_pki_clx_pkindx_style_s cn78xxp1;
+	struct cvmx_pki_clx_pkindx_style_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_pkindx_style cvmx_pki_clx_pkindx_style_t;
+
+/**
+ * cvmx_pki_cl#_smem#
+ *
+ * This register initializes the SMEM, which configures the parse engine. These CSRs
+ * are used by the PKI parse engine and other PKI hardware.
+ *
+ * Inside the SMEM are the following parse engine registers. These registers are the
+ * preferred access method for software:
+ * * PKI_CL()_STYLE()_CFG
+ * * PKI_CL()_STYLE()_CFG2
+ * * PKI_CL()_STYLE()_ALG
+ *
+ * To avoid overlapping addresses, these aliases have address bit 20 set in contrast to
+ * this register; the PKI address decoder ignores bit 20 when accessing
+ * PKI_CL()_SMEM().
+ *
+ * Software must reload the PKI_CL()_SMEM() registers upon the detection of
+ * PKI_CL()_ECC_INT[SMEM_SBE] or PKI_CL()_ECC_INT[SMEM_DBE].
+ *
+ * For each legal j, PKI_CL(i)_SMEM(j) must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_smemx {
+	u64 u64;
+	struct cvmx_pki_clx_smemx_s {
+		u64 reserved_32_63 : 32;
+		u64 data : 32;
+	} s;
+	struct cvmx_pki_clx_smemx_s cn73xx;
+	struct cvmx_pki_clx_smemx_s cn78xx;
+	struct cvmx_pki_clx_smemx_s cn78xxp1;
+	struct cvmx_pki_clx_smemx_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_smemx cvmx_pki_clx_smemx_t;
+
+/**
+ * cvmx_pki_cl#_start
+ *
+ * This register configures a cluster. All of PKI_CL()_START must be programmed identically.
+ *
+ */
+union cvmx_pki_clx_start {
+	u64 u64;
+	struct cvmx_pki_clx_start_s {
+		u64 reserved_11_63 : 53;
+		u64 start : 11;
+	} s;
+	struct cvmx_pki_clx_start_s cn73xx;
+	struct cvmx_pki_clx_start_s cn78xx;
+	struct cvmx_pki_clx_start_s cn78xxp1;
+	struct cvmx_pki_clx_start_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_start cvmx_pki_clx_start_t;
+
+/**
+ * cvmx_pki_cl#_style#_alg
+ *
+ * This register is inside PKI_CL()_SMEM(). These CSRs are used only by
+ * the PKI parse engine.
+ *
+ * For each legal j, PKI_CL(i)_STYLE(j)_ALG must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_stylex_alg {
+	u64 u64;
+	struct cvmx_pki_clx_stylex_alg_s {
+		u64 reserved_32_63 : 32;
+		u64 tt : 2;
+		u64 apad_nip : 3;
+		u64 qpg_qos : 3;
+		u64 qpg_port_sh : 3;
+		u64 qpg_port_msb : 4;
+		u64 reserved_11_16 : 6;
+		u64 tag_vni : 1;
+		u64 tag_gtp : 1;
+		u64 tag_spi : 1;
+		u64 tag_syn : 1;
+		u64 tag_pctl : 1;
+		u64 tag_vs1 : 1;
+		u64 tag_vs0 : 1;
+		u64 tag_vlan : 1;
+		u64 tag_mpls0 : 1;
+		u64 tag_prt : 1;
+		u64 wqe_vs : 1;
+	} s;
+	struct cvmx_pki_clx_stylex_alg_s cn73xx;
+	struct cvmx_pki_clx_stylex_alg_s cn78xx;
+	struct cvmx_pki_clx_stylex_alg_s cn78xxp1;
+	struct cvmx_pki_clx_stylex_alg_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_stylex_alg cvmx_pki_clx_stylex_alg_t;
+
+/**
+ * cvmx_pki_cl#_style#_cfg
+ *
+ * This register is inside PKI_CL()_SMEM(). These CSRs are used by
+ * the PKI parse engine and other PKI hardware.
+ *
+ * For each legal j, PKI_CL(i)_STYLE(j)_CFG must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_stylex_cfg {
+	u64 u64;
+	struct cvmx_pki_clx_stylex_cfg_s {
+		u64 reserved_31_63 : 33;
+		u64 ip6_udp_opt : 1;
+		u64 lenerr_en : 1;
+		u64 lenerr_eqpad : 1;
+		u64 minmax_sel : 1;
+		u64 maxerr_en : 1;
+		u64 minerr_en : 1;
+		u64 qpg_dis_grptag : 1;
+		u64 fcs_strip : 1;
+		u64 fcs_chk : 1;
+		u64 rawdrp : 1;
+		u64 drop : 1;
+		u64 nodrop : 1;
+		u64 qpg_dis_padd : 1;
+		u64 qpg_dis_grp : 1;
+		u64 qpg_dis_aura : 1;
+		u64 reserved_11_15 : 5;
+		u64 qpg_base : 11;
+	} s;
+	struct cvmx_pki_clx_stylex_cfg_s cn73xx;
+	struct cvmx_pki_clx_stylex_cfg_s cn78xx;
+	struct cvmx_pki_clx_stylex_cfg_s cn78xxp1;
+	struct cvmx_pki_clx_stylex_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_stylex_cfg cvmx_pki_clx_stylex_cfg_t;
+
+/**
+ * cvmx_pki_cl#_style#_cfg2
+ *
+ * This register is inside PKI_CL()_SMEM(). These CSRs are used by
+ * the PKI parse engine and other PKI hardware.
+ *
+ * For each legal j, PKI_CL(i)_STYLE(j)_CFG2 must be configured identically for i=0..1.
+ */
+union cvmx_pki_clx_stylex_cfg2 {
+	u64 u64;
+	struct cvmx_pki_clx_stylex_cfg2_s {
+		u64 reserved_32_63 : 32;
+		u64 tag_inc : 4;
+		u64 reserved_25_27 : 3;
+		u64 tag_masken : 1;
+		u64 tag_src_lg : 1;
+		u64 tag_src_lf : 1;
+		u64 tag_src_le : 1;
+		u64 tag_src_ld : 1;
+		u64 tag_src_lc : 1;
+		u64 tag_src_lb : 1;
+		u64 tag_dst_lg : 1;
+		u64 tag_dst_lf : 1;
+		u64 tag_dst_le : 1;
+		u64 tag_dst_ld : 1;
+		u64 tag_dst_lc : 1;
+		u64 tag_dst_lb : 1;
+		u64 len_lg : 1;
+		u64 len_lf : 1;
+		u64 len_le : 1;
+		u64 len_ld : 1;
+		u64 len_lc : 1;
+		u64 len_lb : 1;
+		u64 csum_lg : 1;
+		u64 csum_lf : 1;
+		u64 csum_le : 1;
+		u64 csum_ld : 1;
+		u64 csum_lc : 1;
+		u64 csum_lb : 1;
+	} s;
+	struct cvmx_pki_clx_stylex_cfg2_s cn73xx;
+	struct cvmx_pki_clx_stylex_cfg2_s cn78xx;
+	struct cvmx_pki_clx_stylex_cfg2_s cn78xxp1;
+	struct cvmx_pki_clx_stylex_cfg2_s cnf75xx;
+};
+
+typedef union cvmx_pki_clx_stylex_cfg2 cvmx_pki_clx_stylex_cfg2_t;
+
+/**
+ * cvmx_pki_clken
+ */
+union cvmx_pki_clken {
+	u64 u64;
+	struct cvmx_pki_clken_s {
+		u64 reserved_1_63 : 63;
+		u64 clken : 1;
+	} s;
+	struct cvmx_pki_clken_s cn73xx;
+	struct cvmx_pki_clken_s cn78xx;
+	struct cvmx_pki_clken_s cn78xxp1;
+	struct cvmx_pki_clken_s cnf75xx;
+};
+
+typedef union cvmx_pki_clken cvmx_pki_clken_t;
+
+/**
+ * cvmx_pki_dstat#_stat0
+ *
+ * This register contains statistics indexed by PKI_QPG_TBLB()[DSTAT_ID].
+ *
+ */
+union cvmx_pki_dstatx_stat0 {
+	u64 u64;
+	struct cvmx_pki_dstatx_stat0_s {
+		u64 reserved_32_63 : 32;
+		u64 pkts : 32;
+	} s;
+	struct cvmx_pki_dstatx_stat0_s cn73xx;
+	struct cvmx_pki_dstatx_stat0_s cn78xx;
+	struct cvmx_pki_dstatx_stat0_s cnf75xx;
+};
+
+typedef union cvmx_pki_dstatx_stat0 cvmx_pki_dstatx_stat0_t;
+
+/**
+ * cvmx_pki_dstat#_stat1
+ *
+ * This register contains statistics indexed by PKI_QPG_TBLB()[DSTAT_ID].
+ *
+ */
+union cvmx_pki_dstatx_stat1 {
+	u64 u64;
+	struct cvmx_pki_dstatx_stat1_s {
+		u64 reserved_40_63 : 24;
+		u64 octs : 40;
+	} s;
+	struct cvmx_pki_dstatx_stat1_s cn73xx;
+	struct cvmx_pki_dstatx_stat1_s cn78xx;
+	struct cvmx_pki_dstatx_stat1_s cnf75xx;
+};
+
+typedef union cvmx_pki_dstatx_stat1 cvmx_pki_dstatx_stat1_t;
+
+/**
+ * cvmx_pki_dstat#_stat2
+ *
+ * This register contains statistics indexed by PKI_QPG_TBLB()[DSTAT_ID].
+ *
+ */
+union cvmx_pki_dstatx_stat2 {
+	u64 u64;
+	struct cvmx_pki_dstatx_stat2_s {
+		u64 reserved_32_63 : 32;
+		u64 err_pkts : 32;
+	} s;
+	struct cvmx_pki_dstatx_stat2_s cn73xx;
+	struct cvmx_pki_dstatx_stat2_s cn78xx;
+	struct cvmx_pki_dstatx_stat2_s cnf75xx;
+};
+
+typedef union cvmx_pki_dstatx_stat2 cvmx_pki_dstatx_stat2_t;
+
+/**
+ * cvmx_pki_dstat#_stat3
+ *
+ * This register contains statistics indexed by PKI_QPG_TBLB()[DSTAT_ID].
+ *
+ */
+union cvmx_pki_dstatx_stat3 {
+	u64 u64;
+	struct cvmx_pki_dstatx_stat3_s {
+		u64 reserved_32_63 : 32;
+		u64 drp_pkts : 32;
+	} s;
+	struct cvmx_pki_dstatx_stat3_s cn73xx;
+	struct cvmx_pki_dstatx_stat3_s cn78xx;
+	struct cvmx_pki_dstatx_stat3_s cnf75xx;
+};
+
+typedef union cvmx_pki_dstatx_stat3 cvmx_pki_dstatx_stat3_t;
+
+/**
+ * cvmx_pki_dstat#_stat4
+ *
+ * This register contains statistics indexed by PKI_QPG_TBLB()[DSTAT_ID].
+ *
+ */
+union cvmx_pki_dstatx_stat4 {
+	u64 u64;
+	struct cvmx_pki_dstatx_stat4_s {
+		u64 reserved_40_63 : 24;
+		u64 drp_octs : 40;
+	} s;
+	struct cvmx_pki_dstatx_stat4_s cn73xx;
+	struct cvmx_pki_dstatx_stat4_s cn78xx;
+	struct cvmx_pki_dstatx_stat4_s cnf75xx;
+};
+
+typedef union cvmx_pki_dstatx_stat4 cvmx_pki_dstatx_stat4_t;
+
+/**
+ * cvmx_pki_ecc_ctl0
+ *
+ * This register allows inserting ECC errors for testing.
+ *
+ */
+union cvmx_pki_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pki_ecc_ctl0_s {
+		u64 reserved_24_63 : 40;
+		u64 ldfif_flip : 2;
+		u64 ldfif_cdis : 1;
+		u64 pbe_flip : 2;
+		u64 pbe_cdis : 1;
+		u64 wadr_flip : 2;
+		u64 wadr_cdis : 1;
+		u64 nxtptag_flip : 2;
+		u64 nxtptag_cdis : 1;
+		u64 curptag_flip : 2;
+		u64 curptag_cdis : 1;
+		u64 nxtblk_flip : 2;
+		u64 nxtblk_cdis : 1;
+		u64 kmem_flip : 2;
+		u64 kmem_cdis : 1;
+		u64 asm_flip : 2;
+		u64 asm_cdis : 1;
+	} s;
+	struct cvmx_pki_ecc_ctl0_s cn73xx;
+	struct cvmx_pki_ecc_ctl0_s cn78xx;
+	struct cvmx_pki_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pki_ecc_ctl0_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_ctl0 cvmx_pki_ecc_ctl0_t;
+
+/**
+ * cvmx_pki_ecc_ctl1
+ *
+ * This register allows inserting ECC errors for testing.
+ *
+ */
+union cvmx_pki_ecc_ctl1 {
+	u64 u64;
+	struct cvmx_pki_ecc_ctl1_s {
+		u64 reserved_51_63 : 13;
+		u64 sws_flip : 2;
+		u64 sws_cdis : 1;
+		u64 wqeout_flip : 2;
+		u64 wqeout_cdis : 1;
+		u64 doa_flip : 2;
+		u64 doa_cdis : 1;
+		u64 bpid_flip : 2;
+		u64 bpid_cdis : 1;
+		u64 reserved_30_38 : 9;
+		u64 plc_flip : 2;
+		u64 plc_cdis : 1;
+		u64 pktwq_flip : 2;
+		u64 pktwq_cdis : 1;
+		u64 reserved_21_23 : 3;
+		u64 stylewq2_flip : 2;
+		u64 stylewq2_cdis : 1;
+		u64 tag_flip : 2;
+		u64 tag_cdis : 1;
+		u64 aura_flip : 2;
+		u64 aura_cdis : 1;
+		u64 chan_flip : 2;
+		u64 chan_cdis : 1;
+		u64 pbtag_flip : 2;
+		u64 pbtag_cdis : 1;
+		u64 stylewq_flip : 2;
+		u64 stylewq_cdis : 1;
+		u64 qpg_flip : 2;
+		u64 qpg_cdis : 1;
+	} s;
+	struct cvmx_pki_ecc_ctl1_s cn73xx;
+	struct cvmx_pki_ecc_ctl1_s cn78xx;
+	struct cvmx_pki_ecc_ctl1_cn78xxp1 {
+		u64 reserved_51_63 : 13;
+		u64 sws_flip : 2;
+		u64 sws_cdis : 1;
+		u64 wqeout_flip : 2;
+		u64 wqeout_cdis : 1;
+		u64 doa_flip : 2;
+		u64 doa_cdis : 1;
+		u64 bpid_flip : 2;
+		u64 bpid_cdis : 1;
+		u64 reserved_30_38 : 9;
+		u64 plc_flip : 2;
+		u64 plc_cdis : 1;
+		u64 pktwq_flip : 2;
+		u64 pktwq_cdis : 1;
+		u64 reserved_18_23 : 6;
+		u64 tag_flip : 2;
+		u64 tag_cdis : 1;
+		u64 aura_flip : 2;
+		u64 aura_cdis : 1;
+		u64 chan_flip : 2;
+		u64 chan_cdis : 1;
+		u64 pbtag_flip : 2;
+		u64 pbtag_cdis : 1;
+		u64 stylewq_flip : 2;
+		u64 stylewq_cdis : 1;
+		u64 qpg_flip : 2;
+		u64 qpg_cdis : 1;
+	} cn78xxp1;
+	struct cvmx_pki_ecc_ctl1_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_ctl1 cvmx_pki_ecc_ctl1_t;
+
+/**
+ * cvmx_pki_ecc_ctl2
+ *
+ * This register allows inserting ECC errors for testing.
+ *
+ */
+union cvmx_pki_ecc_ctl2 {
+	u64 u64;
+	struct cvmx_pki_ecc_ctl2_s {
+		u64 reserved_3_63 : 61;
+		u64 imem_flip : 2;
+		u64 imem_cdis : 1;
+	} s;
+	struct cvmx_pki_ecc_ctl2_s cn73xx;
+	struct cvmx_pki_ecc_ctl2_s cn78xx;
+	struct cvmx_pki_ecc_ctl2_s cn78xxp1;
+	struct cvmx_pki_ecc_ctl2_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_ctl2 cvmx_pki_ecc_ctl2_t;
+
+/**
+ * cvmx_pki_ecc_int0
+ */
+union cvmx_pki_ecc_int0 {
+	u64 u64;
+	struct cvmx_pki_ecc_int0_s {
+		u64 reserved_16_63 : 48;
+		u64 ldfif_dbe : 1;
+		u64 ldfif_sbe : 1;
+		u64 pbe_dbe : 1;
+		u64 pbe_sbe : 1;
+		u64 wadr_dbe : 1;
+		u64 wadr_sbe : 1;
+		u64 nxtptag_dbe : 1;
+		u64 nxtptag_sbe : 1;
+		u64 curptag_dbe : 1;
+		u64 curptag_sbe : 1;
+		u64 nxtblk_dbe : 1;
+		u64 nxtblk_sbe : 1;
+		u64 kmem_dbe : 1;
+		u64 kmem_sbe : 1;
+		u64 asm_dbe : 1;
+		u64 asm_sbe : 1;
+	} s;
+	struct cvmx_pki_ecc_int0_s cn73xx;
+	struct cvmx_pki_ecc_int0_s cn78xx;
+	struct cvmx_pki_ecc_int0_s cn78xxp1;
+	struct cvmx_pki_ecc_int0_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_int0 cvmx_pki_ecc_int0_t;
+
+/**
+ * cvmx_pki_ecc_int1
+ */
+union cvmx_pki_ecc_int1 {
+	u64 u64;
+	struct cvmx_pki_ecc_int1_s {
+		u64 reserved_34_63 : 30;
+		u64 sws_dbe : 1;
+		u64 sws_sbe : 1;
+		u64 wqeout_dbe : 1;
+		u64 wqeout_sbe : 1;
+		u64 doa_dbe : 1;
+		u64 doa_sbe : 1;
+		u64 bpid_dbe : 1;
+		u64 bpid_sbe : 1;
+		u64 reserved_20_25 : 6;
+		u64 plc_dbe : 1;
+		u64 plc_sbe : 1;
+		u64 pktwq_dbe : 1;
+		u64 pktwq_sbe : 1;
+		u64 reserved_12_15 : 4;
+		u64 tag_dbe : 1;
+		u64 tag_sbe : 1;
+		u64 aura_dbe : 1;
+		u64 aura_sbe : 1;
+		u64 chan_dbe : 1;
+		u64 chan_sbe : 1;
+		u64 pbtag_dbe : 1;
+		u64 pbtag_sbe : 1;
+		u64 stylewq_dbe : 1;
+		u64 stylewq_sbe : 1;
+		u64 qpg_dbe : 1;
+		u64 qpg_sbe : 1;
+	} s;
+	struct cvmx_pki_ecc_int1_s cn73xx;
+	struct cvmx_pki_ecc_int1_s cn78xx;
+	struct cvmx_pki_ecc_int1_s cn78xxp1;
+	struct cvmx_pki_ecc_int1_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_int1 cvmx_pki_ecc_int1_t;
+
+/**
+ * cvmx_pki_ecc_int2
+ */
+union cvmx_pki_ecc_int2 {
+	u64 u64;
+	struct cvmx_pki_ecc_int2_s {
+		u64 reserved_2_63 : 62;
+		u64 imem_dbe : 1;
+		u64 imem_sbe : 1;
+	} s;
+	struct cvmx_pki_ecc_int2_s cn73xx;
+	struct cvmx_pki_ecc_int2_s cn78xx;
+	struct cvmx_pki_ecc_int2_s cn78xxp1;
+	struct cvmx_pki_ecc_int2_s cnf75xx;
+};
+
+typedef union cvmx_pki_ecc_int2 cvmx_pki_ecc_int2_t;
+
+/**
+ * cvmx_pki_frm_len_chk#
+ */
+union cvmx_pki_frm_len_chkx {
+	u64 u64;
+	struct cvmx_pki_frm_len_chkx_s {
+		u64 reserved_32_63 : 32;
+		u64 maxlen : 16;
+		u64 minlen : 16;
+	} s;
+	struct cvmx_pki_frm_len_chkx_s cn73xx;
+	struct cvmx_pki_frm_len_chkx_s cn78xx;
+	struct cvmx_pki_frm_len_chkx_s cn78xxp1;
+	struct cvmx_pki_frm_len_chkx_s cnf75xx;
+};
+
+typedef union cvmx_pki_frm_len_chkx cvmx_pki_frm_len_chkx_t;
+
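+/*
+ * Sketch only: programming a frame length checker with conventional
+ * Ethernet limits. CVMX_PKI_FRM_LEN_CHKX(idx) is assumed to be the
+ * indexed address macro defined near the top of this header, and
+ * csr_wr() stands in for this port's CSR write accessor.
+ */
+static inline void example_pki_frm_len_chk(int idx)
+{
+	cvmx_pki_frm_len_chkx_t chk;
+
+	chk.u64 = 0;
+	chk.s.minlen = 64;	/* flag runts below 64 bytes */
+	chk.s.maxlen = 1518;	/* flag oversize above 1518 bytes */
+	csr_wr(CVMX_PKI_FRM_LEN_CHKX(idx), chk.u64);
+}
+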
+/**
+ * cvmx_pki_gbl_pen
+ *
+ * This register contains global configuration information that applies to all
+ * pkinds. The values are opaque to PKI HW.
+ *
+ * This is intended for communication between the higher-level software SDK and the
+ * SDK code that loads PKI_IMEM() with the parse engine code. It allows the loader to
+ * select a parse engine image containing only the required features, so that
+ * performance is optimized.
+ */
+union cvmx_pki_gbl_pen {
+	u64 u64;
+	struct cvmx_pki_gbl_pen_s {
+		u64 reserved_10_63 : 54;
+		u64 virt_pen : 1;
+		u64 clg_pen : 1;
+		u64 cl2_pen : 1;
+		u64 l4_pen : 1;
+		u64 il3_pen : 1;
+		u64 l3_pen : 1;
+		u64 mpls_pen : 1;
+		u64 fulc_pen : 1;
+		u64 dsa_pen : 1;
+		u64 hg_pen : 1;
+	} s;
+	struct cvmx_pki_gbl_pen_s cn73xx;
+	struct cvmx_pki_gbl_pen_s cn78xx;
+	struct cvmx_pki_gbl_pen_s cn78xxp1;
+	struct cvmx_pki_gbl_pen_s cnf75xx;
+};
+
+typedef union cvmx_pki_gbl_pen cvmx_pki_gbl_pen_t;
+
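+/*
+ * Sketch only: the loader described above might record that the parse
+ * engine image needs only L3/L4 support before loading PKI_IMEM().
+ * CVMX_PKI_GBL_PEN is assumed to be the address macro defined near the
+ * top of this header; csr_wr() stands in for this port's CSR write
+ * accessor.
+ */
+static inline void example_pki_gbl_pen_l3_l4_only(void)
+{
+	cvmx_pki_gbl_pen_t pen;
+
+	pen.u64 = 0;		/* all feature bits off ... */
+	pen.s.l3_pen = 1;	/* ... except L3 parsing */
+	pen.s.l4_pen = 1;	/* ... and L4 parsing */
+	csr_wr(CVMX_PKI_GBL_PEN, pen.u64);
+}
+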
+/**
+ * cvmx_pki_gen_int
+ */
+union cvmx_pki_gen_int {
+	u64 u64;
+	struct cvmx_pki_gen_int_s {
+		u64 reserved_10_63 : 54;
+		u64 bufs_oflow : 1;
+		u64 pkt_size_oflow : 1;
+		u64 x2p_req_ofl : 1;
+		u64 drp_noavail : 1;
+		u64 dat : 1;
+		u64 eop : 1;
+		u64 sop : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} s;
+	struct cvmx_pki_gen_int_s cn73xx;
+	struct cvmx_pki_gen_int_s cn78xx;
+	struct cvmx_pki_gen_int_cn78xxp1 {
+		u64 reserved_8_63 : 56;
+		u64 x2p_req_ofl : 1;
+		u64 drp_noavail : 1;
+		u64 dat : 1;
+		u64 eop : 1;
+		u64 sop : 1;
+		u64 bckprs : 1;
+		u64 crcerr : 1;
+		u64 pktdrp : 1;
+	} cn78xxp1;
+	struct cvmx_pki_gen_int_s cnf75xx;
+};
+
+typedef union cvmx_pki_gen_int cvmx_pki_gen_int_t;
+
+/**
+ * cvmx_pki_icg#_cfg
+ *
+ * This register configures the cluster group.
+ *
+ */
+union cvmx_pki_icgx_cfg {
+	u64 u64;
+	struct cvmx_pki_icgx_cfg_s {
+		u64 reserved_53_63 : 11;
+		u64 maxipe_use : 5;
+		u64 reserved_36_47 : 12;
+		u64 clusters : 4;
+		u64 reserved_27_31 : 5;
+		u64 release_rqd : 1;
+		u64 mlo : 1;
+		u64 pena : 1;
+		u64 timer : 12;
+		u64 delay : 12;
+	} s;
+	struct cvmx_pki_icgx_cfg_s cn73xx;
+	struct cvmx_pki_icgx_cfg_s cn78xx;
+	struct cvmx_pki_icgx_cfg_s cn78xxp1;
+	struct cvmx_pki_icgx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pki_icgx_cfg cvmx_pki_icgx_cfg_t;
+
+/**
+ * cvmx_pki_imem#
+ */
+union cvmx_pki_imemx {
+	u64 u64;
+	struct cvmx_pki_imemx_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_pki_imemx_s cn73xx;
+	struct cvmx_pki_imemx_s cn78xx;
+	struct cvmx_pki_imemx_s cn78xxp1;
+	struct cvmx_pki_imemx_s cnf75xx;
+};
+
+typedef union cvmx_pki_imemx cvmx_pki_imemx_t;
+
+/**
+ * cvmx_pki_ltype#_map
+ *
+ * This register is the layer type map, indexed by PKI_LTYPE_E.
+ *
+ */
+union cvmx_pki_ltypex_map {
+	u64 u64;
+	struct cvmx_pki_ltypex_map_s {
+		u64 reserved_3_63 : 61;
+		u64 beltype : 3;
+	} s;
+	struct cvmx_pki_ltypex_map_s cn73xx;
+	struct cvmx_pki_ltypex_map_s cn78xx;
+	struct cvmx_pki_ltypex_map_s cn78xxp1;
+	struct cvmx_pki_ltypex_map_s cnf75xx;
+};
+
+typedef union cvmx_pki_ltypex_map cvmx_pki_ltypex_map_t;
+
+/**
+ * cvmx_pki_pbe_eco
+ */
+union cvmx_pki_pbe_eco {
+	u64 u64;
+	struct cvmx_pki_pbe_eco_s {
+		u64 reserved_32_63 : 32;
+		u64 eco_rw : 32;
+	} s;
+	struct cvmx_pki_pbe_eco_s cn73xx;
+	struct cvmx_pki_pbe_eco_s cn78xx;
+	struct cvmx_pki_pbe_eco_s cnf75xx;
+};
+
+typedef union cvmx_pki_pbe_eco cvmx_pki_pbe_eco_t;
+
+/**
+ * cvmx_pki_pcam_lookup
+ *
+ * For diagnostic use only, this register performs a PCAM lookup against the provided
+ * cluster and PCAM instance and loads results into PKI_PCAM_RESULT.
+ */
+union cvmx_pki_pcam_lookup {
+	u64 u64;
+	struct cvmx_pki_pcam_lookup_s {
+		u64 reserved_54_63 : 10;
+		u64 cl : 2;
+		u64 reserved_49_51 : 3;
+		u64 pcam : 1;
+		u64 term : 8;
+		u64 style : 8;
+		u64 data : 32;
+	} s;
+	struct cvmx_pki_pcam_lookup_s cn73xx;
+	struct cvmx_pki_pcam_lookup_s cn78xx;
+	struct cvmx_pki_pcam_lookup_s cn78xxp1;
+	struct cvmx_pki_pcam_lookup_s cnf75xx;
+};
+
+typedef union cvmx_pki_pcam_lookup cvmx_pki_pcam_lookup_t;
+
+/**
+ * cvmx_pki_pcam_result
+ *
+ * For diagnostic use only, this register returns PCAM results for the most recent write to
+ * PKI_PCAM_LOOKUP. The read will stall until the lookup is completed.
+ * PKI_CL()_ECC_CTL[PCAM_EN] must be clear before accessing this register. Read stall
+ * is implemented by delaying the PKI_PCAM_LOOKUP write acknowledge until the PCAM is
+ * free and the lookup can be issued.
+ */
+union cvmx_pki_pcam_result {
+	u64 u64;
+	struct cvmx_pki_pcam_result_s {
+		u64 reserved_41_63 : 23;
+		u64 match : 1;
+		u64 entry : 8;
+		u64 result : 32;
+	} s;
+	struct cvmx_pki_pcam_result_cn73xx {
+		u64 conflict : 1;
+		u64 reserved_41_62 : 22;
+		u64 match : 1;
+		u64 entry : 8;
+		u64 result : 32;
+	} cn73xx;
+	struct cvmx_pki_pcam_result_cn73xx cn78xx;
+	struct cvmx_pki_pcam_result_cn73xx cn78xxp1;
+	struct cvmx_pki_pcam_result_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pki_pcam_result cvmx_pki_pcam_result_t;
+
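+/*
+ * Sketch only: a diagnostic lookup as described above writes
+ * PKI_PCAM_LOOKUP and then reads PKI_PCAM_RESULT, which stalls until
+ * the lookup finishes. The CVMX_PKI_PCAM_LOOKUP and
+ * CVMX_PKI_PCAM_RESULT address macros are assumed to be defined near
+ * the top of this header; csr_rd()/csr_wr() stand in for this port's
+ * CSR accessors.
+ */
+static inline int example_pki_pcam_diag_lookup(int cl, int pcam, u8 term,
+					       u8 style, u32 data)
+{
+	cvmx_pki_pcam_lookup_t lkup;
+	cvmx_pki_pcam_result_t res;
+
+	lkup.u64 = 0;
+	lkup.s.cl = cl;		/* cluster to search */
+	lkup.s.pcam = pcam;	/* PCAM instance within the cluster */
+	lkup.s.term = term;
+	lkup.s.style = style;
+	lkup.s.data = data;
+	csr_wr(CVMX_PKI_PCAM_LOOKUP, lkup.u64);
+
+	res.u64 = csr_rd(CVMX_PKI_PCAM_RESULT);
+	return res.s.match;	/* 1 if an entry hit */
+}
+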
+/**
+ * cvmx_pki_pfe_diag
+ */
+union cvmx_pki_pfe_diag {
+	u64 u64;
+	struct cvmx_pki_pfe_diag_s {
+		u64 reserved_1_63 : 63;
+		u64 bad_rid : 1;
+	} s;
+	struct cvmx_pki_pfe_diag_s cn73xx;
+	struct cvmx_pki_pfe_diag_s cn78xx;
+	struct cvmx_pki_pfe_diag_s cn78xxp1;
+	struct cvmx_pki_pfe_diag_s cnf75xx;
+};
+
+typedef union cvmx_pki_pfe_diag cvmx_pki_pfe_diag_t;
+
+/**
+ * cvmx_pki_pfe_eco
+ */
+union cvmx_pki_pfe_eco {
+	u64 u64;
+	struct cvmx_pki_pfe_eco_s {
+		u64 reserved_32_63 : 32;
+		u64 eco_rw : 32;
+	} s;
+	struct cvmx_pki_pfe_eco_s cn73xx;
+	struct cvmx_pki_pfe_eco_s cn78xx;
+	struct cvmx_pki_pfe_eco_s cnf75xx;
+};
+
+typedef union cvmx_pki_pfe_eco cvmx_pki_pfe_eco_t;
+
+/**
+ * cvmx_pki_pix_clken
+ */
+union cvmx_pki_pix_clken {
+	u64 u64;
+	struct cvmx_pki_pix_clken_s {
+		u64 reserved_17_63 : 47;
+		u64 mech : 1;
+		u64 reserved_4_15 : 12;
+		u64 cls : 4;
+	} s;
+	struct cvmx_pki_pix_clken_s cn73xx;
+	struct cvmx_pki_pix_clken_s cn78xx;
+	struct cvmx_pki_pix_clken_s cn78xxp1;
+	struct cvmx_pki_pix_clken_s cnf75xx;
+};
+
+typedef union cvmx_pki_pix_clken cvmx_pki_pix_clken_t;
+
+/**
+ * cvmx_pki_pix_diag
+ */
+union cvmx_pki_pix_diag {
+	u64 u64;
+	struct cvmx_pki_pix_diag_s {
+		u64 reserved_4_63 : 60;
+		u64 nosched : 4;
+	} s;
+	struct cvmx_pki_pix_diag_s cn73xx;
+	struct cvmx_pki_pix_diag_s cn78xx;
+	struct cvmx_pki_pix_diag_s cn78xxp1;
+	struct cvmx_pki_pix_diag_s cnf75xx;
+};
+
+typedef union cvmx_pki_pix_diag cvmx_pki_pix_diag_t;
+
+/**
+ * cvmx_pki_pix_eco
+ */
+union cvmx_pki_pix_eco {
+	u64 u64;
+	struct cvmx_pki_pix_eco_s {
+		u64 reserved_32_63 : 32;
+		u64 eco_rw : 32;
+	} s;
+	struct cvmx_pki_pix_eco_s cn73xx;
+	struct cvmx_pki_pix_eco_s cn78xx;
+	struct cvmx_pki_pix_eco_s cnf75xx;
+};
+
+typedef union cvmx_pki_pix_eco cvmx_pki_pix_eco_t;
+
+/**
+ * cvmx_pki_pkind#_icgsel
+ */
+union cvmx_pki_pkindx_icgsel {
+	u64 u64;
+	struct cvmx_pki_pkindx_icgsel_s {
+		u64 reserved_2_63 : 62;
+		u64 icg : 2;
+	} s;
+	struct cvmx_pki_pkindx_icgsel_s cn73xx;
+	struct cvmx_pki_pkindx_icgsel_s cn78xx;
+	struct cvmx_pki_pkindx_icgsel_s cn78xxp1;
+	struct cvmx_pki_pkindx_icgsel_s cnf75xx;
+};
+
+typedef union cvmx_pki_pkindx_icgsel cvmx_pki_pkindx_icgsel_t;
+
+/**
+ * cvmx_pki_pknd#_inb_stat0
+ *
+ * This register counts inbound statistics, indexed by pkind.
+ *
+ */
+union cvmx_pki_pkndx_inb_stat0 {
+	u64 u64;
+	struct cvmx_pki_pkndx_inb_stat0_s {
+		u64 reserved_48_63 : 16;
+		u64 pkts : 48;
+	} s;
+	struct cvmx_pki_pkndx_inb_stat0_s cn73xx;
+	struct cvmx_pki_pkndx_inb_stat0_s cn78xx;
+	struct cvmx_pki_pkndx_inb_stat0_s cn78xxp1;
+	struct cvmx_pki_pkndx_inb_stat0_s cnf75xx;
+};
+
+typedef union cvmx_pki_pkndx_inb_stat0 cvmx_pki_pkndx_inb_stat0_t;
+
+/**
+ * cvmx_pki_pknd#_inb_stat1
+ *
+ * This register counts inbound statistics, indexed by pkind.
+ *
+ */
+union cvmx_pki_pkndx_inb_stat1 {
+	u64 u64;
+	struct cvmx_pki_pkndx_inb_stat1_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pki_pkndx_inb_stat1_s cn73xx;
+	struct cvmx_pki_pkndx_inb_stat1_s cn78xx;
+	struct cvmx_pki_pkndx_inb_stat1_s cn78xxp1;
+	struct cvmx_pki_pkndx_inb_stat1_s cnf75xx;
+};
+
+typedef union cvmx_pki_pkndx_inb_stat1 cvmx_pki_pkndx_inb_stat1_t;
+
+/**
+ * cvmx_pki_pknd#_inb_stat2
+ *
+ * This register counts inbound statistics, indexed by pkind.
+ *
+ */
+union cvmx_pki_pkndx_inb_stat2 {
+	u64 u64;
+	struct cvmx_pki_pkndx_inb_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 errs : 48;
+	} s;
+	struct cvmx_pki_pkndx_inb_stat2_s cn73xx;
+	struct cvmx_pki_pkndx_inb_stat2_s cn78xx;
+	struct cvmx_pki_pkndx_inb_stat2_s cn78xxp1;
+	struct cvmx_pki_pkndx_inb_stat2_s cnf75xx;
+};
+
+typedef union cvmx_pki_pkndx_inb_stat2 cvmx_pki_pkndx_inb_stat2_t;
+
+/**
+ * cvmx_pki_pkt_err
+ */
+union cvmx_pki_pkt_err {
+	u64 u64;
+	struct cvmx_pki_pkt_err_s {
+		u64 reserved_7_63 : 57;
+		u64 reasm : 7;
+	} s;
+	struct cvmx_pki_pkt_err_s cn73xx;
+	struct cvmx_pki_pkt_err_s cn78xx;
+	struct cvmx_pki_pkt_err_s cn78xxp1;
+	struct cvmx_pki_pkt_err_s cnf75xx;
+};
+
+typedef union cvmx_pki_pkt_err cvmx_pki_pkt_err_t;
+
+/**
+ * cvmx_pki_ptag_avail
+ *
+ * For diagnostic use only.
+ *
+ */
+union cvmx_pki_ptag_avail {
+	u64 u64;
+	struct cvmx_pki_ptag_avail_s {
+		u64 reserved_8_63 : 56;
+		u64 avail : 8;
+	} s;
+	struct cvmx_pki_ptag_avail_s cn73xx;
+	struct cvmx_pki_ptag_avail_s cn78xx;
+	struct cvmx_pki_ptag_avail_s cnf75xx;
+};
+
+typedef union cvmx_pki_ptag_avail cvmx_pki_ptag_avail_t;
+
+/**
+ * cvmx_pki_qpg_tbl#
+ *
+ * These registers are used by PKI BE to indirectly calculate the Portadd/Aura/Group
+ * from the Diffserv, HiGig or VLAN information as described in QPG. See also
+ * PKI_QPG_TBLB().
+ */
+union cvmx_pki_qpg_tblx {
+	u64 u64;
+	struct cvmx_pki_qpg_tblx_s {
+		u64 reserved_60_63 : 4;
+		u64 padd : 12;
+		u64 grptag_ok : 3;
+		u64 reserved_42_44 : 3;
+		u64 grp_ok : 10;
+		u64 grptag_bad : 3;
+		u64 reserved_26_28 : 3;
+		u64 grp_bad : 10;
+		u64 reserved_12_15 : 4;
+		u64 aura_node : 2;
+		u64 laura : 10;
+	} s;
+	struct cvmx_pki_qpg_tblx_s cn73xx;
+	struct cvmx_pki_qpg_tblx_s cn78xx;
+	struct cvmx_pki_qpg_tblx_s cn78xxp1;
+	struct cvmx_pki_qpg_tblx_s cnf75xx;
+};
+
+typedef union cvmx_pki_qpg_tblx cvmx_pki_qpg_tblx_t;
+
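+/*
+ * Sketch only: filling one QPG table entry with a port-add value,
+ * good/bad SSO groups, and a local aura. CVMX_PKI_QPG_TBLX(idx) is
+ * assumed to be the indexed address macro defined near the top of this
+ * header; csr_wr() stands in for this port's CSR write accessor.
+ */
+static inline void example_pki_qpg_entry(int idx, int padd, int grp,
+					 int bad_grp, int aura)
+{
+	cvmx_pki_qpg_tblx_t qpg;
+
+	qpg.u64 = 0;
+	qpg.s.padd = padd;		/* port-add value */
+	qpg.s.grp_ok = grp;		/* SSO group for good packets */
+	qpg.s.grp_bad = bad_grp;	/* SSO group for errored packets */
+	qpg.s.laura = aura;		/* local FPA aura for buffers */
+	csr_wr(CVMX_PKI_QPG_TBLX(idx), qpg.u64);
+}
+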
+/**
+ * cvmx_pki_qpg_tblb#
+ *
+ * This register configures the QPG table. See also PKI_QPG_TBL().
+ *
+ */
+union cvmx_pki_qpg_tblbx {
+	u64 u64;
+	struct cvmx_pki_qpg_tblbx_s {
+		u64 reserved_10_63 : 54;
+		u64 dstat_id : 10;
+	} s;
+	struct cvmx_pki_qpg_tblbx_s cn73xx;
+	struct cvmx_pki_qpg_tblbx_s cn78xx;
+	struct cvmx_pki_qpg_tblbx_s cnf75xx;
+};
+
+typedef union cvmx_pki_qpg_tblbx cvmx_pki_qpg_tblbx_t;
+
+/**
+ * cvmx_pki_reasm_sop#
+ */
+union cvmx_pki_reasm_sopx {
+	u64 u64;
+	struct cvmx_pki_reasm_sopx_s {
+		u64 sop : 64;
+	} s;
+	struct cvmx_pki_reasm_sopx_s cn73xx;
+	struct cvmx_pki_reasm_sopx_s cn78xx;
+	struct cvmx_pki_reasm_sopx_s cn78xxp1;
+	struct cvmx_pki_reasm_sopx_s cnf75xx;
+};
+
+typedef union cvmx_pki_reasm_sopx cvmx_pki_reasm_sopx_t;
+
+/**
+ * cvmx_pki_req_wgt
+ *
+ * This register controls the round-robin weights among the PKI requestors. For diagnostic
+ * tuning only.
+ */
+union cvmx_pki_req_wgt {
+	u64 u64;
+	struct cvmx_pki_req_wgt_s {
+		u64 reserved_36_63 : 28;
+		u64 wgt8 : 4;
+		u64 wgt7 : 4;
+		u64 wgt6 : 4;
+		u64 wgt5 : 4;
+		u64 wgt4 : 4;
+		u64 wgt3 : 4;
+		u64 wgt2 : 4;
+		u64 wgt1 : 4;
+		u64 wgt0 : 4;
+	} s;
+	struct cvmx_pki_req_wgt_s cn73xx;
+	struct cvmx_pki_req_wgt_s cn78xx;
+	struct cvmx_pki_req_wgt_s cn78xxp1;
+	struct cvmx_pki_req_wgt_s cnf75xx;
+};
+
+typedef union cvmx_pki_req_wgt cvmx_pki_req_wgt_t;
+
+/**
+ * cvmx_pki_sft_rst
+ */
+union cvmx_pki_sft_rst {
+	u64 u64;
+	struct cvmx_pki_sft_rst_s {
+		u64 busy : 1;
+		u64 reserved_33_62 : 30;
+		u64 active : 1;
+		u64 reserved_1_31 : 31;
+		u64 rst : 1;
+	} s;
+	struct cvmx_pki_sft_rst_s cn73xx;
+	struct cvmx_pki_sft_rst_s cn78xx;
+	struct cvmx_pki_sft_rst_s cn78xxp1;
+	struct cvmx_pki_sft_rst_s cnf75xx;
+};
+
+typedef union cvmx_pki_sft_rst cvmx_pki_sft_rst_t;
+
+/**
+ * cvmx_pki_stat#_hist0
+ */
+union cvmx_pki_statx_hist0 {
+	u64 u64;
+	struct cvmx_pki_statx_hist0_s {
+		u64 reserved_48_63 : 16;
+		u64 h1to63 : 48;
+	} s;
+	struct cvmx_pki_statx_hist0_s cn73xx;
+	struct cvmx_pki_statx_hist0_s cn78xx;
+	struct cvmx_pki_statx_hist0_s cn78xxp1;
+	struct cvmx_pki_statx_hist0_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist0 cvmx_pki_statx_hist0_t;
+
+/**
+ * cvmx_pki_stat#_hist1
+ */
+union cvmx_pki_statx_hist1 {
+	u64 u64;
+	struct cvmx_pki_statx_hist1_s {
+		u64 reserved_48_63 : 16;
+		u64 h64to127 : 48;
+	} s;
+	struct cvmx_pki_statx_hist1_s cn73xx;
+	struct cvmx_pki_statx_hist1_s cn78xx;
+	struct cvmx_pki_statx_hist1_s cn78xxp1;
+	struct cvmx_pki_statx_hist1_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist1 cvmx_pki_statx_hist1_t;
+
+/**
+ * cvmx_pki_stat#_hist2
+ */
+union cvmx_pki_statx_hist2 {
+	u64 u64;
+	struct cvmx_pki_statx_hist2_s {
+		u64 reserved_48_63 : 16;
+		u64 h128to255 : 48;
+	} s;
+	struct cvmx_pki_statx_hist2_s cn73xx;
+	struct cvmx_pki_statx_hist2_s cn78xx;
+	struct cvmx_pki_statx_hist2_s cn78xxp1;
+	struct cvmx_pki_statx_hist2_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist2 cvmx_pki_statx_hist2_t;
+
+/**
+ * cvmx_pki_stat#_hist3
+ */
+union cvmx_pki_statx_hist3 {
+	u64 u64;
+	struct cvmx_pki_statx_hist3_s {
+		u64 reserved_48_63 : 16;
+		u64 h256to511 : 48;
+	} s;
+	struct cvmx_pki_statx_hist3_s cn73xx;
+	struct cvmx_pki_statx_hist3_s cn78xx;
+	struct cvmx_pki_statx_hist3_s cn78xxp1;
+	struct cvmx_pki_statx_hist3_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist3 cvmx_pki_statx_hist3_t;
+
+/**
+ * cvmx_pki_stat#_hist4
+ */
+union cvmx_pki_statx_hist4 {
+	u64 u64;
+	struct cvmx_pki_statx_hist4_s {
+		u64 reserved_48_63 : 16;
+		u64 h512to1023 : 48;
+	} s;
+	struct cvmx_pki_statx_hist4_s cn73xx;
+	struct cvmx_pki_statx_hist4_s cn78xx;
+	struct cvmx_pki_statx_hist4_s cn78xxp1;
+	struct cvmx_pki_statx_hist4_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist4 cvmx_pki_statx_hist4_t;
+
+/**
+ * cvmx_pki_stat#_hist5
+ */
+union cvmx_pki_statx_hist5 {
+	u64 u64;
+	struct cvmx_pki_statx_hist5_s {
+		u64 reserved_48_63 : 16;
+		u64 h1024to1518 : 48;
+	} s;
+	struct cvmx_pki_statx_hist5_s cn73xx;
+	struct cvmx_pki_statx_hist5_s cn78xx;
+	struct cvmx_pki_statx_hist5_s cn78xxp1;
+	struct cvmx_pki_statx_hist5_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist5 cvmx_pki_statx_hist5_t;
+
+/**
+ * cvmx_pki_stat#_hist6
+ */
+union cvmx_pki_statx_hist6 {
+	u64 u64;
+	struct cvmx_pki_statx_hist6_s {
+		u64 reserved_48_63 : 16;
+		u64 h1519 : 48;
+	} s;
+	struct cvmx_pki_statx_hist6_s cn73xx;
+	struct cvmx_pki_statx_hist6_s cn78xx;
+	struct cvmx_pki_statx_hist6_s cn78xxp1;
+	struct cvmx_pki_statx_hist6_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_hist6 cvmx_pki_statx_hist6_t;
+
+/**
+ * cvmx_pki_stat#_stat0
+ */
+union cvmx_pki_statx_stat0 {
+	u64 u64;
+	struct cvmx_pki_statx_stat0_s {
+		u64 reserved_48_63 : 16;
+		u64 pkts : 48;
+	} s;
+	struct cvmx_pki_statx_stat0_s cn73xx;
+	struct cvmx_pki_statx_stat0_s cn78xx;
+	struct cvmx_pki_statx_stat0_s cn78xxp1;
+	struct cvmx_pki_statx_stat0_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat0 cvmx_pki_statx_stat0_t;
+
+/**
+ * cvmx_pki_stat#_stat1
+ */
+union cvmx_pki_statx_stat1 {
+	u64 u64;
+	struct cvmx_pki_statx_stat1_s {
+		u64 reserved_48_63 : 16;
+		u64 octs : 48;
+	} s;
+	struct cvmx_pki_statx_stat1_s cn73xx;
+	struct cvmx_pki_statx_stat1_s cn78xx;
+	struct cvmx_pki_statx_stat1_s cn78xxp1;
+	struct cvmx_pki_statx_stat1_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat1 cvmx_pki_statx_stat1_t;
+
+/**
+ * cvmx_pki_stat#_stat10
+ */
+union cvmx_pki_statx_stat10 {
+	u64 u64;
+	struct cvmx_pki_statx_stat10_s {
+		u64 reserved_48_63 : 16;
+		u64 jabber : 48;
+	} s;
+	struct cvmx_pki_statx_stat10_s cn73xx;
+	struct cvmx_pki_statx_stat10_s cn78xx;
+	struct cvmx_pki_statx_stat10_s cn78xxp1;
+	struct cvmx_pki_statx_stat10_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat10 cvmx_pki_statx_stat10_t;
+
+/**
+ * cvmx_pki_stat#_stat11
+ */
+union cvmx_pki_statx_stat11 {
+	u64 u64;
+	struct cvmx_pki_statx_stat11_s {
+		u64 reserved_48_63 : 16;
+		u64 oversz : 48;
+	} s;
+	struct cvmx_pki_statx_stat11_s cn73xx;
+	struct cvmx_pki_statx_stat11_s cn78xx;
+	struct cvmx_pki_statx_stat11_s cn78xxp1;
+	struct cvmx_pki_statx_stat11_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat11 cvmx_pki_statx_stat11_t;
+
+/**
+ * cvmx_pki_stat#_stat12
+ */
+union cvmx_pki_statx_stat12 {
+	u64 u64;
+	struct cvmx_pki_statx_stat12_s {
+		u64 reserved_48_63 : 16;
+		u64 l2err : 48;
+	} s;
+	struct cvmx_pki_statx_stat12_s cn73xx;
+	struct cvmx_pki_statx_stat12_s cn78xx;
+	struct cvmx_pki_statx_stat12_s cn78xxp1;
+	struct cvmx_pki_statx_stat12_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat12 cvmx_pki_statx_stat12_t;
+
+/**
+ * cvmx_pki_stat#_stat13
+ */
+union cvmx_pki_statx_stat13 {
+	u64 u64;
+	struct cvmx_pki_statx_stat13_s {
+		u64 reserved_48_63 : 16;
+		u64 spec : 48;
+	} s;
+	struct cvmx_pki_statx_stat13_s cn73xx;
+	struct cvmx_pki_statx_stat13_s cn78xx;
+	struct cvmx_pki_statx_stat13_s cn78xxp1;
+	struct cvmx_pki_statx_stat13_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat13 cvmx_pki_statx_stat13_t;
+
+/**
+ * cvmx_pki_stat#_stat14
+ */
+union cvmx_pki_statx_stat14 {
+	u64 u64;
+	struct cvmx_pki_statx_stat14_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_bcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat14_s cn73xx;
+	struct cvmx_pki_statx_stat14_s cn78xx;
+	struct cvmx_pki_statx_stat14_s cn78xxp1;
+	struct cvmx_pki_statx_stat14_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat14 cvmx_pki_statx_stat14_t;
+
+/**
+ * cvmx_pki_stat#_stat15
+ */
+union cvmx_pki_statx_stat15 {
+	u64 u64;
+	struct cvmx_pki_statx_stat15_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_mcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat15_s cn73xx;
+	struct cvmx_pki_statx_stat15_s cn78xx;
+	struct cvmx_pki_statx_stat15_s cn78xxp1;
+	struct cvmx_pki_statx_stat15_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat15 cvmx_pki_statx_stat15_t;
+
+/**
+ * cvmx_pki_stat#_stat16
+ */
+union cvmx_pki_statx_stat16 {
+	u64 u64;
+	struct cvmx_pki_statx_stat16_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_bcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat16_s cn73xx;
+	struct cvmx_pki_statx_stat16_s cn78xx;
+	struct cvmx_pki_statx_stat16_s cn78xxp1;
+	struct cvmx_pki_statx_stat16_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat16 cvmx_pki_statx_stat16_t;
+
+/**
+ * cvmx_pki_stat#_stat17
+ */
+union cvmx_pki_statx_stat17 {
+	u64 u64;
+	struct cvmx_pki_statx_stat17_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_mcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat17_s cn73xx;
+	struct cvmx_pki_statx_stat17_s cn78xx;
+	struct cvmx_pki_statx_stat17_s cn78xxp1;
+	struct cvmx_pki_statx_stat17_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat17 cvmx_pki_statx_stat17_t;
+
+/**
+ * cvmx_pki_stat#_stat18
+ */
+union cvmx_pki_statx_stat18 {
+	u64 u64;
+	struct cvmx_pki_statx_stat18_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_spec : 48;
+	} s;
+	struct cvmx_pki_statx_stat18_s cn73xx;
+	struct cvmx_pki_statx_stat18_s cn78xx;
+	struct cvmx_pki_statx_stat18_s cn78xxp1;
+	struct cvmx_pki_statx_stat18_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat18 cvmx_pki_statx_stat18_t;
+
+/**
+ * cvmx_pki_stat#_stat2
+ */
+union cvmx_pki_statx_stat2 {
+	u64 u64;
+	struct cvmx_pki_statx_stat2_s {
+		u64 reserved_48_63 : 16;
+		u64 raw : 48;
+	} s;
+	struct cvmx_pki_statx_stat2_s cn73xx;
+	struct cvmx_pki_statx_stat2_s cn78xx;
+	struct cvmx_pki_statx_stat2_s cn78xxp1;
+	struct cvmx_pki_statx_stat2_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat2 cvmx_pki_statx_stat2_t;
+
+/**
+ * cvmx_pki_stat#_stat3
+ */
+union cvmx_pki_statx_stat3 {
+	u64 u64;
+	struct cvmx_pki_statx_stat3_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_pkts : 48;
+	} s;
+	struct cvmx_pki_statx_stat3_s cn73xx;
+	struct cvmx_pki_statx_stat3_s cn78xx;
+	struct cvmx_pki_statx_stat3_s cn78xxp1;
+	struct cvmx_pki_statx_stat3_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat3 cvmx_pki_statx_stat3_t;
+
+/**
+ * cvmx_pki_stat#_stat4
+ */
+union cvmx_pki_statx_stat4 {
+	u64 u64;
+	struct cvmx_pki_statx_stat4_s {
+		u64 reserved_48_63 : 16;
+		u64 drp_octs : 48;
+	} s;
+	struct cvmx_pki_statx_stat4_s cn73xx;
+	struct cvmx_pki_statx_stat4_s cn78xx;
+	struct cvmx_pki_statx_stat4_s cn78xxp1;
+	struct cvmx_pki_statx_stat4_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat4 cvmx_pki_statx_stat4_t;
+
+/**
+ * cvmx_pki_stat#_stat5
+ */
+union cvmx_pki_statx_stat5 {
+	u64 u64;
+	struct cvmx_pki_statx_stat5_s {
+		u64 reserved_48_63 : 16;
+		u64 bcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat5_s cn73xx;
+	struct cvmx_pki_statx_stat5_s cn78xx;
+	struct cvmx_pki_statx_stat5_s cn78xxp1;
+	struct cvmx_pki_statx_stat5_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat5 cvmx_pki_statx_stat5_t;
+
+/**
+ * cvmx_pki_stat#_stat6
+ */
+union cvmx_pki_statx_stat6 {
+	u64 u64;
+	struct cvmx_pki_statx_stat6_s {
+		u64 reserved_48_63 : 16;
+		u64 mcast : 48;
+	} s;
+	struct cvmx_pki_statx_stat6_s cn73xx;
+	struct cvmx_pki_statx_stat6_s cn78xx;
+	struct cvmx_pki_statx_stat6_s cn78xxp1;
+	struct cvmx_pki_statx_stat6_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat6 cvmx_pki_statx_stat6_t;
+
+/**
+ * cvmx_pki_stat#_stat7
+ */
+union cvmx_pki_statx_stat7 {
+	u64 u64;
+	struct cvmx_pki_statx_stat7_s {
+		u64 reserved_48_63 : 16;
+		u64 fcs : 48;
+	} s;
+	struct cvmx_pki_statx_stat7_s cn73xx;
+	struct cvmx_pki_statx_stat7_s cn78xx;
+	struct cvmx_pki_statx_stat7_s cn78xxp1;
+	struct cvmx_pki_statx_stat7_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat7 cvmx_pki_statx_stat7_t;
+
+/**
+ * cvmx_pki_stat#_stat8
+ */
+union cvmx_pki_statx_stat8 {
+	u64 u64;
+	struct cvmx_pki_statx_stat8_s {
+		u64 reserved_48_63 : 16;
+		u64 frag : 48;
+	} s;
+	struct cvmx_pki_statx_stat8_s cn73xx;
+	struct cvmx_pki_statx_stat8_s cn78xx;
+	struct cvmx_pki_statx_stat8_s cn78xxp1;
+	struct cvmx_pki_statx_stat8_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat8 cvmx_pki_statx_stat8_t;
+
+/**
+ * cvmx_pki_stat#_stat9
+ */
+union cvmx_pki_statx_stat9 {
+	u64 u64;
+	struct cvmx_pki_statx_stat9_s {
+		u64 reserved_48_63 : 16;
+		u64 undersz : 48;
+	} s;
+	struct cvmx_pki_statx_stat9_s cn73xx;
+	struct cvmx_pki_statx_stat9_s cn78xx;
+	struct cvmx_pki_statx_stat9_s cn78xxp1;
+	struct cvmx_pki_statx_stat9_s cnf75xx;
+};
+
+typedef union cvmx_pki_statx_stat9 cvmx_pki_statx_stat9_t;
+
+/**
+ * cvmx_pki_stat_ctl
+ *
+ * This register controls how the PKI statistics counters are handled.
+ *
+ */
+union cvmx_pki_stat_ctl {
+	u64 u64;
+	struct cvmx_pki_stat_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 mode : 2;
+	} s;
+	struct cvmx_pki_stat_ctl_s cn73xx;
+	struct cvmx_pki_stat_ctl_s cn78xx;
+	struct cvmx_pki_stat_ctl_s cn78xxp1;
+	struct cvmx_pki_stat_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pki_stat_ctl cvmx_pki_stat_ctl_t;
+
+/**
+ * cvmx_pki_style#_buf
+ *
+ * This register configures the PKI BE skip amounts and other information.
+ * It is indexed by final style, PKI_WQE_S[STYLE]<5:0>.
+ */
+union cvmx_pki_stylex_buf {
+	u64 u64;
+	struct cvmx_pki_stylex_buf_s {
+		u64 reserved_33_63 : 31;
+		u64 pkt_lend : 1;
+		u64 wqe_hsz : 2;
+		u64 wqe_skip : 2;
+		u64 first_skip : 6;
+		u64 later_skip : 6;
+		u64 opc_mode : 2;
+		u64 dis_wq_dat : 1;
+		u64 mb_size : 13;
+	} s;
+	struct cvmx_pki_stylex_buf_s cn73xx;
+	struct cvmx_pki_stylex_buf_s cn78xx;
+	struct cvmx_pki_stylex_buf_s cn78xxp1;
+	struct cvmx_pki_stylex_buf_s cnf75xx;
+};
+
+typedef union cvmx_pki_stylex_buf cvmx_pki_stylex_buf_t;
+
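+/*
+ * Sketch only: setting the buffer geometry for one style. The skip and
+ * size fields are assumed to be in units of 8-byte words, as in the
+ * Cavium SDK helpers; CVMX_PKI_STYLEX_BUF(style) is assumed to be the
+ * indexed address macro defined near the top of this header.
+ */
+static inline void example_pki_style_buf(int style)
+{
+	cvmx_pki_stylex_buf_t buf;
+
+	buf.u64 = 0;
+	buf.s.mb_size = 2048 / 8;	/* bytes per buffer (8-byte words, assumed) */
+	buf.s.first_skip = 40 / 8;	/* space before data in buffer 0 (assumed units) */
+	buf.s.later_skip = 0;		/* no skip in chained buffers */
+	csr_wr(CVMX_PKI_STYLEX_BUF(style), buf.u64);
+}
+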
+/**
+ * cvmx_pki_style#_tag_mask
+ *
+ * This register configures the PKI BE tag algorithm.
+ * It is indexed by final style, PKI_WQE_S[STYLE]<5:0>.
+ */
+union cvmx_pki_stylex_tag_mask {
+	u64 u64;
+	struct cvmx_pki_stylex_tag_mask_s {
+		u64 reserved_16_63 : 48;
+		u64 mask : 16;
+	} s;
+	struct cvmx_pki_stylex_tag_mask_s cn73xx;
+	struct cvmx_pki_stylex_tag_mask_s cn78xx;
+	struct cvmx_pki_stylex_tag_mask_s cn78xxp1;
+	struct cvmx_pki_stylex_tag_mask_s cnf75xx;
+};
+
+typedef union cvmx_pki_stylex_tag_mask cvmx_pki_stylex_tag_mask_t;
+
+/**
+ * cvmx_pki_style#_tag_sel
+ *
+ * This register configures the PKI BE tag algorithm.
+ * It is indexed by final style, PKI_WQE_S[STYLE]<5:0>.
+ */
+union cvmx_pki_stylex_tag_sel {
+	u64 u64;
+	struct cvmx_pki_stylex_tag_sel_s {
+		u64 reserved_27_63 : 37;
+		u64 tag_idx3 : 3;
+		u64 reserved_19_23 : 5;
+		u64 tag_idx2 : 3;
+		u64 reserved_11_15 : 5;
+		u64 tag_idx1 : 3;
+		u64 reserved_3_7 : 5;
+		u64 tag_idx0 : 3;
+	} s;
+	struct cvmx_pki_stylex_tag_sel_s cn73xx;
+	struct cvmx_pki_stylex_tag_sel_s cn78xx;
+	struct cvmx_pki_stylex_tag_sel_s cn78xxp1;
+	struct cvmx_pki_stylex_tag_sel_s cnf75xx;
+};
+
+typedef union cvmx_pki_stylex_tag_sel cvmx_pki_stylex_tag_sel_t;
+
+/**
+ * cvmx_pki_style#_wq2
+ *
+ * This register configures the PKI BE WQE generation.
+ * It is indexed by final style, PKI_WQE_S[STYLE]<5:0>.
+ */
+union cvmx_pki_stylex_wq2 {
+	u64 u64;
+	struct cvmx_pki_stylex_wq2_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_pki_stylex_wq2_s cn73xx;
+	struct cvmx_pki_stylex_wq2_s cn78xx;
+	struct cvmx_pki_stylex_wq2_s cn78xxp1;
+	struct cvmx_pki_stylex_wq2_s cnf75xx;
+};
+
+typedef union cvmx_pki_stylex_wq2 cvmx_pki_stylex_wq2_t;
+
+/**
+ * cvmx_pki_style#_wq4
+ *
+ * This register configures the PKI BE WQE generation.
+ * It is indexed by final style, PKI_WQE_S[STYLE]<5:0>.
+ */
+union cvmx_pki_stylex_wq4 {
+	u64 u64;
+	struct cvmx_pki_stylex_wq4_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_pki_stylex_wq4_s cn73xx;
+	struct cvmx_pki_stylex_wq4_s cn78xx;
+	struct cvmx_pki_stylex_wq4_s cn78xxp1;
+	struct cvmx_pki_stylex_wq4_s cnf75xx;
+};
+
+typedef union cvmx_pki_stylex_wq4 cvmx_pki_stylex_wq4_t;
+
+/**
+ * cvmx_pki_tag_inc#_ctl
+ */
+union cvmx_pki_tag_incx_ctl {
+	u64 u64;
+	struct cvmx_pki_tag_incx_ctl_s {
+		u64 reserved_12_63 : 52;
+		u64 ptr_sel : 4;
+		u64 offset : 8;
+	} s;
+	struct cvmx_pki_tag_incx_ctl_s cn73xx;
+	struct cvmx_pki_tag_incx_ctl_s cn78xx;
+	struct cvmx_pki_tag_incx_ctl_s cn78xxp1;
+	struct cvmx_pki_tag_incx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pki_tag_incx_ctl cvmx_pki_tag_incx_ctl_t;
+
+/**
+ * cvmx_pki_tag_inc#_mask
+ */
+union cvmx_pki_tag_incx_mask {
+	u64 u64;
+	struct cvmx_pki_tag_incx_mask_s {
+		u64 en : 64;
+	} s;
+	struct cvmx_pki_tag_incx_mask_s cn73xx;
+	struct cvmx_pki_tag_incx_mask_s cn78xx;
+	struct cvmx_pki_tag_incx_mask_s cn78xxp1;
+	struct cvmx_pki_tag_incx_mask_s cnf75xx;
+};
+
+typedef union cvmx_pki_tag_incx_mask cvmx_pki_tag_incx_mask_t;
+
+/**
+ * cvmx_pki_tag_secret
+ *
+ * The source and destination initial values (IVs) used in tag generation provide a
+ * mechanism for seeding with random values to reduce cache collision attacks.
+ */
+union cvmx_pki_tag_secret {
+	u64 u64;
+	struct cvmx_pki_tag_secret_s {
+		u64 dst6 : 16;
+		u64 src6 : 16;
+		u64 dst : 16;
+		u64 src : 16;
+	} s;
+	struct cvmx_pki_tag_secret_s cn73xx;
+	struct cvmx_pki_tag_secret_s cn78xx;
+	struct cvmx_pki_tag_secret_s cn78xxp1;
+	struct cvmx_pki_tag_secret_s cnf75xx;
+};
+
+typedef union cvmx_pki_tag_secret cvmx_pki_tag_secret_t;
+
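+/*
+ * Sketch only: seeding the tag IVs once at boot, per the note above.
+ * The caller supplies the entropy; CVMX_PKI_TAG_SECRET is assumed to
+ * be the address macro defined near the top of this header, and
+ * csr_wr() stands in for this port's CSR write accessor.
+ */
+static inline void example_pki_seed_tag_secret(u64 seed)
+{
+	cvmx_pki_tag_secret_t sec;
+
+	sec.u64 = seed;		/* fills src/dst and src6/dst6 at once */
+	csr_wr(CVMX_PKI_TAG_SECRET, sec.u64);
+}
+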
+/**
+ * cvmx_pki_x2p_req_ofl
+ */
+union cvmx_pki_x2p_req_ofl {
+	u64 u64;
+	struct cvmx_pki_x2p_req_ofl_s {
+		u64 reserved_4_63 : 60;
+		u64 x2p_did : 4;
+	} s;
+	struct cvmx_pki_x2p_req_ofl_s cn73xx;
+	struct cvmx_pki_x2p_req_ofl_s cn78xx;
+	struct cvmx_pki_x2p_req_ofl_s cn78xxp1;
+	struct cvmx_pki_x2p_req_ofl_s cnf75xx;
+};
+
+typedef union cvmx_pki_x2p_req_ofl cvmx_pki_x2p_req_ofl_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 24/50] mips: octeon: Add cvmx-pko-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (22 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 23/50] mips: octeon: Add cvmx-pki-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 25/50] mips: octeon: Add cvmx-pow-defs.h " Stefan Roese
                   ` (28 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-pko-defs.h header file from the 2013 U-Boot version. It
will be used by drivers added later to support PCIe and networking on
the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pko-defs.h  | 9388 +++++++++++++++++
 1 file changed, 9388 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-defs.h
new file mode 100644
index 0000000000..6ae01514dc
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-defs.h
@@ -0,0 +1,9388 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pko.
+ */
+
+#ifndef __CVMX_PKO_DEFS_H__
+#define __CVMX_PKO_DEFS_H__
+
+#define CVMX_PKO_CHANNEL_LEVEL			(0x00015400000800F0ull)
+#define CVMX_PKO_DPFI_ENA			(0x0001540000C00018ull)
+#define CVMX_PKO_DPFI_FLUSH			(0x0001540000C00008ull)
+#define CVMX_PKO_DPFI_FPA_AURA			(0x0001540000C00010ull)
+#define CVMX_PKO_DPFI_STATUS			(0x0001540000C00000ull)
+#define CVMX_PKO_DQX_BYTES(offset)		(0x00015400000000D8ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_CIR(offset)		(0x0001540000280018ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_DROPPED_BYTES(offset)	(0x00015400000000C8ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_DROPPED_PACKETS(offset)	(0x00015400000000C0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_FIFO(offset)		(0x0001540000300078ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_PACKETS(offset)		(0x00015400000000D0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_PICK(offset)		(0x0001540000300070ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_PIR(offset)		(0x0001540000280020ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_POINTERS(offset)		(0x0001540000280078ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_SCHEDULE(offset)		(0x0001540000280008ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_SCHED_STATE(offset)	(0x0001540000280028ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_SHAPE(offset)		(0x0001540000280010ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_SHAPE_STATE(offset)	(0x0001540000280030ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_SW_XOFF(offset)		(0x00015400002800E0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_TOPOLOGY(offset)		(0x0001540000300000ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_BUF_CNT(offset)		(0x00015400008000E8ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_BUF_CTL(offset)		(0x00015400008000F0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_BUF_CTL_W1C(offset)	(0x00015400008000F8ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_CNT(offset)		(0x0001540000000050ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_CTL(offset)		(0x0001540000000040ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQX_WM_CTL_W1C(offset)		(0x0001540000000048ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_DQ_CSR_BUS_DEBUG		(0x00015400003001F8ull)
+#define CVMX_PKO_DQ_DEBUG			(0x0001540000300128ull)
+#define CVMX_PKO_DRAIN_IRQ			(0x0001540000000140ull)
+#define CVMX_PKO_ENABLE				(0x0001540000D00008ull)
+#define CVMX_PKO_FORMATX_CTL(offset)		(0x0001540000900800ull + ((offset) & 127) * 8)
+#define CVMX_PKO_L1_SQA_DEBUG			(0x0001540000080128ull)
+#define CVMX_PKO_L1_SQB_DEBUG			(0x0001540000080130ull)
+#define CVMX_PKO_L1_SQX_CIR(offset)		(0x0001540000000018ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_DROPPED_BYTES(offset)	(0x0001540000000088ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_DROPPED_PACKETS(offset) (0x0001540000000080ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_GREEN(offset)		(0x0001540000080058ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_GREEN_BYTES(offset)	(0x00015400000000B8ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_GREEN_PACKETS(offset)	(0x00015400000000B0ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_LINK(offset)		(0x0001540000000038ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_PICK(offset)		(0x0001540000080070ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_RED(offset)		(0x0001540000080068ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_RED_BYTES(offset)	(0x0001540000000098ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_RED_PACKETS(offset)	(0x0001540000000090ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_SCHEDULE(offset)	(0x0001540000000008ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_SHAPE(offset)		(0x0001540000000010ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_SHAPE_STATE(offset)	(0x0001540000000030ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_SW_XOFF(offset)		(0x00015400000000E0ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_TOPOLOGY(offset)	(0x0001540000080000ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_YELLOW(offset)		(0x0001540000080060ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_YELLOW_BYTES(offset)	(0x00015400000000A8ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQX_YELLOW_PACKETS(offset)	(0x00015400000000A0ull + ((offset) & 31) * 512)
+#define CVMX_PKO_L1_SQ_CSR_BUS_DEBUG		(0x00015400000801F8ull)
+#define CVMX_PKO_L2_SQA_DEBUG			(0x0001540000100128ull)
+#define CVMX_PKO_L2_SQB_DEBUG			(0x0001540000100130ull)
+#define CVMX_PKO_L2_SQX_CIR(offset)		(0x0001540000080018ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_GREEN(offset)		(0x0001540000100058ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_PICK(offset)		(0x0001540000100070ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_PIR(offset)		(0x0001540000080020ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_POINTERS(offset)	(0x0001540000080078ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_RED(offset)		(0x0001540000100068ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_SCHEDULE(offset)	(0x0001540000080008ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_SCHED_STATE(offset)	(0x0001540000080028ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_SHAPE(offset)		(0x0001540000080010ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_SHAPE_STATE(offset)	(0x0001540000080030ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_SW_XOFF(offset)		(0x00015400000800E0ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_TOPOLOGY(offset)	(0x0001540000100000ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQX_YELLOW(offset)		(0x0001540000100060ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L2_SQ_CSR_BUS_DEBUG		(0x00015400001001F8ull)
+#define CVMX_PKO_L3_L2_SQX_CHANNEL(offset)	(0x0001540000080038ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQA_DEBUG			(0x0001540000180128ull)
+#define CVMX_PKO_L3_SQB_DEBUG			(0x0001540000180130ull)
+#define CVMX_PKO_L3_SQX_CIR(offset)		(0x0001540000100018ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_GREEN(offset)		(0x0001540000180058ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_PICK(offset)		(0x0001540000180070ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_PIR(offset)		(0x0001540000100020ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_POINTERS(offset)	(0x0001540000100078ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_RED(offset)		(0x0001540000180068ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_SCHEDULE(offset)	(0x0001540000100008ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_SCHED_STATE(offset)	(0x0001540000100028ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_SHAPE(offset)		(0x0001540000100010ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_SHAPE_STATE(offset)	(0x0001540000100030ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_SW_XOFF(offset)		(0x00015400001000E0ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_TOPOLOGY(offset)	(0x0001540000180000ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQX_YELLOW(offset)		(0x0001540000180060ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L3_SQ_CSR_BUS_DEBUG		(0x00015400001801F8ull)
+#define CVMX_PKO_L4_SQA_DEBUG			(0x0001540000200128ull)
+#define CVMX_PKO_L4_SQB_DEBUG			(0x0001540000200130ull)
+#define CVMX_PKO_L4_SQX_CIR(offset)		(0x0001540000180018ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_GREEN(offset)		(0x0001540000200058ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_PICK(offset)		(0x0001540000200070ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_PIR(offset)		(0x0001540000180020ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_POINTERS(offset)	(0x0001540000180078ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_RED(offset)		(0x0001540000200068ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_SCHEDULE(offset)	(0x0001540000180008ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_SCHED_STATE(offset)	(0x0001540000180028ull + ((offset) & 511) * 512)
+#define CVMX_PKO_L4_SQX_SHAPE(offset)		(0x0001540000180010ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_SHAPE_STATE(offset)	(0x0001540000180030ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_SW_XOFF(offset)		(0x00015400001800E0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_TOPOLOGY(offset)	(0x0001540000200000ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQX_YELLOW(offset)		(0x0001540000200060ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L4_SQ_CSR_BUS_DEBUG		(0x00015400002001F8ull)
+#define CVMX_PKO_L5_SQA_DEBUG			(0x0001540000280128ull)
+#define CVMX_PKO_L5_SQB_DEBUG			(0x0001540000280130ull)
+#define CVMX_PKO_L5_SQX_CIR(offset)		(0x0001540000200018ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_GREEN(offset)		(0x0001540000280058ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_PICK(offset)		(0x0001540000280070ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_PIR(offset)		(0x0001540000200020ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_POINTERS(offset)	(0x0001540000200078ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_RED(offset)		(0x0001540000280068ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_SCHEDULE(offset)	(0x0001540000200008ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_SCHED_STATE(offset)	(0x0001540000200028ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_SHAPE(offset)		(0x0001540000200010ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_SHAPE_STATE(offset)	(0x0001540000200030ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_SW_XOFF(offset)		(0x00015400002000E0ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_TOPOLOGY(offset)	(0x0001540000280000ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQX_YELLOW(offset)		(0x0001540000280060ull + ((offset) & 1023) * 512)
+#define CVMX_PKO_L5_SQ_CSR_BUS_DEBUG		(0x00015400002801F8ull)
+#define CVMX_PKO_LUTX(offset)			(0x0001540000B00000ull + ((offset) & 1023) * 8)
+#define CVMX_PKO_LUT_BIST_STATUS		(0x0001540000B02018ull)
+#define CVMX_PKO_LUT_ECC_CTL0			(0x0001540000BFFFD0ull)
+#define CVMX_PKO_LUT_ECC_DBE_STS0		(0x0001540000BFFFF0ull)
+#define CVMX_PKO_LUT_ECC_DBE_STS_CMB0		(0x0001540000BFFFD8ull)
+#define CVMX_PKO_LUT_ECC_SBE_STS0		(0x0001540000BFFFF8ull)
+#define CVMX_PKO_LUT_ECC_SBE_STS_CMB0		(0x0001540000BFFFE8ull)
+#define CVMX_PKO_MACX_CFG(offset)		(0x0001540000900000ull + ((offset) & 31) * 8)
+#define CVMX_PKO_MCI0_CRED_CNTX(offset)		(0x0001540000A40000ull + ((offset) & 31) * 8)
+#define CVMX_PKO_MCI0_MAX_CREDX(offset)		(0x0001540000A00000ull + ((offset) & 31) * 8)
+#define CVMX_PKO_MCI1_CRED_CNTX(offset)		(0x0001540000A80100ull + ((offset) & 31) * 8)
+#define CVMX_PKO_MCI1_MAX_CREDX(offset)		(0x0001540000A80000ull + ((offset) & 31) * 8)
+#define CVMX_PKO_MEM_COUNT0			(0x0001180050001080ull)
+#define CVMX_PKO_MEM_COUNT1			(0x0001180050001088ull)
+#define CVMX_PKO_MEM_DEBUG0			(0x0001180050001100ull)
+#define CVMX_PKO_MEM_DEBUG1			(0x0001180050001108ull)
+#define CVMX_PKO_MEM_DEBUG10			(0x0001180050001150ull)
+#define CVMX_PKO_MEM_DEBUG11			(0x0001180050001158ull)
+#define CVMX_PKO_MEM_DEBUG12			(0x0001180050001160ull)
+#define CVMX_PKO_MEM_DEBUG13			(0x0001180050001168ull)
+#define CVMX_PKO_MEM_DEBUG14			(0x0001180050001170ull)
+#define CVMX_PKO_MEM_DEBUG2			(0x0001180050001110ull)
+#define CVMX_PKO_MEM_DEBUG3			(0x0001180050001118ull)
+#define CVMX_PKO_MEM_DEBUG4			(0x0001180050001120ull)
+#define CVMX_PKO_MEM_DEBUG5			(0x0001180050001128ull)
+#define CVMX_PKO_MEM_DEBUG6			(0x0001180050001130ull)
+#define CVMX_PKO_MEM_DEBUG7			(0x0001180050001138ull)
+#define CVMX_PKO_MEM_DEBUG8			(0x0001180050001140ull)
+#define CVMX_PKO_MEM_DEBUG9			(0x0001180050001148ull)
+#define CVMX_PKO_MEM_IPORT_PTRS			(0x0001180050001030ull)
+#define CVMX_PKO_MEM_IPORT_QOS			(0x0001180050001038ull)
+#define CVMX_PKO_MEM_IQUEUE_PTRS		(0x0001180050001040ull)
+#define CVMX_PKO_MEM_IQUEUE_QOS			(0x0001180050001048ull)
+#define CVMX_PKO_MEM_PORT_PTRS			(0x0001180050001010ull)
+#define CVMX_PKO_MEM_PORT_QOS			(0x0001180050001018ull)
+#define CVMX_PKO_MEM_PORT_RATE0			(0x0001180050001020ull)
+#define CVMX_PKO_MEM_PORT_RATE1			(0x0001180050001028ull)
+#define CVMX_PKO_MEM_QUEUE_PTRS			(0x0001180050001000ull)
+#define CVMX_PKO_MEM_QUEUE_QOS			(0x0001180050001008ull)
+#define CVMX_PKO_MEM_THROTTLE_INT		(0x0001180050001058ull)
+#define CVMX_PKO_MEM_THROTTLE_PIPE		(0x0001180050001050ull)
+#define CVMX_PKO_NCB_BIST_STATUS		(0x0001540000EFFF00ull)
+#define CVMX_PKO_NCB_ECC_CTL0			(0x0001540000EFFFD0ull)
+#define CVMX_PKO_NCB_ECC_DBE_STS0		(0x0001540000EFFFF0ull)
+#define CVMX_PKO_NCB_ECC_DBE_STS_CMB0		(0x0001540000EFFFD8ull)
+#define CVMX_PKO_NCB_ECC_SBE_STS0		(0x0001540000EFFFF8ull)
+#define CVMX_PKO_NCB_ECC_SBE_STS_CMB0		(0x0001540000EFFFE8ull)
+#define CVMX_PKO_NCB_INT			(0x0001540000E00010ull)
+#define CVMX_PKO_NCB_TX_ERR_INFO		(0x0001540000E00008ull)
+#define CVMX_PKO_NCB_TX_ERR_WORD		(0x0001540000E00000ull)
+#define CVMX_PKO_PDM_BIST_STATUS		(0x00015400008FFF00ull)
+#define CVMX_PKO_PDM_CFG			(0x0001540000800000ull)
+#define CVMX_PKO_PDM_CFG_DBG			(0x0001540000800FF8ull)
+#define CVMX_PKO_PDM_CP_DBG			(0x0001540000800190ull)
+#define CVMX_PKO_PDM_DQX_MINPAD(offset)		(0x00015400008F0000ull + ((offset) & 1023) * 8)
+#define CVMX_PKO_PDM_DRPBUF_DBG			(0x00015400008000B0ull)
+#define CVMX_PKO_PDM_DWPBUF_DBG			(0x00015400008000A8ull)
+#define CVMX_PKO_PDM_ECC_CTL0			(0x00015400008FFFD0ull)
+#define CVMX_PKO_PDM_ECC_CTL1			(0x00015400008FFFD8ull)
+#define CVMX_PKO_PDM_ECC_DBE_STS0		(0x00015400008FFFF0ull)
+#define CVMX_PKO_PDM_ECC_DBE_STS_CMB0		(0x00015400008FFFE0ull)
+#define CVMX_PKO_PDM_ECC_SBE_STS0		(0x00015400008FFFF8ull)
+#define CVMX_PKO_PDM_ECC_SBE_STS_CMB0		(0x00015400008FFFE8ull)
+#define CVMX_PKO_PDM_FILLB_DBG0			(0x00015400008002A0ull)
+#define CVMX_PKO_PDM_FILLB_DBG1			(0x00015400008002A8ull)
+#define CVMX_PKO_PDM_FILLB_DBG2			(0x00015400008002B0ull)
+#define CVMX_PKO_PDM_FLSHB_DBG0			(0x00015400008002B8ull)
+#define CVMX_PKO_PDM_FLSHB_DBG1			(0x00015400008002C0ull)
+#define CVMX_PKO_PDM_INTF_DBG_RD		(0x0001540000900F20ull)
+#define CVMX_PKO_PDM_ISRD_DBG			(0x0001540000800090ull)
+#define CVMX_PKO_PDM_ISRD_DBG_DQ		(0x0001540000800290ull)
+#define CVMX_PKO_PDM_ISRM_DBG			(0x0001540000800098ull)
+#define CVMX_PKO_PDM_ISRM_DBG_DQ		(0x0001540000800298ull)
+#define CVMX_PKO_PDM_MEM_ADDR			(0x0001540000800018ull)
+#define CVMX_PKO_PDM_MEM_DATA			(0x0001540000800010ull)
+#define CVMX_PKO_PDM_MEM_RW_CTL			(0x0001540000800020ull)
+#define CVMX_PKO_PDM_MEM_RW_STS			(0x0001540000800028ull)
+#define CVMX_PKO_PDM_MWPBUF_DBG			(0x00015400008000A0ull)
+#define CVMX_PKO_PDM_STS			(0x0001540000800008ull)
+#define CVMX_PKO_PEB_BIST_STATUS		(0x0001540000900D00ull)
+#define CVMX_PKO_PEB_ECC_CTL0			(0x00015400009FFFD0ull)
+#define CVMX_PKO_PEB_ECC_CTL1			(0x00015400009FFFA8ull)
+#define CVMX_PKO_PEB_ECC_DBE_STS0		(0x00015400009FFFF0ull)
+#define CVMX_PKO_PEB_ECC_DBE_STS_CMB0		(0x00015400009FFFD8ull)
+#define CVMX_PKO_PEB_ECC_SBE_STS0		(0x00015400009FFFF8ull)
+#define CVMX_PKO_PEB_ECC_SBE_STS_CMB0		(0x00015400009FFFE8ull)
+#define CVMX_PKO_PEB_ECO			(0x0001540000901000ull)
+#define CVMX_PKO_PEB_ERR_INT			(0x0001540000900C00ull)
+#define CVMX_PKO_PEB_EXT_HDR_DEF_ERR_INFO	(0x0001540000900C08ull)
+#define CVMX_PKO_PEB_FCS_SOP_ERR_INFO		(0x0001540000900C18ull)
+#define CVMX_PKO_PEB_JUMP_DEF_ERR_INFO		(0x0001540000900C10ull)
+#define CVMX_PKO_PEB_MACX_CFG_WR_ERR_INFO	(0x0001540000900C50ull)
+#define CVMX_PKO_PEB_MAX_LINK_ERR_INFO		(0x0001540000900C48ull)
+#define CVMX_PKO_PEB_NCB_CFG			(0x0001540000900308ull)
+#define CVMX_PKO_PEB_PAD_ERR_INFO		(0x0001540000900C28ull)
+#define CVMX_PKO_PEB_PSE_FIFO_ERR_INFO		(0x0001540000900C20ull)
+#define CVMX_PKO_PEB_SUBD_ADDR_ERR_INFO		(0x0001540000900C38ull)
+#define CVMX_PKO_PEB_SUBD_SIZE_ERR_INFO		(0x0001540000900C40ull)
+#define CVMX_PKO_PEB_TRUNC_ERR_INFO		(0x0001540000900C30ull)
+#define CVMX_PKO_PEB_TSO_CFG			(0x0001540000900310ull)
+#define CVMX_PKO_PQA_DEBUG			(0x0001540000000128ull)
+#define CVMX_PKO_PQB_DEBUG			(0x0001540000000130ull)
+#define CVMX_PKO_PQ_CSR_BUS_DEBUG		(0x00015400000001F8ull)
+#define CVMX_PKO_PQ_DEBUG_GREEN			(0x0001540000000058ull)
+#define CVMX_PKO_PQ_DEBUG_LINKS			(0x0001540000000068ull)
+#define CVMX_PKO_PQ_DEBUG_YELLOW		(0x0001540000000060ull)
+#define CVMX_PKO_PSE_DQ_BIST_STATUS		(0x0001540000300138ull)
+#define CVMX_PKO_PSE_DQ_ECC_CTL0		(0x0001540000300100ull)
+#define CVMX_PKO_PSE_DQ_ECC_DBE_STS0		(0x0001540000300118ull)
+#define CVMX_PKO_PSE_DQ_ECC_DBE_STS_CMB0	(0x0001540000300120ull)
+#define CVMX_PKO_PSE_DQ_ECC_SBE_STS0		(0x0001540000300108ull)
+#define CVMX_PKO_PSE_DQ_ECC_SBE_STS_CMB0	(0x0001540000300110ull)
+#define CVMX_PKO_PSE_PQ_BIST_STATUS		(0x0001540000000138ull)
+#define CVMX_PKO_PSE_PQ_ECC_CTL0		(0x0001540000000100ull)
+#define CVMX_PKO_PSE_PQ_ECC_DBE_STS0		(0x0001540000000118ull)
+#define CVMX_PKO_PSE_PQ_ECC_DBE_STS_CMB0	(0x0001540000000120ull)
+#define CVMX_PKO_PSE_PQ_ECC_SBE_STS0		(0x0001540000000108ull)
+#define CVMX_PKO_PSE_PQ_ECC_SBE_STS_CMB0	(0x0001540000000110ull)
+#define CVMX_PKO_PSE_SQ1_BIST_STATUS		(0x0001540000080138ull)
+#define CVMX_PKO_PSE_SQ1_ECC_CTL0		(0x0001540000080100ull)
+#define CVMX_PKO_PSE_SQ1_ECC_DBE_STS0		(0x0001540000080118ull)
+#define CVMX_PKO_PSE_SQ1_ECC_DBE_STS_CMB0	(0x0001540000080120ull)
+#define CVMX_PKO_PSE_SQ1_ECC_SBE_STS0		(0x0001540000080108ull)
+#define CVMX_PKO_PSE_SQ1_ECC_SBE_STS_CMB0	(0x0001540000080110ull)
+#define CVMX_PKO_PSE_SQ2_BIST_STATUS		(0x0001540000100138ull)
+#define CVMX_PKO_PSE_SQ2_ECC_CTL0		(0x0001540000100100ull)
+#define CVMX_PKO_PSE_SQ2_ECC_DBE_STS0		(0x0001540000100118ull)
+#define CVMX_PKO_PSE_SQ2_ECC_DBE_STS_CMB0	(0x0001540000100120ull)
+#define CVMX_PKO_PSE_SQ2_ECC_SBE_STS0		(0x0001540000100108ull)
+#define CVMX_PKO_PSE_SQ2_ECC_SBE_STS_CMB0	(0x0001540000100110ull)
+#define CVMX_PKO_PSE_SQ3_BIST_STATUS		(0x0001540000180138ull)
+#define CVMX_PKO_PSE_SQ3_ECC_CTL0		(0x0001540000180100ull)
+#define CVMX_PKO_PSE_SQ3_ECC_DBE_STS0		(0x0001540000180118ull)
+#define CVMX_PKO_PSE_SQ3_ECC_DBE_STS_CMB0	(0x0001540000180120ull)
+#define CVMX_PKO_PSE_SQ3_ECC_SBE_STS0		(0x0001540000180108ull)
+#define CVMX_PKO_PSE_SQ3_ECC_SBE_STS_CMB0	(0x0001540000180110ull)
+#define CVMX_PKO_PSE_SQ4_BIST_STATUS		(0x0001540000200138ull)
+#define CVMX_PKO_PSE_SQ4_ECC_CTL0		(0x0001540000200100ull)
+#define CVMX_PKO_PSE_SQ4_ECC_DBE_STS0		(0x0001540000200118ull)
+#define CVMX_PKO_PSE_SQ4_ECC_DBE_STS_CMB0	(0x0001540000200120ull)
+#define CVMX_PKO_PSE_SQ4_ECC_SBE_STS0		(0x0001540000200108ull)
+#define CVMX_PKO_PSE_SQ4_ECC_SBE_STS_CMB0	(0x0001540000200110ull)
+#define CVMX_PKO_PSE_SQ5_BIST_STATUS		(0x0001540000280138ull)
+#define CVMX_PKO_PSE_SQ5_ECC_CTL0		(0x0001540000280100ull)
+#define CVMX_PKO_PSE_SQ5_ECC_DBE_STS0		(0x0001540000280118ull)
+#define CVMX_PKO_PSE_SQ5_ECC_DBE_STS_CMB0	(0x0001540000280120ull)
+#define CVMX_PKO_PSE_SQ5_ECC_SBE_STS0		(0x0001540000280108ull)
+#define CVMX_PKO_PSE_SQ5_ECC_SBE_STS_CMB0	(0x0001540000280110ull)
+#define CVMX_PKO_PTFX_STATUS(offset)		(0x0001540000900100ull + ((offset) & 31) * 8)
+#define CVMX_PKO_PTF_IOBP_CFG			(0x0001540000900300ull)
+#define CVMX_PKO_PTGFX_CFG(offset)		(0x0001540000900200ull + ((offset) & 7) * 8)
+#define CVMX_PKO_REG_BIST_RESULT		(0x0001180050000080ull)
+#define CVMX_PKO_REG_CMD_BUF			(0x0001180050000010ull)
+#define CVMX_PKO_REG_CRC_CTLX(offset)		(0x0001180050000028ull + ((offset) & 1) * 8)
+#define CVMX_PKO_REG_CRC_ENABLE			(0x0001180050000020ull)
+#define CVMX_PKO_REG_CRC_IVX(offset)		(0x0001180050000038ull + ((offset) & 1) * 8)
+#define CVMX_PKO_REG_DEBUG0			(0x0001180050000098ull)
+#define CVMX_PKO_REG_DEBUG1			(0x00011800500000A0ull)
+#define CVMX_PKO_REG_DEBUG2			(0x00011800500000A8ull)
+#define CVMX_PKO_REG_DEBUG3			(0x00011800500000B0ull)
+#define CVMX_PKO_REG_DEBUG4			(0x00011800500000B8ull)
+#define CVMX_PKO_REG_ENGINE_INFLIGHT		(0x0001180050000050ull)
+#define CVMX_PKO_REG_ENGINE_INFLIGHT1		(0x0001180050000318ull)
+#define CVMX_PKO_REG_ENGINE_STORAGEX(offset)	(0x0001180050000300ull + ((offset) & 1) * 8)
+#define CVMX_PKO_REG_ENGINE_THRESH		(0x0001180050000058ull)
+#define CVMX_PKO_REG_ERROR			(0x0001180050000088ull)
+#define CVMX_PKO_REG_FLAGS			(0x0001180050000000ull)
+#define CVMX_PKO_REG_GMX_PORT_MODE		(0x0001180050000018ull)
+#define CVMX_PKO_REG_INT_MASK			(0x0001180050000090ull)
+#define CVMX_PKO_REG_LOOPBACK_BPID		(0x0001180050000118ull)
+#define CVMX_PKO_REG_LOOPBACK_PKIND		(0x0001180050000068ull)
+#define CVMX_PKO_REG_MIN_PKT			(0x0001180050000070ull)
+#define CVMX_PKO_REG_PREEMPT			(0x0001180050000110ull)
+#define CVMX_PKO_REG_QUEUE_MODE			(0x0001180050000048ull)
+#define CVMX_PKO_REG_QUEUE_PREEMPT		(0x0001180050000108ull)
+#define CVMX_PKO_REG_QUEUE_PTRS1		(0x0001180050000100ull)
+#define CVMX_PKO_REG_READ_IDX			(0x0001180050000008ull)
+#define CVMX_PKO_REG_THROTTLE			(0x0001180050000078ull)
+#define CVMX_PKO_REG_TIMESTAMP			(0x0001180050000060ull)
+#define CVMX_PKO_SHAPER_CFG			(0x00015400000800F8ull)
+#define CVMX_PKO_STATE_UID_IN_USEX_RD(offset)	(0x0001540000900F00ull + ((offset) & 3) * 8)
+#define CVMX_PKO_STATUS				(0x0001540000D00000ull)
+#define CVMX_PKO_TXFX_PKT_CNT_RD(offset)	(0x0001540000900E00ull + ((offset) & 31) * 8)
+
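+/*
+ * Note on the indexed macros above (not part of the imported file):
+ * most per-queue CSRs live in a 512-byte block per queue, so each
+ * macro adds (index & mask) * 512 to a base address. For example:
+ *
+ *   CVMX_PKO_DQX_BYTES(5)
+ *     = 0x00015400000000D8ull + (5 & 1023) * 512
+ *     = 0x0001540000000AD8ull
+ *
+ * The "& mask" bounds the index rather than range-checking it.
+ */
+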
+/**
+ * cvmx_pko_channel_level
+ */
+union cvmx_pko_channel_level {
+	u64 u64;
+	struct cvmx_pko_channel_level_s {
+		u64 reserved_1_63 : 63;
+		u64 cc_level : 1;
+	} s;
+	struct cvmx_pko_channel_level_s cn73xx;
+	struct cvmx_pko_channel_level_s cn78xx;
+	struct cvmx_pko_channel_level_s cn78xxp1;
+	struct cvmx_pko_channel_level_s cnf75xx;
+};
+
+typedef union cvmx_pko_channel_level cvmx_pko_channel_level_t;
+
+/**
+ * cvmx_pko_dpfi_ena
+ */
+union cvmx_pko_dpfi_ena {
+	u64 u64;
+	struct cvmx_pko_dpfi_ena_s {
+		u64 reserved_1_63 : 63;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_dpfi_ena_s cn73xx;
+	struct cvmx_pko_dpfi_ena_s cn78xx;
+	struct cvmx_pko_dpfi_ena_s cn78xxp1;
+	struct cvmx_pko_dpfi_ena_s cnf75xx;
+};
+
+typedef union cvmx_pko_dpfi_ena cvmx_pko_dpfi_ena_t;
+
+/**
+ * cvmx_pko_dpfi_flush
+ */
+union cvmx_pko_dpfi_flush {
+	u64 u64;
+	struct cvmx_pko_dpfi_flush_s {
+		u64 reserved_1_63 : 63;
+		u64 flush_en : 1;
+	} s;
+	struct cvmx_pko_dpfi_flush_s cn73xx;
+	struct cvmx_pko_dpfi_flush_s cn78xx;
+	struct cvmx_pko_dpfi_flush_s cn78xxp1;
+	struct cvmx_pko_dpfi_flush_s cnf75xx;
+};
+
+typedef union cvmx_pko_dpfi_flush cvmx_pko_dpfi_flush_t;
+
+/**
+ * cvmx_pko_dpfi_fpa_aura
+ */
+union cvmx_pko_dpfi_fpa_aura {
+	u64 u64;
+	struct cvmx_pko_dpfi_fpa_aura_s {
+		u64 reserved_12_63 : 52;
+		u64 node : 2;
+		u64 laura : 10;
+	} s;
+	struct cvmx_pko_dpfi_fpa_aura_s cn73xx;
+	struct cvmx_pko_dpfi_fpa_aura_s cn78xx;
+	struct cvmx_pko_dpfi_fpa_aura_s cn78xxp1;
+	struct cvmx_pko_dpfi_fpa_aura_s cnf75xx;
+};
+
+typedef union cvmx_pko_dpfi_fpa_aura cvmx_pko_dpfi_fpa_aura_t;
+
+/**
+ * cvmx_pko_dpfi_status
+ */
+union cvmx_pko_dpfi_status {
+	u64 u64;
+	struct cvmx_pko_dpfi_status_s {
+		u64 ptr_cnt : 32;
+		u64 reserved_27_31 : 5;
+		u64 xpd_fif_cnt : 4;
+		u64 dalc_fif_cnt : 4;
+		u64 alc_fif_cnt : 5;
+		u64 reserved_13_13 : 1;
+		u64 isrd_ptr1_rtn_full : 1;
+		u64 isrd_ptr0_rtn_full : 1;
+		u64 isrm_ptr1_rtn_full : 1;
+		u64 isrm_ptr0_rtn_full : 1;
+		u64 isrd_ptr1_val : 1;
+		u64 isrd_ptr0_val : 1;
+		u64 isrm_ptr1_val : 1;
+		u64 isrm_ptr0_val : 1;
+		u64 ptr_req_pend : 1;
+		u64 ptr_rtn_pend : 1;
+		u64 fpa_empty : 1;
+		u64 dpfi_empty : 1;
+		u64 cache_flushed : 1;
+	} s;
+	struct cvmx_pko_dpfi_status_s cn73xx;
+	struct cvmx_pko_dpfi_status_s cn78xx;
+	struct cvmx_pko_dpfi_status_cn78xxp1 {
+		u64 ptr_cnt : 32;
+		u64 reserved_13_31 : 19;
+		u64 isrd_ptr1_rtn_full : 1;
+		u64 isrd_ptr0_rtn_full : 1;
+		u64 isrm_ptr1_rtn_full : 1;
+		u64 isrm_ptr0_rtn_full : 1;
+		u64 isrd_ptr1_val : 1;
+		u64 isrd_ptr0_val : 1;
+		u64 isrm_ptr1_val : 1;
+		u64 isrm_ptr0_val : 1;
+		u64 ptr_req_pend : 1;
+		u64 ptr_rtn_pend : 1;
+		u64 fpa_empty : 1;
+		u64 dpfi_empty : 1;
+		u64 cache_flushed : 1;
+	} cn78xxp1;
+	struct cvmx_pko_dpfi_status_s cnf75xx;
+};
+
+typedef union cvmx_pko_dpfi_status cvmx_pko_dpfi_status_t;
+
+/**
+ * cvmx_pko_dq#_bytes
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_BYTES.
+ *
+ */
+union cvmx_pko_dqx_bytes {
+	u64 u64;
+	struct cvmx_pko_dqx_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_dqx_bytes_s cn73xx;
+	struct cvmx_pko_dqx_bytes_s cn78xx;
+	struct cvmx_pko_dqx_bytes_s cn78xxp1;
+	struct cvmx_pko_dqx_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_bytes cvmx_pko_dqx_bytes_t;
+
+/**
+ * cvmx_pko_dq#_cir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_dqx_cir {
+	u64 u64;
+	struct cvmx_pko_dqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_dqx_cir_s cn73xx;
+	struct cvmx_pko_dqx_cir_s cn78xx;
+	struct cvmx_pko_dqx_cir_s cn78xxp1;
+	struct cvmx_pko_dqx_cir_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_cir cvmx_pko_dqx_cir_t;
+
+/**
+ * cvmx_pko_dq#_dropped_bytes
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_BYTES.
+ *
+ */
+union cvmx_pko_dqx_dropped_bytes {
+	u64 u64;
+	struct cvmx_pko_dqx_dropped_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_dqx_dropped_bytes_s cn73xx;
+	struct cvmx_pko_dqx_dropped_bytes_s cn78xx;
+	struct cvmx_pko_dqx_dropped_bytes_s cn78xxp1;
+	struct cvmx_pko_dqx_dropped_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_dropped_bytes cvmx_pko_dqx_dropped_bytes_t;
+
+/**
+ * cvmx_pko_dq#_dropped_packets
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_PACKETS.
+ *
+ */
+union cvmx_pko_dqx_dropped_packets {
+	u64 u64;
+	struct cvmx_pko_dqx_dropped_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_dqx_dropped_packets_s cn73xx;
+	struct cvmx_pko_dqx_dropped_packets_s cn78xx;
+	struct cvmx_pko_dqx_dropped_packets_s cn78xxp1;
+	struct cvmx_pko_dqx_dropped_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_dropped_packets cvmx_pko_dqx_dropped_packets_t;
+
+/**
+ * cvmx_pko_dq#_fifo
+ */
+union cvmx_pko_dqx_fifo {
+	u64 u64;
+	struct cvmx_pko_dqx_fifo_s {
+		u64 reserved_15_63 : 49;
+		u64 p_con : 1;
+		u64 head : 7;
+		u64 tail : 7;
+	} s;
+	struct cvmx_pko_dqx_fifo_s cn73xx;
+	struct cvmx_pko_dqx_fifo_s cn78xx;
+	struct cvmx_pko_dqx_fifo_s cn78xxp1;
+	struct cvmx_pko_dqx_fifo_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_fifo cvmx_pko_dqx_fifo_t;
+
+/**
+ * cvmx_pko_dq#_packets
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_PACKETS.
+ *
+ */
+union cvmx_pko_dqx_packets {
+	u64 u64;
+	struct cvmx_pko_dqx_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_dqx_packets_s cn73xx;
+	struct cvmx_pko_dqx_packets_s cn78xx;
+	struct cvmx_pko_dqx_packets_s cn78xxp1;
+	struct cvmx_pko_dqx_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_packets cvmx_pko_dqx_packets_t;
+
+/**
+ * cvmx_pko_dq#_pick
+ *
+ * This CSR contains the meta for the DQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_dqx_pick {
+	u64 u64;
+	struct cvmx_pko_dqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_dqx_pick_s cn73xx;
+	struct cvmx_pko_dqx_pick_s cn78xx;
+	struct cvmx_pko_dqx_pick_s cn78xxp1;
+	struct cvmx_pko_dqx_pick_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_pick cvmx_pko_dqx_pick_t;
+
+/**
+ * cvmx_pko_dq#_pir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_dqx_pir {
+	u64 u64;
+	struct cvmx_pko_dqx_pir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_dqx_pir_s cn73xx;
+	struct cvmx_pko_dqx_pir_s cn78xx;
+	struct cvmx_pko_dqx_pir_s cn78xxp1;
+	struct cvmx_pko_dqx_pir_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_pir cvmx_pko_dqx_pir_t;
+
+/**
+ * cvmx_pko_dq#_pointers
+ *
+ * This register has the same bit fields as PKO_L3_SQ(0..255)_POINTERS.
+ *
+ */
+union cvmx_pko_dqx_pointers {
+	u64 u64;
+	struct cvmx_pko_dqx_pointers_s {
+		u64 reserved_26_63 : 38;
+		u64 prev : 10;
+		u64 reserved_10_15 : 6;
+		u64 next : 10;
+	} s;
+	struct cvmx_pko_dqx_pointers_cn73xx {
+		u64 reserved_24_63 : 40;
+		u64 prev : 8;
+		u64 reserved_8_15 : 8;
+		u64 next : 8;
+	} cn73xx;
+	struct cvmx_pko_dqx_pointers_s cn78xx;
+	struct cvmx_pko_dqx_pointers_s cn78xxp1;
+	struct cvmx_pko_dqx_pointers_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_pointers cvmx_pko_dqx_pointers_t;
+
+/**
+ * cvmx_pko_dq#_sched_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHED_STATE.
+ *
+ */
+union cvmx_pko_dqx_sched_state {
+	u64 u64;
+	struct cvmx_pko_dqx_sched_state_s {
+		u64 reserved_25_63 : 39;
+		u64 rr_count : 25;
+	} s;
+	struct cvmx_pko_dqx_sched_state_s cn73xx;
+	struct cvmx_pko_dqx_sched_state_s cn78xx;
+	struct cvmx_pko_dqx_sched_state_s cn78xxp1;
+	struct cvmx_pko_dqx_sched_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_sched_state cvmx_pko_dqx_sched_state_t;
+
+/**
+ * cvmx_pko_dq#_schedule
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHEDULE.
+ *
+ */
+union cvmx_pko_dqx_schedule {
+	u64 u64;
+	struct cvmx_pko_dqx_schedule_s {
+		u64 reserved_28_63 : 36;
+		u64 prio : 4;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_dqx_schedule_s cn73xx;
+	struct cvmx_pko_dqx_schedule_s cn78xx;
+	struct cvmx_pko_dqx_schedule_s cn78xxp1;
+	struct cvmx_pko_dqx_schedule_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_schedule cvmx_pko_dqx_schedule_t;
+
+/**
+ * cvmx_pko_dq#_shape
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_SHAPE.
+ *
+ */
+union cvmx_pko_dqx_shape {
+	u64 u64;
+	struct cvmx_pko_dqx_shape_s {
+		u64 reserved_27_63 : 37;
+		u64 schedule_list : 2;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_dqx_shape_s cn73xx;
+	struct cvmx_pko_dqx_shape_cn78xx {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} cn78xx;
+	struct cvmx_pko_dqx_shape_cn78xx cn78xxp1;
+	struct cvmx_pko_dqx_shape_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_shape cvmx_pko_dqx_shape_t;
+
+/**
+ * cvmx_pko_dq#_shape_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SHAPE_STATE.
+ * This register must not be written during normal operation.
+ */
+union cvmx_pko_dqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_dqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color : 2;
+		u64 pir_accum : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_dqx_shape_state_s cn73xx;
+	struct cvmx_pko_dqx_shape_state_s cn78xx;
+	struct cvmx_pko_dqx_shape_state_s cn78xxp1;
+	struct cvmx_pko_dqx_shape_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_shape_state cvmx_pko_dqx_shape_state_t;
+
+/**
+ * cvmx_pko_dq#_sw_xoff
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_SW_XOFF.
+ *
+ */
+union cvmx_pko_dqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_dqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_dqx_sw_xoff_s cn73xx;
+	struct cvmx_pko_dqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_dqx_sw_xoff_s cn78xxp1;
+	struct cvmx_pko_dqx_sw_xoff_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_sw_xoff cvmx_pko_dqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_dq#_topology
+ */
+union cvmx_pko_dqx_topology {
+	u64 u64;
+	struct cvmx_pko_dqx_topology_s {
+		u64 reserved_26_63 : 38;
+		u64 parent : 10;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_pko_dqx_topology_cn73xx {
+		u64 reserved_24_63 : 40;
+		u64 parent : 8;
+		u64 reserved_0_15 : 16;
+	} cn73xx;
+	struct cvmx_pko_dqx_topology_s cn78xx;
+	struct cvmx_pko_dqx_topology_s cn78xxp1;
+	struct cvmx_pko_dqx_topology_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_topology cvmx_pko_dqx_topology_t;
+
+/**
+ * cvmx_pko_dq#_wm_buf_cnt
+ */
+union cvmx_pko_dqx_wm_buf_cnt {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_buf_cnt_s {
+		u64 reserved_36_63 : 28;
+		u64 count : 36;
+	} s;
+	struct cvmx_pko_dqx_wm_buf_cnt_s cn73xx;
+	struct cvmx_pko_dqx_wm_buf_cnt_s cn78xx;
+	struct cvmx_pko_dqx_wm_buf_cnt_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_buf_cnt_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_buf_cnt cvmx_pko_dqx_wm_buf_cnt_t;
+
+/**
+ * cvmx_pko_dq#_wm_buf_ctl
+ */
+union cvmx_pko_dqx_wm_buf_ctl {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_buf_ctl_s {
+		u64 reserved_51_63 : 13;
+		u64 enable : 1;
+		u64 reserved_49_49 : 1;
+		u64 intr : 1;
+		u64 reserved_36_47 : 12;
+		u64 threshold : 36;
+	} s;
+	struct cvmx_pko_dqx_wm_buf_ctl_s cn73xx;
+	struct cvmx_pko_dqx_wm_buf_ctl_s cn78xx;
+	struct cvmx_pko_dqx_wm_buf_ctl_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_buf_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_buf_ctl cvmx_pko_dqx_wm_buf_ctl_t;
+
+/**
+ * cvmx_pko_dq#_wm_buf_ctl_w1c
+ */
+union cvmx_pko_dqx_wm_buf_ctl_w1c {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_buf_ctl_w1c_s {
+		u64 reserved_49_63 : 15;
+		u64 intr : 1;
+		u64 reserved_0_47 : 48;
+	} s;
+	struct cvmx_pko_dqx_wm_buf_ctl_w1c_s cn73xx;
+	struct cvmx_pko_dqx_wm_buf_ctl_w1c_s cn78xx;
+	struct cvmx_pko_dqx_wm_buf_ctl_w1c_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_buf_ctl_w1c_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_buf_ctl_w1c cvmx_pko_dqx_wm_buf_ctl_w1c_t;
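+
+/*
+ * Illustrative sketch of the write-one-to-clear (W1C) convention this
+ * alias follows: acknowledging the watermark-buffer interrupt by
+ * writing 1 to INTR. The csr_wr() accessor and the
+ * CVMX_PKO_DQX_WM_BUF_CTL_W1C() address macro are assumed to be
+ * provided elsewhere in these headers; the helper name is for
+ * illustration only.
+ */
+static inline void cvmx_pko_dq_wm_buf_intr_ack(unsigned int dq)
+{
+	cvmx_pko_dqx_wm_buf_ctl_w1c_t w1c;
+
+	w1c.u64 = 0;
+	w1c.s.intr = 1;		/* writing 1 clears the latched interrupt */
+	csr_wr(CVMX_PKO_DQX_WM_BUF_CTL_W1C(dq), w1c.u64);
+}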
+
+/**
+ * cvmx_pko_dq#_wm_cnt
+ */
+union cvmx_pko_dqx_wm_cnt {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_cnt_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_dqx_wm_cnt_s cn73xx;
+	struct cvmx_pko_dqx_wm_cnt_s cn78xx;
+	struct cvmx_pko_dqx_wm_cnt_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_cnt_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_cnt cvmx_pko_dqx_wm_cnt_t;
+
+/**
+ * cvmx_pko_dq#_wm_ctl
+ */
+union cvmx_pko_dqx_wm_ctl {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_ctl_s {
+		u64 reserved_52_63 : 12;
+		u64 ncb_query_rsp : 1;
+		u64 enable : 1;
+		u64 kind : 1;
+		u64 intr : 1;
+		u64 threshold : 48;
+	} s;
+	struct cvmx_pko_dqx_wm_ctl_s cn73xx;
+	struct cvmx_pko_dqx_wm_ctl_s cn78xx;
+	struct cvmx_pko_dqx_wm_ctl_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_ctl cvmx_pko_dqx_wm_ctl_t;
+
+/**
+ * cvmx_pko_dq#_wm_ctl_w1c
+ */
+union cvmx_pko_dqx_wm_ctl_w1c {
+	u64 u64;
+	struct cvmx_pko_dqx_wm_ctl_w1c_s {
+		u64 reserved_49_63 : 15;
+		u64 intr : 1;
+		u64 reserved_0_47 : 48;
+	} s;
+	struct cvmx_pko_dqx_wm_ctl_w1c_s cn73xx;
+	struct cvmx_pko_dqx_wm_ctl_w1c_s cn78xx;
+	struct cvmx_pko_dqx_wm_ctl_w1c_s cn78xxp1;
+	struct cvmx_pko_dqx_wm_ctl_w1c_s cnf75xx;
+};
+
+typedef union cvmx_pko_dqx_wm_ctl_w1c cvmx_pko_dqx_wm_ctl_w1c_t;
+
+/**
+ * cvmx_pko_dq_csr_bus_debug
+ */
+union cvmx_pko_dq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_dq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_dq_csr_bus_debug_s cn73xx;
+	struct cvmx_pko_dq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_dq_csr_bus_debug_s cn78xxp1;
+	struct cvmx_pko_dq_csr_bus_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_dq_csr_bus_debug cvmx_pko_dq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_dq_debug
+ */
+union cvmx_pko_dq_debug {
+	u64 u64;
+	struct cvmx_pko_dq_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_dq_debug_s cn73xx;
+	struct cvmx_pko_dq_debug_s cn78xx;
+	struct cvmx_pko_dq_debug_s cn78xxp1;
+	struct cvmx_pko_dq_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_dq_debug cvmx_pko_dq_debug_t;
+
+/**
+ * cvmx_pko_drain_irq
+ */
+union cvmx_pko_drain_irq {
+	u64 u64;
+	struct cvmx_pko_drain_irq_s {
+		u64 reserved_1_63 : 63;
+		u64 intr : 1;
+	} s;
+	struct cvmx_pko_drain_irq_s cn73xx;
+	struct cvmx_pko_drain_irq_s cn78xx;
+	struct cvmx_pko_drain_irq_s cn78xxp1;
+	struct cvmx_pko_drain_irq_s cnf75xx;
+};
+
+typedef union cvmx_pko_drain_irq cvmx_pko_drain_irq_t;
+
+/**
+ * cvmx_pko_enable
+ */
+union cvmx_pko_enable {
+	u64 u64;
+	struct cvmx_pko_enable_s {
+		u64 reserved_1_63 : 63;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_enable_s cn73xx;
+	struct cvmx_pko_enable_s cn78xx;
+	struct cvmx_pko_enable_s cn78xxp1;
+	struct cvmx_pko_enable_s cnf75xx;
+};
+
+typedef union cvmx_pko_enable cvmx_pko_enable_t;
+
+/**
+ * cvmx_pko_format#_ctl
+ *
+ * Describes packet marking calculations for YELLOW and for RED_SEND packets.
+ * PKO_SEND_HDR_S[FORMAT] selects the CSR used for the packet descriptor.
+ *
+ * All the packet marking calculations assume big-endian bits within a byte.
+ *
+ * For example, if MARKPTR is 3 and [OFFSET] is 5 and the packet is YELLOW,
+ * the PKO marking hardware would do this:
+ *
+ * _  byte[3]<2:0> |=   Y_VAL<3:1>
+ * _  byte[3]<2:0> &= ~Y_MASK<3:1>
+ * _  byte[4]<7>   |=   Y_VAL<0>
+ * _  byte[4]<7>   &= ~Y_MASK<0>
+ *
+ * where byte[3] is the 3rd byte in the packet, and byte[4] the 4th.
+ *
+ * For another example, if MARKPTR is 3 and [OFFSET] is 0 and the packet is RED_SEND,
+ *
+ * _   byte[3]<7:4> |=   R_VAL<3:0>
+ * _   byte[3]<7:4> &= ~R_MASK<3:0>
+ */
+union cvmx_pko_formatx_ctl {
+	u64 u64;
+	struct cvmx_pko_formatx_ctl_s {
+		u64 reserved_27_63 : 37;
+		u64 offset : 11;
+		u64 y_mask : 4;
+		u64 y_val : 4;
+		u64 r_mask : 4;
+		u64 r_val : 4;
+	} s;
+	struct cvmx_pko_formatx_ctl_s cn73xx;
+	struct cvmx_pko_formatx_ctl_s cn78xx;
+	struct cvmx_pko_formatx_ctl_s cn78xxp1;
+	struct cvmx_pko_formatx_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pko_formatx_ctl cvmx_pko_formatx_ctl_t;
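+
+/*
+ * Illustrative sketch of the marking arithmetic described in the
+ * comment above: a 4-bit VAL/MASK pair applied at a big-endian bit
+ * offset relative to the mark pointer, possibly spanning a byte
+ * boundary. The helper name and the byte-buffer view of the packet
+ * are assumptions for illustration, not an API from this header.
+ */
+static inline void cvmx_pko_apply_mark(u8 *pkt, unsigned int markptr,
+				       unsigned int offset, u8 val, u8 mask)
+{
+	unsigned int i;
+
+	for (i = 0; i < 4; i++) {
+		/* Bit (offset + i) of the stream, big-endian within a byte */
+		unsigned int byte = markptr + (offset + i) / 8;
+		unsigned int shift = 7 - ((offset + i) % 8);
+		/* VAL/MASK bits are consumed MSB first, as in the examples */
+		unsigned int src = 3 - i;
+
+		pkt[byte] |= ((val >> src) & 1) << shift;
+		pkt[byte] &= ~(((mask >> src) & 1) << shift);
+	}
+}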
+
+/**
+ * cvmx_pko_l1_sq#_cir
+ */
+union cvmx_pko_l1_sqx_cir {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l1_sqx_cir_s cn73xx;
+	struct cvmx_pko_l1_sqx_cir_s cn78xx;
+	struct cvmx_pko_l1_sqx_cir_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_cir_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_cir cvmx_pko_l1_sqx_cir_t;
+
+/**
+ * cvmx_pko_l1_sq#_dropped_bytes
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_BYTES.
+ *
+ */
+union cvmx_pko_l1_sqx_dropped_bytes {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_dropped_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_l1_sqx_dropped_bytes_s cn73xx;
+	struct cvmx_pko_l1_sqx_dropped_bytes_s cn78xx;
+	struct cvmx_pko_l1_sqx_dropped_bytes_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_dropped_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_dropped_bytes cvmx_pko_l1_sqx_dropped_bytes_t;
+
+/**
+ * cvmx_pko_l1_sq#_dropped_packets
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_PACKETS.
+ *
+ */
+union cvmx_pko_l1_sqx_dropped_packets {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_dropped_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_l1_sqx_dropped_packets_s cn73xx;
+	struct cvmx_pko_l1_sqx_dropped_packets_s cn78xx;
+	struct cvmx_pko_l1_sqx_dropped_packets_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_dropped_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_dropped_packets cvmx_pko_l1_sqx_dropped_packets_t;
+
+/**
+ * cvmx_pko_l1_sq#_green
+ */
+union cvmx_pko_l1_sqx_green {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_green_s {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 reserved_19_19 : 1;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l1_sqx_green_s cn73xx;
+	struct cvmx_pko_l1_sqx_green_s cn78xx;
+	struct cvmx_pko_l1_sqx_green_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_green_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_green cvmx_pko_l1_sqx_green_t;
+
+/**
+ * cvmx_pko_l1_sq#_green_bytes
+ */
+union cvmx_pko_l1_sqx_green_bytes {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_green_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_l1_sqx_green_bytes_s cn73xx;
+	struct cvmx_pko_l1_sqx_green_bytes_s cn78xx;
+	struct cvmx_pko_l1_sqx_green_bytes_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_green_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_green_bytes cvmx_pko_l1_sqx_green_bytes_t;
+
+/**
+ * cvmx_pko_l1_sq#_green_packets
+ */
+union cvmx_pko_l1_sqx_green_packets {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_green_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_l1_sqx_green_packets_s cn73xx;
+	struct cvmx_pko_l1_sqx_green_packets_s cn78xx;
+	struct cvmx_pko_l1_sqx_green_packets_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_green_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_green_packets cvmx_pko_l1_sqx_green_packets_t;
+
+/**
+ * cvmx_pko_l1_sq#_link
+ */
+union cvmx_pko_l1_sqx_link {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_link_s {
+		u64 reserved_49_63 : 15;
+		u64 link : 5;
+		u64 reserved_32_43 : 12;
+		u64 cc_word_cnt : 20;
+		u64 cc_packet_cnt : 10;
+		u64 cc_enable : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l1_sqx_link_cn73xx {
+		u64 reserved_48_63 : 16;
+		u64 link : 4;
+		u64 reserved_32_43 : 12;
+		u64 cc_word_cnt : 20;
+		u64 cc_packet_cnt : 10;
+		u64 cc_enable : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_l1_sqx_link_s cn78xx;
+	struct cvmx_pko_l1_sqx_link_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_link_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_link cvmx_pko_l1_sqx_link_t;
+
+/**
+ * cvmx_pko_l1_sq#_pick
+ *
+ * This CSR contains the meta for the L1 SQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_l1_sqx_pick {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_l1_sqx_pick_s cn73xx;
+	struct cvmx_pko_l1_sqx_pick_s cn78xx;
+	struct cvmx_pko_l1_sqx_pick_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_pick_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_pick cvmx_pko_l1_sqx_pick_t;
+
+/**
+ * cvmx_pko_l1_sq#_red
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l1_sqx_red {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_red_s {
+		u64 reserved_19_63 : 45;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l1_sqx_red_s cn73xx;
+	struct cvmx_pko_l1_sqx_red_s cn78xx;
+	struct cvmx_pko_l1_sqx_red_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_red_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_red cvmx_pko_l1_sqx_red_t;
+
+/**
+ * cvmx_pko_l1_sq#_red_bytes
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_BYTES.
+ *
+ */
+union cvmx_pko_l1_sqx_red_bytes {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_red_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_l1_sqx_red_bytes_s cn73xx;
+	struct cvmx_pko_l1_sqx_red_bytes_s cn78xx;
+	struct cvmx_pko_l1_sqx_red_bytes_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_red_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_red_bytes cvmx_pko_l1_sqx_red_bytes_t;
+
+/**
+ * cvmx_pko_l1_sq#_red_packets
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_PACKETS.
+ *
+ */
+union cvmx_pko_l1_sqx_red_packets {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_red_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_l1_sqx_red_packets_s cn73xx;
+	struct cvmx_pko_l1_sqx_red_packets_s cn78xx;
+	struct cvmx_pko_l1_sqx_red_packets_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_red_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_red_packets cvmx_pko_l1_sqx_red_packets_t;
+
+/**
+ * cvmx_pko_l1_sq#_schedule
+ */
+union cvmx_pko_l1_sqx_schedule {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_schedule_s {
+		u64 dummy : 40;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_l1_sqx_schedule_cn73xx {
+		u64 reserved_24_63 : 40;
+		u64 rr_quantum : 24;
+	} cn73xx;
+	struct cvmx_pko_l1_sqx_schedule_cn73xx cn78xx;
+	struct cvmx_pko_l1_sqx_schedule_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_schedule_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_schedule cvmx_pko_l1_sqx_schedule_t;
+
+/**
+ * cvmx_pko_l1_sq#_shape
+ */
+union cvmx_pko_l1_sqx_shape {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_shape_s {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_18_23 : 6;
+		u64 link : 5;
+		u64 reserved_9_12 : 4;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_l1_sqx_shape_cn73xx {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_17_23 : 7;
+		u64 link : 4;
+		u64 reserved_9_12 : 4;
+		u64 adjust : 9;
+	} cn73xx;
+	struct cvmx_pko_l1_sqx_shape_s cn78xx;
+	struct cvmx_pko_l1_sqx_shape_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_shape_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_shape cvmx_pko_l1_sqx_shape_t;
+
+/**
+ * cvmx_pko_l1_sq#_shape_state
+ *
+ * This register must not be written during normal operation.
+ *
+ */
+union cvmx_pko_l1_sqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color2 : 1;
+		u64 color : 1;
+		u64 reserved_26_51 : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_l1_sqx_shape_state_cn73xx {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 reserved_53_53 : 1;
+		u64 color : 1;
+		u64 reserved_26_51 : 26;
+		u64 cir_accum : 26;
+	} cn73xx;
+	struct cvmx_pko_l1_sqx_shape_state_cn73xx cn78xx;
+	struct cvmx_pko_l1_sqx_shape_state_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_shape_state_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_shape_state cvmx_pko_l1_sqx_shape_state_t;
+
+/**
+ * cvmx_pko_l1_sq#_sw_xoff
+ */
+union cvmx_pko_l1_sqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_l1_sqx_sw_xoff_s cn73xx;
+	struct cvmx_pko_l1_sqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_l1_sqx_sw_xoff_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_sw_xoff_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_sw_xoff cvmx_pko_l1_sqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_l1_sq#_topology
+ */
+union cvmx_pko_l1_sqx_topology {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_topology_s {
+		u64 reserved_41_63 : 23;
+		u64 prio_anchor : 9;
+		u64 reserved_21_31 : 11;
+		u64 link : 5;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l1_sqx_topology_cn73xx {
+		u64 reserved_40_63 : 24;
+		u64 prio_anchor : 8;
+		u64 reserved_20_31 : 12;
+		u64 link : 4;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_l1_sqx_topology_s cn78xx;
+	struct cvmx_pko_l1_sqx_topology_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_topology_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_topology cvmx_pko_l1_sqx_topology_t;
+
+/**
+ * cvmx_pko_l1_sq#_yellow
+ */
+union cvmx_pko_l1_sqx_yellow {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_yellow_s {
+		u64 reserved_19_63 : 45;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l1_sqx_yellow_s cn73xx;
+	struct cvmx_pko_l1_sqx_yellow_s cn78xx;
+	struct cvmx_pko_l1_sqx_yellow_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_yellow_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_yellow cvmx_pko_l1_sqx_yellow_t;
+
+/**
+ * cvmx_pko_l1_sq#_yellow_bytes
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_BYTES.
+ *
+ */
+union cvmx_pko_l1_sqx_yellow_bytes {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_yellow_bytes_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_l1_sqx_yellow_bytes_s cn73xx;
+	struct cvmx_pko_l1_sqx_yellow_bytes_s cn78xx;
+	struct cvmx_pko_l1_sqx_yellow_bytes_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_yellow_bytes_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_yellow_bytes cvmx_pko_l1_sqx_yellow_bytes_t;
+
+/**
+ * cvmx_pko_l1_sq#_yellow_packets
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN_PACKETS.
+ *
+ */
+union cvmx_pko_l1_sqx_yellow_packets {
+	u64 u64;
+	struct cvmx_pko_l1_sqx_yellow_packets_s {
+		u64 reserved_40_63 : 24;
+		u64 count : 40;
+	} s;
+	struct cvmx_pko_l1_sqx_yellow_packets_s cn73xx;
+	struct cvmx_pko_l1_sqx_yellow_packets_s cn78xx;
+	struct cvmx_pko_l1_sqx_yellow_packets_s cn78xxp1;
+	struct cvmx_pko_l1_sqx_yellow_packets_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqx_yellow_packets cvmx_pko_l1_sqx_yellow_packets_t;
+
+/**
+ * cvmx_pko_l1_sq_csr_bus_debug
+ */
+union cvmx_pko_l1_sq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_l1_sq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_l1_sq_csr_bus_debug_s cn73xx;
+	struct cvmx_pko_l1_sq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_l1_sq_csr_bus_debug_s cn78xxp1;
+	struct cvmx_pko_l1_sq_csr_bus_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sq_csr_bus_debug cvmx_pko_l1_sq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_l1_sqa_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l1_sqa_debug {
+	u64 u64;
+	struct cvmx_pko_l1_sqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l1_sqa_debug_s cn73xx;
+	struct cvmx_pko_l1_sqa_debug_s cn78xx;
+	struct cvmx_pko_l1_sqa_debug_s cn78xxp1;
+	struct cvmx_pko_l1_sqa_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqa_debug cvmx_pko_l1_sqa_debug_t;
+
+/**
+ * cvmx_pko_l1_sqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l1_sqb_debug {
+	u64 u64;
+	struct cvmx_pko_l1_sqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l1_sqb_debug_s cn73xx;
+	struct cvmx_pko_l1_sqb_debug_s cn78xx;
+	struct cvmx_pko_l1_sqb_debug_s cn78xxp1;
+	struct cvmx_pko_l1_sqb_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l1_sqb_debug cvmx_pko_l1_sqb_debug_t;
+
+/**
+ * cvmx_pko_l2_sq#_cir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l2_sqx_cir {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l2_sqx_cir_s cn73xx;
+	struct cvmx_pko_l2_sqx_cir_s cn78xx;
+	struct cvmx_pko_l2_sqx_cir_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_cir_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_cir cvmx_pko_l2_sqx_cir_t;
+
+/**
+ * cvmx_pko_l2_sq#_green
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_GREEN.
+ *
+ */
+union cvmx_pko_l2_sqx_green {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_green_s {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 reserved_19_19 : 1;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l2_sqx_green_s cn73xx;
+	struct cvmx_pko_l2_sqx_green_s cn78xx;
+	struct cvmx_pko_l2_sqx_green_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_green_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_green cvmx_pko_l2_sqx_green_t;
+
+/**
+ * cvmx_pko_l2_sq#_pick
+ *
+ * This CSR contains the meta for the L2 SQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_l2_sqx_pick {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_l2_sqx_pick_s cn73xx;
+	struct cvmx_pko_l2_sqx_pick_s cn78xx;
+	struct cvmx_pko_l2_sqx_pick_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_pick_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_pick cvmx_pko_l2_sqx_pick_t;
+
+/**
+ * cvmx_pko_l2_sq#_pir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l2_sqx_pir {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_pir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l2_sqx_pir_s cn73xx;
+	struct cvmx_pko_l2_sqx_pir_s cn78xx;
+	struct cvmx_pko_l2_sqx_pir_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_pir_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_pir cvmx_pko_l2_sqx_pir_t;
+
+/**
+ * cvmx_pko_l2_sq#_pointers
+ */
+union cvmx_pko_l2_sqx_pointers {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_pointers_s {
+		u64 reserved_25_63 : 39;
+		u64 prev : 9;
+		u64 reserved_9_15 : 7;
+		u64 next : 9;
+	} s;
+	struct cvmx_pko_l2_sqx_pointers_cn73xx {
+		u64 reserved_24_63 : 40;
+		u64 prev : 8;
+		u64 reserved_8_15 : 8;
+		u64 next : 8;
+	} cn73xx;
+	struct cvmx_pko_l2_sqx_pointers_s cn78xx;
+	struct cvmx_pko_l2_sqx_pointers_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_pointers_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_pointers cvmx_pko_l2_sqx_pointers_t;
+
+/**
+ * cvmx_pko_l2_sq#_red
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_RED.
+ *
+ */
+union cvmx_pko_l2_sqx_red {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_red_s {
+		u64 reserved_19_63 : 45;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l2_sqx_red_s cn73xx;
+	struct cvmx_pko_l2_sqx_red_s cn78xx;
+	struct cvmx_pko_l2_sqx_red_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_red_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_red cvmx_pko_l2_sqx_red_t;
+
+/**
+ * cvmx_pko_l2_sq#_sched_state
+ */
+union cvmx_pko_l2_sqx_sched_state {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_sched_state_s {
+		u64 reserved_25_63 : 39;
+		u64 rr_count : 25;
+	} s;
+	struct cvmx_pko_l2_sqx_sched_state_s cn73xx;
+	struct cvmx_pko_l2_sqx_sched_state_s cn78xx;
+	struct cvmx_pko_l2_sqx_sched_state_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_sched_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_sched_state cvmx_pko_l2_sqx_sched_state_t;
+
+/**
+ * cvmx_pko_l2_sq#_schedule
+ */
+union cvmx_pko_l2_sqx_schedule {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_schedule_s {
+		u64 reserved_28_63 : 36;
+		u64 prio : 4;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_l2_sqx_schedule_s cn73xx;
+	struct cvmx_pko_l2_sqx_schedule_s cn78xx;
+	struct cvmx_pko_l2_sqx_schedule_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_schedule_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_schedule cvmx_pko_l2_sqx_schedule_t;
+
+/**
+ * cvmx_pko_l2_sq#_shape
+ */
+union cvmx_pko_l2_sqx_shape {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_shape_s {
+		u64 reserved_27_63 : 37;
+		u64 schedule_list : 2;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_l2_sqx_shape_s cn73xx;
+	struct cvmx_pko_l2_sqx_shape_cn78xx {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} cn78xx;
+	struct cvmx_pko_l2_sqx_shape_cn78xx cn78xxp1;
+	struct cvmx_pko_l2_sqx_shape_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_shape cvmx_pko_l2_sqx_shape_t;
+
+/**
+ * cvmx_pko_l2_sq#_shape_state
+ *
+ * This register must not be written during normal operation.
+ *
+ */
+union cvmx_pko_l2_sqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color : 2;
+		u64 pir_accum : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_l2_sqx_shape_state_s cn73xx;
+	struct cvmx_pko_l2_sqx_shape_state_s cn78xx;
+	struct cvmx_pko_l2_sqx_shape_state_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_shape_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_shape_state cvmx_pko_l2_sqx_shape_state_t;
+
+/**
+ * cvmx_pko_l2_sq#_sw_xoff
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_SW_XOFF.
+ *
+ */
+union cvmx_pko_l2_sqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_l2_sqx_sw_xoff_s cn73xx;
+	struct cvmx_pko_l2_sqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_l2_sqx_sw_xoff_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_sw_xoff_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_sw_xoff cvmx_pko_l2_sqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_l2_sq#_topology
+ */
+union cvmx_pko_l2_sqx_topology {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_topology_s {
+		u64 reserved_41_63 : 23;
+		u64 prio_anchor : 9;
+		u64 reserved_21_31 : 11;
+		u64 parent : 5;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l2_sqx_topology_cn73xx {
+		u64 reserved_40_63 : 24;
+		u64 prio_anchor : 8;
+		u64 reserved_20_31 : 12;
+		u64 parent : 4;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_l2_sqx_topology_s cn78xx;
+	struct cvmx_pko_l2_sqx_topology_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_topology_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_topology cvmx_pko_l2_sqx_topology_t;
+
+/**
+ * cvmx_pko_l2_sq#_yellow
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l2_sqx_yellow {
+	u64 u64;
+	struct cvmx_pko_l2_sqx_yellow_s {
+		u64 reserved_19_63 : 45;
+		u64 head : 9;
+		u64 reserved_9_9 : 1;
+		u64 tail : 9;
+	} s;
+	struct cvmx_pko_l2_sqx_yellow_s cn73xx;
+	struct cvmx_pko_l2_sqx_yellow_s cn78xx;
+	struct cvmx_pko_l2_sqx_yellow_s cn78xxp1;
+	struct cvmx_pko_l2_sqx_yellow_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqx_yellow cvmx_pko_l2_sqx_yellow_t;
+
+/**
+ * cvmx_pko_l2_sq_csr_bus_debug
+ */
+union cvmx_pko_l2_sq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_l2_sq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_l2_sq_csr_bus_debug_s cn73xx;
+	struct cvmx_pko_l2_sq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_l2_sq_csr_bus_debug_s cn78xxp1;
+	struct cvmx_pko_l2_sq_csr_bus_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sq_csr_bus_debug cvmx_pko_l2_sq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_l2_sqa_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l2_sqa_debug {
+	u64 u64;
+	struct cvmx_pko_l2_sqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l2_sqa_debug_s cn73xx;
+	struct cvmx_pko_l2_sqa_debug_s cn78xx;
+	struct cvmx_pko_l2_sqa_debug_s cn78xxp1;
+	struct cvmx_pko_l2_sqa_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqa_debug cvmx_pko_l2_sqa_debug_t;
+
+/**
+ * cvmx_pko_l2_sqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l2_sqb_debug {
+	u64 u64;
+	struct cvmx_pko_l2_sqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l2_sqb_debug_s cn73xx;
+	struct cvmx_pko_l2_sqb_debug_s cn78xx;
+	struct cvmx_pko_l2_sqb_debug_s cn78xxp1;
+	struct cvmx_pko_l2_sqb_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l2_sqb_debug cvmx_pko_l2_sqb_debug_t;
+
+/**
+ * cvmx_pko_l3_l2_sq#_channel
+ *
+ * PKO_CHANNEL_LEVEL[CC_LEVEL] determines whether this CSR array is associated
+ * with the L2 SQs or the L3 SQs.
+ */
+union cvmx_pko_l3_l2_sqx_channel {
+	u64 u64;
+	struct cvmx_pko_l3_l2_sqx_channel_s {
+		u64 reserved_44_63 : 20;
+		u64 cc_channel : 12;
+		u64 cc_word_cnt : 20;
+		u64 cc_packet_cnt : 10;
+		u64 cc_enable : 1;
+		u64 hw_xoff : 1;
+	} s;
+	struct cvmx_pko_l3_l2_sqx_channel_s cn73xx;
+	struct cvmx_pko_l3_l2_sqx_channel_s cn78xx;
+	struct cvmx_pko_l3_l2_sqx_channel_s cn78xxp1;
+	struct cvmx_pko_l3_l2_sqx_channel_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_l2_sqx_channel cvmx_pko_l3_l2_sqx_channel_t;
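+
+/*
+ * Illustrative sketch of selecting the credit level described above
+ * via PKO_CHANNEL_LEVEL[CC_LEVEL]. The CVMX_PKO_CHANNEL_LEVEL address
+ * macro and the csr_rd()/csr_wr() accessors are assumed to exist
+ * elsewhere in these headers, and the 0 = L2 SQ / 1 = L3 SQ encoding
+ * is an assumption to verify against the hardware manual.
+ */
+static inline void cvmx_pko_set_cc_level(int use_l3_sqs)
+{
+	cvmx_pko_channel_level_t lvl;
+
+	lvl.u64 = csr_rd(CVMX_PKO_CHANNEL_LEVEL);
+	lvl.s.cc_level = use_l3_sqs ? 1 : 0;
+	csr_wr(CVMX_PKO_CHANNEL_LEVEL, lvl.u64);
+}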
+
+/**
+ * cvmx_pko_l3_sq#_cir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l3_sqx_cir {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l3_sqx_cir_s cn73xx;
+	struct cvmx_pko_l3_sqx_cir_s cn78xx;
+	struct cvmx_pko_l3_sqx_cir_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_cir_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_cir cvmx_pko_l3_sqx_cir_t;
+
+/**
+ * cvmx_pko_l3_sq#_green
+ */
+union cvmx_pko_l3_sqx_green {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_green_s {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l3_sqx_green_cn73xx {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 reserved_18_19 : 2;
+		u64 head : 8;
+		u64 reserved_8_9 : 2;
+		u64 tail : 8;
+	} cn73xx;
+	struct cvmx_pko_l3_sqx_green_s cn78xx;
+	struct cvmx_pko_l3_sqx_green_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_green_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_green cvmx_pko_l3_sqx_green_t;
+
+/**
+ * cvmx_pko_l3_sq#_pick
+ *
+ * This CSR contains the meta for the L3 SQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_l3_sqx_pick {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_l3_sqx_pick_s cn73xx;
+	struct cvmx_pko_l3_sqx_pick_s cn78xx;
+	struct cvmx_pko_l3_sqx_pick_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_pick_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_pick cvmx_pko_l3_sqx_pick_t;
+
+/**
+ * cvmx_pko_l3_sq#_pir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l3_sqx_pir {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_pir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l3_sqx_pir_s cn73xx;
+	struct cvmx_pko_l3_sqx_pir_s cn78xx;
+	struct cvmx_pko_l3_sqx_pir_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_pir_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_pir cvmx_pko_l3_sqx_pir_t;
+
+/**
+ * cvmx_pko_l3_sq#_pointers
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_POINTERS.
+ *
+ */
+union cvmx_pko_l3_sqx_pointers {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_pointers_s {
+		u64 reserved_25_63 : 39;
+		u64 prev : 9;
+		u64 reserved_9_15 : 7;
+		u64 next : 9;
+	} s;
+	struct cvmx_pko_l3_sqx_pointers_cn73xx {
+		u64 reserved_24_63 : 40;
+		u64 prev : 8;
+		u64 reserved_8_15 : 8;
+		u64 next : 8;
+	} cn73xx;
+	struct cvmx_pko_l3_sqx_pointers_s cn78xx;
+	struct cvmx_pko_l3_sqx_pointers_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_pointers_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_pointers cvmx_pko_l3_sqx_pointers_t;
+
+/**
+ * cvmx_pko_l3_sq#_red
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l3_sqx_red {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_red_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l3_sqx_red_cn73xx {
+		u64 reserved_18_63 : 46;
+		u64 head : 8;
+		u64 reserved_8_9 : 2;
+		u64 tail : 8;
+	} cn73xx;
+	struct cvmx_pko_l3_sqx_red_s cn78xx;
+	struct cvmx_pko_l3_sqx_red_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_red_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_red cvmx_pko_l3_sqx_red_t;
+
+/**
+ * cvmx_pko_l3_sq#_sched_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHED_STATE.
+ *
+ */
+union cvmx_pko_l3_sqx_sched_state {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_sched_state_s {
+		u64 reserved_25_63 : 39;
+		u64 rr_count : 25;
+	} s;
+	struct cvmx_pko_l3_sqx_sched_state_s cn73xx;
+	struct cvmx_pko_l3_sqx_sched_state_s cn78xx;
+	struct cvmx_pko_l3_sqx_sched_state_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_sched_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_sched_state cvmx_pko_l3_sqx_sched_state_t;
+
+/**
+ * cvmx_pko_l3_sq#_schedule
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHEDULE.
+ *
+ */
+union cvmx_pko_l3_sqx_schedule {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_schedule_s {
+		u64 reserved_28_63 : 36;
+		u64 prio : 4;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_l3_sqx_schedule_s cn73xx;
+	struct cvmx_pko_l3_sqx_schedule_s cn78xx;
+	struct cvmx_pko_l3_sqx_schedule_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_schedule_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_schedule cvmx_pko_l3_sqx_schedule_t;
+
+/**
+ * cvmx_pko_l3_sq#_shape
+ */
+union cvmx_pko_l3_sqx_shape {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_shape_s {
+		u64 reserved_27_63 : 37;
+		u64 schedule_list : 2;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_l3_sqx_shape_s cn73xx;
+	struct cvmx_pko_l3_sqx_shape_cn78xx {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} cn78xx;
+	struct cvmx_pko_l3_sqx_shape_cn78xx cn78xxp1;
+	struct cvmx_pko_l3_sqx_shape_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_shape cvmx_pko_l3_sqx_shape_t;
+
+/**
+ * cvmx_pko_l3_sq#_shape_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SHAPE_STATE.
+ * This register must not be written during normal operation.
+ */
+union cvmx_pko_l3_sqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color : 2;
+		u64 pir_accum : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_l3_sqx_shape_state_s cn73xx;
+	struct cvmx_pko_l3_sqx_shape_state_s cn78xx;
+	struct cvmx_pko_l3_sqx_shape_state_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_shape_state_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_shape_state cvmx_pko_l3_sqx_shape_state_t;
+
+/**
+ * cvmx_pko_l3_sq#_sw_xoff
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_SW_XOFF.
+ *
+ */
+union cvmx_pko_l3_sqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_l3_sqx_sw_xoff_s cn73xx;
+	struct cvmx_pko_l3_sqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_l3_sqx_sw_xoff_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_sw_xoff_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_sw_xoff cvmx_pko_l3_sqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_l3_sq#_topology
+ */
+union cvmx_pko_l3_sqx_topology {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_topology_s {
+		u64 reserved_42_63 : 22;
+		u64 prio_anchor : 10;
+		u64 reserved_25_31 : 7;
+		u64 parent : 9;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l3_sqx_topology_cn73xx {
+		u64 reserved_40_63 : 24;
+		u64 prio_anchor : 8;
+		u64 reserved_24_31 : 8;
+		u64 parent : 8;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_l3_sqx_topology_s cn78xx;
+	struct cvmx_pko_l3_sqx_topology_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_topology_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_topology cvmx_pko_l3_sqx_topology_t;
+
+/**
+ * cvmx_pko_l3_sq#_yellow
+ */
+union cvmx_pko_l3_sqx_yellow {
+	u64 u64;
+	struct cvmx_pko_l3_sqx_yellow_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l3_sqx_yellow_cn73xx {
+		u64 reserved_18_63 : 46;
+		u64 head : 8;
+		u64 reserved_8_9 : 2;
+		u64 tail : 8;
+	} cn73xx;
+	struct cvmx_pko_l3_sqx_yellow_s cn78xx;
+	struct cvmx_pko_l3_sqx_yellow_s cn78xxp1;
+	struct cvmx_pko_l3_sqx_yellow_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqx_yellow cvmx_pko_l3_sqx_yellow_t;
+
+/**
+ * cvmx_pko_l3_sq_csr_bus_debug
+ */
+union cvmx_pko_l3_sq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_l3_sq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_l3_sq_csr_bus_debug_s cn73xx;
+	struct cvmx_pko_l3_sq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_l3_sq_csr_bus_debug_s cn78xxp1;
+	struct cvmx_pko_l3_sq_csr_bus_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sq_csr_bus_debug cvmx_pko_l3_sq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_l3_sqa_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l3_sqa_debug {
+	u64 u64;
+	struct cvmx_pko_l3_sqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l3_sqa_debug_s cn73xx;
+	struct cvmx_pko_l3_sqa_debug_s cn78xx;
+	struct cvmx_pko_l3_sqa_debug_s cn78xxp1;
+	struct cvmx_pko_l3_sqa_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqa_debug cvmx_pko_l3_sqa_debug_t;
+
+/**
+ * cvmx_pko_l3_sqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l3_sqb_debug {
+	u64 u64;
+	struct cvmx_pko_l3_sqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l3_sqb_debug_s cn73xx;
+	struct cvmx_pko_l3_sqb_debug_s cn78xx;
+	struct cvmx_pko_l3_sqb_debug_s cn78xxp1;
+	struct cvmx_pko_l3_sqb_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_l3_sqb_debug cvmx_pko_l3_sqb_debug_t;
+
+/**
+ * cvmx_pko_l4_sq#_cir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l4_sqx_cir {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l4_sqx_cir_s cn78xx;
+	struct cvmx_pko_l4_sqx_cir_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_cir cvmx_pko_l4_sqx_cir_t;
+
+/**
+ * cvmx_pko_l4_sq#_green
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_GREEN.
+ *
+ */
+union cvmx_pko_l4_sqx_green {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_green_s {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l4_sqx_green_s cn78xx;
+	struct cvmx_pko_l4_sqx_green_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_green cvmx_pko_l4_sqx_green_t;
+
+/**
+ * cvmx_pko_l4_sq#_pick
+ *
+ * This CSR contains the meta for the L4 SQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_l4_sqx_pick {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_l4_sqx_pick_s cn78xx;
+	struct cvmx_pko_l4_sqx_pick_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_pick cvmx_pko_l4_sqx_pick_t;
+
+/**
+ * cvmx_pko_l4_sq#_pir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l4_sqx_pir {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_pir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l4_sqx_pir_s cn78xx;
+	struct cvmx_pko_l4_sqx_pir_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_pir cvmx_pko_l4_sqx_pir_t;
+
+/**
+ * cvmx_pko_l4_sq#_pointers
+ */
+union cvmx_pko_l4_sqx_pointers {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_pointers_s {
+		u64 reserved_26_63 : 38;
+		u64 prev : 10;
+		u64 reserved_10_15 : 6;
+		u64 next : 10;
+	} s;
+	struct cvmx_pko_l4_sqx_pointers_s cn78xx;
+	struct cvmx_pko_l4_sqx_pointers_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_pointers cvmx_pko_l4_sqx_pointers_t;
+
+/**
+ * cvmx_pko_l4_sq#_red
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l4_sqx_red {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_red_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l4_sqx_red_s cn78xx;
+	struct cvmx_pko_l4_sqx_red_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_red cvmx_pko_l4_sqx_red_t;
+
+/**
+ * cvmx_pko_l4_sq#_sched_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHED_STATE.
+ *
+ */
+union cvmx_pko_l4_sqx_sched_state {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_sched_state_s {
+		u64 reserved_25_63 : 39;
+		u64 rr_count : 25;
+	} s;
+	struct cvmx_pko_l4_sqx_sched_state_s cn78xx;
+	struct cvmx_pko_l4_sqx_sched_state_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_sched_state cvmx_pko_l4_sqx_sched_state_t;
+
+/**
+ * cvmx_pko_l4_sq#_schedule
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHEDULE.
+ *
+ */
+union cvmx_pko_l4_sqx_schedule {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_schedule_s {
+		u64 reserved_28_63 : 36;
+		u64 prio : 4;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_l4_sqx_schedule_s cn78xx;
+	struct cvmx_pko_l4_sqx_schedule_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_schedule cvmx_pko_l4_sqx_schedule_t;
+
+/**
+ * cvmx_pko_l4_sq#_shape
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_SHAPE.
+ *
+ */
+union cvmx_pko_l4_sqx_shape {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_shape_s {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_l4_sqx_shape_s cn78xx;
+	struct cvmx_pko_l4_sqx_shape_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_shape cvmx_pko_l4_sqx_shape_t;
+
+/**
+ * cvmx_pko_l4_sq#_shape_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SHAPE_STATE.
+ * This register must not be written during normal operation.
+ */
+union cvmx_pko_l4_sqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color : 2;
+		u64 pir_accum : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_l4_sqx_shape_state_s cn78xx;
+	struct cvmx_pko_l4_sqx_shape_state_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_shape_state cvmx_pko_l4_sqx_shape_state_t;
+
+/**
+ * cvmx_pko_l4_sq#_sw_xoff
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_SW_XOFF.
+ *
+ */
+union cvmx_pko_l4_sqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_l4_sqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_l4_sqx_sw_xoff_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_sw_xoff cvmx_pko_l4_sqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_l4_sq#_topology
+ */
+union cvmx_pko_l4_sqx_topology {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_topology_s {
+		u64 reserved_42_63 : 22;
+		u64 prio_anchor : 10;
+		u64 reserved_25_31 : 7;
+		u64 parent : 9;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l4_sqx_topology_s cn78xx;
+	struct cvmx_pko_l4_sqx_topology_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_topology cvmx_pko_l4_sqx_topology_t;
+
+/**
+ * cvmx_pko_l4_sq#_yellow
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l4_sqx_yellow {
+	u64 u64;
+	struct cvmx_pko_l4_sqx_yellow_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l4_sqx_yellow_s cn78xx;
+	struct cvmx_pko_l4_sqx_yellow_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqx_yellow cvmx_pko_l4_sqx_yellow_t;
+
+/**
+ * cvmx_pko_l4_sq_csr_bus_debug
+ */
+union cvmx_pko_l4_sq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_l4_sq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_l4_sq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_l4_sq_csr_bus_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sq_csr_bus_debug cvmx_pko_l4_sq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_l4_sqa_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l4_sqa_debug {
+	u64 u64;
+	struct cvmx_pko_l4_sqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l4_sqa_debug_s cn78xx;
+	struct cvmx_pko_l4_sqa_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqa_debug cvmx_pko_l4_sqa_debug_t;
+
+/**
+ * cvmx_pko_l4_sqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l4_sqb_debug {
+	u64 u64;
+	struct cvmx_pko_l4_sqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l4_sqb_debug_s cn78xx;
+	struct cvmx_pko_l4_sqb_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l4_sqb_debug cvmx_pko_l4_sqb_debug_t;
+
+/**
+ * cvmx_pko_l5_sq#_cir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l5_sqx_cir {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_cir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l5_sqx_cir_s cn78xx;
+	struct cvmx_pko_l5_sqx_cir_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_cir cvmx_pko_l5_sqx_cir_t;
+
+/**
+ * cvmx_pko_l5_sq#_green
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_GREEN.
+ *
+ */
+union cvmx_pko_l5_sqx_green {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_green_s {
+		u64 reserved_41_63 : 23;
+		u64 rr_active : 1;
+		u64 active_vec : 20;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l5_sqx_green_s cn78xx;
+	struct cvmx_pko_l5_sqx_green_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_green cvmx_pko_l5_sqx_green_t;
+
+/**
+ * cvmx_pko_l5_sq#_pick
+ *
+ * This CSR contains the meta for the L5 SQ, and is for debug and reconfiguration
+ * only and should never be written. See also PKO_META_DESC_S.
+ */
+union cvmx_pko_l5_sqx_pick {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_pick_s {
+		u64 dq : 10;
+		u64 color : 2;
+		u64 child : 10;
+		u64 bubble : 1;
+		u64 p_con : 1;
+		u64 c_con : 1;
+		u64 uid : 7;
+		u64 jump : 1;
+		u64 fpd : 1;
+		u64 ds : 1;
+		u64 adjust : 9;
+		u64 pir_dis : 1;
+		u64 cir_dis : 1;
+		u64 red_algo_override : 2;
+		u64 length : 16;
+	} s;
+	struct cvmx_pko_l5_sqx_pick_s cn78xx;
+	struct cvmx_pko_l5_sqx_pick_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_pick cvmx_pko_l5_sqx_pick_t;
+
+/**
+ * cvmx_pko_l5_sq#_pir
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_CIR.
+ *
+ */
+union cvmx_pko_l5_sqx_pir {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_pir_s {
+		u64 reserved_41_63 : 23;
+		u64 burst_exponent : 4;
+		u64 burst_mantissa : 8;
+		u64 reserved_17_28 : 12;
+		u64 rate_divider_exponent : 4;
+		u64 rate_exponent : 4;
+		u64 rate_mantissa : 8;
+		u64 enable : 1;
+	} s;
+	struct cvmx_pko_l5_sqx_pir_s cn78xx;
+	struct cvmx_pko_l5_sqx_pir_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_pir cvmx_pko_l5_sqx_pir_t;
+
+/**
+ * cvmx_pko_l5_sq#_pointers
+ *
+ * This register has the same bit fields as PKO_L4_SQ()_POINTERS.
+ *
+ */
+union cvmx_pko_l5_sqx_pointers {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_pointers_s {
+		u64 reserved_26_63 : 38;
+		u64 prev : 10;
+		u64 reserved_10_15 : 6;
+		u64 next : 10;
+	} s;
+	struct cvmx_pko_l5_sqx_pointers_s cn78xx;
+	struct cvmx_pko_l5_sqx_pointers_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_pointers cvmx_pko_l5_sqx_pointers_t;
+
+/**
+ * cvmx_pko_l5_sq#_red
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l5_sqx_red {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_red_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l5_sqx_red_s cn78xx;
+	struct cvmx_pko_l5_sqx_red_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_red cvmx_pko_l5_sqx_red_t;
+
+/**
+ * cvmx_pko_l5_sq#_sched_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHED_STATE.
+ *
+ */
+union cvmx_pko_l5_sqx_sched_state {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_sched_state_s {
+		u64 reserved_25_63 : 39;
+		u64 rr_count : 25;
+	} s;
+	struct cvmx_pko_l5_sqx_sched_state_s cn78xx;
+	struct cvmx_pko_l5_sqx_sched_state_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_sched_state cvmx_pko_l5_sqx_sched_state_t;
+
+/**
+ * cvmx_pko_l5_sq#_schedule
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SCHEDULE.
+ *
+ */
+union cvmx_pko_l5_sqx_schedule {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_schedule_s {
+		u64 reserved_28_63 : 36;
+		u64 prio : 4;
+		u64 rr_quantum : 24;
+	} s;
+	struct cvmx_pko_l5_sqx_schedule_s cn78xx;
+	struct cvmx_pko_l5_sqx_schedule_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_schedule cvmx_pko_l5_sqx_schedule_t;
+
+/**
+ * cvmx_pko_l5_sq#_shape
+ */
+union cvmx_pko_l5_sqx_shape {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_shape_s {
+		u64 reserved_25_63 : 39;
+		u64 length_disable : 1;
+		u64 reserved_13_23 : 11;
+		u64 yellow_disable : 1;
+		u64 red_disable : 1;
+		u64 red_algo : 2;
+		u64 adjust : 9;
+	} s;
+	struct cvmx_pko_l5_sqx_shape_s cn78xx;
+	struct cvmx_pko_l5_sqx_shape_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_shape cvmx_pko_l5_sqx_shape_t;
+
+/**
+ * cvmx_pko_l5_sq#_shape_state
+ *
+ * This register has the same bit fields as PKO_L2_SQ()_SHAPE_STATE.
+ * This register must not be written during normal operation.
+ */
+union cvmx_pko_l5_sqx_shape_state {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_shape_state_s {
+		u64 reserved_60_63 : 4;
+		u64 tw_timestamp : 6;
+		u64 color : 2;
+		u64 pir_accum : 26;
+		u64 cir_accum : 26;
+	} s;
+	struct cvmx_pko_l5_sqx_shape_state_s cn78xx;
+	struct cvmx_pko_l5_sqx_shape_state_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_shape_state cvmx_pko_l5_sqx_shape_state_t;
+
+/**
+ * cvmx_pko_l5_sq#_sw_xoff
+ *
+ * This register has the same bit fields as PKO_L1_SQ()_SW_XOFF.
+ *
+ */
+union cvmx_pko_l5_sqx_sw_xoff {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_sw_xoff_s {
+		u64 reserved_4_63 : 60;
+		u64 drain_irq : 1;
+		u64 drain_null_link : 1;
+		u64 drain : 1;
+		u64 xoff : 1;
+	} s;
+	struct cvmx_pko_l5_sqx_sw_xoff_s cn78xx;
+	struct cvmx_pko_l5_sqx_sw_xoff_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_sw_xoff cvmx_pko_l5_sqx_sw_xoff_t;
+
+/**
+ * cvmx_pko_l5_sq#_topology
+ */
+union cvmx_pko_l5_sqx_topology {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_topology_s {
+		u64 reserved_42_63 : 22;
+		u64 prio_anchor : 10;
+		u64 reserved_26_31 : 6;
+		u64 parent : 10;
+		u64 reserved_5_15 : 11;
+		u64 rr_prio : 4;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_l5_sqx_topology_s cn78xx;
+	struct cvmx_pko_l5_sqx_topology_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_topology cvmx_pko_l5_sqx_topology_t;
+
+/**
+ * cvmx_pko_l5_sq#_yellow
+ *
+ * This register has the same bit fields as PKO_L3_SQ()_YELLOW.
+ *
+ */
+union cvmx_pko_l5_sqx_yellow {
+	u64 u64;
+	struct cvmx_pko_l5_sqx_yellow_s {
+		u64 reserved_20_63 : 44;
+		u64 head : 10;
+		u64 tail : 10;
+	} s;
+	struct cvmx_pko_l5_sqx_yellow_s cn78xx;
+	struct cvmx_pko_l5_sqx_yellow_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqx_yellow cvmx_pko_l5_sqx_yellow_t;
+
+/**
+ * cvmx_pko_l5_sq_csr_bus_debug
+ */
+union cvmx_pko_l5_sq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_l5_sq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_l5_sq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_l5_sq_csr_bus_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sq_csr_bus_debug cvmx_pko_l5_sq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_l5_sqa_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l5_sqa_debug {
+	u64 u64;
+	struct cvmx_pko_l5_sqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l5_sqa_debug_s cn78xx;
+	struct cvmx_pko_l5_sqa_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqa_debug cvmx_pko_l5_sqa_debug_t;
+
+/**
+ * cvmx_pko_l5_sqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_l5_sqb_debug {
+	u64 u64;
+	struct cvmx_pko_l5_sqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_l5_sqb_debug_s cn78xx;
+	struct cvmx_pko_l5_sqb_debug_s cn78xxp1;
+};
+
+typedef union cvmx_pko_l5_sqb_debug cvmx_pko_l5_sqb_debug_t;
+
+/**
+ * cvmx_pko_lut#
+ *
+ * PKO_LUT has a location for each used PKI_CHAN_E. The following table
+ * shows the mapping between LINK/MAC_NUM's, PKI_CHAN_E channels, and
+ * PKO_LUT indices.
+ *
+ * <pre>
+ *   LINK/   PKI_CHAN_E    Corresponding
+ * MAC_NUM   Range         PKO_LUT index   Description
+ * -------   -----------   -------------   -----------------
+ *     0     0x000-0x03F   0x040-0x07F     LBK Loopback
+ *     1     0x100-0x13F   0x080-0x0BF     DPI packet output
+ *     2     0x800-0x80F   0x000-0x00F     BGX0 Logical MAC 0
+ *     3     0x810-0x81F   0x010-0x01F     BGX0 Logical MAC 1
+ *     4     0x820-0x82F   0x020-0x02F     BGX0 Logical MAC 2
+ *     5     0x830-0x83F   0x030-0x03F     BGX0 Logical MAC 3
+ * </pre>
+ */
+union cvmx_pko_lutx {
+	u64 u64;
+	struct cvmx_pko_lutx_s {
+		u64 reserved_16_63 : 48;
+		u64 valid : 1;
+		u64 reserved_14_14 : 1;
+		u64 pq_idx : 5;
+		u64 queue_number : 9;
+	} s;
+	struct cvmx_pko_lutx_cn73xx {
+		u64 reserved_16_63 : 48;
+		u64 valid : 1;
+		u64 reserved_13_14 : 2;
+		u64 pq_idx : 4;
+		u64 reserved_8_8 : 1;
+		u64 queue_number : 8;
+	} cn73xx;
+	struct cvmx_pko_lutx_s cn78xx;
+	struct cvmx_pko_lutx_s cn78xxp1;
+	struct cvmx_pko_lutx_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_lutx cvmx_pko_lutx_t;
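+
+/*
+ * Illustrative sketch (not part of the generated definitions): deriving the
+ * PKO_LUT index for a PKI_CHAN_E channel from the mapping table above. The
+ * helper name is an assumption of this sketch, not an SDK API:
+ *
+ *	static inline int pko_lut_index(int chan)
+ *	{
+ *		if (chan >= 0x800)		// BGX0 logical MACs
+ *			return chan - 0x800;
+ *		if (chan >= 0x100)		// DPI packet output
+ *			return chan - 0x100 + 0x080;
+ *		return chan + 0x040;		// LBK loopback
+ *	}
+ */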
+
+/**
+ * cvmx_pko_lut_bist_status
+ */
+union cvmx_pko_lut_bist_status {
+	u64 u64;
+	struct cvmx_pko_lut_bist_status_s {
+		u64 reserved_1_63 : 63;
+		u64 bist_status : 1;
+	} s;
+	struct cvmx_pko_lut_bist_status_s cn73xx;
+	struct cvmx_pko_lut_bist_status_s cn78xx;
+	struct cvmx_pko_lut_bist_status_s cn78xxp1;
+	struct cvmx_pko_lut_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_bist_status cvmx_pko_lut_bist_status_t;
+
+/**
+ * cvmx_pko_lut_ecc_ctl0
+ */
+union cvmx_pko_lut_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_lut_ecc_ctl0_s {
+		u64 c2q_lut_ram_flip : 2;
+		u64 c2q_lut_ram_cdis : 1;
+		u64 reserved_0_60 : 61;
+	} s;
+	struct cvmx_pko_lut_ecc_ctl0_s cn73xx;
+	struct cvmx_pko_lut_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_lut_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_lut_ecc_ctl0_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_ecc_ctl0 cvmx_pko_lut_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_lut_ecc_dbe_sts0
+ */
+union cvmx_pko_lut_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_lut_ecc_dbe_sts0_s {
+		u64 c2q_lut_ram_dbe : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_lut_ecc_dbe_sts0_s cn73xx;
+	struct cvmx_pko_lut_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_lut_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_lut_ecc_dbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_ecc_dbe_sts0 cvmx_pko_lut_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_lut_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_lut_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_lut_ecc_dbe_sts_cmb0_s {
+		u64 lut_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_lut_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_lut_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_lut_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_lut_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_ecc_dbe_sts_cmb0 cvmx_pko_lut_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_lut_ecc_sbe_sts0
+ */
+union cvmx_pko_lut_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_lut_ecc_sbe_sts0_s {
+		u64 c2q_lut_ram_sbe : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_lut_ecc_sbe_sts0_s cn73xx;
+	struct cvmx_pko_lut_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_lut_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_lut_ecc_sbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_ecc_sbe_sts0 cvmx_pko_lut_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_lut_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_lut_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_lut_ecc_sbe_sts_cmb0_s {
+		u64 lut_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_lut_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_lut_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_lut_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_lut_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_lut_ecc_sbe_sts_cmb0 cvmx_pko_lut_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_mac#_cfg
+ *
+ * These registers create the links between the MACs and the TxFIFO used to store the data,
+ * and hold the per-MAC configuration bits.  These registers must be disabled (FIFO_NUM set
+ * to 31) prior to reconfiguration of any of the other bits.
+ *
+ * <pre>
+ *   CSR Name       Associated MAC
+ *   ------------   -------------------
+ *   PKO_MAC0_CFG   LBK loopback
+ *   PKO_MAC1_CFG   DPI packet output
+ *   PKO_MAC2_CFG   BGX0  logical MAC 0
+ *   PKO_MAC3_CFG   BGX0  logical MAC 1
+ *   PKO_MAC4_CFG   BGX0  logical MAC 2
+ *   PKO_MAC5_CFG   BGX0  logical MAC 3
+ *   PKO_MAC6_CFG   SRIO0 logical MAC 0
+ *   PKO_MAC7_CFG   SRIO0 logical MAC 1
+ *   PKO_MAC8_CFG   SRIO1 logical MAC 0
+ *   PKO_MAC9_CFG   SRIO1 logical MAC 1
+ * </pre>
+ */
+union cvmx_pko_macx_cfg {
+	u64 u64;
+	struct cvmx_pko_macx_cfg_s {
+		u64 reserved_17_63 : 47;
+		u64 min_pad_ena : 1;
+		u64 fcs_ena : 1;
+		u64 fcs_sop_off : 8;
+		u64 skid_max_cnt : 2;
+		u64 fifo_num : 5;
+	} s;
+	struct cvmx_pko_macx_cfg_s cn73xx;
+	struct cvmx_pko_macx_cfg_s cn78xx;
+	struct cvmx_pko_macx_cfg_s cn78xxp1;
+	struct cvmx_pko_macx_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_macx_cfg cvmx_pko_macx_cfg_t;
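+
+/*
+ * Illustrative sketch (not part of the generated definitions): per the note
+ * above, FIFO_NUM must first be set to 31 before any other PKO_MAC()_CFG
+ * bits may be changed. Assuming the csr_rd()/csr_wr() accessors and the
+ * CVMX_PKO_MACX_CFG() address macro used elsewhere in this port:
+ *
+ *	cvmx_pko_macx_cfg_t cfg;
+ *
+ *	cfg.u64 = csr_rd(CVMX_PKO_MACX_CFG(mac));
+ *	cfg.s.fifo_num = 31;	// detach from the TX FIFO first
+ *	csr_wr(CVMX_PKO_MACX_CFG(mac), cfg.u64);
+ *	cfg.s.fcs_ena = 1;	// now safe to reconfigure
+ *	cfg.s.fifo_num = fifo;
+ *	csr_wr(CVMX_PKO_MACX_CFG(mac), cfg.u64);
+ */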
+
+/**
+ * cvmx_pko_mci0_cred_cnt#
+ */
+union cvmx_pko_mci0_cred_cntx {
+	u64 u64;
+	struct cvmx_pko_mci0_cred_cntx_s {
+		u64 reserved_13_63 : 51;
+		u64 cred_cnt : 13;
+	} s;
+	struct cvmx_pko_mci0_cred_cntx_s cn78xx;
+	struct cvmx_pko_mci0_cred_cntx_s cn78xxp1;
+};
+
+typedef union cvmx_pko_mci0_cred_cntx cvmx_pko_mci0_cred_cntx_t;
+
+/**
+ * cvmx_pko_mci0_max_cred#
+ */
+union cvmx_pko_mci0_max_credx {
+	u64 u64;
+	struct cvmx_pko_mci0_max_credx_s {
+		u64 reserved_12_63 : 52;
+		u64 max_cred_lim : 12;
+	} s;
+	struct cvmx_pko_mci0_max_credx_s cn78xx;
+	struct cvmx_pko_mci0_max_credx_s cn78xxp1;
+};
+
+typedef union cvmx_pko_mci0_max_credx cvmx_pko_mci0_max_credx_t;
+
+/**
+ * cvmx_pko_mci1_cred_cnt#
+ */
+union cvmx_pko_mci1_cred_cntx {
+	u64 u64;
+	struct cvmx_pko_mci1_cred_cntx_s {
+		u64 reserved_13_63 : 51;
+		u64 cred_cnt : 13;
+	} s;
+	struct cvmx_pko_mci1_cred_cntx_s cn73xx;
+	struct cvmx_pko_mci1_cred_cntx_s cn78xx;
+	struct cvmx_pko_mci1_cred_cntx_s cn78xxp1;
+	struct cvmx_pko_mci1_cred_cntx_s cnf75xx;
+};
+
+typedef union cvmx_pko_mci1_cred_cntx cvmx_pko_mci1_cred_cntx_t;
+
+/**
+ * cvmx_pko_mci1_max_cred#
+ */
+union cvmx_pko_mci1_max_credx {
+	u64 u64;
+	struct cvmx_pko_mci1_max_credx_s {
+		u64 reserved_12_63 : 52;
+		u64 max_cred_lim : 12;
+	} s;
+	struct cvmx_pko_mci1_max_credx_s cn73xx;
+	struct cvmx_pko_mci1_max_credx_s cn78xx;
+	struct cvmx_pko_mci1_max_credx_s cn78xxp1;
+	struct cvmx_pko_mci1_max_credx_s cnf75xx;
+};
+
+typedef union cvmx_pko_mci1_max_credx cvmx_pko_mci1_max_credx_t;
+
+/**
+ * cvmx_pko_mem_count0
+ *
+ * Notes:
+ * Total number of packets seen by PKO, per port
+ * A write to this address will clear the entry whose index is specified as COUNT[5:0].
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_count0 {
+	u64 u64;
+	struct cvmx_pko_mem_count0_s {
+		u64 reserved_32_63 : 32;
+		u64 count : 32;
+	} s;
+	struct cvmx_pko_mem_count0_s cn30xx;
+	struct cvmx_pko_mem_count0_s cn31xx;
+	struct cvmx_pko_mem_count0_s cn38xx;
+	struct cvmx_pko_mem_count0_s cn38xxp2;
+	struct cvmx_pko_mem_count0_s cn50xx;
+	struct cvmx_pko_mem_count0_s cn52xx;
+	struct cvmx_pko_mem_count0_s cn52xxp1;
+	struct cvmx_pko_mem_count0_s cn56xx;
+	struct cvmx_pko_mem_count0_s cn56xxp1;
+	struct cvmx_pko_mem_count0_s cn58xx;
+	struct cvmx_pko_mem_count0_s cn58xxp1;
+	struct cvmx_pko_mem_count0_s cn61xx;
+	struct cvmx_pko_mem_count0_s cn63xx;
+	struct cvmx_pko_mem_count0_s cn63xxp1;
+	struct cvmx_pko_mem_count0_s cn66xx;
+	struct cvmx_pko_mem_count0_s cn68xx;
+	struct cvmx_pko_mem_count0_s cn68xxp1;
+	struct cvmx_pko_mem_count0_s cn70xx;
+	struct cvmx_pko_mem_count0_s cn70xxp1;
+	struct cvmx_pko_mem_count0_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_count0 cvmx_pko_mem_count0_t;
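+
+/*
+ * Illustrative sketch (not part of the generated definitions): reading and
+ * clearing one port's packet count per the notes above. Assuming the
+ * csr_rd()/csr_wr() accessors and the CVMX_PKO_REG_READ_IDX and
+ * CVMX_PKO_MEM_COUNT0 address macros used elsewhere in this port:
+ *
+ *	cvmx_pko_reg_read_idx_t idx;
+ *	cvmx_pko_mem_count0_t cnt;
+ *
+ *	idx.u64 = 0;
+ *	idx.s.index = port;	// select the entry to read
+ *	csr_wr(CVMX_PKO_REG_READ_IDX, idx.u64);
+ *	cnt.u64 = csr_rd(CVMX_PKO_MEM_COUNT0);
+ *	csr_wr(CVMX_PKO_MEM_COUNT0, (u64)port);	// clears entry COUNT[5:0]
+ */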
+
+/**
+ * cvmx_pko_mem_count1
+ *
+ * Notes:
+ * Total number of bytes seen by PKO, per port
+ * A write to this address will clear the entry whose index is specified as COUNT[5:0].
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_count1 {
+	u64 u64;
+	struct cvmx_pko_mem_count1_s {
+		u64 reserved_48_63 : 16;
+		u64 count : 48;
+	} s;
+	struct cvmx_pko_mem_count1_s cn30xx;
+	struct cvmx_pko_mem_count1_s cn31xx;
+	struct cvmx_pko_mem_count1_s cn38xx;
+	struct cvmx_pko_mem_count1_s cn38xxp2;
+	struct cvmx_pko_mem_count1_s cn50xx;
+	struct cvmx_pko_mem_count1_s cn52xx;
+	struct cvmx_pko_mem_count1_s cn52xxp1;
+	struct cvmx_pko_mem_count1_s cn56xx;
+	struct cvmx_pko_mem_count1_s cn56xxp1;
+	struct cvmx_pko_mem_count1_s cn58xx;
+	struct cvmx_pko_mem_count1_s cn58xxp1;
+	struct cvmx_pko_mem_count1_s cn61xx;
+	struct cvmx_pko_mem_count1_s cn63xx;
+	struct cvmx_pko_mem_count1_s cn63xxp1;
+	struct cvmx_pko_mem_count1_s cn66xx;
+	struct cvmx_pko_mem_count1_s cn68xx;
+	struct cvmx_pko_mem_count1_s cn68xxp1;
+	struct cvmx_pko_mem_count1_s cn70xx;
+	struct cvmx_pko_mem_count1_s cn70xxp1;
+	struct cvmx_pko_mem_count1_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_count1 cvmx_pko_mem_count1_t;
+
+/**
+ * cvmx_pko_mem_debug0
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.cmnd[63:0]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug0 {
+	u64 u64;
+	struct cvmx_pko_mem_debug0_s {
+		u64 fau : 28;
+		u64 cmd : 14;
+		u64 segs : 6;
+		u64 size : 16;
+	} s;
+	struct cvmx_pko_mem_debug0_s cn30xx;
+	struct cvmx_pko_mem_debug0_s cn31xx;
+	struct cvmx_pko_mem_debug0_s cn38xx;
+	struct cvmx_pko_mem_debug0_s cn38xxp2;
+	struct cvmx_pko_mem_debug0_s cn50xx;
+	struct cvmx_pko_mem_debug0_s cn52xx;
+	struct cvmx_pko_mem_debug0_s cn52xxp1;
+	struct cvmx_pko_mem_debug0_s cn56xx;
+	struct cvmx_pko_mem_debug0_s cn56xxp1;
+	struct cvmx_pko_mem_debug0_s cn58xx;
+	struct cvmx_pko_mem_debug0_s cn58xxp1;
+	struct cvmx_pko_mem_debug0_s cn61xx;
+	struct cvmx_pko_mem_debug0_s cn63xx;
+	struct cvmx_pko_mem_debug0_s cn63xxp1;
+	struct cvmx_pko_mem_debug0_s cn66xx;
+	struct cvmx_pko_mem_debug0_s cn68xx;
+	struct cvmx_pko_mem_debug0_s cn68xxp1;
+	struct cvmx_pko_mem_debug0_s cn70xx;
+	struct cvmx_pko_mem_debug0_s cn70xxp1;
+	struct cvmx_pko_mem_debug0_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug0 cvmx_pko_mem_debug0_t;
+
+/**
+ * cvmx_pko_mem_debug1
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.curr[63:0]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug1 {
+	u64 u64;
+	struct cvmx_pko_mem_debug1_s {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 ptr : 40;
+	} s;
+	struct cvmx_pko_mem_debug1_s cn30xx;
+	struct cvmx_pko_mem_debug1_s cn31xx;
+	struct cvmx_pko_mem_debug1_s cn38xx;
+	struct cvmx_pko_mem_debug1_s cn38xxp2;
+	struct cvmx_pko_mem_debug1_s cn50xx;
+	struct cvmx_pko_mem_debug1_s cn52xx;
+	struct cvmx_pko_mem_debug1_s cn52xxp1;
+	struct cvmx_pko_mem_debug1_s cn56xx;
+	struct cvmx_pko_mem_debug1_s cn56xxp1;
+	struct cvmx_pko_mem_debug1_s cn58xx;
+	struct cvmx_pko_mem_debug1_s cn58xxp1;
+	struct cvmx_pko_mem_debug1_s cn61xx;
+	struct cvmx_pko_mem_debug1_s cn63xx;
+	struct cvmx_pko_mem_debug1_s cn63xxp1;
+	struct cvmx_pko_mem_debug1_s cn66xx;
+	struct cvmx_pko_mem_debug1_s cn68xx;
+	struct cvmx_pko_mem_debug1_s cn68xxp1;
+	struct cvmx_pko_mem_debug1_s cn70xx;
+	struct cvmx_pko_mem_debug1_s cn70xxp1;
+	struct cvmx_pko_mem_debug1_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug1 cvmx_pko_mem_debug1_t;
+
+/**
+ * cvmx_pko_mem_debug10
+ *
+ * Notes:
+ * Internal per-engine state intended for debug use only - pko.dat.ptr.ptrs1, pko.dat.ptr.ptrs2
+ * This CSR is a memory of 10 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug10 {
+	u64 u64;
+	struct cvmx_pko_mem_debug10_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug10_cn30xx {
+		u64 fau : 28;
+		u64 cmd : 14;
+		u64 segs : 6;
+		u64 size : 16;
+	} cn30xx;
+	struct cvmx_pko_mem_debug10_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug10_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug10_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug10_cn50xx {
+		u64 reserved_49_63 : 15;
+		u64 ptrs1 : 17;
+		u64 reserved_17_31 : 15;
+		u64 ptrs2 : 17;
+	} cn50xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn68xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn68xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug10_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug10_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug10 cvmx_pko_mem_debug10_t;
+
+/**
+ * cvmx_pko_mem_debug11
+ *
+ * Notes:
+ * Internal per-engine state intended for debug use only - pko.out.sta.state[22:0]
+ * This CSR is a memory of 10 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug11 {
+	u64 u64;
+	struct cvmx_pko_mem_debug11_s {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 reserved_0_39 : 40;
+	} s;
+	struct cvmx_pko_mem_debug11_cn30xx {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 ptr : 40;
+	} cn30xx;
+	struct cvmx_pko_mem_debug11_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug11_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug11_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug11_cn50xx {
+		u64 reserved_23_63 : 41;
+		u64 maj : 1;
+		u64 uid : 3;
+		u64 sop : 1;
+		u64 len : 1;
+		u64 chk : 1;
+		u64 cnt : 13;
+		u64 mod : 3;
+	} cn50xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn68xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn68xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug11_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug11_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug11 cvmx_pko_mem_debug11_t;
+
+/**
+ * cvmx_pko_mem_debug12
+ *
+ * Notes:
+ * Internal per-engine x4 state intended for debug use only - pko.out.ctl.cmnd[63:0]
+ * This CSR is a memory of 40 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug12 {
+	u64 u64;
+	struct cvmx_pko_mem_debug12_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug12_cn30xx {
+		u64 data : 64;
+	} cn30xx;
+	struct cvmx_pko_mem_debug12_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug12_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug12_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug12_cn50xx {
+		u64 fau : 28;
+		u64 cmd : 14;
+		u64 segs : 6;
+		u64 size : 16;
+	} cn50xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug12_cn68xx {
+		u64 state : 64;
+	} cn68xx;
+	struct cvmx_pko_mem_debug12_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug12_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug12_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug12 cvmx_pko_mem_debug12_t;
+
+/**
+ * cvmx_pko_mem_debug13
+ *
+ * Notes:
+ * Internal per-engine x4 state intended for debug use only - pko.out.ctl.head[63:0]
+ * This CSR is a memory of 40 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug13 {
+	u64 u64;
+	struct cvmx_pko_mem_debug13_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug13_cn30xx {
+		u64 reserved_51_63 : 13;
+		u64 widx : 17;
+		u64 ridx2 : 17;
+		u64 widx2 : 17;
+	} cn30xx;
+	struct cvmx_pko_mem_debug13_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug13_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug13_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug13_cn50xx {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 ptr : 40;
+	} cn50xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug13_cn68xx {
+		u64 state : 64;
+	} cn68xx;
+	struct cvmx_pko_mem_debug13_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug13_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug13_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug13 cvmx_pko_mem_debug13_t;
+
+/**
+ * cvmx_pko_mem_debug14
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko.prt.psb.save[63:0]
+ * This CSR is a memory of 132 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug14 {
+	u64 u64;
+	struct cvmx_pko_mem_debug14_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug14_cn30xx {
+		u64 reserved_17_63 : 47;
+		u64 ridx : 17;
+	} cn30xx;
+	struct cvmx_pko_mem_debug14_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug14_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug14_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug14_cn52xx {
+		u64 data : 64;
+	} cn52xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_debug14_cn52xx cn56xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_debug14_cn52xx cn61xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn63xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_debug14_cn52xx cn66xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn70xx;
+	struct cvmx_pko_mem_debug14_cn52xx cn70xxp1;
+	struct cvmx_pko_mem_debug14_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug14 cvmx_pko_mem_debug14_t;
+
+/**
+ * cvmx_pko_mem_debug2
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.head[63:0]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug2 {
+	u64 u64;
+	struct cvmx_pko_mem_debug2_s {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 ptr : 40;
+	} s;
+	struct cvmx_pko_mem_debug2_s cn30xx;
+	struct cvmx_pko_mem_debug2_s cn31xx;
+	struct cvmx_pko_mem_debug2_s cn38xx;
+	struct cvmx_pko_mem_debug2_s cn38xxp2;
+	struct cvmx_pko_mem_debug2_s cn50xx;
+	struct cvmx_pko_mem_debug2_s cn52xx;
+	struct cvmx_pko_mem_debug2_s cn52xxp1;
+	struct cvmx_pko_mem_debug2_s cn56xx;
+	struct cvmx_pko_mem_debug2_s cn56xxp1;
+	struct cvmx_pko_mem_debug2_s cn58xx;
+	struct cvmx_pko_mem_debug2_s cn58xxp1;
+	struct cvmx_pko_mem_debug2_s cn61xx;
+	struct cvmx_pko_mem_debug2_s cn63xx;
+	struct cvmx_pko_mem_debug2_s cn63xxp1;
+	struct cvmx_pko_mem_debug2_s cn66xx;
+	struct cvmx_pko_mem_debug2_s cn68xx;
+	struct cvmx_pko_mem_debug2_s cn68xxp1;
+	struct cvmx_pko_mem_debug2_s cn70xx;
+	struct cvmx_pko_mem_debug2_s cn70xxp1;
+	struct cvmx_pko_mem_debug2_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug2 cvmx_pko_mem_debug2_t;
+
+/**
+ * cvmx_pko_mem_debug3
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.resp[63:0]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug3 {
+	u64 u64;
+	struct cvmx_pko_mem_debug3_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug3_cn30xx {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 ptr : 40;
+	} cn30xx;
+	struct cvmx_pko_mem_debug3_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug3_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug3_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug3_cn50xx {
+		u64 data : 64;
+	} cn50xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn68xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn68xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug3_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug3_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug3 cvmx_pko_mem_debug3_t;
+
+/**
+ * cvmx_pko_mem_debug4
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.state[63:0]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug4 {
+	u64 u64;
+	struct cvmx_pko_mem_debug4_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug4_cn30xx {
+		u64 data : 64;
+	} cn30xx;
+	struct cvmx_pko_mem_debug4_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug4_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug4_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug4_cn50xx {
+		u64 cmnd_segs : 3;
+		u64 cmnd_siz : 16;
+		u64 cmnd_off : 6;
+		u64 uid : 3;
+		u64 dread_sop : 1;
+		u64 init_dwrite : 1;
+		u64 chk_once : 1;
+		u64 chk_mode : 1;
+		u64 active : 1;
+		u64 static_p : 1;
+		u64 qos : 3;
+		u64 qcb_ridx : 5;
+		u64 qid_off_max : 4;
+		u64 qid_off : 4;
+		u64 qid_base : 8;
+		u64 wait : 1;
+		u64 minor : 2;
+		u64 major : 3;
+	} cn50xx;
+	struct cvmx_pko_mem_debug4_cn52xx {
+		u64 curr_siz : 8;
+		u64 curr_off : 16;
+		u64 cmnd_segs : 6;
+		u64 cmnd_siz : 16;
+		u64 cmnd_off : 6;
+		u64 uid : 2;
+		u64 dread_sop : 1;
+		u64 init_dwrite : 1;
+		u64 chk_once : 1;
+		u64 chk_mode : 1;
+		u64 wait : 1;
+		u64 minor : 2;
+		u64 major : 3;
+	} cn52xx;
+	struct cvmx_pko_mem_debug4_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_debug4_cn52xx cn56xx;
+	struct cvmx_pko_mem_debug4_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_debug4_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug4_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug4_cn52xx cn61xx;
+	struct cvmx_pko_mem_debug4_cn52xx cn63xx;
+	struct cvmx_pko_mem_debug4_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_debug4_cn52xx cn66xx;
+	struct cvmx_pko_mem_debug4_cn68xx {
+		u64 curr_siz : 9;
+		u64 curr_off : 16;
+		u64 cmnd_segs : 6;
+		u64 cmnd_siz : 16;
+		u64 cmnd_off : 6;
+		u64 dread_sop : 1;
+		u64 init_dwrite : 1;
+		u64 chk_once : 1;
+		u64 chk_mode : 1;
+		u64 reserved_6_6 : 1;
+		u64 minor : 2;
+		u64 major : 4;
+	} cn68xx;
+	struct cvmx_pko_mem_debug4_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug4_cn52xx cn70xx;
+	struct cvmx_pko_mem_debug4_cn52xx cn70xxp1;
+	struct cvmx_pko_mem_debug4_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug4 cvmx_pko_mem_debug4_t;
+
+/**
+ * cvmx_pko_mem_debug5
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.state[127:64]
+ * This CSR is a memory of 12 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug5 {
+	u64 u64;
+	struct cvmx_pko_mem_debug5_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug5_cn30xx {
+		u64 dwri_mod : 1;
+		u64 dwri_sop : 1;
+		u64 dwri_len : 1;
+		u64 dwri_cnt : 13;
+		u64 cmnd_siz : 16;
+		u64 uid : 1;
+		u64 xfer_wor : 1;
+		u64 xfer_dwr : 1;
+		u64 cbuf_fre : 1;
+		u64 reserved_27_27 : 1;
+		u64 chk_mode : 1;
+		u64 active : 1;
+		u64 qos : 3;
+		u64 qcb_ridx : 5;
+		u64 qid_off : 3;
+		u64 qid_base : 7;
+		u64 wait : 1;
+		u64 minor : 2;
+		u64 major : 4;
+	} cn30xx;
+	struct cvmx_pko_mem_debug5_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug5_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug5_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug5_cn50xx {
+		u64 curr_ptr : 29;
+		u64 curr_siz : 16;
+		u64 curr_off : 16;
+		u64 cmnd_segs : 3;
+	} cn50xx;
+	struct cvmx_pko_mem_debug5_cn52xx {
+		u64 reserved_54_63 : 10;
+		u64 nxt_inflt : 6;
+		u64 curr_ptr : 40;
+		u64 curr_siz : 8;
+	} cn52xx;
+	struct cvmx_pko_mem_debug5_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_debug5_cn52xx cn56xx;
+	struct cvmx_pko_mem_debug5_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_debug5_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug5_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug5_cn61xx {
+		u64 reserved_56_63 : 8;
+		u64 ptp : 1;
+		u64 major_3 : 1;
+		u64 nxt_inflt : 6;
+		u64 curr_ptr : 40;
+		u64 curr_siz : 8;
+	} cn61xx;
+	struct cvmx_pko_mem_debug5_cn61xx cn63xx;
+	struct cvmx_pko_mem_debug5_cn61xx cn63xxp1;
+	struct cvmx_pko_mem_debug5_cn61xx cn66xx;
+	struct cvmx_pko_mem_debug5_cn68xx {
+		u64 reserved_57_63 : 7;
+		u64 uid : 3;
+		u64 ptp : 1;
+		u64 nxt_inflt : 6;
+		u64 curr_ptr : 40;
+		u64 curr_siz : 7;
+	} cn68xx;
+	struct cvmx_pko_mem_debug5_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug5_cn61xx cn70xx;
+	struct cvmx_pko_mem_debug5_cn61xx cn70xxp1;
+	struct cvmx_pko_mem_debug5_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug5 cvmx_pko_mem_debug5_t;
+
+/**
+ * cvmx_pko_mem_debug6
+ *
+ * Notes:
+ * Internal per-port state intended for debug use only - pko_prt_psb.port[63:0]
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug6 {
+	u64 u64;
+	struct cvmx_pko_mem_debug6_s {
+		u64 reserved_38_63 : 26;
+		u64 qos_active : 1;
+		u64 reserved_0_36 : 37;
+	} s;
+	struct cvmx_pko_mem_debug6_cn30xx {
+		u64 reserved_11_63 : 53;
+		u64 qid_offm : 3;
+		u64 static_p : 1;
+		u64 work_min : 3;
+		u64 dwri_chk : 1;
+		u64 dwri_uid : 1;
+		u64 dwri_mod : 2;
+	} cn30xx;
+	struct cvmx_pko_mem_debug6_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug6_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug6_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug6_cn50xx {
+		u64 reserved_11_63 : 53;
+		u64 curr_ptr : 11;
+	} cn50xx;
+	struct cvmx_pko_mem_debug6_cn52xx {
+		u64 reserved_37_63 : 27;
+		u64 qid_offres : 4;
+		u64 qid_offths : 4;
+		u64 preempter : 1;
+		u64 preemptee : 1;
+		u64 preempted : 1;
+		u64 active : 1;
+		u64 statc : 1;
+		u64 qos : 3;
+		u64 qcb_ridx : 5;
+		u64 qid_offmax : 4;
+		u64 qid_off : 4;
+		u64 qid_base : 8;
+	} cn52xx;
+	struct cvmx_pko_mem_debug6_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_debug6_cn52xx cn56xx;
+	struct cvmx_pko_mem_debug6_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_debug6_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug6_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug6_cn52xx cn61xx;
+	struct cvmx_pko_mem_debug6_cn52xx cn63xx;
+	struct cvmx_pko_mem_debug6_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_debug6_cn52xx cn66xx;
+	struct cvmx_pko_mem_debug6_cn68xx {
+		u64 reserved_38_63 : 26;
+		u64 qos_active : 1;
+		u64 qid_offths : 5;
+		u64 preempter : 1;
+		u64 preemptee : 1;
+		u64 active : 1;
+		u64 static_p : 1;
+		u64 qos : 3;
+		u64 qcb_ridx : 7;
+		u64 qid_offmax : 5;
+		u64 qid_off : 5;
+		u64 qid_base : 8;
+	} cn68xx;
+	struct cvmx_pko_mem_debug6_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug6_cn70xx {
+		u64 reserved_37_63 : 27;
+		u64 qid_offres : 4;
+		u64 qid_offths : 4;
+		u64 preempter : 1;
+		u64 preemptee : 1;
+		u64 preempted : 1;
+		u64 active : 1;
+		u64 staticb : 1;
+		u64 qos : 3;
+		u64 qcb_ridx : 5;
+		u64 qid_offmax : 4;
+		u64 qid_off : 4;
+		u64 qid_base : 8;
+	} cn70xx;
+	struct cvmx_pko_mem_debug6_cn70xx cn70xxp1;
+	struct cvmx_pko_mem_debug6_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug6 cvmx_pko_mem_debug6_t;
+
+/**
+ * cvmx_pko_mem_debug7
+ *
+ * Notes:
+ * Internal per-queue state intended for debug use only - pko_prt_qsb.state[63:0]
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug7 {
+	u64 u64;
+	struct cvmx_pko_mem_debug7_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_mem_debug7_cn30xx {
+		u64 reserved_58_63 : 6;
+		u64 dwb : 9;
+		u64 start : 33;
+		u64 size : 16;
+	} cn30xx;
+	struct cvmx_pko_mem_debug7_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug7_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug7_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug7_cn50xx {
+		u64 qos : 5;
+		u64 tail : 1;
+		u64 buf_siz : 13;
+		u64 buf_ptr : 33;
+		u64 qcb_widx : 6;
+		u64 qcb_ridx : 6;
+	} cn50xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug7_cn68xx {
+		u64 buf_siz : 11;
+		u64 buf_ptr : 37;
+		u64 qcb_widx : 8;
+		u64 qcb_ridx : 8;
+	} cn68xx;
+	struct cvmx_pko_mem_debug7_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug7_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug7_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug7 cvmx_pko_mem_debug7_t;
+
+/**
+ * cvmx_pko_mem_debug8
+ *
+ * Notes:
+ * Internal per-queue state intended for debug use only - pko_prt_qsb.state[91:64]
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug8 {
+	u64 u64;
+	struct cvmx_pko_mem_debug8_s {
+		u64 reserved_59_63 : 5;
+		u64 tail : 1;
+		u64 reserved_0_57 : 58;
+	} s;
+	struct cvmx_pko_mem_debug8_cn30xx {
+		u64 qos : 5;
+		u64 tail : 1;
+		u64 buf_siz : 13;
+		u64 buf_ptr : 33;
+		u64 qcb_widx : 6;
+		u64 qcb_ridx : 6;
+	} cn30xx;
+	struct cvmx_pko_mem_debug8_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug8_cn30xx cn38xx;
+	struct cvmx_pko_mem_debug8_cn30xx cn38xxp2;
+	struct cvmx_pko_mem_debug8_cn50xx {
+		u64 reserved_28_63 : 36;
+		u64 doorbell : 20;
+		u64 reserved_6_7 : 2;
+		u64 static_p : 1;
+		u64 s_tail : 1;
+		u64 static_q : 1;
+		u64 qos : 3;
+	} cn50xx;
+	struct cvmx_pko_mem_debug8_cn52xx {
+		u64 reserved_29_63 : 35;
+		u64 preempter : 1;
+		u64 doorbell : 20;
+		u64 reserved_7_7 : 1;
+		u64 preemptee : 1;
+		u64 static_p : 1;
+		u64 s_tail : 1;
+		u64 static_q : 1;
+		u64 qos : 3;
+	} cn52xx;
+	struct cvmx_pko_mem_debug8_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_debug8_cn52xx cn56xx;
+	struct cvmx_pko_mem_debug8_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_debug8_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug8_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug8_cn61xx {
+		u64 reserved_42_63 : 22;
+		u64 qid_qqos : 8;
+		u64 reserved_33_33 : 1;
+		u64 qid_idx : 4;
+		u64 preempter : 1;
+		u64 doorbell : 20;
+		u64 reserved_7_7 : 1;
+		u64 preemptee : 1;
+		u64 static_p : 1;
+		u64 s_tail : 1;
+		u64 static_q : 1;
+		u64 qos : 3;
+	} cn61xx;
+	struct cvmx_pko_mem_debug8_cn52xx cn63xx;
+	struct cvmx_pko_mem_debug8_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_debug8_cn61xx cn66xx;
+	struct cvmx_pko_mem_debug8_cn68xx {
+		u64 reserved_50_63 : 14;
+		u64 qid_qqos : 8;
+		u64 qid_idx : 5;
+		u64 preempter : 1;
+		u64 doorbell : 20;
+		u64 reserved_9_15 : 7;
+		u64 qid_qos : 6;
+		u64 qid_tail : 1;
+		u64 buf_siz : 2;
+	} cn68xx;
+	struct cvmx_pko_mem_debug8_cn68xx cn68xxp1;
+	struct cvmx_pko_mem_debug8_cn61xx cn70xx;
+	struct cvmx_pko_mem_debug8_cn61xx cn70xxp1;
+	struct cvmx_pko_mem_debug8_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug8 cvmx_pko_mem_debug8_t;
+
+/**
+ * cvmx_pko_mem_debug9
+ *
+ * Notes:
+ * Internal per-engine state intended for debug use only - pko.dat.ptr.ptrs0, pko.dat.ptr.ptrs3
+ * This CSR is a memory of 10 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.
+ */
+union cvmx_pko_mem_debug9 {
+	u64 u64;
+	struct cvmx_pko_mem_debug9_s {
+		u64 reserved_49_63 : 15;
+		u64 ptrs0 : 17;
+		u64 reserved_0_31 : 32;
+	} s;
+	struct cvmx_pko_mem_debug9_cn30xx {
+		u64 reserved_28_63 : 36;
+		u64 doorbell : 20;
+		u64 reserved_5_7 : 3;
+		u64 s_tail : 1;
+		u64 static_q : 1;
+		u64 qos : 3;
+	} cn30xx;
+	struct cvmx_pko_mem_debug9_cn30xx cn31xx;
+	struct cvmx_pko_mem_debug9_cn38xx {
+		u64 reserved_28_63 : 36;
+		u64 doorbell : 20;
+		u64 reserved_6_7 : 2;
+		u64 static_p : 1;
+		u64 s_tail : 1;
+		u64 static_q : 1;
+		u64 qos : 3;
+	} cn38xx;
+	struct cvmx_pko_mem_debug9_cn38xx cn38xxp2;
+	struct cvmx_pko_mem_debug9_cn50xx {
+		u64 reserved_49_63 : 15;
+		u64 ptrs0 : 17;
+		u64 reserved_17_31 : 15;
+		u64 ptrs3 : 17;
+	} cn50xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn52xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn52xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cn56xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn56xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cn58xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn58xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cn61xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn63xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn63xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cn66xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn68xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn68xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cn70xx;
+	struct cvmx_pko_mem_debug9_cn50xx cn70xxp1;
+	struct cvmx_pko_mem_debug9_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_debug9 cvmx_pko_mem_debug9_t;
+
+/**
+ * cvmx_pko_mem_iport_ptrs
+ *
+ * Notes:
+ * This CSR is a memory of 128 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is an IPORT.  A read of any
+ * entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_iport_ptrs {
+	u64 u64;
+	struct cvmx_pko_mem_iport_ptrs_s {
+		u64 reserved_63_63 : 1;
+		u64 crc : 1;
+		u64 static_p : 1;
+		u64 qos_mask : 8;
+		u64 min_pkt : 3;
+		u64 reserved_31_49 : 19;
+		u64 pipe : 7;
+		u64 reserved_21_23 : 3;
+		u64 intr : 5;
+		u64 reserved_13_15 : 3;
+		u64 eid : 5;
+		u64 reserved_7_7 : 1;
+		u64 ipid : 7;
+	} s;
+	struct cvmx_pko_mem_iport_ptrs_s cn68xx;
+	struct cvmx_pko_mem_iport_ptrs_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_iport_ptrs cvmx_pko_mem_iport_ptrs_t;
+
+/**
+ * cvmx_pko_mem_iport_qos
+ *
+ * Notes:
+ * Sets the QOS mask, per port.  These QOS_MASK bits are logically and physically the same QOS_MASK
+ * bits in PKO_MEM_IPORT_PTRS.  This CSR address allows the QOS_MASK bits to be written during PKO
+ * operation without affecting any other port state.  The engine to which port PID is mapped is engine
+ * EID.  Note that the port to engine mapping must be the same as was previously programmed via the
+ * PKO_MEM_IPORT_PTRS CSR.
+ * This CSR is a memory of 128 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is an IPORT.  A read of
+ * any entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_iport_qos {
+	u64 u64;
+	struct cvmx_pko_mem_iport_qos_s {
+		u64 reserved_61_63 : 3;
+		u64 qos_mask : 8;
+		u64 reserved_13_52 : 40;
+		u64 eid : 5;
+		u64 reserved_7_7 : 1;
+		u64 ipid : 7;
+	} s;
+	struct cvmx_pko_mem_iport_qos_s cn68xx;
+	struct cvmx_pko_mem_iport_qos_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_iport_qos cvmx_pko_mem_iport_qos_t;
+
+/**
+ * cvmx_pko_mem_iqueue_ptrs
+ *
+ * Notes:
+ * Sets the queue to port mapping and the initial command buffer pointer, per queue.  Unused queues must
+ * set BUF_PTR=0.  Each queue may map to at most one port.  No more than 32 queues may map to a port.
+ * The set of queues that is mapped to a port must be a contiguous array of queues.  The port to which
+ * queue QID is mapped is port IPID.  The index of queue QID in port IPID's queue list is IDX.  The last
+ * queue in port IPID's queue array must have its TAIL bit set.
+ * STATIC_Q marks queue QID as having static priority.  STATIC_P marks the port IPID to which QID is
+ * mapped as having at least one queue with static priority.  If any QID that maps to IPID has static
+ * priority, then all QID that map to IPID must have STATIC_P set.  Queues marked as static priority
+ * must be contiguous and begin at IDX 0.  The last queue that is marked as having static priority
+ * must have its S_TAIL bit set.
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is an IQUEUE.  A read of any
+ * entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_iqueue_ptrs {
+	u64 u64;
+	struct cvmx_pko_mem_iqueue_ptrs_s {
+		u64 s_tail : 1;
+		u64 static_p : 1;
+		u64 static_q : 1;
+		u64 qos_mask : 8;
+		u64 buf_ptr : 31;
+		u64 tail : 1;
+		u64 index : 5;
+		u64 reserved_15_15 : 1;
+		u64 ipid : 7;
+		u64 qid : 8;
+	} s;
+	struct cvmx_pko_mem_iqueue_ptrs_s cn68xx;
+	struct cvmx_pko_mem_iqueue_ptrs_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_iqueue_ptrs cvmx_pko_mem_iqueue_ptrs_t;
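+
+/*
+ * Illustrative sketch (not part of the generated definitions): programming
+ * one internal queue's mapping per the rules above. qid, ipid, idx, is_last
+ * and buf_ptr are caller-supplied; csr_wr() and CVMX_PKO_MEM_IQUEUE_PTRS
+ * are the accessor/address macro used elsewhere in this port:
+ *
+ *	cvmx_pko_mem_iqueue_ptrs_t ptrs;
+ *
+ *	ptrs.u64 = 0;
+ *	ptrs.s.qid = qid;
+ *	ptrs.s.ipid = ipid;		// port this queue maps to
+ *	ptrs.s.index = idx;		// position in the port's queue array
+ *	ptrs.s.tail = is_last;		// set on the port's last queue
+ *	ptrs.s.buf_ptr = buf_ptr;	// initial command buffer pointer
+ *	ptrs.s.qos_mask = 0xff;
+ *	csr_wr(CVMX_PKO_MEM_IQUEUE_PTRS, ptrs.u64);
+ */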
+
+/**
+ * cvmx_pko_mem_iqueue_qos
+ *
+ * Notes:
+ * Sets the QOS mask, per queue.  These QOS_MASK bits are logically and physically the same QOS_MASK
+ * bits in PKO_MEM_IQUEUE_PTRS.  This CSR address allows the QOS_MASK bits to be written during PKO
+ * operation without affecting any other queue state.  The port to which queue QID is mapped is port
+ * IPID.  Note that the queue to port mapping must be the same as was previously programmed via the
+ * PKO_MEM_IQUEUE_PTRS CSR.
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is an IQUEUE.  A read of any
+ * entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_iqueue_qos {
+	u64 u64;
+	struct cvmx_pko_mem_iqueue_qos_s {
+		u64 reserved_61_63 : 3;
+		u64 qos_mask : 8;
+		u64 reserved_15_52 : 38;
+		u64 ipid : 7;
+		u64 qid : 8;
+	} s;
+	struct cvmx_pko_mem_iqueue_qos_s cn68xx;
+	struct cvmx_pko_mem_iqueue_qos_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_iqueue_qos cvmx_pko_mem_iqueue_qos_t;
+
+/**
+ * cvmx_pko_mem_port_ptrs
+ *
+ * Notes:
+ * Sets the port to engine mapping, per port.  Ports marked as static priority need not be contiguous,
+ * but they must be the lowest numbered PIDs mapped to this EID and must have QOS_MASK=0xff.  If EID==8
+ * or EID==9, then PID[1:0] is used to direct the packet to the correct port on that interface.
+ * EID==15 can be used for unused PKO-internal ports.
+ * The reset configuration is the following:
+ *   PID EID(ext port) BP_PORT QOS_MASK STATIC_P
+ *   -------------------------------------------
+ *     0   0( 0)             0     0xff        0
+ *     1   1( 1)             1     0xff        0
+ *     2   2( 2)             2     0xff        0
+ *     3   3( 3)             3     0xff        0
+ *     4   0( 0)             4     0xff        0
+ *     5   1( 1)             5     0xff        0
+ *     6   2( 2)             6     0xff        0
+ *     7   3( 3)             7     0xff        0
+ *     8   0( 0)             8     0xff        0
+ *     9   1( 1)             9     0xff        0
+ *    10   2( 2)            10     0xff        0
+ *    11   3( 3)            11     0xff        0
+ *    12   0( 0)            12     0xff        0
+ *    13   1( 1)            13     0xff        0
+ *    14   2( 2)            14     0xff        0
+ *    15   3( 3)            15     0xff        0
+ *   -------------------------------------------
+ *    16   4(16)            16     0xff        0
+ *    17   5(17)            17     0xff        0
+ *    18   6(18)            18     0xff        0
+ *    19   7(19)            19     0xff        0
+ *    20   4(16)            20     0xff        0
+ *    21   5(17)            21     0xff        0
+ *    22   6(18)            22     0xff        0
+ *    23   7(19)            23     0xff        0
+ *    24   4(16)            24     0xff        0
+ *    25   5(17)            25     0xff        0
+ *    26   6(18)            26     0xff        0
+ *    27   7(19)            27     0xff        0
+ *    28   4(16)            28     0xff        0
+ *    29   5(17)            29     0xff        0
+ *    30   6(18)            30     0xff        0
+ *    31   7(19)            31     0xff        0
+ *   -------------------------------------------
+ *    32   8(32)            32     0xff        0
+ *    33   8(33)            33     0xff        0
+ *    34   8(34)            34     0xff        0
+ *    35   8(35)            35     0xff        0
+ *   -------------------------------------------
+ *    36   9(36)            36     0xff        0
+ *    37   9(37)            37     0xff        0
+ *    38   9(38)            38     0xff        0
+ *    39   9(39)            39     0xff        0
+ *
+ * This CSR is a memory of 48 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_port_ptrs {
+	u64 u64;
+	struct cvmx_pko_mem_port_ptrs_s {
+		u64 reserved_62_63 : 2;
+		u64 static_p : 1;
+		u64 qos_mask : 8;
+		u64 reserved_16_52 : 37;
+		u64 bp_port : 6;
+		u64 eid : 4;
+		u64 pid : 6;
+	} s;
+	struct cvmx_pko_mem_port_ptrs_s cn52xx;
+	struct cvmx_pko_mem_port_ptrs_s cn52xxp1;
+	struct cvmx_pko_mem_port_ptrs_s cn56xx;
+	struct cvmx_pko_mem_port_ptrs_s cn56xxp1;
+	struct cvmx_pko_mem_port_ptrs_s cn61xx;
+	struct cvmx_pko_mem_port_ptrs_s cn63xx;
+	struct cvmx_pko_mem_port_ptrs_s cn63xxp1;
+	struct cvmx_pko_mem_port_ptrs_s cn66xx;
+	struct cvmx_pko_mem_port_ptrs_s cn70xx;
+	struct cvmx_pko_mem_port_ptrs_s cn70xxp1;
+	struct cvmx_pko_mem_port_ptrs_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_port_ptrs cvmx_pko_mem_port_ptrs_t;
+
+/**
+ * cvmx_pko_mem_port_qos
+ *
+ * Notes:
+ * Sets the QOS mask, per port.  These QOS_MASK bits are logically and physically the same QOS_MASK
+ * bits in PKO_MEM_PORT_PTRS.  This CSR address allows the QOS_MASK bits to be written during PKO
+ * operation without affecting any other port state.  The engine to which port PID is mapped is engine
+ * EID.  Note that the port to engine mapping must be the same as was previously programmed via the
+ * PKO_MEM_PORT_PTRS CSR.
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_port_qos {
+	u64 u64;
+	struct cvmx_pko_mem_port_qos_s {
+		u64 reserved_61_63 : 3;
+		u64 qos_mask : 8;
+		u64 reserved_10_52 : 43;
+		u64 eid : 4;
+		u64 pid : 6;
+	} s;
+	struct cvmx_pko_mem_port_qos_s cn52xx;
+	struct cvmx_pko_mem_port_qos_s cn52xxp1;
+	struct cvmx_pko_mem_port_qos_s cn56xx;
+	struct cvmx_pko_mem_port_qos_s cn56xxp1;
+	struct cvmx_pko_mem_port_qos_s cn61xx;
+	struct cvmx_pko_mem_port_qos_s cn63xx;
+	struct cvmx_pko_mem_port_qos_s cn63xxp1;
+	struct cvmx_pko_mem_port_qos_s cn66xx;
+	struct cvmx_pko_mem_port_qos_s cn70xx;
+	struct cvmx_pko_mem_port_qos_s cn70xxp1;
+	struct cvmx_pko_mem_port_qos_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_port_qos cvmx_pko_mem_port_qos_t;
+
+/**
+ * cvmx_pko_mem_port_rate0
+ *
+ * Notes:
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_port_rate0 {
+	u64 u64;
+	struct cvmx_pko_mem_port_rate0_s {
+		u64 reserved_51_63 : 13;
+		u64 rate_word : 19;
+		u64 rate_pkt : 24;
+		u64 reserved_7_7 : 1;
+		u64 pid : 7;
+	} s;
+	struct cvmx_pko_mem_port_rate0_cn52xx {
+		u64 reserved_51_63 : 13;
+		u64 rate_word : 19;
+		u64 rate_pkt : 24;
+		u64 reserved_6_7 : 2;
+		u64 pid : 6;
+	} cn52xx;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn56xx;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn61xx;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn63xx;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn66xx;
+	struct cvmx_pko_mem_port_rate0_s cn68xx;
+	struct cvmx_pko_mem_port_rate0_s cn68xxp1;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn70xx;
+	struct cvmx_pko_mem_port_rate0_cn52xx cn70xxp1;
+	struct cvmx_pko_mem_port_rate0_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_port_rate0 cvmx_pko_mem_port_rate0_t;
+
+/**
+ * cvmx_pko_mem_port_rate1
+ *
+ * Notes:
+ * Writing PKO_MEM_PORT_RATE1[PID,RATE_LIM] has the side effect of setting the corresponding
+ * accumulator to zero.
+ * This CSR is a memory of 44 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_port_rate1 {
+	u64 u64;
+	struct cvmx_pko_mem_port_rate1_s {
+		u64 reserved_32_63 : 32;
+		u64 rate_lim : 24;
+		u64 reserved_7_7 : 1;
+		u64 pid : 7;
+	} s;
+	struct cvmx_pko_mem_port_rate1_cn52xx {
+		u64 reserved_32_63 : 32;
+		u64 rate_lim : 24;
+		u64 reserved_6_7 : 2;
+		u64 pid : 6;
+	} cn52xx;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn52xxp1;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn56xx;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn56xxp1;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn61xx;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn63xx;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn63xxp1;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn66xx;
+	struct cvmx_pko_mem_port_rate1_s cn68xx;
+	struct cvmx_pko_mem_port_rate1_s cn68xxp1;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn70xx;
+	struct cvmx_pko_mem_port_rate1_cn52xx cn70xxp1;
+	struct cvmx_pko_mem_port_rate1_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_mem_port_rate1 cvmx_pko_mem_port_rate1_t;
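+
+/*
+ * Illustrative sketch (not part of the generated definitions): reprogramming
+ * a port's RATE_LIM, which per the note above also zeroes the corresponding
+ * accumulator. csr_wr() and CVMX_PKO_MEM_PORT_RATE1 are the accessor/address
+ * macro used elsewhere in this port:
+ *
+ *	cvmx_pko_mem_port_rate1_t rate;
+ *
+ *	rate.u64 = 0;
+ *	rate.s.pid = pid;
+ *	rate.s.rate_lim = limit;	// write also clears the accumulator
+ *	csr_wr(CVMX_PKO_MEM_PORT_RATE1, rate.u64);
+ */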
+
+/**
+ * cvmx_pko_mem_queue_ptrs
+ *
+ * Notes:
+ * Sets the queue to port mapping and the initial command buffer pointer, per queue
+ * Each queue may map to at most one port.  No more than 16 queues may map to a port.  The set of
+ * queues that is mapped to a port must be a contiguous array of queues.  The port to which queue QID
+ * is mapped is port PID.  The index of queue QID in port PID's queue list is IDX.  The last queue in
+ * port PID's queue array must have its TAIL bit set.  Unused queues must be mapped to port 63.
+ * STATIC_Q marks queue QID as having static priority.  STATIC_P marks the port PID to which QID is
+ * mapped as having at least one queue with static priority.  If any QID that maps to PID has static
+ * priority, then all QID that map to PID must have STATIC_P set.  Queues marked as static priority
+ * must be contiguous and begin at IDX 0.  The last queue that is marked as having static priority
+ * must have its S_TAIL bit set.
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_queue_ptrs {
+	u64 u64;
+	struct cvmx_pko_mem_queue_ptrs_s {
+		u64 s_tail : 1;
+		u64 static_p : 1;
+		u64 static_q : 1;
+		u64 qos_mask : 8;
+		u64 buf_ptr : 36;
+		u64 tail : 1;
+		u64 index : 3;
+		u64 port : 6;
+		u64 queue : 7;
+	} s;
+	struct cvmx_pko_mem_queue_ptrs_s cn30xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn31xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn38xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn38xxp2;
+	struct cvmx_pko_mem_queue_ptrs_s cn50xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn52xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn52xxp1;
+	struct cvmx_pko_mem_queue_ptrs_s cn56xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn56xxp1;
+	struct cvmx_pko_mem_queue_ptrs_s cn58xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn58xxp1;
+	struct cvmx_pko_mem_queue_ptrs_s cn61xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn63xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn63xxp1;
+	struct cvmx_pko_mem_queue_ptrs_s cn66xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn70xx;
+	struct cvmx_pko_mem_queue_ptrs_s cn70xxp1;
+	struct cvmx_pko_mem_queue_ptrs_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_queue_ptrs cvmx_pko_mem_queue_ptrs_t;
+
+/**
+ * cvmx_pko_mem_queue_qos
+ *
+ * Notes:
+ * Sets the QOS mask, per queue.  These QOS_MASK bits are logically and physically the same QOS_MASK
+ * bits in PKO_MEM_QUEUE_PTRS.  This CSR address allows the QOS_MASK bits to be written during PKO
+ * operation without affecting any other queue state.  The port to which queue QID is mapped is port
+ * PID.  Note that the queue to port mapping must be the same as was previously programmed via the
+ * PKO_MEM_QUEUE_PTRS CSR.
+ * This CSR is a memory of 256 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  A read of any entry that has not been
+ * previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_queue_qos {
+	u64 u64;
+	struct cvmx_pko_mem_queue_qos_s {
+		u64 reserved_61_63 : 3;
+		u64 qos_mask : 8;
+		u64 reserved_13_52 : 40;
+		u64 pid : 6;
+		u64 qid : 7;
+	} s;
+	struct cvmx_pko_mem_queue_qos_s cn30xx;
+	struct cvmx_pko_mem_queue_qos_s cn31xx;
+	struct cvmx_pko_mem_queue_qos_s cn38xx;
+	struct cvmx_pko_mem_queue_qos_s cn38xxp2;
+	struct cvmx_pko_mem_queue_qos_s cn50xx;
+	struct cvmx_pko_mem_queue_qos_s cn52xx;
+	struct cvmx_pko_mem_queue_qos_s cn52xxp1;
+	struct cvmx_pko_mem_queue_qos_s cn56xx;
+	struct cvmx_pko_mem_queue_qos_s cn56xxp1;
+	struct cvmx_pko_mem_queue_qos_s cn58xx;
+	struct cvmx_pko_mem_queue_qos_s cn58xxp1;
+	struct cvmx_pko_mem_queue_qos_s cn61xx;
+	struct cvmx_pko_mem_queue_qos_s cn63xx;
+	struct cvmx_pko_mem_queue_qos_s cn63xxp1;
+	struct cvmx_pko_mem_queue_qos_s cn66xx;
+	struct cvmx_pko_mem_queue_qos_s cn70xx;
+	struct cvmx_pko_mem_queue_qos_s cn70xxp1;
+	struct cvmx_pko_mem_queue_qos_s cnf71xx;
+};
+
+typedef union cvmx_pko_mem_queue_qos cvmx_pko_mem_queue_qos_t;
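+
+/*
+ * Illustrative sketch: updating a queue's QOS_MASK during PKO operation via
+ * PKO_MEM_QUEUE_QOS.  As the notes above require, QID/PID must match the
+ * mapping previously programmed through PKO_MEM_QUEUE_PTRS.  csr_wr() and
+ * the address macro are assumed from elsewhere in this port.
+ */
+static inline void cvmx_pko_set_queue_qos(unsigned int qid, unsigned int pid,
+					  unsigned int qos_mask)
+{
+	cvmx_pko_mem_queue_qos_t qos;
+
+	qos.u64 = 0;
+	qos.s.qid = qid;		/* queue to update */
+	qos.s.pid = pid;		/* port QID is already mapped to */
+	qos.s.qos_mask = qos_mask;	/* new 8-bit QOS mask */
+	csr_wr(CVMX_PKO_MEM_QUEUE_QOS, qos.u64);
+}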
+
+/**
+ * cvmx_pko_mem_throttle_int
+ *
+ * Notes:
+ * Writing PACKET and WORD with 0 resets both counts for INT to 0 rather than add 0.
+ * Otherwise, writes to this CSR add to the existing WORD/PACKET counts for the interface INT.
+ *
+ * PKO tracks the number of (8-byte) WORDs and PACKETs in flight (sum total in both PKO
+ * and the interface MAC) on the interface. (When PKO first selects a packet from a PKO queue, it
+ * increments the counts appropriately. When the interface MAC has (largely) completed sending
+ * the words/packet, PKO decrements the count appropriately.) When PKO_REG_FLAGS[ENA_THROTTLE]
+ * is set and the most-significant bit of the WORD or PACKET count for an interface is set,
+ * PKO will not transfer any packets over the interface. Software can limit the amount of
+ * packet data and/or the number of packets that OCTEON can send out of the chip after receiving
+ * backpressure from the interface via these per-interface throttle counts when PKO_REG_FLAGS[ENA_THROTTLE]=1.
+ * For example, to limit the number of packets outstanding in the interface to N, preset PACKET for
+ * the interface to the value 0x20-N (0x20 is the smallest PACKET value with the most-significant bit set).
+ *
+ * This CSR is a memory of 32 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is an INTERFACE.  A read of any
+ * entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_throttle_int {
+	u64 u64;
+	struct cvmx_pko_mem_throttle_int_s {
+		u64 reserved_47_63 : 17;
+		u64 word : 15;
+		u64 reserved_14_31 : 18;
+		u64 packet : 6;
+		u64 reserved_5_7 : 3;
+		u64 intr : 5;
+	} s;
+	struct cvmx_pko_mem_throttle_int_s cn68xx;
+	struct cvmx_pko_mem_throttle_int_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_throttle_int cvmx_pko_mem_throttle_int_t;
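+
+/*
+ * Illustrative sketch: clearing the in-flight WORD/PACKET counts for one
+ * interface.  Per the notes above, a write with PACKET = WORD = 0 resets
+ * both counts instead of adding zero.  csr_wr() and the address macro are
+ * assumed from elsewhere in this port.
+ */
+static inline void cvmx_pko_throttle_int_reset(unsigned int interface)
+{
+	cvmx_pko_mem_throttle_int_t thr;
+
+	thr.u64 = 0;			/* PACKET = WORD = 0: reset, not add */
+	thr.s.intr = interface;		/* INT selects the interface */
+	csr_wr(CVMX_PKO_MEM_THROTTLE_INT, thr.u64);
+}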
+
+/**
+ * cvmx_pko_mem_throttle_pipe
+ *
+ * Notes:
+ * Writing PACKET and WORD with 0 resets both counts for PIPE to 0 rather than add 0.
+ * Otherwise, writes to this CSR add to the existing WORD/PACKET counts for the PKO pipe PIPE.
+ *
+ * PKO tracks the number of (8-byte) WORDs and PACKETs in flight (sum total in both PKO
+ * and the interface MAC) on the pipe. (When PKO first selects a packet from a PKO queue, it
+ * increments the counts appropriately. When the interface MAC has (largely) completed sending
+ * the words/packet, PKO decrements the count appropriately.) When PKO_REG_FLAGS[ENA_THROTTLE]
+ * is set and the most-significant bit of the WORD or PACKET count for a PKO pipe is set,
+ * PKO will not transfer any packets over the PKO pipe. Software can limit the amount of
+ * packet data and/or the number of packets that OCTEON can send out of the chip after receiving
+ * backpressure from the interface/pipe via these per-pipe throttle counts when PKO_REG_FLAGS[ENA_THROTTLE]=1.
+ * For example, to limit the number of packets outstanding in the pipe to N, preset PACKET for
+ * the pipe to the value 0x20-N (0x20 is the smallest PACKET value with the most-significant bit set).
+ *
+ * This CSR is a memory of 128 entries, and thus, the PKO_REG_READ_IDX CSR must be written before any
+ * CSR read operations to this address can be performed.  The index to this CSR is a PIPE.  A read of any
+ * entry that has not been previously written is illegal and will result in unpredictable CSR read data.
+ */
+union cvmx_pko_mem_throttle_pipe {
+	u64 u64;
+	struct cvmx_pko_mem_throttle_pipe_s {
+		u64 reserved_47_63 : 17;
+		u64 word : 15;
+		u64 reserved_14_31 : 18;
+		u64 packet : 6;
+		u64 reserved_7_7 : 1;
+		u64 pipe : 7;
+	} s;
+	struct cvmx_pko_mem_throttle_pipe_s cn68xx;
+	struct cvmx_pko_mem_throttle_pipe_s cn68xxp1;
+};
+
+typedef union cvmx_pko_mem_throttle_pipe cvmx_pko_mem_throttle_pipe_t;
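+
+/*
+ * Illustrative sketch of the example in the notes above: limit pipe PIPE to
+ * at most n outstanding packets (n < 0x20) by first clearing the counts and
+ * then presetting PACKET to 0x20 - n; the second write has a nonzero field,
+ * so it adds to the just-cleared count.  Requires
+ * PKO_REG_FLAGS[ENA_THROTTLE] = 1.  csr_wr() and the address macro are
+ * assumed from elsewhere in this port.
+ */
+static inline void cvmx_pko_throttle_pipe_limit(unsigned int pipe, unsigned int n)
+{
+	cvmx_pko_mem_throttle_pipe_t thr;
+
+	thr.u64 = 0;
+	thr.s.pipe = pipe;
+	csr_wr(CVMX_PKO_MEM_THROTTLE_PIPE, thr.u64);	/* reset counts to 0 */
+	thr.s.packet = 0x20 - n;			/* preset packet count */
+	csr_wr(CVMX_PKO_MEM_THROTTLE_PIPE, thr.u64);	/* adds to zeroed count */
+}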
+
+/**
+ * cvmx_pko_ncb_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_ncb_bist_status {
+	u64 u64;
+	struct cvmx_pko_ncb_bist_status_s {
+		u64 ncbi_l2_out_ram_bist_status : 1;
+		u64 ncbi_pp_out_ram_bist_status : 1;
+		u64 ncbo_pdm_cmd_dat_ram_bist_status : 1;
+		u64 ncbi_l2_pdm_pref_ram_bist_status : 1;
+		u64 ncbo_pp_fif_ram_bist_status : 1;
+		u64 ncbo_skid_fif_ram_bist_status : 1;
+		u64 reserved_0_57 : 58;
+	} s;
+	struct cvmx_pko_ncb_bist_status_s cn73xx;
+	struct cvmx_pko_ncb_bist_status_s cn78xx;
+	struct cvmx_pko_ncb_bist_status_s cn78xxp1;
+	struct cvmx_pko_ncb_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_bist_status cvmx_pko_ncb_bist_status_t;
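+
+/*
+ * Illustrative sketch: with the convention above (per bit, 0 = pass and
+ * 1 = fail), BIST passed if and only if every defined status bit is clear.
+ * csr_rd() and the address macro are assumed from elsewhere in this port.
+ */
+static inline int cvmx_pko_ncb_bist_failed(void)
+{
+	cvmx_pko_ncb_bist_status_t bist;
+
+	bist.u64 = csr_rd(CVMX_PKO_NCB_BIST_STATUS);
+	bist.s.reserved_0_57 = 0;	/* ignore the reserved bits */
+	return bist.u64 != 0;		/* nonzero: at least one RAM failed */
+}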
+
+/**
+ * cvmx_pko_ncb_ecc_ctl0
+ */
+union cvmx_pko_ncb_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_ncb_ecc_ctl0_s {
+		u64 ncbi_l2_out_ram_flip : 2;
+		u64 ncbi_l2_out_ram_cdis : 1;
+		u64 ncbi_pp_out_ram_flip : 2;
+		u64 ncbi_pp_out_ram_cdis : 1;
+		u64 ncbo_pdm_cmd_dat_ram_flip : 2;
+		u64 ncbo_pdm_cmd_dat_ram_cdis : 1;
+		u64 ncbi_l2_pdm_pref_ram_flip : 2;
+		u64 ncbi_l2_pdm_pref_ram_cdis : 1;
+		u64 ncbo_pp_fif_ram_flip : 2;
+		u64 ncbo_pp_fif_ram_cdis : 1;
+		u64 ncbo_skid_fif_ram_flip : 2;
+		u64 ncbo_skid_fif_ram_cdis : 1;
+		u64 reserved_0_45 : 46;
+	} s;
+	struct cvmx_pko_ncb_ecc_ctl0_s cn73xx;
+	struct cvmx_pko_ncb_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_ncb_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_ncb_ecc_ctl0_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_ecc_ctl0 cvmx_pko_ncb_ecc_ctl0_t;
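+
+/*
+ * Hedged sketch of ECC error injection through the _FLIP/_CDIS fields above.
+ * The usual Octeon convention (assumed here, not stated in this file) is
+ * that each 2-bit FLIP field flips check bits on writes, with 0x1 or 0x2
+ * injecting a correctable single-bit error and 0x3 an uncorrectable
+ * double-bit error, while CDIS disables correction.  csr_rd()/csr_wr() and
+ * the address macro are assumed from elsewhere in this port.
+ */
+static inline void cvmx_pko_ncb_inject_l2_out_sbe(void)
+{
+	cvmx_pko_ncb_ecc_ctl0_t ctl;
+
+	ctl.u64 = csr_rd(CVMX_PKO_NCB_ECC_CTL0);
+	ctl.s.ncbi_l2_out_ram_flip = 0x1;	/* flip one check bit */
+	csr_wr(CVMX_PKO_NCB_ECC_CTL0, ctl.u64);
+}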
+
+/**
+ * cvmx_pko_ncb_ecc_dbe_sts0
+ */
+union cvmx_pko_ncb_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_ncb_ecc_dbe_sts0_s {
+		u64 ncbi_l2_out_ram_dbe : 1;
+		u64 ncbi_pp_out_ram_dbe : 1;
+		u64 ncbo_pdm_cmd_dat_ram_dbe : 1;
+		u64 ncbi_l2_pdm_pref_ram_dbe : 1;
+		u64 ncbo_pp_fif_ram_dbe : 1;
+		u64 ncbo_skid_fif_ram_dbe : 1;
+		u64 reserved_0_57 : 58;
+	} s;
+	struct cvmx_pko_ncb_ecc_dbe_sts0_s cn73xx;
+	struct cvmx_pko_ncb_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_ncb_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_ncb_ecc_dbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_ecc_dbe_sts0 cvmx_pko_ncb_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_ncb_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_ncb_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_ncb_ecc_dbe_sts_cmb0_s {
+		u64 ncb_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_ncb_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_ncb_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_ncb_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_ncb_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_ecc_dbe_sts_cmb0 cvmx_pko_ncb_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_ncb_ecc_sbe_sts0
+ */
+union cvmx_pko_ncb_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_ncb_ecc_sbe_sts0_s {
+		u64 ncbi_l2_out_ram_sbe : 1;
+		u64 ncbi_pp_out_ram_sbe : 1;
+		u64 ncbo_pdm_cmd_dat_ram_sbe : 1;
+		u64 ncbi_l2_pdm_pref_ram_sbe : 1;
+		u64 ncbo_pp_fif_ram_sbe : 1;
+		u64 ncbo_skid_fif_ram_sbe : 1;
+		u64 reserved_0_57 : 58;
+	} s;
+	struct cvmx_pko_ncb_ecc_sbe_sts0_s cn73xx;
+	struct cvmx_pko_ncb_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_ncb_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_ncb_ecc_sbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_ecc_sbe_sts0 cvmx_pko_ncb_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_ncb_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_ncb_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_ncb_ecc_sbe_sts_cmb0_s {
+		u64 ncb_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_ncb_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_ncb_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_ncb_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_ncb_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_ecc_sbe_sts_cmb0 cvmx_pko_ncb_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_ncb_int
+ */
+union cvmx_pko_ncb_int {
+	u64 u64;
+	struct cvmx_pko_ncb_int_s {
+		u64 reserved_2_63 : 62;
+		u64 tso_segment_cnt : 1;
+		u64 ncb_tx_error : 1;
+	} s;
+	struct cvmx_pko_ncb_int_s cn73xx;
+	struct cvmx_pko_ncb_int_s cn78xx;
+	struct cvmx_pko_ncb_int_s cn78xxp1;
+	struct cvmx_pko_ncb_int_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_int cvmx_pko_ncb_int_t;
+
+/**
+ * cvmx_pko_ncb_tx_err_info
+ */
+union cvmx_pko_ncb_tx_err_info {
+	u64 u64;
+	struct cvmx_pko_ncb_tx_err_info_s {
+		u64 reserved_32_63 : 32;
+		u64 wcnt : 5;
+		u64 src : 12;
+		u64 dst : 8;
+		u64 tag : 4;
+		u64 eot : 1;
+		u64 sot : 1;
+		u64 valid : 1;
+	} s;
+	struct cvmx_pko_ncb_tx_err_info_s cn73xx;
+	struct cvmx_pko_ncb_tx_err_info_s cn78xx;
+	struct cvmx_pko_ncb_tx_err_info_s cn78xxp1;
+	struct cvmx_pko_ncb_tx_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_tx_err_info cvmx_pko_ncb_tx_err_info_t;
+
+/**
+ * cvmx_pko_ncb_tx_err_word
+ */
+union cvmx_pko_ncb_tx_err_word {
+	u64 u64;
+	struct cvmx_pko_ncb_tx_err_word_s {
+		u64 err_word : 64;
+	} s;
+	struct cvmx_pko_ncb_tx_err_word_s cn73xx;
+	struct cvmx_pko_ncb_tx_err_word_s cn78xx;
+	struct cvmx_pko_ncb_tx_err_word_s cn78xxp1;
+	struct cvmx_pko_ncb_tx_err_word_s cnf75xx;
+};
+
+typedef union cvmx_pko_ncb_tx_err_word cvmx_pko_ncb_tx_err_word_t;
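+
+/*
+ * Illustrative sketch: decoding a PKO_NCB_INT[NCB_TX_ERROR] event from the
+ * two capture registers above.  Gating the snapshot on the VALID bit is an
+ * assumption drawn from the field layout; csr_rd() and the address macros
+ * are assumed from elsewhere in this port.
+ */
+static inline void cvmx_pko_ncb_show_tx_err(void)
+{
+	cvmx_pko_ncb_tx_err_info_t info;
+	cvmx_pko_ncb_tx_err_word_t word;
+
+	info.u64 = csr_rd(CVMX_PKO_NCB_TX_ERR_INFO);
+	if (!info.s.valid)
+		return;		/* no error captured */
+	word.u64 = csr_rd(CVMX_PKO_NCB_TX_ERR_WORD);
+	printf("PKO NCB TX error: src %u dst %u tag %u wcnt %u word 0x%llx\n",
+	       (unsigned int)info.s.src, (unsigned int)info.s.dst,
+	       (unsigned int)info.s.tag, (unsigned int)info.s.wcnt,
+	       (unsigned long long)word.u64);
+}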
+
+/**
+ * cvmx_pko_pdm_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pdm_bist_status {
+	u64 u64;
+	struct cvmx_pko_pdm_bist_status_s {
+		u64 flshb_cache_lo_ram_bist_status : 1;
+		u64 flshb_cache_hi_ram_bist_status : 1;
+		u64 isrm_ca_iinst_ram_bist_status : 1;
+		u64 isrm_ca_cm_ram_bist_status : 1;
+		u64 isrm_st_ram2_bist_status : 1;
+		u64 isrm_st_ram1_bist_status : 1;
+		u64 isrm_st_ram0_bist_status : 1;
+		u64 isrd_st_ram3_bist_status : 1;
+		u64 isrd_st_ram2_bist_status : 1;
+		u64 isrd_st_ram1_bist_status : 1;
+		u64 isrd_st_ram0_bist_status : 1;
+		u64 drp_hi_ram_bist_status : 1;
+		u64 drp_lo_ram_bist_status : 1;
+		u64 dwp_hi_ram_bist_status : 1;
+		u64 dwp_lo_ram_bist_status : 1;
+		u64 mwp_hi_ram_bist_status : 1;
+		u64 mwp_lo_ram_bist_status : 1;
+		u64 fillb_m_rsp_ram_hi_bist_status : 1;
+		u64 fillb_m_rsp_ram_lo_bist_status : 1;
+		u64 fillb_d_rsp_ram_hi_bist_status : 1;
+		u64 fillb_d_rsp_ram_lo_bist_status : 1;
+		u64 fillb_d_rsp_dat_fifo_bist_status : 1;
+		u64 fillb_m_rsp_dat_fifo_bist_status : 1;
+		u64 flshb_m_dat_ram_bist_status : 1;
+		u64 flshb_d_dat_ram_bist_status : 1;
+		u64 minpad_ram_bist_status : 1;
+		u64 mwp_hi_spt_ram_bist_status : 1;
+		u64 mwp_lo_spt_ram_bist_status : 1;
+		u64 buf_wm_ram_bist_status : 1;
+		u64 reserved_0_34 : 35;
+	} s;
+	struct cvmx_pko_pdm_bist_status_s cn73xx;
+	struct cvmx_pko_pdm_bist_status_s cn78xx;
+	struct cvmx_pko_pdm_bist_status_s cn78xxp1;
+	struct cvmx_pko_pdm_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_bist_status cvmx_pko_pdm_bist_status_t;
+
+/**
+ * cvmx_pko_pdm_cfg
+ */
+union cvmx_pko_pdm_cfg {
+	u64 u64;
+	struct cvmx_pko_pdm_cfg_s {
+		u64 reserved_13_63 : 51;
+		u64 dis_lpd_w2r_fill : 1;
+		u64 en_fr_w2r_ptr_swp : 1;
+		u64 dis_flsh_cache : 1;
+		u64 pko_pad_minlen : 7;
+		u64 diag_mode : 1;
+		u64 alloc_lds : 1;
+		u64 alloc_sts : 1;
+	} s;
+	struct cvmx_pko_pdm_cfg_s cn73xx;
+	struct cvmx_pko_pdm_cfg_s cn78xx;
+	struct cvmx_pko_pdm_cfg_s cn78xxp1;
+	struct cvmx_pko_pdm_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_cfg cvmx_pko_pdm_cfg_t;
+
+/**
+ * cvmx_pko_pdm_cfg_dbg
+ */
+union cvmx_pko_pdm_cfg_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_cfg_dbg_s {
+		u64 reserved_32_63 : 32;
+		u64 cp_stall_thrshld : 32;
+	} s;
+	struct cvmx_pko_pdm_cfg_dbg_s cn73xx;
+	struct cvmx_pko_pdm_cfg_dbg_s cn78xx;
+	struct cvmx_pko_pdm_cfg_dbg_s cn78xxp1;
+	struct cvmx_pko_pdm_cfg_dbg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_cfg_dbg cvmx_pko_pdm_cfg_dbg_t;
+
+/**
+ * cvmx_pko_pdm_cp_dbg
+ */
+union cvmx_pko_pdm_cp_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_cp_dbg_s {
+		u64 reserved_16_63 : 48;
+		u64 stateless_fif_cnt : 6;
+		u64 reserved_5_9 : 5;
+		u64 op_fif_not_full : 5;
+	} s;
+	struct cvmx_pko_pdm_cp_dbg_s cn73xx;
+	struct cvmx_pko_pdm_cp_dbg_s cn78xx;
+	struct cvmx_pko_pdm_cp_dbg_s cn78xxp1;
+	struct cvmx_pko_pdm_cp_dbg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_cp_dbg cvmx_pko_pdm_cp_dbg_t;
+
+/**
+ * cvmx_pko_pdm_dq#_minpad
+ */
+union cvmx_pko_pdm_dqx_minpad {
+	u64 u64;
+	struct cvmx_pko_pdm_dqx_minpad_s {
+		u64 reserved_1_63 : 63;
+		u64 minpad : 1;
+	} s;
+	struct cvmx_pko_pdm_dqx_minpad_s cn73xx;
+	struct cvmx_pko_pdm_dqx_minpad_s cn78xx;
+	struct cvmx_pko_pdm_dqx_minpad_s cn78xxp1;
+	struct cvmx_pko_pdm_dqx_minpad_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_dqx_minpad cvmx_pko_pdm_dqx_minpad_t;
+
+/**
+ * cvmx_pko_pdm_drpbuf_dbg
+ */
+union cvmx_pko_pdm_drpbuf_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_drpbuf_dbg_s {
+		u64 reserved_43_63 : 21;
+		u64 sel_nxt_ptr : 1;
+		u64 load_val : 1;
+		u64 rdy : 1;
+		u64 cur_state : 3;
+		u64 reserved_33_36 : 4;
+		u64 track_rd_cnt : 6;
+		u64 track_wr_cnt : 6;
+		u64 reserved_17_20 : 4;
+		u64 mem_addr : 13;
+		u64 mem_en : 4;
+	} s;
+	struct cvmx_pko_pdm_drpbuf_dbg_s cn73xx;
+	struct cvmx_pko_pdm_drpbuf_dbg_s cn78xx;
+	struct cvmx_pko_pdm_drpbuf_dbg_s cn78xxp1;
+	struct cvmx_pko_pdm_drpbuf_dbg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_drpbuf_dbg cvmx_pko_pdm_drpbuf_dbg_t;
+
+/**
+ * cvmx_pko_pdm_dwpbuf_dbg
+ */
+union cvmx_pko_pdm_dwpbuf_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_dwpbuf_dbg_s {
+		u64 reserved_48_63 : 16;
+		u64 cmd_proc : 1;
+		u64 reserved_46_46 : 1;
+		u64 mem_data_val : 1;
+		u64 insert_np : 1;
+		u64 reserved_43_43 : 1;
+		u64 sel_nxt_ptr : 1;
+		u64 load_val : 1;
+		u64 rdy : 1;
+		u64 cur_state : 3;
+		u64 mem_rdy : 1;
+		u64 reserved_33_35 : 3;
+		u64 track_rd_cnt : 6;
+		u64 track_wr_cnt : 6;
+		u64 reserved_19_20 : 2;
+		u64 insert_dp : 2;
+		u64 mem_addr : 13;
+		u64 mem_en : 4;
+	} s;
+	struct cvmx_pko_pdm_dwpbuf_dbg_cn73xx {
+		u64 reserved_48_63 : 16;
+		u64 cmd_proc : 1;
+		u64 reserved_46_46 : 1;
+		u64 mem_data_val : 1;
+		u64 insert_np : 1;
+		u64 reserved_43_43 : 1;
+		u64 sel_nxt_ptr : 1;
+		u64 load_val : 1;
+		u64 rdy : 1;
+		u64 reserved_37_39 : 3;
+		u64 mem_rdy : 1;
+		u64 reserved_19_35 : 17;
+		u64 insert_dp : 2;
+		u64 reserved_15_16 : 2;
+		u64 mem_addr : 11;
+		u64 mem_en : 4;
+	} cn73xx;
+	struct cvmx_pko_pdm_dwpbuf_dbg_s cn78xx;
+	struct cvmx_pko_pdm_dwpbuf_dbg_s cn78xxp1;
+	struct cvmx_pko_pdm_dwpbuf_dbg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_dwpbuf_dbg cvmx_pko_pdm_dwpbuf_dbg_t;
+
+/**
+ * cvmx_pko_pdm_ecc_ctl0
+ */
+union cvmx_pko_pdm_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_ctl0_s {
+		u64 flshb_cache_lo_ram_flip : 2;
+		u64 flshb_cache_lo_ram_cdis : 1;
+		u64 flshb_cache_hi_ram_flip : 2;
+		u64 flshb_cache_hi_ram_cdis : 1;
+		u64 isrm_ca_iinst_ram_flip : 2;
+		u64 isrm_ca_iinst_ram_cdis : 1;
+		u64 isrm_ca_cm_ram_flip : 2;
+		u64 isrm_ca_cm_ram_cdis : 1;
+		u64 isrm_st_ram2_flip : 2;
+		u64 isrm_st_ram2_cdis : 1;
+		u64 isrm_st_ram1_flip : 2;
+		u64 isrm_st_ram1_cdis : 1;
+		u64 isrm_st_ram0_flip : 2;
+		u64 isrm_st_ram0_cdis : 1;
+		u64 isrd_st_ram3_flip : 2;
+		u64 isrd_st_ram3_cdis : 1;
+		u64 isrd_st_ram2_flip : 2;
+		u64 isrd_st_ram2_cdis : 1;
+		u64 isrd_st_ram1_flip : 2;
+		u64 isrd_st_ram1_cdis : 1;
+		u64 isrd_st_ram0_flip : 2;
+		u64 isrd_st_ram0_cdis : 1;
+		u64 drp_hi_ram_flip : 2;
+		u64 drp_hi_ram_cdis : 1;
+		u64 drp_lo_ram_flip : 2;
+		u64 drp_lo_ram_cdis : 1;
+		u64 dwp_hi_ram_flip : 2;
+		u64 dwp_hi_ram_cdis : 1;
+		u64 dwp_lo_ram_flip : 2;
+		u64 dwp_lo_ram_cdis : 1;
+		u64 mwp_hi_ram_flip : 2;
+		u64 mwp_hi_ram_cdis : 1;
+		u64 mwp_lo_ram_flip : 2;
+		u64 mwp_lo_ram_cdis : 1;
+		u64 fillb_m_rsp_ram_hi_flip : 2;
+		u64 fillb_m_rsp_ram_hi_cdis : 1;
+		u64 fillb_m_rsp_ram_lo_flip : 2;
+		u64 fillb_m_rsp_ram_lo_cdis : 1;
+		u64 fillb_d_rsp_ram_hi_flip : 2;
+		u64 fillb_d_rsp_ram_hi_cdis : 1;
+		u64 fillb_d_rsp_ram_lo_flip : 2;
+		u64 fillb_d_rsp_ram_lo_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_pdm_ecc_ctl0_cn73xx {
+		u64 flshb_cache_lo_ram_flip : 2;
+		u64 flshb_cache_lo_ram_cdis : 1;
+		u64 flshb_cache_hi_ram_flip : 2;
+		u64 flshb_cache_hi_ram_cdis : 1;
+		u64 isrm_ca_iinst_ram_flip : 2;
+		u64 isrm_ca_iinst_ram_cdis : 1;
+		u64 isrm_ca_cm_ram_flip : 2;
+		u64 isrm_ca_cm_ram_cdis : 1;
+		u64 isrm_st_ram2_flip : 2;
+		u64 isrm_st_ram2_cdis : 1;
+		u64 isrm_st_ram1_flip : 2;
+		u64 isrm_st_ram1_cdis : 1;
+		u64 isrm_st_ram0_flip : 2;
+		u64 isrm_st_ram0_cdis : 1;
+		u64 isrd_st_ram3_flip : 2;
+		u64 isrd_st_ram3_cdis : 1;
+		u64 isrd_st_ram2_flip : 2;
+		u64 isrd_st_ram2_cdis : 1;
+		u64 isrd_st_ram1_flip : 2;
+		u64 isrd_st_ram1_cdis : 1;
+		u64 isrd_st_ram0_flip : 2;
+		u64 isrd_st_ram0_cdis : 1;
+		u64 drp_hi_ram_flip : 2;
+		u64 drp_hi_ram_cdis : 1;
+		u64 drp_lo_ram_flip : 2;
+		u64 drp_lo_ram_cdis : 1;
+		u64 dwp_hi_ram_flip : 2;
+		u64 dwp_hi_ram_cdis : 1;
+		u64 dwp_lo_ram_flip : 2;
+		u64 dwp_lo_ram_cdis : 1;
+		u64 reserved_13_18 : 6;
+		u64 fillb_m_rsp_ram_hi_flip : 2;
+		u64 fillb_m_rsp_ram_hi_cdis : 1;
+		u64 fillb_m_rsp_ram_lo_flip : 2;
+		u64 fillb_m_rsp_ram_lo_cdis : 1;
+		u64 fillb_d_rsp_ram_hi_flip : 2;
+		u64 fillb_d_rsp_ram_hi_cdis : 1;
+		u64 fillb_d_rsp_ram_lo_flip : 2;
+		u64 fillb_d_rsp_ram_lo_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_pdm_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pdm_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_ctl0 cvmx_pko_pdm_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pdm_ecc_ctl1
+ */
+union cvmx_pko_pdm_ecc_ctl1 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_ctl1_s {
+		u64 reserved_15_63 : 49;
+		u64 buf_wm_ram_flip : 2;
+		u64 buf_wm_ram_cdis : 1;
+		u64 mwp_mem0_ram_flip : 2;
+		u64 mwp_mem1_ram_flip : 2;
+		u64 mwp_mem2_ram_flip : 2;
+		u64 mwp_mem3_ram_flip : 2;
+		u64 mwp_ram_cdis : 1;
+		u64 minpad_ram_flip : 2;
+		u64 minpad_ram_cdis : 1;
+	} s;
+	struct cvmx_pko_pdm_ecc_ctl1_s cn73xx;
+	struct cvmx_pko_pdm_ecc_ctl1_s cn78xx;
+	struct cvmx_pko_pdm_ecc_ctl1_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_ctl1_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_ctl1 cvmx_pko_pdm_ecc_ctl1_t;
+
+/**
+ * cvmx_pko_pdm_ecc_dbe_sts0
+ */
+union cvmx_pko_pdm_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_dbe_sts0_s {
+		u64 flshb_cache_lo_ram_dbe : 1;
+		u64 flshb_cache_hi_ram_dbe : 1;
+		u64 isrm_ca_iinst_ram_dbe : 1;
+		u64 isrm_ca_cm_ram_dbe : 1;
+		u64 isrm_st_ram2_dbe : 1;
+		u64 isrm_st_ram1_dbe : 1;
+		u64 isrm_st_ram0_dbe : 1;
+		u64 isrd_st_ram3_dbe : 1;
+		u64 isrd_st_ram2_dbe : 1;
+		u64 isrd_st_ram1_dbe : 1;
+		u64 isrd_st_ram0_dbe : 1;
+		u64 drp_hi_ram_dbe : 1;
+		u64 drp_lo_ram_dbe : 1;
+		u64 dwp_hi_ram_dbe : 1;
+		u64 dwp_lo_ram_dbe : 1;
+		u64 mwp_hi_ram_dbe : 1;
+		u64 mwp_lo_ram_dbe : 1;
+		u64 fillb_m_rsp_ram_hi_dbe : 1;
+		u64 fillb_m_rsp_ram_lo_dbe : 1;
+		u64 fillb_d_rsp_ram_hi_dbe : 1;
+		u64 fillb_d_rsp_ram_lo_dbe : 1;
+		u64 minpad_ram_dbe : 1;
+		u64 mwp_hi_spt_ram_dbe : 1;
+		u64 mwp_lo_spt_ram_dbe : 1;
+		u64 buf_wm_ram_dbe : 1;
+		u64 reserved_0_38 : 39;
+	} s;
+	struct cvmx_pko_pdm_ecc_dbe_sts0_s cn73xx;
+	struct cvmx_pko_pdm_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pdm_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_dbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_dbe_sts0 cvmx_pko_pdm_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pdm_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pdm_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_dbe_sts_cmb0_s {
+		u64 pdm_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pdm_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pdm_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pdm_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_dbe_sts_cmb0 cvmx_pko_pdm_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pdm_ecc_sbe_sts0
+ */
+union cvmx_pko_pdm_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_sbe_sts0_s {
+		u64 flshb_cache_lo_ram_sbe : 1;
+		u64 flshb_cache_hi_ram_sbe : 1;
+		u64 isrm_ca_iinst_ram_sbe : 1;
+		u64 isrm_ca_cm_ram_sbe : 1;
+		u64 isrm_st_ram2_sbe : 1;
+		u64 isrm_st_ram1_sbe : 1;
+		u64 isrm_st_ram0_sbe : 1;
+		u64 isrd_st_ram3_sbe : 1;
+		u64 isrd_st_ram2_sbe : 1;
+		u64 isrd_st_ram1_sbe : 1;
+		u64 isrd_st_ram0_sbe : 1;
+		u64 drp_hi_ram_sbe : 1;
+		u64 drp_lo_ram_sbe : 1;
+		u64 dwp_hi_ram_sbe : 1;
+		u64 dwp_lo_ram_sbe : 1;
+		u64 mwp_hi_ram_sbe : 1;
+		u64 mwp_lo_ram_sbe : 1;
+		u64 fillb_m_rsp_ram_hi_sbe : 1;
+		u64 fillb_m_rsp_ram_lo_sbe : 1;
+		u64 fillb_d_rsp_ram_hi_sbe : 1;
+		u64 fillb_d_rsp_ram_lo_sbe : 1;
+		u64 minpad_ram_sbe : 1;
+		u64 mwp_hi_spt_ram_sbe : 1;
+		u64 mwp_lo_spt_ram_sbe : 1;
+		u64 buf_wm_ram_sbe : 1;
+		u64 reserved_0_38 : 39;
+	} s;
+	struct cvmx_pko_pdm_ecc_sbe_sts0_s cn73xx;
+	struct cvmx_pko_pdm_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pdm_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_sbe_sts0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_sbe_sts0 cvmx_pko_pdm_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pdm_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pdm_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pdm_ecc_sbe_sts_cmb0_s {
+		u64 pdm_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pdm_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pdm_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pdm_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pdm_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_ecc_sbe_sts_cmb0 cvmx_pko_pdm_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pdm_fillb_dbg0
+ */
+union cvmx_pko_pdm_fillb_dbg0 {
+	u64 u64;
+	struct cvmx_pko_pdm_fillb_dbg0_s {
+		u64 reserved_57_63 : 7;
+		u64 pd_seq : 5;
+		u64 resp_pd_seq : 5;
+		u64 d_rsp_lo_ram_addr_sel : 2;
+		u64 d_rsp_hi_ram_addr_sel : 2;
+		u64 d_rsp_rd_seq : 5;
+		u64 d_rsp_fifo_rd_seq : 5;
+		u64 d_fill_req_fifo_val : 1;
+		u64 d_rsp_ram_valid : 32;
+	} s;
+	struct cvmx_pko_pdm_fillb_dbg0_s cn73xx;
+	struct cvmx_pko_pdm_fillb_dbg0_s cn78xx;
+	struct cvmx_pko_pdm_fillb_dbg0_s cn78xxp1;
+	struct cvmx_pko_pdm_fillb_dbg0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_fillb_dbg0 cvmx_pko_pdm_fillb_dbg0_t;
+
+/**
+ * cvmx_pko_pdm_fillb_dbg1
+ */
+union cvmx_pko_pdm_fillb_dbg1 {
+	u64 u64;
+	struct cvmx_pko_pdm_fillb_dbg1_s {
+		u64 reserved_57_63 : 7;
+		u64 mp_seq : 5;
+		u64 resp_mp_seq : 5;
+		u64 m_rsp_lo_ram_addr_sel : 2;
+		u64 m_rsp_hi_ram_addr_sel : 2;
+		u64 m_rsp_rd_seq : 5;
+		u64 m_rsp_fifo_rd_seq : 5;
+		u64 m_fill_req_fifo_val : 1;
+		u64 m_rsp_ram_valid : 32;
+	} s;
+	struct cvmx_pko_pdm_fillb_dbg1_s cn73xx;
+	struct cvmx_pko_pdm_fillb_dbg1_s cn78xx;
+	struct cvmx_pko_pdm_fillb_dbg1_s cn78xxp1;
+	struct cvmx_pko_pdm_fillb_dbg1_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_fillb_dbg1 cvmx_pko_pdm_fillb_dbg1_t;
+
+/**
+ * cvmx_pko_pdm_fillb_dbg2
+ */
+union cvmx_pko_pdm_fillb_dbg2 {
+	u64 u64;
+	struct cvmx_pko_pdm_fillb_dbg2_s {
+		u64 reserved_9_63 : 55;
+		u64 fillb_sm : 5;
+		u64 reserved_3_3 : 1;
+		u64 iobp0_credit_cntr : 3;
+	} s;
+	struct cvmx_pko_pdm_fillb_dbg2_s cn73xx;
+	struct cvmx_pko_pdm_fillb_dbg2_s cn78xx;
+	struct cvmx_pko_pdm_fillb_dbg2_s cn78xxp1;
+	struct cvmx_pko_pdm_fillb_dbg2_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_fillb_dbg2 cvmx_pko_pdm_fillb_dbg2_t;
+
+/**
+ * cvmx_pko_pdm_flshb_dbg0
+ */
+union cvmx_pko_pdm_flshb_dbg0 {
+	u64 u64;
+	struct cvmx_pko_pdm_flshb_dbg0_s {
+		u64 reserved_44_63 : 20;
+		u64 flshb_sm : 7;
+		u64 flshb_ctl_sm : 9;
+		u64 cam_hptr : 5;
+		u64 cam_tptr : 5;
+		u64 expected_stdns : 6;
+		u64 d_flshb_eot_cntr : 3;
+		u64 m_flshb_eot_cntr : 3;
+		u64 ncbi_credit_cntr : 6;
+	} s;
+	struct cvmx_pko_pdm_flshb_dbg0_s cn73xx;
+	struct cvmx_pko_pdm_flshb_dbg0_s cn78xx;
+	struct cvmx_pko_pdm_flshb_dbg0_s cn78xxp1;
+	struct cvmx_pko_pdm_flshb_dbg0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_flshb_dbg0 cvmx_pko_pdm_flshb_dbg0_t;
+
+/**
+ * cvmx_pko_pdm_flshb_dbg1
+ */
+union cvmx_pko_pdm_flshb_dbg1 {
+	u64 u64;
+	struct cvmx_pko_pdm_flshb_dbg1_s {
+		u64 cam_stdn : 32;
+		u64 cam_valid : 32;
+	} s;
+	struct cvmx_pko_pdm_flshb_dbg1_s cn73xx;
+	struct cvmx_pko_pdm_flshb_dbg1_s cn78xx;
+	struct cvmx_pko_pdm_flshb_dbg1_s cn78xxp1;
+	struct cvmx_pko_pdm_flshb_dbg1_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_flshb_dbg1 cvmx_pko_pdm_flshb_dbg1_t;
+
+/**
+ * cvmx_pko_pdm_intf_dbg_rd
+ *
+ * For diagnostic use only.
+ *
+ */
+union cvmx_pko_pdm_intf_dbg_rd {
+	u64 u64;
+	struct cvmx_pko_pdm_intf_dbg_rd_s {
+		u64 reserved_48_63 : 16;
+		u64 in_flight : 8;
+		u64 pdm_req_cred_cnt : 8;
+		u64 pse_buf_waddr : 8;
+		u64 pse_buf_raddr : 8;
+		u64 resp_buf_waddr : 8;
+		u64 resp_buf_raddr : 8;
+	} s;
+	struct cvmx_pko_pdm_intf_dbg_rd_s cn73xx;
+	struct cvmx_pko_pdm_intf_dbg_rd_s cn78xx;
+	struct cvmx_pko_pdm_intf_dbg_rd_s cn78xxp1;
+	struct cvmx_pko_pdm_intf_dbg_rd_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_intf_dbg_rd cvmx_pko_pdm_intf_dbg_rd_t;
+
+/**
+ * cvmx_pko_pdm_isrd_dbg
+ */
+union cvmx_pko_pdm_isrd_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_isrd_dbg_s {
+		u64 isrd_vals_in : 4;
+		u64 reserved_59_59 : 1;
+		u64 req_hptr : 6;
+		u64 rdy_hptr : 6;
+		u64 reserved_44_46 : 3;
+		u64 in_arb_reqs : 8;
+		u64 in_arb_gnts : 7;
+		u64 cmt_arb_reqs : 7;
+		u64 cmt_arb_gnts : 7;
+		u64 in_use : 4;
+		u64 has_cred : 4;
+		u64 val_exec : 7;
+	} s;
+	struct cvmx_pko_pdm_isrd_dbg_s cn73xx;
+	struct cvmx_pko_pdm_isrd_dbg_s cn78xx;
+	struct cvmx_pko_pdm_isrd_dbg_cn78xxp1 {
+		u64 reserved_44_63 : 20;
+		u64 in_arb_reqs : 8;
+		u64 in_arb_gnts : 7;
+		u64 cmt_arb_reqs : 7;
+		u64 cmt_arb_gnts : 7;
+		u64 in_use : 4;
+		u64 has_cred : 4;
+		u64 val_exec : 7;
+	} cn78xxp1;
+	struct cvmx_pko_pdm_isrd_dbg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_isrd_dbg cvmx_pko_pdm_isrd_dbg_t;
+
+/**
+ * cvmx_pko_pdm_isrd_dbg_dq
+ */
+union cvmx_pko_pdm_isrd_dbg_dq {
+	u64 u64;
+	struct cvmx_pko_pdm_isrd_dbg_dq_s {
+		u64 reserved_46_63 : 18;
+		u64 pebrd_sic_dq : 10;
+		u64 reserved_34_35 : 2;
+		u64 pebfill_sic_dq : 10;
+		u64 reserved_22_23 : 2;
+		u64 fr_sic_dq : 10;
+		u64 reserved_10_11 : 2;
+		u64 cp_sic_dq : 10;
+	} s;
+	struct cvmx_pko_pdm_isrd_dbg_dq_s cn73xx;
+	struct cvmx_pko_pdm_isrd_dbg_dq_s cn78xx;
+	struct cvmx_pko_pdm_isrd_dbg_dq_s cn78xxp1;
+	struct cvmx_pko_pdm_isrd_dbg_dq_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_isrd_dbg_dq cvmx_pko_pdm_isrd_dbg_dq_t;
+
+/**
+ * cvmx_pko_pdm_isrm_dbg
+ */
+union cvmx_pko_pdm_isrm_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_isrm_dbg_s {
+		u64 val_in : 3;
+		u64 reserved_34_60 : 27;
+		u64 in_arb_reqs : 7;
+		u64 in_arb_gnts : 6;
+		u64 cmt_arb_reqs : 6;
+		u64 cmt_arb_gnts : 6;
+		u64 in_use : 3;
+		u64 has_cred : 3;
+		u64 val_exec : 3;
+	} s;
+	struct cvmx_pko_pdm_isrm_dbg_s cn73xx;
+	struct cvmx_pko_pdm_isrm_dbg_s cn78xx;
+	struct cvmx_pko_pdm_isrm_dbg_cn78xxp1 {
+		u64 reserved_34_63 : 30;
+		u64 in_arb_reqs : 7;
+		u64 in_arb_gnts : 6;
+		u64 cmt_arb_reqs : 6;
+		u64 cmt_arb_gnts : 6;
+		u64 in_use : 3;
+		u64 has_cred : 3;
+		u64 val_exec : 3;
+	} cn78xxp1;
+	struct cvmx_pko_pdm_isrm_dbg_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_isrm_dbg cvmx_pko_pdm_isrm_dbg_t;
+
+/**
+ * cvmx_pko_pdm_isrm_dbg_dq
+ */
+union cvmx_pko_pdm_isrm_dbg_dq {
+	u64 u64;
+	struct cvmx_pko_pdm_isrm_dbg_dq_s {
+		u64 reserved_34_63 : 30;
+		u64 ack_sic_dq : 10;
+		u64 reserved_22_23 : 2;
+		u64 fr_sic_dq : 10;
+		u64 reserved_10_11 : 2;
+		u64 cp_sic_dq : 10;
+	} s;
+	struct cvmx_pko_pdm_isrm_dbg_dq_s cn73xx;
+	struct cvmx_pko_pdm_isrm_dbg_dq_s cn78xx;
+	struct cvmx_pko_pdm_isrm_dbg_dq_s cn78xxp1;
+	struct cvmx_pko_pdm_isrm_dbg_dq_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_isrm_dbg_dq cvmx_pko_pdm_isrm_dbg_dq_t;
+
+/**
+ * cvmx_pko_pdm_mem_addr
+ */
+union cvmx_pko_pdm_mem_addr {
+	u64 u64;
+	struct cvmx_pko_pdm_mem_addr_s {
+		u64 memsel : 3;
+		u64 reserved_17_60 : 44;
+		u64 memaddr : 14;
+		u64 reserved_2_2 : 1;
+		u64 membanksel : 2;
+	} s;
+	struct cvmx_pko_pdm_mem_addr_s cn73xx;
+	struct cvmx_pko_pdm_mem_addr_s cn78xx;
+	struct cvmx_pko_pdm_mem_addr_s cn78xxp1;
+	struct cvmx_pko_pdm_mem_addr_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_mem_addr cvmx_pko_pdm_mem_addr_t;
+
+/**
+ * cvmx_pko_pdm_mem_data
+ */
+union cvmx_pko_pdm_mem_data {
+	u64 u64;
+	struct cvmx_pko_pdm_mem_data_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_pko_pdm_mem_data_s cn73xx;
+	struct cvmx_pko_pdm_mem_data_s cn78xx;
+	struct cvmx_pko_pdm_mem_data_s cn78xxp1;
+	struct cvmx_pko_pdm_mem_data_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_mem_data cvmx_pko_pdm_mem_data_t;
+
+/**
+ * cvmx_pko_pdm_mem_rw_ctl
+ */
+union cvmx_pko_pdm_mem_rw_ctl {
+	u64 u64;
+	struct cvmx_pko_pdm_mem_rw_ctl_s {
+		u64 reserved_2_63 : 62;
+		u64 read : 1;
+		u64 write : 1;
+	} s;
+	struct cvmx_pko_pdm_mem_rw_ctl_s cn73xx;
+	struct cvmx_pko_pdm_mem_rw_ctl_s cn78xx;
+	struct cvmx_pko_pdm_mem_rw_ctl_s cn78xxp1;
+	struct cvmx_pko_pdm_mem_rw_ctl_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_mem_rw_ctl cvmx_pko_pdm_mem_rw_ctl_t;
+
+/**
+ * cvmx_pko_pdm_mem_rw_sts
+ */
+union cvmx_pko_pdm_mem_rw_sts {
+	u64 u64;
+	struct cvmx_pko_pdm_mem_rw_sts_s {
+		u64 reserved_1_63 : 63;
+		u64 readdone : 1;
+	} s;
+	struct cvmx_pko_pdm_mem_rw_sts_s cn73xx;
+	struct cvmx_pko_pdm_mem_rw_sts_s cn78xx;
+	struct cvmx_pko_pdm_mem_rw_sts_s cn78xxp1;
+	struct cvmx_pko_pdm_mem_rw_sts_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_mem_rw_sts cvmx_pko_pdm_mem_rw_sts_t;
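+
+/*
+ * Illustrative sketch of the indirect access sequence suggested by the four
+ * PKO_PDM_MEM_* registers above: program the address, request a READ, poll
+ * READDONE, then fetch the data word.  The exact MEMSEL/MEMADDR/MEMBANKSEL
+ * encoding is hardware-specific and left to the caller; csr_rd()/csr_wr()
+ * and the address macros are assumed from elsewhere in this port.
+ */
+static inline u64 cvmx_pko_pdm_mem_read(u64 mem_addr)
+{
+	cvmx_pko_pdm_mem_rw_ctl_t ctl;
+	cvmx_pko_pdm_mem_rw_sts_t sts;
+
+	csr_wr(CVMX_PKO_PDM_MEM_ADDR, mem_addr);	/* MEMSEL/MEMADDR/bank */
+	ctl.u64 = 0;
+	ctl.s.read = 1;
+	csr_wr(CVMX_PKO_PDM_MEM_RW_CTL, ctl.u64);
+	do {
+		sts.u64 = csr_rd(CVMX_PKO_PDM_MEM_RW_STS);
+	} while (!sts.s.readdone);			/* wait for completion */
+	return csr_rd(CVMX_PKO_PDM_MEM_DATA);
+}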
+
+/**
+ * cvmx_pko_pdm_mwpbuf_dbg
+ */
+union cvmx_pko_pdm_mwpbuf_dbg {
+	u64 u64;
+	struct cvmx_pko_pdm_mwpbuf_dbg_s {
+		u64 reserved_49_63 : 15;
+		u64 str_proc : 1;
+		u64 cmd_proc : 1;
+		u64 str_val : 1;
+		u64 mem_data_val : 1;
+		u64 insert_np : 1;
+		u64 insert_mp : 1;
+		u64 sel_nxt_ptr : 1;
+		u64 load_val : 1;
+		u64 rdy : 1;
+		u64 cur_state : 3;
+		u64 mem_rdy : 1;
+		u64 str_rdy : 1;
+		u64 contention_type : 2;
+		u64 track_rd_cnt : 6;
+		u64 track_wr_cnt : 6;
+		u64 mem_wen : 4;
+		u64 mem_addr : 13;
+		u64 mem_en : 4;
+	} s;
+	struct cvmx_pko_pdm_mwpbuf_dbg_cn73xx {
+		u64 reserved_49_63 : 15;
+		u64 str_proc : 1;
+		u64 cmd_proc : 1;
+		u64 str_val : 1;
+		u64 mem_data_val : 1;
+		u64 insert_np : 1;
+		u64 insert_mp : 1;
+		u64 sel_nxt_ptr : 1;
+		u64 load_val : 1;
+		u64 rdy : 1;
+		u64 cur_state : 3;
+		u64 mem_rdy : 1;
+		u64 str_rdy : 1;
+		u64 contention_type : 2;
+		u64 reserved_21_32 : 12;
+		u64 mem_wen : 4;
+		u64 reserved_15_16 : 2;
+		u64 mem_addr : 11;
+		u64 mem_en : 4;
+	} cn73xx;
+	struct cvmx_pko_pdm_mwpbuf_dbg_s cn78xx;
+	struct cvmx_pko_pdm_mwpbuf_dbg_s cn78xxp1;
+	struct cvmx_pko_pdm_mwpbuf_dbg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_mwpbuf_dbg cvmx_pko_pdm_mwpbuf_dbg_t;
+
+/**
+ * cvmx_pko_pdm_sts
+ */
+union cvmx_pko_pdm_sts {
+	u64 u64;
+	struct cvmx_pko_pdm_sts_s {
+		u64 reserved_38_63 : 26;
+		u64 cp_stalled_thrshld_hit : 1;
+		u64 reserved_35_36 : 2;
+		u64 mwpbuf_data_val_err : 1;
+		u64 drpbuf_data_val_err : 1;
+		u64 dwpbuf_data_val_err : 1;
+		u64 reserved_30_31 : 2;
+		u64 qcmd_iobx_err_sts : 4;
+		u64 qcmd_iobx_err : 1;
+		u64 sendpkt_lmtdma_err_sts : 4;
+		u64 sendpkt_lmtdma_err : 1;
+		u64 sendpkt_lmtst_err_sts : 4;
+		u64 sendpkt_lmtst_err : 1;
+		u64 fpa_no_ptrs : 1;
+		u64 reserved_12_13 : 2;
+		u64 cp_sendpkt_err_no_drp_code : 2;
+		u64 cp_sendpkt_err_no_drp : 1;
+		u64 reserved_7_8 : 2;
+		u64 cp_sendpkt_err_drop_code : 3;
+		u64 cp_sendpkt_err_drop : 1;
+		u64 reserved_1_2 : 2;
+		u64 desc_crc_err : 1;
+	} s;
+	struct cvmx_pko_pdm_sts_s cn73xx;
+	struct cvmx_pko_pdm_sts_s cn78xx;
+	struct cvmx_pko_pdm_sts_s cn78xxp1;
+	struct cvmx_pko_pdm_sts_s cnf75xx;
+};
+
+typedef union cvmx_pko_pdm_sts cvmx_pko_pdm_sts_t;
+
+/**
+ * cvmx_pko_peb_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_peb_bist_status {
+	u64 u64;
+	struct cvmx_pko_peb_bist_status_s {
+		u64 reserved_26_63 : 38;
+		u64 add_work_fifo : 1;
+		u64 pdm_pse_buf_ram : 1;
+		u64 iobp0_fifo_ram : 1;
+		u64 iobp1_fifo_ram : 1;
+		u64 state_mem0 : 1;
+		u64 reserved_19_20 : 2;
+		u64 state_mem3 : 1;
+		u64 iobp1_uid_fifo_ram : 1;
+		u64 nxt_link_ptr_ram : 1;
+		u64 pd_bank0_ram : 1;
+		u64 pd_bank1_ram : 1;
+		u64 pd_bank2_ram : 1;
+		u64 pd_bank3_ram : 1;
+		u64 pd_var_bank_ram : 1;
+		u64 pdm_resp_buf_ram : 1;
+		u64 tx_fifo_pkt_ram : 1;
+		u64 tx_fifo_hdr_ram : 1;
+		u64 tx_fifo_crc_ram : 1;
+		u64 ts_addwork_ram : 1;
+		u64 send_mem_ts_fifo : 1;
+		u64 send_mem_stdn_fifo : 1;
+		u64 send_mem_fifo : 1;
+		u64 pkt_mrk_ram : 1;
+		u64 peb_st_inf_ram : 1;
+		u64 peb_sm_jmp_ram : 1;
+	} s;
+	struct cvmx_pko_peb_bist_status_cn73xx {
+		u64 reserved_26_63 : 38;
+		u64 add_work_fifo : 1;
+		u64 pdm_pse_buf_ram : 1;
+		u64 iobp0_fifo_ram : 1;
+		u64 iobp1_fifo_ram : 1;
+		u64 state_mem0 : 1;
+		u64 reserved_19_20 : 2;
+		u64 state_mem3 : 1;
+		u64 iobp1_uid_fifo_ram : 1;
+		u64 nxt_link_ptr_ram : 1;
+		u64 pd_bank0_ram : 1;
+		u64 reserved_13_14 : 2;
+		u64 pd_bank3_ram : 1;
+		u64 pd_var_bank_ram : 1;
+		u64 pdm_resp_buf_ram : 1;
+		u64 tx_fifo_pkt_ram : 1;
+		u64 tx_fifo_hdr_ram : 1;
+		u64 tx_fifo_crc_ram : 1;
+		u64 ts_addwork_ram : 1;
+		u64 send_mem_ts_fifo : 1;
+		u64 send_mem_stdn_fifo : 1;
+		u64 send_mem_fifo : 1;
+		u64 pkt_mrk_ram : 1;
+		u64 peb_st_inf_ram : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_peb_bist_status_cn73xx cn78xx;
+	struct cvmx_pko_peb_bist_status_s cn78xxp1;
+	struct cvmx_pko_peb_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_peb_bist_status cvmx_pko_peb_bist_status_t;
+
+/**
+ * cvmx_pko_peb_ecc_ctl0
+ */
+union cvmx_pko_peb_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_ctl0_s {
+		u64 iobp1_uid_fifo_ram_flip : 2;
+		u64 iobp1_uid_fifo_ram_cdis : 1;
+		u64 iobp0_fifo_ram_flip : 2;
+		u64 iobp0_fifo_ram_cdis : 1;
+		u64 iobp1_fifo_ram_flip : 2;
+		u64 iobp1_fifo_ram_cdis : 1;
+		u64 pdm_resp_buf_ram_flip : 2;
+		u64 pdm_resp_buf_ram_cdis : 1;
+		u64 pdm_pse_buf_ram_flip : 2;
+		u64 pdm_pse_buf_ram_cdis : 1;
+		u64 peb_sm_jmp_ram_flip : 2;
+		u64 peb_sm_jmp_ram_cdis : 1;
+		u64 peb_st_inf_ram_flip : 2;
+		u64 peb_st_inf_ram_cdis : 1;
+		u64 pd_bank3_ram_flip : 2;
+		u64 pd_bank3_ram_cdis : 1;
+		u64 pd_bank2_ram_flip : 2;
+		u64 pd_bank2_ram_cdis : 1;
+		u64 pd_bank1_ram_flip : 2;
+		u64 pd_bank1_ram_cdis : 1;
+		u64 pd_bank0_ram_flip : 2;
+		u64 pd_bank0_ram_cdis : 1;
+		u64 pd_var_bank_ram_flip : 2;
+		u64 pd_var_bank_ram_cdis : 1;
+		u64 tx_fifo_crc_ram_flip : 2;
+		u64 tx_fifo_crc_ram_cdis : 1;
+		u64 tx_fifo_hdr_ram_flip : 2;
+		u64 tx_fifo_hdr_ram_cdis : 1;
+		u64 tx_fifo_pkt_ram_flip : 2;
+		u64 tx_fifo_pkt_ram_cdis : 1;
+		u64 add_work_fifo_flip : 2;
+		u64 add_work_fifo_cdis : 1;
+		u64 send_mem_fifo_flip : 2;
+		u64 send_mem_fifo_cdis : 1;
+		u64 send_mem_stdn_fifo_flip : 2;
+		u64 send_mem_stdn_fifo_cdis : 1;
+		u64 send_mem_ts_fifo_flip : 2;
+		u64 send_mem_ts_fifo_cdis : 1;
+		u64 nxt_link_ptr_ram_flip : 2;
+		u64 nxt_link_ptr_ram_cdis : 1;
+		u64 pkt_mrk_ram_flip : 2;
+		u64 pkt_mrk_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_peb_ecc_ctl0_cn73xx {
+		u64 iobp1_uid_fifo_ram_flip : 2;
+		u64 iobp1_uid_fifo_ram_cdis : 1;
+		u64 iobp0_fifo_ram_flip : 2;
+		u64 iobp0_fifo_ram_cdis : 1;
+		u64 iobp1_fifo_ram_flip : 2;
+		u64 iobp1_fifo_ram_cdis : 1;
+		u64 pdm_resp_buf_ram_flip : 2;
+		u64 pdm_resp_buf_ram_cdis : 1;
+		u64 pdm_pse_buf_ram_flip : 2;
+		u64 pdm_pse_buf_ram_cdis : 1;
+		u64 reserved_46_48 : 3;
+		u64 peb_st_inf_ram_flip : 2;
+		u64 peb_st_inf_ram_cdis : 1;
+		u64 pd_bank3_ram_flip : 2;
+		u64 pd_bank3_ram_cdis : 1;
+		u64 reserved_34_39 : 6;
+		u64 pd_bank0_ram_flip : 2;
+		u64 pd_bank0_ram_cdis : 1;
+		u64 pd_var_bank_ram_flip : 2;
+		u64 pd_var_bank_ram_cdis : 1;
+		u64 tx_fifo_crc_ram_flip : 2;
+		u64 tx_fifo_crc_ram_cdis : 1;
+		u64 tx_fifo_hdr_ram_flip : 2;
+		u64 tx_fifo_hdr_ram_cdis : 1;
+		u64 tx_fifo_pkt_ram_flip : 2;
+		u64 tx_fifo_pkt_ram_cdis : 1;
+		u64 add_work_fifo_flip : 2;
+		u64 add_work_fifo_cdis : 1;
+		u64 send_mem_fifo_flip : 2;
+		u64 send_mem_fifo_cdis : 1;
+		u64 send_mem_stdn_fifo_flip : 2;
+		u64 send_mem_stdn_fifo_cdis : 1;
+		u64 send_mem_ts_fifo_flip : 2;
+		u64 send_mem_ts_fifo_cdis : 1;
+		u64 nxt_link_ptr_ram_flip : 2;
+		u64 nxt_link_ptr_ram_cdis : 1;
+		u64 pkt_mrk_ram_flip : 2;
+		u64 pkt_mrk_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_peb_ecc_ctl0_cn73xx cn78xx;
+	struct cvmx_pko_peb_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_peb_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_ctl0 cvmx_pko_peb_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_peb_ecc_ctl1
+ */
+union cvmx_pko_peb_ecc_ctl1 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_ctl1_s {
+		u64 ts_addwork_ram_flip : 2;
+		u64 ts_addwork_ram_cdis : 1;
+		u64 state_mem0_flip : 2;
+		u64 state_mem0_cdis : 1;
+		u64 reserved_52_57 : 6;
+		u64 state_mem3_flip : 2;
+		u64 state_mem3_cdis : 1;
+		u64 reserved_0_48 : 49;
+	} s;
+	struct cvmx_pko_peb_ecc_ctl1_s cn73xx;
+	struct cvmx_pko_peb_ecc_ctl1_cn78xx {
+		u64 ts_addwork_ram_flip : 2;
+		u64 ts_addwork_ram_cdis : 1;
+		u64 reserved_0_60 : 61;
+	} cn78xx;
+	struct cvmx_pko_peb_ecc_ctl1_cn78xx cn78xxp1;
+	struct cvmx_pko_peb_ecc_ctl1_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_ctl1 cvmx_pko_peb_ecc_ctl1_t;
+
+/**
+ * cvmx_pko_peb_ecc_dbe_sts0
+ */
+union cvmx_pko_peb_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_dbe_sts0_s {
+		u64 iobp1_uid_fifo_ram_dbe : 1;
+		u64 iobp0_fifo_ram_dbe : 1;
+		u64 iobp1_fifo_ram_dbe : 1;
+		u64 pdm_resp_buf_ram_dbe : 1;
+		u64 pdm_pse_buf_ram_dbe : 1;
+		u64 peb_sm_jmp_ram_dbe : 1;
+		u64 peb_st_inf_ram_dbe : 1;
+		u64 pd_bank3_ram_dbe : 1;
+		u64 pd_bank2_ram_dbe : 1;
+		u64 pd_bank1_ram_dbe : 1;
+		u64 pd_bank0_ram_dbe : 1;
+		u64 pd_var_bank_ram_dbe : 1;
+		u64 tx_fifo_crc_ram_dbe : 1;
+		u64 tx_fifo_hdr_ram_dbe : 1;
+		u64 tx_fifo_pkt_ram_dbe : 1;
+		u64 add_work_fifo_dbe : 1;
+		u64 send_mem_fifo_dbe : 1;
+		u64 send_mem_stdn_fifo_dbe : 1;
+		u64 send_mem_ts_fifo_dbe : 1;
+		u64 nxt_link_ptr_ram_dbe : 1;
+		u64 pkt_mrk_ram_dbe : 1;
+		u64 ts_addwork_ram_dbe : 1;
+		u64 state_mem0_dbe : 1;
+		u64 reserved_39_40 : 2;
+		u64 state_mem3_dbe : 1;
+		u64 reserved_0_37 : 38;
+	} s;
+	struct cvmx_pko_peb_ecc_dbe_sts0_cn73xx {
+		u64 iobp1_uid_fifo_ram_dbe : 1;
+		u64 iobp0_fifo_ram_dbe : 1;
+		u64 iobp1_fifo_ram_dbe : 1;
+		u64 pdm_resp_buf_ram_dbe : 1;
+		u64 pdm_pse_buf_ram_dbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 peb_st_inf_ram_dbe : 1;
+		u64 pd_bank3_ram_dbe : 1;
+		u64 reserved_54_55 : 2;
+		u64 pd_bank0_ram_dbe : 1;
+		u64 pd_var_bank_ram_dbe : 1;
+		u64 tx_fifo_crc_ram_dbe : 1;
+		u64 tx_fifo_hdr_ram_dbe : 1;
+		u64 tx_fifo_pkt_ram_dbe : 1;
+		u64 add_work_fifo_dbe : 1;
+		u64 send_mem_fifo_dbe : 1;
+		u64 send_mem_stdn_fifo_dbe : 1;
+		u64 send_mem_ts_fifo_dbe : 1;
+		u64 nxt_link_ptr_ram_dbe : 1;
+		u64 pkt_mrk_ram_dbe : 1;
+		u64 ts_addwork_ram_dbe : 1;
+		u64 state_mem0_dbe : 1;
+		u64 reserved_39_40 : 2;
+		u64 state_mem3_dbe : 1;
+		u64 reserved_0_37 : 38;
+	} cn73xx;
+	struct cvmx_pko_peb_ecc_dbe_sts0_cn78xx {
+		u64 iobp1_uid_fifo_ram_dbe : 1;
+		u64 iobp0_fifo_ram_dbe : 1;
+		u64 iobp1_fifo_ram_dbe : 1;
+		u64 pdm_resp_buf_ram_dbe : 1;
+		u64 pdm_pse_buf_ram_dbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 peb_st_inf_ram_dbe : 1;
+		u64 pd_bank3_ram_dbe : 1;
+		u64 reserved_54_55 : 2;
+		u64 pd_bank0_ram_dbe : 1;
+		u64 pd_var_bank_ram_dbe : 1;
+		u64 tx_fifo_crc_ram_dbe : 1;
+		u64 tx_fifo_hdr_ram_dbe : 1;
+		u64 tx_fifo_pkt_ram_dbe : 1;
+		u64 add_work_fifo_dbe : 1;
+		u64 send_mem_fifo_dbe : 1;
+		u64 send_mem_stdn_fifo_dbe : 1;
+		u64 send_mem_ts_fifo_dbe : 1;
+		u64 nxt_link_ptr_ram_dbe : 1;
+		u64 pkt_mrk_ram_dbe : 1;
+		u64 ts_addwork_ram_dbe : 1;
+		u64 reserved_0_41 : 42;
+	} cn78xx;
+	struct cvmx_pko_peb_ecc_dbe_sts0_cn78xxp1 {
+		u64 iobp1_uid_fifo_ram_dbe : 1;
+		u64 iobp0_fifo_ram_dbe : 1;
+		u64 iobp1_fifo_ram_dbe : 1;
+		u64 pdm_resp_buf_ram_dbe : 1;
+		u64 pdm_pse_buf_ram_dbe : 1;
+		u64 peb_sm_jmp_ram_dbe : 1;
+		u64 peb_st_inf_ram_dbe : 1;
+		u64 pd_bank3_ram_dbe : 1;
+		u64 pd_bank2_ram_dbe : 1;
+		u64 pd_bank1_ram_dbe : 1;
+		u64 pd_bank0_ram_dbe : 1;
+		u64 pd_var_bank_ram_dbe : 1;
+		u64 tx_fifo_crc_ram_dbe : 1;
+		u64 tx_fifo_hdr_ram_dbe : 1;
+		u64 tx_fifo_pkt_ram_dbe : 1;
+		u64 add_work_fifo_dbe : 1;
+		u64 send_mem_fifo_dbe : 1;
+		u64 send_mem_stdn_fifo_dbe : 1;
+		u64 send_mem_ts_fifo_dbe : 1;
+		u64 nxt_link_ptr_ram_dbe : 1;
+		u64 pkt_mrk_ram_dbe : 1;
+		u64 ts_addwork_ram_dbe : 1;
+		u64 reserved_0_41 : 42;
+	} cn78xxp1;
+	struct cvmx_pko_peb_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_dbe_sts0 cvmx_pko_peb_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_peb_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_peb_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_dbe_sts_cmb0_s {
+		u64 peb_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_peb_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_peb_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_peb_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_peb_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_dbe_sts_cmb0 cvmx_pko_peb_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_peb_ecc_sbe_sts0
+ */
+union cvmx_pko_peb_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_sbe_sts0_s {
+		u64 iobp1_uid_fifo_ram_sbe : 1;
+		u64 iobp0_fifo_ram_sbe : 1;
+		u64 iobp1_fifo_ram_sbe : 1;
+		u64 pdm_resp_buf_ram_sbe : 1;
+		u64 pdm_pse_buf_ram_sbe : 1;
+		u64 peb_sm_jmp_ram_sbe : 1;
+		u64 peb_st_inf_ram_sbe : 1;
+		u64 pd_bank3_ram_sbe : 1;
+		u64 pd_bank2_ram_sbe : 1;
+		u64 pd_bank1_ram_sbe : 1;
+		u64 pd_bank0_ram_sbe : 1;
+		u64 pd_var_bank_ram_sbe : 1;
+		u64 tx_fifo_crc_ram_sbe : 1;
+		u64 tx_fifo_hdr_ram_sbe : 1;
+		u64 tx_fifo_pkt_ram_sbe : 1;
+		u64 add_work_fifo_sbe : 1;
+		u64 send_mem_fifo_sbe : 1;
+		u64 send_mem_stdn_fifo_sbe : 1;
+		u64 send_mem_ts_fifo_sbe : 1;
+		u64 nxt_link_ptr_ram_sbe : 1;
+		u64 pkt_mrk_ram_sbe : 1;
+		u64 ts_addwork_ram_sbe : 1;
+		u64 state_mem0_sbe : 1;
+		u64 reserved_39_40 : 2;
+		u64 state_mem3_sbe : 1;
+		u64 reserved_0_37 : 38;
+	} s;
+	struct cvmx_pko_peb_ecc_sbe_sts0_cn73xx {
+		u64 iobp1_uid_fifo_ram_sbe : 1;
+		u64 iobp0_fifo_ram_sbe : 1;
+		u64 iobp1_fifo_ram_sbe : 1;
+		u64 pdm_resp_buf_ram_sbe : 1;
+		u64 pdm_pse_buf_ram_sbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 peb_st_inf_ram_sbe : 1;
+		u64 pd_bank3_ram_sbe : 1;
+		u64 reserved_54_55 : 2;
+		u64 pd_bank0_ram_sbe : 1;
+		u64 pd_var_bank_ram_sbe : 1;
+		u64 tx_fifo_crc_ram_sbe : 1;
+		u64 tx_fifo_hdr_ram_sbe : 1;
+		u64 tx_fifo_pkt_ram_sbe : 1;
+		u64 add_work_fifo_sbe : 1;
+		u64 send_mem_fifo_sbe : 1;
+		u64 send_mem_stdn_fifo_sbe : 1;
+		u64 send_mem_ts_fifo_sbe : 1;
+		u64 nxt_link_ptr_ram_sbe : 1;
+		u64 pkt_mrk_ram_sbe : 1;
+		u64 ts_addwork_ram_sbe : 1;
+		u64 state_mem0_sbe : 1;
+		u64 reserved_39_40 : 2;
+		u64 state_mem3_sbe : 1;
+		u64 reserved_0_37 : 38;
+	} cn73xx;
+	struct cvmx_pko_peb_ecc_sbe_sts0_cn78xx {
+		u64 iobp1_uid_fifo_ram_sbe : 1;
+		u64 iobp0_fifo_ram_sbe : 1;
+		u64 iobp1_fifo_ram_sbe : 1;
+		u64 pdm_resp_buf_ram_sbe : 1;
+		u64 pdm_pse_buf_ram_sbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 peb_st_inf_ram_sbe : 1;
+		u64 pd_bank3_ram_sbe : 1;
+		u64 reserved_54_55 : 2;
+		u64 pd_bank0_ram_sbe : 1;
+		u64 pd_var_bank_ram_sbe : 1;
+		u64 tx_fifo_crc_ram_sbe : 1;
+		u64 tx_fifo_hdr_ram_sbe : 1;
+		u64 tx_fifo_pkt_ram_sbe : 1;
+		u64 add_work_fifo_sbe : 1;
+		u64 send_mem_fifo_sbe : 1;
+		u64 send_mem_stdn_fifo_sbe : 1;
+		u64 send_mem_ts_fifo_sbe : 1;
+		u64 nxt_link_ptr_ram_sbe : 1;
+		u64 pkt_mrk_ram_sbe : 1;
+		u64 ts_addwork_ram_sbe : 1;
+		u64 reserved_0_41 : 42;
+	} cn78xx;
+	struct cvmx_pko_peb_ecc_sbe_sts0_cn78xxp1 {
+		u64 iobp1_uid_fifo_ram_sbe : 1;
+		u64 iobp0_fifo_ram_sbe : 1;
+		u64 iobp1_fifo_ram_sbe : 1;
+		u64 pdm_resp_buf_ram_sbe : 1;
+		u64 pdm_pse_buf_ram_sbe : 1;
+		u64 peb_sm_jmp_ram_sbe : 1;
+		u64 peb_st_inf_ram_sbe : 1;
+		u64 pd_bank3_ram_sbe : 1;
+		u64 pd_bank2_ram_sbe : 1;
+		u64 pd_bank1_ram_sbe : 1;
+		u64 pd_bank0_ram_sbe : 1;
+		u64 pd_var_bank_ram_sbe : 1;
+		u64 tx_fifo_crc_ram_sbe : 1;
+		u64 tx_fifo_hdr_ram_sbe : 1;
+		u64 tx_fifo_pkt_ram_sbe : 1;
+		u64 add_work_fifo_sbe : 1;
+		u64 send_mem_fifo_sbe : 1;
+		u64 send_mem_stdn_fifo_sbe : 1;
+		u64 send_mem_ts_fifo_sbe : 1;
+		u64 nxt_link_ptr_ram_sbe : 1;
+		u64 pkt_mrk_ram_sbe : 1;
+		u64 ts_addwork_ram_sbe : 1;
+		u64 reserved_0_41 : 42;
+	} cn78xxp1;
+	struct cvmx_pko_peb_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_sbe_sts0 cvmx_pko_peb_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_peb_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_peb_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_peb_ecc_sbe_sts_cmb0_s {
+		u64 peb_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_peb_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_peb_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_peb_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_peb_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ecc_sbe_sts_cmb0 cvmx_pko_peb_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_peb_eco
+ */
+union cvmx_pko_peb_eco {
+	u64 u64;
+	struct cvmx_pko_peb_eco_s {
+		u64 reserved_32_63 : 32;
+		u64 eco_rw : 32;
+	} s;
+	struct cvmx_pko_peb_eco_s cn73xx;
+	struct cvmx_pko_peb_eco_s cn78xx;
+	struct cvmx_pko_peb_eco_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_eco cvmx_pko_peb_eco_t;
+
+/**
+ * cvmx_pko_peb_err_int
+ */
+union cvmx_pko_peb_err_int {
+	u64 u64;
+	struct cvmx_pko_peb_err_int_s {
+		u64 reserved_10_63 : 54;
+		u64 peb_macx_cfg_wr_err : 1;
+		u64 peb_max_link_err : 1;
+		u64 peb_subd_size_err : 1;
+		u64 peb_subd_addr_err : 1;
+		u64 peb_trunc_err : 1;
+		u64 peb_pad_err : 1;
+		u64 peb_pse_fifo_err : 1;
+		u64 peb_fcs_sop_err : 1;
+		u64 peb_jump_def_err : 1;
+		u64 peb_ext_hdr_def_err : 1;
+	} s;
+	struct cvmx_pko_peb_err_int_s cn73xx;
+	struct cvmx_pko_peb_err_int_s cn78xx;
+	struct cvmx_pko_peb_err_int_s cn78xxp1;
+	struct cvmx_pko_peb_err_int_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_err_int cvmx_pko_peb_err_int_t;
+
+/**
+ * cvmx_pko_peb_ext_hdr_def_err_info
+ */
+union cvmx_pko_peb_ext_hdr_def_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_ext_hdr_def_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_ext_hdr_def_err_info_s cn73xx;
+	struct cvmx_pko_peb_ext_hdr_def_err_info_s cn78xx;
+	struct cvmx_pko_peb_ext_hdr_def_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_ext_hdr_def_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ext_hdr_def_err_info cvmx_pko_peb_ext_hdr_def_err_info_t;
+
+/**
+ * cvmx_pko_peb_fcs_sop_err_info
+ */
+union cvmx_pko_peb_fcs_sop_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_fcs_sop_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_fcs_sop_err_info_s cn73xx;
+	struct cvmx_pko_peb_fcs_sop_err_info_s cn78xx;
+	struct cvmx_pko_peb_fcs_sop_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_fcs_sop_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_fcs_sop_err_info cvmx_pko_peb_fcs_sop_err_info_t;
+
+/**
+ * cvmx_pko_peb_jump_def_err_info
+ */
+union cvmx_pko_peb_jump_def_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_jump_def_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_jump_def_err_info_s cn73xx;
+	struct cvmx_pko_peb_jump_def_err_info_s cn78xx;
+	struct cvmx_pko_peb_jump_def_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_jump_def_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_jump_def_err_info cvmx_pko_peb_jump_def_err_info_t;
+
+/**
+ * cvmx_pko_peb_macx_cfg_wr_err_info
+ */
+union cvmx_pko_peb_macx_cfg_wr_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_macx_cfg_wr_err_info_s {
+		u64 reserved_8_63 : 56;
+		u64 val : 1;
+		u64 mac : 7;
+	} s;
+	struct cvmx_pko_peb_macx_cfg_wr_err_info_s cn73xx;
+	struct cvmx_pko_peb_macx_cfg_wr_err_info_s cn78xx;
+	struct cvmx_pko_peb_macx_cfg_wr_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_macx_cfg_wr_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_macx_cfg_wr_err_info cvmx_pko_peb_macx_cfg_wr_err_info_t;
+
+/**
+ * cvmx_pko_peb_max_link_err_info
+ */
+union cvmx_pko_peb_max_link_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_max_link_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_max_link_err_info_s cn73xx;
+	struct cvmx_pko_peb_max_link_err_info_s cn78xx;
+	struct cvmx_pko_peb_max_link_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_max_link_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_max_link_err_info cvmx_pko_peb_max_link_err_info_t;
+
+/**
+ * cvmx_pko_peb_ncb_cfg
+ */
+union cvmx_pko_peb_ncb_cfg {
+	u64 u64;
+	struct cvmx_pko_peb_ncb_cfg_s {
+		u64 reserved_1_63 : 63;
+		u64 rstp : 1;
+	} s;
+	struct cvmx_pko_peb_ncb_cfg_s cn73xx;
+	struct cvmx_pko_peb_ncb_cfg_s cn78xx;
+	struct cvmx_pko_peb_ncb_cfg_s cn78xxp1;
+	struct cvmx_pko_peb_ncb_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_ncb_cfg cvmx_pko_peb_ncb_cfg_t;
+
+/**
+ * cvmx_pko_peb_pad_err_info
+ */
+union cvmx_pko_peb_pad_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_pad_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_pad_err_info_s cn73xx;
+	struct cvmx_pko_peb_pad_err_info_s cn78xx;
+	struct cvmx_pko_peb_pad_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_pad_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_pad_err_info cvmx_pko_peb_pad_err_info_t;
+
+/**
+ * cvmx_pko_peb_pse_fifo_err_info
+ */
+union cvmx_pko_peb_pse_fifo_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_pse_fifo_err_info_s {
+		u64 reserved_25_63 : 39;
+		u64 link : 5;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_pse_fifo_err_info_cn73xx {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} cn73xx;
+	struct cvmx_pko_peb_pse_fifo_err_info_s cn78xx;
+	struct cvmx_pko_peb_pse_fifo_err_info_cn73xx cn78xxp1;
+	struct cvmx_pko_peb_pse_fifo_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_pse_fifo_err_info cvmx_pko_peb_pse_fifo_err_info_t;
+
+/**
+ * cvmx_pko_peb_subd_addr_err_info
+ */
+union cvmx_pko_peb_subd_addr_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_subd_addr_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_subd_addr_err_info_s cn73xx;
+	struct cvmx_pko_peb_subd_addr_err_info_s cn78xx;
+	struct cvmx_pko_peb_subd_addr_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_subd_addr_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_subd_addr_err_info cvmx_pko_peb_subd_addr_err_info_t;
+
+/**
+ * cvmx_pko_peb_subd_size_err_info
+ */
+union cvmx_pko_peb_subd_size_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_subd_size_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_subd_size_err_info_s cn73xx;
+	struct cvmx_pko_peb_subd_size_err_info_s cn78xx;
+	struct cvmx_pko_peb_subd_size_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_subd_size_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_subd_size_err_info cvmx_pko_peb_subd_size_err_info_t;
+
+/**
+ * cvmx_pko_peb_trunc_err_info
+ */
+union cvmx_pko_peb_trunc_err_info {
+	u64 u64;
+	struct cvmx_pko_peb_trunc_err_info_s {
+		u64 reserved_20_63 : 44;
+		u64 val : 1;
+		u64 fifo : 7;
+		u64 chan : 12;
+	} s;
+	struct cvmx_pko_peb_trunc_err_info_s cn73xx;
+	struct cvmx_pko_peb_trunc_err_info_s cn78xx;
+	struct cvmx_pko_peb_trunc_err_info_s cn78xxp1;
+	struct cvmx_pko_peb_trunc_err_info_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_trunc_err_info cvmx_pko_peb_trunc_err_info_t;
+
+/**
+ * cvmx_pko_peb_tso_cfg
+ */
+union cvmx_pko_peb_tso_cfg {
+	u64 u64;
+	struct cvmx_pko_peb_tso_cfg_s {
+		u64 reserved_44_63 : 20;
+		u64 fsf : 12;
+		u64 reserved_28_31 : 4;
+		u64 msf : 12;
+		u64 reserved_12_15 : 4;
+		u64 lsf : 12;
+	} s;
+	struct cvmx_pko_peb_tso_cfg_s cn73xx;
+	struct cvmx_pko_peb_tso_cfg_s cn78xx;
+	struct cvmx_pko_peb_tso_cfg_s cn78xxp1;
+	struct cvmx_pko_peb_tso_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_peb_tso_cfg cvmx_pko_peb_tso_cfg_t;
+
+/**
+ * cvmx_pko_pq_csr_bus_debug
+ */
+union cvmx_pko_pq_csr_bus_debug {
+	u64 u64;
+	struct cvmx_pko_pq_csr_bus_debug_s {
+		u64 csr_bus_debug : 64;
+	} s;
+	struct cvmx_pko_pq_csr_bus_debug_s cn73xx;
+	struct cvmx_pko_pq_csr_bus_debug_s cn78xx;
+	struct cvmx_pko_pq_csr_bus_debug_s cn78xxp1;
+	struct cvmx_pko_pq_csr_bus_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_pq_csr_bus_debug cvmx_pko_pq_csr_bus_debug_t;
+
+/**
+ * cvmx_pko_pq_debug_green
+ */
+union cvmx_pko_pq_debug_green {
+	u64 u64;
+	struct cvmx_pko_pq_debug_green_s {
+		u64 g_valid : 32;
+		u64 cred_ok_n : 32;
+	} s;
+	struct cvmx_pko_pq_debug_green_s cn73xx;
+	struct cvmx_pko_pq_debug_green_s cn78xx;
+	struct cvmx_pko_pq_debug_green_s cn78xxp1;
+	struct cvmx_pko_pq_debug_green_s cnf75xx;
+};
+
+typedef union cvmx_pko_pq_debug_green cvmx_pko_pq_debug_green_t;
+
+/**
+ * cvmx_pko_pq_debug_links
+ */
+union cvmx_pko_pq_debug_links {
+	u64 u64;
+	struct cvmx_pko_pq_debug_links_s {
+		u64 links_ready : 32;
+		u64 peb_lnk_rdy_ir : 32;
+	} s;
+	struct cvmx_pko_pq_debug_links_s cn73xx;
+	struct cvmx_pko_pq_debug_links_s cn78xx;
+	struct cvmx_pko_pq_debug_links_s cn78xxp1;
+	struct cvmx_pko_pq_debug_links_s cnf75xx;
+};
+
+typedef union cvmx_pko_pq_debug_links cvmx_pko_pq_debug_links_t;
+
+/**
+ * cvmx_pko_pq_debug_yellow
+ */
+union cvmx_pko_pq_debug_yellow {
+	u64 u64;
+	struct cvmx_pko_pq_debug_yellow_s {
+		u64 y_valid : 32;
+		u64 reserved_28_31 : 4;
+		u64 link_vv : 28;
+	} s;
+	struct cvmx_pko_pq_debug_yellow_s cn73xx;
+	struct cvmx_pko_pq_debug_yellow_s cn78xx;
+	struct cvmx_pko_pq_debug_yellow_s cn78xxp1;
+	struct cvmx_pko_pq_debug_yellow_s cnf75xx;
+};
+
+typedef union cvmx_pko_pq_debug_yellow cvmx_pko_pq_debug_yellow_t;
+
+/**
+ * cvmx_pko_pqa_debug
+ */
+union cvmx_pko_pqa_debug {
+	u64 u64;
+	struct cvmx_pko_pqa_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_pqa_debug_s cn73xx;
+	struct cvmx_pko_pqa_debug_s cn78xx;
+	struct cvmx_pko_pqa_debug_s cn78xxp1;
+	struct cvmx_pko_pqa_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_pqa_debug cvmx_pko_pqa_debug_t;
+
+/**
+ * cvmx_pko_pqb_debug
+ *
+ * This register has the same bit fields as PKO_PQA_DEBUG.
+ *
+ */
+union cvmx_pko_pqb_debug {
+	u64 u64;
+	struct cvmx_pko_pqb_debug_s {
+		u64 dbg_vec : 64;
+	} s;
+	struct cvmx_pko_pqb_debug_s cn73xx;
+	struct cvmx_pko_pqb_debug_s cn78xx;
+	struct cvmx_pko_pqb_debug_s cn78xxp1;
+	struct cvmx_pko_pqb_debug_s cnf75xx;
+};
+
+typedef union cvmx_pko_pqb_debug cvmx_pko_pqb_debug_t;
+
+/**
+ * cvmx_pko_pse_dq_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_dq_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_dq_bist_status_s {
+		u64 reserved_8_63 : 56;
+		u64 rt7_sram : 1;
+		u64 rt6_sram : 1;
+		u64 rt5_sram : 1;
+		u64 reserved_4_4 : 1;
+		u64 rt3_sram : 1;
+		u64 rt2_sram : 1;
+		u64 rt1_sram : 1;
+		u64 rt0_sram : 1;
+	} s;
+	struct cvmx_pko_pse_dq_bist_status_cn73xx {
+		u64 reserved_5_63 : 59;
+		u64 wt_sram : 1;
+		u64 reserved_2_3 : 2;
+		u64 rt1_sram : 1;
+		u64 rt0_sram : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_dq_bist_status_cn78xx {
+		u64 reserved_9_63 : 55;
+		u64 wt_sram : 1;
+		u64 rt7_sram : 1;
+		u64 rt6_sram : 1;
+		u64 rt5_sram : 1;
+		u64 rt4_sram : 1;
+		u64 rt3_sram : 1;
+		u64 rt2_sram : 1;
+		u64 rt1_sram : 1;
+		u64 rt0_sram : 1;
+	} cn78xx;
+	struct cvmx_pko_pse_dq_bist_status_cn78xx cn78xxp1;
+	struct cvmx_pko_pse_dq_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_bist_status cvmx_pko_pse_dq_bist_status_t;
+
+/**
+ * cvmx_pko_pse_dq_ecc_ctl0
+ */
+union cvmx_pko_pse_dq_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_dq_ecc_ctl0_s {
+		u64 dq_wt_ram_flip : 2;
+		u64 dq_wt_ram_cdis : 1;
+		u64 dq_rt7_flip : 2;
+		u64 dq_rt7_cdis : 1;
+		u64 dq_rt6_flip : 2;
+		u64 dq_rt6_cdis : 1;
+		u64 dq_rt5_flip : 2;
+		u64 dq_rt5_cdis : 1;
+		u64 dq_rt4_flip : 2;
+		u64 dq_rt4_cdis : 1;
+		u64 dq_rt3_flip : 2;
+		u64 dq_rt3_cdis : 1;
+		u64 dq_rt2_flip : 2;
+		u64 dq_rt2_cdis : 1;
+		u64 dq_rt1_flip : 2;
+		u64 dq_rt1_cdis : 1;
+		u64 dq_rt0_flip : 2;
+		u64 dq_rt0_cdis : 1;
+		u64 reserved_0_36 : 37;
+	} s;
+	struct cvmx_pko_pse_dq_ecc_ctl0_cn73xx {
+		u64 dq_wt_ram_flip : 2;
+		u64 dq_wt_ram_cdis : 1;
+		u64 reserved_43_60 : 18;
+		u64 dq_rt1_flip : 2;
+		u64 dq_rt1_cdis : 1;
+		u64 dq_rt0_flip : 2;
+		u64 dq_rt0_cdis : 1;
+		u64 reserved_0_36 : 37;
+	} cn73xx;
+	struct cvmx_pko_pse_dq_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_dq_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pse_dq_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_ecc_ctl0 cvmx_pko_pse_dq_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_dq_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_dq_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts0_s {
+		u64 dq_wt_ram_dbe : 1;
+		u64 dq_rt7_dbe : 1;
+		u64 dq_rt6_dbe : 1;
+		u64 dq_rt5_dbe : 1;
+		u64 dq_rt4_dbe : 1;
+		u64 dq_rt3_dbe : 1;
+		u64 dq_rt2_dbe : 1;
+		u64 dq_rt1_dbe : 1;
+		u64 dq_rt0_dbe : 1;
+		u64 reserved_0_54 : 55;
+	} s;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts0_cn73xx {
+		u64 dq_wt_ram_dbe : 1;
+		u64 reserved_57_62 : 6;
+		u64 dq_rt1_dbe : 1;
+		u64 dq_rt0_dbe : 1;
+		u64 reserved_0_54 : 55;
+	} cn73xx;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_ecc_dbe_sts0 cvmx_pko_pse_dq_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_dq_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_dq_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_s {
+		u64 pse_dq_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_ecc_dbe_sts_cmb0 cvmx_pko_pse_dq_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_dq_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_dq_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts0_s {
+		u64 dq_wt_ram_sbe : 1;
+		u64 dq_rt7_sbe : 1;
+		u64 dq_rt6_sbe : 1;
+		u64 dq_rt5_sbe : 1;
+		u64 dq_rt4_sbe : 1;
+		u64 dq_rt3_sbe : 1;
+		u64 dq_rt2_sbe : 1;
+		u64 dq_rt1_sbe : 1;
+		u64 dq_rt0_sbe : 1;
+		u64 reserved_0_54 : 55;
+	} s;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts0_cn73xx {
+		u64 dq_wt_ram_sbe : 1;
+		u64 reserved_57_62 : 6;
+		u64 dq_rt1_sbe : 1;
+		u64 dq_rt0_sbe : 1;
+		u64 reserved_0_54 : 55;
+	} cn73xx;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_ecc_sbe_sts0 cvmx_pko_pse_dq_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_dq_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_dq_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_s {
+		u64 pse_dq_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_dq_ecc_sbe_sts_cmb0 cvmx_pko_pse_dq_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_pq_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_pq_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_pq_bist_status_s {
+		u64 reserved_15_63 : 49;
+		u64 tp_sram : 1;
+		u64 irq_fifo_sram : 1;
+		u64 wmd_sram : 1;
+		u64 wms_sram : 1;
+		u64 cxd_sram : 1;
+		u64 dqd_sram : 1;
+		u64 dqs_sram : 1;
+		u64 pqd_sram : 1;
+		u64 pqr_sram : 1;
+		u64 pqy_sram : 1;
+		u64 pqg_sram : 1;
+		u64 std_sram : 1;
+		u64 st_sram : 1;
+		u64 reserved_1_1 : 1;
+		u64 cxs_sram : 1;
+	} s;
+	struct cvmx_pko_pse_pq_bist_status_cn73xx {
+		u64 reserved_15_63 : 49;
+		u64 tp_sram : 1;
+		u64 reserved_13_13 : 1;
+		u64 wmd_sram : 1;
+		u64 reserved_11_11 : 1;
+		u64 cxd_sram : 1;
+		u64 dqd_sram : 1;
+		u64 dqs_sram : 1;
+		u64 pqd_sram : 1;
+		u64 pqr_sram : 1;
+		u64 pqy_sram : 1;
+		u64 pqg_sram : 1;
+		u64 std_sram : 1;
+		u64 st_sram : 1;
+		u64 reserved_1_1 : 1;
+		u64 cxs_sram : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_pq_bist_status_s cn78xx;
+	struct cvmx_pko_pse_pq_bist_status_s cn78xxp1;
+	struct cvmx_pko_pse_pq_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_bist_status cvmx_pko_pse_pq_bist_status_t;
+
+/**
+ * cvmx_pko_pse_pq_ecc_ctl0
+ */
+union cvmx_pko_pse_pq_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_pq_ecc_ctl0_s {
+		u64 pq_cxs_ram_flip : 2;
+		u64 pq_cxs_ram_cdis : 1;
+		u64 pq_cxd_ram_flip : 2;
+		u64 pq_cxd_ram_cdis : 1;
+		u64 irq_fifo_sram_flip : 2;
+		u64 irq_fifo_sram_cdis : 1;
+		u64 tp_sram_flip : 2;
+		u64 tp_sram_cdis : 1;
+		u64 pq_std_ram_flip : 2;
+		u64 pq_std_ram_cdis : 1;
+		u64 pq_st_ram_flip : 2;
+		u64 pq_st_ram_cdis : 1;
+		u64 pq_wmd_ram_flip : 2;
+		u64 pq_wmd_ram_cdis : 1;
+		u64 pq_wms_ram_flip : 2;
+		u64 pq_wms_ram_cdis : 1;
+		u64 reserved_0_39 : 40;
+	} s;
+	struct cvmx_pko_pse_pq_ecc_ctl0_cn73xx {
+		u64 pq_cxs_ram_flip : 2;
+		u64 pq_cxs_ram_cdis : 1;
+		u64 pq_cxd_ram_flip : 2;
+		u64 pq_cxd_ram_cdis : 1;
+		u64 reserved_55_57 : 3;
+		u64 tp_sram_flip : 2;
+		u64 tp_sram_cdis : 1;
+		u64 pq_std_ram_flip : 2;
+		u64 pq_std_ram_cdis : 1;
+		u64 pq_st_ram_flip : 2;
+		u64 pq_st_ram_cdis : 1;
+		u64 pq_wmd_ram_flip : 2;
+		u64 pq_wmd_ram_cdis : 1;
+		u64 reserved_0_42 : 43;
+	} cn73xx;
+	struct cvmx_pko_pse_pq_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_pq_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pse_pq_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_ecc_ctl0 cvmx_pko_pse_pq_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_pq_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_pq_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts0_s {
+		u64 pq_cxs_ram_dbe : 1;
+		u64 pq_cxd_ram_dbe : 1;
+		u64 irq_fifo_sram_dbe : 1;
+		u64 tp_sram_dbe : 1;
+		u64 pq_std_ram_dbe : 1;
+		u64 pq_st_ram_dbe : 1;
+		u64 pq_wmd_ram_dbe : 1;
+		u64 pq_wms_ram_dbe : 1;
+		u64 reserved_0_55 : 56;
+	} s;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts0_cn73xx {
+		u64 pq_cxs_ram_dbe : 1;
+		u64 pq_cxd_ram_dbe : 1;
+		u64 reserved_61_61 : 1;
+		u64 tp_sram_dbe : 1;
+		u64 pq_std_ram_dbe : 1;
+		u64 pq_st_ram_dbe : 1;
+		u64 pq_wmd_ram_dbe : 1;
+		u64 reserved_0_56 : 57;
+	} cn73xx;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_ecc_dbe_sts0 cvmx_pko_pse_pq_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_pq_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_pq_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_s {
+		u64 pse_pq_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_ecc_dbe_sts_cmb0 cvmx_pko_pse_pq_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_pq_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_pq_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts0_s {
+		u64 pq_cxs_ram_sbe : 1;
+		u64 pq_cxd_ram_sbe : 1;
+		u64 irq_fifo_sram_sbe : 1;
+		u64 tp_sram_sbe : 1;
+		u64 pq_std_ram_sbe : 1;
+		u64 pq_st_ram_sbe : 1;
+		u64 pq_wmd_ram_sbe : 1;
+		u64 pq_wms_ram_sbe : 1;
+		u64 reserved_0_55 : 56;
+	} s;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts0_cn73xx {
+		u64 pq_cxs_ram_sbe : 1;
+		u64 pq_cxd_ram_sbe : 1;
+		u64 reserved_61_61 : 1;
+		u64 tp_sram_sbe : 1;
+		u64 pq_std_ram_sbe : 1;
+		u64 pq_st_ram_sbe : 1;
+		u64 pq_wmd_ram_sbe : 1;
+		u64 reserved_0_56 : 57;
+	} cn73xx;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_ecc_sbe_sts0 cvmx_pko_pse_pq_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_pq_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_pq_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_s {
+		u64 pse_pq_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_pq_ecc_sbe_sts_cmb0 cvmx_pko_pse_pq_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq1_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_sq1_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_bist_status_s {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 pc_sram : 1;
+		u64 xon_sram : 1;
+		u64 cc_sram : 1;
+		u64 vc1_sram : 1;
+		u64 vc0_sram : 1;
+		u64 reserved_21_22 : 2;
+		u64 tp1_sram : 1;
+		u64 tp0_sram : 1;
+		u64 xo_sram : 1;
+		u64 rt_sram : 1;
+		u64 reserved_9_16 : 8;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 cxd_sram : 1;
+		u64 cxs_sram : 1;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} s;
+	struct cvmx_pko_pse_sq1_bist_status_cn73xx {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 pc_sram : 1;
+		u64 xon_sram : 1;
+		u64 cc_sram : 1;
+		u64 vc1_sram : 1;
+		u64 vc0_sram : 1;
+		u64 reserved_20_22 : 3;
+		u64 tp0_sram : 1;
+		u64 xo_sram : 1;
+		u64 rt_sram : 1;
+		u64 reserved_9_16 : 8;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 cxd_sram : 1;
+		u64 cxs_sram : 1;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_sq1_bist_status_s cn78xx;
+	struct cvmx_pko_pse_sq1_bist_status_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_bist_status cvmx_pko_pse_sq1_bist_status_t;
+
+/**
+ * cvmx_pko_pse_sq1_ecc_ctl0
+ */
+union cvmx_pko_pse_sq1_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_ecc_ctl0_s {
+		u64 cxs_ram_flip : 2;
+		u64 cxs_ram_cdis : 1;
+		u64 cxd_ram_flip : 2;
+		u64 cxd_ram_cdis : 1;
+		u64 vc1_sram_flip : 2;
+		u64 vc1_sram_cdis : 1;
+		u64 vc0_sram_flip : 2;
+		u64 vc0_sram_cdis : 1;
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 pc_ram_flip : 2;
+		u64 pc_ram_cdis : 1;
+		u64 tw1_cmd_fifo_ram_flip : 2;
+		u64 tw1_cmd_fifo_ram_cdis : 1;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 tp1_sram_flip : 2;
+		u64 tp1_sram_cdis : 1;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 sts1_ram_flip : 2;
+		u64 sts1_ram_cdis : 1;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 std1_ram_flip : 2;
+		u64 std1_ram_cdis : 1;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_9 : 10;
+	} s;
+	struct cvmx_pko_pse_sq1_ecc_ctl0_cn73xx {
+		u64 cxs_ram_flip : 2;
+		u64 cxs_ram_cdis : 1;
+		u64 cxd_ram_flip : 2;
+		u64 cxd_ram_cdis : 1;
+		u64 reserved_55_57 : 3;
+		u64 vc0_sram_flip : 2;
+		u64 vc0_sram_cdis : 1;
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 pc_ram_flip : 2;
+		u64 pc_ram_cdis : 1;
+		u64 reserved_37_39 : 3;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 reserved_31_33 : 3;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 reserved_25_27 : 3;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 reserved_19_21 : 3;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_9 : 10;
+	} cn73xx;
+	struct cvmx_pko_pse_sq1_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_sq1_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_ecc_ctl0 cvmx_pko_pse_sq1_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_sq1_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_sq1_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts0_s {
+		u64 cxs_ram_dbe : 1;
+		u64 cxd_ram_dbe : 1;
+		u64 vc1_sram_dbe : 1;
+		u64 vc0_sram_dbe : 1;
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 pc_ram_dbe : 1;
+		u64 tw1_cmd_fifo_ram_dbe : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 tp1_sram_dbe : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 sts1_ram_dbe : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 std1_ram_dbe : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_45 : 46;
+	} s;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts0_cn73xx {
+		u64 cxs_ram_dbe : 1;
+		u64 cxd_ram_dbe : 1;
+		u64 reserved_61_61 : 1;
+		u64 vc0_sram_dbe : 1;
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 pc_ram_dbe : 1;
+		u64 reserved_55_55 : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 reserved_53_53 : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 reserved_51_51 : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 reserved_49_49 : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_45 : 46;
+	} cn73xx;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_ecc_dbe_sts0 cvmx_pko_pse_sq1_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_s {
+		u64 pse_sq1_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0 cvmx_pko_pse_sq1_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq1_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_sq1_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts0_s {
+		u64 cxs_ram_sbe : 1;
+		u64 cxd_ram_sbe : 1;
+		u64 vc1_sram_sbe : 1;
+		u64 vc0_sram_sbe : 1;
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 pc_ram_sbe : 1;
+		u64 tw1_cmd_fifo_ram_sbe : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 tp1_sram_sbe : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 sts1_ram_sbe : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 std1_ram_sbe : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_45 : 46;
+	} s;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts0_cn73xx {
+		u64 cxs_ram_sbe : 1;
+		u64 cxd_ram_sbe : 1;
+		u64 reserved_61_61 : 1;
+		u64 vc0_sram_sbe : 1;
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 pc_ram_sbe : 1;
+		u64 reserved_55_55 : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 reserved_53_53 : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 reserved_51_51 : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 reserved_49_49 : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_45 : 46;
+	} cn73xx;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_ecc_sbe_sts0 cvmx_pko_pse_sq1_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_s {
+		u64 pse_sq1_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0 cvmx_pko_pse_sq1_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq2_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_sq2_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_bist_status_s {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_21_27 : 7;
+		u64 tp1_sram : 1;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_9_16 : 8;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} s;
+	struct cvmx_pko_pse_sq2_bist_status_cn73xx {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_20_27 : 8;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_8_16 : 9;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_sq2_bist_status_s cn78xx;
+	struct cvmx_pko_pse_sq2_bist_status_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_bist_status cvmx_pko_pse_sq2_bist_status_t;
+
+/**
+ * cvmx_pko_pse_sq2_ecc_ctl0
+ */
+union cvmx_pko_pse_sq2_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_ecc_ctl0_s {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 tw1_cmd_fifo_ram_flip : 2;
+		u64 tw1_cmd_fifo_ram_cdis : 1;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 tp1_sram_flip : 2;
+		u64 tp1_sram_cdis : 1;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 sts1_ram_flip : 2;
+		u64 sts1_ram_cdis : 1;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 std1_ram_flip : 2;
+		u64 std1_ram_cdis : 1;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_24 : 25;
+	} s;
+	struct cvmx_pko_pse_sq2_ecc_ctl0_cn73xx {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 reserved_52_54 : 3;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 reserved_46_48 : 3;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 reserved_40_42 : 3;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 reserved_34_36 : 3;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_24 : 25;
+	} cn73xx;
+	struct cvmx_pko_pse_sq2_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_sq2_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_ecc_ctl0 cvmx_pko_pse_sq2_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_sq2_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_sq2_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts0_s {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 tw1_cmd_fifo_ram_dbe : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 tp1_sram_dbe : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 sts1_ram_dbe : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 std1_ram_dbe : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_50 : 51;
+	} s;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts0_cn73xx {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 reserved_60_60 : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 reserved_56_56 : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 reserved_54_54 : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_50 : 51;
+	} cn73xx;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_ecc_dbe_sts0 cvmx_pko_pse_sq2_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_s {
+		u64 pse_sq2_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0 cvmx_pko_pse_sq2_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq2_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_sq2_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts0_s {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 tw1_cmd_fifo_ram_sbe : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 tp1_sram_sbe : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 sts1_ram_sbe : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 std1_ram_sbe : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_50 : 51;
+	} s;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts0_cn73xx {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 reserved_60_60 : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 reserved_58_58 : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 reserved_56_56 : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 reserved_54_54 : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_50 : 51;
+	} cn73xx;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_ecc_sbe_sts0 cvmx_pko_pse_sq2_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_s {
+		u64 pse_sq2_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0 cvmx_pko_pse_sq2_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq3_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_sq3_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_bist_status_s {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_23_27 : 5;
+		u64 tp3_sram : 1;
+		u64 tp2_sram : 1;
+		u64 tp1_sram : 1;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_15_16 : 2;
+		u64 tw3_cmd_fifo : 1;
+		u64 reserved_12_13 : 2;
+		u64 tw2_cmd_fifo : 1;
+		u64 reserved_9_10 : 2;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} s;
+	struct cvmx_pko_pse_sq3_bist_status_cn73xx {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_20_27 : 8;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_8_16 : 9;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_sq3_bist_status_s cn78xx;
+	struct cvmx_pko_pse_sq3_bist_status_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_bist_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_bist_status cvmx_pko_pse_sq3_bist_status_t;
+
+/**
+ * cvmx_pko_pse_sq3_ecc_ctl0
+ */
+union cvmx_pko_pse_sq3_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_ecc_ctl0_s {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 tw3_cmd_fifo_ram_flip : 2;
+		u64 tw3_cmd_fifo_ram_cdis : 1;
+		u64 tw2_cmd_fifo_ram_flip : 2;
+		u64 tw2_cmd_fifo_ram_cdis : 1;
+		u64 tw1_cmd_fifo_ram_flip : 2;
+		u64 tw1_cmd_fifo_ram_cdis : 1;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 tp3_sram_flip : 2;
+		u64 tp3_sram_cdis : 1;
+		u64 tp2_sram_flip : 2;
+		u64 tp2_sram_cdis : 1;
+		u64 tp1_sram_flip : 2;
+		u64 tp1_sram_cdis : 1;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 sts3_ram_flip : 2;
+		u64 sts3_ram_cdis : 1;
+		u64 sts2_ram_flip : 2;
+		u64 sts2_ram_cdis : 1;
+		u64 sts1_ram_flip : 2;
+		u64 sts1_ram_cdis : 1;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 std3_ram_flip : 2;
+		u64 std3_ram_cdis : 1;
+		u64 std2_ram_flip : 2;
+		u64 std2_ram_cdis : 1;
+		u64 std1_ram_flip : 2;
+		u64 std1_ram_cdis : 1;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_pse_sq3_ecc_ctl0_cn73xx {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 reserved_46_54 : 9;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 reserved_34_42 : 9;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 reserved_22_30 : 9;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 reserved_10_18 : 9;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_pko_pse_sq3_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_sq3_ecc_ctl0_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_ecc_ctl0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_ecc_ctl0 cvmx_pko_pse_sq3_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_sq3_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_sq3_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts0_s {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 tw3_cmd_fifo_ram_dbe : 1;
+		u64 tw2_cmd_fifo_ram_dbe : 1;
+		u64 tw1_cmd_fifo_ram_dbe : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 tp3_sram_dbe : 1;
+		u64 tp2_sram_dbe : 1;
+		u64 tp1_sram_dbe : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 sts3_ram_dbe : 1;
+		u64 sts2_ram_dbe : 1;
+		u64 sts1_ram_dbe : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 std3_ram_dbe : 1;
+		u64 std2_ram_dbe : 1;
+		u64 std1_ram_dbe : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts0_cn73xx {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 reserved_58_60 : 3;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 reserved_54_56 : 3;
+		u64 tp0_sram_dbe : 1;
+		u64 reserved_50_52 : 3;
+		u64 sts0_ram_dbe : 1;
+		u64 reserved_46_48 : 3;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_42 : 43;
+	} cn73xx;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_ecc_dbe_sts0 cvmx_pko_pse_sq3_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_s {
+		u64 pse_sq3_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0 cvmx_pko_pse_sq3_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq3_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_sq3_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts0_s {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 tw3_cmd_fifo_ram_sbe : 1;
+		u64 tw2_cmd_fifo_ram_sbe : 1;
+		u64 tw1_cmd_fifo_ram_sbe : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 tp3_sram_sbe : 1;
+		u64 tp2_sram_sbe : 1;
+		u64 tp1_sram_sbe : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 sts3_ram_sbe : 1;
+		u64 sts2_ram_sbe : 1;
+		u64 sts1_ram_sbe : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 std3_ram_sbe : 1;
+		u64 std2_ram_sbe : 1;
+		u64 std1_ram_sbe : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts0_cn73xx {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 reserved_58_60 : 3;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 reserved_54_56 : 3;
+		u64 tp0_sram_sbe : 1;
+		u64 reserved_50_52 : 3;
+		u64 sts0_ram_sbe : 1;
+		u64 reserved_46_48 : 3;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_42 : 43;
+	} cn73xx;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts0_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts0_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_ecc_sbe_sts0 cvmx_pko_pse_sq3_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_s {
+		u64 pse_sq3_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_s cn73xx;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_s cn78xxp1;
+	struct cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_s cnf75xx;
+};
+
+typedef union cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0 cvmx_pko_pse_sq3_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq4_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_sq4_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_bist_status_s {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_23_27 : 5;
+		u64 tp3_sram : 1;
+		u64 tp2_sram : 1;
+		u64 tp1_sram : 1;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_15_16 : 2;
+		u64 tw3_cmd_fifo : 1;
+		u64 reserved_12_13 : 2;
+		u64 tw2_cmd_fifo : 1;
+		u64 reserved_9_10 : 2;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} s;
+	struct cvmx_pko_pse_sq4_bist_status_s cn78xx;
+	struct cvmx_pko_pse_sq4_bist_status_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_bist_status cvmx_pko_pse_sq4_bist_status_t;
+
+/**
+ * cvmx_pko_pse_sq4_ecc_ctl0
+ */
+union cvmx_pko_pse_sq4_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_ecc_ctl0_s {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 tw3_cmd_fifo_ram_flip : 2;
+		u64 tw3_cmd_fifo_ram_cdis : 1;
+		u64 tw2_cmd_fifo_ram_flip : 2;
+		u64 tw2_cmd_fifo_ram_cdis : 1;
+		u64 tw1_cmd_fifo_ram_flip : 2;
+		u64 tw1_cmd_fifo_ram_cdis : 1;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 tp3_sram_flip : 2;
+		u64 tp3_sram_cdis : 1;
+		u64 tp2_sram_flip : 2;
+		u64 tp2_sram_cdis : 1;
+		u64 tp1_sram_flip : 2;
+		u64 tp1_sram_cdis : 1;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 sts3_ram_flip : 2;
+		u64 sts3_ram_cdis : 1;
+		u64 sts2_ram_flip : 2;
+		u64 sts2_ram_cdis : 1;
+		u64 sts1_ram_flip : 2;
+		u64 sts1_ram_cdis : 1;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 std3_ram_flip : 2;
+		u64 std3_ram_cdis : 1;
+		u64 std2_ram_flip : 2;
+		u64 std2_ram_cdis : 1;
+		u64 std1_ram_flip : 2;
+		u64 std1_ram_cdis : 1;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_pse_sq4_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_sq4_ecc_ctl0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_ecc_ctl0 cvmx_pko_pse_sq4_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_sq4_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_sq4_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts0_s {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 tw3_cmd_fifo_ram_dbe : 1;
+		u64 tw2_cmd_fifo_ram_dbe : 1;
+		u64 tw1_cmd_fifo_ram_dbe : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 tp3_sram_dbe : 1;
+		u64 tp2_sram_dbe : 1;
+		u64 tp1_sram_dbe : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 sts3_ram_dbe : 1;
+		u64 sts2_ram_dbe : 1;
+		u64 sts1_ram_dbe : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 std3_ram_dbe : 1;
+		u64 std2_ram_dbe : 1;
+		u64 std1_ram_dbe : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_ecc_dbe_sts0 cvmx_pko_pse_sq4_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0_s {
+		u64 pse_sq4_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0 cvmx_pko_pse_sq4_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq4_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_sq4_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts0_s {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 tw3_cmd_fifo_ram_sbe : 1;
+		u64 tw2_cmd_fifo_ram_sbe : 1;
+		u64 tw1_cmd_fifo_ram_sbe : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 tp3_sram_sbe : 1;
+		u64 tp2_sram_sbe : 1;
+		u64 tp1_sram_sbe : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 sts3_ram_sbe : 1;
+		u64 sts2_ram_sbe : 1;
+		u64 sts1_ram_sbe : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 std3_ram_sbe : 1;
+		u64 std2_ram_sbe : 1;
+		u64 std1_ram_sbe : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_ecc_sbe_sts0 cvmx_pko_pse_sq4_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0_s {
+		u64 pse_sq4_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0 cvmx_pko_pse_sq4_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq5_bist_status
+ *
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ *
+ */
+union cvmx_pko_pse_sq5_bist_status {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_bist_status_s {
+		u64 reserved_29_63 : 35;
+		u64 sc_sram : 1;
+		u64 reserved_23_27 : 5;
+		u64 tp3_sram : 1;
+		u64 tp2_sram : 1;
+		u64 tp1_sram : 1;
+		u64 tp0_sram : 1;
+		u64 reserved_18_18 : 1;
+		u64 rt_sram : 1;
+		u64 reserved_15_16 : 2;
+		u64 tw3_cmd_fifo : 1;
+		u64 reserved_12_13 : 2;
+		u64 tw2_cmd_fifo : 1;
+		u64 reserved_9_10 : 2;
+		u64 tw1_cmd_fifo : 1;
+		u64 std_sram : 1;
+		u64 sts_sram : 1;
+		u64 tw0_cmd_fifo : 1;
+		u64 reserved_3_4 : 2;
+		u64 nt_sram : 1;
+		u64 pt_sram : 1;
+		u64 wt_sram : 1;
+	} s;
+	struct cvmx_pko_pse_sq5_bist_status_s cn78xx;
+	struct cvmx_pko_pse_sq5_bist_status_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_bist_status cvmx_pko_pse_sq5_bist_status_t;
+
+/**
+ * cvmx_pko_pse_sq5_ecc_ctl0
+ */
+union cvmx_pko_pse_sq5_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_ecc_ctl0_s {
+		u64 sq_pt_ram_flip : 2;
+		u64 sq_pt_ram_cdis : 1;
+		u64 sq_nt_ram_flip : 2;
+		u64 sq_nt_ram_cdis : 1;
+		u64 rt_ram_flip : 2;
+		u64 rt_ram_cdis : 1;
+		u64 tw3_cmd_fifo_ram_flip : 2;
+		u64 tw3_cmd_fifo_ram_cdis : 1;
+		u64 tw2_cmd_fifo_ram_flip : 2;
+		u64 tw2_cmd_fifo_ram_cdis : 1;
+		u64 tw1_cmd_fifo_ram_flip : 2;
+		u64 tw1_cmd_fifo_ram_cdis : 1;
+		u64 tw0_cmd_fifo_ram_flip : 2;
+		u64 tw0_cmd_fifo_ram_cdis : 1;
+		u64 tp3_sram_flip : 2;
+		u64 tp3_sram_cdis : 1;
+		u64 tp2_sram_flip : 2;
+		u64 tp2_sram_cdis : 1;
+		u64 tp1_sram_flip : 2;
+		u64 tp1_sram_cdis : 1;
+		u64 tp0_sram_flip : 2;
+		u64 tp0_sram_cdis : 1;
+		u64 sts3_ram_flip : 2;
+		u64 sts3_ram_cdis : 1;
+		u64 sts2_ram_flip : 2;
+		u64 sts2_ram_cdis : 1;
+		u64 sts1_ram_flip : 2;
+		u64 sts1_ram_cdis : 1;
+		u64 sts0_ram_flip : 2;
+		u64 sts0_ram_cdis : 1;
+		u64 std3_ram_flip : 2;
+		u64 std3_ram_cdis : 1;
+		u64 std2_ram_flip : 2;
+		u64 std2_ram_cdis : 1;
+		u64 std1_ram_flip : 2;
+		u64 std1_ram_cdis : 1;
+		u64 std0_ram_flip : 2;
+		u64 std0_ram_cdis : 1;
+		u64 wt_ram_flip : 2;
+		u64 wt_ram_cdis : 1;
+		u64 sc_ram_flip : 2;
+		u64 sc_ram_cdis : 1;
+		u64 reserved_0_0 : 1;
+	} s;
+	struct cvmx_pko_pse_sq5_ecc_ctl0_s cn78xx;
+	struct cvmx_pko_pse_sq5_ecc_ctl0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_ecc_ctl0 cvmx_pko_pse_sq5_ecc_ctl0_t;
+
+/**
+ * cvmx_pko_pse_sq5_ecc_dbe_sts0
+ */
+union cvmx_pko_pse_sq5_ecc_dbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts0_s {
+		u64 sq_pt_ram_dbe : 1;
+		u64 sq_nt_ram_dbe : 1;
+		u64 rt_ram_dbe : 1;
+		u64 tw3_cmd_fifo_ram_dbe : 1;
+		u64 tw2_cmd_fifo_ram_dbe : 1;
+		u64 tw1_cmd_fifo_ram_dbe : 1;
+		u64 tw0_cmd_fifo_ram_dbe : 1;
+		u64 tp3_sram_dbe : 1;
+		u64 tp2_sram_dbe : 1;
+		u64 tp1_sram_dbe : 1;
+		u64 tp0_sram_dbe : 1;
+		u64 sts3_ram_dbe : 1;
+		u64 sts2_ram_dbe : 1;
+		u64 sts1_ram_dbe : 1;
+		u64 sts0_ram_dbe : 1;
+		u64 std3_ram_dbe : 1;
+		u64 std2_ram_dbe : 1;
+		u64 std1_ram_dbe : 1;
+		u64 std0_ram_dbe : 1;
+		u64 wt_ram_dbe : 1;
+		u64 sc_ram_dbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_ecc_dbe_sts0 cvmx_pko_pse_sq5_ecc_dbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0_s {
+		u64 pse_sq5_dbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0 cvmx_pko_pse_sq5_ecc_dbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_pse_sq5_ecc_sbe_sts0
+ */
+union cvmx_pko_pse_sq5_ecc_sbe_sts0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts0_s {
+		u64 sq_pt_ram_sbe : 1;
+		u64 sq_nt_ram_sbe : 1;
+		u64 rt_ram_sbe : 1;
+		u64 tw3_cmd_fifo_ram_sbe : 1;
+		u64 tw2_cmd_fifo_ram_sbe : 1;
+		u64 tw1_cmd_fifo_ram_sbe : 1;
+		u64 tw0_cmd_fifo_ram_sbe : 1;
+		u64 tp3_sram_sbe : 1;
+		u64 tp2_sram_sbe : 1;
+		u64 tp1_sram_sbe : 1;
+		u64 tp0_sram_sbe : 1;
+		u64 sts3_ram_sbe : 1;
+		u64 sts2_ram_sbe : 1;
+		u64 sts1_ram_sbe : 1;
+		u64 sts0_ram_sbe : 1;
+		u64 std3_ram_sbe : 1;
+		u64 std2_ram_sbe : 1;
+		u64 std1_ram_sbe : 1;
+		u64 std0_ram_sbe : 1;
+		u64 wt_ram_sbe : 1;
+		u64 sc_ram_sbe : 1;
+		u64 reserved_0_42 : 43;
+	} s;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts0_s cn78xx;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_ecc_sbe_sts0 cvmx_pko_pse_sq5_ecc_sbe_sts0_t;
+
+/**
+ * cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0
+ */
+union cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0 {
+	u64 u64;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0_s {
+		u64 pse_sq5_sbe_cmb0 : 1;
+		u64 reserved_0_62 : 63;
+	} s;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0_s cn78xx;
+	struct cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0_s cn78xxp1;
+};
+
+typedef union cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0 cvmx_pko_pse_sq5_ecc_sbe_sts_cmb0_t;
+
+/**
+ * cvmx_pko_ptf#_status
+ */
+union cvmx_pko_ptfx_status {
+	u64 u64;
+	struct cvmx_pko_ptfx_status_s {
+		u64 reserved_30_63 : 34;
+		u64 tx_fifo_pkt_credit_cnt : 10;
+		u64 total_in_flight_cnt : 8;
+		u64 in_flight_cnt : 7;
+		u64 mac_num : 5;
+	} s;
+	struct cvmx_pko_ptfx_status_s cn73xx;
+	struct cvmx_pko_ptfx_status_s cn78xx;
+	struct cvmx_pko_ptfx_status_s cn78xxp1;
+	struct cvmx_pko_ptfx_status_s cnf75xx;
+};
+
+typedef union cvmx_pko_ptfx_status cvmx_pko_ptfx_status_t;
+
+/**
+ * cvmx_pko_ptf_iobp_cfg
+ */
+union cvmx_pko_ptf_iobp_cfg {
+	u64 u64;
+	struct cvmx_pko_ptf_iobp_cfg_s {
+		u64 reserved_44_63 : 20;
+		u64 iobp1_ds_opt : 1;
+		u64 iobp0_l2_allocate : 1;
+		u64 iobp1_magic_addr : 35;
+		u64 max_read_size : 7;
+	} s;
+	struct cvmx_pko_ptf_iobp_cfg_s cn73xx;
+	struct cvmx_pko_ptf_iobp_cfg_s cn78xx;
+	struct cvmx_pko_ptf_iobp_cfg_s cn78xxp1;
+	struct cvmx_pko_ptf_iobp_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_ptf_iobp_cfg cvmx_pko_ptf_iobp_cfg_t;
+
+/**
+ * cvmx_pko_ptgf#_cfg
+ *
+ * This register configures a PKO TX FIFO group. PKO supports up to 17 independent
+ * TX FIFOs, where 0-15 are physical and 16 is Virtual/NULL. (PKO drops packets
+ * targeting the NULL FIFO, returning their buffers to the FPA.) PKO puts each
+ * FIFO into one of five groups:
+ *
+ * <pre>
+ *    CSR Name       FIFO's in FIFO Group
+ *   ------------------------------------
+ *   PKO_PTGF0_CFG      0,  1,  2,  3
+ *   PKO_PTGF1_CFG      4,  5,  6,  7
+ *   PKO_PTGF2_CFG      8,  9, 10, 11
+ *   PKO_PTGF3_CFG     12, 13, 14, 15
+ *   PKO_PTGF4_CFG      Virtual/NULL
+ * </pre>
+ */
+union cvmx_pko_ptgfx_cfg {
+	u64 u64;
+	struct cvmx_pko_ptgfx_cfg_s {
+		u64 reserved_7_63 : 57;
+		u64 reset : 1;
+		u64 rate : 3;
+		u64 size : 3;
+	} s;
+	struct cvmx_pko_ptgfx_cfg_cn73xx {
+		u64 reserved_7_63 : 57;
+		u64 reset : 1;
+		u64 reserved_5_5 : 1;
+		u64 rate : 2;
+		u64 size : 3;
+	} cn73xx;
+	struct cvmx_pko_ptgfx_cfg_s cn78xx;
+	struct cvmx_pko_ptgfx_cfg_s cn78xxp1;
+	struct cvmx_pko_ptgfx_cfg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_ptgfx_cfg cvmx_pko_ptgfx_cfg_t;
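+
+/*
+ * Per the grouping table above, physical FIFO n (0-15) is configured by
+ * PKO_PTGF(n/4)_CFG and the Virtual/NULL FIFO by PKO_PTGF4_CFG.  A
+ * minimal lookup sketch, assuming the CVMX_PKO_PTGFX_CFG(i) address
+ * macro defined earlier in this header and the usual csr_rd() accessor:
+ *
+ *   @verbatim
+ *   static u64 pko_fifo_group_size(unsigned int fifo)
+ *   {
+ *       cvmx_pko_ptgfx_cfg_t cfg;
+ *       unsigned int grp = (fifo > 15) ? 4 : fifo / 4;
+ *
+ *       cfg.u64 = csr_rd(CVMX_PKO_PTGFX_CFG(grp));
+ *       return cfg.s.size;
+ *   }
+ *   @endverbatim
+ */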
+
+/**
+ * cvmx_pko_reg_bist_result
+ *
+ * Notes:
+ * Access to the internal BIST results.
+ * Each bit is the BIST result of an individual memory (per bit, 0 = pass and 1 = fail).
+ */
+union cvmx_pko_reg_bist_result {
+	u64 u64;
+	struct cvmx_pko_reg_bist_result_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_pko_reg_bist_result_cn30xx {
+		u64 reserved_27_63 : 37;
+		u64 psb2 : 5;
+		u64 count : 1;
+		u64 rif : 1;
+		u64 wif : 1;
+		u64 ncb : 1;
+		u64 out : 1;
+		u64 crc : 1;
+		u64 chk : 1;
+		u64 qsb : 2;
+		u64 qcb : 2;
+		u64 pdb : 4;
+		u64 psb : 7;
+	} cn30xx;
+	struct cvmx_pko_reg_bist_result_cn30xx cn31xx;
+	struct cvmx_pko_reg_bist_result_cn30xx cn38xx;
+	struct cvmx_pko_reg_bist_result_cn30xx cn38xxp2;
+	struct cvmx_pko_reg_bist_result_cn50xx {
+		u64 reserved_33_63 : 31;
+		u64 csr : 1;
+		u64 iob : 1;
+		u64 out_crc : 1;
+		u64 out_ctl : 3;
+		u64 out_sta : 1;
+		u64 out_wif : 1;
+		u64 prt_chk : 3;
+		u64 prt_nxt : 1;
+		u64 prt_psb : 6;
+		u64 ncb_inb : 2;
+		u64 prt_qcb : 2;
+		u64 prt_qsb : 3;
+		u64 dat_dat : 4;
+		u64 dat_ptr : 4;
+	} cn50xx;
+	struct cvmx_pko_reg_bist_result_cn52xx {
+		u64 reserved_35_63 : 29;
+		u64 csr : 1;
+		u64 iob : 1;
+		u64 out_dat : 1;
+		u64 out_ctl : 3;
+		u64 out_sta : 1;
+		u64 out_wif : 1;
+		u64 prt_chk : 3;
+		u64 prt_nxt : 1;
+		u64 prt_psb : 8;
+		u64 ncb_inb : 2;
+		u64 prt_qcb : 2;
+		u64 prt_qsb : 3;
+		u64 prt_ctl : 2;
+		u64 dat_dat : 2;
+		u64 dat_ptr : 4;
+	} cn52xx;
+	struct cvmx_pko_reg_bist_result_cn52xx cn52xxp1;
+	struct cvmx_pko_reg_bist_result_cn52xx cn56xx;
+	struct cvmx_pko_reg_bist_result_cn52xx cn56xxp1;
+	struct cvmx_pko_reg_bist_result_cn50xx cn58xx;
+	struct cvmx_pko_reg_bist_result_cn50xx cn58xxp1;
+	struct cvmx_pko_reg_bist_result_cn52xx cn61xx;
+	struct cvmx_pko_reg_bist_result_cn52xx cn63xx;
+	struct cvmx_pko_reg_bist_result_cn52xx cn63xxp1;
+	struct cvmx_pko_reg_bist_result_cn52xx cn66xx;
+	struct cvmx_pko_reg_bist_result_cn68xx {
+		u64 reserved_36_63 : 28;
+		u64 crc : 1;
+		u64 csr : 1;
+		u64 iob : 1;
+		u64 out_dat : 1;
+		u64 reserved_31_31 : 1;
+		u64 out_ctl : 2;
+		u64 out_sta : 1;
+		u64 out_wif : 1;
+		u64 prt_chk : 3;
+		u64 prt_nxt : 1;
+		u64 prt_psb7 : 1;
+		u64 reserved_21_21 : 1;
+		u64 prt_psb : 6;
+		u64 ncb_inb : 2;
+		u64 prt_qcb : 2;
+		u64 prt_qsb : 3;
+		u64 prt_ctl : 2;
+		u64 dat_dat : 2;
+		u64 dat_ptr : 4;
+	} cn68xx;
+	struct cvmx_pko_reg_bist_result_cn68xxp1 {
+		u64 reserved_35_63 : 29;
+		u64 csr : 1;
+		u64 iob : 1;
+		u64 out_dat : 1;
+		u64 reserved_31_31 : 1;
+		u64 out_ctl : 2;
+		u64 out_sta : 1;
+		u64 out_wif : 1;
+		u64 prt_chk : 3;
+		u64 prt_nxt : 1;
+		u64 prt_psb7 : 1;
+		u64 reserved_21_21 : 1;
+		u64 prt_psb : 6;
+		u64 ncb_inb : 2;
+		u64 prt_qcb : 2;
+		u64 prt_qsb : 3;
+		u64 prt_ctl : 2;
+		u64 dat_dat : 2;
+		u64 dat_ptr : 4;
+	} cn68xxp1;
+	struct cvmx_pko_reg_bist_result_cn70xx {
+		u64 reserved_30_63 : 34;
+		u64 csr : 1;
+		u64 iob : 1;
+		u64 out_dat : 1;
+		u64 out_ctl : 1;
+		u64 out_sta : 1;
+		u64 out_wif : 1;
+		u64 prt_chk : 3;
+		u64 prt_nxt : 1;
+		u64 prt_psb : 8;
+		u64 ncb_inb : 1;
+		u64 prt_qcb : 1;
+		u64 prt_qsb : 2;
+		u64 prt_ctl : 2;
+		u64 dat_dat : 2;
+		u64 dat_ptr : 4;
+	} cn70xx;
+	struct cvmx_pko_reg_bist_result_cn70xx cn70xxp1;
+	struct cvmx_pko_reg_bist_result_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_bist_result cvmx_pko_reg_bist_result_t;
+
+/**
+ * cvmx_pko_reg_cmd_buf
+ *
+ * Notes:
+ * Sets the command buffer parameters
+ * The size of the command buffer segments is measured in uint64s.  The pool specifies one of
+ * the eight free lists to be used when freeing command buffer segments.
+ */
+union cvmx_pko_reg_cmd_buf {
+	u64 u64;
+	struct cvmx_pko_reg_cmd_buf_s {
+		u64 reserved_23_63 : 41;
+		u64 pool : 3;
+		u64 reserved_13_19 : 7;
+		u64 size : 13;
+	} s;
+	struct cvmx_pko_reg_cmd_buf_s cn30xx;
+	struct cvmx_pko_reg_cmd_buf_s cn31xx;
+	struct cvmx_pko_reg_cmd_buf_s cn38xx;
+	struct cvmx_pko_reg_cmd_buf_s cn38xxp2;
+	struct cvmx_pko_reg_cmd_buf_s cn50xx;
+	struct cvmx_pko_reg_cmd_buf_s cn52xx;
+	struct cvmx_pko_reg_cmd_buf_s cn52xxp1;
+	struct cvmx_pko_reg_cmd_buf_s cn56xx;
+	struct cvmx_pko_reg_cmd_buf_s cn56xxp1;
+	struct cvmx_pko_reg_cmd_buf_s cn58xx;
+	struct cvmx_pko_reg_cmd_buf_s cn58xxp1;
+	struct cvmx_pko_reg_cmd_buf_s cn61xx;
+	struct cvmx_pko_reg_cmd_buf_s cn63xx;
+	struct cvmx_pko_reg_cmd_buf_s cn63xxp1;
+	struct cvmx_pko_reg_cmd_buf_s cn66xx;
+	struct cvmx_pko_reg_cmd_buf_s cn68xx;
+	struct cvmx_pko_reg_cmd_buf_s cn68xxp1;
+	struct cvmx_pko_reg_cmd_buf_cn70xx {
+		u64 reserved_23_63 : 41;
+		u64 pool : 3;
+		u64 reserved_13_19 : 7;
+		u64 size : 13;
+	} cn70xx;
+	struct cvmx_pko_reg_cmd_buf_cn70xx cn70xxp1;
+	struct cvmx_pko_reg_cmd_buf_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_cmd_buf cvmx_pko_reg_cmd_buf_t;
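+
+/*
+ * A minimal setup sketch for the register above, assuming the
+ * CVMX_PKO_REG_CMD_BUF address constant defined earlier in this header
+ * and the usual csr_wr() accessor; SIZE is in 64-bit words and POOL
+ * selects one of the eight FPA free lists:
+ *
+ *   @verbatim
+ *   static void pko_set_cmd_buf(unsigned int pool, unsigned int size_u64s)
+ *   {
+ *       cvmx_pko_reg_cmd_buf_t buf;
+ *
+ *       buf.u64 = 0;
+ *       buf.s.pool = pool;       // FPA pool, 0-7
+ *       buf.s.size = size_u64s;  // segment size in u64 words
+ *       csr_wr(CVMX_PKO_REG_CMD_BUF, buf.u64);
+ *   }
+ *   @endverbatim
+ */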
+
+/**
+ * cvmx_pko_reg_crc_ctl#
+ *
+ * Notes:
+ * Controls datapath reflection when calculating CRC
+ *
+ */
+union cvmx_pko_reg_crc_ctlx {
+	u64 u64;
+	struct cvmx_pko_reg_crc_ctlx_s {
+		u64 reserved_2_63 : 62;
+		u64 invres : 1;
+		u64 refin : 1;
+	} s;
+	struct cvmx_pko_reg_crc_ctlx_s cn38xx;
+	struct cvmx_pko_reg_crc_ctlx_s cn38xxp2;
+	struct cvmx_pko_reg_crc_ctlx_s cn58xx;
+	struct cvmx_pko_reg_crc_ctlx_s cn58xxp1;
+};
+
+typedef union cvmx_pko_reg_crc_ctlx cvmx_pko_reg_crc_ctlx_t;
+
+/**
+ * cvmx_pko_reg_crc_enable
+ *
+ * Notes:
+ * Enables CRC for the GMX ports.
+ *
+ */
+union cvmx_pko_reg_crc_enable {
+	u64 u64;
+	struct cvmx_pko_reg_crc_enable_s {
+		u64 reserved_32_63 : 32;
+		u64 enable : 32;
+	} s;
+	struct cvmx_pko_reg_crc_enable_s cn38xx;
+	struct cvmx_pko_reg_crc_enable_s cn38xxp2;
+	struct cvmx_pko_reg_crc_enable_s cn58xx;
+	struct cvmx_pko_reg_crc_enable_s cn58xxp1;
+};
+
+typedef union cvmx_pko_reg_crc_enable cvmx_pko_reg_crc_enable_t;
+
+/**
+ * cvmx_pko_reg_crc_iv#
+ *
+ * Notes:
+ * Determines the IV used by the CRC algorithm
+ * * PKO_CRC_IV
+ *  PKO_CRC_IV controls the initial state of the CRC algorithm.  Octane can
+ *  support a wide range of CRC algorithms and as such, the IV must be
+ *  carefully constructed to meet the specific algorithm.  The code below
+ *  determines the value to program into Octane based on the algorithm's IV
+ *  and width.  In the case of Octane, the width should always be 32.
+ *
+ *  PKO_CRC_IV0 sets the IV for ports 0-15 while PKO_CRC_IV1 sets the IV for
+ *  ports 16-31.
+ *
+ *   @verbatim
+ *   unsigned octane_crc_iv(unsigned algorithm_iv, unsigned poly, unsigned w)
+ *   {
+ *     int i;
+ *     int doit;
+ *     unsigned int current_val = algorithm_iv;
+ *
+ *     for (i = 0; i < w; i++) {
+ *       doit = current_val & 0x1;
+ *
+ *       if (doit)
+ *         current_val ^= poly;
+ *       assert(!(current_val & 0x1));
+ *
+ *       current_val = (current_val >> 1) | (doit << (w - 1));
+ *     }
+ *
+ *     return current_val;
+ *   }
+ *   @endverbatim
+ */
+union cvmx_pko_reg_crc_ivx {
+	u64 u64;
+	struct cvmx_pko_reg_crc_ivx_s {
+		u64 reserved_32_63 : 32;
+		u64 iv : 32;
+	} s;
+	struct cvmx_pko_reg_crc_ivx_s cn38xx;
+	struct cvmx_pko_reg_crc_ivx_s cn38xxp2;
+	struct cvmx_pko_reg_crc_ivx_s cn58xx;
+	struct cvmx_pko_reg_crc_ivx_s cn58xxp1;
+};
+
+typedef union cvmx_pko_reg_crc_ivx cvmx_pko_reg_crc_ivx_t;
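+
+/*
+ * As an assumed example only (standard IEEE 802.3 CRC-32 parameters,
+ * not a required configuration), the helper from the comment above
+ * would be called with the algorithm's normal, non-reflected polynomial
+ * and a width of 32:
+ *
+ *   @verbatim
+ *   // CRC-32: IV 0xffffffff, polynomial 0x04c11db7, width 32
+ *   unsigned int iv = octane_crc_iv(0xffffffff, 0x04c11db7, 32);
+ *   @endverbatim
+ */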
+
+/**
+ * cvmx_pko_reg_debug0
+ *
+ * Notes:
+ * Note that this CSR is present only in chip revisions beginning with pass2.
+ *
+ */
+union cvmx_pko_reg_debug0 {
+	u64 u64;
+	struct cvmx_pko_reg_debug0_s {
+		u64 asserts : 64;
+	} s;
+	struct cvmx_pko_reg_debug0_cn30xx {
+		u64 reserved_17_63 : 47;
+		u64 asserts : 17;
+	} cn30xx;
+	struct cvmx_pko_reg_debug0_cn30xx cn31xx;
+	struct cvmx_pko_reg_debug0_cn30xx cn38xx;
+	struct cvmx_pko_reg_debug0_cn30xx cn38xxp2;
+	struct cvmx_pko_reg_debug0_s cn50xx;
+	struct cvmx_pko_reg_debug0_s cn52xx;
+	struct cvmx_pko_reg_debug0_s cn52xxp1;
+	struct cvmx_pko_reg_debug0_s cn56xx;
+	struct cvmx_pko_reg_debug0_s cn56xxp1;
+	struct cvmx_pko_reg_debug0_s cn58xx;
+	struct cvmx_pko_reg_debug0_s cn58xxp1;
+	struct cvmx_pko_reg_debug0_s cn61xx;
+	struct cvmx_pko_reg_debug0_s cn63xx;
+	struct cvmx_pko_reg_debug0_s cn63xxp1;
+	struct cvmx_pko_reg_debug0_s cn66xx;
+	struct cvmx_pko_reg_debug0_s cn68xx;
+	struct cvmx_pko_reg_debug0_s cn68xxp1;
+	struct cvmx_pko_reg_debug0_s cn70xx;
+	struct cvmx_pko_reg_debug0_s cn70xxp1;
+	struct cvmx_pko_reg_debug0_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_debug0 cvmx_pko_reg_debug0_t;
+
+/**
+ * cvmx_pko_reg_debug1
+ */
+union cvmx_pko_reg_debug1 {
+	u64 u64;
+	struct cvmx_pko_reg_debug1_s {
+		u64 asserts : 64;
+	} s;
+	struct cvmx_pko_reg_debug1_s cn50xx;
+	struct cvmx_pko_reg_debug1_s cn52xx;
+	struct cvmx_pko_reg_debug1_s cn52xxp1;
+	struct cvmx_pko_reg_debug1_s cn56xx;
+	struct cvmx_pko_reg_debug1_s cn56xxp1;
+	struct cvmx_pko_reg_debug1_s cn58xx;
+	struct cvmx_pko_reg_debug1_s cn58xxp1;
+	struct cvmx_pko_reg_debug1_s cn61xx;
+	struct cvmx_pko_reg_debug1_s cn63xx;
+	struct cvmx_pko_reg_debug1_s cn63xxp1;
+	struct cvmx_pko_reg_debug1_s cn66xx;
+	struct cvmx_pko_reg_debug1_s cn68xx;
+	struct cvmx_pko_reg_debug1_s cn68xxp1;
+	struct cvmx_pko_reg_debug1_s cn70xx;
+	struct cvmx_pko_reg_debug1_s cn70xxp1;
+	struct cvmx_pko_reg_debug1_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_debug1 cvmx_pko_reg_debug1_t;
+
+/**
+ * cvmx_pko_reg_debug2
+ */
+union cvmx_pko_reg_debug2 {
+	u64 u64;
+	struct cvmx_pko_reg_debug2_s {
+		u64 asserts : 64;
+	} s;
+	struct cvmx_pko_reg_debug2_s cn50xx;
+	struct cvmx_pko_reg_debug2_s cn52xx;
+	struct cvmx_pko_reg_debug2_s cn52xxp1;
+	struct cvmx_pko_reg_debug2_s cn56xx;
+	struct cvmx_pko_reg_debug2_s cn56xxp1;
+	struct cvmx_pko_reg_debug2_s cn58xx;
+	struct cvmx_pko_reg_debug2_s cn58xxp1;
+	struct cvmx_pko_reg_debug2_s cn61xx;
+	struct cvmx_pko_reg_debug2_s cn63xx;
+	struct cvmx_pko_reg_debug2_s cn63xxp1;
+	struct cvmx_pko_reg_debug2_s cn66xx;
+	struct cvmx_pko_reg_debug2_s cn68xx;
+	struct cvmx_pko_reg_debug2_s cn68xxp1;
+	struct cvmx_pko_reg_debug2_s cn70xx;
+	struct cvmx_pko_reg_debug2_s cn70xxp1;
+	struct cvmx_pko_reg_debug2_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_debug2 cvmx_pko_reg_debug2_t;
+
+/**
+ * cvmx_pko_reg_debug3
+ */
+union cvmx_pko_reg_debug3 {
+	u64 u64;
+	struct cvmx_pko_reg_debug3_s {
+		u64 asserts : 64;
+	} s;
+	struct cvmx_pko_reg_debug3_s cn50xx;
+	struct cvmx_pko_reg_debug3_s cn52xx;
+	struct cvmx_pko_reg_debug3_s cn52xxp1;
+	struct cvmx_pko_reg_debug3_s cn56xx;
+	struct cvmx_pko_reg_debug3_s cn56xxp1;
+	struct cvmx_pko_reg_debug3_s cn58xx;
+	struct cvmx_pko_reg_debug3_s cn58xxp1;
+	struct cvmx_pko_reg_debug3_s cn61xx;
+	struct cvmx_pko_reg_debug3_s cn63xx;
+	struct cvmx_pko_reg_debug3_s cn63xxp1;
+	struct cvmx_pko_reg_debug3_s cn66xx;
+	struct cvmx_pko_reg_debug3_s cn68xx;
+	struct cvmx_pko_reg_debug3_s cn68xxp1;
+	struct cvmx_pko_reg_debug3_s cn70xx;
+	struct cvmx_pko_reg_debug3_s cn70xxp1;
+	struct cvmx_pko_reg_debug3_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_debug3 cvmx_pko_reg_debug3_t;
+
+/**
+ * cvmx_pko_reg_debug4
+ */
+union cvmx_pko_reg_debug4 {
+	u64 u64;
+	struct cvmx_pko_reg_debug4_s {
+		u64 asserts : 64;
+	} s;
+	struct cvmx_pko_reg_debug4_s cn68xx;
+	struct cvmx_pko_reg_debug4_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_debug4 cvmx_pko_reg_debug4_t;
+
+/**
+ * cvmx_pko_reg_engine_inflight
+ *
+ * Notes:
+ * Sets the maximum number of inflight packets, per engine.  Values greater than 4 are illegal.
+ * Setting an engine's value to 0 effectively stops the engine.
+ */
+union cvmx_pko_reg_engine_inflight {
+	u64 u64;
+	struct cvmx_pko_reg_engine_inflight_s {
+		u64 engine15 : 4;
+		u64 engine14 : 4;
+		u64 engine13 : 4;
+		u64 engine12 : 4;
+		u64 engine11 : 4;
+		u64 engine10 : 4;
+		u64 engine9 : 4;
+		u64 engine8 : 4;
+		u64 engine7 : 4;
+		u64 engine6 : 4;
+		u64 engine5 : 4;
+		u64 engine4 : 4;
+		u64 engine3 : 4;
+		u64 engine2 : 4;
+		u64 engine1 : 4;
+		u64 engine0 : 4;
+	} s;
+	struct cvmx_pko_reg_engine_inflight_cn52xx {
+		u64 reserved_40_63 : 24;
+		u64 engine9 : 4;
+		u64 engine8 : 4;
+		u64 engine7 : 4;
+		u64 engine6 : 4;
+		u64 engine5 : 4;
+		u64 engine4 : 4;
+		u64 engine3 : 4;
+		u64 engine2 : 4;
+		u64 engine1 : 4;
+		u64 engine0 : 4;
+	} cn52xx;
+	struct cvmx_pko_reg_engine_inflight_cn52xx cn52xxp1;
+	struct cvmx_pko_reg_engine_inflight_cn52xx cn56xx;
+	struct cvmx_pko_reg_engine_inflight_cn52xx cn56xxp1;
+	struct cvmx_pko_reg_engine_inflight_cn61xx {
+		u64 reserved_56_63 : 8;
+		u64 engine13 : 4;
+		u64 engine12 : 4;
+		u64 engine11 : 4;
+		u64 engine10 : 4;
+		u64 engine9 : 4;
+		u64 engine8 : 4;
+		u64 engine7 : 4;
+		u64 engine6 : 4;
+		u64 engine5 : 4;
+		u64 engine4 : 4;
+		u64 engine3 : 4;
+		u64 engine2 : 4;
+		u64 engine1 : 4;
+		u64 engine0 : 4;
+	} cn61xx;
+	struct cvmx_pko_reg_engine_inflight_cn63xx {
+		u64 reserved_48_63 : 16;
+		u64 engine11 : 4;
+		u64 engine10 : 4;
+		u64 engine9 : 4;
+		u64 engine8 : 4;
+		u64 engine7 : 4;
+		u64 engine6 : 4;
+		u64 engine5 : 4;
+		u64 engine4 : 4;
+		u64 engine3 : 4;
+		u64 engine2 : 4;
+		u64 engine1 : 4;
+		u64 engine0 : 4;
+	} cn63xx;
+	struct cvmx_pko_reg_engine_inflight_cn63xx cn63xxp1;
+	struct cvmx_pko_reg_engine_inflight_cn61xx cn66xx;
+	struct cvmx_pko_reg_engine_inflight_s cn68xx;
+	struct cvmx_pko_reg_engine_inflight_s cn68xxp1;
+	struct cvmx_pko_reg_engine_inflight_cn61xx cn70xx;
+	struct cvmx_pko_reg_engine_inflight_cn61xx cn70xxp1;
+	struct cvmx_pko_reg_engine_inflight_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_engine_inflight cvmx_pko_reg_engine_inflight_t;
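+
+/*
+ * Each ENGINEn field above is 4 bits wide at bit offset 4*n, so a
+ * generic accessor can shift instead of naming each field.  A minimal
+ * sketch, assuming the CVMX_PKO_REG_ENGINE_INFLIGHT address constant
+ * defined earlier in this header and the usual csr_rd()/csr_wr()
+ * accessors:
+ *
+ *   @verbatim
+ *   static void pko_set_engine_inflight(unsigned int engine, unsigned int cnt)
+ *   {
+ *       u64 reg = csr_rd(CVMX_PKO_REG_ENGINE_INFLIGHT);
+ *
+ *       cnt &= 0xf;  // 4-bit field; values > 4 are illegal, 0 stops it
+ *       reg &= ~(0xfull << (engine * 4));
+ *       reg |= (u64)cnt << (engine * 4);
+ *       csr_wr(CVMX_PKO_REG_ENGINE_INFLIGHT, reg);
+ *   }
+ *   @endverbatim
+ */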
+
+/**
+ * cvmx_pko_reg_engine_inflight1
+ *
+ * Notes:
+ * Sets the maximum number of inflight packets, per engine.  Values greater than 8 are illegal.
+ * Setting an engine's value to 0 effectively stops the engine.
+ */
+union cvmx_pko_reg_engine_inflight1 {
+	u64 u64;
+	struct cvmx_pko_reg_engine_inflight1_s {
+		u64 reserved_16_63 : 48;
+		u64 engine19 : 4;
+		u64 engine18 : 4;
+		u64 engine17 : 4;
+		u64 engine16 : 4;
+	} s;
+	struct cvmx_pko_reg_engine_inflight1_s cn68xx;
+	struct cvmx_pko_reg_engine_inflight1_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_engine_inflight1 cvmx_pko_reg_engine_inflight1_t;
+
+/**
+ * cvmx_pko_reg_engine_storage#
+ *
+ * Notes:
+ * The PKO has 40KB of local storage, consisting of twenty 2KB chunks.  Up to 15 contiguous chunks may be mapped per engine.
+ * The total of all mapped storage must not exceed 40KB.
+ */
+union cvmx_pko_reg_engine_storagex {
+	u64 u64;
+	struct cvmx_pko_reg_engine_storagex_s {
+		u64 engine15 : 4;
+		u64 engine14 : 4;
+		u64 engine13 : 4;
+		u64 engine12 : 4;
+		u64 engine11 : 4;
+		u64 engine10 : 4;
+		u64 engine9 : 4;
+		u64 engine8 : 4;
+		u64 engine7 : 4;
+		u64 engine6 : 4;
+		u64 engine5 : 4;
+		u64 engine4 : 4;
+		u64 engine3 : 4;
+		u64 engine2 : 4;
+		u64 engine1 : 4;
+		u64 engine0 : 4;
+	} s;
+	struct cvmx_pko_reg_engine_storagex_s cn68xx;
+	struct cvmx_pko_reg_engine_storagex_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_engine_storagex cvmx_pko_reg_engine_storagex_t;
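+
+/*
+ * A minimal validity check for the 40KB limit above, assuming each
+ * 4-bit ENGINEn field holds that engine's mapped chunk count and that
+ * the second indexed register covers engines 16-19 (both assumptions,
+ * mirroring PKO_REG_ENGINE_INFLIGHT1):
+ *
+ *   @verbatim
+ *   static int pko_storage_map_valid(u64 storage0, u64 storage1)
+ *   {
+ *       unsigned int i, chunks = 0;
+ *
+ *       for (i = 0; i < 16; i++)           // engines 0-15
+ *           chunks += (storage0 >> (i * 4)) & 0xf;
+ *       for (i = 0; i < 4; i++)            // engines 16-19
+ *           chunks += (storage1 >> (i * 4)) & 0xf;
+ *       return chunks <= 20;               // 20 x 2KB = 40KB
+ *   }
+ *   @endverbatim
+ */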
+
+/**
+ * cvmx_pko_reg_engine_thresh
+ *
+ * Notes:
+ * When not enabled, packet data may be sent as soon as it is written into PKO's internal buffers.
+ * When enabled and the packet fits entirely in the PKO's internal buffer, none of the packet data will
+ * be sent until all of it has been written into the PKO's internal buffer.  Note that a packet is
+ * considered to fit entirely only if the packet's size is <= BUFFER_SIZE-8.  When enabled and the
+ * packet does not fit entirely in the PKO's internal buffer, none of the packet data will be sent until
+ * at least BUFFER_SIZE-256 bytes of the packet have been written into the PKO's internal buffer
+ * (note that BUFFER_SIZE is a function of PKO_REG_GMX_PORT_MODE above)
+ */
+union cvmx_pko_reg_engine_thresh {
+	u64 u64;
+	struct cvmx_pko_reg_engine_thresh_s {
+		u64 reserved_20_63 : 44;
+		u64 mask : 20;
+	} s;
+	struct cvmx_pko_reg_engine_thresh_cn52xx {
+		u64 reserved_10_63 : 54;
+		u64 mask : 10;
+	} cn52xx;
+	struct cvmx_pko_reg_engine_thresh_cn52xx cn52xxp1;
+	struct cvmx_pko_reg_engine_thresh_cn52xx cn56xx;
+	struct cvmx_pko_reg_engine_thresh_cn52xx cn56xxp1;
+	struct cvmx_pko_reg_engine_thresh_cn61xx {
+		u64 reserved_14_63 : 50;
+		u64 mask : 14;
+	} cn61xx;
+	struct cvmx_pko_reg_engine_thresh_cn63xx {
+		u64 reserved_12_63 : 52;
+		u64 mask : 12;
+	} cn63xx;
+	struct cvmx_pko_reg_engine_thresh_cn63xx cn63xxp1;
+	struct cvmx_pko_reg_engine_thresh_cn61xx cn66xx;
+	struct cvmx_pko_reg_engine_thresh_s cn68xx;
+	struct cvmx_pko_reg_engine_thresh_s cn68xxp1;
+	struct cvmx_pko_reg_engine_thresh_cn61xx cn70xx;
+	struct cvmx_pko_reg_engine_thresh_cn61xx cn70xxp1;
+	struct cvmx_pko_reg_engine_thresh_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_engine_thresh cvmx_pko_reg_engine_thresh_t;
+
+/**
+ * cvmx_pko_reg_error
+ *
+ * Notes:
+ * Note that this CSR is present only in chip revisions beginning with pass2.
+ *
+ */
+union cvmx_pko_reg_error {
+	u64 u64;
+	struct cvmx_pko_reg_error_s {
+		u64 reserved_4_63 : 60;
+		u64 loopback : 1;
+		u64 currzero : 1;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} s;
+	struct cvmx_pko_reg_error_cn30xx {
+		u64 reserved_2_63 : 62;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} cn30xx;
+	struct cvmx_pko_reg_error_cn30xx cn31xx;
+	struct cvmx_pko_reg_error_cn30xx cn38xx;
+	struct cvmx_pko_reg_error_cn30xx cn38xxp2;
+	struct cvmx_pko_reg_error_cn50xx {
+		u64 reserved_3_63 : 61;
+		u64 currzero : 1;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} cn50xx;
+	struct cvmx_pko_reg_error_cn50xx cn52xx;
+	struct cvmx_pko_reg_error_cn50xx cn52xxp1;
+	struct cvmx_pko_reg_error_cn50xx cn56xx;
+	struct cvmx_pko_reg_error_cn50xx cn56xxp1;
+	struct cvmx_pko_reg_error_cn50xx cn58xx;
+	struct cvmx_pko_reg_error_cn50xx cn58xxp1;
+	struct cvmx_pko_reg_error_cn50xx cn61xx;
+	struct cvmx_pko_reg_error_cn50xx cn63xx;
+	struct cvmx_pko_reg_error_cn50xx cn63xxp1;
+	struct cvmx_pko_reg_error_cn50xx cn66xx;
+	struct cvmx_pko_reg_error_s cn68xx;
+	struct cvmx_pko_reg_error_s cn68xxp1;
+	struct cvmx_pko_reg_error_cn50xx cn70xx;
+	struct cvmx_pko_reg_error_cn50xx cn70xxp1;
+	struct cvmx_pko_reg_error_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_error cvmx_pko_reg_error_t;
+
+/**
+ * cvmx_pko_reg_flags
+ *
+ * Notes:
+ * When set, ENA_PKO enables the PKO picker and places the PKO in normal operation.  When set, ENA_DWB
+ * enables the use of DontWriteBacks during the buffer freeing operations.  When not set, STORE_BE inverts
+ * bits[2:0] of the STORE0 byte write address.  When set, RESET causes a 4-cycle reset pulse to the
+ * entire box.
+ */
+union cvmx_pko_reg_flags {
+	u64 u64;
+	struct cvmx_pko_reg_flags_s {
+		u64 reserved_9_63 : 55;
+		u64 dis_perf3 : 1;
+		u64 dis_perf2 : 1;
+		u64 dis_perf1 : 1;
+		u64 dis_perf0 : 1;
+		u64 ena_throttle : 1;
+		u64 reset : 1;
+		u64 store_be : 1;
+		u64 ena_dwb : 1;
+		u64 ena_pko : 1;
+	} s;
+	struct cvmx_pko_reg_flags_cn30xx {
+		u64 reserved_4_63 : 60;
+		u64 reset : 1;
+		u64 store_be : 1;
+		u64 ena_dwb : 1;
+		u64 ena_pko : 1;
+	} cn30xx;
+	struct cvmx_pko_reg_flags_cn30xx cn31xx;
+	struct cvmx_pko_reg_flags_cn30xx cn38xx;
+	struct cvmx_pko_reg_flags_cn30xx cn38xxp2;
+	struct cvmx_pko_reg_flags_cn30xx cn50xx;
+	struct cvmx_pko_reg_flags_cn30xx cn52xx;
+	struct cvmx_pko_reg_flags_cn30xx cn52xxp1;
+	struct cvmx_pko_reg_flags_cn30xx cn56xx;
+	struct cvmx_pko_reg_flags_cn30xx cn56xxp1;
+	struct cvmx_pko_reg_flags_cn30xx cn58xx;
+	struct cvmx_pko_reg_flags_cn30xx cn58xxp1;
+	struct cvmx_pko_reg_flags_cn61xx {
+		u64 reserved_9_63 : 55;
+		u64 dis_perf3 : 1;
+		u64 dis_perf2 : 1;
+		u64 reserved_4_6 : 3;
+		u64 reset : 1;
+		u64 store_be : 1;
+		u64 ena_dwb : 1;
+		u64 ena_pko : 1;
+	} cn61xx;
+	struct cvmx_pko_reg_flags_cn30xx cn63xx;
+	struct cvmx_pko_reg_flags_cn30xx cn63xxp1;
+	struct cvmx_pko_reg_flags_cn61xx cn66xx;
+	struct cvmx_pko_reg_flags_s cn68xx;
+	struct cvmx_pko_reg_flags_cn68xxp1 {
+		u64 reserved_7_63 : 57;
+		u64 dis_perf1 : 1;
+		u64 dis_perf0 : 1;
+		u64 ena_throttle : 1;
+		u64 reset : 1;
+		u64 store_be : 1;
+		u64 ena_dwb : 1;
+		u64 ena_pko : 1;
+	} cn68xxp1;
+	struct cvmx_pko_reg_flags_cn61xx cn70xx;
+	struct cvmx_pko_reg_flags_cn61xx cn70xxp1;
+	struct cvmx_pko_reg_flags_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_flags cvmx_pko_reg_flags_t;
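+
+/*
+ * Usage sketch (illustrative only): bring the PKO into normal operation as
+ * described in the notes above, by setting ENA_DWB and ENA_PKO.  The
+ * function name is hypothetical; the CSR accessors and the
+ * CVMX_PKO_REG_FLAGS address macro are assumed from the surrounding SDK.
+ */
+static inline void cvmx_pko_enable_sketch(void)
+{
+	cvmx_pko_reg_flags_t flags;
+
+	flags.u64 = cvmx_read_csr(CVMX_PKO_REG_FLAGS);
+	flags.s.ena_dwb = 1;	/* allow DontWriteBacks on buffer frees */
+	flags.s.ena_pko = 1;	/* start the PKO picker */
+	cvmx_write_csr(CVMX_PKO_REG_FLAGS, flags.u64);
+}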
+
+/**
+ * cvmx_pko_reg_gmx_port_mode
+ *
+ * Notes:
+ * The system has a total of 2 + 4 + 4 ports and 2 + 1 + 1 engines (GM0 + PCI + LOOP).
+ * This CSR sets the number of GMX0 ports and amount of local storage per engine.
+ * It has no effect on the number of ports or amount of local storage per engine for PCI and LOOP.
+ * When both GMX ports are used (MODE0=3), each GMX engine has 10kB of local
+ * storage.  Increasing MODE0 to 4 decreases the number of GMX ports to 1 and
+ * increases the local storage for the one remaining PKO GMX engine to 20kB.
+ * MODE0 values 0, 1, and 2, as well as values greater than 4, are illegal.
+ *
+ * MODE0   GMX0  PCI   LOOP  GMX0            PCI             LOOP
+ *         ports ports ports storage/engine  storage/engine  storage/engine
+ * 3       2     4     4     10.0kB          2.5kB           2.5kB
+ * 4       1     4     4     20.0kB          2.5kB           2.5kB
+ */
+union cvmx_pko_reg_gmx_port_mode {
+	u64 u64;
+	struct cvmx_pko_reg_gmx_port_mode_s {
+		u64 reserved_6_63 : 58;
+		u64 mode1 : 3;
+		u64 mode0 : 3;
+	} s;
+	struct cvmx_pko_reg_gmx_port_mode_s cn30xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn31xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn38xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn38xxp2;
+	struct cvmx_pko_reg_gmx_port_mode_s cn50xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn52xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn52xxp1;
+	struct cvmx_pko_reg_gmx_port_mode_s cn56xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn56xxp1;
+	struct cvmx_pko_reg_gmx_port_mode_s cn58xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn58xxp1;
+	struct cvmx_pko_reg_gmx_port_mode_s cn61xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn63xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn63xxp1;
+	struct cvmx_pko_reg_gmx_port_mode_s cn66xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn70xx;
+	struct cvmx_pko_reg_gmx_port_mode_s cn70xxp1;
+	struct cvmx_pko_reg_gmx_port_mode_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_gmx_port_mode cvmx_pko_reg_gmx_port_mode_t;
+
+/**
+ * cvmx_pko_reg_int_mask
+ *
+ * Notes:
+ * When a mask bit is set, the corresponding interrupt is enabled.
+ *
+ */
+union cvmx_pko_reg_int_mask {
+	u64 u64;
+	struct cvmx_pko_reg_int_mask_s {
+		u64 reserved_4_63 : 60;
+		u64 loopback : 1;
+		u64 currzero : 1;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} s;
+	struct cvmx_pko_reg_int_mask_cn30xx {
+		u64 reserved_2_63 : 62;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} cn30xx;
+	struct cvmx_pko_reg_int_mask_cn30xx cn31xx;
+	struct cvmx_pko_reg_int_mask_cn30xx cn38xx;
+	struct cvmx_pko_reg_int_mask_cn30xx cn38xxp2;
+	struct cvmx_pko_reg_int_mask_cn50xx {
+		u64 reserved_3_63 : 61;
+		u64 currzero : 1;
+		u64 doorbell : 1;
+		u64 parity : 1;
+	} cn50xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn52xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn52xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cn56xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn56xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cn58xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn58xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cn61xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn63xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn63xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cn66xx;
+	struct cvmx_pko_reg_int_mask_s cn68xx;
+	struct cvmx_pko_reg_int_mask_s cn68xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cn70xx;
+	struct cvmx_pko_reg_int_mask_cn50xx cn70xxp1;
+	struct cvmx_pko_reg_int_mask_cn50xx cnf71xx;
+};
+
+typedef union cvmx_pko_reg_int_mask cvmx_pko_reg_int_mask_t;
+
+/**
+ * cvmx_pko_reg_loopback_bpid
+ *
+ * Notes:
+ * None.
+ *
+ */
+union cvmx_pko_reg_loopback_bpid {
+	u64 u64;
+	struct cvmx_pko_reg_loopback_bpid_s {
+		u64 reserved_59_63 : 5;
+		u64 bpid7 : 6;
+		u64 reserved_52_52 : 1;
+		u64 bpid6 : 6;
+		u64 reserved_45_45 : 1;
+		u64 bpid5 : 6;
+		u64 reserved_38_38 : 1;
+		u64 bpid4 : 6;
+		u64 reserved_31_31 : 1;
+		u64 bpid3 : 6;
+		u64 reserved_24_24 : 1;
+		u64 bpid2 : 6;
+		u64 reserved_17_17 : 1;
+		u64 bpid1 : 6;
+		u64 reserved_10_10 : 1;
+		u64 bpid0 : 6;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_pko_reg_loopback_bpid_s cn68xx;
+	struct cvmx_pko_reg_loopback_bpid_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_loopback_bpid cvmx_pko_reg_loopback_bpid_t;
+
+/**
+ * cvmx_pko_reg_loopback_pkind
+ *
+ * Notes:
+ * None.
+ *
+ */
+union cvmx_pko_reg_loopback_pkind {
+	u64 u64;
+	struct cvmx_pko_reg_loopback_pkind_s {
+		u64 reserved_59_63 : 5;
+		u64 pkind7 : 6;
+		u64 reserved_52_52 : 1;
+		u64 pkind6 : 6;
+		u64 reserved_45_45 : 1;
+		u64 pkind5 : 6;
+		u64 reserved_38_38 : 1;
+		u64 pkind4 : 6;
+		u64 reserved_31_31 : 1;
+		u64 pkind3 : 6;
+		u64 reserved_24_24 : 1;
+		u64 pkind2 : 6;
+		u64 reserved_17_17 : 1;
+		u64 pkind1 : 6;
+		u64 reserved_10_10 : 1;
+		u64 pkind0 : 6;
+		u64 num_ports : 4;
+	} s;
+	struct cvmx_pko_reg_loopback_pkind_s cn68xx;
+	struct cvmx_pko_reg_loopback_pkind_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_loopback_pkind cvmx_pko_reg_loopback_pkind_t;
+
+/**
+ * cvmx_pko_reg_min_pkt
+ *
+ * Notes:
+ * This CSR is used with PKO_MEM_IPORT_PTRS[MIN_PKT] to select the minimum packet size.  Packets whose
+ * size in bytes < (SIZEn+1) are zero-padded to (SIZEn+1) bytes.  Note that this does not include CRC bytes.
+ * SIZE0=0 is read-only and is used when no padding is desired.
+ */
+union cvmx_pko_reg_min_pkt {
+	u64 u64;
+	struct cvmx_pko_reg_min_pkt_s {
+		u64 size7 : 8;
+		u64 size6 : 8;
+		u64 size5 : 8;
+		u64 size4 : 8;
+		u64 size3 : 8;
+		u64 size2 : 8;
+		u64 size1 : 8;
+		u64 size0 : 8;
+	} s;
+	struct cvmx_pko_reg_min_pkt_s cn68xx;
+	struct cvmx_pko_reg_min_pkt_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_min_pkt cvmx_pko_reg_min_pkt_t;
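+
+/*
+ * Usage sketch (illustrative only): program SIZE1 so that packets selected
+ * with PKO_MEM_IPORT_PTRS[MIN_PKT] = 1 are zero-padded to (SIZE1 + 1) = 60
+ * bytes, the Ethernet minimum frame length less the 4 CRC bytes that are
+ * not counted here.  The function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pko_min_pkt_sketch(void)
+{
+	cvmx_pko_reg_min_pkt_t min;
+
+	min.u64 = cvmx_read_csr(CVMX_PKO_REG_MIN_PKT);
+	min.s.size1 = 59;	/* pad to 60 bytes; SIZE0 stays 0 (no padding) */
+	cvmx_write_csr(CVMX_PKO_REG_MIN_PKT, min.u64);
+}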
+
+/**
+ * cvmx_pko_reg_preempt
+ */
+union cvmx_pko_reg_preempt {
+	u64 u64;
+	struct cvmx_pko_reg_preempt_s {
+		u64 reserved_16_63 : 48;
+		u64 min_size : 16;
+	} s;
+	struct cvmx_pko_reg_preempt_s cn52xx;
+	struct cvmx_pko_reg_preempt_s cn52xxp1;
+	struct cvmx_pko_reg_preempt_s cn56xx;
+	struct cvmx_pko_reg_preempt_s cn56xxp1;
+	struct cvmx_pko_reg_preempt_s cn61xx;
+	struct cvmx_pko_reg_preempt_s cn63xx;
+	struct cvmx_pko_reg_preempt_s cn63xxp1;
+	struct cvmx_pko_reg_preempt_s cn66xx;
+	struct cvmx_pko_reg_preempt_s cn68xx;
+	struct cvmx_pko_reg_preempt_s cn68xxp1;
+	struct cvmx_pko_reg_preempt_s cn70xx;
+	struct cvmx_pko_reg_preempt_s cn70xxp1;
+	struct cvmx_pko_reg_preempt_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_preempt cvmx_pko_reg_preempt_t;
+
+/**
+ * cvmx_pko_reg_queue_mode
+ *
+ * Notes:
+ * Sets the number of queues and the amount of local storage per queue.
+ * The system has a total of 256 queues and (256*8) words of local command storage.  This CSR sets the
+ * number of queues that are used.  Increasing the value of MODE by 1 decreases the number of queues
+ * by a power of 2 and increases the local storage per queue by a power of 2.
+ * MODEn queues storage/queue
+ * 0     256     64B ( 8 words)
+ * 1     128    128B (16 words)
+ * 2      64    256B (32 words)
+ */
+union cvmx_pko_reg_queue_mode {
+	u64 u64;
+	struct cvmx_pko_reg_queue_mode_s {
+		u64 reserved_2_63 : 62;
+		u64 mode : 2;
+	} s;
+	struct cvmx_pko_reg_queue_mode_s cn30xx;
+	struct cvmx_pko_reg_queue_mode_s cn31xx;
+	struct cvmx_pko_reg_queue_mode_s cn38xx;
+	struct cvmx_pko_reg_queue_mode_s cn38xxp2;
+	struct cvmx_pko_reg_queue_mode_s cn50xx;
+	struct cvmx_pko_reg_queue_mode_s cn52xx;
+	struct cvmx_pko_reg_queue_mode_s cn52xxp1;
+	struct cvmx_pko_reg_queue_mode_s cn56xx;
+	struct cvmx_pko_reg_queue_mode_s cn56xxp1;
+	struct cvmx_pko_reg_queue_mode_s cn58xx;
+	struct cvmx_pko_reg_queue_mode_s cn58xxp1;
+	struct cvmx_pko_reg_queue_mode_s cn61xx;
+	struct cvmx_pko_reg_queue_mode_s cn63xx;
+	struct cvmx_pko_reg_queue_mode_s cn63xxp1;
+	struct cvmx_pko_reg_queue_mode_s cn66xx;
+	struct cvmx_pko_reg_queue_mode_s cn68xx;
+	struct cvmx_pko_reg_queue_mode_s cn68xxp1;
+	struct cvmx_pko_reg_queue_mode_s cn70xx;
+	struct cvmx_pko_reg_queue_mode_s cn70xxp1;
+	struct cvmx_pko_reg_queue_mode_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_queue_mode cvmx_pko_reg_queue_mode_t;
+
+/**
+ * cvmx_pko_reg_queue_preempt
+ *
+ * Notes:
+ * Per QID, setting both PREEMPTER=1 and PREEMPTEE=1 is illegal and sets only PREEMPTER=1.
+ * This CSR is used with PKO_MEM_QUEUE_PTRS and PKO_REG_QUEUE_PTRS1.  When programming queues, the
+ * programming sequence must first write PKO_REG_QUEUE_PREEMPT, then PKO_REG_QUEUE_PTRS1 and then
+ * PKO_MEM_QUEUE_PTRS for each queue.  Preemption is supported only on queues that are ultimately
+ * mapped to engines 0-7.  It is illegal to set preemptee or preempter for a queue that is ultimately
+ * mapped to engines 8-11.
+ *
+ * Also, PKO_REG_ENGINE_INFLIGHT must be at least 2 for any engine on which preemption is enabled.
+ *
+ * See the descriptions of PKO_MEM_QUEUE_PTRS for further explanation of queue programming.
+ */
+union cvmx_pko_reg_queue_preempt {
+	u64 u64;
+	struct cvmx_pko_reg_queue_preempt_s {
+		u64 reserved_2_63 : 62;
+		u64 preemptee : 1;
+		u64 preempter : 1;
+	} s;
+	struct cvmx_pko_reg_queue_preempt_s cn52xx;
+	struct cvmx_pko_reg_queue_preempt_s cn52xxp1;
+	struct cvmx_pko_reg_queue_preempt_s cn56xx;
+	struct cvmx_pko_reg_queue_preempt_s cn56xxp1;
+	struct cvmx_pko_reg_queue_preempt_s cn61xx;
+	struct cvmx_pko_reg_queue_preempt_s cn63xx;
+	struct cvmx_pko_reg_queue_preempt_s cn63xxp1;
+	struct cvmx_pko_reg_queue_preempt_s cn66xx;
+	struct cvmx_pko_reg_queue_preempt_s cn68xx;
+	struct cvmx_pko_reg_queue_preempt_s cn68xxp1;
+	struct cvmx_pko_reg_queue_preempt_s cn70xx;
+	struct cvmx_pko_reg_queue_preempt_s cn70xxp1;
+	struct cvmx_pko_reg_queue_preempt_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_queue_preempt cvmx_pko_reg_queue_preempt_t;
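+
+/*
+ * Usage sketch (illustrative only): the write ordering the notes above
+ * require when setting up a preempting queue.  The fully populated
+ * PKO_REG_QUEUE_PTRS1/PKO_MEM_QUEUE_PTRS values are taken as opaque
+ * parameters here; the function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pko_setup_preempt_queue_sketch(u64 ptrs1, u64 mem_ptrs)
+{
+	cvmx_pko_reg_queue_preempt_t pre;
+
+	/* Step 1: preemption attributes (never set both bits at once) */
+	pre.u64 = 0;
+	pre.s.preempter = 1;
+	cvmx_write_csr(CVMX_PKO_REG_QUEUE_PREEMPT, pre.u64);
+
+	/* Steps 2 and 3: PTRS1 first, then the per-queue pointer memory */
+	cvmx_write_csr(CVMX_PKO_REG_QUEUE_PTRS1, ptrs1);
+	cvmx_write_csr(CVMX_PKO_MEM_QUEUE_PTRS, mem_ptrs);
+}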
+
+/**
+ * cvmx_pko_reg_queue_ptrs1
+ *
+ * Notes:
+ * This CSR is used with PKO_MEM_QUEUE_PTRS and PKO_MEM_QUEUE_QOS to allow access to queues 128-255
+ * and to allow mapping of up to 16 queues per port.  When programming queues 128-255, the
+ * programming sequence must first write PKO_REG_QUEUE_PTRS1 and then write PKO_MEM_QUEUE_PTRS or
+ * PKO_MEM_QUEUE_QOS for each queue.
+ * See the descriptions of PKO_MEM_QUEUE_PTRS and PKO_MEM_QUEUE_QOS for further explanation of queue
+ * programming.
+ */
+union cvmx_pko_reg_queue_ptrs1 {
+	u64 u64;
+	struct cvmx_pko_reg_queue_ptrs1_s {
+		u64 reserved_2_63 : 62;
+		u64 idx3 : 1;
+		u64 qid7 : 1;
+	} s;
+	struct cvmx_pko_reg_queue_ptrs1_s cn50xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn52xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn52xxp1;
+	struct cvmx_pko_reg_queue_ptrs1_s cn56xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn56xxp1;
+	struct cvmx_pko_reg_queue_ptrs1_s cn58xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn58xxp1;
+	struct cvmx_pko_reg_queue_ptrs1_s cn61xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn63xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn63xxp1;
+	struct cvmx_pko_reg_queue_ptrs1_s cn66xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn70xx;
+	struct cvmx_pko_reg_queue_ptrs1_s cn70xxp1;
+	struct cvmx_pko_reg_queue_ptrs1_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_queue_ptrs1 cvmx_pko_reg_queue_ptrs1_t;
+
+/**
+ * cvmx_pko_reg_read_idx
+ *
+ * Notes:
+ * Provides the read index during a CSR read operation to any of the CSRs that are physically stored
+ * as memories.  The names of these CSRs begin with the prefix "PKO_MEM_".
+ * IDX[7:0] is the read index.  INC[7:0] is an increment that is added to IDX[7:0] after any CSR read.
+ * The intended use is to initially write this CSR such that IDX=0 and INC=1.  Then, the entire
+ * contents of a CSR memory can be read with consecutive CSR read commands.
+ */
+union cvmx_pko_reg_read_idx {
+	u64 u64;
+	struct cvmx_pko_reg_read_idx_s {
+		u64 reserved_16_63 : 48;
+		u64 inc : 8;
+		u64 index : 8;
+	} s;
+	struct cvmx_pko_reg_read_idx_s cn30xx;
+	struct cvmx_pko_reg_read_idx_s cn31xx;
+	struct cvmx_pko_reg_read_idx_s cn38xx;
+	struct cvmx_pko_reg_read_idx_s cn38xxp2;
+	struct cvmx_pko_reg_read_idx_s cn50xx;
+	struct cvmx_pko_reg_read_idx_s cn52xx;
+	struct cvmx_pko_reg_read_idx_s cn52xxp1;
+	struct cvmx_pko_reg_read_idx_s cn56xx;
+	struct cvmx_pko_reg_read_idx_s cn56xxp1;
+	struct cvmx_pko_reg_read_idx_s cn58xx;
+	struct cvmx_pko_reg_read_idx_s cn58xxp1;
+	struct cvmx_pko_reg_read_idx_s cn61xx;
+	struct cvmx_pko_reg_read_idx_s cn63xx;
+	struct cvmx_pko_reg_read_idx_s cn63xxp1;
+	struct cvmx_pko_reg_read_idx_s cn66xx;
+	struct cvmx_pko_reg_read_idx_s cn68xx;
+	struct cvmx_pko_reg_read_idx_s cn68xxp1;
+	struct cvmx_pko_reg_read_idx_s cn70xx;
+	struct cvmx_pko_reg_read_idx_s cn70xxp1;
+	struct cvmx_pko_reg_read_idx_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_read_idx cvmx_pko_reg_read_idx_t;
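+
+/*
+ * Usage sketch (illustrative only): read one entry of a "PKO_MEM_" backed
+ * CSR.  With INC = 1 the index auto-advances after each CSR read, so
+ * re-reading the memory CSR walks the whole array.  PKO_MEM_QUEUE_PTRS is
+ * used as an example target; the function name and CSR accessors are
+ * assumptions.
+ */
+static inline u64 cvmx_pko_mem_read_sketch(unsigned int entry)
+{
+	cvmx_pko_reg_read_idx_t idx;
+
+	idx.u64 = 0;
+	idx.s.index = entry;
+	idx.s.inc = 1;		/* auto-increment after every CSR read */
+	cvmx_write_csr(CVMX_PKO_REG_READ_IDX, idx.u64);
+	return cvmx_read_csr(CVMX_PKO_MEM_QUEUE_PTRS);
+}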
+
+/**
+ * cvmx_pko_reg_throttle
+ *
+ * Notes:
+ * This CSR is used with PKO_MEM_THROTTLE_PIPE and PKO_MEM_THROTTLE_INT.  INT_MASK corresponds to the
+ * interfaces listed in the description for PKO_MEM_IPORT_PTRS[INT].  Set INT_MASK[N] to enable the
+ * updating of PKO_MEM_THROTTLE_PIPE and PKO_MEM_THROTTLE_INT counts for packets destined for
+ * interface N.  INT_MASK has no effect on the updates caused by CSR writes to PKO_MEM_THROTTLE_PIPE
+ * and PKO_MEM_THROTTLE_INT.  Note that this does not disable the throttle logic, just the updating of
+ * the interface counts.
+ */
+union cvmx_pko_reg_throttle {
+	u64 u64;
+	struct cvmx_pko_reg_throttle_s {
+		u64 reserved_32_63 : 32;
+		u64 int_mask : 32;
+	} s;
+	struct cvmx_pko_reg_throttle_s cn68xx;
+	struct cvmx_pko_reg_throttle_s cn68xxp1;
+};
+
+typedef union cvmx_pko_reg_throttle cvmx_pko_reg_throttle_t;
+
+/**
+ * cvmx_pko_reg_timestamp
+ *
+ * Notes:
+ * None.
+ *
+ */
+union cvmx_pko_reg_timestamp {
+	u64 u64;
+	struct cvmx_pko_reg_timestamp_s {
+		u64 reserved_4_63 : 60;
+		u64 wqe_word : 4;
+	} s;
+	struct cvmx_pko_reg_timestamp_s cn61xx;
+	struct cvmx_pko_reg_timestamp_s cn63xx;
+	struct cvmx_pko_reg_timestamp_s cn63xxp1;
+	struct cvmx_pko_reg_timestamp_s cn66xx;
+	struct cvmx_pko_reg_timestamp_s cn68xx;
+	struct cvmx_pko_reg_timestamp_s cn68xxp1;
+	struct cvmx_pko_reg_timestamp_s cn70xx;
+	struct cvmx_pko_reg_timestamp_s cn70xxp1;
+	struct cvmx_pko_reg_timestamp_s cnf71xx;
+};
+
+typedef union cvmx_pko_reg_timestamp cvmx_pko_reg_timestamp_t;
+
+/**
+ * cvmx_pko_shaper_cfg
+ */
+union cvmx_pko_shaper_cfg {
+	u64 u64;
+	struct cvmx_pko_shaper_cfg_s {
+		u64 reserved_2_63 : 62;
+		u64 color_aware : 1;
+		u64 red_send_as_yellow : 1;
+	} s;
+	struct cvmx_pko_shaper_cfg_s cn73xx;
+	struct cvmx_pko_shaper_cfg_s cn78xx;
+	struct cvmx_pko_shaper_cfg_s cn78xxp1;
+	struct cvmx_pko_shaper_cfg_s cnf75xx;
+};
+
+typedef union cvmx_pko_shaper_cfg cvmx_pko_shaper_cfg_t;
+
+/**
+ * cvmx_pko_state_uid_in_use#_rd
+ *
+ * For diagnostic use only.
+ *
+ */
+union cvmx_pko_state_uid_in_usex_rd {
+	u64 u64;
+	struct cvmx_pko_state_uid_in_usex_rd_s {
+		u64 in_use : 64;
+	} s;
+	struct cvmx_pko_state_uid_in_usex_rd_s cn73xx;
+	struct cvmx_pko_state_uid_in_usex_rd_s cn78xx;
+	struct cvmx_pko_state_uid_in_usex_rd_s cn78xxp1;
+	struct cvmx_pko_state_uid_in_usex_rd_s cnf75xx;
+};
+
+typedef union cvmx_pko_state_uid_in_usex_rd cvmx_pko_state_uid_in_usex_rd_t;
+
+/**
+ * cvmx_pko_status
+ */
+union cvmx_pko_status {
+	u64 u64;
+	struct cvmx_pko_status_s {
+		u64 pko_rdy : 1;
+		u64 reserved_24_62 : 39;
+		u64 c2qlut_rdy : 1;
+		u64 ppfi_rdy : 1;
+		u64 iobp1_rdy : 1;
+		u64 ncb_rdy : 1;
+		u64 pse_rdy : 1;
+		u64 pdm_rdy : 1;
+		u64 peb_rdy : 1;
+		u64 csi_rdy : 1;
+		u64 reserved_5_15 : 11;
+		u64 ncb_bist_status : 1;
+		u64 c2qlut_bist_status : 1;
+		u64 pdm_bist_status : 1;
+		u64 peb_bist_status : 1;
+		u64 pse_bist_status : 1;
+	} s;
+	struct cvmx_pko_status_cn73xx {
+		u64 pko_rdy : 1;
+		u64 reserved_24_62 : 39;
+		u64 c2qlut_rdy : 1;
+		u64 ppfi_rdy : 1;
+		u64 iobp1_rdy : 1;
+		u64 ncb_rdy : 1;
+		u64 pse_rdy : 1;
+		u64 pdm_rdy : 1;
+		u64 peb_rdy : 1;
+		u64 csi_rdy : 1;
+		u64 reserved_5_15 : 11;
+		u64 ncb_bist_status : 1;
+		u64 c2qlut_bist_status : 1;
+		u64 pdm_bist_status : 1;
+		u64 peb_bist_status : 1;
+		u64 pse_bist_status : 1;
+	} cn73xx;
+	struct cvmx_pko_status_cn73xx cn78xx;
+	struct cvmx_pko_status_cn73xx cn78xxp1;
+	struct cvmx_pko_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_pko_status cvmx_pko_status_t;
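+
+/*
+ * Usage sketch (illustrative only): poll PKO_RDY before programming the
+ * PKO block on CN73XX/CN78XX.  The timeout policy, function name and CSR
+ * accessors are assumptions, not part of the register definition.
+ */
+static inline int cvmx_pko3_wait_ready_sketch(void)
+{
+	cvmx_pko_status_t status;
+	int timeout = 10000;
+
+	do {
+		status.u64 = cvmx_read_csr(CVMX_PKO_STATUS);
+		if (status.s.pko_rdy)
+			return 0;
+	} while (--timeout > 0);
+
+	return -1;	/* PKO never signalled ready */
+}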
+
+/**
+ * cvmx_pko_txf#_pkt_cnt_rd
+ */
+union cvmx_pko_txfx_pkt_cnt_rd {
+	u64 u64;
+	struct cvmx_pko_txfx_pkt_cnt_rd_s {
+		u64 reserved_8_63 : 56;
+		u64 cnt : 8;
+	} s;
+	struct cvmx_pko_txfx_pkt_cnt_rd_s cn73xx;
+	struct cvmx_pko_txfx_pkt_cnt_rd_s cn78xx;
+	struct cvmx_pko_txfx_pkt_cnt_rd_s cn78xxp1;
+	struct cvmx_pko_txfx_pkt_cnt_rd_s cnf75xx;
+};
+
+typedef union cvmx_pko_txfx_pkt_cnt_rd cvmx_pko_txfx_pkt_cnt_rd_t;
+
+#endif
-- 
2.29.2


* [PATCH v1 25/50] mips: octeon: Add cvmx-pow-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (23 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 24/50] mips: octeon: Add cvmx-pko-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 26/50] mips: octeon: Add cvmx-rst-defs.h " Stefan Roese
                   ` (27 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-pow-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-pow-defs.h  | 1135 +++++++++++++++++
 1 file changed, 1135 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-pow-defs.h
new file mode 100644
index 0000000000..92e3723eb3
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pow-defs.h
@@ -0,0 +1,1135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon pow.
+ */
+
+#ifndef __CVMX_POW_DEFS_H__
+#define __CVMX_POW_DEFS_H__
+
+#define CVMX_POW_BIST_STAT	     (0x00016700000003F8ull)
+#define CVMX_POW_DS_PC		     (0x0001670000000398ull)
+#define CVMX_POW_ECC_ERR	     (0x0001670000000218ull)
+#define CVMX_POW_IQ_CNTX(offset)     (0x0001670000000340ull + ((offset) & 7) * 8)
+#define CVMX_POW_IQ_COM_CNT	     (0x0001670000000388ull)
+#define CVMX_POW_IQ_INT		     (0x0001670000000238ull)
+#define CVMX_POW_IQ_INT_EN	     (0x0001670000000240ull)
+#define CVMX_POW_IQ_THRX(offset)     (0x00016700000003A0ull + ((offset) & 7) * 8)
+#define CVMX_POW_NOS_CNT	     (0x0001670000000228ull)
+#define CVMX_POW_NW_TIM		     (0x0001670000000210ull)
+#define CVMX_POW_PF_RST_MSK	     (0x0001670000000230ull)
+#define CVMX_POW_PP_GRP_MSKX(offset) (0x0001670000000000ull + ((offset) & 15) * 8)
+#define CVMX_POW_QOS_RNDX(offset)    (0x00016700000001C0ull + ((offset) & 7) * 8)
+#define CVMX_POW_QOS_THRX(offset)    (0x0001670000000180ull + ((offset) & 7) * 8)
+#define CVMX_POW_TS_PC		     (0x0001670000000390ull)
+#define CVMX_POW_WA_COM_PC	     (0x0001670000000380ull)
+#define CVMX_POW_WA_PCX(offset)	     (0x0001670000000300ull + ((offset) & 7) * 8)
+#define CVMX_POW_WQ_INT		     (0x0001670000000200ull)
+#define CVMX_POW_WQ_INT_CNTX(offset) (0x0001670000000100ull + ((offset) & 15) * 8)
+#define CVMX_POW_WQ_INT_PC	     (0x0001670000000208ull)
+#define CVMX_POW_WQ_INT_THRX(offset) (0x0001670000000080ull + ((offset) & 15) * 8)
+#define CVMX_POW_WS_PCX(offset)	     (0x0001670000000280ull + ((offset) & 15) * 8)
+
+/**
+ * cvmx_pow_bist_stat
+ *
+ * Contains the BIST status for the POW memories ('0' = pass, '1' = fail).
+ *
+ */
+union cvmx_pow_bist_stat {
+	u64 u64;
+	struct cvmx_pow_bist_stat_s {
+		u64 reserved_32_63 : 32;
+		u64 pp : 16;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_pow_bist_stat_cn30xx {
+		u64 reserved_17_63 : 47;
+		u64 pp : 1;
+		u64 reserved_9_15 : 7;
+		u64 cam : 1;
+		u64 nbt1 : 1;
+		u64 nbt0 : 1;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 nbr1 : 1;
+		u64 nbr0 : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn30xx;
+	struct cvmx_pow_bist_stat_cn31xx {
+		u64 reserved_18_63 : 46;
+		u64 pp : 2;
+		u64 reserved_9_15 : 7;
+		u64 cam : 1;
+		u64 nbt1 : 1;
+		u64 nbt0 : 1;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 nbr1 : 1;
+		u64 nbr0 : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn31xx;
+	struct cvmx_pow_bist_stat_cn38xx {
+		u64 reserved_32_63 : 32;
+		u64 pp : 16;
+		u64 reserved_10_15 : 6;
+		u64 cam : 1;
+		u64 nbt : 1;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 nbr1 : 1;
+		u64 nbr0 : 1;
+		u64 pend1 : 1;
+		u64 pend0 : 1;
+		u64 adr1 : 1;
+		u64 adr0 : 1;
+	} cn38xx;
+	struct cvmx_pow_bist_stat_cn38xx cn38xxp2;
+	struct cvmx_pow_bist_stat_cn31xx cn50xx;
+	struct cvmx_pow_bist_stat_cn52xx {
+		u64 reserved_20_63 : 44;
+		u64 pp : 4;
+		u64 reserved_9_15 : 7;
+		u64 cam : 1;
+		u64 nbt1 : 1;
+		u64 nbt0 : 1;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 nbr1 : 1;
+		u64 nbr0 : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn52xx;
+	struct cvmx_pow_bist_stat_cn52xx cn52xxp1;
+	struct cvmx_pow_bist_stat_cn56xx {
+		u64 reserved_28_63 : 36;
+		u64 pp : 12;
+		u64 reserved_10_15 : 6;
+		u64 cam : 1;
+		u64 nbt : 1;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 nbr1 : 1;
+		u64 nbr0 : 1;
+		u64 pend1 : 1;
+		u64 pend0 : 1;
+		u64 adr1 : 1;
+		u64 adr0 : 1;
+	} cn56xx;
+	struct cvmx_pow_bist_stat_cn56xx cn56xxp1;
+	struct cvmx_pow_bist_stat_cn38xx cn58xx;
+	struct cvmx_pow_bist_stat_cn38xx cn58xxp1;
+	struct cvmx_pow_bist_stat_cn61xx {
+		u64 reserved_20_63 : 44;
+		u64 pp : 4;
+		u64 reserved_12_15 : 4;
+		u64 cam : 1;
+		u64 nbr : 3;
+		u64 nbt : 4;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn61xx;
+	struct cvmx_pow_bist_stat_cn63xx {
+		u64 reserved_22_63 : 42;
+		u64 pp : 6;
+		u64 reserved_12_15 : 4;
+		u64 cam : 1;
+		u64 nbr : 3;
+		u64 nbt : 4;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn63xx;
+	struct cvmx_pow_bist_stat_cn63xx cn63xxp1;
+	struct cvmx_pow_bist_stat_cn66xx {
+		u64 reserved_26_63 : 38;
+		u64 pp : 10;
+		u64 reserved_12_15 : 4;
+		u64 cam : 1;
+		u64 nbr : 3;
+		u64 nbt : 4;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn66xx;
+	struct cvmx_pow_bist_stat_cn70xx {
+		u64 reserved_12_63 : 52;
+		u64 cam : 1;
+		u64 reserved_10_10 : 1;
+		u64 nbr : 2;
+		u64 reserved_6_7 : 2;
+		u64 nbt : 2;
+		u64 index : 1;
+		u64 fidx : 1;
+		u64 pend : 1;
+		u64 adr : 1;
+	} cn70xx;
+	struct cvmx_pow_bist_stat_cn70xx cn70xxp1;
+	struct cvmx_pow_bist_stat_cn61xx cnf71xx;
+};
+
+typedef union cvmx_pow_bist_stat cvmx_pow_bist_stat_t;
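+
+/*
+ * Usage sketch (illustrative only): since '0' means pass for every memory,
+ * a fully passing BIST run reads back as all zeroes in the defined fields.
+ * The function name and CSR accessors are assumptions.
+ */
+static inline int cvmx_pow_bist_passed_sketch(void)
+{
+	cvmx_pow_bist_stat_t bist;
+
+	bist.u64 = cvmx_read_csr(CVMX_POW_BIST_STAT);
+	return bist.u64 == 0;	/* non-zero means at least one memory failed */
+}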
+
+/**
+ * cvmx_pow_ds_pc
+ *
+ * Counts the number of de-schedule requests.  Write to clear.
+ *
+ */
+union cvmx_pow_ds_pc {
+	u64 u64;
+	struct cvmx_pow_ds_pc_s {
+		u64 reserved_32_63 : 32;
+		u64 ds_pc : 32;
+	} s;
+	struct cvmx_pow_ds_pc_s cn30xx;
+	struct cvmx_pow_ds_pc_s cn31xx;
+	struct cvmx_pow_ds_pc_s cn38xx;
+	struct cvmx_pow_ds_pc_s cn38xxp2;
+	struct cvmx_pow_ds_pc_s cn50xx;
+	struct cvmx_pow_ds_pc_s cn52xx;
+	struct cvmx_pow_ds_pc_s cn52xxp1;
+	struct cvmx_pow_ds_pc_s cn56xx;
+	struct cvmx_pow_ds_pc_s cn56xxp1;
+	struct cvmx_pow_ds_pc_s cn58xx;
+	struct cvmx_pow_ds_pc_s cn58xxp1;
+	struct cvmx_pow_ds_pc_s cn61xx;
+	struct cvmx_pow_ds_pc_s cn63xx;
+	struct cvmx_pow_ds_pc_s cn63xxp1;
+	struct cvmx_pow_ds_pc_s cn66xx;
+	struct cvmx_pow_ds_pc_s cn70xx;
+	struct cvmx_pow_ds_pc_s cn70xxp1;
+	struct cvmx_pow_ds_pc_s cnf71xx;
+};
+
+typedef union cvmx_pow_ds_pc cvmx_pow_ds_pc_t;
+
+/**
+ * cvmx_pow_ecc_err
+ *
+ * Contains the single and double error bits and the corresponding interrupt enables for the
+ * ECC-protected POW index memory.  Also contains the syndrome value in the event of an ECC error.
+ * Also contains the remote pointer error bit and interrupt enable.  RPE is set when the POW
+ * detected corruption on one or more of the input queue lists in L2/DRAM (POW's local copy of
+ * the tail pointer for the L2/DRAM input queue did not match the last entry on the list).  This
+ * is caused by L2/DRAM corruption, and is generally a fatal error because it likely caused POW
+ * to load bad work queue entries.
+ * This register also contains the illegal operation error bits and the corresponding interrupt
+ * enables as follows:
+ *  <0> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP from PP in NULL_NULL state
+ *  <1> Received SWTAG/SWTAG_DESCH/DESCH/UPD_WQP from PP in NULL state
+ *  <2> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/GET_WORK from PP with pending tag switch to
+ *      ORDERED or ATOMIC
+ *  <3> Received SWTAG/SWTAG_FULL/SWTAG_DESCH from PP with tag specified as NULL_NULL
+ *  <4> Received SWTAG_FULL/SWTAG_DESCH from PP with tag specified as NULL
+ *  <5> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/NULL_RD from PP with
+ *      GET_WORK pending
+ *  <6> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/NULL_RD from PP with
+ *      NULL_RD pending
+ *  <7> Received CLR_NSCHED from PP with SWTAG_DESCH/DESCH/CLR_NSCHED pending
+ *  <8> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/NULL_RD from PP with
+ *      CLR_NSCHED pending
+ *  <9> Received illegal opcode
+ * <10> Received ADD_WORK with tag specified as NULL_NULL
+ * <11> Received DBG load from PP with DBG load pending
+ * <12> Received CSR load from PP with CSR load pending
+ */
+union cvmx_pow_ecc_err {
+	u64 u64;
+	struct cvmx_pow_ecc_err_s {
+		u64 reserved_45_63 : 19;
+		u64 iop_ie : 13;
+		u64 reserved_29_31 : 3;
+		u64 iop : 13;
+		u64 reserved_14_15 : 2;
+		u64 rpe_ie : 1;
+		u64 rpe : 1;
+		u64 reserved_9_11 : 3;
+		u64 syn : 5;
+		u64 dbe_ie : 1;
+		u64 sbe_ie : 1;
+		u64 dbe : 1;
+		u64 sbe : 1;
+	} s;
+	struct cvmx_pow_ecc_err_s cn30xx;
+	struct cvmx_pow_ecc_err_cn31xx {
+		u64 reserved_14_63 : 50;
+		u64 rpe_ie : 1;
+		u64 rpe : 1;
+		u64 reserved_9_11 : 3;
+		u64 syn : 5;
+		u64 dbe_ie : 1;
+		u64 sbe_ie : 1;
+		u64 dbe : 1;
+		u64 sbe : 1;
+	} cn31xx;
+	struct cvmx_pow_ecc_err_s cn38xx;
+	struct cvmx_pow_ecc_err_cn31xx cn38xxp2;
+	struct cvmx_pow_ecc_err_s cn50xx;
+	struct cvmx_pow_ecc_err_s cn52xx;
+	struct cvmx_pow_ecc_err_s cn52xxp1;
+	struct cvmx_pow_ecc_err_s cn56xx;
+	struct cvmx_pow_ecc_err_s cn56xxp1;
+	struct cvmx_pow_ecc_err_s cn58xx;
+	struct cvmx_pow_ecc_err_s cn58xxp1;
+	struct cvmx_pow_ecc_err_s cn61xx;
+	struct cvmx_pow_ecc_err_s cn63xx;
+	struct cvmx_pow_ecc_err_s cn63xxp1;
+	struct cvmx_pow_ecc_err_s cn66xx;
+	struct cvmx_pow_ecc_err_s cn70xx;
+	struct cvmx_pow_ecc_err_s cn70xxp1;
+	struct cvmx_pow_ecc_err_s cnf71xx;
+};
+
+typedef union cvmx_pow_ecc_err cvmx_pow_ecc_err_t;
+
+/**
+ * cvmx_pow_iq_cnt#
+ *
+ * Contains a read-only count of the number of work queue entries for each QOS level.
+ *
+ */
+union cvmx_pow_iq_cntx {
+	u64 u64;
+	struct cvmx_pow_iq_cntx_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_cnt : 32;
+	} s;
+	struct cvmx_pow_iq_cntx_s cn30xx;
+	struct cvmx_pow_iq_cntx_s cn31xx;
+	struct cvmx_pow_iq_cntx_s cn38xx;
+	struct cvmx_pow_iq_cntx_s cn38xxp2;
+	struct cvmx_pow_iq_cntx_s cn50xx;
+	struct cvmx_pow_iq_cntx_s cn52xx;
+	struct cvmx_pow_iq_cntx_s cn52xxp1;
+	struct cvmx_pow_iq_cntx_s cn56xx;
+	struct cvmx_pow_iq_cntx_s cn56xxp1;
+	struct cvmx_pow_iq_cntx_s cn58xx;
+	struct cvmx_pow_iq_cntx_s cn58xxp1;
+	struct cvmx_pow_iq_cntx_s cn61xx;
+	struct cvmx_pow_iq_cntx_s cn63xx;
+	struct cvmx_pow_iq_cntx_s cn63xxp1;
+	struct cvmx_pow_iq_cntx_s cn66xx;
+	struct cvmx_pow_iq_cntx_s cn70xx;
+	struct cvmx_pow_iq_cntx_s cn70xxp1;
+	struct cvmx_pow_iq_cntx_s cnf71xx;
+};
+
+typedef union cvmx_pow_iq_cntx cvmx_pow_iq_cntx_t;
+
+/**
+ * cvmx_pow_iq_com_cnt
+ *
+ * Contains a read-only count of the total number of work queue entries in all QOS levels.
+ *
+ */
+union cvmx_pow_iq_com_cnt {
+	u64 u64;
+	struct cvmx_pow_iq_com_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_cnt : 32;
+	} s;
+	struct cvmx_pow_iq_com_cnt_s cn30xx;
+	struct cvmx_pow_iq_com_cnt_s cn31xx;
+	struct cvmx_pow_iq_com_cnt_s cn38xx;
+	struct cvmx_pow_iq_com_cnt_s cn38xxp2;
+	struct cvmx_pow_iq_com_cnt_s cn50xx;
+	struct cvmx_pow_iq_com_cnt_s cn52xx;
+	struct cvmx_pow_iq_com_cnt_s cn52xxp1;
+	struct cvmx_pow_iq_com_cnt_s cn56xx;
+	struct cvmx_pow_iq_com_cnt_s cn56xxp1;
+	struct cvmx_pow_iq_com_cnt_s cn58xx;
+	struct cvmx_pow_iq_com_cnt_s cn58xxp1;
+	struct cvmx_pow_iq_com_cnt_s cn61xx;
+	struct cvmx_pow_iq_com_cnt_s cn63xx;
+	struct cvmx_pow_iq_com_cnt_s cn63xxp1;
+	struct cvmx_pow_iq_com_cnt_s cn66xx;
+	struct cvmx_pow_iq_com_cnt_s cn70xx;
+	struct cvmx_pow_iq_com_cnt_s cn70xxp1;
+	struct cvmx_pow_iq_com_cnt_s cnf71xx;
+};
+
+typedef union cvmx_pow_iq_com_cnt cvmx_pow_iq_com_cnt_t;
+
+/**
+ * cvmx_pow_iq_int
+ *
+ * "Contains the bits (1 per QOS level) that can trigger the input queue interrupt.  An IQ_INT
+ * bit
+ * will be set if POW_IQ_CNT#QOS# changes and the resulting value is equal to POW_IQ_THR#QOS#."
+ */
+union cvmx_pow_iq_int {
+	u64 u64;
+	struct cvmx_pow_iq_int_s {
+		u64 reserved_8_63 : 56;
+		u64 iq_int : 8;
+	} s;
+	struct cvmx_pow_iq_int_s cn52xx;
+	struct cvmx_pow_iq_int_s cn52xxp1;
+	struct cvmx_pow_iq_int_s cn56xx;
+	struct cvmx_pow_iq_int_s cn56xxp1;
+	struct cvmx_pow_iq_int_s cn61xx;
+	struct cvmx_pow_iq_int_s cn63xx;
+	struct cvmx_pow_iq_int_s cn63xxp1;
+	struct cvmx_pow_iq_int_s cn66xx;
+	struct cvmx_pow_iq_int_s cn70xx;
+	struct cvmx_pow_iq_int_s cn70xxp1;
+	struct cvmx_pow_iq_int_s cnf71xx;
+};
+
+typedef union cvmx_pow_iq_int cvmx_pow_iq_int_t;
+
+/**
+ * cvmx_pow_iq_int_en
+ *
+ * Contains the bits (1 per QOS level) that enable the input queue interrupt.
+ *
+ */
+union cvmx_pow_iq_int_en {
+	u64 u64;
+	struct cvmx_pow_iq_int_en_s {
+		u64 reserved_8_63 : 56;
+		u64 int_en : 8;
+	} s;
+	struct cvmx_pow_iq_int_en_s cn52xx;
+	struct cvmx_pow_iq_int_en_s cn52xxp1;
+	struct cvmx_pow_iq_int_en_s cn56xx;
+	struct cvmx_pow_iq_int_en_s cn56xxp1;
+	struct cvmx_pow_iq_int_en_s cn61xx;
+	struct cvmx_pow_iq_int_en_s cn63xx;
+	struct cvmx_pow_iq_int_en_s cn63xxp1;
+	struct cvmx_pow_iq_int_en_s cn66xx;
+	struct cvmx_pow_iq_int_en_s cn70xx;
+	struct cvmx_pow_iq_int_en_s cn70xxp1;
+	struct cvmx_pow_iq_int_en_s cnf71xx;
+};
+
+typedef union cvmx_pow_iq_int_en cvmx_pow_iq_int_en_t;
+
+/**
+ * cvmx_pow_iq_thr#
+ *
+ * Threshold value for triggering input queue interrupts.
+ *
+ */
+union cvmx_pow_iq_thrx {
+	u64 u64;
+	struct cvmx_pow_iq_thrx_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_thr : 32;
+	} s;
+	struct cvmx_pow_iq_thrx_s cn52xx;
+	struct cvmx_pow_iq_thrx_s cn52xxp1;
+	struct cvmx_pow_iq_thrx_s cn56xx;
+	struct cvmx_pow_iq_thrx_s cn56xxp1;
+	struct cvmx_pow_iq_thrx_s cn61xx;
+	struct cvmx_pow_iq_thrx_s cn63xx;
+	struct cvmx_pow_iq_thrx_s cn63xxp1;
+	struct cvmx_pow_iq_thrx_s cn66xx;
+	struct cvmx_pow_iq_thrx_s cn70xx;
+	struct cvmx_pow_iq_thrx_s cn70xxp1;
+	struct cvmx_pow_iq_thrx_s cnf71xx;
+};
+
+typedef union cvmx_pow_iq_thrx cvmx_pow_iq_thrx_t;
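+
+/*
+ * Usage sketch (illustrative only): arm the input queue interrupt for one
+ * QOS level by programming its threshold and then setting the matching
+ * enable bit.  The function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pow_iq_int_setup_sketch(unsigned int qos, u32 thr)
+{
+	cvmx_pow_iq_thrx_t iq_thr;
+	cvmx_pow_iq_int_en_t int_en;
+
+	iq_thr.u64 = 0;
+	iq_thr.s.iq_thr = thr;
+	cvmx_write_csr(CVMX_POW_IQ_THRX(qos), iq_thr.u64);
+
+	int_en.u64 = cvmx_read_csr(CVMX_POW_IQ_INT_EN);
+	int_en.s.int_en |= 1 << qos;
+	cvmx_write_csr(CVMX_POW_IQ_INT_EN, int_en.u64);
+}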
+
+/**
+ * cvmx_pow_nos_cnt
+ *
+ * Contains the number of work queue entries on the no-schedule list.
+ *
+ */
+union cvmx_pow_nos_cnt {
+	u64 u64;
+	struct cvmx_pow_nos_cnt_s {
+		u64 reserved_12_63 : 52;
+		u64 nos_cnt : 12;
+	} s;
+	struct cvmx_pow_nos_cnt_cn30xx {
+		u64 reserved_7_63 : 57;
+		u64 nos_cnt : 7;
+	} cn30xx;
+	struct cvmx_pow_nos_cnt_cn31xx {
+		u64 reserved_9_63 : 55;
+		u64 nos_cnt : 9;
+	} cn31xx;
+	struct cvmx_pow_nos_cnt_s cn38xx;
+	struct cvmx_pow_nos_cnt_s cn38xxp2;
+	struct cvmx_pow_nos_cnt_cn31xx cn50xx;
+	struct cvmx_pow_nos_cnt_cn52xx {
+		u64 reserved_10_63 : 54;
+		u64 nos_cnt : 10;
+	} cn52xx;
+	struct cvmx_pow_nos_cnt_cn52xx cn52xxp1;
+	struct cvmx_pow_nos_cnt_s cn56xx;
+	struct cvmx_pow_nos_cnt_s cn56xxp1;
+	struct cvmx_pow_nos_cnt_s cn58xx;
+	struct cvmx_pow_nos_cnt_s cn58xxp1;
+	struct cvmx_pow_nos_cnt_cn52xx cn61xx;
+	struct cvmx_pow_nos_cnt_cn63xx {
+		u64 reserved_11_63 : 53;
+		u64 nos_cnt : 11;
+	} cn63xx;
+	struct cvmx_pow_nos_cnt_cn63xx cn63xxp1;
+	struct cvmx_pow_nos_cnt_cn63xx cn66xx;
+	struct cvmx_pow_nos_cnt_cn52xx cn70xx;
+	struct cvmx_pow_nos_cnt_cn52xx cn70xxp1;
+	struct cvmx_pow_nos_cnt_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pow_nos_cnt cvmx_pow_nos_cnt_t;
+
+/**
+ * cvmx_pow_nw_tim
+ *
+ * Sets the minimum period for a new work request timeout.  Period is specified in n-1 notation
+ * where the increment value is 1024 clock cycles.  Thus, a value of 0x0 in this register
+ * translates to 1024 cycles, 0x1 translates to 2048 cycles, 0x2 translates to 3072 cycles, etc.
+ * Note: the maximum period for a new work request timeout is 2 times the minimum period.  Note:
+ * the new work request timeout counter is reset when this register is written.
+ * There are two new work request timeout cases:
+ * - WAIT bit clear.  The new work request can timeout if the timer expires before the pre-fetch
+ *   engine has reached the end of all work queues.  This can occur if the executable work queue
+ *   entry is deep in the queue and the pre-fetch engine is subject to many resets (i.e. high
+ *   switch, de-schedule, or new work load from other PP's).  Thus, it is possible for a PP to
+ *   receive a work response with the NO_WORK bit set even though there was at least one
+ *   executable entry in the work queues.  The other (and typical) scenario for receiving a
+ *   NO_WORK response with the WAIT bit clear is that the pre-fetch engine has reached the end
+ *   of all work queues without finding executable work.
+ * - WAIT bit set.  The new work request can timeout if the timer expires before the pre-fetch
+ *   engine has found executable work.  In this case, the only scenario where the PP will
+ *   receive a work response with the NO_WORK bit set is if the timer expires.  Note: it is
+ *   still possible for a PP to receive a NO_WORK response even though there was at least one
+ *   executable entry in the work queues.
+ * In either case, it's important to note that switches and de-schedules are higher priority
+ * operations that can cause the pre-fetch engine to reset.  Thus in a system with many switches
+ * or de-schedules occurring, it's possible for the new work timer to expire (resulting in
+ * NO_WORK responses) before the pre-fetch engine is able to get very deep into the work queues.
+ */
+union cvmx_pow_nw_tim {
+	u64 u64;
+	struct cvmx_pow_nw_tim_s {
+		u64 reserved_10_63 : 54;
+		u64 nw_tim : 10;
+	} s;
+	struct cvmx_pow_nw_tim_s cn30xx;
+	struct cvmx_pow_nw_tim_s cn31xx;
+	struct cvmx_pow_nw_tim_s cn38xx;
+	struct cvmx_pow_nw_tim_s cn38xxp2;
+	struct cvmx_pow_nw_tim_s cn50xx;
+	struct cvmx_pow_nw_tim_s cn52xx;
+	struct cvmx_pow_nw_tim_s cn52xxp1;
+	struct cvmx_pow_nw_tim_s cn56xx;
+	struct cvmx_pow_nw_tim_s cn56xxp1;
+	struct cvmx_pow_nw_tim_s cn58xx;
+	struct cvmx_pow_nw_tim_s cn58xxp1;
+	struct cvmx_pow_nw_tim_s cn61xx;
+	struct cvmx_pow_nw_tim_s cn63xx;
+	struct cvmx_pow_nw_tim_s cn63xxp1;
+	struct cvmx_pow_nw_tim_s cn66xx;
+	struct cvmx_pow_nw_tim_s cn70xx;
+	struct cvmx_pow_nw_tim_s cn70xxp1;
+	struct cvmx_pow_nw_tim_s cnf71xx;
+};
+
+typedef union cvmx_pow_nw_tim cvmx_pow_nw_tim_t;
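+
+/*
+ * Usage sketch (illustrative only): convert a cycle count into the n-1,
+ * 1024-cycle-step encoding described above (0 -> 1024 cycles, 1 -> 2048,
+ * and so on).  The function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pow_set_nw_tim_sketch(u32 cycles)
+{
+	cvmx_pow_nw_tim_t nw_tim;
+
+	if (cycles < 1024)
+		cycles = 1024;	/* 1024 cycles is the minimum period */
+
+	nw_tim.u64 = 0;
+	nw_tim.s.nw_tim = (cycles + 1023) / 1024 - 1;	/* round up, n-1 */
+	cvmx_write_csr(CVMX_POW_NW_TIM, nw_tim.u64);
+}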
+
+/**
+ * cvmx_pow_pf_rst_msk
+ *
+ * Resets the work prefetch engine when work is stored in an internal buffer (either when the add
+ * work arrives or when the work is reloaded from an external buffer) for an enabled QOS level
+ * (1 bit per QOS level).
+ */
+union cvmx_pow_pf_rst_msk {
+	u64 u64;
+	struct cvmx_pow_pf_rst_msk_s {
+		u64 reserved_8_63 : 56;
+		u64 rst_msk : 8;
+	} s;
+	struct cvmx_pow_pf_rst_msk_s cn50xx;
+	struct cvmx_pow_pf_rst_msk_s cn52xx;
+	struct cvmx_pow_pf_rst_msk_s cn52xxp1;
+	struct cvmx_pow_pf_rst_msk_s cn56xx;
+	struct cvmx_pow_pf_rst_msk_s cn56xxp1;
+	struct cvmx_pow_pf_rst_msk_s cn58xx;
+	struct cvmx_pow_pf_rst_msk_s cn58xxp1;
+	struct cvmx_pow_pf_rst_msk_s cn61xx;
+	struct cvmx_pow_pf_rst_msk_s cn63xx;
+	struct cvmx_pow_pf_rst_msk_s cn63xxp1;
+	struct cvmx_pow_pf_rst_msk_s cn66xx;
+	struct cvmx_pow_pf_rst_msk_s cn70xx;
+	struct cvmx_pow_pf_rst_msk_s cn70xxp1;
+	struct cvmx_pow_pf_rst_msk_s cnf71xx;
+};
+
+typedef union cvmx_pow_pf_rst_msk cvmx_pow_pf_rst_msk_t;
+
+/**
+ * cvmx_pow_pp_grp_msk#
+ *
+ * Selects which group(s) a PP belongs to.  A '1' in any bit position sets the PP's membership in
+ * the corresponding group.  A value of 0x0 will prevent the PP from receiving new work.  Note:
+ * disabled or non-existent PP's should have this field set to 0xffff (the reset value) in order
+ * to maximize POW performance.
+ * Also contains the QOS level priorities for each PP.  0x0 is the highest priority, and 0x7 the
+ * lowest.  Setting the priority to 0xf will prevent that PP from receiving work from that QOS
+ * level.  Priority values 0x8 through 0xe are reserved and should not be used.  For a given PP,
+ * priorities should begin at 0x0 and remain contiguous throughout the range.
+ */
+union cvmx_pow_pp_grp_mskx {
+	u64 u64;
+	struct cvmx_pow_pp_grp_mskx_s {
+		u64 reserved_48_63 : 16;
+		u64 qos7_pri : 4;
+		u64 qos6_pri : 4;
+		u64 qos5_pri : 4;
+		u64 qos4_pri : 4;
+		u64 qos3_pri : 4;
+		u64 qos2_pri : 4;
+		u64 qos1_pri : 4;
+		u64 qos0_pri : 4;
+		u64 grp_msk : 16;
+	} s;
+	struct cvmx_pow_pp_grp_mskx_cn30xx {
+		u64 reserved_16_63 : 48;
+		u64 grp_msk : 16;
+	} cn30xx;
+	struct cvmx_pow_pp_grp_mskx_cn30xx cn31xx;
+	struct cvmx_pow_pp_grp_mskx_cn30xx cn38xx;
+	struct cvmx_pow_pp_grp_mskx_cn30xx cn38xxp2;
+	struct cvmx_pow_pp_grp_mskx_s cn50xx;
+	struct cvmx_pow_pp_grp_mskx_s cn52xx;
+	struct cvmx_pow_pp_grp_mskx_s cn52xxp1;
+	struct cvmx_pow_pp_grp_mskx_s cn56xx;
+	struct cvmx_pow_pp_grp_mskx_s cn56xxp1;
+	struct cvmx_pow_pp_grp_mskx_s cn58xx;
+	struct cvmx_pow_pp_grp_mskx_s cn58xxp1;
+	struct cvmx_pow_pp_grp_mskx_s cn61xx;
+	struct cvmx_pow_pp_grp_mskx_s cn63xx;
+	struct cvmx_pow_pp_grp_mskx_s cn63xxp1;
+	struct cvmx_pow_pp_grp_mskx_s cn66xx;
+	struct cvmx_pow_pp_grp_mskx_s cn70xx;
+	struct cvmx_pow_pp_grp_mskx_s cn70xxp1;
+	struct cvmx_pow_pp_grp_mskx_s cnf71xx;
+};
+
+typedef union cvmx_pow_pp_grp_mskx cvmx_pow_pp_grp_mskx_t;
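+
+/*
+ * Usage sketch (illustrative only): put one PP into a set of groups and
+ * give it contiguous QOS priorities starting at 0x0, as the notes above
+ * recommend.  Only two levels are shown; a real setup would assign all
+ * eight.  The function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pow_set_pp_groups_sketch(unsigned int pp, u16 groups)
+{
+	cvmx_pow_pp_grp_mskx_t msk;
+
+	msk.u64 = cvmx_read_csr(CVMX_POW_PP_GRP_MSKX(pp));
+	msk.s.grp_msk = groups;	/* one bit per group */
+	msk.s.qos0_pri = 0;	/* 0x0 is the highest priority */
+	msk.s.qos1_pri = 1;	/* keep priorities contiguous from 0x0 */
+	cvmx_write_csr(CVMX_POW_PP_GRP_MSKX(pp), msk.u64);
+}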
+
+/**
+ * cvmx_pow_qos_rnd#
+ *
+ * Contains the round definitions for issuing new work.  Each round consists of 8 bits, with
+ * each bit corresponding to a QOS level.  There are 4 rounds contained in each register for a
+ * total of 32 rounds.  The issue logic traverses through the rounds sequentially (lowest round
+ * to highest round) in an attempt to find new work for each PP.  Within each round, the issue
+ * logic traverses through the QOS levels sequentially (highest QOS to lowest QOS), skipping
+ * over each QOS level with a clear bit in the round mask.  Note: setting a QOS level to all
+ * zeroes in all issue round registers will prevent work from being issued from that QOS level.
+ */
+union cvmx_pow_qos_rndx {
+	u64 u64;
+	struct cvmx_pow_qos_rndx_s {
+		u64 reserved_32_63 : 32;
+		u64 rnd_p3 : 8;
+		u64 rnd_p2 : 8;
+		u64 rnd_p1 : 8;
+		u64 rnd : 8;
+	} s;
+	struct cvmx_pow_qos_rndx_s cn30xx;
+	struct cvmx_pow_qos_rndx_s cn31xx;
+	struct cvmx_pow_qos_rndx_s cn38xx;
+	struct cvmx_pow_qos_rndx_s cn38xxp2;
+	struct cvmx_pow_qos_rndx_s cn50xx;
+	struct cvmx_pow_qos_rndx_s cn52xx;
+	struct cvmx_pow_qos_rndx_s cn52xxp1;
+	struct cvmx_pow_qos_rndx_s cn56xx;
+	struct cvmx_pow_qos_rndx_s cn56xxp1;
+	struct cvmx_pow_qos_rndx_s cn58xx;
+	struct cvmx_pow_qos_rndx_s cn58xxp1;
+	struct cvmx_pow_qos_rndx_s cn61xx;
+	struct cvmx_pow_qos_rndx_s cn63xx;
+	struct cvmx_pow_qos_rndx_s cn63xxp1;
+	struct cvmx_pow_qos_rndx_s cn66xx;
+	struct cvmx_pow_qos_rndx_s cn70xx;
+	struct cvmx_pow_qos_rndx_s cn70xxp1;
+	struct cvmx_pow_qos_rndx_s cnf71xx;
+};
+
+typedef union cvmx_pow_qos_rndx cvmx_pow_qos_rndx_t;
+
+/**
+ * cvmx_pow_qos_thr#
+ *
+ * Contains the thresholds for allocating POW internal storage buffers.  If the number of
+ * remaining free buffers drops below the minimum threshold (MIN_THR) or the number of allocated
+ * buffers for this QOS level rises above the maximum threshold (MAX_THR), future incoming work
+ * queue entries will be buffered externally rather than internally.  This register also
+ * contains a read-only count of the current number of free buffers (FREE_CNT), the number of
+ * internal buffers currently allocated to this QOS level (BUF_CNT), and the total number of
+ * buffers on the de-schedule list (DES_CNT) (which is not the same as the total number of
+ * de-scheduled buffers).
+ */
+union cvmx_pow_qos_thrx {
+	u64 u64;
+	struct cvmx_pow_qos_thrx_s {
+		u64 reserved_60_63 : 4;
+		u64 des_cnt : 12;
+		u64 buf_cnt : 12;
+		u64 free_cnt : 12;
+		u64 reserved_23_23 : 1;
+		u64 max_thr : 11;
+		u64 reserved_11_11 : 1;
+		u64 min_thr : 11;
+	} s;
+	struct cvmx_pow_qos_thrx_cn30xx {
+		u64 reserved_55_63 : 9;
+		u64 des_cnt : 7;
+		u64 reserved_43_47 : 5;
+		u64 buf_cnt : 7;
+		u64 reserved_31_35 : 5;
+		u64 free_cnt : 7;
+		u64 reserved_18_23 : 6;
+		u64 max_thr : 6;
+		u64 reserved_6_11 : 6;
+		u64 min_thr : 6;
+	} cn30xx;
+	struct cvmx_pow_qos_thrx_cn31xx {
+		u64 reserved_57_63 : 7;
+		u64 des_cnt : 9;
+		u64 reserved_45_47 : 3;
+		u64 buf_cnt : 9;
+		u64 reserved_33_35 : 3;
+		u64 free_cnt : 9;
+		u64 reserved_20_23 : 4;
+		u64 max_thr : 8;
+		u64 reserved_8_11 : 4;
+		u64 min_thr : 8;
+	} cn31xx;
+	struct cvmx_pow_qos_thrx_s cn38xx;
+	struct cvmx_pow_qos_thrx_s cn38xxp2;
+	struct cvmx_pow_qos_thrx_cn31xx cn50xx;
+	struct cvmx_pow_qos_thrx_cn52xx {
+		u64 reserved_58_63 : 6;
+		u64 des_cnt : 10;
+		u64 reserved_46_47 : 2;
+		u64 buf_cnt : 10;
+		u64 reserved_34_35 : 2;
+		u64 free_cnt : 10;
+		u64 reserved_21_23 : 3;
+		u64 max_thr : 9;
+		u64 reserved_9_11 : 3;
+		u64 min_thr : 9;
+	} cn52xx;
+	struct cvmx_pow_qos_thrx_cn52xx cn52xxp1;
+	struct cvmx_pow_qos_thrx_s cn56xx;
+	struct cvmx_pow_qos_thrx_s cn56xxp1;
+	struct cvmx_pow_qos_thrx_s cn58xx;
+	struct cvmx_pow_qos_thrx_s cn58xxp1;
+	struct cvmx_pow_qos_thrx_cn52xx cn61xx;
+	struct cvmx_pow_qos_thrx_cn63xx {
+		u64 reserved_59_63 : 5;
+		u64 des_cnt : 11;
+		u64 reserved_47_47 : 1;
+		u64 buf_cnt : 11;
+		u64 reserved_35_35 : 1;
+		u64 free_cnt : 11;
+		u64 reserved_22_23 : 2;
+		u64 max_thr : 10;
+		u64 reserved_10_11 : 2;
+		u64 min_thr : 10;
+	} cn63xx;
+	struct cvmx_pow_qos_thrx_cn63xx cn63xxp1;
+	struct cvmx_pow_qos_thrx_cn63xx cn66xx;
+	struct cvmx_pow_qos_thrx_cn52xx cn70xx;
+	struct cvmx_pow_qos_thrx_cn52xx cn70xxp1;
+	struct cvmx_pow_qos_thrx_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pow_qos_thrx cvmx_pow_qos_thrx_t;
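+
+/*
+ * Usage sketch (illustrative only): set the spill thresholds for one QOS
+ * level.  FREE_CNT/BUF_CNT/DES_CNT are read-only, so writing back the read
+ * value with only MIN_THR/MAX_THR changed is assumed to be safe here.  The
+ * function name and CSR accessors are assumptions.
+ */
+static inline void cvmx_pow_set_qos_thr_sketch(unsigned int qos,
+					       unsigned int min_thr,
+					       unsigned int max_thr)
+{
+	cvmx_pow_qos_thrx_t thr;
+
+	thr.u64 = cvmx_read_csr(CVMX_POW_QOS_THRX(qos));
+	thr.s.min_thr = min_thr;	/* spill externally below this many free buffers */
+	thr.s.max_thr = max_thr;	/* ...or above this many allocated buffers */
+	cvmx_write_csr(CVMX_POW_QOS_THRX(qos), thr.u64);
+}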
+
+/**
+ * cvmx_pow_ts_pc
+ *
+ * Counts the number of tag switch requests.  Write to clear.
+ *
+ */
+union cvmx_pow_ts_pc {
+	u64 u64;
+	struct cvmx_pow_ts_pc_s {
+		u64 reserved_32_63 : 32;
+		u64 ts_pc : 32;
+	} s;
+	struct cvmx_pow_ts_pc_s cn30xx;
+	struct cvmx_pow_ts_pc_s cn31xx;
+	struct cvmx_pow_ts_pc_s cn38xx;
+	struct cvmx_pow_ts_pc_s cn38xxp2;
+	struct cvmx_pow_ts_pc_s cn50xx;
+	struct cvmx_pow_ts_pc_s cn52xx;
+	struct cvmx_pow_ts_pc_s cn52xxp1;
+	struct cvmx_pow_ts_pc_s cn56xx;
+	struct cvmx_pow_ts_pc_s cn56xxp1;
+	struct cvmx_pow_ts_pc_s cn58xx;
+	struct cvmx_pow_ts_pc_s cn58xxp1;
+	struct cvmx_pow_ts_pc_s cn61xx;
+	struct cvmx_pow_ts_pc_s cn63xx;
+	struct cvmx_pow_ts_pc_s cn63xxp1;
+	struct cvmx_pow_ts_pc_s cn66xx;
+	struct cvmx_pow_ts_pc_s cn70xx;
+	struct cvmx_pow_ts_pc_s cn70xxp1;
+	struct cvmx_pow_ts_pc_s cnf71xx;
+};
+
+typedef union cvmx_pow_ts_pc cvmx_pow_ts_pc_t;
+
+/**
+ * cvmx_pow_wa_com_pc
+ *
+ * Counts the number of add new work requests for all QOS levels.  Write to clear.
+ *
+ */
+union cvmx_pow_wa_com_pc {
+	u64 u64;
+	struct cvmx_pow_wa_com_pc_s {
+		u64 reserved_32_63 : 32;
+		u64 wa_pc : 32;
+	} s;
+	struct cvmx_pow_wa_com_pc_s cn30xx;
+	struct cvmx_pow_wa_com_pc_s cn31xx;
+	struct cvmx_pow_wa_com_pc_s cn38xx;
+	struct cvmx_pow_wa_com_pc_s cn38xxp2;
+	struct cvmx_pow_wa_com_pc_s cn50xx;
+	struct cvmx_pow_wa_com_pc_s cn52xx;
+	struct cvmx_pow_wa_com_pc_s cn52xxp1;
+	struct cvmx_pow_wa_com_pc_s cn56xx;
+	struct cvmx_pow_wa_com_pc_s cn56xxp1;
+	struct cvmx_pow_wa_com_pc_s cn58xx;
+	struct cvmx_pow_wa_com_pc_s cn58xxp1;
+	struct cvmx_pow_wa_com_pc_s cn61xx;
+	struct cvmx_pow_wa_com_pc_s cn63xx;
+	struct cvmx_pow_wa_com_pc_s cn63xxp1;
+	struct cvmx_pow_wa_com_pc_s cn66xx;
+	struct cvmx_pow_wa_com_pc_s cn70xx;
+	struct cvmx_pow_wa_com_pc_s cn70xxp1;
+	struct cvmx_pow_wa_com_pc_s cnf71xx;
+};
+
+typedef union cvmx_pow_wa_com_pc cvmx_pow_wa_com_pc_t;
+
+/**
+ * cvmx_pow_wa_pc#
+ *
+ * Counts the number of add new work requests for each QOS level.  Write to clear.
+ *
+ */
+union cvmx_pow_wa_pcx {
+	u64 u64;
+	struct cvmx_pow_wa_pcx_s {
+		u64 reserved_32_63 : 32;
+		u64 wa_pc : 32;
+	} s;
+	struct cvmx_pow_wa_pcx_s cn30xx;
+	struct cvmx_pow_wa_pcx_s cn31xx;
+	struct cvmx_pow_wa_pcx_s cn38xx;
+	struct cvmx_pow_wa_pcx_s cn38xxp2;
+	struct cvmx_pow_wa_pcx_s cn50xx;
+	struct cvmx_pow_wa_pcx_s cn52xx;
+	struct cvmx_pow_wa_pcx_s cn52xxp1;
+	struct cvmx_pow_wa_pcx_s cn56xx;
+	struct cvmx_pow_wa_pcx_s cn56xxp1;
+	struct cvmx_pow_wa_pcx_s cn58xx;
+	struct cvmx_pow_wa_pcx_s cn58xxp1;
+	struct cvmx_pow_wa_pcx_s cn61xx;
+	struct cvmx_pow_wa_pcx_s cn63xx;
+	struct cvmx_pow_wa_pcx_s cn63xxp1;
+	struct cvmx_pow_wa_pcx_s cn66xx;
+	struct cvmx_pow_wa_pcx_s cn70xx;
+	struct cvmx_pow_wa_pcx_s cn70xxp1;
+	struct cvmx_pow_wa_pcx_s cnf71xx;
+};
+
+typedef union cvmx_pow_wa_pcx cvmx_pow_wa_pcx_t;
+
+/**
+ * cvmx_pow_wq_int
+ *
+ * Contains the bits (1 per group) that set work queue interrupts and are used to clear these
+ * interrupts.  Also contains the input queue interrupt temporary disable bits (1 per group). For
+ * more information regarding this register, see the interrupt section.
+ */
+union cvmx_pow_wq_int {
+	u64 u64;
+	struct cvmx_pow_wq_int_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_dis : 16;
+		u64 wq_int : 16;
+	} s;
+	struct cvmx_pow_wq_int_s cn30xx;
+	struct cvmx_pow_wq_int_s cn31xx;
+	struct cvmx_pow_wq_int_s cn38xx;
+	struct cvmx_pow_wq_int_s cn38xxp2;
+	struct cvmx_pow_wq_int_s cn50xx;
+	struct cvmx_pow_wq_int_s cn52xx;
+	struct cvmx_pow_wq_int_s cn52xxp1;
+	struct cvmx_pow_wq_int_s cn56xx;
+	struct cvmx_pow_wq_int_s cn56xxp1;
+	struct cvmx_pow_wq_int_s cn58xx;
+	struct cvmx_pow_wq_int_s cn58xxp1;
+	struct cvmx_pow_wq_int_s cn61xx;
+	struct cvmx_pow_wq_int_s cn63xx;
+	struct cvmx_pow_wq_int_s cn63xxp1;
+	struct cvmx_pow_wq_int_s cn66xx;
+	struct cvmx_pow_wq_int_s cn70xx;
+	struct cvmx_pow_wq_int_s cn70xxp1;
+	struct cvmx_pow_wq_int_s cnf71xx;
+};
+
+typedef union cvmx_pow_wq_int cvmx_pow_wq_int_t;
+
+/**
+ * cvmx_pow_wq_int_cnt#
+ *
+ * Contains a read-only copy of the counts used to trigger work queue interrupts.  For more
+ * information regarding this register, see the interrupt section.
+ */
+union cvmx_pow_wq_int_cntx {
+	u64 u64;
+	struct cvmx_pow_wq_int_cntx_s {
+		u64 reserved_28_63 : 36;
+		u64 tc_cnt : 4;
+		u64 ds_cnt : 12;
+		u64 iq_cnt : 12;
+	} s;
+	struct cvmx_pow_wq_int_cntx_cn30xx {
+		u64 reserved_28_63 : 36;
+		u64 tc_cnt : 4;
+		u64 reserved_19_23 : 5;
+		u64 ds_cnt : 7;
+		u64 reserved_7_11 : 5;
+		u64 iq_cnt : 7;
+	} cn30xx;
+	struct cvmx_pow_wq_int_cntx_cn31xx {
+		u64 reserved_28_63 : 36;
+		u64 tc_cnt : 4;
+		u64 reserved_21_23 : 3;
+		u64 ds_cnt : 9;
+		u64 reserved_9_11 : 3;
+		u64 iq_cnt : 9;
+	} cn31xx;
+	struct cvmx_pow_wq_int_cntx_s cn38xx;
+	struct cvmx_pow_wq_int_cntx_s cn38xxp2;
+	struct cvmx_pow_wq_int_cntx_cn31xx cn50xx;
+	struct cvmx_pow_wq_int_cntx_cn52xx {
+		u64 reserved_28_63 : 36;
+		u64 tc_cnt : 4;
+		u64 reserved_22_23 : 2;
+		u64 ds_cnt : 10;
+		u64 reserved_10_11 : 2;
+		u64 iq_cnt : 10;
+	} cn52xx;
+	struct cvmx_pow_wq_int_cntx_cn52xx cn52xxp1;
+	struct cvmx_pow_wq_int_cntx_s cn56xx;
+	struct cvmx_pow_wq_int_cntx_s cn56xxp1;
+	struct cvmx_pow_wq_int_cntx_s cn58xx;
+	struct cvmx_pow_wq_int_cntx_s cn58xxp1;
+	struct cvmx_pow_wq_int_cntx_cn52xx cn61xx;
+	struct cvmx_pow_wq_int_cntx_cn63xx {
+		u64 reserved_28_63 : 36;
+		u64 tc_cnt : 4;
+		u64 reserved_23_23 : 1;
+		u64 ds_cnt : 11;
+		u64 reserved_11_11 : 1;
+		u64 iq_cnt : 11;
+	} cn63xx;
+	struct cvmx_pow_wq_int_cntx_cn63xx cn63xxp1;
+	struct cvmx_pow_wq_int_cntx_cn63xx cn66xx;
+	struct cvmx_pow_wq_int_cntx_cn52xx cn70xx;
+	struct cvmx_pow_wq_int_cntx_cn52xx cn70xxp1;
+	struct cvmx_pow_wq_int_cntx_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pow_wq_int_cntx cvmx_pow_wq_int_cntx_t;
+
+/**
+ * cvmx_pow_wq_int_pc
+ *
+ * Contains the threshold value for the work queue interrupt periodic counter and also a
+ * read-only copy of the periodic counter.  For more information regarding this register, see
+ * the interrupt section.
+ */
+union cvmx_pow_wq_int_pc {
+	u64 u64;
+	struct cvmx_pow_wq_int_pc_s {
+		u64 reserved_60_63 : 4;
+		u64 pc : 28;
+		u64 reserved_28_31 : 4;
+		u64 pc_thr : 20;
+		u64 reserved_0_7 : 8;
+	} s;
+	struct cvmx_pow_wq_int_pc_s cn30xx;
+	struct cvmx_pow_wq_int_pc_s cn31xx;
+	struct cvmx_pow_wq_int_pc_s cn38xx;
+	struct cvmx_pow_wq_int_pc_s cn38xxp2;
+	struct cvmx_pow_wq_int_pc_s cn50xx;
+	struct cvmx_pow_wq_int_pc_s cn52xx;
+	struct cvmx_pow_wq_int_pc_s cn52xxp1;
+	struct cvmx_pow_wq_int_pc_s cn56xx;
+	struct cvmx_pow_wq_int_pc_s cn56xxp1;
+	struct cvmx_pow_wq_int_pc_s cn58xx;
+	struct cvmx_pow_wq_int_pc_s cn58xxp1;
+	struct cvmx_pow_wq_int_pc_s cn61xx;
+	struct cvmx_pow_wq_int_pc_s cn63xx;
+	struct cvmx_pow_wq_int_pc_s cn63xxp1;
+	struct cvmx_pow_wq_int_pc_s cn66xx;
+	struct cvmx_pow_wq_int_pc_s cn70xx;
+	struct cvmx_pow_wq_int_pc_s cn70xxp1;
+	struct cvmx_pow_wq_int_pc_s cnf71xx;
+};
+
+typedef union cvmx_pow_wq_int_pc cvmx_pow_wq_int_pc_t;
+
+/**
+ * cvmx_pow_wq_int_thr#
+ *
+ * Contains the thresholds for enabling and setting work queue interrupts.  For more information
+ * regarding this register, see the interrupt section.
+ * Note: Up to 4 of the POW's internal storage buffers can be allocated for hardware use and are
+ * therefore not available for incoming work queue entries.  Additionally, any PP that is not in
+ * the NULL_NULL state consumes a buffer.  Thus in a 4-PP system, it is not advisable to set
+ * either IQ_THR or DS_THR to greater than 512 - 4 - 4 = 504.  Doing so may prevent the
+ * interrupt from ever triggering.
+ */
+union cvmx_pow_wq_int_thrx {
+	u64 u64;
+	struct cvmx_pow_wq_int_thrx_s {
+		u64 reserved_29_63 : 35;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_23_23 : 1;
+		u64 ds_thr : 11;
+		u64 reserved_11_11 : 1;
+		u64 iq_thr : 11;
+	} s;
+	struct cvmx_pow_wq_int_thrx_cn30xx {
+		u64 reserved_29_63 : 35;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_18_23 : 6;
+		u64 ds_thr : 6;
+		u64 reserved_6_11 : 6;
+		u64 iq_thr : 6;
+	} cn30xx;
+	struct cvmx_pow_wq_int_thrx_cn31xx {
+		u64 reserved_29_63 : 35;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_20_23 : 4;
+		u64 ds_thr : 8;
+		u64 reserved_8_11 : 4;
+		u64 iq_thr : 8;
+	} cn31xx;
+	struct cvmx_pow_wq_int_thrx_s cn38xx;
+	struct cvmx_pow_wq_int_thrx_s cn38xxp2;
+	struct cvmx_pow_wq_int_thrx_cn31xx cn50xx;
+	struct cvmx_pow_wq_int_thrx_cn52xx {
+		u64 reserved_29_63 : 35;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_21_23 : 3;
+		u64 ds_thr : 9;
+		u64 reserved_9_11 : 3;
+		u64 iq_thr : 9;
+	} cn52xx;
+	struct cvmx_pow_wq_int_thrx_cn52xx cn52xxp1;
+	struct cvmx_pow_wq_int_thrx_s cn56xx;
+	struct cvmx_pow_wq_int_thrx_s cn56xxp1;
+	struct cvmx_pow_wq_int_thrx_s cn58xx;
+	struct cvmx_pow_wq_int_thrx_s cn58xxp1;
+	struct cvmx_pow_wq_int_thrx_cn52xx cn61xx;
+	struct cvmx_pow_wq_int_thrx_cn63xx {
+		u64 reserved_29_63 : 35;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_22_23 : 2;
+		u64 ds_thr : 10;
+		u64 reserved_10_11 : 2;
+		u64 iq_thr : 10;
+	} cn63xx;
+	struct cvmx_pow_wq_int_thrx_cn63xx cn63xxp1;
+	struct cvmx_pow_wq_int_thrx_cn63xx cn66xx;
+	struct cvmx_pow_wq_int_thrx_cn52xx cn70xx;
+	struct cvmx_pow_wq_int_thrx_cn52xx cn70xxp1;
+	struct cvmx_pow_wq_int_thrx_cn52xx cnf71xx;
+};
+
+typedef union cvmx_pow_wq_int_thrx cvmx_pow_wq_int_thrx_t;
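+
+/*
+ * Usage sketch (illustrative only): program the input queue threshold for
+ * one group, respecting the 512 - 4 - 4 = 504 ceiling worked out in the
+ * notes above for a 4-PP system.  The function name and CSR accessors are
+ * assumptions.
+ */
+static inline void cvmx_pow_set_wq_iq_thr_sketch(unsigned int grp, u32 iq_thr)
+{
+	cvmx_pow_wq_int_thrx_t thr;
+
+	if (iq_thr > 504)
+		iq_thr = 504;
+
+	thr.u64 = 0;
+	thr.s.iq_thr = iq_thr;
+	cvmx_write_csr(CVMX_POW_WQ_INT_THRX(grp), thr.u64);
+}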
+
+/**
+ * cvmx_pow_ws_pc#
+ *
+ * Counts the number of work schedules for each group.  Write to clear.
+ *
+ */
+union cvmx_pow_ws_pcx {
+	u64 u64;
+	struct cvmx_pow_ws_pcx_s {
+		u64 reserved_32_63 : 32;
+		u64 ws_pc : 32;
+	} s;
+	struct cvmx_pow_ws_pcx_s cn30xx;
+	struct cvmx_pow_ws_pcx_s cn31xx;
+	struct cvmx_pow_ws_pcx_s cn38xx;
+	struct cvmx_pow_ws_pcx_s cn38xxp2;
+	struct cvmx_pow_ws_pcx_s cn50xx;
+	struct cvmx_pow_ws_pcx_s cn52xx;
+	struct cvmx_pow_ws_pcx_s cn52xxp1;
+	struct cvmx_pow_ws_pcx_s cn56xx;
+	struct cvmx_pow_ws_pcx_s cn56xxp1;
+	struct cvmx_pow_ws_pcx_s cn58xx;
+	struct cvmx_pow_ws_pcx_s cn58xxp1;
+	struct cvmx_pow_ws_pcx_s cn61xx;
+	struct cvmx_pow_ws_pcx_s cn63xx;
+	struct cvmx_pow_ws_pcx_s cn63xxp1;
+	struct cvmx_pow_ws_pcx_s cn66xx;
+	struct cvmx_pow_ws_pcx_s cn70xx;
+	struct cvmx_pow_ws_pcx_s cn70xxp1;
+	struct cvmx_pow_ws_pcx_s cnf71xx;
+};
+
+typedef union cvmx_pow_ws_pcx cvmx_pow_ws_pcx_t;
+
+#endif
-- 
2.29.2

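For reference, the IQ_THR/DS_THR guidance in the register description above
maps onto code roughly like the following sketch. The csr_rd()/csr_wr()
accessors and the CVMX_POW_WQ_INT_THRX() address macro are assumed from
elsewhere in this series; this is illustrative only, not part of the patch.

/* Sketch: arm the work-queue interrupt thresholds for one group. */
static void pow_set_wq_thresholds(int group, int iq_thr, int ds_thr)
{
	cvmx_pow_wq_int_thrx_t thr;

	thr.u64 = csr_rd(CVMX_POW_WQ_INT_THRX(group));
	thr.s.iq_thr = iq_thr;	/* input-queue depth threshold */
	thr.s.ds_thr = ds_thr;	/* de-schedule list depth threshold */
	thr.s.tc_en = 0;	/* leave the time counter disabled */
	csr_wr(CVMX_POW_WQ_INT_THRX(group), thr.u64);
}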

* [PATCH v1 26/50] mips: octeon: Add cvmx-rst-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (24 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 25/50] mips: octeon: Add cvmx-pow-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 27/50] mips: octeon: Add cvmx-sata-defs.h " Stefan Roese
                   ` (26 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-rst-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later to support PCIe and networking
on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-rst-defs.h  | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-rst-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-rst-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-rst-defs.h
new file mode 100644
index 0000000000..943e160105
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-rst-defs.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_RST_DEFS_H__
+#define __CVMX_RST_DEFS_H__
+
+#define CVMX_RST_CTLX(offset)	    (0x0001180006001640ull + ((offset) & 3) * 8)
+#define CVMX_RST_SOFT_PRSTX(offset) (0x00011800060016C0ull + ((offset) & 3) * 8)
+
+/**
+ * cvmx_rst_ctl#
+ */
+union cvmx_rst_ctlx {
+	u64 u64;
+	struct cvmx_rst_ctlx_s {
+		u64 reserved_10_63 : 54;
+		u64 prst_link : 1;
+		u64 rst_done : 1;
+		u64 rst_link : 1;
+		u64 host_mode : 1;
+		u64 reserved_4_5 : 2;
+		u64 rst_drv : 1;
+		u64 rst_rcv : 1;
+		u64 rst_chip : 1;
+		u64 rst_val : 1;
+	} s;
+	struct cvmx_rst_ctlx_s cn70xx;
+	struct cvmx_rst_ctlx_s cn70xxp1;
+	struct cvmx_rst_ctlx_s cn73xx;
+	struct cvmx_rst_ctlx_s cn78xx;
+	struct cvmx_rst_ctlx_s cn78xxp1;
+	struct cvmx_rst_ctlx_s cnf75xx;
+};
+
+typedef union cvmx_rst_ctlx cvmx_rst_ctlx_t;
+
+/**
+ * cvmx_rst_soft_prst#
+ */
+union cvmx_rst_soft_prstx {
+	u64 u64;
+	struct cvmx_rst_soft_prstx_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_prst : 1;
+	} s;
+	struct cvmx_rst_soft_prstx_s cn70xx;
+	struct cvmx_rst_soft_prstx_s cn70xxp1;
+	struct cvmx_rst_soft_prstx_s cn73xx;
+	struct cvmx_rst_soft_prstx_s cn78xx;
+	struct cvmx_rst_soft_prstx_s cn78xxp1;
+	struct cvmx_rst_soft_prstx_s cnf75xx;
+};
+
+typedef union cvmx_rst_soft_prstx cvmx_rst_soft_prstx_t;
+
+/**
+ * cvmx_rst_soft_rst
+ */
+union cvmx_rst_soft_rst {
+	u64 u64;
+	struct cvmx_rst_soft_rst_s {
+		u64 reserved_1_63 : 63;
+		u64 soft_rst : 1;
+	} s;
+	struct cvmx_rst_soft_rst_s cn70xx;
+	struct cvmx_rst_soft_rst_s cn70xxp1;
+	struct cvmx_rst_soft_rst_s cn73xx;
+	struct cvmx_rst_soft_rst_s cn78xx;
+	struct cvmx_rst_soft_rst_s cn78xxp1;
+	struct cvmx_rst_soft_rst_s cnf75xx;
+};
+
+typedef union cvmx_rst_soft_rst cvmx_rst_soft_rst_t;
+
+#endif
-- 
2.29.2

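For reference, the SOFT_PRST bit defined in this patch is what a PCIe RC
driver toggles to assert and release reset to a downstream port. A minimal
sketch, assuming the csr_rd()/csr_wr() helpers used elsewhere in this
series (illustrative only, not part of the patch):

/* Sketch: deassert PCIe reset on one port; SOFT_PRST=1 holds reset. */
static void rst_release_pcie_reset(int port)
{
	cvmx_rst_soft_prstx_t prst;

	prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(port));
	prst.s.soft_prst = 0;	/* clear to release the port from reset */
	csr_wr(CVMX_RST_SOFT_PRSTX(port), prst.u64);
}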

* [PATCH v1 27/50] mips: octeon: Add cvmx-sata-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (25 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 26/50] mips: octeon: Add cvmx-rst-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 28/50] mips: octeon: Add cvmx-sli-defs.h " Stefan Roese
                   ` (25 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-sata-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later to support PCIe and networking
on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-sata-defs.h | 311 ++++++++++++++++++
 1 file changed, 311 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sata-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-sata-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-sata-defs.h
new file mode 100644
index 0000000000..77af0e3f83
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-sata-defs.h
@@ -0,0 +1,311 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_SATA_DEFS_H__
+#define __CVMX_SATA_DEFS_H__
+
+#define CVMX_SATA_UCTL_CTL	   (0x000118006C000000ull)
+#define CVMX_SATA_UCTL_SHIM_CFG	   (0x000118006C0000E8ull)
+#define CVMX_SATA_UCTL_BIST_STATUS (0x000118006C000008ull)
+
+#define CVMX_SATA_UAHC_GBL_PI	    (0x00016C000000000Cull)
+#define CVMX_SATA_UAHC_GBL_TIMER1MS (0x00016C00000000E0ull)
+#define CVMX_SATA_UAHC_GBL_CAP	    (0x00016C0000000000ull)
+
+#define CVMX_SATA_UAHC_PX_CMD(offset)  (0x00016C0000000118ull + ((offset) & 1) * 128)
+#define CVMX_SATA_UAHC_PX_SCTL(offset) (0x00016C000000012Cull + ((offset) & 1) * 128)
+#define CVMX_SATA_UAHC_PX_SERR(offset) (0x00016C0000000130ull + ((offset) & 1) * 128)
+#define CVMX_SATA_UAHC_PX_IS(offset)   (0x00016C0000000110ull + ((offset) & 1) * 128)
+#define CVMX_SATA_UAHC_PX_SSTS(offset) (0x00016C0000000128ull + ((offset) & 1) * 128)
+#define CVMX_SATA_UAHC_PX_TFD(offset)  (0x00016C0000000120ull + ((offset) & 1) * 128)
+
+/**
+ * cvmx_sata_uctl_ctl
+ *
+ * This register controls clocks, resets, power, and BIST for the SATA.
+ *
+ * Accessible always.
+ *
+ * Reset by IOI reset.
+ */
+union cvmx_sata_uctl_ctl {
+	u64 u64;
+	struct cvmx_sata_uctl_ctl_s {
+		u64 clear_bist : 1;
+		u64 start_bist : 1;
+		u64 reserved_31_61 : 31;
+		u64 a_clk_en : 1;
+		u64 a_clk_byp_sel : 1;
+		u64 a_clkdiv_rst : 1;
+		u64 reserved_27_27 : 1;
+		u64 a_clkdiv_sel : 3;
+		u64 reserved_5_23 : 19;
+		u64 csclk_en : 1;
+		u64 reserved_2_3 : 2;
+		u64 sata_uahc_rst : 1;
+		u64 sata_uctl_rst : 1;
+	} s;
+	struct cvmx_sata_uctl_ctl_s cn70xx;
+	struct cvmx_sata_uctl_ctl_s cn70xxp1;
+	struct cvmx_sata_uctl_ctl_s cn73xx;
+};
+
+typedef union cvmx_sata_uctl_ctl cvmx_sata_uctl_ctl_t;
+
+/**
+ * cvmx_sata_uctl_bist_status
+ *
+ * Results from BIST runs of SATA's memories.
+ * Wait for NDONE==0, then look at defect indication.
+ *
+ * Accessible always.
+ *
+ * Reset by IOI reset.
+ */
+union cvmx_sata_uctl_bist_status {
+	u64 u64;
+	struct cvmx_sata_uctl_bist_status_s {
+		u64 reserved_42_63 : 22;
+		u64 uctl_xm_r_bist_ndone : 1;
+		u64 uctl_xm_w_bist_ndone : 1;
+		u64 reserved_36_39 : 4;
+		u64 uahc_p0_rxram_bist_ndone : 1;
+		u64 uahc_p1_rxram_bist_ndone : 1;
+		u64 uahc_p0_txram_bist_ndone : 1;
+		u64 uahc_p1_txram_bist_ndone : 1;
+		u64 reserved_10_31 : 22;
+		u64 uctl_xm_r_bist_status : 1;
+		u64 uctl_xm_w_bist_status : 1;
+		u64 reserved_4_7 : 4;
+		u64 uahc_p0_rxram_bist_status : 1;
+		u64 uahc_p1_rxram_bist_status : 1;
+		u64 uahc_p0_txram_bist_status : 1;
+		u64 uahc_p1_txram_bist_status : 1;
+	} s;
+	struct cvmx_sata_uctl_bist_status_s cn70xx;
+	struct cvmx_sata_uctl_bist_status_s cn70xxp1;
+	struct cvmx_sata_uctl_bist_status_s cn73xx;
+};
+
+typedef union cvmx_sata_uctl_bist_status cvmx_sata_uctl_bist_status_t;
+
+/**
+ * cvmx_sata_uctl_shim_cfg
+ *
+ * This register allows configuration of various shim (UCTL) features.
+ *
+ * Fields XS_NCB_OOB_* are captured when there are no outstanding OOB errors indicated in INTSTAT
+ * and a new OOB error arrives.
+ *
+ * Fields XS_BAD_DMA_* are captured when there are no outstanding DMA errors indicated in INTSTAT
+ * and a new DMA error arrives.
+ *
+ * Accessible only when SATA_UCTL_CTL[A_CLK_EN].
+ *
+ * Reset by IOI reset or SATA_UCTL_CTL[SATA_UCTL_RST].
+ */
+union cvmx_sata_uctl_shim_cfg {
+	u64 u64;
+	struct cvmx_sata_uctl_shim_cfg_s {
+		u64 xs_ncb_oob_wrn : 1;
+		u64 reserved_60_62 : 3;
+		u64 xs_ncb_oob_osrc : 12;
+		u64 xm_bad_dma_wrn : 1;
+		u64 reserved_44_46 : 3;
+		u64 xm_bad_dma_type : 4;
+		u64 reserved_14_39 : 26;
+		u64 dma_read_cmd : 2;
+		u64 reserved_11_11 : 1;
+		u64 dma_write_cmd : 1;
+		u64 dma_endian_mode : 2;
+		u64 reserved_2_7 : 6;
+		u64 csr_endian_mode : 2;
+	} s;
+	struct cvmx_sata_uctl_shim_cfg_cn70xx {
+		u64 xs_ncb_oob_wrn : 1;
+		u64 reserved_57_62 : 6;
+		u64 xs_ncb_oob_osrc : 9;
+		u64 xm_bad_dma_wrn : 1;
+		u64 reserved_44_46 : 3;
+		u64 xm_bad_dma_type : 4;
+		u64 reserved_13_39 : 27;
+		u64 dma_read_cmd : 1;
+		u64 reserved_10_11 : 2;
+		u64 dma_endian_mode : 2;
+		u64 reserved_2_7 : 6;
+		u64 csr_endian_mode : 2;
+	} cn70xx;
+	struct cvmx_sata_uctl_shim_cfg_cn70xx cn70xxp1;
+	struct cvmx_sata_uctl_shim_cfg_s cn73xx;
+};
+
+typedef union cvmx_sata_uctl_shim_cfg cvmx_sata_uctl_shim_cfg_t;
+
+/**
+ * cvmx_sata_uahc_gbl_cap
+ *
+ * See AHCI specification v1.3 section 3.1
+ *
+ */
+union cvmx_sata_uahc_gbl_cap {
+	u32 u32;
+	struct cvmx_sata_uahc_gbl_cap_s {
+		u32 s64a : 1;
+		u32 sncq : 1;
+		u32 ssntf : 1;
+		u32 smps : 1;
+		u32 sss : 1;
+		u32 salp : 1;
+		u32 sal : 1;
+		u32 sclo : 1;
+		u32 iss : 4;
+		u32 snzo : 1;
+		u32 sam : 1;
+		u32 spm : 1;
+		u32 fbss : 1;
+		u32 pmd : 1;
+		u32 ssc : 1;
+		u32 psc : 1;
+		u32 ncs : 5;
+		u32 cccs : 1;
+		u32 ems : 1;
+		u32 sxs : 1;
+		u32 np : 5;
+	} s;
+	struct cvmx_sata_uahc_gbl_cap_s cn70xx;
+	struct cvmx_sata_uahc_gbl_cap_s cn70xxp1;
+	struct cvmx_sata_uahc_gbl_cap_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_gbl_cap cvmx_sata_uahc_gbl_cap_t;
+
+/**
+ * cvmx_sata_uahc_p#_sctl
+ */
+union cvmx_sata_uahc_px_sctl {
+	u32 u32;
+	struct cvmx_sata_uahc_px_sctl_s {
+		u32 reserved_10_31 : 22;
+		u32 ipm : 2;
+		u32 reserved_6_7 : 2;
+		u32 spd : 2;
+		u32 reserved_3_3 : 1;
+		u32 det : 3;
+	} s;
+	struct cvmx_sata_uahc_px_sctl_s cn70xx;
+	struct cvmx_sata_uahc_px_sctl_s cn70xxp1;
+	struct cvmx_sata_uahc_px_sctl_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_px_sctl cvmx_sata_uahc_px_sctl_t;
+
+/**
+ * cvmx_sata_uahc_p#_cmd
+ */
+union cvmx_sata_uahc_px_cmd {
+	u32 u32;
+	struct cvmx_sata_uahc_px_cmd_s {
+		u32 icc : 4;
+		u32 asp : 1;
+		u32 alpe : 1;
+		u32 dlae : 1;
+		u32 atapi : 1;
+		u32 apste : 1;
+		u32 fbscp : 1;
+		u32 esp : 1;
+		u32 cpd : 1;
+		u32 mpsp : 1;
+		u32 hpcp : 1;
+		u32 pma : 1;
+		u32 cps : 1;
+		u32 cr : 1;
+		u32 fr : 1;
+		u32 mpss : 1;
+		u32 ccs : 5;
+		u32 reserved_5_7 : 3;
+		u32 fre : 1;
+		u32 clo : 1;
+		u32 pod : 1;
+		u32 sud : 1;
+		u32 st : 1;
+	} s;
+	struct cvmx_sata_uahc_px_cmd_s cn70xx;
+	struct cvmx_sata_uahc_px_cmd_s cn70xxp1;
+	struct cvmx_sata_uahc_px_cmd_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_px_cmd cvmx_sata_uahc_px_cmd_t;
+
+/**
+ * cvmx_sata_uahc_gbl_pi
+ *
+ * See AHCI specification v1.3 section 3.1.
+ *
+ */
+union cvmx_sata_uahc_gbl_pi {
+	u32 u32;
+	struct cvmx_sata_uahc_gbl_pi_s {
+		u32 reserved_2_31 : 30;
+		u32 pi : 2;
+	} s;
+	struct cvmx_sata_uahc_gbl_pi_s cn70xx;
+	struct cvmx_sata_uahc_gbl_pi_s cn70xxp1;
+	struct cvmx_sata_uahc_gbl_pi_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_gbl_pi cvmx_sata_uahc_gbl_pi_t;
+
+/**
+ * cvmx_sata_uahc_p#_ssts
+ */
+union cvmx_sata_uahc_px_ssts {
+	u32 u32;
+	struct cvmx_sata_uahc_px_ssts_s {
+		u32 reserved_12_31 : 20;
+		u32 ipm : 4;
+		u32 spd : 4;
+		u32 det : 4;
+	} s;
+	struct cvmx_sata_uahc_px_ssts_s cn70xx;
+	struct cvmx_sata_uahc_px_ssts_s cn70xxp1;
+	struct cvmx_sata_uahc_px_ssts_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_px_ssts cvmx_sata_uahc_px_ssts_t;
+
+/**
+ * cvmx_sata_uahc_p#_tfd
+ */
+union cvmx_sata_uahc_px_tfd {
+	u32 u32;
+	struct cvmx_sata_uahc_px_tfd_s {
+		u32 reserved_16_31 : 16;
+		u32 tferr : 8;
+		u32 sts : 8;
+	} s;
+	struct cvmx_sata_uahc_px_tfd_s cn70xx;
+	struct cvmx_sata_uahc_px_tfd_s cn70xxp1;
+	struct cvmx_sata_uahc_px_tfd_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_px_tfd cvmx_sata_uahc_px_tfd_t;
+
+/**
+ * cvmx_sata_uahc_gbl_timer1ms
+ */
+union cvmx_sata_uahc_gbl_timer1ms {
+	u32 u32;
+	struct cvmx_sata_uahc_gbl_timer1ms_s {
+		u32 reserved_20_31 : 12;
+		u32 timv : 20;
+	} s;
+	struct cvmx_sata_uahc_gbl_timer1ms_s cn70xx;
+	struct cvmx_sata_uahc_gbl_timer1ms_s cn70xxp1;
+	struct cvmx_sata_uahc_gbl_timer1ms_s cn73xx;
+};
+
+typedef union cvmx_sata_uahc_gbl_timer1ms cvmx_sata_uahc_gbl_timer1ms_t;
+
+#endif
-- 
2.29.2

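For reference, the BIST rule quoted in the register description above
("wait for NDONE==0, then look at defect indication") comes out roughly
as below for one of the flags. csr_rd() and the loop bound are
assumptions; this is illustrative only, not part of the patch.

/* Sketch: poll one BIST NDONE flag, then check its status bit. */
static int sata_uctl_xm_r_bist_ok(void)
{
	cvmx_sata_uctl_bist_status_t bist;
	int timeout = 10000;

	do {
		bist.u64 = csr_rd(CVMX_SATA_UCTL_BIST_STATUS);
	} while (bist.s.uctl_xm_r_bist_ndone && --timeout);

	/* with NDONE clear, a set status bit indicates a defect */
	return timeout != 0 && bist.s.uctl_xm_r_bist_status == 0;
}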

* [PATCH v1 28/50] mips: octeon: Add cvmx-sli-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (26 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 27/50] mips: octeon: Add cvmx-sata-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 29/50] mips: octeon: Add cvmx-smix-defs.h " Stefan Roese
                   ` (24 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the cvmx-sli-defs.h header file from the 2013 U-Boot version. It
will be used by the drivers added later to support PCIe and networking
on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-sli-defs.h  | 6548 +++++++++++++++++
 1 file changed, 6548 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sli-defs.h

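Unlike the fixed #define addresses in the previous patches, many SLI CSRs
moved between Octeon generations, so the header below resolves their
addresses at run time from the detected SoC family. Callers stay
model-agnostic, e.g. (a sketch; csr_rd() is an assumed helper):

	u64 bist = csr_rd(CVMX_SLI_BIST_STATUS);	/* _FUNC() picks the model's address */
	u64 ctl = csr_rd(CVMX_SLI_CTL_PORTX(0));	/* per-port, model-aware CSR */
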
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-sli-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-sli-defs.h
new file mode 100644
index 0000000000..221ed1bcde
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-sli-defs.h
@@ -0,0 +1,6548 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon sli.
+ */
+
+#ifndef __CVMX_SLI_DEFS_H__
+#define __CVMX_SLI_DEFS_H__
+
+#define CVMX_SLI_BIST_STATUS CVMX_SLI_BIST_STATUS_FUNC()
+static inline u64 CVMX_SLI_BIST_STATUS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000580ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000580ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028580ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028580ull;
+	}
+	return 0x0000000000028580ull;
+}
+
+#define CVMX_SLI_CIU_INT_ENB (0x00011F0000027110ull)
+#define CVMX_SLI_CIU_INT_SUM (0x00011F0000027100ull)
+static inline u64 CVMX_SLI_CTL_PORTX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000050ull + (offset) * 16;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010050ull + (offset) * 16;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000050ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000006E0ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000286E0ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000286E0ull + (offset) * 16;
+	}
+	return 0x00000000000286E0ull + (offset) * 16;
+}
+
+#define CVMX_SLI_CTL_STATUS CVMX_SLI_CTL_STATUS_FUNC()
+static inline u64 CVMX_SLI_CTL_STATUS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000570ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000570ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028570ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028570ull;
+	}
+	return 0x0000000000028570ull;
+}
+
+#define CVMX_SLI_DATA_OUT_CNT CVMX_SLI_DATA_OUT_CNT_FUNC()
+static inline u64 CVMX_SLI_DATA_OUT_CNT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000005F0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000005F0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000285F0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000285F0ull;
+	}
+	return 0x00000000000285F0ull;
+}
+
+#define CVMX_SLI_DBG_DATA   (0x0000000000000310ull)
+#define CVMX_SLI_DBG_SELECT (0x0000000000000300ull)
+static inline u64 CVMX_SLI_DMAX_CNT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000400ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028400ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028400ull + (offset) * 16;
+	}
+	return 0x0000000000028400ull + (offset) * 16;
+}
+
+static inline u64 CVMX_SLI_DMAX_INT_LEVEL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000003E0ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000003E0ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000283E0ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000283E0ull + (offset) * 16;
+	}
+	return 0x00000000000283E0ull + (offset) * 16;
+}
+
+static inline u64 CVMX_SLI_DMAX_TIM(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000420ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000420ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028420ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028420ull + (offset) * 16;
+	}
+	return 0x0000000000028420ull + (offset) * 16;
+}
+
+#define CVMX_SLI_INT_ENB_CIU	       (0x0000000000003CD0ull)
+#define CVMX_SLI_INT_ENB_PORTX(offset) (0x0000000000000340ull + ((offset) & 3) * 16)
+#define CVMX_SLI_INT_SUM	       (0x0000000000000330ull)
+#define CVMX_SLI_LAST_WIN_RDATA0       (0x0000000000000600ull)
+#define CVMX_SLI_LAST_WIN_RDATA1       (0x0000000000000610ull)
+#define CVMX_SLI_LAST_WIN_RDATA2       (0x00000000000006C0ull)
+#define CVMX_SLI_LAST_WIN_RDATA3       (0x00000000000006D0ull)
+#define CVMX_SLI_MACX_PFX_DMA_VF_INT(offset, block_id)                                             \
+	(0x0000000000027280ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_DMA_VF_INT_ENB(offset, block_id)                                         \
+	(0x0000000000027500ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_FLR_VF_INT(offset, block_id)                                             \
+	(0x0000000000027400ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_INT_ENB(offset, block_id)                                                \
+	(0x0000000000027080ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_INT_SUM(offset, block_id)                                                \
+	(0x0000000000027000ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_MBOX_INT(offset, block_id)                                               \
+	(0x0000000000027380ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_PKT_VF_INT(offset, block_id)                                             \
+	(0x0000000000027300ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_PKT_VF_INT_ENB(offset, block_id)                                         \
+	(0x0000000000027580ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_PP_VF_INT(offset, block_id)                                              \
+	(0x0000000000027200ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MACX_PFX_PP_VF_INT_ENB(offset, block_id)                                          \
+	(0x0000000000027480ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_MAC_CREDIT_CNT CVMX_SLI_MAC_CREDIT_CNT_FUNC()
+static inline u64 CVMX_SLI_MAC_CREDIT_CNT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003D70ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003D70ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023D70ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023D70ull;
+	}
+	return 0x0000000000023D70ull;
+}
+
+#define CVMX_SLI_MAC_CREDIT_CNT2 CVMX_SLI_MAC_CREDIT_CNT2_FUNC()
+static inline u64 CVMX_SLI_MAC_CREDIT_CNT2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000013E10ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003E10ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023E10ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023E10ull;
+	}
+	return 0x0000000000023E10ull;
+}
+
+#define CVMX_SLI_MAC_NUMBER CVMX_SLI_MAC_NUMBER_FUNC()
+static inline u64 CVMX_SLI_MAC_NUMBER_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003E00ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003E00ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020050ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020050ull;
+	}
+	return 0x0000000000020050ull;
+}
+
+#define CVMX_SLI_MEM_ACCESS_CTL CVMX_SLI_MEM_ACCESS_CTL_FUNC()
+static inline u64 CVMX_SLI_MEM_ACCESS_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000002F0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000002F0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000282F0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000282F0ull;
+	}
+	return 0x00000000000282F0ull;
+}
+
+static inline u64 CVMX_SLI_MEM_ACCESS_SUBIDX(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000000E0ull + (offset) * 16 - 16 * 12;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000000E0ull + (offset) * 16 - 16 * 12;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000280E0ull + (offset) * 16 - 16 * 12;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000280E0ull + (offset) * 16 - 16 * 12;
+	}
+	return 0x00000000000280E0ull + (offset) * 16 - 16 * 12;
+}
+
+#define CVMX_SLI_MEM_CTL CVMX_SLI_MEM_CTL_FUNC()
+static inline u64 CVMX_SLI_MEM_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000005E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000285E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000285E0ull;
+	}
+	return 0x00000000000285E0ull;
+}
+
+#define CVMX_SLI_MEM_INT_SUM CVMX_SLI_MEM_INT_SUM_FUNC()
+static inline u64 CVMX_SLI_MEM_INT_SUM_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000005D0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000285D0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000285D0ull;
+	}
+	return 0x00000000000285D0ull;
+}
+
+static inline u64 CVMX_SLI_MSIXX_TABLE_ADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000006000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000000000ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000000ull + (offset) * 16;
+	}
+	return 0x0000000000000000ull + (offset) * 16;
+}
+
+static inline u64 CVMX_SLI_MSIXX_TABLE_DATA(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000006008ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000000008ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000008ull + (offset) * 16;
+	}
+	return 0x0000000000000008ull + (offset) * 16;
+}
+
+#define CVMX_SLI_MSIX_MACX_PF_TABLE_ADDR(offset) (0x0000000000007C00ull + ((offset) & 3) * 16)
+#define CVMX_SLI_MSIX_MACX_PF_TABLE_DATA(offset) (0x0000000000007C08ull + ((offset) & 3) * 16)
+#define CVMX_SLI_MSIX_PBA0			 CVMX_SLI_MSIX_PBA0_FUNC()
+static inline u64 CVMX_SLI_MSIX_PBA0_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000007000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000001000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001000ull;
+	}
+	return 0x0000000000001000ull;
+}
+
+#define CVMX_SLI_MSIX_PBA1 CVMX_SLI_MSIX_PBA1_FUNC()
+static inline u64 CVMX_SLI_MSIX_PBA1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000007010ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000001008ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001008ull;
+	}
+	return 0x0000000000001008ull;
+}
+
+#define CVMX_SLI_MSI_ENB0 (0x0000000000003C50ull)
+#define CVMX_SLI_MSI_ENB1 (0x0000000000003C60ull)
+#define CVMX_SLI_MSI_ENB2 (0x0000000000003C70ull)
+#define CVMX_SLI_MSI_ENB3 (0x0000000000003C80ull)
+#define CVMX_SLI_MSI_RCV0 CVMX_SLI_MSI_RCV0_FUNC()
+static inline u64 CVMX_SLI_MSI_RCV0_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003C10ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003C10ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023C10ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023C10ull;
+	}
+	return 0x0000000000023C10ull;
+}
+
+#define CVMX_SLI_MSI_RCV1 CVMX_SLI_MSI_RCV1_FUNC()
+static inline u64 CVMX_SLI_MSI_RCV1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003C20ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003C20ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023C20ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023C20ull;
+	}
+	return 0x0000000000023C20ull;
+}
+
+#define CVMX_SLI_MSI_RCV2 CVMX_SLI_MSI_RCV2_FUNC()
+static inline u64 CVMX_SLI_MSI_RCV2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003C30ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003C30ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023C30ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023C30ull;
+	}
+	return 0x0000000000023C30ull;
+}
+
+#define CVMX_SLI_MSI_RCV3 CVMX_SLI_MSI_RCV3_FUNC()
+static inline u64 CVMX_SLI_MSI_RCV3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003C40ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003C40ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023C40ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023C40ull;
+	}
+	return 0x0000000000023C40ull;
+}
+
+#define CVMX_SLI_MSI_RD_MAP CVMX_SLI_MSI_RD_MAP_FUNC()
+static inline u64 CVMX_SLI_MSI_RD_MAP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003CA0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003CA0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023CA0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023CA0ull;
+	}
+	return 0x0000000000023CA0ull;
+}
+
+#define CVMX_SLI_MSI_W1C_ENB0 (0x0000000000003CF0ull)
+#define CVMX_SLI_MSI_W1C_ENB1 (0x0000000000003D00ull)
+#define CVMX_SLI_MSI_W1C_ENB2 (0x0000000000003D10ull)
+#define CVMX_SLI_MSI_W1C_ENB3 (0x0000000000003D20ull)
+#define CVMX_SLI_MSI_W1S_ENB0 (0x0000000000003D30ull)
+#define CVMX_SLI_MSI_W1S_ENB1 (0x0000000000003D40ull)
+#define CVMX_SLI_MSI_W1S_ENB2 (0x0000000000003D50ull)
+#define CVMX_SLI_MSI_W1S_ENB3 (0x0000000000003D60ull)
+#define CVMX_SLI_MSI_WR_MAP   CVMX_SLI_MSI_WR_MAP_FUNC()
+static inline u64 CVMX_SLI_MSI_WR_MAP_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003C90ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003C90ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023C90ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023C90ull;
+	}
+	return 0x0000000000023C90ull;
+}
+
+#define CVMX_SLI_NQM_RSP_ERR_SND_DBG (0x00011F0000028800ull)
+#define CVMX_SLI_PCIE_MSI_RCV	     CVMX_SLI_PCIE_MSI_RCV_FUNC()
+static inline u64 CVMX_SLI_PCIE_MSI_RCV_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003CB0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003CB0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023CB0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023CB0ull;
+	}
+	return 0x0000000000023CB0ull;
+}
+
+#define CVMX_SLI_PCIE_MSI_RCV_B1 CVMX_SLI_PCIE_MSI_RCV_B1_FUNC()
+static inline u64 CVMX_SLI_PCIE_MSI_RCV_B1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000650ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000650ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028650ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028650ull;
+	}
+	return 0x0000000000028650ull;
+}
+
+#define CVMX_SLI_PCIE_MSI_RCV_B2 CVMX_SLI_PCIE_MSI_RCV_B2_FUNC()
+static inline u64 CVMX_SLI_PCIE_MSI_RCV_B2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000660ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000660ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028660ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028660ull;
+	}
+	return 0x0000000000028660ull;
+}
+
+#define CVMX_SLI_PCIE_MSI_RCV_B3 CVMX_SLI_PCIE_MSI_RCV_B3_FUNC()
+static inline u64 CVMX_SLI_PCIE_MSI_RCV_B3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000670ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000670ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028670ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028670ull;
+	}
+	return 0x0000000000028670ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_CNTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000002400ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000002400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000100B0ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000100B0ull + (offset) * 0x20000ull;
+	}
+	return 0x00000000000100B0ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKTX_ERROR_INFO(offset) (0x00000000000100C0ull + ((offset) & 63) * 0x20000ull)
+static inline u64 CVMX_SLI_PKTX_INPUT_CONTROL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000004000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010000ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010000ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010000ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_INSTR_BADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000002800ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000002800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010010ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010010ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010010ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_INSTR_BAOFF_DBELL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000002C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000002C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010020ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010020ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010020ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_INSTR_FIFO_RSIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003000ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010030ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010030ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010030ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKTX_INSTR_HEADER(offset) (0x0000000000003400ull + ((offset) & 31) * 16)
+static inline u64 CVMX_SLI_PKTX_INT_LEVELS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000004400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000100A0ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000100A0ull + (offset) * 0x20000ull;
+	}
+	return 0x00000000000100A0ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKTX_IN_BP(offset)    (0x0000000000003800ull + ((offset) & 31) * 16)
+#define CVMX_SLI_PKTX_MBOX_INT(offset) (0x0000000000010210ull + ((offset) & 63) * 0x20000ull)
+static inline u64 CVMX_SLI_PKTX_OUTPUT_CONTROL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000004800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010050ull + (offset) * 0x20000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010050ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010050ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_OUT_SIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010060ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010060ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010060ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKTX_PF_VF_MBOX_SIGX(offset, block_id)                                            \
+	(0x0000000000010200ull + (((offset) & 1) + ((block_id) & 63) * 0x4000ull) * 8)
+static inline u64 CVMX_SLI_PKTX_SLIST_BADDR(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001400ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001400ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010070ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010070ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010070ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_SLIST_BAOFF_DBELL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001800ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001800ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010080ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010080ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010080ull + (offset) * 0x20000ull;
+}
+
+static inline u64 CVMX_SLI_PKTX_SLIST_FIFO_RSIZE(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001C00ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001C00ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010090ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010090ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010090ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKTX_VF_INT_SUM(offset) (0x00000000000100D0ull + ((offset) & 63) * 0x20000ull)
+#define CVMX_SLI_PKTX_VF_SIG(offset)	 (0x0000000000004C00ull + ((offset) & 63) * 16)
+#define CVMX_SLI_PKT_BIST_STATUS	 (0x0000000000029220ull)
+#define CVMX_SLI_PKT_CNT_INT		 CVMX_SLI_PKT_CNT_INT_FUNC()
+static inline u64 CVMX_SLI_PKT_CNT_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001130ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001130ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029130ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029130ull;
+	}
+	return 0x0000000000029130ull;
+}
+
+#define CVMX_SLI_PKT_CNT_INT_ENB   (0x0000000000001150ull)
+#define CVMX_SLI_PKT_CTL	   (0x0000000000001220ull)
+#define CVMX_SLI_PKT_DATA_OUT_ES   (0x00000000000010B0ull)
+#define CVMX_SLI_PKT_DATA_OUT_NS   (0x00000000000010A0ull)
+#define CVMX_SLI_PKT_DATA_OUT_ROR  (0x0000000000001090ull)
+#define CVMX_SLI_PKT_DPADDR	   (0x0000000000001080ull)
+#define CVMX_SLI_PKT_GBL_CONTROL   (0x0000000000029210ull)
+#define CVMX_SLI_PKT_INPUT_CONTROL (0x0000000000001170ull)
+#define CVMX_SLI_PKT_INSTR_ENB	   (0x0000000000001000ull)
+#define CVMX_SLI_PKT_INSTR_RD_SIZE (0x00000000000011A0ull)
+#define CVMX_SLI_PKT_INSTR_SIZE	   (0x0000000000001020ull)
+#define CVMX_SLI_PKT_INT	   CVMX_SLI_PKT_INT_FUNC()
+static inline u64 CVMX_SLI_PKT_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001160ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029160ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029160ull;
+	}
+	return 0x0000000000029160ull;
+}
+
+#define CVMX_SLI_PKT_INT_LEVELS (0x0000000000001120ull)
+#define CVMX_SLI_PKT_IN_BP	(0x0000000000001210ull)
+static inline u64 CVMX_SLI_PKT_IN_DONEX_CNTS(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000002000ull + (offset) * 16;
+
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000002000ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000010040ull + (offset) * 0x20000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000010040ull + (offset) * 0x20000ull;
+	}
+	return 0x0000000000010040ull + (offset) * 0x20000ull;
+}
+
+#define CVMX_SLI_PKT_IN_INSTR_COUNTS CVMX_SLI_PKT_IN_INSTR_COUNTS_FUNC()
+static inline u64 CVMX_SLI_PKT_IN_INSTR_COUNTS_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001200ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001200ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029200ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029200ull;
+	}
+	return 0x0000000000029200ull;
+}
+
+#define CVMX_SLI_PKT_IN_INT CVMX_SLI_PKT_IN_INT_FUNC()
+static inline u64 CVMX_SLI_PKT_IN_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001150ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029150ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029150ull;
+	}
+	return 0x0000000000029150ull;
+}
+
+#define CVMX_SLI_PKT_IN_JABBER	  (0x0000000000029170ull)
+#define CVMX_SLI_PKT_IN_PCIE_PORT (0x00000000000011B0ull)
+#define CVMX_SLI_PKT_IPTR	  (0x0000000000001070ull)
+#define CVMX_SLI_PKT_MAC0_SIG0	  (0x0000000000001300ull)
+#define CVMX_SLI_PKT_MAC0_SIG1	  (0x0000000000001310ull)
+#define CVMX_SLI_PKT_MAC1_SIG0	  (0x0000000000001320ull)
+#define CVMX_SLI_PKT_MAC1_SIG1	  (0x0000000000001330ull)
+#define CVMX_SLI_PKT_MACX_PFX_RINFO(offset, block_id)                                              \
+	(0x0000000000029030ull + (((offset) & 1) + ((block_id) & 3) * 0x2ull) * 16)
+#define CVMX_SLI_PKT_MACX_RINFO(offset) (0x0000000000001030ull + ((offset) & 3) * 16)
+#define CVMX_SLI_PKT_MEM_CTL		CVMX_SLI_PKT_MEM_CTL_FUNC()
+static inline u64 CVMX_SLI_PKT_MEM_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001120ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029120ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029120ull;
+	}
+	return 0x0000000000029120ull;
+}
+
+#define CVMX_SLI_PKT_OUTPUT_WMARK CVMX_SLI_PKT_OUTPUT_WMARK_FUNC()
+static inline u64 CVMX_SLI_PKT_OUTPUT_WMARK_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001180ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001180ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029180ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029180ull;
+	}
+	return 0x0000000000029180ull;
+}
+
+#define CVMX_SLI_PKT_OUT_BMODE	    (0x00000000000010D0ull)
+#define CVMX_SLI_PKT_OUT_BP_EN	    (0x0000000000001240ull)
+#define CVMX_SLI_PKT_OUT_BP_EN2_W1C (0x0000000000029290ull)
+#define CVMX_SLI_PKT_OUT_BP_EN2_W1S (0x0000000000029270ull)
+#define CVMX_SLI_PKT_OUT_BP_EN_W1C  (0x0000000000029280ull)
+#define CVMX_SLI_PKT_OUT_BP_EN_W1S  (0x0000000000029260ull)
+#define CVMX_SLI_PKT_OUT_ENB	    (0x0000000000001010ull)
+#define CVMX_SLI_PKT_PCIE_PORT	    (0x00000000000010E0ull)
+#define CVMX_SLI_PKT_PKIND_VALID    (0x0000000000029190ull)
+#define CVMX_SLI_PKT_PORT_IN_RST    (0x00000000000011F0ull)
+#define CVMX_SLI_PKT_RING_RST	    CVMX_SLI_PKT_RING_RST_FUNC()
+static inline u64 CVMX_SLI_PKT_RING_RST_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000011E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000291E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000291E0ull;
+	}
+	return 0x00000000000291E0ull;
+}
+
+#define CVMX_SLI_PKT_SLIST_ES  (0x0000000000001050ull)
+#define CVMX_SLI_PKT_SLIST_NS  (0x0000000000001040ull)
+#define CVMX_SLI_PKT_SLIST_ROR (0x0000000000001030ull)
+#define CVMX_SLI_PKT_TIME_INT  CVMX_SLI_PKT_TIME_INT_FUNC()
+static inline u64 CVMX_SLI_PKT_TIME_INT_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000001140ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000001140ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000029140ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000029140ull;
+	}
+	return 0x0000000000029140ull;
+}
+
+#define CVMX_SLI_PKT_TIME_INT_ENB    (0x0000000000001160ull)
+#define CVMX_SLI_PORTX_PKIND(offset) (0x0000000000000800ull + ((offset) & 31) * 16)
+#define CVMX_SLI_PP_PKT_CSR_CONTROL  (0x00011F00000282D0ull)
+#define CVMX_SLI_S2C_END_MERGE	     CVMX_SLI_S2C_END_MERGE_FUNC()
+static inline u64 CVMX_SLI_S2C_END_MERGE_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00011F0000015000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00011F0000025000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00011F0000025000ull;
+	}
+	return 0x00011F0000025000ull;
+}
+
+static inline u64 CVMX_SLI_S2M_PORTX_CTL(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003D80ull + (offset) * 16;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000003D80ull + (offset) * 16;
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000013D80ull + (offset) * 16;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000003D80ull + (offset) * 16;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000023D80ull + (offset) * 16;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000023D80ull + (offset) * 16;
+	}
+	return 0x0000000000023D80ull + (offset) * 16;
+}
+
+#define CVMX_SLI_SCRATCH_1 CVMX_SLI_SCRATCH_1_FUNC()
+static inline u64 CVMX_SLI_SCRATCH_1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000003C0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000003C0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000283C0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000283C0ull;
+	}
+	return 0x00000000000283C0ull;
+}
+
+#define CVMX_SLI_SCRATCH_2 CVMX_SLI_SCRATCH_2_FUNC()
+static inline u64 CVMX_SLI_SCRATCH_2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000003D0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000003D0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000283D0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000283D0ull;
+	}
+	return 0x00000000000283D0ull;
+}
+
+#define CVMX_SLI_STATE1 CVMX_SLI_STATE1_FUNC()
+static inline u64 CVMX_SLI_STATE1_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000620ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000620ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028620ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028620ull;
+	}
+	return 0x0000000000028620ull;
+}
+
+#define CVMX_SLI_STATE2 CVMX_SLI_STATE2_FUNC()
+static inline u64 CVMX_SLI_STATE2_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000630ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000630ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028630ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028630ull;
+	}
+	return 0x0000000000028630ull;
+}
+
+#define CVMX_SLI_STATE3 CVMX_SLI_STATE3_FUNC()
+static inline u64 CVMX_SLI_STATE3_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000640ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000640ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000028640ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000028640ull;
+	}
+	return 0x0000000000028640ull;
+}
+
+#define CVMX_SLI_TX_PIPE    (0x0000000000001230ull)
+#define CVMX_SLI_WINDOW_CTL CVMX_SLI_WINDOW_CTL_FUNC()
+static inline u64 CVMX_SLI_WINDOW_CTL_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000002E0ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00000000000002E0ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00000000000282E0ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00000000000282E0ull;
+	}
+	return 0x00000000000282E0ull;
+}
+
+#define CVMX_SLI_WIN_RD_ADDR CVMX_SLI_WIN_RD_ADDR_FUNC()
+static inline u64 CVMX_SLI_WIN_RD_ADDR_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000010ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000010ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020010ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020010ull;
+	}
+	return 0x0000000000020010ull;
+}
+
+#define CVMX_SLI_WIN_RD_DATA CVMX_SLI_WIN_RD_DATA_FUNC()
+static inline u64 CVMX_SLI_WIN_RD_DATA_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000040ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000040ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020040ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020040ull;
+	}
+	return 0x0000000000020040ull;
+}
+
+#define CVMX_SLI_WIN_WR_ADDR CVMX_SLI_WIN_WR_ADDR_FUNC()
+static inline u64 CVMX_SLI_WIN_WR_ADDR_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000000ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000000ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020000ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020000ull;
+	}
+	return 0x0000000000020000ull;
+}
+
+#define CVMX_SLI_WIN_WR_DATA CVMX_SLI_WIN_WR_DATA_FUNC()
+static inline u64 CVMX_SLI_WIN_WR_DATA_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000020ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000020ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020020ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020020ull;
+	}
+	return 0x0000000000020020ull;
+}
+
+#define CVMX_SLI_WIN_WR_MASK CVMX_SLI_WIN_WR_MASK_FUNC()
+static inline u64 CVMX_SLI_WIN_WR_MASK_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000030ull;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0000000000000030ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0000000000020030ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000020030ull;
+	}
+	return 0x0000000000020030ull;
+}
+
+/**
+ * cvmx_sli_bist_status
+ *
+ * This register contains results from BIST runs of MAC's memories: 0 = pass (or BIST in
+ * progress/never run), 1 = fail.
+ */
+union cvmx_sli_bist_status {
+	u64 u64;
+	struct cvmx_sli_bist_status_s {
+		u64 reserved_32_63 : 32;
+		u64 ncb_req : 1;
+		u64 n2p0_c : 1;
+		u64 n2p0_o : 1;
+		u64 n2p1_c : 1;
+		u64 n2p1_o : 1;
+		u64 cpl_p0 : 1;
+		u64 cpl_p1 : 1;
+		u64 reserved_19_24 : 6;
+		u64 p2n0_c0 : 1;
+		u64 p2n0_c1 : 1;
+		u64 p2n0_n : 1;
+		u64 p2n0_p0 : 1;
+		u64 p2n0_p1 : 1;
+		u64 p2n1_c0 : 1;
+		u64 p2n1_c1 : 1;
+		u64 p2n1_n : 1;
+		u64 p2n1_p0 : 1;
+		u64 p2n1_p1 : 1;
+		u64 reserved_6_8 : 3;
+		u64 dsi1_1 : 1;
+		u64 dsi1_0 : 1;
+		u64 dsi0_1 : 1;
+		u64 dsi0_0 : 1;
+		u64 msi : 1;
+		u64 ncb_cmd : 1;
+	} s;
+	struct cvmx_sli_bist_status_cn61xx {
+		u64 reserved_31_63 : 33;
+		u64 n2p0_c : 1;
+		u64 n2p0_o : 1;
+		u64 reserved_27_28 : 2;
+		u64 cpl_p0 : 1;
+		u64 cpl_p1 : 1;
+		u64 reserved_19_24 : 6;
+		u64 p2n0_c0 : 1;
+		u64 p2n0_c1 : 1;
+		u64 p2n0_n : 1;
+		u64 p2n0_p0 : 1;
+		u64 p2n0_p1 : 1;
+		u64 p2n1_c0 : 1;
+		u64 p2n1_c1 : 1;
+		u64 p2n1_n : 1;
+		u64 p2n1_p0 : 1;
+		u64 p2n1_p1 : 1;
+		u64 reserved_6_8 : 3;
+		u64 dsi1_1 : 1;
+		u64 dsi1_0 : 1;
+		u64 dsi0_1 : 1;
+		u64 dsi0_0 : 1;
+		u64 msi : 1;
+		u64 ncb_cmd : 1;
+	} cn61xx;
+	struct cvmx_sli_bist_status_cn63xx {
+		u64 reserved_31_63 : 33;
+		u64 n2p0_c : 1;
+		u64 n2p0_o : 1;
+		u64 n2p1_c : 1;
+		u64 n2p1_o : 1;
+		u64 cpl_p0 : 1;
+		u64 cpl_p1 : 1;
+		u64 reserved_19_24 : 6;
+		u64 p2n0_c0 : 1;
+		u64 p2n0_c1 : 1;
+		u64 p2n0_n : 1;
+		u64 p2n0_p0 : 1;
+		u64 p2n0_p1 : 1;
+		u64 p2n1_c0 : 1;
+		u64 p2n1_c1 : 1;
+		u64 p2n1_n : 1;
+		u64 p2n1_p0 : 1;
+		u64 p2n1_p1 : 1;
+		u64 reserved_6_8 : 3;
+		u64 dsi1_1 : 1;
+		u64 dsi1_0 : 1;
+		u64 dsi0_1 : 1;
+		u64 dsi0_0 : 1;
+		u64 msi : 1;
+		u64 ncb_cmd : 1;
+	} cn63xx;
+	struct cvmx_sli_bist_status_cn63xx cn63xxp1;
+	struct cvmx_sli_bist_status_cn61xx cn66xx;
+	struct cvmx_sli_bist_status_s cn68xx;
+	struct cvmx_sli_bist_status_s cn68xxp1;
+	struct cvmx_sli_bist_status_cn70xx {
+		u64 reserved_31_63 : 33;
+		u64 n2p0_c : 1;
+		u64 n2p0_o : 1;
+		u64 reserved_27_28 : 2;
+		u64 cpl_p0 : 1;
+		u64 cpl_p1 : 1;
+		u64 reserved_19_24 : 6;
+		u64 p2n0_c0 : 1;
+		u64 reserved_17_17 : 1;
+		u64 p2n0_n : 1;
+		u64 p2n0_p0 : 1;
+		u64 reserved_14_14 : 1;
+		u64 p2n1_c0 : 1;
+		u64 reserved_12_12 : 1;
+		u64 p2n1_n : 1;
+		u64 p2n1_p0 : 1;
+		u64 reserved_6_9 : 4;
+		u64 dsi1_1 : 1;
+		u64 dsi1_0 : 1;
+		u64 dsi0_1 : 1;
+		u64 dsi0_0 : 1;
+		u64 msi : 1;
+		u64 ncb_cmd : 1;
+	} cn70xx;
+	struct cvmx_sli_bist_status_cn70xx cn70xxp1;
+	struct cvmx_sli_bist_status_s cn73xx;
+	struct cvmx_sli_bist_status_s cn78xx;
+	struct cvmx_sli_bist_status_s cn78xxp1;
+	struct cvmx_sli_bist_status_cn61xx cnf71xx;
+	struct cvmx_sli_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_sli_bist_status cvmx_sli_bist_status_t;
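+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): any set bit reports a failing memory. This assumes the
+ * CVMX_SLI_BIST_STATUS address macro defined earlier in this file and the
+ * usual cvmx_read_csr() accessor:
+ *
+ *	cvmx_sli_bist_status_t bist;
+ *
+ *	bist.u64 = cvmx_read_csr(CVMX_SLI_BIST_STATUS);
+ *	if (bist.u64)
+ *		printf("SLI BIST failure: 0x%016llx\n",
+ *		       (unsigned long long)bist.u64);
+ */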
+
+/**
+ * cvmx_sli_ciu_int_enb
+ *
+ * Interrupt enable register for a given SLI_CIU_INT_SUM register.
+ *
+ */
+union cvmx_sli_ciu_int_enb {
+	u64 u64;
+	struct cvmx_sli_ciu_int_enb_s {
+		u64 reserved_51_63 : 13;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 m3p0_pppf_err : 1;
+		u64 m3p0_pktpf_err : 1;
+		u64 m3p0_dmapf_err : 1;
+		u64 m2p0_pppf_err : 1;
+		u64 m2p0_ppvf_err : 1;
+		u64 m2p0_pktpf_err : 1;
+		u64 m2p0_pktvf_err : 1;
+		u64 m2p0_dmapf_err : 1;
+		u64 m2p0_dmavf_err : 1;
+		u64 m1p0_pppf_err : 1;
+		u64 m1p0_pktpf_err : 1;
+		u64 m1p0_dmapf_err : 1;
+		u64 m0p1_pppf_err : 1;
+		u64 m0p1_ppvf_err : 1;
+		u64 m0p1_pktpf_err : 1;
+		u64 m0p1_pktvf_err : 1;
+		u64 m0p1_dmapf_err : 1;
+		u64 m0p1_dmavf_err : 1;
+		u64 m0p0_pppf_err : 1;
+		u64 m0p0_ppvf_err : 1;
+		u64 m0p0_pktpf_err : 1;
+		u64 m0p0_pktvf_err : 1;
+		u64 m0p0_dmapf_err : 1;
+		u64 m0p0_dmavf_err : 1;
+		u64 m2v0_flr : 1;
+		u64 m2p0_flr : 1;
+		u64 reserved_5_8 : 4;
+		u64 m0v1_flr : 1;
+		u64 m0p1_flr : 1;
+		u64 m0v0_flr : 1;
+		u64 m0p0_flr : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_ciu_int_enb_s cn73xx;
+	struct cvmx_sli_ciu_int_enb_s cn78xx;
+	struct cvmx_sli_ciu_int_enb_s cnf75xx;
+};
+
+typedef union cvmx_sli_ciu_int_enb cvmx_sli_ciu_int_enb_t;
+
+/**
+ * cvmx_sli_ciu_int_sum
+ *
+ * The fields in this register are set when an interrupt condition occurs; write 1 to clear.
+ * A bit set in this register sends an interrupt to the CIU.
+ */
+union cvmx_sli_ciu_int_sum {
+	u64 u64;
+	struct cvmx_sli_ciu_int_sum_s {
+		u64 reserved_51_63 : 13;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 m3p0_pppf_err : 1;
+		u64 m3p0_pktpf_err : 1;
+		u64 m3p0_dmapf_err : 1;
+		u64 m2p0_pppf_err : 1;
+		u64 m2p0_ppvf_err : 1;
+		u64 m2p0_pktpf_err : 1;
+		u64 m2p0_pktvf_err : 1;
+		u64 m2p0_dmapf_err : 1;
+		u64 m2p0_dmavf_err : 1;
+		u64 m1p0_pppf_err : 1;
+		u64 m1p0_pktpf_err : 1;
+		u64 m1p0_dmapf_err : 1;
+		u64 m0p1_pppf_err : 1;
+		u64 m0p1_ppvf_err : 1;
+		u64 m0p1_pktpf_err : 1;
+		u64 m0p1_pktvf_err : 1;
+		u64 m0p1_dmapf_err : 1;
+		u64 m0p1_dmavf_err : 1;
+		u64 m0p0_pppf_err : 1;
+		u64 m0p0_ppvf_err : 1;
+		u64 m0p0_pktpf_err : 1;
+		u64 m0p0_pktvf_err : 1;
+		u64 m0p0_dmapf_err : 1;
+		u64 m0p0_dmavf_err : 1;
+		u64 m2v0_flr : 1;
+		u64 m2p0_flr : 1;
+		u64 reserved_5_8 : 4;
+		u64 m0v1_flr : 1;
+		u64 m0p1_flr : 1;
+		u64 m0v0_flr : 1;
+		u64 m0p0_flr : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_ciu_int_sum_s cn73xx;
+	struct cvmx_sli_ciu_int_sum_s cn78xx;
+	struct cvmx_sli_ciu_int_sum_s cnf75xx;
+};
+
+typedef union cvmx_sli_ciu_int_sum cvmx_sli_ciu_int_sum_t;
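+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): because the register is write-1-to-clear, writing back the
+ * value just read acknowledges exactly the conditions that were pending.
+ * The CVMX_SLI_CIU_INT_SUM address macro and CSR accessors are assumed to
+ * be available:
+ *
+ *	cvmx_sli_ciu_int_sum_t sum;
+ *
+ *	sum.u64 = cvmx_read_csr(CVMX_SLI_CIU_INT_SUM);
+ *	cvmx_write_csr(CVMX_SLI_CIU_INT_SUM, sum.u64);
+ */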
+
+/**
+ * cvmx_sli_ctl_port#
+ *
+ * These registers contain control information for access to ports. Indexed by SLI_PORT_E.
+ * Note: SLI_CTL_PORT0 controls PF0.
+ */
+union cvmx_sli_ctl_portx {
+	u64 u64;
+	struct cvmx_sli_ctl_portx_s {
+		u64 reserved_22_63 : 42;
+		u64 intd : 1;
+		u64 intc : 1;
+		u64 intb : 1;
+		u64 inta : 1;
+		u64 dis_port : 1;
+		u64 waitl_com : 1;
+		u64 intd_map : 2;
+		u64 intc_map : 2;
+		u64 intb_map : 2;
+		u64 inta_map : 2;
+		u64 ctlp_ro : 1;
+		u64 reserved_6_6 : 1;
+		u64 ptlp_ro : 1;
+		u64 reserved_1_4 : 4;
+		u64 wait_com : 1;
+	} s;
+	struct cvmx_sli_ctl_portx_s cn61xx;
+	struct cvmx_sli_ctl_portx_s cn63xx;
+	struct cvmx_sli_ctl_portx_s cn63xxp1;
+	struct cvmx_sli_ctl_portx_s cn66xx;
+	struct cvmx_sli_ctl_portx_s cn68xx;
+	struct cvmx_sli_ctl_portx_s cn68xxp1;
+	struct cvmx_sli_ctl_portx_cn70xx {
+		u64 reserved_22_63 : 42;
+		u64 intd : 1;
+		u64 intc : 1;
+		u64 intb : 1;
+		u64 inta : 1;
+		u64 dis_port : 1;
+		u64 waitl_com : 1;
+		u64 intd_map : 2;
+		u64 intc_map : 2;
+		u64 intb_map : 2;
+		u64 inta_map : 2;
+		u64 ctlp_ro : 1;
+		u64 reserved_6_6 : 1;
+		u64 ptlp_ro : 1;
+		u64 reserved_4_1 : 4;
+		u64 wait_com : 1;
+	} cn70xx;
+	struct cvmx_sli_ctl_portx_cn70xx cn70xxp1;
+	struct cvmx_sli_ctl_portx_cn73xx {
+		u64 reserved_18_63 : 46;
+		u64 dis_port : 1;
+		u64 waitl_com : 1;
+		u64 reserved_8_15 : 8;
+		u64 ctlp_ro : 1;
+		u64 reserved_6_6 : 1;
+		u64 ptlp_ro : 1;
+		u64 reserved_1_4 : 4;
+		u64 wait_com : 1;
+	} cn73xx;
+	struct cvmx_sli_ctl_portx_cn73xx cn78xx;
+	struct cvmx_sli_ctl_portx_cn73xx cn78xxp1;
+	struct cvmx_sli_ctl_portx_s cnf71xx;
+	struct cvmx_sli_ctl_portx_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_ctl_portx cvmx_sli_ctl_portx_t;
+
+/**
+ * cvmx_sli_ctl_status
+ *
+ * This register contains control and status for SLI. Write operations to this register are not
+ * ordered with write/read operations to the MAC memory space. To ensure that a write has
+ * completed, software must read the register before making an access (i.e. to MAC memory space)
+ * that requires the value of this register to be updated.
+ */
+union cvmx_sli_ctl_status {
+	u64 u64;
+	struct cvmx_sli_ctl_status_s {
+		u64 reserved_32_63 : 32;
+		u64 m2s1_ncbi : 4;
+		u64 m2s0_ncbi : 4;
+		u64 oci_id : 4;
+		u64 p1_ntags : 6;
+		u64 p0_ntags : 6;
+		u64 chip_rev : 8;
+	} s;
+	struct cvmx_sli_ctl_status_cn61xx {
+		u64 reserved_14_63 : 50;
+		u64 p0_ntags : 6;
+		u64 chip_rev : 8;
+	} cn61xx;
+	struct cvmx_sli_ctl_status_cn63xx {
+		u64 reserved_20_63 : 44;
+		u64 p1_ntags : 6;
+		u64 p0_ntags : 6;
+		u64 chip_rev : 8;
+	} cn63xx;
+	struct cvmx_sli_ctl_status_cn63xx cn63xxp1;
+	struct cvmx_sli_ctl_status_cn61xx cn66xx;
+	struct cvmx_sli_ctl_status_cn63xx cn68xx;
+	struct cvmx_sli_ctl_status_cn63xx cn68xxp1;
+	struct cvmx_sli_ctl_status_cn63xx cn70xx;
+	struct cvmx_sli_ctl_status_cn63xx cn70xxp1;
+	struct cvmx_sli_ctl_status_cn73xx {
+		u64 reserved_32_63 : 32;
+		u64 m2s1_ncbi : 4;
+		u64 m2s0_ncbi : 4;
+		u64 reserved_20_23 : 4;
+		u64 p1_ntags : 6;
+		u64 p0_ntags : 6;
+		u64 chip_rev : 8;
+	} cn73xx;
+	struct cvmx_sli_ctl_status_s cn78xx;
+	struct cvmx_sli_ctl_status_s cn78xxp1;
+	struct cvmx_sli_ctl_status_cn61xx cnf71xx;
+	struct cvmx_sli_ctl_status_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_ctl_status cvmx_sli_ctl_status_t;
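+
+/*
+ * Illustrative usage sketch of the ordering rule above (not part of the
+ * original register definitions): after writing this register, read it
+ * back once before touching MAC memory space so the write is known to have
+ * taken effect. The CVMX_SLI_CTL_STATUS address macro and the p0_ntags
+ * value are assumptions for the sketch:
+ *
+ *	cvmx_sli_ctl_status_t ctl;
+ *
+ *	ctl.u64 = cvmx_read_csr(CVMX_SLI_CTL_STATUS);
+ *	ctl.s.p0_ntags = 32;
+ *	cvmx_write_csr(CVMX_SLI_CTL_STATUS, ctl.u64);
+ *	ctl.u64 = cvmx_read_csr(CVMX_SLI_CTL_STATUS);	(flushes the write)
+ */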
+
+/**
+ * cvmx_sli_data_out_cnt
+ *
+ * This register contains the EXEC data out FIFO count and the data unload counter.
+ *
+ */
+union cvmx_sli_data_out_cnt {
+	u64 u64;
+	struct cvmx_sli_data_out_cnt_s {
+		u64 reserved_44_63 : 20;
+		u64 p1_ucnt : 16;
+		u64 p1_fcnt : 6;
+		u64 p0_ucnt : 16;
+		u64 p0_fcnt : 6;
+	} s;
+	struct cvmx_sli_data_out_cnt_s cn61xx;
+	struct cvmx_sli_data_out_cnt_s cn63xx;
+	struct cvmx_sli_data_out_cnt_s cn63xxp1;
+	struct cvmx_sli_data_out_cnt_s cn66xx;
+	struct cvmx_sli_data_out_cnt_s cn68xx;
+	struct cvmx_sli_data_out_cnt_s cn68xxp1;
+	struct cvmx_sli_data_out_cnt_s cn70xx;
+	struct cvmx_sli_data_out_cnt_s cn70xxp1;
+	struct cvmx_sli_data_out_cnt_s cn73xx;
+	struct cvmx_sli_data_out_cnt_s cn78xx;
+	struct cvmx_sli_data_out_cnt_s cn78xxp1;
+	struct cvmx_sli_data_out_cnt_s cnf71xx;
+	struct cvmx_sli_data_out_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sli_data_out_cnt cvmx_sli_data_out_cnt_t;
+
+/**
+ * cvmx_sli_dbg_data
+ *
+ * SLI_DBG_DATA = SLI Debug Data Register
+ *
+ * Value returned on the debug-data lines from the RSLs
+ */
+union cvmx_sli_dbg_data {
+	u64 u64;
+	struct cvmx_sli_dbg_data_s {
+		u64 reserved_18_63 : 46;
+		u64 dsel_ext : 1;
+		u64 data : 17;
+	} s;
+	struct cvmx_sli_dbg_data_s cn61xx;
+	struct cvmx_sli_dbg_data_s cn63xx;
+	struct cvmx_sli_dbg_data_s cn63xxp1;
+	struct cvmx_sli_dbg_data_s cn66xx;
+	struct cvmx_sli_dbg_data_s cn68xx;
+	struct cvmx_sli_dbg_data_s cn68xxp1;
+	struct cvmx_sli_dbg_data_s cnf71xx;
+};
+
+typedef union cvmx_sli_dbg_data cvmx_sli_dbg_data_t;
+
+/**
+ * cvmx_sli_dbg_select
+ *
+ * SLI_DBG_SELECT = Debug Select Register
+ *
+ * Contains the debug select value last written to the RSLs.
+ */
+union cvmx_sli_dbg_select {
+	u64 u64;
+	struct cvmx_sli_dbg_select_s {
+		u64 reserved_33_63 : 31;
+		u64 adbg_sel : 1;
+		u64 dbg_sel : 32;
+	} s;
+	struct cvmx_sli_dbg_select_s cn61xx;
+	struct cvmx_sli_dbg_select_s cn63xx;
+	struct cvmx_sli_dbg_select_s cn63xxp1;
+	struct cvmx_sli_dbg_select_s cn66xx;
+	struct cvmx_sli_dbg_select_s cn68xx;
+	struct cvmx_sli_dbg_select_s cn68xxp1;
+	struct cvmx_sli_dbg_select_s cnf71xx;
+};
+
+typedef union cvmx_sli_dbg_select cvmx_sli_dbg_select_t;
+
+/**
+ * cvmx_sli_dma#_cnt
+ *
+ * These registers contain the DMA count values.
+ *
+ */
+union cvmx_sli_dmax_cnt {
+	u64 u64;
+	struct cvmx_sli_dmax_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_dmax_cnt_s cn61xx;
+	struct cvmx_sli_dmax_cnt_s cn63xx;
+	struct cvmx_sli_dmax_cnt_s cn63xxp1;
+	struct cvmx_sli_dmax_cnt_s cn66xx;
+	struct cvmx_sli_dmax_cnt_s cn68xx;
+	struct cvmx_sli_dmax_cnt_s cn68xxp1;
+	struct cvmx_sli_dmax_cnt_s cn70xx;
+	struct cvmx_sli_dmax_cnt_s cn70xxp1;
+	struct cvmx_sli_dmax_cnt_s cn73xx;
+	struct cvmx_sli_dmax_cnt_s cn78xx;
+	struct cvmx_sli_dmax_cnt_s cn78xxp1;
+	struct cvmx_sli_dmax_cnt_s cnf71xx;
+	struct cvmx_sli_dmax_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sli_dmax_cnt cvmx_sli_dmax_cnt_t;
+
+/**
+ * cvmx_sli_dma#_int_level
+ *
+ * These registers contain the thresholds for DMA count and timer interrupts.
+ *
+ */
+union cvmx_sli_dmax_int_level {
+	u64 u64;
+	struct cvmx_sli_dmax_int_level_s {
+		u64 time : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_dmax_int_level_s cn61xx;
+	struct cvmx_sli_dmax_int_level_s cn63xx;
+	struct cvmx_sli_dmax_int_level_s cn63xxp1;
+	struct cvmx_sli_dmax_int_level_s cn66xx;
+	struct cvmx_sli_dmax_int_level_s cn68xx;
+	struct cvmx_sli_dmax_int_level_s cn68xxp1;
+	struct cvmx_sli_dmax_int_level_s cn70xx;
+	struct cvmx_sli_dmax_int_level_s cn70xxp1;
+	struct cvmx_sli_dmax_int_level_s cn73xx;
+	struct cvmx_sli_dmax_int_level_s cn78xx;
+	struct cvmx_sli_dmax_int_level_s cn78xxp1;
+	struct cvmx_sli_dmax_int_level_s cnf71xx;
+	struct cvmx_sli_dmax_int_level_s cnf75xx;
+};
+
+typedef union cvmx_sli_dmax_int_level cvmx_sli_dmax_int_level_t;
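+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): both thresholds live in one 64-bit register, so they can
+ * be programmed with a single write. The CVMX_SLI_DMAX_INT_LEVEL(x)
+ * addressing macro and the threshold values are assumptions:
+ *
+ *	cvmx_sli_dmax_int_level_t level;
+ *
+ *	level.u64 = 0;
+ *	level.s.cnt = 32;	(interrupt after 32 DMA completions)
+ *	level.s.time = 10000;	(or after 10000 timer ticks)
+ *	cvmx_write_csr(CVMX_SLI_DMAX_INT_LEVEL(0), level.u64);
+ */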
+
+/**
+ * cvmx_sli_dma#_tim
+ *
+ * These registers contain the DMA timer values.
+ *
+ */
+union cvmx_sli_dmax_tim {
+	u64 u64;
+	struct cvmx_sli_dmax_tim_s {
+		u64 reserved_32_63 : 32;
+		u64 tim : 32;
+	} s;
+	struct cvmx_sli_dmax_tim_s cn61xx;
+	struct cvmx_sli_dmax_tim_s cn63xx;
+	struct cvmx_sli_dmax_tim_s cn63xxp1;
+	struct cvmx_sli_dmax_tim_s cn66xx;
+	struct cvmx_sli_dmax_tim_s cn68xx;
+	struct cvmx_sli_dmax_tim_s cn68xxp1;
+	struct cvmx_sli_dmax_tim_s cn70xx;
+	struct cvmx_sli_dmax_tim_s cn70xxp1;
+	struct cvmx_sli_dmax_tim_s cn73xx;
+	struct cvmx_sli_dmax_tim_s cn78xx;
+	struct cvmx_sli_dmax_tim_s cn78xxp1;
+	struct cvmx_sli_dmax_tim_s cnf71xx;
+	struct cvmx_sli_dmax_tim_s cnf75xx;
+};
+
+typedef union cvmx_sli_dmax_tim cvmx_sli_dmax_tim_t;
+
+/**
+ * cvmx_sli_int_enb_ciu
+ *
+ * Used to enable the various interrupting conditions of the SLI.
+ *
+ */
+union cvmx_sli_int_enb_ciu {
+	u64 u64;
+	struct cvmx_sli_int_enb_ciu_s {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_29_31 : 3;
+		u64 mio_int2 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 reserved_18_19 : 2;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_int_enb_ciu_cn61xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_28_31 : 4;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 reserved_18_19 : 2;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn61xx;
+	struct cvmx_sli_int_enb_ciu_cn63xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_18_31 : 14;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn63xx;
+	struct cvmx_sli_int_enb_ciu_cn63xx cn63xxp1;
+	struct cvmx_sli_int_enb_ciu_cn61xx cn66xx;
+	struct cvmx_sli_int_enb_ciu_cn68xx {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 reserved_51_51 : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_18_31 : 14;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn68xx;
+	struct cvmx_sli_int_enb_ciu_cn68xx cn68xxp1;
+	struct cvmx_sli_int_enb_ciu_cn70xx {
+		u64 reserved_63_61 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_47_38 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_31_29 : 3;
+		u64 mio_int2 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 reserved_19_18 : 2;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_7_6 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn70xx;
+	struct cvmx_sli_int_enb_ciu_cn70xx cn70xxp1;
+	struct cvmx_sli_int_enb_ciu_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_int_enb_ciu cvmx_sli_int_enb_ciu_t;
+
+/**
+ * cvmx_sli_int_enb_port#
+ *
+ * When a field in this register is set and a corresponding interrupt condition asserts in
+ * SLI_INT_SUM, an interrupt is generated. Interrupts can be sent to PCIe0 or PCIe1.
+ */
+union cvmx_sli_int_enb_portx {
+	u64 u64;
+	struct cvmx_sli_int_enb_portx_s {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_30_31 : 2;
+		u64 mac2_int : 1;
+		u64 reserved_28_28 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 mio_int3 : 1;
+		u64 reserved_6_6 : 1;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_int_enb_portx_cn61xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_28_31 : 4;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn61xx;
+	struct cvmx_sli_int_enb_portx_cn63xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_20_31 : 12;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn63xx;
+	struct cvmx_sli_int_enb_portx_cn63xx cn63xxp1;
+	struct cvmx_sli_int_enb_portx_cn61xx cn66xx;
+	struct cvmx_sli_int_enb_portx_cn68xx {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 reserved_51_51 : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_20_31 : 12;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn68xx;
+	struct cvmx_sli_int_enb_portx_cn68xx cn68xxp1;
+	struct cvmx_sli_int_enb_portx_cn70xx {
+		u64 reserved_63_61 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_47_38 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_31_30 : 2;
+		u64 mac2_int : 1;
+		u64 mio_int2 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_7_6 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn70xx;
+	struct cvmx_sli_int_enb_portx_cn70xx cn70xxp1;
+	struct cvmx_sli_int_enb_portx_cn78xxp1 {
+		u64 reserved_60_63 : 4;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 reserved_50_51 : 2;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_29_31 : 3;
+		u64 vf_err : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 reserved_18_19 : 2;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 mio_int3 : 1;
+		u64 mio_int2 : 1;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 reserved_1_3 : 3;
+		u64 rml_to : 1;
+	} cn78xxp1;
+	struct cvmx_sli_int_enb_portx_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_int_enb_portx cvmx_sli_int_enb_portx_t;
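+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): route the packet-count and packet-timer interrupts to
+ * PCIe port 0. The CVMX_SLI_INT_ENB_PORTX(x) addressing macro is an
+ * assumption:
+ *
+ *	cvmx_sli_int_enb_portx_t enb;
+ *
+ *	enb.u64 = cvmx_read_csr(CVMX_SLI_INT_ENB_PORTX(0));
+ *	enb.s.pcnt = 1;
+ *	enb.s.ptime = 1;
+ *	cvmx_write_csr(CVMX_SLI_INT_ENB_PORTX(0), enb.u64);
+ */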
+
+/**
+ * cvmx_sli_int_sum
+ *
+ * The fields in this register are set when an interrupt condition occurs; write 1 to clear.
+ * All fields of the register are valid when a PF reads the register. The register is not
+ * available to VFs, and writes by a VF do not modify it.
+ */
+union cvmx_sli_int_sum {
+	u64 u64;
+	struct cvmx_sli_int_sum_s {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_30_31 : 2;
+		u64 mac2_int : 1;
+		u64 reserved_28_28 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 mio_int3 : 1;
+		u64 reserved_6_6 : 1;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_int_sum_cn61xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_28_31 : 4;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn61xx;
+	struct cvmx_sli_int_sum_cn63xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_20_31 : 12;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn63xx;
+	struct cvmx_sli_int_sum_cn63xx cn63xxp1;
+	struct cvmx_sli_int_sum_cn61xx cn66xx;
+	struct cvmx_sli_int_sum_cn68xx {
+		u64 reserved_62_63 : 2;
+		u64 pipe_err : 1;
+		u64 ill_pad : 1;
+		u64 reserved_58_59 : 2;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 reserved_51_51 : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_20_31 : 12;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn68xx;
+	struct cvmx_sli_int_sum_cn68xx cn68xxp1;
+	struct cvmx_sli_int_sum_cn70xx {
+		u64 reserved_61_63 : 3;
+		u64 ill_pad : 1;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 pin_bp : 1;
+		u64 pout_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_30_31 : 2;
+		u64 mac2_int : 1;
+		u64 mio_int2 : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 mac1_int : 1;
+		u64 mac0_int : 1;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 iob2big : 1;
+		u64 bar0_to : 1;
+		u64 reserved_1_1 : 1;
+		u64 rml_to : 1;
+	} cn70xx;
+	struct cvmx_sli_int_sum_cn70xx cn70xxp1;
+	struct cvmx_sli_int_sum_cn78xxp1 {
+		u64 reserved_60_63 : 4;
+		u64 sprt3_err : 1;
+		u64 sprt2_err : 1;
+		u64 sprt1_err : 1;
+		u64 sprt0_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 reserved_50_51 : 2;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+		u64 reserved_38_47 : 10;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_29_31 : 3;
+		u64 vf_err : 1;
+		u64 m3_un_wi : 1;
+		u64 m3_un_b0 : 1;
+		u64 m3_up_wi : 1;
+		u64 m3_up_b0 : 1;
+		u64 m2_un_wi : 1;
+		u64 m2_un_b0 : 1;
+		u64 m2_up_wi : 1;
+		u64 m2_up_b0 : 1;
+		u64 reserved_18_19 : 2;
+		u64 mio_int1 : 1;
+		u64 mio_int0 : 1;
+		u64 m1_un_wi : 1;
+		u64 m1_un_b0 : 1;
+		u64 m1_up_wi : 1;
+		u64 m1_up_b0 : 1;
+		u64 m0_un_wi : 1;
+		u64 m0_un_b0 : 1;
+		u64 m0_up_wi : 1;
+		u64 m0_up_b0 : 1;
+		u64 mio_int3 : 1;
+		u64 mio_int2 : 1;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 reserved_1_3 : 3;
+		u64 rml_to : 1;
+	} cn78xxp1;
+	struct cvmx_sli_int_sum_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_int_sum cvmx_sli_int_sum_t;
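+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): individual conditions can be tested through the bitfield
+ * view and acknowledged selectively, since the register is
+ * write-1-to-clear. The CVMX_SLI_INT_SUM address macro is assumed:
+ *
+ *	cvmx_sli_int_sum_t sum;
+ *
+ *	sum.u64 = cvmx_read_csr(CVMX_SLI_INT_SUM);
+ *	if (sum.s.sprt0_err) {
+ *		sum.u64 = 0;
+ *		sum.s.sprt0_err = 1;
+ *		cvmx_write_csr(CVMX_SLI_INT_SUM, sum.u64);
+ *	}
+ */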
+
+/**
+ * cvmx_sli_last_win_rdata0
+ *
+ * The data from the last initiated window read by MAC 0.
+ *
+ */
+union cvmx_sli_last_win_rdata0 {
+	u64 u64;
+	struct cvmx_sli_last_win_rdata0_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_last_win_rdata0_s cn61xx;
+	struct cvmx_sli_last_win_rdata0_s cn63xx;
+	struct cvmx_sli_last_win_rdata0_s cn63xxp1;
+	struct cvmx_sli_last_win_rdata0_s cn66xx;
+	struct cvmx_sli_last_win_rdata0_s cn68xx;
+	struct cvmx_sli_last_win_rdata0_s cn68xxp1;
+	struct cvmx_sli_last_win_rdata0_s cn70xx;
+	struct cvmx_sli_last_win_rdata0_s cn70xxp1;
+	struct cvmx_sli_last_win_rdata0_s cnf71xx;
+};
+
+typedef union cvmx_sli_last_win_rdata0 cvmx_sli_last_win_rdata0_t;
+
+/**
+ * cvmx_sli_last_win_rdata1
+ *
+ * The data from the last initiated window read by MAC 1.
+ *
+ */
+union cvmx_sli_last_win_rdata1 {
+	u64 u64;
+	struct cvmx_sli_last_win_rdata1_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_last_win_rdata1_s cn61xx;
+	struct cvmx_sli_last_win_rdata1_s cn63xx;
+	struct cvmx_sli_last_win_rdata1_s cn63xxp1;
+	struct cvmx_sli_last_win_rdata1_s cn66xx;
+	struct cvmx_sli_last_win_rdata1_s cn68xx;
+	struct cvmx_sli_last_win_rdata1_s cn68xxp1;
+	struct cvmx_sli_last_win_rdata1_s cn70xx;
+	struct cvmx_sli_last_win_rdata1_s cn70xxp1;
+	struct cvmx_sli_last_win_rdata1_s cnf71xx;
+};
+
+typedef union cvmx_sli_last_win_rdata1 cvmx_sli_last_win_rdata1_t;
+
+/**
+ * cvmx_sli_last_win_rdata2
+ *
+ * The data from the last initiated window read by MAC 2.
+ *
+ */
+union cvmx_sli_last_win_rdata2 {
+	u64 u64;
+	struct cvmx_sli_last_win_rdata2_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_last_win_rdata2_s cn61xx;
+	struct cvmx_sli_last_win_rdata2_s cn66xx;
+	struct cvmx_sli_last_win_rdata2_s cn70xx;
+	struct cvmx_sli_last_win_rdata2_s cn70xxp1;
+	struct cvmx_sli_last_win_rdata2_s cnf71xx;
+};
+
+typedef union cvmx_sli_last_win_rdata2 cvmx_sli_last_win_rdata2_t;
+
+/**
+ * cvmx_sli_last_win_rdata3
+ *
+ * The data from the last initiated window read by MAC 3.
+ *
+ */
+union cvmx_sli_last_win_rdata3 {
+	u64 u64;
+	struct cvmx_sli_last_win_rdata3_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_last_win_rdata3_s cn61xx;
+	struct cvmx_sli_last_win_rdata3_s cn66xx;
+	struct cvmx_sli_last_win_rdata3_s cn70xx;
+	struct cvmx_sli_last_win_rdata3_s cn70xxp1;
+	struct cvmx_sli_last_win_rdata3_s cnf71xx;
+};
+
+typedef union cvmx_sli_last_win_rdata3 cvmx_sli_last_win_rdata3_t;
+
+/**
+ * cvmx_sli_mac#_pf#_dma_vf_int
+ *
+ * When an error response is received for a VF DMA transaction read, the corresponding
+ * VF-indexed bit is set. The owning PF should then read this register.
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_dma_vf_int {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_dma_vf_int_s {
+		u64 vf_int : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_dma_vf_int_s cn73xx;
+	struct cvmx_sli_macx_pfx_dma_vf_int_s cn78xx;
+	struct cvmx_sli_macx_pfx_dma_vf_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_dma_vf_int cvmx_sli_macx_pfx_dma_vf_int_t;
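+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): each bit indexes one VF, so the owning PF can walk the
+ * mask to find the faulting VFs. The CVMX_SLI_MACX_PFX_DMA_VF_INT(mac, pf)
+ * addressing macro is an assumption:
+ *
+ *	u64 pending = cvmx_read_csr(CVMX_SLI_MACX_PFX_DMA_VF_INT(0, 0));
+ *	int vf;
+ *
+ *	for (vf = 0; vf < 64; vf++)
+ *		if (pending & (1ull << vf))
+ *			handle_vf_dma_error(vf);	(hypothetical handler)
+ */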
+
+/**
+ * cvmx_sli_mac#_pf#_dma_vf_int_enb
+ *
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_dma_vf_int_enb {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_dma_vf_int_enb_s {
+		u64 vf_int_enb : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_dma_vf_int_enb_s cn73xx;
+	struct cvmx_sli_macx_pfx_dma_vf_int_enb_s cn78xx;
+	struct cvmx_sli_macx_pfx_dma_vf_int_enb_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_dma_vf_int_enb cvmx_sli_macx_pfx_dma_vf_int_enb_t;
+
+/**
+ * cvmx_sli_mac#_pf#_flr_vf_int
+ *
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_flr_vf_int {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_flr_vf_int_s {
+		u64 vf_int : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_flr_vf_int_s cn73xx;
+	struct cvmx_sli_macx_pfx_flr_vf_int_s cn78xx;
+	struct cvmx_sli_macx_pfx_flr_vf_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_flr_vf_int cvmx_sli_macx_pfx_flr_vf_int_t;
+
+/**
+ * cvmx_sli_mac#_pf#_int_enb
+ *
+ * Interrupt enable register for a given PF SLI_MAC()_PF()_INT_SUM register.
+ * Indexed by (MAC index) SLI_PORT_E.
+ */
+union cvmx_sli_macx_pfx_int_enb {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_int_enb_s {
+		u64 pppf_err : 1;
+		u64 ppvf_err : 1;
+		u64 pktpf_err : 1;
+		u64 pktvf_err : 1;
+		u64 dmapf_err : 1;
+		u64 dmavf_err : 1;
+		u64 vf_mbox : 1;
+		u64 reserved_38_56 : 19;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_12_31 : 20;
+		u64 un_wi : 1;
+		u64 un_b0 : 1;
+		u64 up_wi : 1;
+		u64 up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 reserved_2_3 : 2;
+		u64 mio_int : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_macx_pfx_int_enb_s cn73xx;
+	struct cvmx_sli_macx_pfx_int_enb_s cn78xx;
+	struct cvmx_sli_macx_pfx_int_enb_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_int_enb cvmx_sli_macx_pfx_int_enb_t;
+
+/**
+ * cvmx_sli_mac#_pf#_int_sum
+ *
+ * Interrupt summary register for a given PF. Indexed by (MAC index) SLI_PORT_E.
+ * The fields in this register are set when an interrupt condition occurs; write 1 to clear.
+ */
+union cvmx_sli_macx_pfx_int_sum {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_int_sum_s {
+		u64 pppf_err : 1;
+		u64 ppvf_err : 1;
+		u64 pktpf_err : 1;
+		u64 pktvf_err : 1;
+		u64 dmapf_err : 1;
+		u64 dmavf_err : 1;
+		u64 vf_mbox : 1;
+		u64 reserved_38_56 : 19;
+		u64 dtime : 2;
+		u64 dcnt : 2;
+		u64 dmafi : 2;
+		u64 reserved_12_31 : 20;
+		u64 un_wi : 1;
+		u64 un_b0 : 1;
+		u64 up_wi : 1;
+		u64 up_b0 : 1;
+		u64 reserved_6_7 : 2;
+		u64 ptime : 1;
+		u64 pcnt : 1;
+		u64 reserved_2_3 : 2;
+		u64 mio_int : 1;
+		u64 rml_to : 1;
+	} s;
+	struct cvmx_sli_macx_pfx_int_sum_s cn73xx;
+	struct cvmx_sli_macx_pfx_int_sum_s cn78xx;
+	struct cvmx_sli_macx_pfx_int_sum_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_int_sum cvmx_sli_macx_pfx_int_sum_t;
+
+/**
+ * cvmx_sli_mac#_pf#_mbox_int
+ *
+ * When a VF-to-PF MBOX write occurs, the corresponding bit is set.
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_mbox_int {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_mbox_int_s {
+		u64 vf_int : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_mbox_int_s cn73xx;
+	struct cvmx_sli_macx_pfx_mbox_int_s cn78xx;
+	struct cvmx_sli_macx_pfx_mbox_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_mbox_int cvmx_sli_macx_pfx_mbox_int_t;
+
+/**
+ * cvmx_sli_mac#_pf#_pkt_vf_int
+ *
+ * When an error response is received for a VF PP transaction read, when a doorbell overflow
+ * occurs for a ring associated with a VF, or when a VF makes an illegal memory access, the
+ * corresponding VF-indexed bit is set. The owning PF should then read this register.
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_pkt_vf_int {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_s {
+		u64 vf_int : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_s cn73xx;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_s cn78xx;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_pkt_vf_int cvmx_sli_macx_pfx_pkt_vf_int_t;
+
+/**
+ * cvmx_sli_mac#_pf#_pkt_vf_int_enb
+ *
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_pkt_vf_int_enb {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_enb_s {
+		u64 vf_int_enb : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_enb_s cn73xx;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_enb_s cn78xx;
+	struct cvmx_sli_macx_pfx_pkt_vf_int_enb_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_pkt_vf_int_enb cvmx_sli_macx_pfx_pkt_vf_int_enb_t;
+
+/**
+ * cvmx_sli_mac#_pf#_pp_vf_int
+ *
+ * When an error response is received for a VF PP transaction read, the corresponding
+ * VF-indexed bit is set. The owning PF should then read this register.
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_pp_vf_int {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_pp_vf_int_s {
+		u64 vf_int : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_pp_vf_int_s cn73xx;
+	struct cvmx_sli_macx_pfx_pp_vf_int_s cn78xx;
+	struct cvmx_sli_macx_pfx_pp_vf_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_pp_vf_int cvmx_sli_macx_pfx_pp_vf_int_t;
+
+/**
+ * cvmx_sli_mac#_pf#_pp_vf_int_enb
+ *
+ * Indexed by (MAC index) SLI_PORT_E.
+ * This CSR array is valid only for SLI_PORT_E::PEM0 PF0.
+ */
+union cvmx_sli_macx_pfx_pp_vf_int_enb {
+	u64 u64;
+	struct cvmx_sli_macx_pfx_pp_vf_int_enb_s {
+		u64 vf_int_enb : 64;
+	} s;
+	struct cvmx_sli_macx_pfx_pp_vf_int_enb_s cn73xx;
+	struct cvmx_sli_macx_pfx_pp_vf_int_enb_s cn78xx;
+	struct cvmx_sli_macx_pfx_pp_vf_int_enb_s cnf75xx;
+};
+
+typedef union cvmx_sli_macx_pfx_pp_vf_int_enb cvmx_sli_macx_pfx_pp_vf_int_enb_t;
+
+/**
+ * cvmx_sli_mac_credit_cnt
+ *
+ * This register contains the number of credits for the MAC port FIFOs used by the SLI. This
+ * value needs to be set before S2M traffic flow starts. A write operation to this register
+ * causes the credit counts in the SLI for the MAC ports to be reset to the value in this
+ * register if the corresponding disable bit in this register is set to 0.
+ */
+union cvmx_sli_mac_credit_cnt {
+	u64 u64;
+	struct cvmx_sli_mac_credit_cnt_s {
+		u64 reserved_54_63 : 10;
+		u64 p1_c_d : 1;
+		u64 p1_n_d : 1;
+		u64 p1_p_d : 1;
+		u64 p0_c_d : 1;
+		u64 p0_n_d : 1;
+		u64 p0_p_d : 1;
+		u64 p1_ccnt : 8;
+		u64 p1_ncnt : 8;
+		u64 p1_pcnt : 8;
+		u64 p0_ccnt : 8;
+		u64 p0_ncnt : 8;
+		u64 p0_pcnt : 8;
+	} s;
+	struct cvmx_sli_mac_credit_cnt_s cn61xx;
+	struct cvmx_sli_mac_credit_cnt_s cn63xx;
+	struct cvmx_sli_mac_credit_cnt_cn63xxp1 {
+		u64 reserved_48_63 : 16;
+		u64 p1_ccnt : 8;
+		u64 p1_ncnt : 8;
+		u64 p1_pcnt : 8;
+		u64 p0_ccnt : 8;
+		u64 p0_ncnt : 8;
+		u64 p0_pcnt : 8;
+	} cn63xxp1;
+	struct cvmx_sli_mac_credit_cnt_s cn66xx;
+	struct cvmx_sli_mac_credit_cnt_s cn68xx;
+	struct cvmx_sli_mac_credit_cnt_s cn68xxp1;
+	struct cvmx_sli_mac_credit_cnt_s cn70xx;
+	struct cvmx_sli_mac_credit_cnt_s cn70xxp1;
+	struct cvmx_sli_mac_credit_cnt_s cn73xx;
+	struct cvmx_sli_mac_credit_cnt_s cn78xx;
+	struct cvmx_sli_mac_credit_cnt_s cn78xxp1;
+	struct cvmx_sli_mac_credit_cnt_s cnf71xx;
+	struct cvmx_sli_mac_credit_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sli_mac_credit_cnt cvmx_sli_mac_credit_cnt_t;
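+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): reload only the port-0 posted credits. Setting the other
+ * disable bits keeps their counts untouched by the write; the credit value
+ * of 64 and the CVMX_SLI_MAC_CREDIT_CNT address macro are assumptions:
+ *
+ *	cvmx_sli_mac_credit_cnt_t cred;
+ *
+ *	cred.u64 = cvmx_read_csr(CVMX_SLI_MAC_CREDIT_CNT);
+ *	cred.s.p0_p_d = 0;	(allow p0_pcnt to be reloaded)
+ *	cred.s.p0_n_d = 1;
+ *	cred.s.p0_c_d = 1;
+ *	cred.s.p1_p_d = 1;
+ *	cred.s.p1_n_d = 1;
+ *	cred.s.p1_c_d = 1;
+ *	cred.s.p0_pcnt = 64;
+ *	cvmx_write_csr(CVMX_SLI_MAC_CREDIT_CNT, cred.u64);
+ */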
+
+/**
+ * cvmx_sli_mac_credit_cnt2
+ *
+ * This register contains the number of credits for the MAC port FIFOs (for MACs 2 and 3) used by
+ * the SLI. This value must be set before S2M traffic flow starts. A write to this register
+ * causes the credit counts in the SLI for the MAC ports to be reset to the value in this
+ * register.
+ */
+union cvmx_sli_mac_credit_cnt2 {
+	u64 u64;
+	struct cvmx_sli_mac_credit_cnt2_s {
+		u64 reserved_54_63 : 10;
+		u64 p3_c_d : 1;
+		u64 p3_n_d : 1;
+		u64 p3_p_d : 1;
+		u64 p2_c_d : 1;
+		u64 p2_n_d : 1;
+		u64 p2_p_d : 1;
+		u64 p3_ccnt : 8;
+		u64 p3_ncnt : 8;
+		u64 p3_pcnt : 8;
+		u64 p2_ccnt : 8;
+		u64 p2_ncnt : 8;
+		u64 p2_pcnt : 8;
+	} s;
+	struct cvmx_sli_mac_credit_cnt2_s cn61xx;
+	struct cvmx_sli_mac_credit_cnt2_s cn66xx;
+	struct cvmx_sli_mac_credit_cnt2_s cn70xx;
+	struct cvmx_sli_mac_credit_cnt2_s cn70xxp1;
+	struct cvmx_sli_mac_credit_cnt2_s cn73xx;
+	struct cvmx_sli_mac_credit_cnt2_s cn78xx;
+	struct cvmx_sli_mac_credit_cnt2_s cn78xxp1;
+	struct cvmx_sli_mac_credit_cnt2_s cnf71xx;
+	struct cvmx_sli_mac_credit_cnt2_s cnf75xx;
+};
+
+typedef union cvmx_sli_mac_credit_cnt2 cvmx_sli_mac_credit_cnt2_t;
+
+/**
+ * cvmx_sli_mac_number
+ *
+ * When read from a MAC port, this register returns the MAC's port number; otherwise it returns zero.
+ *
+ */
+union cvmx_sli_mac_number {
+	u64 u64;
+	struct cvmx_sli_mac_number_s {
+		u64 reserved_9_63 : 55;
+		u64 a_mode : 1;
+		u64 num : 8;
+	} s;
+	struct cvmx_sli_mac_number_s cn61xx;
+	struct cvmx_sli_mac_number_cn63xx {
+		u64 reserved_8_63 : 56;
+		u64 num : 8;
+	} cn63xx;
+	struct cvmx_sli_mac_number_s cn66xx;
+	struct cvmx_sli_mac_number_cn63xx cn68xx;
+	struct cvmx_sli_mac_number_cn63xx cn68xxp1;
+	struct cvmx_sli_mac_number_s cn70xx;
+	struct cvmx_sli_mac_number_s cn70xxp1;
+	struct cvmx_sli_mac_number_s cn73xx;
+	struct cvmx_sli_mac_number_s cn78xx;
+	struct cvmx_sli_mac_number_s cn78xxp1;
+	struct cvmx_sli_mac_number_s cnf71xx;
+	struct cvmx_sli_mac_number_s cnf75xx;
+};
+
+typedef union cvmx_sli_mac_number cvmx_sli_mac_number_t;
+
+/**
+ * cvmx_sli_mem_access_ctl
+ *
+ * This register contains control signals for access to the MAC address space.
+ *
+ */
+union cvmx_sli_mem_access_ctl {
+	u64 u64;
+	struct cvmx_sli_mem_access_ctl_s {
+		u64 reserved_14_63 : 50;
+		u64 max_word : 4;
+		u64 timer : 10;
+	} s;
+	struct cvmx_sli_mem_access_ctl_s cn61xx;
+	struct cvmx_sli_mem_access_ctl_s cn63xx;
+	struct cvmx_sli_mem_access_ctl_s cn63xxp1;
+	struct cvmx_sli_mem_access_ctl_s cn66xx;
+	struct cvmx_sli_mem_access_ctl_s cn68xx;
+	struct cvmx_sli_mem_access_ctl_s cn68xxp1;
+	struct cvmx_sli_mem_access_ctl_s cn70xx;
+	struct cvmx_sli_mem_access_ctl_s cn70xxp1;
+	struct cvmx_sli_mem_access_ctl_s cn73xx;
+	struct cvmx_sli_mem_access_ctl_s cn78xx;
+	struct cvmx_sli_mem_access_ctl_s cn78xxp1;
+	struct cvmx_sli_mem_access_ctl_s cnf71xx;
+	struct cvmx_sli_mem_access_ctl_s cnf75xx;
+};
+
+typedef union cvmx_sli_mem_access_ctl cvmx_sli_mem_access_ctl_t;
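+
+/*
+ * Illustrative usage sketch (not part of the original register
+ * definitions): going by the field names, MAX_WORD bounds how many words
+ * are merged into one MAC write and TIMER sets how long a partial merge
+ * may wait. Both values below and the CVMX_SLI_MEM_ACCESS_CTL address
+ * macro are assumptions:
+ *
+ *	cvmx_sli_mem_access_ctl_t actl;
+ *
+ *	actl.u64 = cvmx_read_csr(CVMX_SLI_MEM_ACCESS_CTL);
+ *	actl.s.max_word = 0;	(encoded maximum merge size)
+ *	actl.s.timer = 127;	(merge wait timer)
+ *	cvmx_write_csr(CVMX_SLI_MEM_ACCESS_CTL, actl.u64);
+ */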
+
+/**
+ * cvmx_sli_mem_access_subid#
+ *
+ * These registers contain the address index and control bits for accesses to memory from cores.
+ *
+ */
+union cvmx_sli_mem_access_subidx {
+	u64 u64;
+	struct cvmx_sli_mem_access_subidx_s {
+		u64 reserved_60_63 : 4;
+		u64 pvf : 16;
+		u64 reserved_43_43 : 1;
+		u64 zero : 1;
+		u64 port : 3;
+		u64 nmerge : 1;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 wtype : 2;
+		u64 rtype : 2;
+		u64 reserved_0_29 : 30;
+	} s;
+	struct cvmx_sli_mem_access_subidx_cn61xx {
+		u64 reserved_43_63 : 21;
+		u64 zero : 1;
+		u64 port : 3;
+		u64 nmerge : 1;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 wtype : 2;
+		u64 rtype : 2;
+		u64 ba : 30;
+	} cn61xx;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn63xx;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn63xxp1;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn66xx;
+	struct cvmx_sli_mem_access_subidx_cn68xx {
+		u64 reserved_43_63 : 21;
+		u64 zero : 1;
+		u64 port : 3;
+		u64 nmerge : 1;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 wtype : 2;
+		u64 rtype : 2;
+		u64 ba : 28;
+		u64 reserved_0_1 : 2;
+	} cn68xx;
+	struct cvmx_sli_mem_access_subidx_cn68xx cn68xxp1;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn70xx;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn70xxp1;
+	struct cvmx_sli_mem_access_subidx_cn73xx {
+		u64 reserved_60_63 : 4;
+		u64 pvf : 16;
+		u64 reserved_43_43 : 1;
+		u64 zero : 1;
+		u64 port : 3;
+		u64 nmerge : 1;
+		u64 esr : 2;
+		u64 esw : 2;
+		u64 wtype : 2;
+		u64 rtype : 2;
+		u64 ba : 30;
+	} cn73xx;
+	struct cvmx_sli_mem_access_subidx_cn73xx cn78xx;
+	struct cvmx_sli_mem_access_subidx_cn61xx cn78xxp1;
+	struct cvmx_sli_mem_access_subidx_cn61xx cnf71xx;
+	struct cvmx_sli_mem_access_subidx_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_mem_access_subidx cvmx_sli_mem_access_subidx_t;
+
+/**
+ * cvmx_sli_mem_ctl
+ *
+ * This register controls the ECC of the SLI memories.
+ *
+ */
+union cvmx_sli_mem_ctl {
+	u64 u64;
+	struct cvmx_sli_mem_ctl_s {
+		u64 reserved_27_63 : 37;
+		u64 tlpn1_fs : 2;
+		u64 tlpn1_ecc : 1;
+		u64 tlpp1_fs : 2;
+		u64 tlpp1_ecc : 1;
+		u64 tlpc1_fs : 2;
+		u64 tlpc1_ecc : 1;
+		u64 tlpn0_fs : 2;
+		u64 tlpn0_ecc : 1;
+		u64 tlpp0_fs : 2;
+		u64 tlpp0_ecc : 1;
+		u64 tlpc0_fs : 2;
+		u64 tlpc0_ecc : 1;
+		u64 nppr_fs : 2;
+		u64 nppr_ecc : 1;
+		u64 cpl1_fs : 2;
+		u64 cpl1_ecc : 1;
+		u64 cpl0_fs : 2;
+		u64 cpl0_ecc : 1;
+	} s;
+	struct cvmx_sli_mem_ctl_s cn73xx;
+	struct cvmx_sli_mem_ctl_s cn78xx;
+	struct cvmx_sli_mem_ctl_s cn78xxp1;
+	struct cvmx_sli_mem_ctl_s cnf75xx;
+};
+
+typedef union cvmx_sli_mem_ctl cvmx_sli_mem_ctl_t;
+
+/**
+ * cvmx_sli_mem_int_sum
+ *
+ * Set when an interrupt condition occurs; write one to clear.
+ *
+ */
+union cvmx_sli_mem_int_sum {
+	u64 u64;
+	struct cvmx_sli_mem_int_sum_s {
+		u64 reserved_18_63 : 46;
+		u64 tlpn1_dbe : 1;
+		u64 tlpn1_sbe : 1;
+		u64 tlpp1_dbe : 1;
+		u64 tlpp1_sbe : 1;
+		u64 tlpc1_dbe : 1;
+		u64 tlpc1_sbe : 1;
+		u64 tlpn0_dbe : 1;
+		u64 tlpn0_sbe : 1;
+		u64 tlpp0_dbe : 1;
+		u64 tlpp0_sbe : 1;
+		u64 tlpc0_dbe : 1;
+		u64 tlpc0_sbe : 1;
+		u64 nppr_dbe : 1;
+		u64 nppr_sbe : 1;
+		u64 cpl1_dbe : 1;
+		u64 cpl1_sbe : 1;
+		u64 cpl0_dbe : 1;
+		u64 cpl0_sbe : 1;
+	} s;
+	struct cvmx_sli_mem_int_sum_s cn73xx;
+	struct cvmx_sli_mem_int_sum_s cn78xx;
+	struct cvmx_sli_mem_int_sum_s cn78xxp1;
+	struct cvmx_sli_mem_int_sum_s cnf75xx;
+};
+
+typedef union cvmx_sli_mem_int_sum cvmx_sli_mem_int_sum_t;
+
+/**
+ * cvmx_sli_msi_enb0
+ *
+ * Used to enable the interrupt generation for the bits in the SLI_MSI_RCV0.
+ *
+ */
+union cvmx_sli_msi_enb0 {
+	u64 u64;
+	struct cvmx_sli_msi_enb0_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_msi_enb0_s cn61xx;
+	struct cvmx_sli_msi_enb0_s cn63xx;
+	struct cvmx_sli_msi_enb0_s cn63xxp1;
+	struct cvmx_sli_msi_enb0_s cn66xx;
+	struct cvmx_sli_msi_enb0_s cn68xx;
+	struct cvmx_sli_msi_enb0_s cn68xxp1;
+	struct cvmx_sli_msi_enb0_s cn70xx;
+	struct cvmx_sli_msi_enb0_s cn70xxp1;
+	struct cvmx_sli_msi_enb0_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_enb0 cvmx_sli_msi_enb0_t;
+
+/**
+ * cvmx_sli_msi_enb1
+ *
+ * Used to enable the interrupt generation for the bits in the SLI_MSI_RCV1.
+ *
+ */
+union cvmx_sli_msi_enb1 {
+	u64 u64;
+	struct cvmx_sli_msi_enb1_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_msi_enb1_s cn61xx;
+	struct cvmx_sli_msi_enb1_s cn63xx;
+	struct cvmx_sli_msi_enb1_s cn63xxp1;
+	struct cvmx_sli_msi_enb1_s cn66xx;
+	struct cvmx_sli_msi_enb1_s cn68xx;
+	struct cvmx_sli_msi_enb1_s cn68xxp1;
+	struct cvmx_sli_msi_enb1_s cn70xx;
+	struct cvmx_sli_msi_enb1_s cn70xxp1;
+	struct cvmx_sli_msi_enb1_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_enb1 cvmx_sli_msi_enb1_t;
+
+/**
+ * cvmx_sli_msi_enb2
+ *
+ * Used to enable the interrupt generation for the bits in the SLI_MSI_RCV2.
+ *
+ */
+union cvmx_sli_msi_enb2 {
+	u64 u64;
+	struct cvmx_sli_msi_enb2_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_msi_enb2_s cn61xx;
+	struct cvmx_sli_msi_enb2_s cn63xx;
+	struct cvmx_sli_msi_enb2_s cn63xxp1;
+	struct cvmx_sli_msi_enb2_s cn66xx;
+	struct cvmx_sli_msi_enb2_s cn68xx;
+	struct cvmx_sli_msi_enb2_s cn68xxp1;
+	struct cvmx_sli_msi_enb2_s cn70xx;
+	struct cvmx_sli_msi_enb2_s cn70xxp1;
+	struct cvmx_sli_msi_enb2_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_enb2 cvmx_sli_msi_enb2_t;
+
+/**
+ * cvmx_sli_msi_enb3
+ *
+ * Used to enable the interrupt generation for the bits in the SLI_MSI_RCV3.
+ *
+ */
+union cvmx_sli_msi_enb3 {
+	u64 u64;
+	struct cvmx_sli_msi_enb3_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_msi_enb3_s cn61xx;
+	struct cvmx_sli_msi_enb3_s cn63xx;
+	struct cvmx_sli_msi_enb3_s cn63xxp1;
+	struct cvmx_sli_msi_enb3_s cn66xx;
+	struct cvmx_sli_msi_enb3_s cn68xx;
+	struct cvmx_sli_msi_enb3_s cn68xxp1;
+	struct cvmx_sli_msi_enb3_s cn70xx;
+	struct cvmx_sli_msi_enb3_s cn70xxp1;
+	struct cvmx_sli_msi_enb3_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_enb3 cvmx_sli_msi_enb3_t;
+
+/**
+ * cvmx_sli_msi_rcv0
+ *
+ * This register contains bits <63:0> of the 256 bits of MSI interrupts.
+ *
+ */
+union cvmx_sli_msi_rcv0 {
+	u64 u64;
+	struct cvmx_sli_msi_rcv0_s {
+		u64 intr : 64;
+	} s;
+	struct cvmx_sli_msi_rcv0_s cn61xx;
+	struct cvmx_sli_msi_rcv0_s cn63xx;
+	struct cvmx_sli_msi_rcv0_s cn63xxp1;
+	struct cvmx_sli_msi_rcv0_s cn66xx;
+	struct cvmx_sli_msi_rcv0_s cn68xx;
+	struct cvmx_sli_msi_rcv0_s cn68xxp1;
+	struct cvmx_sli_msi_rcv0_s cn70xx;
+	struct cvmx_sli_msi_rcv0_s cn70xxp1;
+	struct cvmx_sli_msi_rcv0_s cn73xx;
+	struct cvmx_sli_msi_rcv0_s cn78xx;
+	struct cvmx_sli_msi_rcv0_s cn78xxp1;
+	struct cvmx_sli_msi_rcv0_s cnf71xx;
+	struct cvmx_sli_msi_rcv0_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_rcv0 cvmx_sli_msi_rcv0_t;
+
+/**
+ * cvmx_sli_msi_rcv1
+ *
+ * This register contains bits <127:64> of the 256 bits of MSI interrupts.
+ *
+ */
+union cvmx_sli_msi_rcv1 {
+	u64 u64;
+	struct cvmx_sli_msi_rcv1_s {
+		u64 intr : 64;
+	} s;
+	struct cvmx_sli_msi_rcv1_s cn61xx;
+	struct cvmx_sli_msi_rcv1_s cn63xx;
+	struct cvmx_sli_msi_rcv1_s cn63xxp1;
+	struct cvmx_sli_msi_rcv1_s cn66xx;
+	struct cvmx_sli_msi_rcv1_s cn68xx;
+	struct cvmx_sli_msi_rcv1_s cn68xxp1;
+	struct cvmx_sli_msi_rcv1_s cn70xx;
+	struct cvmx_sli_msi_rcv1_s cn70xxp1;
+	struct cvmx_sli_msi_rcv1_s cn73xx;
+	struct cvmx_sli_msi_rcv1_s cn78xx;
+	struct cvmx_sli_msi_rcv1_s cn78xxp1;
+	struct cvmx_sli_msi_rcv1_s cnf71xx;
+	struct cvmx_sli_msi_rcv1_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_rcv1 cvmx_sli_msi_rcv1_t;
+
+/**
+ * cvmx_sli_msi_rcv2
+ *
+ * This register contains bits <191:128> of the 256 bits of MSI interrupts.
+ *
+ */
+union cvmx_sli_msi_rcv2 {
+	u64 u64;
+	struct cvmx_sli_msi_rcv2_s {
+		u64 intr : 64;
+	} s;
+	struct cvmx_sli_msi_rcv2_s cn61xx;
+	struct cvmx_sli_msi_rcv2_s cn63xx;
+	struct cvmx_sli_msi_rcv2_s cn63xxp1;
+	struct cvmx_sli_msi_rcv2_s cn66xx;
+	struct cvmx_sli_msi_rcv2_s cn68xx;
+	struct cvmx_sli_msi_rcv2_s cn68xxp1;
+	struct cvmx_sli_msi_rcv2_s cn70xx;
+	struct cvmx_sli_msi_rcv2_s cn70xxp1;
+	struct cvmx_sli_msi_rcv2_s cn73xx;
+	struct cvmx_sli_msi_rcv2_s cn78xx;
+	struct cvmx_sli_msi_rcv2_s cn78xxp1;
+	struct cvmx_sli_msi_rcv2_s cnf71xx;
+	struct cvmx_sli_msi_rcv2_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_rcv2 cvmx_sli_msi_rcv2_t;
+
+/**
+ * cvmx_sli_msi_rcv3
+ *
+ * This register contains bits <255:192> of the 256 bits of MSI interrupts.
+ *
+ */
+union cvmx_sli_msi_rcv3 {
+	u64 u64;
+	struct cvmx_sli_msi_rcv3_s {
+		u64 intr : 64;
+	} s;
+	struct cvmx_sli_msi_rcv3_s cn61xx;
+	struct cvmx_sli_msi_rcv3_s cn63xx;
+	struct cvmx_sli_msi_rcv3_s cn63xxp1;
+	struct cvmx_sli_msi_rcv3_s cn66xx;
+	struct cvmx_sli_msi_rcv3_s cn68xx;
+	struct cvmx_sli_msi_rcv3_s cn68xxp1;
+	struct cvmx_sli_msi_rcv3_s cn70xx;
+	struct cvmx_sli_msi_rcv3_s cn70xxp1;
+	struct cvmx_sli_msi_rcv3_s cn73xx;
+	struct cvmx_sli_msi_rcv3_s cn78xx;
+	struct cvmx_sli_msi_rcv3_s cn78xxp1;
+	struct cvmx_sli_msi_rcv3_s cnf71xx;
+	struct cvmx_sli_msi_rcv3_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_rcv3 cvmx_sli_msi_rcv3_t;
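+
+/*
+ * Addressing sketch: MSI vector v (0-255) lands in SLI_MSI_RCV(v >> 6), bit
+ * (v & 63); the same split applies to the ENB/W1C/W1S companion registers.
+ * The helper below is purely illustrative.
+ */
+static inline void example_msi_rcv_position(unsigned int vec,
+					    unsigned int *reg, unsigned int *bit)
+{
+	*reg = vec >> 6;	/* which of SLI_MSI_RCV0..3 holds the bit */
+	*bit = vec & 63;	/* bit position within that register */
+}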
+
+/**
+ * cvmx_sli_msi_rd_map
+ *
+ * This register is used to read the mapping function of the SLI_PCIE_MSI_RCV to SLI_MSI_RCV
+ * registers.
+ */
+union cvmx_sli_msi_rd_map {
+	u64 u64;
+	struct cvmx_sli_msi_rd_map_s {
+		u64 reserved_16_63 : 48;
+		u64 rd_int : 8;
+		u64 msi_int : 8;
+	} s;
+	struct cvmx_sli_msi_rd_map_s cn61xx;
+	struct cvmx_sli_msi_rd_map_s cn63xx;
+	struct cvmx_sli_msi_rd_map_s cn63xxp1;
+	struct cvmx_sli_msi_rd_map_s cn66xx;
+	struct cvmx_sli_msi_rd_map_s cn68xx;
+	struct cvmx_sli_msi_rd_map_s cn68xxp1;
+	struct cvmx_sli_msi_rd_map_s cn70xx;
+	struct cvmx_sli_msi_rd_map_s cn70xxp1;
+	struct cvmx_sli_msi_rd_map_s cn73xx;
+	struct cvmx_sli_msi_rd_map_s cn78xx;
+	struct cvmx_sli_msi_rd_map_s cn78xxp1;
+	struct cvmx_sli_msi_rd_map_s cnf71xx;
+	struct cvmx_sli_msi_rd_map_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_rd_map cvmx_sli_msi_rd_map_t;
+
+/**
+ * cvmx_sli_msi_w1c_enb0
+ *
+ * Used to clear bits in SLI_MSI_ENB0.
+ *
+ */
+union cvmx_sli_msi_w1c_enb0 {
+	u64 u64;
+	struct cvmx_sli_msi_w1c_enb0_s {
+		u64 clr : 64;
+	} s;
+	struct cvmx_sli_msi_w1c_enb0_s cn61xx;
+	struct cvmx_sli_msi_w1c_enb0_s cn63xx;
+	struct cvmx_sli_msi_w1c_enb0_s cn63xxp1;
+	struct cvmx_sli_msi_w1c_enb0_s cn66xx;
+	struct cvmx_sli_msi_w1c_enb0_s cn68xx;
+	struct cvmx_sli_msi_w1c_enb0_s cn68xxp1;
+	struct cvmx_sli_msi_w1c_enb0_s cn70xx;
+	struct cvmx_sli_msi_w1c_enb0_s cn70xxp1;
+	struct cvmx_sli_msi_w1c_enb0_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1c_enb0 cvmx_sli_msi_w1c_enb0_t;
+
+/**
+ * cvmx_sli_msi_w1c_enb1
+ *
+ * Used to clear bits in SLI_MSI_ENB1.
+ *
+ */
+union cvmx_sli_msi_w1c_enb1 {
+	u64 u64;
+	struct cvmx_sli_msi_w1c_enb1_s {
+		u64 clr : 64;
+	} s;
+	struct cvmx_sli_msi_w1c_enb1_s cn61xx;
+	struct cvmx_sli_msi_w1c_enb1_s cn63xx;
+	struct cvmx_sli_msi_w1c_enb1_s cn63xxp1;
+	struct cvmx_sli_msi_w1c_enb1_s cn66xx;
+	struct cvmx_sli_msi_w1c_enb1_s cn68xx;
+	struct cvmx_sli_msi_w1c_enb1_s cn68xxp1;
+	struct cvmx_sli_msi_w1c_enb1_s cn70xx;
+	struct cvmx_sli_msi_w1c_enb1_s cn70xxp1;
+	struct cvmx_sli_msi_w1c_enb1_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1c_enb1 cvmx_sli_msi_w1c_enb1_t;
+
+/**
+ * cvmx_sli_msi_w1c_enb2
+ *
+ * Used to clear bits in SLI_MSI_ENB2.
+ *
+ */
+union cvmx_sli_msi_w1c_enb2 {
+	u64 u64;
+	struct cvmx_sli_msi_w1c_enb2_s {
+		u64 clr : 64;
+	} s;
+	struct cvmx_sli_msi_w1c_enb2_s cn61xx;
+	struct cvmx_sli_msi_w1c_enb2_s cn63xx;
+	struct cvmx_sli_msi_w1c_enb2_s cn63xxp1;
+	struct cvmx_sli_msi_w1c_enb2_s cn66xx;
+	struct cvmx_sli_msi_w1c_enb2_s cn68xx;
+	struct cvmx_sli_msi_w1c_enb2_s cn68xxp1;
+	struct cvmx_sli_msi_w1c_enb2_s cn70xx;
+	struct cvmx_sli_msi_w1c_enb2_s cn70xxp1;
+	struct cvmx_sli_msi_w1c_enb2_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1c_enb2 cvmx_sli_msi_w1c_enb2_t;
+
+/**
+ * cvmx_sli_msi_w1c_enb3
+ *
+ * Used to clear bits in SLI_MSI_ENB3.
+ *
+ */
+union cvmx_sli_msi_w1c_enb3 {
+	u64 u64;
+	struct cvmx_sli_msi_w1c_enb3_s {
+		u64 clr : 64;
+	} s;
+	struct cvmx_sli_msi_w1c_enb3_s cn61xx;
+	struct cvmx_sli_msi_w1c_enb3_s cn63xx;
+	struct cvmx_sli_msi_w1c_enb3_s cn63xxp1;
+	struct cvmx_sli_msi_w1c_enb3_s cn66xx;
+	struct cvmx_sli_msi_w1c_enb3_s cn68xx;
+	struct cvmx_sli_msi_w1c_enb3_s cn68xxp1;
+	struct cvmx_sli_msi_w1c_enb3_s cn70xx;
+	struct cvmx_sli_msi_w1c_enb3_s cn70xxp1;
+	struct cvmx_sli_msi_w1c_enb3_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1c_enb3 cvmx_sli_msi_w1c_enb3_t;
+
+/**
+ * cvmx_sli_msi_w1s_enb0
+ *
+ * Used to set bits in SLI_MSI_ENB0.
+ *
+ */
+union cvmx_sli_msi_w1s_enb0 {
+	u64 u64;
+	struct cvmx_sli_msi_w1s_enb0_s {
+		u64 set : 64;
+	} s;
+	struct cvmx_sli_msi_w1s_enb0_s cn61xx;
+	struct cvmx_sli_msi_w1s_enb0_s cn63xx;
+	struct cvmx_sli_msi_w1s_enb0_s cn63xxp1;
+	struct cvmx_sli_msi_w1s_enb0_s cn66xx;
+	struct cvmx_sli_msi_w1s_enb0_s cn68xx;
+	struct cvmx_sli_msi_w1s_enb0_s cn68xxp1;
+	struct cvmx_sli_msi_w1s_enb0_s cn70xx;
+	struct cvmx_sli_msi_w1s_enb0_s cn70xxp1;
+	struct cvmx_sli_msi_w1s_enb0_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1s_enb0 cvmx_sli_msi_w1s_enb0_t;
+
+/**
+ * cvmx_sli_msi_w1s_enb1
+ *
+ * Used to set bits in SLI_MSI_ENB1.
+ *
+ */
+union cvmx_sli_msi_w1s_enb1 {
+	u64 u64;
+	struct cvmx_sli_msi_w1s_enb1_s {
+		u64 set : 64;
+	} s;
+	struct cvmx_sli_msi_w1s_enb1_s cn61xx;
+	struct cvmx_sli_msi_w1s_enb1_s cn63xx;
+	struct cvmx_sli_msi_w1s_enb1_s cn63xxp1;
+	struct cvmx_sli_msi_w1s_enb1_s cn66xx;
+	struct cvmx_sli_msi_w1s_enb1_s cn68xx;
+	struct cvmx_sli_msi_w1s_enb1_s cn68xxp1;
+	struct cvmx_sli_msi_w1s_enb1_s cn70xx;
+	struct cvmx_sli_msi_w1s_enb1_s cn70xxp1;
+	struct cvmx_sli_msi_w1s_enb1_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1s_enb1 cvmx_sli_msi_w1s_enb1_t;
+
+/**
+ * cvmx_sli_msi_w1s_enb2
+ *
+ * Used to set bits in SLI_MSI_ENB2.
+ *
+ */
+union cvmx_sli_msi_w1s_enb2 {
+	u64 u64;
+	struct cvmx_sli_msi_w1s_enb2_s {
+		u64 set : 64;
+	} s;
+	struct cvmx_sli_msi_w1s_enb2_s cn61xx;
+	struct cvmx_sli_msi_w1s_enb2_s cn63xx;
+	struct cvmx_sli_msi_w1s_enb2_s cn63xxp1;
+	struct cvmx_sli_msi_w1s_enb2_s cn66xx;
+	struct cvmx_sli_msi_w1s_enb2_s cn68xx;
+	struct cvmx_sli_msi_w1s_enb2_s cn68xxp1;
+	struct cvmx_sli_msi_w1s_enb2_s cn70xx;
+	struct cvmx_sli_msi_w1s_enb2_s cn70xxp1;
+	struct cvmx_sli_msi_w1s_enb2_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1s_enb2 cvmx_sli_msi_w1s_enb2_t;
+
+/**
+ * cvmx_sli_msi_w1s_enb3
+ *
+ * Used to set bits in SLI_MSI_ENB3.
+ *
+ */
+union cvmx_sli_msi_w1s_enb3 {
+	u64 u64;
+	struct cvmx_sli_msi_w1s_enb3_s {
+		u64 set : 64;
+	} s;
+	struct cvmx_sli_msi_w1s_enb3_s cn61xx;
+	struct cvmx_sli_msi_w1s_enb3_s cn63xx;
+	struct cvmx_sli_msi_w1s_enb3_s cn63xxp1;
+	struct cvmx_sli_msi_w1s_enb3_s cn66xx;
+	struct cvmx_sli_msi_w1s_enb3_s cn68xx;
+	struct cvmx_sli_msi_w1s_enb3_s cn68xxp1;
+	struct cvmx_sli_msi_w1s_enb3_s cn70xx;
+	struct cvmx_sli_msi_w1s_enb3_s cn70xxp1;
+	struct cvmx_sli_msi_w1s_enb3_s cnf71xx;
+};
+
+typedef union cvmx_sli_msi_w1s_enb3 cvmx_sli_msi_w1s_enb3_t;
+
+/**
+ * cvmx_sli_msi_wr_map
+ *
+ * This register is used to write the mapping function of the SLI_PCIE_MSI_RCV to SLI_MSI_RCV
+ * registers. At reset, the mapping function is one-to-one, that is MSI_INT 1 maps to CIU_INT 1,
+ * 2 to 2, 3 to 3, etc.
+ */
+union cvmx_sli_msi_wr_map {
+	u64 u64;
+	struct cvmx_sli_msi_wr_map_s {
+		u64 reserved_16_63 : 48;
+		u64 ciu_int : 8;
+		u64 msi_int : 8;
+	} s;
+	struct cvmx_sli_msi_wr_map_s cn61xx;
+	struct cvmx_sli_msi_wr_map_s cn63xx;
+	struct cvmx_sli_msi_wr_map_s cn63xxp1;
+	struct cvmx_sli_msi_wr_map_s cn66xx;
+	struct cvmx_sli_msi_wr_map_s cn68xx;
+	struct cvmx_sli_msi_wr_map_s cn68xxp1;
+	struct cvmx_sli_msi_wr_map_s cn70xx;
+	struct cvmx_sli_msi_wr_map_s cn70xxp1;
+	struct cvmx_sli_msi_wr_map_s cn73xx;
+	struct cvmx_sli_msi_wr_map_s cn78xx;
+	struct cvmx_sli_msi_wr_map_s cn78xxp1;
+	struct cvmx_sli_msi_wr_map_s cnf71xx;
+	struct cvmx_sli_msi_wr_map_s cnf75xx;
+};
+
+typedef union cvmx_sli_msi_wr_map cvmx_sli_msi_wr_map_t;
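+
+/*
+ * Remap sketch: to change where an MSI vector is steered, program [MSI_INT]
+ * and [CIU_INT] together and write the register (the mapping is one-to-one
+ * at reset). Assumes the cvmx_write_csr() accessor and the
+ * CVMX_SLI_MSI_WR_MAP address macro.
+ */
+static inline void example_sli_msi_remap(unsigned int msi, unsigned int ciu)
+{
+	cvmx_sli_msi_wr_map_t map;
+
+	map.u64 = 0;
+	map.s.msi_int = msi;	/* MSI source vector */
+	map.s.ciu_int = ciu;	/* CIU interrupt to deliver it on */
+	cvmx_write_csr(CVMX_SLI_MSI_WR_MAP, map.u64);
+}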
+
+/**
+ * cvmx_sli_msix#_table_addr
+ *
+ * The MSI-X table cannot be burst read or written.
+ *
+ * The combination of all MSI-X tables contains (64 + 4) entries - one per
+ * ring plus one per PF (i.e. 64 plus one per valid SLI_MAC()_PF()_INT_SUM
+ * present).
+ *
+ * The MSI-X table for an individual PF has SLI_PKT_MAC()_PF()_RINFO[TRS]
+ * entries for the rings associated with the PF (up to 64 max) plus one
+ * more table entry for SLI_MAC()_PF()_INT_SUM. The
+ * SLI_MAC()_PF()_INT_SUM-related MSI-X table entry is
+ * always entry SLI_MSIX(SLI_PKT_MAC()_PF()_RINFO[TRS])_TABLE_ADDR and
+ * always present and valid. All earlier SLI_MSIX()_TABLE_ADDR entries
+ * correspond to rings. When SLI_PKT_MAC()_PF()_RINFO[NVFS]=0, SR-IOV
+ * virtual functions cannot be used, and all SLI_PKT_MAC()_PF()_RINFO[TRS]+1
+ * entries in the PF MSI-X table are present and valid for use by the PF.
+ * When SLI_PKT_MAC()_PF()_RINFO[NVFS]!=0, SR-IOV virtual functions may
+ * be used, and the first
+ *   SLI_PKT_MAC()_PF()_RINFO[NVFS]*SLI_PKT_MAC()_PF()_RINFO[RPVF]
+ * entries of the PF MSI-X table are present but not valid, and
+ * should never be accessed by the PF.
+ *
+ * The MSI-X table for an individual VF has SLI_PKT_MAC()_PF()_RINFO[RPVF]
+ * entries (up to 8 max), all valid, one per ring that the VF owns.
+ */
+union cvmx_sli_msixx_table_addr {
+	u64 u64;
+	struct cvmx_sli_msixx_table_addr_s {
+		u64 addr : 64;
+	} s;
+	struct cvmx_sli_msixx_table_addr_s cn73xx;
+	struct cvmx_sli_msixx_table_addr_s cn78xx;
+	struct cvmx_sli_msixx_table_addr_s cn78xxp1;
+	struct cvmx_sli_msixx_table_addr_s cnf75xx;
+};
+
+typedef union cvmx_sli_msixx_table_addr cvmx_sli_msixx_table_addr_t;
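+
+/*
+ * Index arithmetic sketch for the PF MSI-X table described above. The
+ * parameters mirror SLI_PKT_MAC()_PF()_RINFO[TRS/NVFS/RPVF]: the
+ * SLI_MAC()_PF()_INT_SUM entry always sits at index TRS, and with SR-IOV
+ * enabled the first NVFS*RPVF entries are present but owned by the VFs.
+ */
+static inline unsigned int example_pf_intsum_msix_entry(unsigned int trs)
+{
+	/* Entry SLI_MSIX(TRS)_TABLE_ADDR, always present and valid */
+	return trs;
+}
+
+static inline unsigned int example_pf_first_usable_ring_entry(unsigned int nvfs,
+							      unsigned int rpvf)
+{
+	/* Entries [0 .. nvfs*rpvf-1] must not be touched by the PF */
+	return nvfs * rpvf;
+}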
+
+/**
+ * cvmx_sli_msix#_table_data
+ *
+ * The MSI-X table cannot be burst read or written. VF/PF access is the same as
+ * described for the SLI_MSIX()_TABLE_ADDR.
+ */
+union cvmx_sli_msixx_table_data {
+	u64 u64;
+	struct cvmx_sli_msixx_table_data_s {
+		u64 reserved_33_63 : 31;
+		u64 vector_ctl : 1;
+		u64 data : 32;
+	} s;
+	struct cvmx_sli_msixx_table_data_s cn73xx;
+	struct cvmx_sli_msixx_table_data_s cn78xx;
+	struct cvmx_sli_msixx_table_data_s cn78xxp1;
+	struct cvmx_sli_msixx_table_data_s cnf75xx;
+};
+
+typedef union cvmx_sli_msixx_table_data cvmx_sli_msixx_table_data_t;
+
+/**
+ * cvmx_sli_msix_mac#_pf_table_addr
+ *
+ * These registers shadow the four physical MSIX PF ERR entries.
+ * Each MAC sees its entry at its own TRS offset.
+ */
+union cvmx_sli_msix_macx_pf_table_addr {
+	u64 u64;
+	struct cvmx_sli_msix_macx_pf_table_addr_s {
+		u64 addr : 64;
+	} s;
+	struct cvmx_sli_msix_macx_pf_table_addr_s cn78xxp1;
+};
+
+typedef union cvmx_sli_msix_macx_pf_table_addr cvmx_sli_msix_macx_pf_table_addr_t;
+
+/**
+ * cvmx_sli_msix_mac#_pf_table_data
+ *
+ * These registers shadow four physical MSIX PF ERR entries.
+ * Each MAC sees its entry at its own TRS offset.
+ */
+union cvmx_sli_msix_macx_pf_table_data {
+	u64 u64;
+	struct cvmx_sli_msix_macx_pf_table_data_s {
+		u64 reserved_33_63 : 31;
+		u64 vector_ctl : 1;
+		u64 data : 32;
+	} s;
+	struct cvmx_sli_msix_macx_pf_table_data_s cn78xxp1;
+};
+
+typedef union cvmx_sli_msix_macx_pf_table_data cvmx_sli_msix_macx_pf_table_data_t;
+
+/**
+ * cvmx_sli_msix_pba0
+ *
+ * The MSI-X pending bit array cannot be burst read.
+ * In SR-IOV mode, a VF will find its pending completion interrupts in bit
+ * positions [(RPVF-1):0]. If RPVF<64, bits [63:RPVF] are returned as zero.
+ *
+ * Each VF can read its own pending completion interrupts based on the ring/VF
+ * configuration. Therefore, a VF sees the PBA as smaller than what is shown below
+ * (unless it owns all 64 entries).  Unassigned bits will return zeros.
+ *
+ * <pre>
+ *    RPVF  Interrupts per VF   Pending bits returned
+ *    ----  -----------------   ---------------------
+ *      0            0          0
+ *      1            1          MSG_PND0
+ *      2            2          MSG_PND1  - MSG_PND0
+ *      4            4          MSG_PND3  - MSG_PND0
+ *      8            8          MSG_PND7  - MSG_PND0
+ * </pre>
+ *
+ * If SLI_PKT_MAC()_PF()_RINFO[TRS]=63 (i.e. 64 total DPI Packet Rings configured), a PF will
+ * find its pending completion interrupts in bit positions [63:0]. When
+ * SLI_PKT_MAC()_PF()_RINFO[TRS]=63,
+ * the PF will find its PCIe error interrupt in SLI_MSIX_PBA1, bit position 0.
+ *
+ * If SLI_PKT_MAC()_PF()_RINFO[TRS]<63 (i.e. 0, 1, 2, 4, or 8 rings configured), a PF will find
+ * its ring pending completion interrupts in bit positions [TNR:0]. It will find its PCIe
+ * error interrupt in bit position [(TNR+1)]. Bits [63:(TNR+2)] are returned as zero.
+ * When SLI_PKT_MAC()_PF()_RINFO[TRS]<63, SLI_MSIX_PBA1 is not used and returns zeros.
+ *
+ * If SR-IOV mode is off, there is no virtual function support, but the PF can configure up to 65
+ * entries (up to 64 DPI Packet Rings plus 1 PF ring) for itself.
+ */
+union cvmx_sli_msix_pba0 {
+	u64 u64;
+	struct cvmx_sli_msix_pba0_s {
+		u64 msg_pnd : 64;
+	} s;
+	struct cvmx_sli_msix_pba0_s cn73xx;
+	struct cvmx_sli_msix_pba0_s cn78xx;
+	struct cvmx_sli_msix_pba0_s cn78xxp1;
+	struct cvmx_sli_msix_pba0_s cnf75xx;
+};
+
+typedef union cvmx_sli_msix_pba0 cvmx_sli_msix_pba0_t;
+
+/**
+ * cvmx_sli_msix_pba1
+ *
+ * The MSI-X pending bit array cannot be burst read.
+ *
+ * PF_PND is assigned to PCIe related errors. The error bit can only be found in PBA1 when
+ * SLI_PKT_MAC()_PF()_RINFO[TRS]=63 (i.e. 64 total DPI Packet Rings configured).
+ *
+ * This register is accessible by the PF. A read by a particular PF only
+ * returns its own pending status. That is, any PF can read this register, but the hardware
+ * ensures
+ * that the PF only sees its own status.
+ */
+union cvmx_sli_msix_pba1 {
+	u64 u64;
+	struct cvmx_sli_msix_pba1_s {
+		u64 reserved_1_63 : 63;
+		u64 pf_pnd : 1;
+	} s;
+	struct cvmx_sli_msix_pba1_s cn73xx;
+	struct cvmx_sli_msix_pba1_s cn78xx;
+	struct cvmx_sli_msix_pba1_s cn78xxp1;
+	struct cvmx_sli_msix_pba1_s cnf75xx;
+};
+
+typedef union cvmx_sli_msix_pba1 cvmx_sli_msix_pba1_t;
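+
+/*
+ * Pending-bit sketch: ring completion interrupts are read from
+ * SLI_MSIX_PBA0, and only a TRS=63 configuration pushes the PF PCIe error
+ * bit out to SLI_MSIX_PBA1. Assumes cvmx_read_csr() and the
+ * CVMX_SLI_MSIX_PBA0 address macro.
+ */
+static inline int example_msix_ring_pending(unsigned int ring)
+{
+	cvmx_sli_msix_pba0_t pba0;
+
+	pba0.u64 = cvmx_read_csr(CVMX_SLI_MSIX_PBA0);
+	return (pba0.s.msg_pnd >> ring) & 1;
+}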
+
+/**
+ * cvmx_sli_nqm_rsp_err_snd_dbg
+ *
+ * This register is for diagnostic use only.
+ *
+ */
+union cvmx_sli_nqm_rsp_err_snd_dbg {
+	u64 u64;
+	struct cvmx_sli_nqm_rsp_err_snd_dbg_s {
+		u64 reserved_12_63 : 52;
+		u64 vf_index : 12;
+	} s;
+	struct cvmx_sli_nqm_rsp_err_snd_dbg_s cn73xx;
+	struct cvmx_sli_nqm_rsp_err_snd_dbg_s cn78xx;
+	struct cvmx_sli_nqm_rsp_err_snd_dbg_s cnf75xx;
+};
+
+typedef union cvmx_sli_nqm_rsp_err_snd_dbg cvmx_sli_nqm_rsp_err_snd_dbg_t;
+
+/**
+ * cvmx_sli_pcie_msi_rcv
+ *
+ * This is the register where MSI write operations are directed from the MAC.
+ *
+ */
+union cvmx_sli_pcie_msi_rcv {
+	u64 u64;
+	struct cvmx_sli_pcie_msi_rcv_s {
+		u64 reserved_8_63 : 56;
+		u64 intr : 8;
+	} s;
+	struct cvmx_sli_pcie_msi_rcv_s cn61xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn63xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn63xxp1;
+	struct cvmx_sli_pcie_msi_rcv_s cn66xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn68xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn68xxp1;
+	struct cvmx_sli_pcie_msi_rcv_s cn70xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn70xxp1;
+	struct cvmx_sli_pcie_msi_rcv_s cn73xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn78xx;
+	struct cvmx_sli_pcie_msi_rcv_s cn78xxp1;
+	struct cvmx_sli_pcie_msi_rcv_s cnf71xx;
+	struct cvmx_sli_pcie_msi_rcv_s cnf75xx;
+};
+
+typedef union cvmx_sli_pcie_msi_rcv cvmx_sli_pcie_msi_rcv_t;
+
+/**
+ * cvmx_sli_pcie_msi_rcv_b1
+ *
+ * This register is where MSI write operations are directed from the MAC. This register can be
+ * used by the PCIe and SRIO MACs.
+ */
+union cvmx_sli_pcie_msi_rcv_b1 {
+	u64 u64;
+	struct cvmx_sli_pcie_msi_rcv_b1_s {
+		u64 reserved_16_63 : 48;
+		u64 intr : 8;
+		u64 reserved_0_7 : 8;
+	} s;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn61xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn63xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn63xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn66xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn68xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn68xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn70xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn70xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn73xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn78xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cn78xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cnf71xx;
+	struct cvmx_sli_pcie_msi_rcv_b1_s cnf75xx;
+};
+
+typedef union cvmx_sli_pcie_msi_rcv_b1 cvmx_sli_pcie_msi_rcv_b1_t;
+
+/**
+ * cvmx_sli_pcie_msi_rcv_b2
+ *
+ * This register is where MSI write operations are directed from the MAC.  This register can be
+ * used by PCIe and SRIO MACs.
+ */
+union cvmx_sli_pcie_msi_rcv_b2 {
+	u64 u64;
+	struct cvmx_sli_pcie_msi_rcv_b2_s {
+		u64 reserved_24_63 : 40;
+		u64 intr : 8;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn61xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn63xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn63xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn66xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn68xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn68xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn70xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn70xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn73xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn78xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cn78xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cnf71xx;
+	struct cvmx_sli_pcie_msi_rcv_b2_s cnf75xx;
+};
+
+typedef union cvmx_sli_pcie_msi_rcv_b2 cvmx_sli_pcie_msi_rcv_b2_t;
+
+/**
+ * cvmx_sli_pcie_msi_rcv_b3
+ *
+ * This register is where MSI write operations are directed from the MAC. This register can be
+ * used by PCIe and SRIO MACs.
+ */
+union cvmx_sli_pcie_msi_rcv_b3 {
+	u64 u64;
+	struct cvmx_sli_pcie_msi_rcv_b3_s {
+		u64 reserved_32_63 : 32;
+		u64 intr : 8;
+		u64 reserved_0_23 : 24;
+	} s;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn61xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn63xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn63xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn66xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn68xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn68xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn70xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn70xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn73xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn78xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cn78xxp1;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cnf71xx;
+	struct cvmx_sli_pcie_msi_rcv_b3_s cnf75xx;
+};
+
+typedef union cvmx_sli_pcie_msi_rcv_b3 cvmx_sli_pcie_msi_rcv_b3_t;
+
+/**
+ * cvmx_sli_pkt#_cnts
+ *
+ * This register contains the counters for output rings.
+ *
+ */
+union cvmx_sli_pktx_cnts {
+	u64 u64;
+	struct cvmx_sli_pktx_cnts_s {
+		u64 po_int : 1;
+		u64 pi_int : 1;
+		u64 mbox_int : 1;
+		u64 resend : 1;
+		u64 reserved_54_59 : 6;
+		u64 timer : 22;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_pktx_cnts_cn61xx {
+		u64 reserved_54_63 : 10;
+		u64 timer : 22;
+		u64 cnt : 32;
+	} cn61xx;
+	struct cvmx_sli_pktx_cnts_cn61xx cn63xx;
+	struct cvmx_sli_pktx_cnts_cn61xx cn63xxp1;
+	struct cvmx_sli_pktx_cnts_cn61xx cn66xx;
+	struct cvmx_sli_pktx_cnts_cn61xx cn68xx;
+	struct cvmx_sli_pktx_cnts_cn61xx cn68xxp1;
+	struct cvmx_sli_pktx_cnts_cn70xx {
+		u64 reserved_63_54 : 10;
+		u64 timer : 22;
+		u64 cnt : 32;
+	} cn70xx;
+	struct cvmx_sli_pktx_cnts_cn70xx cn70xxp1;
+	struct cvmx_sli_pktx_cnts_s cn73xx;
+	struct cvmx_sli_pktx_cnts_s cn78xx;
+	struct cvmx_sli_pktx_cnts_cn78xxp1 {
+		u64 po_int : 1;
+		u64 pi_int : 1;
+		u64 reserved_61_54 : 8;
+		u64 timer : 22;
+		u64 cnt : 32;
+	} cn78xxp1;
+	struct cvmx_sli_pktx_cnts_cn61xx cnf71xx;
+	struct cvmx_sli_pktx_cnts_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_cnts cvmx_sli_pktx_cnts_t;
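+
+/*
+ * Decode sketch for the per-ring output counters: one raw value carries the
+ * packet count, the timer, and (on CN73XX/CN78XX-class parts) the interrupt
+ * summary bits. The ring's CSR address is passed in here rather than
+ * depending on the per-ring address macro defined elsewhere in this header.
+ */
+static inline u64 example_sli_pkt_out_cnt(u64 cnts_addr)
+{
+	cvmx_sli_pktx_cnts_t cnts;
+
+	cnts.u64 = cvmx_read_csr(cnts_addr);
+	return cnts.s.cnt;	/* packets completed on this output ring */
+}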
+
+/**
+ * cvmx_sli_pkt#_error_info
+ *
+ * The fields in this register are set when error conditions occur and can be cleared.
+ * These fields are for informational purposes only and do NOT generate interrupts.
+ */
+union cvmx_sli_pktx_error_info {
+	u64 u64;
+	struct cvmx_sli_pktx_error_info_s {
+		u64 reserved_8_63 : 56;
+		u64 osize_err : 1;
+		u64 nobdell_err : 1;
+		u64 pins_err : 1;
+		u64 pop_err : 1;
+		u64 pdi_err : 1;
+		u64 pgl_err : 1;
+		u64 psldbof : 1;
+		u64 pidbof : 1;
+	} s;
+	struct cvmx_sli_pktx_error_info_s cn73xx;
+	struct cvmx_sli_pktx_error_info_s cn78xx;
+	struct cvmx_sli_pktx_error_info_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_error_info cvmx_sli_pktx_error_info_t;
+
+/**
+ * cvmx_sli_pkt#_in_bp
+ *
+ * "SLI_PKT[0..31]_IN_BP = SLI Packet ring# Input Backpressure
+ * The counters and thresholds for input packets to apply backpressure to processing of the
+ * packets."
+ */
+union cvmx_sli_pktx_in_bp {
+	u64 u64;
+	struct cvmx_sli_pktx_in_bp_s {
+		u64 wmark : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_pktx_in_bp_s cn61xx;
+	struct cvmx_sli_pktx_in_bp_s cn63xx;
+	struct cvmx_sli_pktx_in_bp_s cn63xxp1;
+	struct cvmx_sli_pktx_in_bp_s cn66xx;
+	struct cvmx_sli_pktx_in_bp_s cn70xx;
+	struct cvmx_sli_pktx_in_bp_s cn70xxp1;
+	struct cvmx_sli_pktx_in_bp_s cnf71xx;
+};
+
+typedef union cvmx_sli_pktx_in_bp cvmx_sli_pktx_in_bp_t;
+
+/**
+ * cvmx_sli_pkt#_input_control
+ *
+ * This register is the control for read operations for gather list and instructions.
+ *
+ */
+union cvmx_sli_pktx_input_control {
+	u64 u64;
+	struct cvmx_sli_pktx_input_control_s {
+		u64 reserved_55_63 : 9;
+		u64 rpvf : 7;
+		u64 reserved_31_47 : 17;
+		u64 mac_num : 2;
+		u64 quiet : 1;
+		u64 reserved_27_27 : 1;
+		u64 rdsize : 2;
+		u64 is_64b : 1;
+		u64 rst : 1;
+		u64 enb : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} s;
+	struct cvmx_sli_pktx_input_control_cn73xx {
+		u64 reserved_55_63 : 9;
+		u64 rpvf : 7;
+		u64 pvf_num : 16;
+		u64 reserved_31_31 : 1;
+		u64 mac_num : 2;
+		u64 quiet : 1;
+		u64 reserved_27_27 : 1;
+		u64 rdsize : 2;
+		u64 is_64b : 1;
+		u64 rst : 1;
+		u64 enb : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} cn73xx;
+	struct cvmx_sli_pktx_input_control_cn73xx cn78xx;
+	struct cvmx_sli_pktx_input_control_cn78xxp1 {
+		u64 reserved_39_63 : 25;
+		u64 vf_num : 7;
+		u64 reserved_31_31 : 1;
+		u64 mac_num : 2;
+		u64 reserved_27_28 : 2;
+		u64 rdsize : 2;
+		u64 is_64b : 1;
+		u64 rst : 1;
+		u64 enb : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} cn78xxp1;
+	struct cvmx_sli_pktx_input_control_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_input_control cvmx_sli_pktx_input_control_t;
+
+/**
+ * cvmx_sli_pkt#_instr_baddr
+ *
+ * This register contains the start-of-instruction address for input packets. The address
+ * must be aligned to the size of the instruction.
+ */
+union cvmx_sli_pktx_instr_baddr {
+	u64 u64;
+	struct cvmx_sli_pktx_instr_baddr_s {
+		u64 addr : 61;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_sli_pktx_instr_baddr_s cn61xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn63xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn63xxp1;
+	struct cvmx_sli_pktx_instr_baddr_s cn66xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn68xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn68xxp1;
+	struct cvmx_sli_pktx_instr_baddr_s cn70xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn70xxp1;
+	struct cvmx_sli_pktx_instr_baddr_s cn73xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn78xx;
+	struct cvmx_sli_pktx_instr_baddr_s cn78xxp1;
+	struct cvmx_sli_pktx_instr_baddr_s cnf71xx;
+	struct cvmx_sli_pktx_instr_baddr_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_instr_baddr cvmx_sli_pktx_instr_baddr_t;
+
+/**
+ * cvmx_sli_pkt#_instr_baoff_dbell
+ *
+ * This register contains the doorbell and base address offset for the next read.
+ *
+ */
+union cvmx_sli_pktx_instr_baoff_dbell {
+	u64 u64;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s {
+		u64 aoff : 32;
+		u64 dbell : 32;
+	} s;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn61xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn63xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn63xxp1;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn66xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn68xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn68xxp1;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn70xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn70xxp1;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn73xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn78xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cn78xxp1;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cnf71xx;
+	struct cvmx_sli_pktx_instr_baoff_dbell_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_instr_baoff_dbell cvmx_sli_pktx_instr_baoff_dbell_t;
+
+/**
+ * cvmx_sli_pkt#_instr_fifo_rsize
+ *
+ * This register contains the FIFO fields and ring size for instructions.
+ *
+ */
+union cvmx_sli_pktx_instr_fifo_rsize {
+	u64 u64;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s {
+		u64 max : 9;
+		u64 rrp : 9;
+		u64 wrp : 9;
+		u64 fcnt : 5;
+		u64 rsize : 32;
+	} s;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn61xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn63xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn63xxp1;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn66xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn68xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn68xxp1;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn70xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn70xxp1;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn73xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn78xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cn78xxp1;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cnf71xx;
+	struct cvmx_sli_pktx_instr_fifo_rsize_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_instr_fifo_rsize cvmx_sli_pktx_instr_fifo_rsize_t;
+
+/**
+ * cvmx_sli_pkt#_instr_header
+ *
+ * "SLI_PKT[0..31]_INSTR_HEADER = SLI Packet ring# Instruction Header.
+ * Values used to build the input packet header."
+ */
+union cvmx_sli_pktx_instr_header {
+	u64 u64;
+	struct cvmx_sli_pktx_instr_header_s {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 reserved_38_42 : 5;
+		u64 rparmode : 2;
+		u64 reserved_35_35 : 1;
+		u64 rskp_len : 7;
+		u64 rngrpext : 2;
+		u64 rnqos : 1;
+		u64 rngrp : 1;
+		u64 rntt : 1;
+		u64 rntag : 1;
+		u64 use_ihdr : 1;
+		u64 reserved_16_20 : 5;
+		u64 par_mode : 2;
+		u64 reserved_13_13 : 1;
+		u64 skp_len : 7;
+		u64 ngrpext : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+	} s;
+	struct cvmx_sli_pktx_instr_header_cn61xx {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 reserved_38_42 : 5;
+		u64 rparmode : 2;
+		u64 reserved_35_35 : 1;
+		u64 rskp_len : 7;
+		u64 reserved_26_27 : 2;
+		u64 rnqos : 1;
+		u64 rngrp : 1;
+		u64 rntt : 1;
+		u64 rntag : 1;
+		u64 use_ihdr : 1;
+		u64 reserved_16_20 : 5;
+		u64 par_mode : 2;
+		u64 reserved_13_13 : 1;
+		u64 skp_len : 7;
+		u64 reserved_4_5 : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+	} cn61xx;
+	struct cvmx_sli_pktx_instr_header_cn61xx cn63xx;
+	struct cvmx_sli_pktx_instr_header_cn61xx cn63xxp1;
+	struct cvmx_sli_pktx_instr_header_cn61xx cn66xx;
+	struct cvmx_sli_pktx_instr_header_s cn68xx;
+	struct cvmx_sli_pktx_instr_header_cn61xx cn68xxp1;
+	struct cvmx_sli_pktx_instr_header_cn70xx {
+		u64 reserved_44_63 : 20;
+		u64 pbp : 1;
+		u64 reserved_38_42 : 5;
+		u64 rparmode : 2;
+		u64 reserved_35_35 : 1;
+		u64 rskp_len : 7;
+		u64 reserved_27_26 : 2;
+		u64 rnqos : 1;
+		u64 rngrp : 1;
+		u64 rntt : 1;
+		u64 rntag : 1;
+		u64 use_ihdr : 1;
+		u64 reserved_20_16 : 5;
+		u64 par_mode : 2;
+		u64 reserved_13_13 : 1;
+		u64 skp_len : 7;
+		u64 reserved_5_4 : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+	} cn70xx;
+	struct cvmx_sli_pktx_instr_header_cn70xx cn70xxp1;
+	struct cvmx_sli_pktx_instr_header_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_pktx_instr_header cvmx_sli_pktx_instr_header_t;
+
+/**
+ * cvmx_sli_pkt#_int_levels
+ *
+ * This register contains output-packet interrupt levels.
+ *
+ */
+union cvmx_sli_pktx_int_levels {
+	u64 u64;
+	struct cvmx_sli_pktx_int_levels_s {
+		u64 reserved_54_63 : 10;
+		u64 time : 22;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_pktx_int_levels_s cn73xx;
+	struct cvmx_sli_pktx_int_levels_s cn78xx;
+	struct cvmx_sli_pktx_int_levels_s cn78xxp1;
+	struct cvmx_sli_pktx_int_levels_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_int_levels cvmx_sli_pktx_int_levels_t;
+
+/**
+ * cvmx_sli_pkt#_mbox_int
+ *
+ * This register contains information to service mbox interrupts to the VF
+ * when the PF writes SLI_PKT()_PF_VF_MBOX_SIG(0).
+ */
+union cvmx_sli_pktx_mbox_int {
+	u64 u64;
+	struct cvmx_sli_pktx_mbox_int_s {
+		u64 po_int : 1;
+		u64 pi_int : 1;
+		u64 mbox_int : 1;
+		u64 resend : 1;
+		u64 reserved_1_59 : 59;
+		u64 mbox_en : 1;
+	} s;
+	struct cvmx_sli_pktx_mbox_int_s cn73xx;
+	struct cvmx_sli_pktx_mbox_int_s cn78xx;
+	struct cvmx_sli_pktx_mbox_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_mbox_int cvmx_sli_pktx_mbox_int_t;
+
+/**
+ * cvmx_sli_pkt#_out_size
+ *
+ * This register contains the BSIZE and ISIZE for output packet rings.
+ *
+ */
+union cvmx_sli_pktx_out_size {
+	u64 u64;
+	struct cvmx_sli_pktx_out_size_s {
+		u64 reserved_23_63 : 41;
+		u64 isize : 7;
+		u64 bsize : 16;
+	} s;
+	struct cvmx_sli_pktx_out_size_s cn61xx;
+	struct cvmx_sli_pktx_out_size_s cn63xx;
+	struct cvmx_sli_pktx_out_size_s cn63xxp1;
+	struct cvmx_sli_pktx_out_size_s cn66xx;
+	struct cvmx_sli_pktx_out_size_s cn68xx;
+	struct cvmx_sli_pktx_out_size_s cn68xxp1;
+	struct cvmx_sli_pktx_out_size_s cn70xx;
+	struct cvmx_sli_pktx_out_size_s cn70xxp1;
+	struct cvmx_sli_pktx_out_size_cn73xx {
+		u64 reserved_22_63 : 42;
+		u64 isize : 6;
+		u64 bsize : 16;
+	} cn73xx;
+	struct cvmx_sli_pktx_out_size_cn73xx cn78xx;
+	struct cvmx_sli_pktx_out_size_s cn78xxp1;
+	struct cvmx_sli_pktx_out_size_s cnf71xx;
+	struct cvmx_sli_pktx_out_size_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_out_size cvmx_sli_pktx_out_size_t;
+
+/**
+ * cvmx_sli_pkt#_output_control
+ *
+ * This register contains control bits for output packet rings.
+ *
+ */
+union cvmx_sli_pktx_output_control {
+	u64 u64;
+	struct cvmx_sli_pktx_output_control_s {
+		u64 reserved_14_63 : 50;
+		u64 tenb : 1;
+		u64 cenb : 1;
+		u64 iptr : 1;
+		u64 es : 2;
+		u64 nsr : 1;
+		u64 ror : 1;
+		u64 dptr : 1;
+		u64 bmode : 1;
+		u64 es_p : 2;
+		u64 nsr_p : 1;
+		u64 ror_p : 1;
+		u64 enb : 1;
+	} s;
+	struct cvmx_sli_pktx_output_control_s cn73xx;
+	struct cvmx_sli_pktx_output_control_s cn78xx;
+	struct cvmx_sli_pktx_output_control_s cn78xxp1;
+	struct cvmx_sli_pktx_output_control_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_output_control cvmx_sli_pktx_output_control_t;
+
+/**
+ * cvmx_sli_pkt#_pf_vf_mbox_sig#
+ *
+ * These registers are used for communication of data from the PF to the VF and vice versa.
+ *
+ * There are two registers per ring, SIG(0) and SIG(1). Both the PF and the VF have
+ * read and write access to these registers.
+ *
+ * For PF-to-VF ring interrupts, SLI_PKT(0..63)_MBOX_INT[MBOX_EN] must be set.
+ * When [MBOX_EN] is set, writes from the PF to byte 0 of the SIG(0) register cause
+ * an interrupt by setting [MBOX_INT] for the corresponding ring in
+ * SLI_PKT()_MBOX_INT[MBOX_INT],
+ * SLI_PKT_IN_DONE()_CNTS[MBOX_INT], and SLI_PKT()_CNTS[MBOX_INT].
+ *
+ * For VF-to-PF ring interrupts, SLI_MAC()_PF()_INT_ENB[VF_MBOX] must be set.
+ * When [VF_MBOX] is set, writes from the VF to byte 0 of the SIG(1) register cause an
+ * interrupt by setting the ring's VF_INT bit in the corresponding
+ * SLI_MAC()_PF()_MBOX_INT register,
+ * which may cause an interrupt to occur through the PF.
+ *
+ * Each PF and VF can only access the rings that it owns as programmed by
+ * SLI_PKT_MAC()_PF()_RINFO.
+ * The signaling is ring-based. If a VF owns more than one ring, it can ignore the other
+ * rings' registers if not needed.
+ */
+union cvmx_sli_pktx_pf_vf_mbox_sigx {
+	u64 u64;
+	struct cvmx_sli_pktx_pf_vf_mbox_sigx_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pktx_pf_vf_mbox_sigx_s cn73xx;
+	struct cvmx_sli_pktx_pf_vf_mbox_sigx_s cn78xx;
+	struct cvmx_sli_pktx_pf_vf_mbox_sigx_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_pf_vf_mbox_sigx cvmx_sli_pktx_pf_vf_mbox_sigx_t;
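+
+/*
+ * Doorbell sketch for the mailbox scheme described above: with
+ * SLI_PKT()_MBOX_INT[MBOX_EN] set, a PF write that touches byte 0 of SIG(0)
+ * raises [MBOX_INT] for that ring. The SIG(0) CSR address is passed in to
+ * avoid depending on the per-ring address macro.
+ */
+static inline void example_pf_to_vf_mbox_signal(u64 sig0_addr, u64 data)
+{
+	cvmx_sli_pktx_pf_vf_mbox_sigx_t sig;
+
+	sig.u64 = data;		/* byte 0 is the interrupting byte */
+	cvmx_write_csr(sig0_addr, sig.u64);
+}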
+
+/**
+ * cvmx_sli_pkt#_slist_baddr
+ *
+ * This register contains the start of the scatter list for output-packet pointers. This address must
+ * be 16-byte aligned.
+ */
+union cvmx_sli_pktx_slist_baddr {
+	u64 u64;
+	struct cvmx_sli_pktx_slist_baddr_s {
+		u64 addr : 60;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_sli_pktx_slist_baddr_s cn61xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn63xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn63xxp1;
+	struct cvmx_sli_pktx_slist_baddr_s cn66xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn68xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn68xxp1;
+	struct cvmx_sli_pktx_slist_baddr_s cn70xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn70xxp1;
+	struct cvmx_sli_pktx_slist_baddr_s cn73xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn78xx;
+	struct cvmx_sli_pktx_slist_baddr_s cn78xxp1;
+	struct cvmx_sli_pktx_slist_baddr_s cnf71xx;
+	struct cvmx_sli_pktx_slist_baddr_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_slist_baddr cvmx_sli_pktx_slist_baddr_t;
+
+/**
+ * cvmx_sli_pkt#_slist_baoff_dbell
+ *
+ * This register contains the doorbell and base-address offset for the next read operation.
+ *
+ */
+union cvmx_sli_pktx_slist_baoff_dbell {
+	u64 u64;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s {
+		u64 aoff : 32;
+		u64 dbell : 32;
+	} s;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn61xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn63xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn63xxp1;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn66xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn68xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn68xxp1;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn70xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn70xxp1;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn73xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn78xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cn78xxp1;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cnf71xx;
+	struct cvmx_sli_pktx_slist_baoff_dbell_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_slist_baoff_dbell cvmx_sli_pktx_slist_baoff_dbell_t;
+
+/**
+ * cvmx_sli_pkt#_slist_fifo_rsize
+ *
+ * This register contains the number of scatter pointer pairs in the scatter list.
+ *
+ */
+union cvmx_sli_pktx_slist_fifo_rsize {
+	u64 u64;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s {
+		u64 reserved_32_63 : 32;
+		u64 rsize : 32;
+	} s;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn61xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn63xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn63xxp1;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn66xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn68xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cn68xxp1;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx {
+		u64 reserved_63_32 : 32;
+		u64 rsize : 32;
+	} cn70xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx cn70xxp1;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx cn73xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx cn78xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx cn78xxp1;
+	struct cvmx_sli_pktx_slist_fifo_rsize_s cnf71xx;
+	struct cvmx_sli_pktx_slist_fifo_rsize_cn70xx cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_slist_fifo_rsize cvmx_sli_pktx_slist_fifo_rsize_t;
+
+/**
+ * cvmx_sli_pkt#_vf_int_sum
+ *
+ * This register contains summary interrupt bits for a VF. A VF read of this register
+ * for any of its 8 rings will return the same 8-bit summary for packet input, packet
+ * output and mailbox interrupts. If a PF reads this register it will return 0x0.
+ */
+union cvmx_sli_pktx_vf_int_sum {
+	u64 u64;
+	struct cvmx_sli_pktx_vf_int_sum_s {
+		u64 reserved_40_63 : 24;
+		u64 mbox : 8;
+		u64 reserved_24_31 : 8;
+		u64 pkt_out : 8;
+		u64 reserved_8_15 : 8;
+		u64 pkt_in : 8;
+	} s;
+	struct cvmx_sli_pktx_vf_int_sum_s cn73xx;
+	struct cvmx_sli_pktx_vf_int_sum_s cn78xx;
+	struct cvmx_sli_pktx_vf_int_sum_s cnf75xx;
+};
+
+typedef union cvmx_sli_pktx_vf_int_sum cvmx_sli_pktx_vf_int_sum_t;
+
+/**
+ * cvmx_sli_pkt#_vf_sig
+ *
+ * This register is used to signal between the PF and VF. These 64 registers are indexed by VF number.
+ *
+ */
+union cvmx_sli_pktx_vf_sig {
+	u64 u64;
+	struct cvmx_sli_pktx_vf_sig_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pktx_vf_sig_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pktx_vf_sig cvmx_sli_pktx_vf_sig_t;
+
+/**
+ * cvmx_sli_pkt_bist_status
+ *
+ * This is the built-in self-test (BIST) status register. Each bit is the BIST result of an
+ * individual memory (per bit, 0 = pass and 1 = fail).
+ */
+union cvmx_sli_pkt_bist_status {
+	u64 u64;
+	struct cvmx_sli_pkt_bist_status_s {
+		u64 reserved_22_63 : 42;
+		u64 bist : 22;
+	} s;
+	struct cvmx_sli_pkt_bist_status_s cn73xx;
+	struct cvmx_sli_pkt_bist_status_s cn78xx;
+	struct cvmx_sli_pkt_bist_status_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_bist_status cvmx_sli_pkt_bist_status_t;
+
+/**
+ * cvmx_sli_pkt_cnt_int
+ *
+ * This register specifies which output packet rings are interrupting because of packet counters.
+ * A bit set in this interrupt register will set a corresponding bit in SLI_PKT_INT and can
+ * also cause SLI_MAC()_PF()_INT_SUM[PCNT] to be set if SLI_PKT()_OUTPUT_CONTROL[CENB] is set.
+ * When read by a function, this register informs which rings owned by the function (0 to N,
+ * N as large as 63) have this interrupt pending.
+ */
+union cvmx_sli_pkt_cnt_int {
+	u64 u64;
+	struct cvmx_sli_pkt_cnt_int_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_sli_pkt_cnt_int_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 port : 32;
+	} cn61xx;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn63xx;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn63xxp1;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn66xx;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn68xx;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn68xxp1;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn70xx;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cn70xxp1;
+	struct cvmx_sli_pkt_cnt_int_cn73xx {
+		u64 ring : 64;
+	} cn73xx;
+	struct cvmx_sli_pkt_cnt_int_cn73xx cn78xx;
+	struct cvmx_sli_pkt_cnt_int_cn73xx cn78xxp1;
+	struct cvmx_sli_pkt_cnt_int_cn61xx cnf71xx;
+	struct cvmx_sli_pkt_cnt_int_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_cnt_int cvmx_sli_pkt_cnt_int_t;
+
+/**
+ * cvmx_sli_pkt_cnt_int_enb
+ *
+ * Enable bits for the packet rings that are interrupting because of packet counters.
+ *
+ */
+union cvmx_sli_pkt_cnt_int_enb {
+	u64 u64;
+	struct cvmx_sli_pkt_cnt_int_enb_s {
+		u64 reserved_32_63 : 32;
+		u64 port : 32;
+	} s;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn61xx;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn63xx;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn63xxp1;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn66xx;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn68xx;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn68xxp1;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn70xx;
+	struct cvmx_sli_pkt_cnt_int_enb_s cn70xxp1;
+	struct cvmx_sli_pkt_cnt_int_enb_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_cnt_int_enb cvmx_sli_pkt_cnt_int_enb_t;
+
+/**
+ * cvmx_sli_pkt_ctl
+ *
+ * Control for packets.
+ *
+ */
+union cvmx_sli_pkt_ctl {
+	u64 u64;
+	struct cvmx_sli_pkt_ctl_s {
+		u64 reserved_5_63 : 59;
+		u64 ring_en : 1;
+		u64 pkt_bp : 4;
+	} s;
+	struct cvmx_sli_pkt_ctl_s cn61xx;
+	struct cvmx_sli_pkt_ctl_s cn63xx;
+	struct cvmx_sli_pkt_ctl_s cn63xxp1;
+	struct cvmx_sli_pkt_ctl_s cn66xx;
+	struct cvmx_sli_pkt_ctl_s cn68xx;
+	struct cvmx_sli_pkt_ctl_s cn68xxp1;
+	struct cvmx_sli_pkt_ctl_s cn70xx;
+	struct cvmx_sli_pkt_ctl_s cn70xxp1;
+	struct cvmx_sli_pkt_ctl_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_ctl cvmx_sli_pkt_ctl_t;
+
+/**
+ * cvmx_sli_pkt_data_out_es
+ *
+ * The Endian Swap for writing Data Out.
+ *
+ */
+union cvmx_sli_pkt_data_out_es {
+	u64 u64;
+	struct cvmx_sli_pkt_data_out_es_s {
+		u64 es : 64;
+	} s;
+	struct cvmx_sli_pkt_data_out_es_s cn61xx;
+	struct cvmx_sli_pkt_data_out_es_s cn63xx;
+	struct cvmx_sli_pkt_data_out_es_s cn63xxp1;
+	struct cvmx_sli_pkt_data_out_es_s cn66xx;
+	struct cvmx_sli_pkt_data_out_es_s cn68xx;
+	struct cvmx_sli_pkt_data_out_es_s cn68xxp1;
+	struct cvmx_sli_pkt_data_out_es_s cn70xx;
+	struct cvmx_sli_pkt_data_out_es_s cn70xxp1;
+	struct cvmx_sli_pkt_data_out_es_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_data_out_es cvmx_sli_pkt_data_out_es_t;
+
+/**
+ * cvmx_sli_pkt_data_out_ns
+ *
+ * The NS field for the TLP when writing packet data.
+ *
+ */
+union cvmx_sli_pkt_data_out_ns {
+	u64 u64;
+	struct cvmx_sli_pkt_data_out_ns_s {
+		u64 reserved_32_63 : 32;
+		u64 nsr : 32;
+	} s;
+	struct cvmx_sli_pkt_data_out_ns_s cn61xx;
+	struct cvmx_sli_pkt_data_out_ns_s cn63xx;
+	struct cvmx_sli_pkt_data_out_ns_s cn63xxp1;
+	struct cvmx_sli_pkt_data_out_ns_s cn66xx;
+	struct cvmx_sli_pkt_data_out_ns_s cn68xx;
+	struct cvmx_sli_pkt_data_out_ns_s cn68xxp1;
+	struct cvmx_sli_pkt_data_out_ns_s cn70xx;
+	struct cvmx_sli_pkt_data_out_ns_s cn70xxp1;
+	struct cvmx_sli_pkt_data_out_ns_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_data_out_ns cvmx_sli_pkt_data_out_ns_t;
+
+/**
+ * cvmx_sli_pkt_data_out_ror
+ *
+ * The ROR field for the TLP when writing Packet Data.
+ *
+ */
+union cvmx_sli_pkt_data_out_ror {
+	u64 u64;
+	struct cvmx_sli_pkt_data_out_ror_s {
+		u64 reserved_32_63 : 32;
+		u64 ror : 32;
+	} s;
+	struct cvmx_sli_pkt_data_out_ror_s cn61xx;
+	struct cvmx_sli_pkt_data_out_ror_s cn63xx;
+	struct cvmx_sli_pkt_data_out_ror_s cn63xxp1;
+	struct cvmx_sli_pkt_data_out_ror_s cn66xx;
+	struct cvmx_sli_pkt_data_out_ror_s cn68xx;
+	struct cvmx_sli_pkt_data_out_ror_s cn68xxp1;
+	struct cvmx_sli_pkt_data_out_ror_s cn70xx;
+	struct cvmx_sli_pkt_data_out_ror_s cn70xxp1;
+	struct cvmx_sli_pkt_data_out_ror_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_data_out_ror cvmx_sli_pkt_data_out_ror_t;
+
+/**
+ * cvmx_sli_pkt_dpaddr
+ *
+ * Used to determine the address and attributes for packet data writes.
+ *
+ */
+union cvmx_sli_pkt_dpaddr {
+	u64 u64;
+	struct cvmx_sli_pkt_dpaddr_s {
+		u64 reserved_32_63 : 32;
+		u64 dptr : 32;
+	} s;
+	struct cvmx_sli_pkt_dpaddr_s cn61xx;
+	struct cvmx_sli_pkt_dpaddr_s cn63xx;
+	struct cvmx_sli_pkt_dpaddr_s cn63xxp1;
+	struct cvmx_sli_pkt_dpaddr_s cn66xx;
+	struct cvmx_sli_pkt_dpaddr_s cn68xx;
+	struct cvmx_sli_pkt_dpaddr_s cn68xxp1;
+	struct cvmx_sli_pkt_dpaddr_s cn70xx;
+	struct cvmx_sli_pkt_dpaddr_s cn70xxp1;
+	struct cvmx_sli_pkt_dpaddr_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_dpaddr cvmx_sli_pkt_dpaddr_t;
+
+/**
+ * cvmx_sli_pkt_gbl_control
+ *
+ * This register contains control bits that affect all packet rings.
+ *
+ */
+union cvmx_sli_pkt_gbl_control {
+	u64 u64;
+	struct cvmx_sli_pkt_gbl_control_s {
+		u64 reserved_32_63 : 32;
+		u64 qtime : 16;
+		u64 reserved_14_15 : 2;
+		u64 bpkind : 6;
+		u64 reserved_4_7 : 4;
+		u64 pkpfval : 1;
+		u64 bpflr_d : 1;
+		u64 noptr_d : 1;
+		u64 picnt_d : 1;
+	} s;
+	struct cvmx_sli_pkt_gbl_control_s cn73xx;
+	struct cvmx_sli_pkt_gbl_control_s cn78xx;
+	struct cvmx_sli_pkt_gbl_control_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_gbl_control cvmx_sli_pkt_gbl_control_t;
+
+/**
+ * cvmx_sli_pkt_in_bp
+ *
+ * Which input rings have backpressure applied.
+ *
+ */
+union cvmx_sli_pkt_in_bp {
+	u64 u64;
+	struct cvmx_sli_pkt_in_bp_s {
+		u64 reserved_32_63 : 32;
+		u64 bp : 32;
+	} s;
+	struct cvmx_sli_pkt_in_bp_s cn61xx;
+	struct cvmx_sli_pkt_in_bp_s cn63xx;
+	struct cvmx_sli_pkt_in_bp_s cn63xxp1;
+	struct cvmx_sli_pkt_in_bp_s cn66xx;
+	struct cvmx_sli_pkt_in_bp_s cn70xx;
+	struct cvmx_sli_pkt_in_bp_s cn70xxp1;
+	struct cvmx_sli_pkt_in_bp_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_in_bp cvmx_sli_pkt_in_bp_t;
+
+/**
+ * cvmx_sli_pkt_in_done#_cnts
+ *
+ * This register contains counters for instructions completed on input rings.
+ *
+ */
+union cvmx_sli_pkt_in_donex_cnts {
+	u64 u64;
+	struct cvmx_sli_pkt_in_donex_cnts_s {
+		u64 po_int : 1;
+		u64 pi_int : 1;
+		u64 mbox_int : 1;
+		u64 resend : 1;
+		u64 reserved_49_59 : 11;
+		u64 cint_enb : 1;
+		u64 wmark : 16;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} cn61xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cn63xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cn63xxp1;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cn66xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cn68xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cn68xxp1;
+	struct cvmx_sli_pkt_in_donex_cnts_cn70xx {
+		u64 reserved_63_32 : 32;
+		u64 cnt : 32;
+	} cn70xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn70xx cn70xxp1;
+	struct cvmx_sli_pkt_in_donex_cnts_s cn73xx;
+	struct cvmx_sli_pkt_in_donex_cnts_s cn78xx;
+	struct cvmx_sli_pkt_in_donex_cnts_cn78xxp1 {
+		u64 po_int : 1;
+		u64 pi_int : 1;
+		u64 reserved_61_49 : 13;
+		u64 cint_enb : 1;
+		u64 wmark : 16;
+		u64 cnt : 32;
+	} cn78xxp1;
+	struct cvmx_sli_pkt_in_donex_cnts_cn61xx cnf71xx;
+	struct cvmx_sli_pkt_in_donex_cnts_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_in_donex_cnts cvmx_sli_pkt_in_donex_cnts_t;
+
+/**
+ * cvmx_sli_pkt_in_instr_counts
+ *
+ * This register keeps track of the number of instructions read into the FIFO and
+ * packets sent to PKI. This register is PF-only.
+ */
+union cvmx_sli_pkt_in_instr_counts {
+	u64 u64;
+	struct cvmx_sli_pkt_in_instr_counts_s {
+		u64 wr_cnt : 32;
+		u64 rd_cnt : 32;
+	} s;
+	struct cvmx_sli_pkt_in_instr_counts_s cn61xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn63xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn63xxp1;
+	struct cvmx_sli_pkt_in_instr_counts_s cn66xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn68xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn68xxp1;
+	struct cvmx_sli_pkt_in_instr_counts_s cn70xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn70xxp1;
+	struct cvmx_sli_pkt_in_instr_counts_s cn73xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn78xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cn78xxp1;
+	struct cvmx_sli_pkt_in_instr_counts_s cnf71xx;
+	struct cvmx_sli_pkt_in_instr_counts_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_in_instr_counts cvmx_sli_pkt_in_instr_counts_t;
+
+/**
+ * cvmx_sli_pkt_in_int
+ *
+ * This register specifies which input packet rings are interrupting because of done counts.
+ * A bit set in this interrupt register will set a corresponding bit in SLI_PKT_INT which
+ * can cause a MSI-X interrupt.  When read by a function, this register informs which rings
+ * owned by the function (0 to N, N as large as 63) have this interrupt pending.
+ * SLI_PKT_IN_INT conditions can cause MSI-X interrupts, but do not cause any
+ * SLI_MAC()_PF()_INT_SUM
+ * bit to be set, and cannot cause INTA/B/C/D or MSI interrupts.
+ */
+union cvmx_sli_pkt_in_int {
+	u64 u64;
+	struct cvmx_sli_pkt_in_int_s {
+		u64 ring : 64;
+	} s;
+	struct cvmx_sli_pkt_in_int_s cn73xx;
+	struct cvmx_sli_pkt_in_int_s cn78xx;
+	struct cvmx_sli_pkt_in_int_s cn78xxp1;
+	struct cvmx_sli_pkt_in_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_in_int cvmx_sli_pkt_in_int_t;
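+
+/*
+ * Service-loop sketch: each set bit in [RING] is an input ring whose done
+ * count crossed its threshold; a function-level read already masks the
+ * result down to the rings that function owns. Assumes cvmx_read_csr() and
+ * the CVMX_SLI_PKT_IN_INT address macro.
+ */
+static inline void example_service_pkt_in_rings(void (*handler)(int ring))
+{
+	cvmx_sli_pkt_in_int_t in_int;
+	int ring;
+
+	in_int.u64 = cvmx_read_csr(CVMX_SLI_PKT_IN_INT);
+	for (ring = 0; ring < 64; ring++)
+		if ((in_int.s.ring >> ring) & 1)
+			handler(ring);
+}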
+
+/**
+ * cvmx_sli_pkt_in_jabber
+ *
+ * Register to set the size limit on SLI packet input packets.
+ *
+ */
+union cvmx_sli_pkt_in_jabber {
+	u64 u64;
+	struct cvmx_sli_pkt_in_jabber_s {
+		u64 reserved_32_63 : 32;
+		u64 size : 32;
+	} s;
+	struct cvmx_sli_pkt_in_jabber_s cn73xx;
+	struct cvmx_sli_pkt_in_jabber_s cn78xx;
+	struct cvmx_sli_pkt_in_jabber_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_in_jabber cvmx_sli_pkt_in_jabber_t;
+
+/**
+ * cvmx_sli_pkt_in_pcie_port
+ *
+ * Assigns Packet Input rings to MAC ports.
+ *
+ */
+union cvmx_sli_pkt_in_pcie_port {
+	u64 u64;
+	struct cvmx_sli_pkt_in_pcie_port_s {
+		u64 pp : 64;
+	} s;
+	struct cvmx_sli_pkt_in_pcie_port_s cn61xx;
+	struct cvmx_sli_pkt_in_pcie_port_s cn63xx;
+	struct cvmx_sli_pkt_in_pcie_port_s cn63xxp1;
+	struct cvmx_sli_pkt_in_pcie_port_s cn66xx;
+	struct cvmx_sli_pkt_in_pcie_port_s cn68xx;
+	struct cvmx_sli_pkt_in_pcie_port_s cn68xxp1;
+	struct cvmx_sli_pkt_in_pcie_port_s cn70xx;
+	struct cvmx_sli_pkt_in_pcie_port_s cn70xxp1;
+	struct cvmx_sli_pkt_in_pcie_port_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_in_pcie_port cvmx_sli_pkt_in_pcie_port_t;
+
+/**
+ * cvmx_sli_pkt_input_control
+ *
+ * Control for reads for gather list and instructions.
+ *
+ */
+union cvmx_sli_pkt_input_control {
+	u64 u64;
+	struct cvmx_sli_pkt_input_control_s {
+		u64 prd_erst : 1;
+		u64 prd_rds : 7;
+		u64 gii_erst : 1;
+		u64 gii_rds : 7;
+		u64 reserved_41_47 : 7;
+		u64 prc_idle : 1;
+		u64 reserved_24_39 : 16;
+		u64 pin_rst : 1;
+		u64 pkt_rr : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} s;
+	struct cvmx_sli_pkt_input_control_s cn61xx;
+	struct cvmx_sli_pkt_input_control_cn63xx {
+		u64 reserved_23_63 : 41;
+		u64 pkt_rr : 1;
+		u64 pbp_dhi : 13;
+		u64 d_nsr : 1;
+		u64 d_esr : 2;
+		u64 d_ror : 1;
+		u64 use_csr : 1;
+		u64 nsr : 1;
+		u64 esr : 2;
+		u64 ror : 1;
+	} cn63xx;
+	struct cvmx_sli_pkt_input_control_cn63xx cn63xxp1;
+	struct cvmx_sli_pkt_input_control_s cn66xx;
+	struct cvmx_sli_pkt_input_control_s cn68xx;
+	struct cvmx_sli_pkt_input_control_s cn68xxp1;
+	struct cvmx_sli_pkt_input_control_s cn70xx;
+	struct cvmx_sli_pkt_input_control_s cn70xxp1;
+	struct cvmx_sli_pkt_input_control_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_input_control cvmx_sli_pkt_input_control_t;
+
+/**
+ * cvmx_sli_pkt_instr_enb
+ *
+ * Multi-ring instruction input enable register. This register is PF-only.
+ *
+ */
+union cvmx_sli_pkt_instr_enb {
+	u64 u64;
+	struct cvmx_sli_pkt_instr_enb_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_pkt_instr_enb_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 enb : 32;
+	} cn61xx;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn63xx;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn63xxp1;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn66xx;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn68xx;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn68xxp1;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn70xx;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cn70xxp1;
+	struct cvmx_sli_pkt_instr_enb_s cn78xxp1;
+	struct cvmx_sli_pkt_instr_enb_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_instr_enb cvmx_sli_pkt_instr_enb_t;
+
+/**
+ * cvmx_sli_pkt_instr_rd_size
+ *
+ * The number of instructions allowed to be read at one time.
+ *
+ */
+union cvmx_sli_pkt_instr_rd_size {
+	u64 u64;
+	struct cvmx_sli_pkt_instr_rd_size_s {
+		u64 rdsize : 64;
+	} s;
+	struct cvmx_sli_pkt_instr_rd_size_s cn61xx;
+	struct cvmx_sli_pkt_instr_rd_size_s cn63xx;
+	struct cvmx_sli_pkt_instr_rd_size_s cn63xxp1;
+	struct cvmx_sli_pkt_instr_rd_size_s cn66xx;
+	struct cvmx_sli_pkt_instr_rd_size_s cn68xx;
+	struct cvmx_sli_pkt_instr_rd_size_s cn68xxp1;
+	struct cvmx_sli_pkt_instr_rd_size_s cn70xx;
+	struct cvmx_sli_pkt_instr_rd_size_s cn70xxp1;
+	struct cvmx_sli_pkt_instr_rd_size_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_instr_rd_size cvmx_sli_pkt_instr_rd_size_t;
+
+/**
+ * cvmx_sli_pkt_instr_size
+ *
+ * Determines whether instructions are 64 or 32 bytes in size for a Packet-ring.
+ *
+ */
+union cvmx_sli_pkt_instr_size {
+	u64 u64;
+	struct cvmx_sli_pkt_instr_size_s {
+		u64 reserved_32_63 : 32;
+		u64 is_64b : 32;
+	} s;
+	struct cvmx_sli_pkt_instr_size_s cn61xx;
+	struct cvmx_sli_pkt_instr_size_s cn63xx;
+	struct cvmx_sli_pkt_instr_size_s cn63xxp1;
+	struct cvmx_sli_pkt_instr_size_s cn66xx;
+	struct cvmx_sli_pkt_instr_size_s cn68xx;
+	struct cvmx_sli_pkt_instr_size_s cn68xxp1;
+	struct cvmx_sli_pkt_instr_size_s cn70xx;
+	struct cvmx_sli_pkt_instr_size_s cn70xxp1;
+	struct cvmx_sli_pkt_instr_size_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_instr_size cvmx_sli_pkt_instr_size_t;
+
+/**
+ * cvmx_sli_pkt_int
+ *
+ * This register combines the SLI_PKT_CNT_INT, SLI_PKT_TIME_INT and SLI_PKT_IN_INT interrupt
+ * registers. When read by a function, this register indicates which rings owned by the function
+ * (0 to N, N as large as 63) have an interrupt pending.
+ */
+union cvmx_sli_pkt_int {
+	u64 u64;
+	struct cvmx_sli_pkt_int_s {
+		u64 ring : 64;
+	} s;
+	struct cvmx_sli_pkt_int_s cn73xx;
+	struct cvmx_sli_pkt_int_s cn78xx;
+	struct cvmx_sli_pkt_int_s cn78xxp1;
+	struct cvmx_sli_pkt_int_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_int cvmx_sli_pkt_int_t;
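+
+/*
+ * Illustrative sketch, not part of the imported header: walking the
+ * per-ring interrupt-pending bits of SLI_PKT_INT described above.
+ * Assumes the usual cvmx_read_csr() accessor and a CVMX_SLI_PKT_INT
+ * address macro from earlier in this file; handle_ring() is a
+ * hypothetical callback.
+ */
+static inline void sli_pkt_int_scan_example(void (*handle_ring)(int ring))
+{
+	cvmx_sli_pkt_int_t pkt_int;
+	int ring;
+
+	pkt_int.u64 = cvmx_read_csr(CVMX_SLI_PKT_INT);
+	for (ring = 0; ring < 64; ring++)
+		if (pkt_int.s.ring & (1ull << ring))
+			handle_ring(ring);	/* one bit per pending ring */
+}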
+
+/**
+ * cvmx_sli_pkt_int_levels
+ *
+ * SLI_PKT_INT_LEVELS = SLI's Packet Interrupt Levels
+ * Output packet interrupt levels.
+ */
+union cvmx_sli_pkt_int_levels {
+	u64 u64;
+	struct cvmx_sli_pkt_int_levels_s {
+		u64 reserved_54_63 : 10;
+		u64 time : 22;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sli_pkt_int_levels_s cn61xx;
+	struct cvmx_sli_pkt_int_levels_s cn63xx;
+	struct cvmx_sli_pkt_int_levels_s cn63xxp1;
+	struct cvmx_sli_pkt_int_levels_s cn66xx;
+	struct cvmx_sli_pkt_int_levels_s cn68xx;
+	struct cvmx_sli_pkt_int_levels_s cn68xxp1;
+	struct cvmx_sli_pkt_int_levels_s cn70xx;
+	struct cvmx_sli_pkt_int_levels_s cn70xxp1;
+	struct cvmx_sli_pkt_int_levels_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_int_levels cvmx_sli_pkt_int_levels_t;
+
+/**
+ * cvmx_sli_pkt_iptr
+ *
+ * Controls using the Info-Pointer to store length and data.
+ *
+ */
+union cvmx_sli_pkt_iptr {
+	u64 u64;
+	struct cvmx_sli_pkt_iptr_s {
+		u64 reserved_32_63 : 32;
+		u64 iptr : 32;
+	} s;
+	struct cvmx_sli_pkt_iptr_s cn61xx;
+	struct cvmx_sli_pkt_iptr_s cn63xx;
+	struct cvmx_sli_pkt_iptr_s cn63xxp1;
+	struct cvmx_sli_pkt_iptr_s cn66xx;
+	struct cvmx_sli_pkt_iptr_s cn68xx;
+	struct cvmx_sli_pkt_iptr_s cn68xxp1;
+	struct cvmx_sli_pkt_iptr_s cn70xx;
+	struct cvmx_sli_pkt_iptr_s cn70xxp1;
+	struct cvmx_sli_pkt_iptr_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_iptr cvmx_sli_pkt_iptr_t;
+
+/**
+ * cvmx_sli_pkt_mac#_pf#_rinfo
+ *
+ * This register sets the total number and starting number of rings for a given MAC and PF
+ * combination. Indexed by (MAC index) SLI_PORT_E. In SR-IOV mode, SLI_PKT_MAC()_PF()_RINFO[RPVF]
+ * and SLI_PKT_MAC()_PF()_RINFO[NVFS] must be non-zero and determine which rings the PFs and
+ * VFs own.
+ *
+ * An individual VF will own SLI_PKT_MAC()_PF()_RINFO[RPVF] number of rings.
+ *
+ * A PF will own the rings from ((SLI_PKT_MAC()_PF()_RINFO[SRN] +
+ * (SLI_PKT_MAC()_PF()_RINFO[RPVF] * SLI_PKT_MAC()_PF()_RINFO[NVFS]))
+ * to (SLI_PKT_MAC()_PF()_RINFO[SRN] + (SLI_PKT_MAC()_PF()_RINFO[TRS] -
+ * 1)). SLI_PKT()_INPUT_CONTROL[PVF_NUM] must be written to values that
+ * correlate with the fields in this register.
+ *
+ * e.g. Given:
+ * _ SLI_PKT_MAC0_PF0_RINFO[SRN] = 32,
+ * _ SLI_PKT_MAC0_PF0_RINFO[TRS] = 32,
+ * _ SLI_PKT_MAC0_PF0_RINFO[RPVF] = 4,
+ * _ SLI_PKT_MAC0_PF0_RINFO[NVFS] = 7:
+ * _ rings owned by VF1: 32,33,34,35
+ * _ rings owned by VF2: 36,37,38,39
+ * _ rings owned by VF3: 40,41,42,43
+ * _ rings owned by VF4: 44,45,46,47
+ * _ rings owned by VF5: 48,49,50,51
+ * _ rings owned by VF6: 52,53,54,55
+ * _ rings owned by VF7: 56,57,58,59
+ * _ rings owned by PF:  60,61,62,63
+ */
+union cvmx_sli_pkt_macx_pfx_rinfo {
+	u64 u64;
+	struct cvmx_sli_pkt_macx_pfx_rinfo_s {
+		u64 reserved_55_63 : 9;
+		u64 nvfs : 7;
+		u64 reserved_40_47 : 8;
+		u64 rpvf : 8;
+		u64 reserved_24_31 : 8;
+		u64 trs : 8;
+		u64 reserved_7_15 : 9;
+		u64 srn : 7;
+	} s;
+	struct cvmx_sli_pkt_macx_pfx_rinfo_s cn73xx;
+	struct cvmx_sli_pkt_macx_pfx_rinfo_s cn78xx;
+	struct cvmx_sli_pkt_macx_pfx_rinfo_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_macx_pfx_rinfo cvmx_sli_pkt_macx_pfx_rinfo_t;
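+
+/*
+ * Illustrative sketch, not part of the imported header: deriving ring
+ * ownership from a SLI_PKT_MAC()_PF()_RINFO value, mirroring the worked
+ * example in the comment above (SRN=32, TRS=32, RPVF=4, NVFS=7 gives
+ * VF1 rings 32-35 and PF rings 60-63). The vf argument is 1-based,
+ * matching that example; no CSR access is assumed here.
+ */
+static inline void sli_rinfo_vf_rings(cvmx_sli_pkt_macx_pfx_rinfo_t rinfo,
+				      unsigned int vf, unsigned int *first,
+				      unsigned int *last)
+{
+	/* VFn owns RPVF consecutive rings starting at SRN + (n - 1) * RPVF */
+	*first = rinfo.s.srn + (vf - 1) * rinfo.s.rpvf;
+	*last = *first + rinfo.s.rpvf - 1;
+}
+
+static inline void sli_rinfo_pf_rings(cvmx_sli_pkt_macx_pfx_rinfo_t rinfo,
+				      unsigned int *first, unsigned int *last)
+{
+	/* The PF owns whatever remains after the NVFS VFs, up to TRS rings */
+	*first = rinfo.s.srn + rinfo.s.rpvf * rinfo.s.nvfs;
+	*last = rinfo.s.srn + rinfo.s.trs - 1;
+}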
+
+/**
+ * cvmx_sli_pkt_mac#_rinfo
+ *
+ * This register sets the total number and starting number of rings used by the MAC.
+ * This register is PF-only.
+ */
+union cvmx_sli_pkt_macx_rinfo {
+	u64 u64;
+	struct cvmx_sli_pkt_macx_rinfo_s {
+		u64 reserved_40_63 : 24;
+		u64 rpvf : 8;
+		u64 reserved_24_31 : 8;
+		u64 trs : 8;
+		u64 reserved_7_15 : 9;
+		u64 srn : 7;
+	} s;
+	struct cvmx_sli_pkt_macx_rinfo_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_macx_rinfo cvmx_sli_pkt_macx_rinfo_t;
+
+/**
+ * cvmx_sli_pkt_mac0_sig0
+ *
+ * This register is used to signal between PF/VF. This register can be R/W by the PF from MAC0
+ * and any VF.
+ */
+union cvmx_sli_pkt_mac0_sig0 {
+	u64 u64;
+	struct cvmx_sli_pkt_mac0_sig0_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pkt_mac0_sig0_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_mac0_sig0 cvmx_sli_pkt_mac0_sig0_t;
+
+/**
+ * cvmx_sli_pkt_mac0_sig1
+ *
+ * This register is used to signal between PF/VF. This register can be R/W by the PF from MAC0
+ * and any VF.
+ */
+union cvmx_sli_pkt_mac0_sig1 {
+	u64 u64;
+	struct cvmx_sli_pkt_mac0_sig1_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pkt_mac0_sig1_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_mac0_sig1 cvmx_sli_pkt_mac0_sig1_t;
+
+/**
+ * cvmx_sli_pkt_mac1_sig0
+ *
+ * This register is used to signal between PF/VF. This register can be R/W by the PF from MAC1
+ * and any VF.
+ */
+union cvmx_sli_pkt_mac1_sig0 {
+	u64 u64;
+	struct cvmx_sli_pkt_mac1_sig0_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pkt_mac1_sig0_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_mac1_sig0 cvmx_sli_pkt_mac1_sig0_t;
+
+/**
+ * cvmx_sli_pkt_mac1_sig1
+ *
+ * This register is used to signal between PF/VF. This register can be R/W by the PF from MAC1
+ * and any VF.
+ */
+union cvmx_sli_pkt_mac1_sig1 {
+	u64 u64;
+	struct cvmx_sli_pkt_mac1_sig1_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_pkt_mac1_sig1_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_mac1_sig1 cvmx_sli_pkt_mac1_sig1_t;
+
+/**
+ * cvmx_sli_pkt_mem_ctl
+ *
+ * This register controls the ECC of the SLI packet memories.
+ *
+ */
+union cvmx_sli_pkt_mem_ctl {
+	u64 u64;
+	struct cvmx_sli_pkt_mem_ctl_s {
+		u64 reserved_48_63 : 16;
+		u64 msix_mbox_fs : 2;
+		u64 msix_mbox_ecc : 1;
+		u64 reserved_36_44 : 9;
+		u64 pos_fs : 2;
+		u64 pos_ecc : 1;
+		u64 pinm_fs : 2;
+		u64 pinm_ecc : 1;
+		u64 pind_fs : 2;
+		u64 pind_ecc : 1;
+		u64 point_fs : 2;
+		u64 point_ecc : 1;
+		u64 slist_fs : 2;
+		u64 slist_ecc : 1;
+		u64 pop1_fs : 2;
+		u64 pop1_ecc : 1;
+		u64 pop0_fs : 2;
+		u64 pop0_ecc : 1;
+		u64 pfp_fs : 2;
+		u64 pfp_ecc : 1;
+		u64 pbn_fs : 2;
+		u64 pbn_ecc : 1;
+		u64 pdf_fs : 2;
+		u64 pdf_ecc : 1;
+		u64 psf_fs : 2;
+		u64 psf_ecc : 1;
+		u64 poi_fs : 2;
+		u64 poi_ecc : 1;
+	} s;
+	struct cvmx_sli_pkt_mem_ctl_cn73xx {
+		u64 reserved_48_63 : 16;
+		u64 msix_mbox_fs : 2;
+		u64 msix_mbox_ecc : 1;
+		u64 msix_data_fs : 2;
+		u64 msix_data_ecc : 1;
+		u64 msix_addr_fs : 2;
+		u64 msix_addr_ecc : 1;
+		u64 pof_fs : 2;
+		u64 pof_ecc : 1;
+		u64 pos_fs : 2;
+		u64 pos_ecc : 1;
+		u64 pinm_fs : 2;
+		u64 pinm_ecc : 1;
+		u64 pind_fs : 2;
+		u64 pind_ecc : 1;
+		u64 point_fs : 2;
+		u64 point_ecc : 1;
+		u64 slist_fs : 2;
+		u64 slist_ecc : 1;
+		u64 pop1_fs : 2;
+		u64 pop1_ecc : 1;
+		u64 pop0_fs : 2;
+		u64 pop0_ecc : 1;
+		u64 pfp_fs : 2;
+		u64 pfp_ecc : 1;
+		u64 pbn_fs : 2;
+		u64 pbn_ecc : 1;
+		u64 pdf_fs : 2;
+		u64 pdf_ecc : 1;
+		u64 psf_fs : 2;
+		u64 psf_ecc : 1;
+		u64 poi_fs : 2;
+		u64 poi_ecc : 1;
+	} cn73xx;
+	struct cvmx_sli_pkt_mem_ctl_cn73xx cn78xx;
+	struct cvmx_sli_pkt_mem_ctl_cn78xxp1 {
+		u64 reserved_44_63 : 20;
+		u64 msid_fs : 2;
+		u64 msia_fs : 2;
+		u64 msi_ecc : 1;
+		u64 posi_fs : 2;
+		u64 posi_ecc : 1;
+		u64 pos_fs : 2;
+		u64 pos_ecc : 1;
+		u64 pinm_fs : 2;
+		u64 pinm_ecc : 1;
+		u64 pind_fs : 2;
+		u64 pind_ecc : 1;
+		u64 point_fs : 2;
+		u64 point_ecc : 1;
+		u64 slist_fs : 2;
+		u64 slist_ecc : 1;
+		u64 pop1_fs : 2;
+		u64 pop1_ecc : 1;
+		u64 pop0_fs : 2;
+		u64 pop0_ecc : 1;
+		u64 pfp_fs : 2;
+		u64 pfp_ecc : 1;
+		u64 pbn_fs : 2;
+		u64 pbn_ecc : 1;
+		u64 pdf_fs : 2;
+		u64 pdf_ecc : 1;
+		u64 psf_fs : 2;
+		u64 psf_ecc : 1;
+		u64 poi_fs : 2;
+		u64 poi_ecc : 1;
+	} cn78xxp1;
+	struct cvmx_sli_pkt_mem_ctl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_mem_ctl cvmx_sli_pkt_mem_ctl_t;
+
+/**
+ * cvmx_sli_pkt_out_bmode
+ *
+ * Control the updating of the SLI_PKT#_CNT register.
+ *
+ */
+union cvmx_sli_pkt_out_bmode {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bmode_s {
+		u64 reserved_32_63 : 32;
+		u64 bmode : 32;
+	} s;
+	struct cvmx_sli_pkt_out_bmode_s cn61xx;
+	struct cvmx_sli_pkt_out_bmode_s cn63xx;
+	struct cvmx_sli_pkt_out_bmode_s cn63xxp1;
+	struct cvmx_sli_pkt_out_bmode_s cn66xx;
+	struct cvmx_sli_pkt_out_bmode_s cn68xx;
+	struct cvmx_sli_pkt_out_bmode_s cn68xxp1;
+	struct cvmx_sli_pkt_out_bmode_s cn70xx;
+	struct cvmx_sli_pkt_out_bmode_s cn70xxp1;
+	struct cvmx_sli_pkt_out_bmode_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_out_bmode cvmx_sli_pkt_out_bmode_t;
+
+/**
+ * cvmx_sli_pkt_out_bp_en
+ *
+ * This register enables sending backpressure to PKO.
+ *
+ */
+union cvmx_sli_pkt_out_bp_en {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bp_en_s {
+		u64 bp_en : 64;
+	} s;
+	struct cvmx_sli_pkt_out_bp_en_cn68xx {
+		u64 reserved_32_63 : 32;
+		u64 bp_en : 32;
+	} cn68xx;
+	struct cvmx_sli_pkt_out_bp_en_cn68xx cn68xxp1;
+	struct cvmx_sli_pkt_out_bp_en_s cn78xxp1;
+};
+
+typedef union cvmx_sli_pkt_out_bp_en cvmx_sli_pkt_out_bp_en_t;
+
+/**
+ * cvmx_sli_pkt_out_bp_en2_w1c
+ *
+ * This register disables sending backpressure to PKO.
+ *
+ */
+union cvmx_sli_pkt_out_bp_en2_w1c {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bp_en2_w1c_s {
+		u64 w1c : 64;
+	} s;
+	struct cvmx_sli_pkt_out_bp_en2_w1c_s cn73xx;
+};
+
+typedef union cvmx_sli_pkt_out_bp_en2_w1c cvmx_sli_pkt_out_bp_en2_w1c_t;
+
+/**
+ * cvmx_sli_pkt_out_bp_en2_w1s
+ *
+ * This register enables sending backpressure to PKO.
+ *
+ */
+union cvmx_sli_pkt_out_bp_en2_w1s {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bp_en2_w1s_s {
+		u64 w1s : 64;
+	} s;
+	struct cvmx_sli_pkt_out_bp_en2_w1s_s cn73xx;
+};
+
+typedef union cvmx_sli_pkt_out_bp_en2_w1s cvmx_sli_pkt_out_bp_en2_w1s_t;
+
+/**
+ * cvmx_sli_pkt_out_bp_en_w1c
+ *
+ * This register disables sending backpressure to PKO.
+ *
+ */
+union cvmx_sli_pkt_out_bp_en_w1c {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bp_en_w1c_s {
+		u64 w1c : 64;
+	} s;
+	struct cvmx_sli_pkt_out_bp_en_w1c_s cn73xx;
+	struct cvmx_sli_pkt_out_bp_en_w1c_s cn78xx;
+	struct cvmx_sli_pkt_out_bp_en_w1c_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_out_bp_en_w1c cvmx_sli_pkt_out_bp_en_w1c_t;
+
+/**
+ * cvmx_sli_pkt_out_bp_en_w1s
+ *
+ * This register enables sending backpressure to PKO.
+ *
+ */
+union cvmx_sli_pkt_out_bp_en_w1s {
+	u64 u64;
+	struct cvmx_sli_pkt_out_bp_en_w1s_s {
+		u64 w1s : 64;
+	} s;
+	struct cvmx_sli_pkt_out_bp_en_w1s_s cn73xx;
+	struct cvmx_sli_pkt_out_bp_en_w1s_s cn78xx;
+	struct cvmx_sli_pkt_out_bp_en_w1s_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_out_bp_en_w1s cvmx_sli_pkt_out_bp_en_w1s_t;
+
+/**
+ * cvmx_sli_pkt_out_enb
+ *
+ * Multi-ring packet output enable register. This register is PF-only.
+ *
+ */
+union cvmx_sli_pkt_out_enb {
+	u64 u64;
+	struct cvmx_sli_pkt_out_enb_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_pkt_out_enb_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 enb : 32;
+	} cn61xx;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn63xx;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn63xxp1;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn66xx;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn68xx;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn68xxp1;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn70xx;
+	struct cvmx_sli_pkt_out_enb_cn61xx cn70xxp1;
+	struct cvmx_sli_pkt_out_enb_s cn78xxp1;
+	struct cvmx_sli_pkt_out_enb_cn61xx cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_out_enb cvmx_sli_pkt_out_enb_t;
+
+/**
+ * cvmx_sli_pkt_output_wmark
+ *
+ * This register sets the value that determines when backpressure is applied to the PKO. When
+ * SLI_PKT()_SLIST_BAOFF_DBELL[DBELL] is less than [WMARK], backpressure is sent to PKO for
+ * the associated channel. This register is PF-only.
+ */
+union cvmx_sli_pkt_output_wmark {
+	u64 u64;
+	struct cvmx_sli_pkt_output_wmark_s {
+		u64 reserved_32_63 : 32;
+		u64 wmark : 32;
+	} s;
+	struct cvmx_sli_pkt_output_wmark_s cn61xx;
+	struct cvmx_sli_pkt_output_wmark_s cn63xx;
+	struct cvmx_sli_pkt_output_wmark_s cn63xxp1;
+	struct cvmx_sli_pkt_output_wmark_s cn66xx;
+	struct cvmx_sli_pkt_output_wmark_s cn68xx;
+	struct cvmx_sli_pkt_output_wmark_s cn68xxp1;
+	struct cvmx_sli_pkt_output_wmark_s cn70xx;
+	struct cvmx_sli_pkt_output_wmark_s cn70xxp1;
+	struct cvmx_sli_pkt_output_wmark_s cn73xx;
+	struct cvmx_sli_pkt_output_wmark_s cn78xx;
+	struct cvmx_sli_pkt_output_wmark_s cn78xxp1;
+	struct cvmx_sli_pkt_output_wmark_s cnf71xx;
+	struct cvmx_sli_pkt_output_wmark_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_output_wmark cvmx_sli_pkt_output_wmark_t;
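+
+/*
+ * Illustrative sketch, not part of the imported header: programming the
+ * PKO backpressure watermark described above. Assumes cvmx_read_csr()/
+ * cvmx_write_csr() and a CVMX_SLI_PKT_OUTPUT_WMARK address macro from
+ * earlier in this file.
+ */
+static inline void sli_set_output_wmark_example(u32 wmark)
+{
+	cvmx_sli_pkt_output_wmark_t reg;
+
+	reg.u64 = cvmx_read_csr(CVMX_SLI_PKT_OUTPUT_WMARK);
+	/* Backpressure is asserted while DBELL < WMARK for a channel */
+	reg.s.wmark = wmark;
+	cvmx_write_csr(CVMX_SLI_PKT_OUTPUT_WMARK, reg.u64);
+}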
+
+/**
+ * cvmx_sli_pkt_pcie_port
+ *
+ * Assigns Packet Ports to MAC ports.
+ *
+ */
+union cvmx_sli_pkt_pcie_port {
+	u64 u64;
+	struct cvmx_sli_pkt_pcie_port_s {
+		u64 pp : 64;
+	} s;
+	struct cvmx_sli_pkt_pcie_port_s cn61xx;
+	struct cvmx_sli_pkt_pcie_port_s cn63xx;
+	struct cvmx_sli_pkt_pcie_port_s cn63xxp1;
+	struct cvmx_sli_pkt_pcie_port_s cn66xx;
+	struct cvmx_sli_pkt_pcie_port_s cn68xx;
+	struct cvmx_sli_pkt_pcie_port_s cn68xxp1;
+	struct cvmx_sli_pkt_pcie_port_s cn70xx;
+	struct cvmx_sli_pkt_pcie_port_s cn70xxp1;
+	struct cvmx_sli_pkt_pcie_port_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_pcie_port cvmx_sli_pkt_pcie_port_t;
+
+/**
+ * cvmx_sli_pkt_pkind_valid
+ *
+ * Enable bits, one per PKIND, that control which PKIND values (specified in the
+ * DPI_PKT_INST_HDR_S[PKIND] DPI packet-instruction field) are allowed to be sent to the PKI.
+ */
+union cvmx_sli_pkt_pkind_valid {
+	u64 u64;
+	struct cvmx_sli_pkt_pkind_valid_s {
+		u64 enb : 64;
+	} s;
+	struct cvmx_sli_pkt_pkind_valid_s cn73xx;
+	struct cvmx_sli_pkt_pkind_valid_s cn78xx;
+	struct cvmx_sli_pkt_pkind_valid_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_pkind_valid cvmx_sli_pkt_pkind_valid_t;
+
+/**
+ * cvmx_sli_pkt_port_in_rst
+ *
+ * SLI_PKT_PORT_IN_RST = SLI Packet Port In Reset
+ * Bit vectors indicating which input and output ring ports are in reset.
+ */
+union cvmx_sli_pkt_port_in_rst {
+	u64 u64;
+	struct cvmx_sli_pkt_port_in_rst_s {
+		u64 in_rst : 32;
+		u64 out_rst : 32;
+	} s;
+	struct cvmx_sli_pkt_port_in_rst_s cn61xx;
+	struct cvmx_sli_pkt_port_in_rst_s cn63xx;
+	struct cvmx_sli_pkt_port_in_rst_s cn63xxp1;
+	struct cvmx_sli_pkt_port_in_rst_s cn66xx;
+	struct cvmx_sli_pkt_port_in_rst_s cn68xx;
+	struct cvmx_sli_pkt_port_in_rst_s cn68xxp1;
+	struct cvmx_sli_pkt_port_in_rst_s cn70xx;
+	struct cvmx_sli_pkt_port_in_rst_s cn70xxp1;
+	struct cvmx_sli_pkt_port_in_rst_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_port_in_rst cvmx_sli_pkt_port_in_rst_t;
+
+/**
+ * cvmx_sli_pkt_ring_rst
+ *
+ * When read by a PF, this register indicates which rings owned by the function (0 to N, N as large
+ * as 63) are in reset. See also SLI_PKT()_INPUT_CONTROL[RST].
+ */
+union cvmx_sli_pkt_ring_rst {
+	u64 u64;
+	struct cvmx_sli_pkt_ring_rst_s {
+		u64 rst : 64;
+	} s;
+	struct cvmx_sli_pkt_ring_rst_s cn73xx;
+	struct cvmx_sli_pkt_ring_rst_s cn78xx;
+	struct cvmx_sli_pkt_ring_rst_s cn78xxp1;
+	struct cvmx_sli_pkt_ring_rst_s cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_ring_rst cvmx_sli_pkt_ring_rst_t;
+
+/**
+ * cvmx_sli_pkt_slist_es
+ *
+ * The Endian Swap for Scatter List Read.
+ *
+ */
+union cvmx_sli_pkt_slist_es {
+	u64 u64;
+	struct cvmx_sli_pkt_slist_es_s {
+		u64 es : 64;
+	} s;
+	struct cvmx_sli_pkt_slist_es_s cn61xx;
+	struct cvmx_sli_pkt_slist_es_s cn63xx;
+	struct cvmx_sli_pkt_slist_es_s cn63xxp1;
+	struct cvmx_sli_pkt_slist_es_s cn66xx;
+	struct cvmx_sli_pkt_slist_es_s cn68xx;
+	struct cvmx_sli_pkt_slist_es_s cn68xxp1;
+	struct cvmx_sli_pkt_slist_es_s cn70xx;
+	struct cvmx_sli_pkt_slist_es_s cn70xxp1;
+	struct cvmx_sli_pkt_slist_es_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_slist_es cvmx_sli_pkt_slist_es_t;
+
+/**
+ * cvmx_sli_pkt_slist_ns
+ *
+ * The NS field for the TLP when fetching Scatter List.
+ *
+ */
+union cvmx_sli_pkt_slist_ns {
+	u64 u64;
+	struct cvmx_sli_pkt_slist_ns_s {
+		u64 reserved_32_63 : 32;
+		u64 nsr : 32;
+	} s;
+	struct cvmx_sli_pkt_slist_ns_s cn61xx;
+	struct cvmx_sli_pkt_slist_ns_s cn63xx;
+	struct cvmx_sli_pkt_slist_ns_s cn63xxp1;
+	struct cvmx_sli_pkt_slist_ns_s cn66xx;
+	struct cvmx_sli_pkt_slist_ns_s cn68xx;
+	struct cvmx_sli_pkt_slist_ns_s cn68xxp1;
+	struct cvmx_sli_pkt_slist_ns_s cn70xx;
+	struct cvmx_sli_pkt_slist_ns_s cn70xxp1;
+	struct cvmx_sli_pkt_slist_ns_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_slist_ns cvmx_sli_pkt_slist_ns_t;
+
+/**
+ * cvmx_sli_pkt_slist_ror
+ *
+ * The ROR field for the TLP when fetching Scatter List.
+ *
+ */
+union cvmx_sli_pkt_slist_ror {
+	u64 u64;
+	struct cvmx_sli_pkt_slist_ror_s {
+		u64 reserved_32_63 : 32;
+		u64 ror : 32;
+	} s;
+	struct cvmx_sli_pkt_slist_ror_s cn61xx;
+	struct cvmx_sli_pkt_slist_ror_s cn63xx;
+	struct cvmx_sli_pkt_slist_ror_s cn63xxp1;
+	struct cvmx_sli_pkt_slist_ror_s cn66xx;
+	struct cvmx_sli_pkt_slist_ror_s cn68xx;
+	struct cvmx_sli_pkt_slist_ror_s cn68xxp1;
+	struct cvmx_sli_pkt_slist_ror_s cn70xx;
+	struct cvmx_sli_pkt_slist_ror_s cn70xxp1;
+	struct cvmx_sli_pkt_slist_ror_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_slist_ror cvmx_sli_pkt_slist_ror_t;
+
+/**
+ * cvmx_sli_pkt_time_int
+ *
+ * This register specifies which output packet rings are interrupting because of packet timers.
+ * A bit set in this interrupt register will set a corresponding bit in SLI_PKT_INT and can
+ * also cause SLI_MAC()_PF()_INT_SUM[PTIME] to be set if SLI_PKT()_OUTPUT_CONTROL[TENB]
+ * is set. When read by a function, this register indicates which rings owned by the function
+ * (0 to N, N as large as 63) have this interrupt pending.
+ */
+union cvmx_sli_pkt_time_int {
+	u64 u64;
+	struct cvmx_sli_pkt_time_int_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_sli_pkt_time_int_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 port : 32;
+	} cn61xx;
+	struct cvmx_sli_pkt_time_int_cn61xx cn63xx;
+	struct cvmx_sli_pkt_time_int_cn61xx cn63xxp1;
+	struct cvmx_sli_pkt_time_int_cn61xx cn66xx;
+	struct cvmx_sli_pkt_time_int_cn61xx cn68xx;
+	struct cvmx_sli_pkt_time_int_cn61xx cn68xxp1;
+	struct cvmx_sli_pkt_time_int_cn61xx cn70xx;
+	struct cvmx_sli_pkt_time_int_cn61xx cn70xxp1;
+	struct cvmx_sli_pkt_time_int_cn73xx {
+		u64 ring : 64;
+	} cn73xx;
+	struct cvmx_sli_pkt_time_int_cn73xx cn78xx;
+	struct cvmx_sli_pkt_time_int_cn73xx cn78xxp1;
+	struct cvmx_sli_pkt_time_int_cn61xx cnf71xx;
+	struct cvmx_sli_pkt_time_int_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_pkt_time_int cvmx_sli_pkt_time_int_t;
+
+/**
+ * cvmx_sli_pkt_time_int_enb
+ *
+ * The packet rings that are interrupting because of packet timers.
+ *
+ */
+union cvmx_sli_pkt_time_int_enb {
+	u64 u64;
+	struct cvmx_sli_pkt_time_int_enb_s {
+		u64 reserved_32_63 : 32;
+		u64 port : 32;
+	} s;
+	struct cvmx_sli_pkt_time_int_enb_s cn61xx;
+	struct cvmx_sli_pkt_time_int_enb_s cn63xx;
+	struct cvmx_sli_pkt_time_int_enb_s cn63xxp1;
+	struct cvmx_sli_pkt_time_int_enb_s cn66xx;
+	struct cvmx_sli_pkt_time_int_enb_s cn68xx;
+	struct cvmx_sli_pkt_time_int_enb_s cn68xxp1;
+	struct cvmx_sli_pkt_time_int_enb_s cn70xx;
+	struct cvmx_sli_pkt_time_int_enb_s cn70xxp1;
+	struct cvmx_sli_pkt_time_int_enb_s cnf71xx;
+};
+
+typedef union cvmx_sli_pkt_time_int_enb cvmx_sli_pkt_time_int_enb_t;
+
+/**
+ * cvmx_sli_port#_pkind
+ *
+ * SLI_PORT[0..31]_PKIND = SLI Port Pkind
+ *
+ * The SLI/DPI supports 32 input rings for fetching input packets. This register maps the input-rings (0-31) to a PKIND.
+ */
+union cvmx_sli_portx_pkind {
+	u64 u64;
+	struct cvmx_sli_portx_pkind_s {
+		u64 reserved_25_63 : 39;
+		u64 rpk_enb : 1;
+		u64 reserved_22_23 : 2;
+		u64 pkindr : 6;
+		u64 reserved_14_15 : 2;
+		u64 bpkind : 6;
+		u64 reserved_6_7 : 2;
+		u64 pkind : 6;
+	} s;
+	struct cvmx_sli_portx_pkind_s cn68xx;
+	struct cvmx_sli_portx_pkind_cn68xxp1 {
+		u64 reserved_14_63 : 50;
+		u64 bpkind : 6;
+		u64 reserved_6_7 : 2;
+		u64 pkind : 6;
+	} cn68xxp1;
+};
+
+typedef union cvmx_sli_portx_pkind cvmx_sli_portx_pkind_t;
+
+/**
+ * cvmx_sli_pp_pkt_csr_control
+ *
+ * This register provides access to SLI packet register space from the cores.
+ * These SLI packet registers include the following:
+ *  SLI_MSIXX_TABLE_ADDR,
+ *  SLI_MSIXX_TABLE_DATA,
+ *  SLI_MSIX_PBA0,
+ *  SLI_MSIX_PBA1,
+ *  SLI_PKTX_INPUT_CONTROL,
+ *  SLI_PKTX_INSTR_BADDR,
+ *  SLI_PKTX_INSTR_BAOFF_DBELL,
+ *  SLI_PKTX_INSTR_FIFO_RSIZE,
+ *  SLI_PKT_IN_DONEX_CNTS,
+ *  SLI_PKTX_OUTPUT_CONTROL,
+ *  SLI_PKTX_OUT_SIZE,
+ *  SLI_PKTX_SLIST_BADDR,
+ *  SLI_PKTX_SLIST_BAOFF_DBELL,
+ *  SLI_PKTX_SLIST_FIFO_RSIZE,
+ *  SLI_PKTX_INT_LEVELS,
+ *  SLI_PKTX_CNTS,
+ *  SLI_PKTX_ERROR_INFO,
+ *  SLI_PKTX_VF_INT_SUM,
+ *  SLI_PKTX_PF_VF_MBOX_SIG,
+ *  SLI_PKTX_MBOX_INT.
+ */
+union cvmx_sli_pp_pkt_csr_control {
+	u64 u64;
+	struct cvmx_sli_pp_pkt_csr_control_s {
+		u64 reserved_18_63 : 46;
+		u64 mac : 2;
+		u64 pvf : 16;
+	} s;
+	struct cvmx_sli_pp_pkt_csr_control_s cn73xx;
+	struct cvmx_sli_pp_pkt_csr_control_s cn78xx;
+	struct cvmx_sli_pp_pkt_csr_control_s cnf75xx;
+};
+
+typedef union cvmx_sli_pp_pkt_csr_control cvmx_sli_pp_pkt_csr_control_t;
+
+/**
+ * cvmx_sli_s2c_end_merge
+ *
+ * Writing this register will cause a merge to end.
+ *
+ */
+union cvmx_sli_s2c_end_merge {
+	u64 u64;
+	struct cvmx_sli_s2c_end_merge_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_sli_s2c_end_merge_s cn73xx;
+	struct cvmx_sli_s2c_end_merge_s cn78xx;
+	struct cvmx_sli_s2c_end_merge_s cn78xxp1;
+	struct cvmx_sli_s2c_end_merge_s cnf75xx;
+};
+
+typedef union cvmx_sli_s2c_end_merge cvmx_sli_s2c_end_merge_t;
+
+/**
+ * cvmx_sli_s2m_port#_ctl
+ *
+ * These registers contain control for access from SLI to a MAC port. Indexed by SLI_PORT_E.
+ * Write operations to these registers are not ordered with write/read operations to the MAC
+ * memory space. To ensure that a write operation has completed, read the register back before
+ * making an access (e.g. to MAC memory space) that requires the value of this register to be
+ * updated.
+ */
+union cvmx_sli_s2m_portx_ctl {
+	u64 u64;
+	struct cvmx_sli_s2m_portx_ctl_s {
+		u64 reserved_7_63 : 57;
+		u64 dvferr : 1;
+		u64 lcl_node : 1;
+		u64 wind_d : 1;
+		u64 bar0_d : 1;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx {
+		u64 reserved_5_63 : 59;
+		u64 wind_d : 1;
+		u64 bar0_d : 1;
+		u64 mrrs : 3;
+	} cn61xx;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn63xx;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn63xxp1;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn66xx;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn68xx;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn68xxp1;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn70xx;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cn70xxp1;
+	struct cvmx_sli_s2m_portx_ctl_cn73xx {
+		u64 reserved_7_63 : 57;
+		u64 dvferr : 1;
+		u64 lcl_node : 1;
+		u64 wind_d : 1;
+		u64 bar0_d : 1;
+		u64 ld_cmd : 2;
+		u64 reserved_0_0 : 1;
+	} cn73xx;
+	struct cvmx_sli_s2m_portx_ctl_cn73xx cn78xx;
+	struct cvmx_sli_s2m_portx_ctl_cn78xxp1 {
+		u64 reserved_6_63 : 58;
+		u64 lcl_node : 1;
+		u64 wind_d : 1;
+		u64 bar0_d : 1;
+		u64 ld_cmd : 2;
+		u64 reserved_0_0 : 1;
+	} cn78xxp1;
+	struct cvmx_sli_s2m_portx_ctl_cn61xx cnf71xx;
+	struct cvmx_sli_s2m_portx_ctl_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_s2m_portx_ctl cvmx_sli_s2m_portx_ctl_t;
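+
+/*
+ * Illustrative sketch, not part of the imported header: the
+ * write-then-read-back ordering rule described above. Assumes
+ * cvmx_read_csr()/cvmx_write_csr() and a CVMX_SLI_S2M_PORTX_CTL(port)
+ * address macro from earlier in this file.
+ */
+static inline void sli_s2m_portx_ctl_write_example(int port, u64 val)
+{
+	cvmx_write_csr(CVMX_SLI_S2M_PORTX_CTL(port), val);
+	/*
+	 * Accesses to MAC memory space are not ordered against this CSR
+	 * write, so read the register back before issuing an access that
+	 * depends on the new value.
+	 */
+	cvmx_read_csr(CVMX_SLI_S2M_PORTX_CTL(port));
+}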
+
+/**
+ * cvmx_sli_scratch_1
+ *
+ * This register is a general-purpose 64-bit scratch register for software use.
+ *
+ */
+union cvmx_sli_scratch_1 {
+	u64 u64;
+	struct cvmx_sli_scratch_1_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_scratch_1_s cn61xx;
+	struct cvmx_sli_scratch_1_s cn63xx;
+	struct cvmx_sli_scratch_1_s cn63xxp1;
+	struct cvmx_sli_scratch_1_s cn66xx;
+	struct cvmx_sli_scratch_1_s cn68xx;
+	struct cvmx_sli_scratch_1_s cn68xxp1;
+	struct cvmx_sli_scratch_1_s cn70xx;
+	struct cvmx_sli_scratch_1_s cn70xxp1;
+	struct cvmx_sli_scratch_1_s cn73xx;
+	struct cvmx_sli_scratch_1_s cn78xx;
+	struct cvmx_sli_scratch_1_s cn78xxp1;
+	struct cvmx_sli_scratch_1_s cnf71xx;
+	struct cvmx_sli_scratch_1_s cnf75xx;
+};
+
+typedef union cvmx_sli_scratch_1 cvmx_sli_scratch_1_t;
+
+/**
+ * cvmx_sli_scratch_2
+ *
+ * This register is a general-purpose 64-bit scratch register for software use.
+ *
+ */
+union cvmx_sli_scratch_2 {
+	u64 u64;
+	struct cvmx_sli_scratch_2_s {
+		u64 data : 64;
+	} s;
+	struct cvmx_sli_scratch_2_s cn61xx;
+	struct cvmx_sli_scratch_2_s cn63xx;
+	struct cvmx_sli_scratch_2_s cn63xxp1;
+	struct cvmx_sli_scratch_2_s cn66xx;
+	struct cvmx_sli_scratch_2_s cn68xx;
+	struct cvmx_sli_scratch_2_s cn68xxp1;
+	struct cvmx_sli_scratch_2_s cn70xx;
+	struct cvmx_sli_scratch_2_s cn70xxp1;
+	struct cvmx_sli_scratch_2_s cn73xx;
+	struct cvmx_sli_scratch_2_s cn78xx;
+	struct cvmx_sli_scratch_2_s cn78xxp1;
+	struct cvmx_sli_scratch_2_s cnf71xx;
+	struct cvmx_sli_scratch_2_s cnf75xx;
+};
+
+typedef union cvmx_sli_scratch_2 cvmx_sli_scratch_2_t;
+
+/**
+ * cvmx_sli_state1
+ *
+ * This register exposes the state of internal SLI state machines and is intended for debug.
+ *
+ */
+union cvmx_sli_state1 {
+	u64 u64;
+	struct cvmx_sli_state1_s {
+		u64 cpl1 : 12;
+		u64 cpl0 : 12;
+		u64 arb : 1;
+		u64 csr : 39;
+	} s;
+	struct cvmx_sli_state1_s cn61xx;
+	struct cvmx_sli_state1_s cn63xx;
+	struct cvmx_sli_state1_s cn63xxp1;
+	struct cvmx_sli_state1_s cn66xx;
+	struct cvmx_sli_state1_s cn68xx;
+	struct cvmx_sli_state1_s cn68xxp1;
+	struct cvmx_sli_state1_s cn70xx;
+	struct cvmx_sli_state1_s cn70xxp1;
+	struct cvmx_sli_state1_s cn73xx;
+	struct cvmx_sli_state1_s cn78xx;
+	struct cvmx_sli_state1_s cn78xxp1;
+	struct cvmx_sli_state1_s cnf71xx;
+	struct cvmx_sli_state1_s cnf75xx;
+};
+
+typedef union cvmx_sli_state1 cvmx_sli_state1_t;
+
+/**
+ * cvmx_sli_state2
+ *
+ * This register exposes the state of internal SLI state machines and is intended for debug.
+ *
+ */
+union cvmx_sli_state2 {
+	u64 u64;
+	struct cvmx_sli_state2_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_sli_state2_cn61xx {
+		u64 reserved_56_63 : 8;
+		u64 nnp1 : 8;
+		u64 reserved_47_47 : 1;
+		u64 rac : 1;
+		u64 csm1 : 15;
+		u64 csm0 : 15;
+		u64 nnp0 : 8;
+		u64 nnd : 8;
+	} cn61xx;
+	struct cvmx_sli_state2_cn61xx cn63xx;
+	struct cvmx_sli_state2_cn61xx cn63xxp1;
+	struct cvmx_sli_state2_cn61xx cn66xx;
+	struct cvmx_sli_state2_cn61xx cn68xx;
+	struct cvmx_sli_state2_cn61xx cn68xxp1;
+	struct cvmx_sli_state2_cn61xx cn70xx;
+	struct cvmx_sli_state2_cn61xx cn70xxp1;
+	struct cvmx_sli_state2_cn73xx {
+		u64 reserved_57_63 : 7;
+		u64 nnp1 : 8;
+		u64 reserved_48_48 : 1;
+		u64 rac : 1;
+		u64 csm1 : 15;
+		u64 csm0 : 15;
+		u64 nnp0 : 8;
+		u64 nnd : 9;
+	} cn73xx;
+	struct cvmx_sli_state2_cn73xx cn78xx;
+	struct cvmx_sli_state2_cn73xx cn78xxp1;
+	struct cvmx_sli_state2_cn61xx cnf71xx;
+	struct cvmx_sli_state2_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_state2 cvmx_sli_state2_t;
+
+/**
+ * cvmx_sli_state3
+ *
+ * This register exposes the state of internal SLI state machines and is intended for debug.
+ *
+ */
+union cvmx_sli_state3 {
+	u64 u64;
+	struct cvmx_sli_state3_s {
+		u64 reserved_0_63 : 64;
+	} s;
+	struct cvmx_sli_state3_cn61xx {
+		u64 reserved_56_63 : 8;
+		u64 psm1 : 15;
+		u64 psm0 : 15;
+		u64 nsm1 : 13;
+		u64 nsm0 : 13;
+	} cn61xx;
+	struct cvmx_sli_state3_cn61xx cn63xx;
+	struct cvmx_sli_state3_cn61xx cn63xxp1;
+	struct cvmx_sli_state3_cn61xx cn66xx;
+	struct cvmx_sli_state3_cn61xx cn68xx;
+	struct cvmx_sli_state3_cn61xx cn68xxp1;
+	struct cvmx_sli_state3_cn61xx cn70xx;
+	struct cvmx_sli_state3_cn61xx cn70xxp1;
+	struct cvmx_sli_state3_cn73xx {
+		u64 reserved_60_63 : 4;
+		u64 psm1 : 15;
+		u64 psm0 : 15;
+		u64 nsm1 : 15;
+		u64 nsm0 : 15;
+	} cn73xx;
+	struct cvmx_sli_state3_cn73xx cn78xx;
+	struct cvmx_sli_state3_cn73xx cn78xxp1;
+	struct cvmx_sli_state3_cn61xx cnf71xx;
+	struct cvmx_sli_state3_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sli_state3 cvmx_sli_state3_t;
+
+/**
+ * cvmx_sli_tx_pipe
+ *
+ * SLI_TX_PIPE = SLI Packet TX Pipe
+ *
+ * Contains the starting pipe number and number of pipes used by the SLI packet output.
+ * If a packet is received from PKO with an out-of-range PIPE number, the following occurs:
+ * - SLI_INT_SUM[PIPE_ERR] is set.
+ * - The out-of-range pipe value is used for returning credits to the PKO.
+ * - The PCIe packet engine treats the PIPE value as equal to [BASE].
+ */
+union cvmx_sli_tx_pipe {
+	u64 u64;
+	struct cvmx_sli_tx_pipe_s {
+		u64 reserved_24_63 : 40;
+		u64 nump : 8;
+		u64 reserved_7_15 : 9;
+		u64 base : 7;
+	} s;
+	struct cvmx_sli_tx_pipe_s cn68xx;
+	struct cvmx_sli_tx_pipe_s cn68xxp1;
+};
+
+typedef union cvmx_sli_tx_pipe cvmx_sli_tx_pipe_t;
+
+/**
+ * cvmx_sli_win_rd_addr
+ *
+ * When the LSB of this register is written, the address in this register will be read. The data
+ * returned from this read operation is placed in the WIN_RD_DATA register. This register
+ * should NOT be used to read SLI_* registers.
+ */
+union cvmx_sli_win_rd_addr {
+	u64 u64;
+	struct cvmx_sli_win_rd_addr_s {
+		u64 reserved_51_63 : 13;
+		u64 ld_cmd : 2;
+		u64 iobit : 1;
+		u64 rd_addr : 48;
+	} s;
+	struct cvmx_sli_win_rd_addr_s cn61xx;
+	struct cvmx_sli_win_rd_addr_s cn63xx;
+	struct cvmx_sli_win_rd_addr_s cn63xxp1;
+	struct cvmx_sli_win_rd_addr_s cn66xx;
+	struct cvmx_sli_win_rd_addr_s cn68xx;
+	struct cvmx_sli_win_rd_addr_s cn68xxp1;
+	struct cvmx_sli_win_rd_addr_s cn70xx;
+	struct cvmx_sli_win_rd_addr_s cn70xxp1;
+	struct cvmx_sli_win_rd_addr_s cn73xx;
+	struct cvmx_sli_win_rd_addr_s cn78xx;
+	struct cvmx_sli_win_rd_addr_s cn78xxp1;
+	struct cvmx_sli_win_rd_addr_s cnf71xx;
+	struct cvmx_sli_win_rd_addr_s cnf75xx;
+};
+
+typedef union cvmx_sli_win_rd_addr cvmx_sli_win_rd_addr_t;
+
+/**
+ * cvmx_sli_win_rd_data
+ *
+ * This register holds the data returned when a read operation is started by writing the
+ * SLI_WIN_RD_ADDR register.
+ */
+union cvmx_sli_win_rd_data {
+	u64 u64;
+	struct cvmx_sli_win_rd_data_s {
+		u64 rd_data : 64;
+	} s;
+	struct cvmx_sli_win_rd_data_s cn61xx;
+	struct cvmx_sli_win_rd_data_s cn63xx;
+	struct cvmx_sli_win_rd_data_s cn63xxp1;
+	struct cvmx_sli_win_rd_data_s cn66xx;
+	struct cvmx_sli_win_rd_data_s cn68xx;
+	struct cvmx_sli_win_rd_data_s cn68xxp1;
+	struct cvmx_sli_win_rd_data_s cn70xx;
+	struct cvmx_sli_win_rd_data_s cn70xxp1;
+	struct cvmx_sli_win_rd_data_s cn73xx;
+	struct cvmx_sli_win_rd_data_s cn78xx;
+	struct cvmx_sli_win_rd_data_s cn78xxp1;
+	struct cvmx_sli_win_rd_data_s cnf71xx;
+	struct cvmx_sli_win_rd_data_s cnf75xx;
+};
+
+typedef union cvmx_sli_win_rd_data cvmx_sli_win_rd_data_t;
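+
+/*
+ * Illustrative sketch, not part of the imported header: a window read
+ * through SLI_WIN_RD_ADDR/SLI_WIN_RD_DATA as described above. In
+ * practice these registers are typically driven by the remote PCIe host
+ * via BAR0; plain cvmx_read_csr()/cvmx_write_csr() accessors and the
+ * CVMX_SLI_WIN_RD_ADDR/CVMX_SLI_WIN_RD_DATA address macros from earlier
+ * in this file are assumed purely for illustration.
+ */
+static inline u64 sli_window_read_example(u64 addr)
+{
+	cvmx_sli_win_rd_addr_t rd_addr;
+
+	rd_addr.u64 = 0;
+	rd_addr.s.rd_addr = addr;	/* ld_cmd left at its reset value */
+	/* Writing the register starts the read operation... */
+	cvmx_write_csr(CVMX_SLI_WIN_RD_ADDR, rd_addr.u64);
+	/* ...and the returned data lands in SLI_WIN_RD_DATA */
+	return cvmx_read_csr(CVMX_SLI_WIN_RD_DATA);
+}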
+
+/**
+ * cvmx_sli_win_wr_addr
+ *
+ * This register contains the address to be written to when a write operation is started by
+ * writing the SLI_WIN_WR_DATA register.
+ *
+ * This register should NOT be used to write SLI_* registers.
+ */
+union cvmx_sli_win_wr_addr {
+	u64 u64;
+	struct cvmx_sli_win_wr_addr_s {
+		u64 reserved_49_63 : 15;
+		u64 iobit : 1;
+		u64 wr_addr : 45;
+		u64 reserved_0_2 : 3;
+	} s;
+	struct cvmx_sli_win_wr_addr_s cn61xx;
+	struct cvmx_sli_win_wr_addr_s cn63xx;
+	struct cvmx_sli_win_wr_addr_s cn63xxp1;
+	struct cvmx_sli_win_wr_addr_s cn66xx;
+	struct cvmx_sli_win_wr_addr_s cn68xx;
+	struct cvmx_sli_win_wr_addr_s cn68xxp1;
+	struct cvmx_sli_win_wr_addr_s cn70xx;
+	struct cvmx_sli_win_wr_addr_s cn70xxp1;
+	struct cvmx_sli_win_wr_addr_s cn73xx;
+	struct cvmx_sli_win_wr_addr_s cn78xx;
+	struct cvmx_sli_win_wr_addr_s cn78xxp1;
+	struct cvmx_sli_win_wr_addr_s cnf71xx;
+	struct cvmx_sli_win_wr_addr_s cnf75xx;
+};
+
+typedef union cvmx_sli_win_wr_addr cvmx_sli_win_wr_addr_t;
+
+/**
+ * cvmx_sli_win_wr_data
+ *
+ * This register contains the data to write to the address located in the SLI_WIN_WR_ADDR
+ * register. Writing the least-significant byte of this register causes a write operation to take
+ * place.
+ */
+union cvmx_sli_win_wr_data {
+	u64 u64;
+	struct cvmx_sli_win_wr_data_s {
+		u64 wr_data : 64;
+	} s;
+	struct cvmx_sli_win_wr_data_s cn61xx;
+	struct cvmx_sli_win_wr_data_s cn63xx;
+	struct cvmx_sli_win_wr_data_s cn63xxp1;
+	struct cvmx_sli_win_wr_data_s cn66xx;
+	struct cvmx_sli_win_wr_data_s cn68xx;
+	struct cvmx_sli_win_wr_data_s cn68xxp1;
+	struct cvmx_sli_win_wr_data_s cn70xx;
+	struct cvmx_sli_win_wr_data_s cn70xxp1;
+	struct cvmx_sli_win_wr_data_s cn73xx;
+	struct cvmx_sli_win_wr_data_s cn78xx;
+	struct cvmx_sli_win_wr_data_s cn78xxp1;
+	struct cvmx_sli_win_wr_data_s cnf71xx;
+	struct cvmx_sli_win_wr_data_s cnf75xx;
+};
+
+typedef union cvmx_sli_win_wr_data cvmx_sli_win_wr_data_t;
+
+/**
+ * cvmx_sli_win_wr_mask
+ *
+ * This register contains the mask for the data in the SLI_WIN_WR_DATA register.
+ *
+ */
+union cvmx_sli_win_wr_mask {
+	u64 u64;
+	struct cvmx_sli_win_wr_mask_s {
+		u64 reserved_8_63 : 56;
+		u64 wr_mask : 8;
+	} s;
+	struct cvmx_sli_win_wr_mask_s cn61xx;
+	struct cvmx_sli_win_wr_mask_s cn63xx;
+	struct cvmx_sli_win_wr_mask_s cn63xxp1;
+	struct cvmx_sli_win_wr_mask_s cn66xx;
+	struct cvmx_sli_win_wr_mask_s cn68xx;
+	struct cvmx_sli_win_wr_mask_s cn68xxp1;
+	struct cvmx_sli_win_wr_mask_s cn70xx;
+	struct cvmx_sli_win_wr_mask_s cn70xxp1;
+	struct cvmx_sli_win_wr_mask_s cn73xx;
+	struct cvmx_sli_win_wr_mask_s cn78xx;
+	struct cvmx_sli_win_wr_mask_s cn78xxp1;
+	struct cvmx_sli_win_wr_mask_s cnf71xx;
+	struct cvmx_sli_win_wr_mask_s cnf75xx;
+};
+
+typedef union cvmx_sli_win_wr_mask cvmx_sli_win_wr_mask_t;
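+
+/*
+ * Illustrative sketch, not part of the imported header: a masked window
+ * write using SLI_WIN_WR_ADDR/SLI_WIN_WR_MASK/SLI_WIN_WR_DATA as
+ * described above. Assumes cvmx_write_csr() and the corresponding
+ * CVMX_SLI_WIN_WR_* address macros from earlier in this file.
+ */
+static inline void sli_window_write_example(u64 addr, u64 data, u8 mask)
+{
+	cvmx_sli_win_wr_addr_t wr_addr;
+	cvmx_sli_win_wr_mask_t wr_mask;
+
+	wr_addr.u64 = 0;
+	wr_addr.s.wr_addr = addr >> 3;	/* field holds address bits <47:3> */
+	cvmx_write_csr(CVMX_SLI_WIN_WR_ADDR, wr_addr.u64);
+
+	wr_mask.u64 = 0;
+	wr_mask.s.wr_mask = mask;	/* byte enables for the 64-bit write */
+	cvmx_write_csr(CVMX_SLI_WIN_WR_MASK, wr_mask.u64);
+
+	/* Writing the least-significant byte of WR_DATA starts the write */
+	cvmx_write_csr(CVMX_SLI_WIN_WR_DATA, data);
+}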
+
+/**
+ * cvmx_sli_window_ctl
+ *
+ * Access to register space on the IOI (caused by window read/write operations) waits for a
+ * period of time specified by this register before timing out.
+ */
+union cvmx_sli_window_ctl {
+	u64 u64;
+	struct cvmx_sli_window_ctl_s {
+		u64 ocx_time : 32;
+		u64 time : 32;
+	} s;
+	struct cvmx_sli_window_ctl_cn61xx {
+		u64 reserved_32_63 : 32;
+		u64 time : 32;
+	} cn61xx;
+	struct cvmx_sli_window_ctl_cn61xx cn63xx;
+	struct cvmx_sli_window_ctl_cn61xx cn63xxp1;
+	struct cvmx_sli_window_ctl_cn61xx cn66xx;
+	struct cvmx_sli_window_ctl_cn61xx cn68xx;
+	struct cvmx_sli_window_ctl_cn61xx cn68xxp1;
+	struct cvmx_sli_window_ctl_cn61xx cn70xx;
+	struct cvmx_sli_window_ctl_cn61xx cn70xxp1;
+	struct cvmx_sli_window_ctl_s cn73xx;
+	struct cvmx_sli_window_ctl_s cn78xx;
+	struct cvmx_sli_window_ctl_s cn78xxp1;
+	struct cvmx_sli_window_ctl_cn61xx cnf71xx;
+	struct cvmx_sli_window_ctl_s cnf75xx;
+};
+
+typedef union cvmx_sli_window_ctl cvmx_sli_window_ctl_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 29/50] mips: octeon: Add cvmx-smix-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (27 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 28/50] mips: octeon: Add cvmx-sli-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 30/50] mips: octeon: Add cvmx-sriomaintx-defs.h " Stefan Roese
                   ` (23 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-smix-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-smix-defs.h | 360 ++++++++++++++++++
 1 file changed, 360 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-smix-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-smix-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-smix-defs.h
new file mode 100644
index 0000000000..c51d71b38f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-smix-defs.h
@@ -0,0 +1,360 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon smix.
+ */
+
+#ifndef __CVMX_SMIX_DEFS_H__
+#define __CVMX_SMIX_DEFS_H__
+
+static inline u64 CVMX_SMIX_CLK(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000001818ull + (offset) * 256;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001180000003818ull + (offset) * 128;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001180000003818ull + (offset) * 128;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003818ull + (offset) * 128;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003818ull + (offset) * 128;
+	}
+	return 0x0001180000003818ull + (offset) * 128;
+}
+
+static inline u64 CVMX_SMIX_CMD(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000001800ull + (offset) * 256;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001180000003800ull + (offset) * 128;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001180000003800ull + (offset) * 128;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003800ull + (offset) * 128;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003800ull + (offset) * 128;
+	}
+	return 0x0001180000003800ull + (offset) * 128;
+}
+
+static inline u64 CVMX_SMIX_EN(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000001820ull + (offset) * 256;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001180000003820ull + (offset) * 128;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001180000003820ull + (offset) * 128;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003820ull + (offset) * 128;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003820ull + (offset) * 128;
+	}
+	return 0x0001180000003820ull + (offset) * 128;
+}
+
+static inline u64 CVMX_SMIX_RD_DAT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000001810ull + (offset) * 256;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001180000003810ull + (offset) * 128;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001180000003810ull + (offset) * 128;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003810ull + (offset) * 128;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003810ull + (offset) * 128;
+	}
+	return 0x0001180000003810ull + (offset) * 128;
+}
+
+static inline u64 CVMX_SMIX_WR_DAT(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000001808ull + (offset) * 256;
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x0001180000003808ull + (offset) * 128;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x0001180000003808ull + (offset) * 128;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003808ull + (offset) * 128;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x0001180000003808ull + (offset) * 128;
+	}
+	return 0x0001180000003808ull + (offset) * 128;
+}
+
+/**
+ * cvmx_smi#_clk
+ *
+ * This register determines the SMI timing characteristics.
+ * If software wants to change SMI CLK timing parameters ([SAMPLE]/[SAMPLE_HI]), software
+ * must delay the SMI_()_CLK CSR write by at least 512 coprocessor-clock cycles after the
+ * previous SMI operation is finished.
+ */
+union cvmx_smix_clk {
+	u64 u64;
+	struct cvmx_smix_clk_s {
+		u64 reserved_25_63 : 39;
+		u64 mode : 1;
+		u64 reserved_21_23 : 3;
+		u64 sample_hi : 5;
+		u64 sample_mode : 1;
+		u64 reserved_14_14 : 1;
+		u64 clk_idle : 1;
+		u64 preamble : 1;
+		u64 sample : 4;
+		u64 phase : 8;
+	} s;
+	struct cvmx_smix_clk_cn30xx {
+		u64 reserved_21_63 : 43;
+		u64 sample_hi : 5;
+		u64 sample_mode : 1;
+		u64 reserved_14_14 : 1;
+		u64 clk_idle : 1;
+		u64 preamble : 1;
+		u64 sample : 4;
+		u64 phase : 8;
+	} cn30xx;
+	struct cvmx_smix_clk_cn30xx cn31xx;
+	struct cvmx_smix_clk_cn30xx cn38xx;
+	struct cvmx_smix_clk_cn30xx cn38xxp2;
+	struct cvmx_smix_clk_s cn50xx;
+	struct cvmx_smix_clk_s cn52xx;
+	struct cvmx_smix_clk_s cn52xxp1;
+	struct cvmx_smix_clk_s cn56xx;
+	struct cvmx_smix_clk_s cn56xxp1;
+	struct cvmx_smix_clk_cn30xx cn58xx;
+	struct cvmx_smix_clk_cn30xx cn58xxp1;
+	struct cvmx_smix_clk_s cn61xx;
+	struct cvmx_smix_clk_s cn63xx;
+	struct cvmx_smix_clk_s cn63xxp1;
+	struct cvmx_smix_clk_s cn66xx;
+	struct cvmx_smix_clk_s cn68xx;
+	struct cvmx_smix_clk_s cn68xxp1;
+	struct cvmx_smix_clk_s cn70xx;
+	struct cvmx_smix_clk_s cn70xxp1;
+	struct cvmx_smix_clk_s cn73xx;
+	struct cvmx_smix_clk_s cn78xx;
+	struct cvmx_smix_clk_s cn78xxp1;
+	struct cvmx_smix_clk_s cnf71xx;
+	struct cvmx_smix_clk_s cnf75xx;
+};
+
+typedef union cvmx_smix_clk cvmx_smix_clk_t;
+
+/**
+ * cvmx_smi#_cmd
+ *
+ * This register forces a read or write command to the PHY. Write operations to this register
+ * create SMI transactions. Software then polls for completion (which register to poll depends
+ * on the transaction type).
+ */
+union cvmx_smix_cmd {
+	u64 u64;
+	struct cvmx_smix_cmd_s {
+		u64 reserved_18_63 : 46;
+		u64 phy_op : 2;
+		u64 reserved_13_15 : 3;
+		u64 phy_adr : 5;
+		u64 reserved_5_7 : 3;
+		u64 reg_adr : 5;
+	} s;
+	struct cvmx_smix_cmd_cn30xx {
+		u64 reserved_17_63 : 47;
+		u64 phy_op : 1;
+		u64 reserved_13_15 : 3;
+		u64 phy_adr : 5;
+		u64 reserved_5_7 : 3;
+		u64 reg_adr : 5;
+	} cn30xx;
+	struct cvmx_smix_cmd_cn30xx cn31xx;
+	struct cvmx_smix_cmd_cn30xx cn38xx;
+	struct cvmx_smix_cmd_cn30xx cn38xxp2;
+	struct cvmx_smix_cmd_s cn50xx;
+	struct cvmx_smix_cmd_s cn52xx;
+	struct cvmx_smix_cmd_s cn52xxp1;
+	struct cvmx_smix_cmd_s cn56xx;
+	struct cvmx_smix_cmd_s cn56xxp1;
+	struct cvmx_smix_cmd_cn30xx cn58xx;
+	struct cvmx_smix_cmd_cn30xx cn58xxp1;
+	struct cvmx_smix_cmd_s cn61xx;
+	struct cvmx_smix_cmd_s cn63xx;
+	struct cvmx_smix_cmd_s cn63xxp1;
+	struct cvmx_smix_cmd_s cn66xx;
+	struct cvmx_smix_cmd_s cn68xx;
+	struct cvmx_smix_cmd_s cn68xxp1;
+	struct cvmx_smix_cmd_s cn70xx;
+	struct cvmx_smix_cmd_s cn70xxp1;
+	struct cvmx_smix_cmd_s cn73xx;
+	struct cvmx_smix_cmd_s cn78xx;
+	struct cvmx_smix_cmd_s cn78xxp1;
+	struct cvmx_smix_cmd_s cnf71xx;
+	struct cvmx_smix_cmd_s cnf75xx;
+};
+
+typedef union cvmx_smix_cmd cvmx_smix_cmd_t;
+
+/**
+ * cvmx_smi#_en
+ *
+ * Enables the SMI interface.
+ *
+ */
+union cvmx_smix_en {
+	u64 u64;
+	struct cvmx_smix_en_s {
+		u64 reserved_1_63 : 63;
+		u64 en : 1;
+	} s;
+	struct cvmx_smix_en_s cn30xx;
+	struct cvmx_smix_en_s cn31xx;
+	struct cvmx_smix_en_s cn38xx;
+	struct cvmx_smix_en_s cn38xxp2;
+	struct cvmx_smix_en_s cn50xx;
+	struct cvmx_smix_en_s cn52xx;
+	struct cvmx_smix_en_s cn52xxp1;
+	struct cvmx_smix_en_s cn56xx;
+	struct cvmx_smix_en_s cn56xxp1;
+	struct cvmx_smix_en_s cn58xx;
+	struct cvmx_smix_en_s cn58xxp1;
+	struct cvmx_smix_en_s cn61xx;
+	struct cvmx_smix_en_s cn63xx;
+	struct cvmx_smix_en_s cn63xxp1;
+	struct cvmx_smix_en_s cn66xx;
+	struct cvmx_smix_en_s cn68xx;
+	struct cvmx_smix_en_s cn68xxp1;
+	struct cvmx_smix_en_s cn70xx;
+	struct cvmx_smix_en_s cn70xxp1;
+	struct cvmx_smix_en_s cn73xx;
+	struct cvmx_smix_en_s cn78xx;
+	struct cvmx_smix_en_s cn78xxp1;
+	struct cvmx_smix_en_s cnf71xx;
+	struct cvmx_smix_en_s cnf75xx;
+};
+
+typedef union cvmx_smix_en cvmx_smix_en_t;
+
+/**
+ * cvmx_smi#_rd_dat
+ *
+ * This register contains the data in a read operation.
+ *
+ */
+union cvmx_smix_rd_dat {
+	u64 u64;
+	struct cvmx_smix_rd_dat_s {
+		u64 reserved_18_63 : 46;
+		u64 pending : 1;
+		u64 val : 1;
+		u64 dat : 16;
+	} s;
+	struct cvmx_smix_rd_dat_s cn30xx;
+	struct cvmx_smix_rd_dat_s cn31xx;
+	struct cvmx_smix_rd_dat_s cn38xx;
+	struct cvmx_smix_rd_dat_s cn38xxp2;
+	struct cvmx_smix_rd_dat_s cn50xx;
+	struct cvmx_smix_rd_dat_s cn52xx;
+	struct cvmx_smix_rd_dat_s cn52xxp1;
+	struct cvmx_smix_rd_dat_s cn56xx;
+	struct cvmx_smix_rd_dat_s cn56xxp1;
+	struct cvmx_smix_rd_dat_s cn58xx;
+	struct cvmx_smix_rd_dat_s cn58xxp1;
+	struct cvmx_smix_rd_dat_s cn61xx;
+	struct cvmx_smix_rd_dat_s cn63xx;
+	struct cvmx_smix_rd_dat_s cn63xxp1;
+	struct cvmx_smix_rd_dat_s cn66xx;
+	struct cvmx_smix_rd_dat_s cn68xx;
+	struct cvmx_smix_rd_dat_s cn68xxp1;
+	struct cvmx_smix_rd_dat_s cn70xx;
+	struct cvmx_smix_rd_dat_s cn70xxp1;
+	struct cvmx_smix_rd_dat_s cn73xx;
+	struct cvmx_smix_rd_dat_s cn78xx;
+	struct cvmx_smix_rd_dat_s cn78xxp1;
+	struct cvmx_smix_rd_dat_s cnf71xx;
+	struct cvmx_smix_rd_dat_s cnf75xx;
+};
+
+typedef union cvmx_smix_rd_dat cvmx_smix_rd_dat_t;
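+
+/*
+ * Illustrative sketch, not part of the imported header: a clause-22 MDIO
+ * read through the SMI_CMD/SMI_RD_DAT pair defined above. Assumes
+ * cvmx_read_csr()/cvmx_write_csr(); the PHY_OP encoding (1 = read) is an
+ * assumption carried over from the 2013 sources. Returns -1 if no valid
+ * data is returned.
+ */
+static inline int smix_mdio_read_example(int bus, int phy, int reg)
+{
+	cvmx_smix_cmd_t cmd;
+	cvmx_smix_rd_dat_t dat;
+
+	cmd.u64 = 0;
+	cmd.s.phy_op = 1;	/* 1 = read, 0 = write (assumed encoding) */
+	cmd.s.phy_adr = phy;
+	cmd.s.reg_adr = reg;
+	cvmx_write_csr(CVMX_SMIX_CMD(bus), cmd.u64);
+
+	/* Poll until the transaction is no longer pending */
+	do {
+		dat.u64 = cvmx_read_csr(CVMX_SMIX_RD_DAT(bus));
+	} while (dat.s.pending);
+
+	return dat.s.val ? (int)dat.s.dat : -1;
+}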
+
+/**
+ * cvmx_smi#_wr_dat
+ *
+ * This register provides the data for a write operation.
+ *
+ */
+union cvmx_smix_wr_dat {
+	u64 u64;
+	struct cvmx_smix_wr_dat_s {
+		u64 reserved_18_63 : 46;
+		u64 pending : 1;
+		u64 val : 1;
+		u64 dat : 16;
+	} s;
+	struct cvmx_smix_wr_dat_s cn30xx;
+	struct cvmx_smix_wr_dat_s cn31xx;
+	struct cvmx_smix_wr_dat_s cn38xx;
+	struct cvmx_smix_wr_dat_s cn38xxp2;
+	struct cvmx_smix_wr_dat_s cn50xx;
+	struct cvmx_smix_wr_dat_s cn52xx;
+	struct cvmx_smix_wr_dat_s cn52xxp1;
+	struct cvmx_smix_wr_dat_s cn56xx;
+	struct cvmx_smix_wr_dat_s cn56xxp1;
+	struct cvmx_smix_wr_dat_s cn58xx;
+	struct cvmx_smix_wr_dat_s cn58xxp1;
+	struct cvmx_smix_wr_dat_s cn61xx;
+	struct cvmx_smix_wr_dat_s cn63xx;
+	struct cvmx_smix_wr_dat_s cn63xxp1;
+	struct cvmx_smix_wr_dat_s cn66xx;
+	struct cvmx_smix_wr_dat_s cn68xx;
+	struct cvmx_smix_wr_dat_s cn68xxp1;
+	struct cvmx_smix_wr_dat_s cn70xx;
+	struct cvmx_smix_wr_dat_s cn70xxp1;
+	struct cvmx_smix_wr_dat_s cn73xx;
+	struct cvmx_smix_wr_dat_s cn78xx;
+	struct cvmx_smix_wr_dat_s cn78xxp1;
+	struct cvmx_smix_wr_dat_s cnf71xx;
+	struct cvmx_smix_wr_dat_s cnf75xx;
+};
+
+typedef union cvmx_smix_wr_dat cvmx_smix_wr_dat_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 30/50] mips: octeon: Add cvmx-sriomaintx-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (28 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 29/50] mips: octeon: Add cvmx-smix-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 31/50] mips: octeon: Add cvmx-sriox-defs.h " Stefan Roese
                   ` (22 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-sriomaintx-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../include/mach/cvmx-sriomaintx-defs.h       | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sriomaintx-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-sriomaintx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-sriomaintx-defs.h
new file mode 100644
index 0000000000..2558e7b2fd
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-sriomaintx-defs.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_SRIOMAINTX_DEFS_H__
+#define __CVMX_SRIOMAINTX_DEFS_H__
+
+static inline u64 CVMX_SRIOMAINTX_PORT_0_CTL2(unsigned long offset)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+		return 0x0000010000000154ull + (offset) * 0x100000000ull;
+	case OCTEON_CN66XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000154ull;
+	case OCTEON_CN63XX & OCTEON_FAMILY_MASK:
+		return 0x0000000000000154ull + (offset) * 0x100000000ull;
+	}
+	return 0x0000010000000154ull + (offset) * 0x100000000ull;
+}
+
+/**
+ * cvmx_sriomaint#_port_0_ctl2
+ *
+ * These registers are accessed when a local processor or an external
+ * device wishes to examine the port baudrate information.  The automatic
+ * baud rate feature is not available on this device. The SUP_* and ENB_*
+ * fields are set directly by the SRIO()_STATUS_REG[SPD] bits as a
+ * reference but otherwise have no effect.
+ *
+ * WARNING!!  Writes to this register will reinitialize the SRIO link.
+ */
+union cvmx_sriomaintx_port_0_ctl2 {
+	u32 u32;
+	struct cvmx_sriomaintx_port_0_ctl2_s {
+		u32 sel_baud : 4;
+		u32 baud_sup : 1;
+		u32 baud_enb : 1;
+		u32 sup_125g : 1;
+		u32 enb_125g : 1;
+		u32 sup_250g : 1;
+		u32 enb_250g : 1;
+		u32 sup_312g : 1;
+		u32 enb_312g : 1;
+		u32 sub_500g : 1;
+		u32 enb_500g : 1;
+		u32 sup_625g : 1;
+		u32 enb_625g : 1;
+		u32 reserved_2_15 : 14;
+		u32 tx_emph : 1;
+		u32 emph_en : 1;
+	} s;
+	struct cvmx_sriomaintx_port_0_ctl2_s cn63xx;
+	struct cvmx_sriomaintx_port_0_ctl2_s cn63xxp1;
+	struct cvmx_sriomaintx_port_0_ctl2_s cn66xx;
+	struct cvmx_sriomaintx_port_0_ctl2_s cnf75xx;
+};
+
+typedef union cvmx_sriomaintx_port_0_ctl2 cvmx_sriomaintx_port_0_ctl2_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 31/50] mips: octeon: Add cvmx-sriox-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (29 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 30/50] mips: octeon: Add cvmx-sriomaintx-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 32/50] mips: octeon: Add cvmx-sso-defs.h " Stefan Roese
                   ` (21 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-sriox-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../include/mach/cvmx-sriox-defs.h            | 44 +++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sriox-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-sriox-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-sriox-defs.h
new file mode 100644
index 0000000000..ac988609a1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-sriox-defs.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_SRIOX_DEFS_H__
+#define __CVMX_SRIOX_DEFS_H__
+
+#define CVMX_SRIOX_STATUS_REG(offset) (0x00011800C8000100ull + ((offset) & 3) * 0x1000000ull)
+
+/**
+ * cvmx_srio#_status_reg
+ *
+ * The SRIO field indicates whether the port has been configured for SRIO operation. This register
+ * can be read regardless of whether the SRIO is selected or being reset.  Although some other
+ * registers can be accessed while the ACCESS bit is zero (see individual registers for details),
+ * the majority of SRIO registers and all the SRIOMAINT registers can be used only when the
+ * ACCESS bit is asserted.
+ *
+ * This register is reset by the coprocessor-clock reset.
+ */
+union cvmx_sriox_status_reg {
+	u64 u64;
+	struct cvmx_sriox_status_reg_s {
+		u64 reserved_9_63 : 55;
+		u64 host : 1;
+		u64 spd : 4;
+		u64 run_type : 2;
+		u64 access : 1;
+		u64 srio : 1;
+	} s;
+	struct cvmx_sriox_status_reg_cn63xx {
+		u64 reserved_2_63 : 62;
+		u64 access : 1;
+		u64 srio : 1;
+	} cn63xx;
+	struct cvmx_sriox_status_reg_cn63xx cn63xxp1;
+	struct cvmx_sriox_status_reg_cn63xx cn66xx;
+	struct cvmx_sriox_status_reg_s cnf75xx;
+};
+
+typedef union cvmx_sriox_status_reg cvmx_sriox_status_reg_t;
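+
+/*
+ * Illustrative sketch, not part of the imported header: gating SRIOMAINT
+ * register accesses on the ACCESS bit as required above. Assumes
+ * cvmx_read_csr() and the CVMX_SRIOX_STATUS_REG() macro defined in this
+ * file. Returns non-zero when SRIOMAINT registers may be used.
+ */
+static inline int sriox_maint_accessible_example(int srio_port)
+{
+	cvmx_sriox_status_reg_t status;
+
+	status.u64 = cvmx_read_csr(CVMX_SRIOX_STATUS_REG(srio_port));
+	/* SRIOMAINT registers may only be used once ACCESS is asserted */
+	return status.s.srio && status.s.access;
+}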
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 32/50] mips: octeon: Add cvmx-sso-defs.h header file
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (30 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 31/50] mips: octeon: Add cvmx-sriox-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 33/50] mips: octeon: Add misc remaining header files Stefan Roese
                   ` (20 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-sso-defs.h header file from 2013 U-Boot. It will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-sso-defs.h  | 2904 +++++++++++++++++
 1 file changed, 2904 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-sso-defs.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-sso-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-sso-defs.h
new file mode 100644
index 0000000000..4fc69079ac
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-sso-defs.h
@@ -0,0 +1,2904 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) type definitions for
+ * Octeon sso.
+ */
+
+#ifndef __CVMX_SSO_DEFS_H__
+#define __CVMX_SSO_DEFS_H__
+
+#define CVMX_SSO_ACTIVE_CYCLES		(0x00016700000010E8ull)
+#define CVMX_SSO_ACTIVE_CYCLESX(offset) (0x0001670000001100ull + ((offset) & 3) * 8)
+#define CVMX_SSO_AW_ADD			(0x0001670000002080ull)
+#define CVMX_SSO_AW_CFG			(0x00016700000010F0ull)
+#define CVMX_SSO_AW_ECO			(0x0001670000001030ull)
+#define CVMX_SSO_AW_READ_ARB		(0x0001670000002090ull)
+#define CVMX_SSO_AW_STATUS		(0x00016700000010E0ull)
+#define CVMX_SSO_AW_TAG_LATENCY_PC	(0x00016700000020A8ull)
+#define CVMX_SSO_AW_TAG_REQ_PC		(0x00016700000020A0ull)
+#define CVMX_SSO_AW_WE			(0x0001670000001080ull)
+#define CVMX_SSO_BIST_STAT		(0x0001670000001078ull)
+#define CVMX_SSO_BIST_STATUS0		(0x0001670000001200ull)
+#define CVMX_SSO_BIST_STATUS1		(0x0001670000001208ull)
+#define CVMX_SSO_BIST_STATUS2		(0x0001670000001210ull)
+#define CVMX_SSO_CFG			(0x0001670000001088ull)
+#define CVMX_SSO_DS_PC			(0x0001670000001070ull)
+#define CVMX_SSO_ECC_CTL0		(0x0001670000001280ull)
+#define CVMX_SSO_ECC_CTL1		(0x0001670000001288ull)
+#define CVMX_SSO_ECC_CTL2		(0x0001670000001290ull)
+#define CVMX_SSO_ERR			(0x0001670000001038ull)
+#define CVMX_SSO_ERR0			(0x0001670000001240ull)
+#define CVMX_SSO_ERR1			(0x0001670000001248ull)
+#define CVMX_SSO_ERR2			(0x0001670000001250ull)
+#define CVMX_SSO_ERR_ENB		(0x0001670000001030ull)
+#define CVMX_SSO_FIDX_ECC_CTL		(0x00016700000010D0ull)
+#define CVMX_SSO_FIDX_ECC_ST		(0x00016700000010D8ull)
+#define CVMX_SSO_FPAGE_CNT		(0x0001670000001090ull)
+#define CVMX_SSO_GRPX_AQ_CNT(offset)	(0x0001670020000700ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_AQ_THR(offset)	(0x0001670020000800ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_DS_PC(offset)	(0x0001670020001400ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_EXT_PC(offset)	(0x0001670020001100ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_IAQ_THR(offset)	(0x0001670020000000ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_INT(offset)	(0x0001670020000400ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_INT_CNT(offset)	(0x0001670020000600ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_INT_THR(offset)	(0x0001670020000500ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_PRI(offset)	(0x0001670020000200ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_TAQ_THR(offset)	(0x0001670020000100ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_TS_PC(offset)	(0x0001670020001300ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_WA_PC(offset)	(0x0001670020001200ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GRPX_WS_PC(offset)	(0x0001670020001000ull + ((offset) & 255) * 0x10000ull)
+#define CVMX_SSO_GWE_CFG		(0x0001670000001098ull)
+#define CVMX_SSO_GWE_RANDOM		(0x00016700000010B0ull)
+#define CVMX_SSO_GW_ECO			(0x0001670000001038ull)
+#define CVMX_SSO_IDX_ECC_CTL		(0x00016700000010C0ull)
+#define CVMX_SSO_IDX_ECC_ST		(0x00016700000010C8ull)
+#define CVMX_SSO_IENTX_LINKS(offset)	(0x00016700A0060000ull + ((offset) & 4095) * 8)
+#define CVMX_SSO_IENTX_PENDTAG(offset)	(0x00016700A0040000ull + ((offset) & 4095) * 8)
+#define CVMX_SSO_IENTX_QLINKS(offset)	(0x00016700A0080000ull + ((offset) & 4095) * 8)
+#define CVMX_SSO_IENTX_TAG(offset)	(0x00016700A0000000ull + ((offset) & 4095) * 8)
+#define CVMX_SSO_IENTX_WQPGRP(offset)	(0x00016700A0020000ull + ((offset) & 4095) * 8)
+#define CVMX_SSO_IPL_CONFX(offset)	(0x0001670080080000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_IPL_DESCHEDX(offset)	(0x0001670080060000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_IPL_FREEX(offset)	(0x0001670080000000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_IPL_IAQX(offset)	(0x0001670080040000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_IQ_CNTX(offset)	(0x0001670000009000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_IQ_COM_CNT		(0x0001670000001058ull)
+#define CVMX_SSO_IQ_INT			(0x0001670000001048ull)
+#define CVMX_SSO_IQ_INT_EN		(0x0001670000001050ull)
+#define CVMX_SSO_IQ_THRX(offset)	(0x000167000000A000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_NOS_CNT		(0x0001670000001040ull)
+#define CVMX_SSO_NW_TIM			(0x0001670000001028ull)
+#define CVMX_SSO_OTH_ECC_CTL		(0x00016700000010B0ull)
+#define CVMX_SSO_OTH_ECC_ST		(0x00016700000010B8ull)
+#define CVMX_SSO_PAGE_CNT		(0x0001670000001090ull)
+#define CVMX_SSO_PND_ECC_CTL		(0x00016700000010A0ull)
+#define CVMX_SSO_PND_ECC_ST		(0x00016700000010A8ull)
+#define CVMX_SSO_PPX_ARB(offset)	(0x0001670040000000ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_PPX_GRP_MSK(offset)	(0x0001670000006000ull + ((offset) & 31) * 8)
+#define CVMX_SSO_PPX_QOS_PRI(offset)	(0x0001670000003000ull + ((offset) & 31) * 8)
+#define CVMX_SSO_PPX_SX_GRPMSKX(a, b, c)                                                           \
+	(0x0001670040001000ull + ((a) << 16) + ((b) << 5) + ((c) << 3))
+#define CVMX_SSO_PP_STRICT	  (0x00016700000010E0ull)
+#define CVMX_SSO_QOSX_RND(offset) (0x0001670000002000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_QOS_THRX(offset) (0x000167000000B000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_QOS_WE		  (0x0001670000001080ull)
+#define CVMX_SSO_RESET		  CVMX_SSO_RESET_FUNC()
+static inline u64 CVMX_SSO_RESET_FUNC(void)
+{
+	switch (cvmx_get_octeon_family()) {
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+			return 0x00016700000010F8ull;
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+			return 0x00016700000010F8ull;
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+		return 0x00016700000010F8ull;
+	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
+		return 0x00016700000010F0ull;
+	}
+	return 0x00016700000010F8ull;
+}
+
+#define CVMX_SSO_RWQ_HEAD_PTRX(offset)	(0x000167000000C000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_RWQ_POP_FPTR		(0x000167000000C408ull)
+#define CVMX_SSO_RWQ_PSH_FPTR		(0x000167000000C400ull)
+#define CVMX_SSO_RWQ_TAIL_PTRX(offset)	(0x000167000000C200ull + ((offset) & 7) * 8)
+#define CVMX_SSO_SL_PPX_LINKS(offset)	(0x0001670060000040ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_SL_PPX_PENDTAG(offset) (0x0001670060000000ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_SL_PPX_PENDWQP(offset) (0x0001670060000010ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_SL_PPX_TAG(offset)	(0x0001670060000020ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_SL_PPX_WQP(offset)	(0x0001670060000030ull + ((offset) & 63) * 0x10000ull)
+#define CVMX_SSO_TAQX_LINK(offset)	(0x00016700C0000000ull + ((offset) & 2047) * 4096)
+#define CVMX_SSO_TAQX_WAEX_TAG(offset, block_id)                                                   \
+	(0x00016700D0000000ull + (((offset) & 15) + ((block_id) & 2047) * 0x100ull) * 16)
+#define CVMX_SSO_TAQX_WAEX_WQP(offset, block_id)                                                   \
+	(0x00016700D0000008ull + (((offset) & 15) + ((block_id) & 2047) * 0x100ull) * 16)
+#define CVMX_SSO_TAQ_ADD		(0x00016700000020E0ull)
+#define CVMX_SSO_TAQ_CNT		(0x00016700000020C0ull)
+#define CVMX_SSO_TIAQX_STATUS(offset)	(0x00016700000C0000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_TOAQX_STATUS(offset)	(0x00016700000D0000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_TS_PC			(0x0001670000001068ull)
+#define CVMX_SSO_WA_COM_PC		(0x0001670000001060ull)
+#define CVMX_SSO_WA_PCX(offset)		(0x0001670000005000ull + ((offset) & 7) * 8)
+#define CVMX_SSO_WQ_INT			(0x0001670000001000ull)
+#define CVMX_SSO_WQ_INT_CNTX(offset)	(0x0001670000008000ull + ((offset) & 63) * 8)
+#define CVMX_SSO_WQ_INT_PC		(0x0001670000001020ull)
+#define CVMX_SSO_WQ_INT_THRX(offset)	(0x0001670000007000ull + ((offset) & 63) * 8)
+#define CVMX_SSO_WQ_IQ_DIS		(0x0001670000001010ull)
+#define CVMX_SSO_WS_CFG			(0x0001670000001088ull)
+#define CVMX_SSO_WS_ECO			(0x0001670000001048ull)
+#define CVMX_SSO_WS_PCX(offset)		(0x0001670000004000ull + ((offset) & 63) * 8)
+#define CVMX_SSO_XAQX_HEAD_NEXT(offset) (0x00016700000A0000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_XAQX_HEAD_PTR(offset)	(0x0001670000080000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_XAQX_TAIL_NEXT(offset) (0x00016700000B0000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_XAQX_TAIL_PTR(offset)	(0x0001670000090000ull + ((offset) & 255) * 8)
+#define CVMX_SSO_XAQ_AURA		(0x0001670000002100ull)
+#define CVMX_SSO_XAQ_LATENCY_PC		(0x00016700000020B8ull)
+#define CVMX_SSO_XAQ_REQ_PC		(0x00016700000020B0ull)
+
+/**
+ * cvmx_sso_active_cycles
+ *
+ * SSO_ACTIVE_CYCLES = SSO cycles SSO active
+ *
+ * This register counts every sclk cycle that the SSO clocks are active.
+ * **NOTE: Added in pass 2.0
+ */
+union cvmx_sso_active_cycles {
+	u64 u64;
+	struct cvmx_sso_active_cycles_s {
+		u64 act_cyc : 64;
+	} s;
+	struct cvmx_sso_active_cycles_s cn68xx;
+};
+
+typedef union cvmx_sso_active_cycles cvmx_sso_active_cycles_t;
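+
+/*
+ * Illustrative sketch, not part of the imported header: CSRs in this
+ * file are typically accessed through their union types so that fields
+ * can be read by name. Assumes the SDK's cvmx_read_csr() accessor.
+ */
+static inline u64 example_sso_active_cycles(void)
+{
+	cvmx_sso_active_cycles_t act;
+
+	act.u64 = cvmx_read_csr(CVMX_SSO_ACTIVE_CYCLES);
+	return act.s.act_cyc;	/* cycles with the SSO clocks active */
+}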
+
+/**
+ * cvmx_sso_active_cycles#
+ *
+ * This register counts every coprocessor clock (SCLK) cycle that the SSO clocks are active.
+ *
+ */
+union cvmx_sso_active_cyclesx {
+	u64 u64;
+	struct cvmx_sso_active_cyclesx_s {
+		u64 act_cyc : 64;
+	} s;
+	struct cvmx_sso_active_cyclesx_s cn73xx;
+	struct cvmx_sso_active_cyclesx_s cn78xx;
+	struct cvmx_sso_active_cyclesx_s cn78xxp1;
+	struct cvmx_sso_active_cyclesx_s cnf75xx;
+};
+
+typedef union cvmx_sso_active_cyclesx cvmx_sso_active_cyclesx_t;
+
+/**
+ * cvmx_sso_aw_add
+ */
+union cvmx_sso_aw_add {
+	u64 u64;
+	struct cvmx_sso_aw_add_s {
+		u64 reserved_30_63 : 34;
+		u64 rsvd_free : 14;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_sso_aw_add_s cn73xx;
+	struct cvmx_sso_aw_add_s cn78xx;
+	struct cvmx_sso_aw_add_s cn78xxp1;
+	struct cvmx_sso_aw_add_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_add cvmx_sso_aw_add_t;
+
+/**
+ * cvmx_sso_aw_cfg
+ *
+ * This register controls the operation of the add-work block (AW).
+ *
+ */
+union cvmx_sso_aw_cfg {
+	u64 u64;
+	struct cvmx_sso_aw_cfg_s {
+		u64 reserved_9_63 : 55;
+		u64 ldt_short : 1;
+		u64 lol : 1;
+		u64 xaq_alloc_dis : 1;
+		u64 ocla_bp : 1;
+		u64 xaq_byp_dis : 1;
+		u64 stt : 1;
+		u64 ldt : 1;
+		u64 ldwb : 1;
+		u64 rwen : 1;
+	} s;
+	struct cvmx_sso_aw_cfg_s cn73xx;
+	struct cvmx_sso_aw_cfg_s cn78xx;
+	struct cvmx_sso_aw_cfg_s cn78xxp1;
+	struct cvmx_sso_aw_cfg_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_cfg cvmx_sso_aw_cfg_t;
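+
+/*
+ * Illustrative sketch, not part of the imported header: a typical
+ * read-modify-write of this config register, here toggling XAQ buffer
+ * allocation. The field choice is only an example; assumes the SDK's
+ * cvmx_read_csr()/cvmx_write_csr() accessors.
+ */
+static inline void example_sso_aw_xaq_alloc_dis(int dis)
+{
+	cvmx_sso_aw_cfg_t cfg;
+
+	cfg.u64 = cvmx_read_csr(CVMX_SSO_AW_CFG);
+	cfg.s.xaq_alloc_dis = dis ? 1 : 0;
+	cvmx_write_csr(CVMX_SSO_AW_CFG, cfg.u64);
+}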
+
+/**
+ * cvmx_sso_aw_eco
+ */
+union cvmx_sso_aw_eco {
+	u64 u64;
+	struct cvmx_sso_aw_eco_s {
+		u64 reserved_8_63 : 56;
+		u64 eco_rw : 8;
+	} s;
+	struct cvmx_sso_aw_eco_s cn73xx;
+	struct cvmx_sso_aw_eco_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_eco cvmx_sso_aw_eco_t;
+
+/**
+ * cvmx_sso_aw_read_arb
+ *
+ * This register fine tunes the AW read arbiter and is for diagnostic use.
+ *
+ */
+union cvmx_sso_aw_read_arb {
+	u64 u64;
+	struct cvmx_sso_aw_read_arb_s {
+		u64 reserved_30_63 : 34;
+		u64 xaq_lev : 6;
+		u64 reserved_21_23 : 3;
+		u64 xaq_min : 5;
+		u64 reserved_14_15 : 2;
+		u64 aw_tag_lev : 6;
+		u64 reserved_5_7 : 3;
+		u64 aw_tag_min : 5;
+	} s;
+	struct cvmx_sso_aw_read_arb_s cn73xx;
+	struct cvmx_sso_aw_read_arb_s cn78xx;
+	struct cvmx_sso_aw_read_arb_s cn78xxp1;
+	struct cvmx_sso_aw_read_arb_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_read_arb cvmx_sso_aw_read_arb_t;
+
+/**
+ * cvmx_sso_aw_status
+ *
+ * This register indicates the status of the add-work block (AW).
+ *
+ */
+union cvmx_sso_aw_status {
+	u64 u64;
+	struct cvmx_sso_aw_status_s {
+		u64 reserved_6_63 : 58;
+		u64 xaq_buf_cached : 6;
+	} s;
+	struct cvmx_sso_aw_status_s cn73xx;
+	struct cvmx_sso_aw_status_s cn78xx;
+	struct cvmx_sso_aw_status_s cn78xxp1;
+	struct cvmx_sso_aw_status_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_status cvmx_sso_aw_status_t;
+
+/**
+ * cvmx_sso_aw_tag_latency_pc
+ */
+union cvmx_sso_aw_tag_latency_pc {
+	u64 u64;
+	struct cvmx_sso_aw_tag_latency_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_sso_aw_tag_latency_pc_s cn73xx;
+	struct cvmx_sso_aw_tag_latency_pc_s cn78xx;
+	struct cvmx_sso_aw_tag_latency_pc_s cn78xxp1;
+	struct cvmx_sso_aw_tag_latency_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_tag_latency_pc cvmx_sso_aw_tag_latency_pc_t;
+
+/**
+ * cvmx_sso_aw_tag_req_pc
+ */
+union cvmx_sso_aw_tag_req_pc {
+	u64 u64;
+	struct cvmx_sso_aw_tag_req_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_sso_aw_tag_req_pc_s cn73xx;
+	struct cvmx_sso_aw_tag_req_pc_s cn78xx;
+	struct cvmx_sso_aw_tag_req_pc_s cn78xxp1;
+	struct cvmx_sso_aw_tag_req_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_tag_req_pc cvmx_sso_aw_tag_req_pc_t;
+
+/**
+ * cvmx_sso_aw_we
+ */
+union cvmx_sso_aw_we {
+	u64 u64;
+	struct cvmx_sso_aw_we_s {
+		u64 reserved_29_63 : 35;
+		u64 rsvd_free : 13;
+		u64 reserved_13_15 : 3;
+		u64 free_cnt : 13;
+	} s;
+	struct cvmx_sso_aw_we_s cn73xx;
+	struct cvmx_sso_aw_we_s cn78xx;
+	struct cvmx_sso_aw_we_s cn78xxp1;
+	struct cvmx_sso_aw_we_s cnf75xx;
+};
+
+typedef union cvmx_sso_aw_we cvmx_sso_aw_we_t;
+
+/**
+ * cvmx_sso_bist_stat
+ *
+ * SSO_BIST_STAT = SSO BIST Status Register
+ *
+ * Contains the BIST status for the SSO memories ('0' = pass, '1' = fail).
+ * Note that PP BIST status is not reported here as it was in previous designs.
+ *
+ *   There may be more for DDR interface buffers.
+ *   It's possible that a RAM will be used for SSO_PP_QOS_RND.
+ */
+union cvmx_sso_bist_stat {
+	u64 u64;
+	struct cvmx_sso_bist_stat_s {
+		u64 reserved_62_63 : 2;
+		u64 odu_pref : 2;
+		u64 reserved_54_59 : 6;
+		u64 fptr : 2;
+		u64 reserved_45_51 : 7;
+		u64 rwo_dat : 1;
+		u64 rwo : 2;
+		u64 reserved_35_41 : 7;
+		u64 rwi_dat : 1;
+		u64 reserved_32_33 : 2;
+		u64 soc : 1;
+		u64 reserved_28_30 : 3;
+		u64 ncbo : 4;
+		u64 reserved_21_23 : 3;
+		u64 index : 1;
+		u64 reserved_17_19 : 3;
+		u64 fidx : 1;
+		u64 reserved_10_15 : 6;
+		u64 pend : 2;
+		u64 reserved_2_7 : 6;
+		u64 oth : 2;
+	} s;
+	struct cvmx_sso_bist_stat_s cn68xx;
+	struct cvmx_sso_bist_stat_cn68xxp1 {
+		u64 reserved_54_63 : 10;
+		u64 fptr : 2;
+		u64 reserved_45_51 : 7;
+		u64 rwo_dat : 1;
+		u64 rwo : 2;
+		u64 reserved_35_41 : 7;
+		u64 rwi_dat : 1;
+		u64 reserved_32_33 : 2;
+		u64 soc : 1;
+		u64 reserved_28_30 : 3;
+		u64 ncbo : 4;
+		u64 reserved_21_23 : 3;
+		u64 index : 1;
+		u64 reserved_17_19 : 3;
+		u64 fidx : 1;
+		u64 reserved_10_15 : 6;
+		u64 pend : 2;
+		u64 reserved_2_7 : 6;
+		u64 oth : 2;
+	} cn68xxp1;
+};
+
+typedef union cvmx_sso_bist_stat cvmx_sso_bist_stat_t;
+
+/**
+ * cvmx_sso_bist_status0
+ *
+ * Contains the BIST status for the SSO memories.
+ *
+ */
+union cvmx_sso_bist_status0 {
+	u64 u64;
+	struct cvmx_sso_bist_status0_s {
+		u64 reserved_10_63 : 54;
+		u64 bist : 10;
+	} s;
+	struct cvmx_sso_bist_status0_s cn73xx;
+	struct cvmx_sso_bist_status0_s cn78xx;
+	struct cvmx_sso_bist_status0_s cn78xxp1;
+	struct cvmx_sso_bist_status0_s cnf75xx;
+};
+
+typedef union cvmx_sso_bist_status0 cvmx_sso_bist_status0_t;
+
+/**
+ * cvmx_sso_bist_status1
+ *
+ * Contains the BIST status for the SSO memories.
+ *
+ */
+union cvmx_sso_bist_status1 {
+	u64 u64;
+	struct cvmx_sso_bist_status1_s {
+		u64 reserved_7_63 : 57;
+		u64 bist : 7;
+	} s;
+	struct cvmx_sso_bist_status1_s cn73xx;
+	struct cvmx_sso_bist_status1_s cn78xx;
+	struct cvmx_sso_bist_status1_s cn78xxp1;
+	struct cvmx_sso_bist_status1_s cnf75xx;
+};
+
+typedef union cvmx_sso_bist_status1 cvmx_sso_bist_status1_t;
+
+/**
+ * cvmx_sso_bist_status2
+ *
+ * Contains the BIST status for the SSO memories.
+ *
+ */
+union cvmx_sso_bist_status2 {
+	u64 u64;
+	struct cvmx_sso_bist_status2_s {
+		u64 reserved_9_63 : 55;
+		u64 bist : 9;
+	} s;
+	struct cvmx_sso_bist_status2_s cn73xx;
+	struct cvmx_sso_bist_status2_s cn78xx;
+	struct cvmx_sso_bist_status2_s cn78xxp1;
+	struct cvmx_sso_bist_status2_s cnf75xx;
+};
+
+typedef union cvmx_sso_bist_status2 cvmx_sso_bist_status2_t;
+
+/**
+ * cvmx_sso_cfg
+ *
+ * SSO_CFG = SSO Config
+ *
+ * This register is an assortment of various SSO configuration bits.
+ */
+union cvmx_sso_cfg {
+	u64 u64;
+	struct cvmx_sso_cfg_s {
+		u64 reserved_16_63 : 48;
+		u64 qck_gw_rsp_adj : 3;
+		u64 qck_gw_rsp_dis : 1;
+		u64 qck_sw_dis : 1;
+		u64 rwq_alloc_dis : 1;
+		u64 soc_ccam_dis : 1;
+		u64 sso_cclk_dis : 1;
+		u64 rwo_flush : 1;
+		u64 wfe_thr : 1;
+		u64 rwio_byp_dis : 1;
+		u64 rwq_byp_dis : 1;
+		u64 stt : 1;
+		u64 ldt : 1;
+		u64 dwb : 1;
+		u64 rwen : 1;
+	} s;
+	struct cvmx_sso_cfg_s cn68xx;
+	struct cvmx_sso_cfg_cn68xxp1 {
+		u64 reserved_8_63 : 56;
+		u64 rwo_flush : 1;
+		u64 wfe_thr : 1;
+		u64 rwio_byp_dis : 1;
+		u64 rwq_byp_dis : 1;
+		u64 stt : 1;
+		u64 ldt : 1;
+		u64 dwb : 1;
+		u64 rwen : 1;
+	} cn68xxp1;
+};
+
+typedef union cvmx_sso_cfg cvmx_sso_cfg_t;
+
+/**
+ * cvmx_sso_ds_pc
+ *
+ * SSO_DS_PC = SSO De-Schedule Performance Counter
+ *
+ * Counts the number of de-schedule requests.
+ * Counter rolls over through zero when max value exceeded.
+ */
+union cvmx_sso_ds_pc {
+	u64 u64;
+	struct cvmx_sso_ds_pc_s {
+		u64 ds_pc : 64;
+	} s;
+	struct cvmx_sso_ds_pc_s cn68xx;
+	struct cvmx_sso_ds_pc_s cn68xxp1;
+};
+
+typedef union cvmx_sso_ds_pc cvmx_sso_ds_pc_t;
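+
+/*
+ * Illustrative sketch, not part of the imported header: since the
+ * counter wraps through zero, deltas are taken in unsigned 64-bit
+ * arithmetic, which stays correct across a wrap. Assumes the SDK's
+ * cvmx_read_csr() accessor.
+ */
+static inline u64 example_sso_ds_pc_delta(u64 prev)
+{
+	cvmx_sso_ds_pc_t pc;
+
+	pc.u64 = cvmx_read_csr(CVMX_SSO_DS_PC);
+	return pc.s.ds_pc - prev;	/* well-defined even after a wrap */
+}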
+
+/**
+ * cvmx_sso_ecc_ctl0
+ */
+union cvmx_sso_ecc_ctl0 {
+	u64 u64;
+	struct cvmx_sso_ecc_ctl0_s {
+		u64 reserved_30_63 : 34;
+		u64 toaqt_flip : 2;
+		u64 toaqt_cdis : 1;
+		u64 toaqh_flip : 2;
+		u64 toaqh_cdis : 1;
+		u64 tiaqt_flip : 2;
+		u64 tiaqt_cdis : 1;
+		u64 tiaqh_flip : 2;
+		u64 tiaqh_cdis : 1;
+		u64 llm_flip : 2;
+		u64 llm_cdis : 1;
+		u64 inp_flip : 2;
+		u64 inp_cdis : 1;
+		u64 qtc_flip : 2;
+		u64 qtc_cdis : 1;
+		u64 xaq_flip : 2;
+		u64 xaq_cdis : 1;
+		u64 fff_flip : 2;
+		u64 fff_cdis : 1;
+		u64 wes_flip : 2;
+		u64 wes_cdis : 1;
+	} s;
+	struct cvmx_sso_ecc_ctl0_s cn73xx;
+	struct cvmx_sso_ecc_ctl0_s cn78xx;
+	struct cvmx_sso_ecc_ctl0_s cn78xxp1;
+	struct cvmx_sso_ecc_ctl0_s cnf75xx;
+};
+
+typedef union cvmx_sso_ecc_ctl0 cvmx_sso_ecc_ctl0_t;
+
+/**
+ * cvmx_sso_ecc_ctl1
+ */
+union cvmx_sso_ecc_ctl1 {
+	u64 u64;
+	struct cvmx_sso_ecc_ctl1_s {
+		u64 reserved_21_63 : 43;
+		u64 thrint_flip : 2;
+		u64 thrint_cdis : 1;
+		u64 mask_flip : 2;
+		u64 mask_cdis : 1;
+		u64 gdw_flip : 2;
+		u64 gdw_cdis : 1;
+		u64 qidx_flip : 2;
+		u64 qidx_cdis : 1;
+		u64 tptr_flip : 2;
+		u64 tptr_cdis : 1;
+		u64 hptr_flip : 2;
+		u64 hptr_cdis : 1;
+		u64 cntr_flip : 2;
+		u64 cntr_cdis : 1;
+	} s;
+	struct cvmx_sso_ecc_ctl1_s cn73xx;
+	struct cvmx_sso_ecc_ctl1_s cn78xx;
+	struct cvmx_sso_ecc_ctl1_s cn78xxp1;
+	struct cvmx_sso_ecc_ctl1_s cnf75xx;
+};
+
+typedef union cvmx_sso_ecc_ctl1 cvmx_sso_ecc_ctl1_t;
+
+/**
+ * cvmx_sso_ecc_ctl2
+ */
+union cvmx_sso_ecc_ctl2 {
+	u64 u64;
+	struct cvmx_sso_ecc_ctl2_s {
+		u64 reserved_15_63 : 49;
+		u64 ncbo_flip : 2;
+		u64 ncbo_cdis : 1;
+		u64 pnd_flip : 2;
+		u64 pnd_cdis : 1;
+		u64 oth_flip : 2;
+		u64 oth_cdis : 1;
+		u64 nidx_flip : 2;
+		u64 nidx_cdis : 1;
+		u64 pidx_flip : 2;
+		u64 pidx_cdis : 1;
+	} s;
+	struct cvmx_sso_ecc_ctl2_s cn73xx;
+	struct cvmx_sso_ecc_ctl2_s cn78xx;
+	struct cvmx_sso_ecc_ctl2_s cn78xxp1;
+	struct cvmx_sso_ecc_ctl2_s cnf75xx;
+};
+
+typedef union cvmx_sso_ecc_ctl2 cvmx_sso_ecc_ctl2_t;
+
+/**
+ * cvmx_sso_err
+ *
+ * SSO_ERR = SSO Error Register
+ *
+ * Contains ECC and other misc error bits.
+ *
+ * <45> The free page error bit will assert when SSO_FPAGE_CNT <= 16 and
+ *      SSO_CFG[RWEN] is 1.  Software will want to disable the interrupt
+ *      associated with this error when recovering SSO pointers from the
+ *      FPA and SSO.
+ *
+ * This register also contains the illegal operation error bits:
+ *
+ * <42> Received ADDWQ with tag specified as EMPTY
+ * <41> Received illegal opcode
+ * <40> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/ALLOC_WE
+ *      from WS with CLR_NSCHED pending
+ * <39> Received CLR_NSCHED
+ *      from WS with SWTAG_DESCH/DESCH/CLR_NSCHED pending
+ * <38> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/ALLOC_WE
+ *      from WS with ALLOC_WE pending
+ * <37> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP/GET_WORK/ALLOC_WE/CLR_NSCHED
+ *      from WS with GET_WORK pending
+ * <36> Received SWTAG_FULL/SWTAG_DESCH
+ *      with tag specified as UNSCHEDULED
+ * <35> Received SWTAG/SWTAG_FULL/SWTAG_DESCH
+ *      with tag specified as EMPTY
+ * <34> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/GET_WORK
+ *      from WS with pending tag switch to ORDERED or ATOMIC
+ * <33> Received SWTAG/SWTAG_DESCH/DESCH/UPD_WQP
+ *      from WS in UNSCHEDULED state
+ * <32> Received SWTAG/SWTAG_FULL/SWTAG_DESCH/DESCH/UPD_WQP
+ *      from WS in EMPTY state
+ */
+union cvmx_sso_err {
+	u64 u64;
+	struct cvmx_sso_err_s {
+		u64 reserved_48_63 : 16;
+		u64 bfp : 1;
+		u64 awe : 1;
+		u64 fpe : 1;
+		u64 reserved_43_44 : 2;
+		u64 iop : 11;
+		u64 reserved_12_31 : 20;
+		u64 pnd_dbe0 : 1;
+		u64 pnd_sbe0 : 1;
+		u64 pnd_dbe1 : 1;
+		u64 pnd_sbe1 : 1;
+		u64 oth_dbe0 : 1;
+		u64 oth_sbe0 : 1;
+		u64 oth_dbe1 : 1;
+		u64 oth_sbe1 : 1;
+		u64 idx_dbe : 1;
+		u64 idx_sbe : 1;
+		u64 fidx_dbe : 1;
+		u64 fidx_sbe : 1;
+	} s;
+	struct cvmx_sso_err_s cn68xx;
+	struct cvmx_sso_err_s cn68xxp1;
+};
+
+typedef union cvmx_sso_err cvmx_sso_err_t;
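+
+/*
+ * Illustrative sketch, not part of the imported header: IOP holds the
+ * eleven illegal-operation flags listed above (register bits <42:32>),
+ * so bit n of IOP corresponds to error bit <32+n>. Assumes the SDK's
+ * cvmx_read_csr() accessor.
+ */
+static inline int example_sso_err_first_iop(void)
+{
+	cvmx_sso_err_t err;
+	int i;
+
+	err.u64 = cvmx_read_csr(CVMX_SSO_ERR);
+	for (i = 0; i < 11; i++)
+		if (err.s.iop & (1ull << i))
+			return 32 + i;	/* first pending illegal-op bit */
+	return -1;			/* none pending */
+}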
+
+/**
+ * cvmx_sso_err0
+ *
+ * This register contains ECC and other miscellaneous error bits.
+ *
+ */
+union cvmx_sso_err0 {
+	u64 u64;
+	struct cvmx_sso_err0_s {
+		u64 reserved_52_63 : 12;
+		u64 toaqt_dbe : 1;
+		u64 toaqt_sbe : 1;
+		u64 toaqh_dbe : 1;
+		u64 toaqh_sbe : 1;
+		u64 tiaqt_dbe : 1;
+		u64 tiaqt_sbe : 1;
+		u64 tiaqh_dbe : 1;
+		u64 tiaqh_sbe : 1;
+		u64 llm_dbe : 1;
+		u64 llm_sbe : 1;
+		u64 inp_dbe : 1;
+		u64 inp_sbe : 1;
+		u64 qtc_dbe : 1;
+		u64 qtc_sbe : 1;
+		u64 xaq_dbe : 1;
+		u64 xaq_sbe : 1;
+		u64 fff_dbe : 1;
+		u64 fff_sbe : 1;
+		u64 wes_dbe : 1;
+		u64 wes_sbe : 1;
+		u64 reserved_6_31 : 26;
+		u64 addwq_dropped : 1;
+		u64 awempty : 1;
+		u64 grpdis : 1;
+		u64 bfp : 1;
+		u64 awe : 1;
+		u64 fpe : 1;
+	} s;
+	struct cvmx_sso_err0_s cn73xx;
+	struct cvmx_sso_err0_s cn78xx;
+	struct cvmx_sso_err0_s cn78xxp1;
+	struct cvmx_sso_err0_s cnf75xx;
+};
+
+typedef union cvmx_sso_err0 cvmx_sso_err0_t;
+
+/**
+ * cvmx_sso_err1
+ *
+ * This register contains ECC and other miscellaneous error bits.
+ *
+ */
+union cvmx_sso_err1 {
+	u64 u64;
+	struct cvmx_sso_err1_s {
+		u64 reserved_14_63 : 50;
+		u64 thrint_dbe : 1;
+		u64 thrint_sbe : 1;
+		u64 mask_dbe : 1;
+		u64 mask_sbe : 1;
+		u64 gdw_dbe : 1;
+		u64 gdw_sbe : 1;
+		u64 qidx_dbe : 1;
+		u64 qidx_sbe : 1;
+		u64 tptr_dbe : 1;
+		u64 tptr_sbe : 1;
+		u64 hptr_dbe : 1;
+		u64 hptr_sbe : 1;
+		u64 cntr_dbe : 1;
+		u64 cntr_sbe : 1;
+	} s;
+	struct cvmx_sso_err1_s cn73xx;
+	struct cvmx_sso_err1_s cn78xx;
+	struct cvmx_sso_err1_s cn78xxp1;
+	struct cvmx_sso_err1_s cnf75xx;
+};
+
+typedef union cvmx_sso_err1 cvmx_sso_err1_t;
+
+/**
+ * cvmx_sso_err2
+ *
+ * This register contains ECC and other miscellaneous error bits.
+ *
+ */
+union cvmx_sso_err2 {
+	u64 u64;
+	struct cvmx_sso_err2_s {
+		u64 reserved_42_63 : 22;
+		u64 ncbo_dbe : 1;
+		u64 ncbo_sbe : 1;
+		u64 pnd_dbe : 1;
+		u64 pnd_sbe : 1;
+		u64 oth_dbe : 1;
+		u64 oth_sbe : 1;
+		u64 nidx_dbe : 1;
+		u64 nidx_sbe : 1;
+		u64 pidx_dbe : 1;
+		u64 pidx_sbe : 1;
+		u64 reserved_13_31 : 19;
+		u64 iop : 13;
+	} s;
+	struct cvmx_sso_err2_s cn73xx;
+	struct cvmx_sso_err2_s cn78xx;
+	struct cvmx_sso_err2_s cn78xxp1;
+	struct cvmx_sso_err2_s cnf75xx;
+};
+
+typedef union cvmx_sso_err2 cvmx_sso_err2_t;
+
+/**
+ * cvmx_sso_err_enb
+ *
+ * SSO_ERR_ENB = SSO Error Enable Register
+ *
+ * Contains the interrupt enables corresponding to SSO_ERR.
+ */
+union cvmx_sso_err_enb {
+	u64 u64;
+	struct cvmx_sso_err_enb_s {
+		u64 reserved_48_63 : 16;
+		u64 bfp_ie : 1;
+		u64 awe_ie : 1;
+		u64 fpe_ie : 1;
+		u64 reserved_43_44 : 2;
+		u64 iop_ie : 11;
+		u64 reserved_12_31 : 20;
+		u64 pnd_dbe0_ie : 1;
+		u64 pnd_sbe0_ie : 1;
+		u64 pnd_dbe1_ie : 1;
+		u64 pnd_sbe1_ie : 1;
+		u64 oth_dbe0_ie : 1;
+		u64 oth_sbe0_ie : 1;
+		u64 oth_dbe1_ie : 1;
+		u64 oth_sbe1_ie : 1;
+		u64 idx_dbe_ie : 1;
+		u64 idx_sbe_ie : 1;
+		u64 fidx_dbe_ie : 1;
+		u64 fidx_sbe_ie : 1;
+	} s;
+	struct cvmx_sso_err_enb_s cn68xx;
+	struct cvmx_sso_err_enb_s cn68xxp1;
+};
+
+typedef union cvmx_sso_err_enb cvmx_sso_err_enb_t;
+
+/**
+ * cvmx_sso_fidx_ecc_ctl
+ *
+ * SSO_FIDX_ECC_CTL = SSO FIDX ECC Control
+ *
+ */
+union cvmx_sso_fidx_ecc_ctl {
+	u64 u64;
+	struct cvmx_sso_fidx_ecc_ctl_s {
+		u64 reserved_3_63 : 61;
+		u64 flip_synd : 2;
+		u64 ecc_ena : 1;
+	} s;
+	struct cvmx_sso_fidx_ecc_ctl_s cn68xx;
+	struct cvmx_sso_fidx_ecc_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_sso_fidx_ecc_ctl cvmx_sso_fidx_ecc_ctl_t;
+
+/**
+ * cvmx_sso_fidx_ecc_st
+ *
+ * SSO_FIDX_ECC_ST = SSO FIDX ECC Status
+ *
+ */
+union cvmx_sso_fidx_ecc_st {
+	u64 u64;
+	struct cvmx_sso_fidx_ecc_st_s {
+		u64 reserved_27_63 : 37;
+		u64 addr : 11;
+		u64 reserved_9_15 : 7;
+		u64 syndrom : 5;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_sso_fidx_ecc_st_s cn68xx;
+	struct cvmx_sso_fidx_ecc_st_s cn68xxp1;
+};
+
+typedef union cvmx_sso_fidx_ecc_st cvmx_sso_fidx_ecc_st_t;
+
+/**
+ * cvmx_sso_fpage_cnt
+ *
+ * SSO_FPAGE_CNT = SSO Free Page Cnt
+ *
+ * This register keeps track of the number of free-page pointers available for use in external memory.
+ */
+union cvmx_sso_fpage_cnt {
+	u64 u64;
+	struct cvmx_sso_fpage_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 fpage_cnt : 32;
+	} s;
+	struct cvmx_sso_fpage_cnt_s cn68xx;
+	struct cvmx_sso_fpage_cnt_s cn68xxp1;
+};
+
+typedef union cvmx_sso_fpage_cnt cvmx_sso_fpage_cnt_t;
+
+/**
+ * cvmx_sso_grp#_aq_cnt
+ */
+union cvmx_sso_grpx_aq_cnt {
+	u64 u64;
+	struct cvmx_sso_grpx_aq_cnt_s {
+		u64 reserved_33_63 : 31;
+		u64 aq_cnt : 33;
+	} s;
+	struct cvmx_sso_grpx_aq_cnt_s cn73xx;
+	struct cvmx_sso_grpx_aq_cnt_s cn78xx;
+	struct cvmx_sso_grpx_aq_cnt_s cn78xxp1;
+	struct cvmx_sso_grpx_aq_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_aq_cnt cvmx_sso_grpx_aq_cnt_t;
+
+/**
+ * cvmx_sso_grp#_aq_thr
+ */
+union cvmx_sso_grpx_aq_thr {
+	u64 u64;
+	struct cvmx_sso_grpx_aq_thr_s {
+		u64 reserved_33_63 : 31;
+		u64 aq_thr : 33;
+	} s;
+	struct cvmx_sso_grpx_aq_thr_s cn73xx;
+	struct cvmx_sso_grpx_aq_thr_s cn78xx;
+	struct cvmx_sso_grpx_aq_thr_s cn78xxp1;
+	struct cvmx_sso_grpx_aq_thr_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_aq_thr cvmx_sso_grpx_aq_thr_t;
+
+/**
+ * cvmx_sso_grp#_ds_pc
+ *
+ * Counts the number of deschedule requests for each group. Counter rolls over through zero when
+ * max value exceeded.
+ */
+union cvmx_sso_grpx_ds_pc {
+	u64 u64;
+	struct cvmx_sso_grpx_ds_pc_s {
+		u64 cnt : 64;
+	} s;
+	struct cvmx_sso_grpx_ds_pc_s cn73xx;
+	struct cvmx_sso_grpx_ds_pc_s cn78xx;
+	struct cvmx_sso_grpx_ds_pc_s cn78xxp1;
+	struct cvmx_sso_grpx_ds_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_ds_pc cvmx_sso_grpx_ds_pc_t;
+
+/**
+ * cvmx_sso_grp#_ext_pc
+ *
+ * Counts the number of cache lines of WAEs sent to L2/DDR. Counter rolls over through zero when
+ * max value exceeded.
+ */
+union cvmx_sso_grpx_ext_pc {
+	u64 u64;
+	struct cvmx_sso_grpx_ext_pc_s {
+		u64 cnt : 64;
+	} s;
+	struct cvmx_sso_grpx_ext_pc_s cn73xx;
+	struct cvmx_sso_grpx_ext_pc_s cn78xx;
+	struct cvmx_sso_grpx_ext_pc_s cn78xxp1;
+	struct cvmx_sso_grpx_ext_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_ext_pc cvmx_sso_grpx_ext_pc_t;
+
+/**
+ * cvmx_sso_grp#_iaq_thr
+ *
+ * These registers contain the thresholds for allocating SSO in-unit admission queue entries, see
+ * In-Unit Thresholds.
+ */
+union cvmx_sso_grpx_iaq_thr {
+	u64 u64;
+	struct cvmx_sso_grpx_iaq_thr_s {
+		u64 reserved_61_63 : 3;
+		u64 grp_cnt : 13;
+		u64 reserved_45_47 : 3;
+		u64 max_thr : 13;
+		u64 reserved_13_31 : 19;
+		u64 rsvd_thr : 13;
+	} s;
+	struct cvmx_sso_grpx_iaq_thr_s cn73xx;
+	struct cvmx_sso_grpx_iaq_thr_s cn78xx;
+	struct cvmx_sso_grpx_iaq_thr_s cn78xxp1;
+	struct cvmx_sso_grpx_iaq_thr_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_iaq_thr cvmx_sso_grpx_iaq_thr_t;
+
+/**
+ * cvmx_sso_grp#_int
+ *
+ * These registers contain the per-group interrupts and are used to clear these
+ * interrupts. For more information on this register, refer to Interrupts.
+ */
+union cvmx_sso_grpx_int {
+	u64 u64;
+	struct cvmx_sso_grpx_int_s {
+		u64 exe_dis : 1;
+		u64 reserved_2_62 : 61;
+		u64 exe_int : 1;
+		u64 aq_int : 1;
+	} s;
+	struct cvmx_sso_grpx_int_s cn73xx;
+	struct cvmx_sso_grpx_int_s cn78xx;
+	struct cvmx_sso_grpx_int_s cn78xxp1;
+	struct cvmx_sso_grpx_int_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_int cvmx_sso_grpx_int_t;
+
+/**
+ * cvmx_sso_grp#_int_cnt
+ *
+ * These registers contain a read-only copy of the counts used to trigger work-queue interrupts
+ * (one per group). For more information on this register, refer to Interrupts.
+ */
+union cvmx_sso_grpx_int_cnt {
+	u64 u64;
+	struct cvmx_sso_grpx_int_cnt_s {
+		u64 reserved_61_63 : 3;
+		u64 tc_cnt : 13;
+		u64 reserved_45_47 : 3;
+		u64 cq_cnt : 13;
+		u64 reserved_29_31 : 3;
+		u64 ds_cnt : 13;
+		u64 reserved_13_15 : 3;
+		u64 iaq_cnt : 13;
+	} s;
+	struct cvmx_sso_grpx_int_cnt_s cn73xx;
+	struct cvmx_sso_grpx_int_cnt_s cn78xx;
+	struct cvmx_sso_grpx_int_cnt_s cn78xxp1;
+	struct cvmx_sso_grpx_int_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_int_cnt cvmx_sso_grpx_int_cnt_t;
+
+/**
+ * cvmx_sso_grp#_int_thr
+ *
+ * These registers contain the thresholds for enabling and setting work-queue interrupts (one per
+ * group). For more information on this register, refer to Interrupts.
+ */
+union cvmx_sso_grpx_int_thr {
+	u64 u64;
+	struct cvmx_sso_grpx_int_thr_s {
+		u64 tc_en : 1;
+		u64 reserved_61_62 : 2;
+		u64 tc_thr : 13;
+		u64 reserved_45_47 : 3;
+		u64 cq_thr : 13;
+		u64 reserved_29_31 : 3;
+		u64 ds_thr : 13;
+		u64 reserved_13_15 : 3;
+		u64 iaq_thr : 13;
+	} s;
+	struct cvmx_sso_grpx_int_thr_s cn73xx;
+	struct cvmx_sso_grpx_int_thr_s cn78xx;
+	struct cvmx_sso_grpx_int_thr_s cn78xxp1;
+	struct cvmx_sso_grpx_int_thr_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_int_thr cvmx_sso_grpx_int_thr_t;
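+
+/*
+ * Illustrative sketch, not part of the imported header: arming a
+ * work-queue interrupt for one group by programming CQ_THR. Group
+ * number and threshold are arbitrary example parameters; assumes the
+ * SDK's cvmx_write_csr() accessor.
+ */
+static inline void example_sso_grp_arm_cq_int(unsigned int grp, u64 thr)
+{
+	cvmx_sso_grpx_int_thr_t int_thr;
+
+	int_thr.u64 = 0;
+	int_thr.s.cq_thr = thr;	/* interrupt once the CQ count reaches thr */
+	cvmx_write_csr(CVMX_SSO_GRPX_INT_THR(grp), int_thr.u64);
+}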
+
+/**
+ * cvmx_sso_grp#_pri
+ *
+ * Controls the priority and group affinity arbitration for each group.
+ *
+ */
+union cvmx_sso_grpx_pri {
+	u64 u64;
+	struct cvmx_sso_grpx_pri_s {
+		u64 reserved_30_63 : 34;
+		u64 wgt_left : 6;
+		u64 reserved_22_23 : 2;
+		u64 weight : 6;
+		u64 reserved_12_15 : 4;
+		u64 affinity : 4;
+		u64 reserved_3_7 : 5;
+		u64 pri : 3;
+	} s;
+	struct cvmx_sso_grpx_pri_s cn73xx;
+	struct cvmx_sso_grpx_pri_s cn78xx;
+	struct cvmx_sso_grpx_pri_s cn78xxp1;
+	struct cvmx_sso_grpx_pri_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_pri cvmx_sso_grpx_pri_t;
+
+/**
+ * cvmx_sso_grp#_taq_thr
+ *
+ * These registers contain the thresholds for allocating SSO transitory admission queue storage
+ * buffers, see Transitory-Admission Thresholds.
+ */
+union cvmx_sso_grpx_taq_thr {
+	u64 u64;
+	struct cvmx_sso_grpx_taq_thr_s {
+		u64 reserved_59_63 : 5;
+		u64 grp_cnt : 11;
+		u64 reserved_43_47 : 5;
+		u64 max_thr : 11;
+		u64 reserved_11_31 : 21;
+		u64 rsvd_thr : 11;
+	} s;
+	struct cvmx_sso_grpx_taq_thr_s cn73xx;
+	struct cvmx_sso_grpx_taq_thr_s cn78xx;
+	struct cvmx_sso_grpx_taq_thr_s cn78xxp1;
+	struct cvmx_sso_grpx_taq_thr_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_taq_thr cvmx_sso_grpx_taq_thr_t;
+
+/**
+ * cvmx_sso_grp#_ts_pc
+ *
+ * Counts the number of tag switch requests for each group being switched to. Counter rolls over
+ * through zero when max value exceeded.
+ */
+union cvmx_sso_grpx_ts_pc {
+	u64 u64;
+	struct cvmx_sso_grpx_ts_pc_s {
+		u64 cnt : 64;
+	} s;
+	struct cvmx_sso_grpx_ts_pc_s cn73xx;
+	struct cvmx_sso_grpx_ts_pc_s cn78xx;
+	struct cvmx_sso_grpx_ts_pc_s cn78xxp1;
+	struct cvmx_sso_grpx_ts_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_ts_pc cvmx_sso_grpx_ts_pc_t;
+
+/**
+ * cvmx_sso_grp#_wa_pc
+ *
+ * Counts the number of add-new-work requests for each group. The counter rolls over through zero
+ * when the max value is exceeded.
+ */
+union cvmx_sso_grpx_wa_pc {
+	u64 u64;
+	struct cvmx_sso_grpx_wa_pc_s {
+		u64 cnt : 64;
+	} s;
+	struct cvmx_sso_grpx_wa_pc_s cn73xx;
+	struct cvmx_sso_grpx_wa_pc_s cn78xx;
+	struct cvmx_sso_grpx_wa_pc_s cn78xxp1;
+	struct cvmx_sso_grpx_wa_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_wa_pc cvmx_sso_grpx_wa_pc_t;
+
+/**
+ * cvmx_sso_grp#_ws_pc
+ *
+ * Counts the number of work schedules for each group. The counter rolls over through zero when
+ * the maximum value is exceeded.
+ */
+union cvmx_sso_grpx_ws_pc {
+	u64 u64;
+	struct cvmx_sso_grpx_ws_pc_s {
+		u64 cnt : 64;
+	} s;
+	struct cvmx_sso_grpx_ws_pc_s cn73xx;
+	struct cvmx_sso_grpx_ws_pc_s cn78xx;
+	struct cvmx_sso_grpx_ws_pc_s cn78xxp1;
+	struct cvmx_sso_grpx_ws_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_grpx_ws_pc cvmx_sso_grpx_ws_pc_t;
+
+/**
+ * cvmx_sso_gw_eco
+ */
+union cvmx_sso_gw_eco {
+	u64 u64;
+	struct cvmx_sso_gw_eco_s {
+		u64 reserved_8_63 : 56;
+		u64 eco_rw : 8;
+	} s;
+	struct cvmx_sso_gw_eco_s cn73xx;
+	struct cvmx_sso_gw_eco_s cnf75xx;
+};
+
+typedef union cvmx_sso_gw_eco cvmx_sso_gw_eco_t;
+
+/**
+ * cvmx_sso_gwe_cfg
+ *
+ * This register controls the operation of the get-work examiner (GWE).
+ *
+ */
+union cvmx_sso_gwe_cfg {
+	u64 u64;
+	struct cvmx_sso_gwe_cfg_s {
+		u64 reserved_12_63 : 52;
+		u64 odu_ffpgw_dis : 1;
+		u64 gwe_rfpgw_dis : 1;
+		u64 odu_prf_dis : 1;
+		u64 reserved_0_8 : 9;
+	} s;
+	struct cvmx_sso_gwe_cfg_cn68xx {
+		u64 reserved_12_63 : 52;
+		u64 odu_ffpgw_dis : 1;
+		u64 gwe_rfpgw_dis : 1;
+		u64 odu_prf_dis : 1;
+		u64 odu_bmp_dis : 1;
+		u64 reserved_5_7 : 3;
+		u64 gwe_hvy_dis : 1;
+		u64 gwe_poe : 1;
+		u64 gwe_fpor : 1;
+		u64 gwe_rah : 1;
+		u64 gwe_dis : 1;
+	} cn68xx;
+	struct cvmx_sso_gwe_cfg_cn68xxp1 {
+		u64 reserved_4_63 : 60;
+		u64 gwe_poe : 1;
+		u64 gwe_fpor : 1;
+		u64 gwe_rah : 1;
+		u64 gwe_dis : 1;
+	} cn68xxp1;
+	struct cvmx_sso_gwe_cfg_cn73xx {
+		u64 reserved_9_63 : 55;
+		u64 dis_wgt_credit : 1;
+		u64 ws_retries : 8;
+	} cn73xx;
+	struct cvmx_sso_gwe_cfg_cn73xx cn78xx;
+	struct cvmx_sso_gwe_cfg_cn73xx cn78xxp1;
+	struct cvmx_sso_gwe_cfg_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_gwe_cfg cvmx_sso_gwe_cfg_t;
+
+/**
+ * cvmx_sso_gwe_random
+ *
+ * This register contains the random search start position for the get-work examiner (GWE).
+ *
+ */
+union cvmx_sso_gwe_random {
+	u64 u64;
+	struct cvmx_sso_gwe_random_s {
+		u64 reserved_16_63 : 48;
+		u64 rnd : 16;
+	} s;
+	struct cvmx_sso_gwe_random_s cn73xx;
+	struct cvmx_sso_gwe_random_s cn78xx;
+	struct cvmx_sso_gwe_random_s cn78xxp1;
+	struct cvmx_sso_gwe_random_s cnf75xx;
+};
+
+typedef union cvmx_sso_gwe_random cvmx_sso_gwe_random_t;
+
+/**
+ * cvmx_sso_idx_ecc_ctl
+ *
+ * SSO_IDX_ECC_CTL = SSO IDX ECC Control
+ *
+ */
+union cvmx_sso_idx_ecc_ctl {
+	u64 u64;
+	struct cvmx_sso_idx_ecc_ctl_s {
+		u64 reserved_3_63 : 61;
+		u64 flip_synd : 2;
+		u64 ecc_ena : 1;
+	} s;
+	struct cvmx_sso_idx_ecc_ctl_s cn68xx;
+	struct cvmx_sso_idx_ecc_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_sso_idx_ecc_ctl cvmx_sso_idx_ecc_ctl_t;
+
+/**
+ * cvmx_sso_idx_ecc_st
+ *
+ * SSO_IDX_ECC_ST = SSO IDX ECC Status
+ *
+ */
+union cvmx_sso_idx_ecc_st {
+	u64 u64;
+	struct cvmx_sso_idx_ecc_st_s {
+		u64 reserved_27_63 : 37;
+		u64 addr : 11;
+		u64 reserved_9_15 : 7;
+		u64 syndrom : 5;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_sso_idx_ecc_st_s cn68xx;
+	struct cvmx_sso_idx_ecc_st_s cn68xxp1;
+};
+
+typedef union cvmx_sso_idx_ecc_st cvmx_sso_idx_ecc_st_t;
+
+/**
+ * cvmx_sso_ient#_links
+ *
+ * Returns unit memory status for an index.
+ *
+ */
+union cvmx_sso_ientx_links {
+	u64 u64;
+	struct cvmx_sso_ientx_links_s {
+		u64 reserved_28_63 : 36;
+		u64 prev_index : 12;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_sso_ientx_links_cn73xx {
+		u64 reserved_26_63 : 38;
+		u64 prev_index : 10;
+		u64 reserved_11_15 : 5;
+		u64 next_index_vld : 1;
+		u64 next_index : 10;
+	} cn73xx;
+	struct cvmx_sso_ientx_links_cn78xx {
+		u64 reserved_28_63 : 36;
+		u64 prev_index : 12;
+		u64 reserved_13_15 : 3;
+		u64 next_index_vld : 1;
+		u64 next_index : 12;
+	} cn78xx;
+	struct cvmx_sso_ientx_links_cn78xx cn78xxp1;
+	struct cvmx_sso_ientx_links_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_ientx_links cvmx_sso_ientx_links_t;
+
+/**
+ * cvmx_sso_ient#_pendtag
+ *
+ * Returns unit memory status for an index.
+ *
+ */
+union cvmx_sso_ientx_pendtag {
+	u64 u64;
+	struct cvmx_sso_ientx_pendtag_s {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s;
+	struct cvmx_sso_ientx_pendtag_s cn73xx;
+	struct cvmx_sso_ientx_pendtag_s cn78xx;
+	struct cvmx_sso_ientx_pendtag_s cn78xxp1;
+	struct cvmx_sso_ientx_pendtag_s cnf75xx;
+};
+
+typedef union cvmx_sso_ientx_pendtag cvmx_sso_ientx_pendtag_t;
+
+/**
+ * cvmx_sso_ient#_qlinks
+ *
+ * Returns unit memory status for an index.
+ *
+ */
+union cvmx_sso_ientx_qlinks {
+	u64 u64;
+	struct cvmx_sso_ientx_qlinks_s {
+		u64 reserved_12_63 : 52;
+		u64 next_index : 12;
+	} s;
+	struct cvmx_sso_ientx_qlinks_s cn73xx;
+	struct cvmx_sso_ientx_qlinks_s cn78xx;
+	struct cvmx_sso_ientx_qlinks_s cn78xxp1;
+	struct cvmx_sso_ientx_qlinks_s cnf75xx;
+};
+
+typedef union cvmx_sso_ientx_qlinks cvmx_sso_ientx_qlinks_t;
+
+/**
+ * cvmx_sso_ient#_tag
+ *
+ * Returns unit memory status for an index.
+ *
+ */
+union cvmx_sso_ientx_tag {
+	u64 u64;
+	struct cvmx_sso_ientx_tag_s {
+		u64 reserved_39_63 : 25;
+		u64 tailc : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s;
+	struct cvmx_sso_ientx_tag_s cn73xx;
+	struct cvmx_sso_ientx_tag_s cn78xx;
+	struct cvmx_sso_ientx_tag_s cn78xxp1;
+	struct cvmx_sso_ientx_tag_s cnf75xx;
+};
+
+typedef union cvmx_sso_ientx_tag cvmx_sso_ientx_tag_t;
+
+/**
+ * cvmx_sso_ient#_wqpgrp
+ *
+ * Returns unit memory status for an index.
+ *
+ */
+union cvmx_sso_ientx_wqpgrp {
+	u64 u64;
+	struct cvmx_sso_ientx_wqpgrp_s {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s;
+	struct cvmx_sso_ientx_wqpgrp_cn73xx {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_56_59 : 4;
+		u64 grp : 8;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} cn73xx;
+	struct cvmx_sso_ientx_wqpgrp_s cn78xx;
+	struct cvmx_sso_ientx_wqpgrp_s cn78xxp1;
+	struct cvmx_sso_ientx_wqpgrp_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_ientx_wqpgrp cvmx_sso_ientx_wqpgrp_t;
+
+/**
+ * cvmx_sso_ipl_conf#
+ *
+ * Returns list status for the conflicted list indexed by group.  Register
+ * fields are identical to those in SSO_IPL_IAQ() below.
+ */
+union cvmx_sso_ipl_confx {
+	u64 u64;
+	struct cvmx_sso_ipl_confx_s {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_25_25 : 1;
+		u64 queue_head : 12;
+		u64 reserved_12_12 : 1;
+		u64 queue_tail : 12;
+	} s;
+	struct cvmx_sso_ipl_confx_s cn73xx;
+	struct cvmx_sso_ipl_confx_s cn78xx;
+	struct cvmx_sso_ipl_confx_s cn78xxp1;
+	struct cvmx_sso_ipl_confx_s cnf75xx;
+};
+
+typedef union cvmx_sso_ipl_confx cvmx_sso_ipl_confx_t;
+
+/**
+ * cvmx_sso_ipl_desched#
+ *
+ * Returns list status for the deschedule list indexed by group.  Register
+ * fields are identical to those in SSO_IPL_IAQ() below.
+ */
+union cvmx_sso_ipl_deschedx {
+	u64 u64;
+	struct cvmx_sso_ipl_deschedx_s {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_25_25 : 1;
+		u64 queue_head : 12;
+		u64 reserved_12_12 : 1;
+		u64 queue_tail : 12;
+	} s;
+	struct cvmx_sso_ipl_deschedx_s cn73xx;
+	struct cvmx_sso_ipl_deschedx_s cn78xx;
+	struct cvmx_sso_ipl_deschedx_s cn78xxp1;
+	struct cvmx_sso_ipl_deschedx_s cnf75xx;
+};
+
+typedef union cvmx_sso_ipl_deschedx cvmx_sso_ipl_deschedx_t;
+
+/**
+ * cvmx_sso_ipl_free#
+ *
+ * Returns list status.
+ *
+ */
+union cvmx_sso_ipl_freex {
+	u64 u64;
+	struct cvmx_sso_ipl_freex_s {
+		u64 reserved_62_63 : 2;
+		u64 qnum_head : 3;
+		u64 qnum_tail : 3;
+		u64 reserved_28_55 : 28;
+		u64 queue_val : 1;
+		u64 reserved_25_26 : 2;
+		u64 queue_head : 12;
+		u64 reserved_12_12 : 1;
+		u64 queue_tail : 12;
+	} s;
+	struct cvmx_sso_ipl_freex_cn73xx {
+		u64 reserved_62_63 : 2;
+		u64 qnum_head : 3;
+		u64 qnum_tail : 3;
+		u64 reserved_28_55 : 28;
+		u64 queue_val : 1;
+		u64 reserved_23_26 : 4;
+		u64 queue_head : 10;
+		u64 reserved_10_12 : 3;
+		u64 queue_tail : 10;
+	} cn73xx;
+	struct cvmx_sso_ipl_freex_s cn78xx;
+	struct cvmx_sso_ipl_freex_s cn78xxp1;
+	struct cvmx_sso_ipl_freex_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_ipl_freex cvmx_sso_ipl_freex_t;
+
+/**
+ * cvmx_sso_ipl_iaq#
+ *
+ * Returns list status for the internal admission queue indexed by group.
+ *
+ */
+union cvmx_sso_ipl_iaqx {
+	u64 u64;
+	struct cvmx_sso_ipl_iaqx_s {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_25_25 : 1;
+		u64 queue_head : 12;
+		u64 reserved_12_12 : 1;
+		u64 queue_tail : 12;
+	} s;
+	struct cvmx_sso_ipl_iaqx_s cn73xx;
+	struct cvmx_sso_ipl_iaqx_s cn78xx;
+	struct cvmx_sso_ipl_iaqx_s cn78xxp1;
+	struct cvmx_sso_ipl_iaqx_s cnf75xx;
+};
+
+typedef union cvmx_sso_ipl_iaqx cvmx_sso_ipl_iaqx_t;
+
+/**
+ * cvmx_sso_iq_cnt#
+ *
+ * CSR reserved addresses: (64): 0x8200..0x83f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_IQ_CNTX = SSO Input Queue Count Register
+ *               (one per QOS level)
+ *
+ * Contains a read-only count of the number of work queue entries for each QOS
+ * level. Counts both in-unit and in-memory entries.
+ */
+union cvmx_sso_iq_cntx {
+	u64 u64;
+	struct cvmx_sso_iq_cntx_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_cnt : 32;
+	} s;
+	struct cvmx_sso_iq_cntx_s cn68xx;
+	struct cvmx_sso_iq_cntx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_iq_cntx cvmx_sso_iq_cntx_t;
+
+/**
+ * cvmx_sso_iq_com_cnt
+ *
+ * SSO_IQ_COM_CNT = SSO Input Queue Combined Count Register
+ *
+ * Contains a read-only count of the total number of work queue entries in all
+ * QOS levels.  Counts both in-unit and in-memory entries.
+ */
+union cvmx_sso_iq_com_cnt {
+	u64 u64;
+	struct cvmx_sso_iq_com_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_cnt : 32;
+	} s;
+	struct cvmx_sso_iq_com_cnt_s cn68xx;
+	struct cvmx_sso_iq_com_cnt_s cn68xxp1;
+};
+
+typedef union cvmx_sso_iq_com_cnt cvmx_sso_iq_com_cnt_t;
+
+/**
+ * cvmx_sso_iq_int
+ *
+ * SSO_IQ_INT = SSO Input Queue Interrupt Register
+ *
+ * Contains the bits (one per QOS level) that can trigger the input queue
+ * interrupt.  An IQ_INT bit will be set if SSO_IQ_CNT#QOS# changes and the
+ * resulting value is equal to SSO_IQ_THR#QOS#.
+ */
+union cvmx_sso_iq_int {
+	u64 u64;
+	struct cvmx_sso_iq_int_s {
+		u64 reserved_8_63 : 56;
+		u64 iq_int : 8;
+	} s;
+	struct cvmx_sso_iq_int_s cn68xx;
+	struct cvmx_sso_iq_int_s cn68xxp1;
+};
+
+typedef union cvmx_sso_iq_int cvmx_sso_iq_int_t;
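+
+/*
+ * Illustrative sketch, not part of the imported header: scanning for
+ * the lowest QOS level with a pending input-queue interrupt, one bit
+ * per level as described above. Assumes the SDK's cvmx_read_csr()
+ * accessor.
+ */
+static inline int example_sso_iq_int_qos(void)
+{
+	cvmx_sso_iq_int_t iq;
+	int qos;
+
+	iq.u64 = cvmx_read_csr(CVMX_SSO_IQ_INT);
+	for (qos = 0; qos < 8; qos++)
+		if (iq.s.iq_int & (1 << qos))
+			return qos;
+	return -1;	/* no input-queue interrupt pending */
+}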
+
+/**
+ * cvmx_sso_iq_int_en
+ *
+ * SSO_IQ_INT_EN = SSO Input Queue Interrupt Enable Register
+ *
+ * Contains the bits (one per QOS level) that enable the input queue interrupt.
+ */
+union cvmx_sso_iq_int_en {
+	u64 u64;
+	struct cvmx_sso_iq_int_en_s {
+		u64 reserved_8_63 : 56;
+		u64 int_en : 8;
+	} s;
+	struct cvmx_sso_iq_int_en_s cn68xx;
+	struct cvmx_sso_iq_int_en_s cn68xxp1;
+};
+
+typedef union cvmx_sso_iq_int_en cvmx_sso_iq_int_en_t;
+
+/**
+ * cvmx_sso_iq_thr#
+ *
+ * CSR reserved addresses: (24): 0x9040..0x90f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_IQ_THRX = SSO Input Queue Threshold Register
+ *               (one per QOS level)
+ *
+ * Threshold value for triggering input queue interrupts.
+ */
+union cvmx_sso_iq_thrx {
+	u64 u64;
+	struct cvmx_sso_iq_thrx_s {
+		u64 reserved_32_63 : 32;
+		u64 iq_thr : 32;
+	} s;
+	struct cvmx_sso_iq_thrx_s cn68xx;
+	struct cvmx_sso_iq_thrx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_iq_thrx cvmx_sso_iq_thrx_t;
+
+/**
+ * cvmx_sso_nos_cnt
+ *
+ * Contains the number of work-queue entries on the no-schedule list.
+ *
+ */
+union cvmx_sso_nos_cnt {
+	u64 u64;
+	struct cvmx_sso_nos_cnt_s {
+		u64 reserved_13_63 : 51;
+		u64 nos_cnt : 13;
+	} s;
+	struct cvmx_sso_nos_cnt_cn68xx {
+		u64 reserved_12_63 : 52;
+		u64 nos_cnt : 12;
+	} cn68xx;
+	struct cvmx_sso_nos_cnt_cn68xx cn68xxp1;
+	struct cvmx_sso_nos_cnt_s cn73xx;
+	struct cvmx_sso_nos_cnt_s cn78xx;
+	struct cvmx_sso_nos_cnt_s cn78xxp1;
+	struct cvmx_sso_nos_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sso_nos_cnt cvmx_sso_nos_cnt_t;
+
+/**
+ * cvmx_sso_nw_tim
+ *
+ * Sets the minimum period for a new-work-request timeout. The period is specified in n-1
+ * notation, with the increment value of 1024 clock cycles. Thus, a value of 0x0 in this register
+ * translates to 1024 cycles, 0x1 translates to 2048 cycles, 0x2 translates to 3072 cycles, etc.
+ */
+union cvmx_sso_nw_tim {
+	u64 u64;
+	struct cvmx_sso_nw_tim_s {
+		u64 reserved_10_63 : 54;
+		u64 nw_tim : 10;
+	} s;
+	struct cvmx_sso_nw_tim_s cn68xx;
+	struct cvmx_sso_nw_tim_s cn68xxp1;
+	struct cvmx_sso_nw_tim_s cn73xx;
+	struct cvmx_sso_nw_tim_s cn78xx;
+	struct cvmx_sso_nw_tim_s cn78xxp1;
+	struct cvmx_sso_nw_tim_s cnf75xx;
+};
+
+typedef union cvmx_sso_nw_tim cvmx_sso_nw_tim_t;
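+
+/*
+ * Illustrative sketch, not part of the imported header: converting a
+ * cycle count into the n-1, 1024-cycle granularity described above,
+ * e.g. 4096 cycles encodes as 0x3. Assumes the SDK's cvmx_write_csr()
+ * accessor.
+ */
+static inline void example_sso_nw_tim_set(u64 cycles)
+{
+	cvmx_sso_nw_tim_t tim;
+
+	if (cycles < 1024)
+		cycles = 1024;	/* 0x0 already means 1024 cycles */
+	tim.u64 = 0;
+	tim.s.nw_tim = (cycles + 1023) / 1024 - 1;	/* n-1 notation */
+	cvmx_write_csr(CVMX_SSO_NW_TIM, tim.u64);
+}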
+
+/**
+ * cvmx_sso_oth_ecc_ctl
+ *
+ * SSO_OTH_ECC_CTL = SSO OTH ECC Control
+ *
+ */
+union cvmx_sso_oth_ecc_ctl {
+	u64 u64;
+	struct cvmx_sso_oth_ecc_ctl_s {
+		u64 reserved_6_63 : 58;
+		u64 flip_synd1 : 2;
+		u64 ecc_ena1 : 1;
+		u64 flip_synd0 : 2;
+		u64 ecc_ena0 : 1;
+	} s;
+	struct cvmx_sso_oth_ecc_ctl_s cn68xx;
+	struct cvmx_sso_oth_ecc_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_sso_oth_ecc_ctl cvmx_sso_oth_ecc_ctl_t;
+
+/**
+ * cvmx_sso_oth_ecc_st
+ *
+ * SSO_OTH_ECC_ST = SSO OTH ECC Status
+ *
+ */
+union cvmx_sso_oth_ecc_st {
+	u64 u64;
+	struct cvmx_sso_oth_ecc_st_s {
+		u64 reserved_59_63 : 5;
+		u64 addr1 : 11;
+		u64 reserved_43_47 : 5;
+		u64 syndrom1 : 7;
+		u64 reserved_27_35 : 9;
+		u64 addr0 : 11;
+		u64 reserved_11_15 : 5;
+		u64 syndrom0 : 7;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_sso_oth_ecc_st_s cn68xx;
+	struct cvmx_sso_oth_ecc_st_s cn68xxp1;
+};
+
+typedef union cvmx_sso_oth_ecc_st cvmx_sso_oth_ecc_st_t;
+
+/**
+ * cvmx_sso_page_cnt
+ */
+union cvmx_sso_page_cnt {
+	u64 u64;
+	struct cvmx_sso_page_cnt_s {
+		u64 reserved_32_63 : 32;
+		u64 cnt : 32;
+	} s;
+	struct cvmx_sso_page_cnt_s cn73xx;
+	struct cvmx_sso_page_cnt_s cn78xx;
+	struct cvmx_sso_page_cnt_s cn78xxp1;
+	struct cvmx_sso_page_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sso_page_cnt cvmx_sso_page_cnt_t;
+
+/**
+ * cvmx_sso_pnd_ecc_ctl
+ *
+ * SSO_PND_ECC_CTL = SSO PND ECC Control
+ *
+ */
+union cvmx_sso_pnd_ecc_ctl {
+	u64 u64;
+	struct cvmx_sso_pnd_ecc_ctl_s {
+		u64 reserved_6_63 : 58;
+		u64 flip_synd1 : 2;
+		u64 ecc_ena1 : 1;
+		u64 flip_synd0 : 2;
+		u64 ecc_ena0 : 1;
+	} s;
+	struct cvmx_sso_pnd_ecc_ctl_s cn68xx;
+	struct cvmx_sso_pnd_ecc_ctl_s cn68xxp1;
+};
+
+typedef union cvmx_sso_pnd_ecc_ctl cvmx_sso_pnd_ecc_ctl_t;
+
+/**
+ * cvmx_sso_pnd_ecc_st
+ *
+ * SSO_PND_ECC_ST = SSO PND ECC Status
+ *
+ */
+union cvmx_sso_pnd_ecc_st {
+	u64 u64;
+	struct cvmx_sso_pnd_ecc_st_s {
+		u64 reserved_59_63 : 5;
+		u64 addr1 : 11;
+		u64 reserved_43_47 : 5;
+		u64 syndrom1 : 7;
+		u64 reserved_27_35 : 9;
+		u64 addr0 : 11;
+		u64 reserved_11_15 : 5;
+		u64 syndrom0 : 7;
+		u64 reserved_0_3 : 4;
+	} s;
+	struct cvmx_sso_pnd_ecc_st_s cn68xx;
+	struct cvmx_sso_pnd_ecc_st_s cn68xxp1;
+};
+
+typedef union cvmx_sso_pnd_ecc_st cvmx_sso_pnd_ecc_st_t;
+
+/**
+ * cvmx_sso_pp#_arb
+ *
+ * For diagnostic use, returns the group affinity arbitration state for each core.
+ *
+ */
+union cvmx_sso_ppx_arb {
+	u64 u64;
+	struct cvmx_sso_ppx_arb_s {
+		u64 reserved_20_63 : 44;
+		u64 aff_left : 4;
+		u64 reserved_8_15 : 8;
+		u64 last_grp : 8;
+	} s;
+	struct cvmx_sso_ppx_arb_s cn73xx;
+	struct cvmx_sso_ppx_arb_s cn78xx;
+	struct cvmx_sso_ppx_arb_s cn78xxp1;
+	struct cvmx_sso_ppx_arb_s cnf75xx;
+};
+
+typedef union cvmx_sso_ppx_arb cvmx_sso_ppx_arb_t;
+
+/**
+ * cvmx_sso_pp#_grp_msk
+ *
+ * CSR reserved addresses: (24): 0x5040..0x50f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_PPX_GRP_MSK = SSO PP Group Mask Register
+ *                   (one bit per group per PP)
+ *
+ * Selects which group(s) a PP belongs to.  A '1' in any bit position sets the
+ * PP's membership in the corresponding group.  A value of 0x0 will prevent the
+ * PP from receiving new work.
+ *
+ * Note that these do not contain QOS level priorities for each PP.  This is a
+ * change from previous POW designs.
+ */
+union cvmx_sso_ppx_grp_msk {
+	u64 u64;
+	struct cvmx_sso_ppx_grp_msk_s {
+		u64 grp_msk : 64;
+	} s;
+	struct cvmx_sso_ppx_grp_msk_s cn68xx;
+	struct cvmx_sso_ppx_grp_msk_s cn68xxp1;
+};
+
+typedef union cvmx_sso_ppx_grp_msk cvmx_sso_ppx_grp_msk_t;
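+
+/*
+ * Illustrative sketch, not part of the imported header: making a PP a
+ * member of exactly one group; writing 0x0 would stop the PP from
+ * receiving new work, as noted above. Assumes the SDK's
+ * cvmx_write_csr() accessor.
+ */
+static inline void example_sso_pp_join_group(unsigned int pp, unsigned int grp)
+{
+	cvmx_sso_ppx_grp_msk_t msk;
+
+	msk.u64 = 0;
+	msk.s.grp_msk = 1ull << grp;	/* one bit per group */
+	cvmx_write_csr(CVMX_SSO_PPX_GRP_MSK(pp), msk.u64);
+}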
+
+/**
+ * cvmx_sso_pp#_qos_pri
+ *
+ * CSR reserved addresses: (56): 0x2040..0x21f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_PP(0..31)_QOS_PRI = SSO PP QOS Priority Register
+ *                                (one field per IQ per PP)
+ *
+ * Contains the QOS level priorities for each PP.
+ *      0x0       is the highest priority
+ *      0x7       is the lowest priority
+ *      0xf       prevents the PP from receiving work from that QOS level
+ *      0x8-0xe   Reserved
+ *
+ * For a given PP, priorities should begin at 0x0, and remain contiguous
+ * throughout the range.  Failure to do so may result in severe
+ * performance degradation.
+ *
+ *
+ * Priorities for IQs 0..7
+ */
+union cvmx_sso_ppx_qos_pri {
+	u64 u64;
+	struct cvmx_sso_ppx_qos_pri_s {
+		u64 reserved_60_63 : 4;
+		u64 qos7_pri : 4;
+		u64 reserved_52_55 : 4;
+		u64 qos6_pri : 4;
+		u64 reserved_44_47 : 4;
+		u64 qos5_pri : 4;
+		u64 reserved_36_39 : 4;
+		u64 qos4_pri : 4;
+		u64 reserved_28_31 : 4;
+		u64 qos3_pri : 4;
+		u64 reserved_20_23 : 4;
+		u64 qos2_pri : 4;
+		u64 reserved_12_15 : 4;
+		u64 qos1_pri : 4;
+		u64 reserved_4_7 : 4;
+		u64 qos0_pri : 4;
+	} s;
+	struct cvmx_sso_ppx_qos_pri_s cn68xx;
+	struct cvmx_sso_ppx_qos_pri_s cn68xxp1;
+};
+
+typedef union cvmx_sso_ppx_qos_pri cvmx_sso_ppx_qos_pri_t;
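+
+/*
+ * Illustrative sketch, not part of the imported header: assigning one
+ * PP contiguous priorities starting at 0x0 per the rule above, with
+ * the unused levels set to 0xf (no work from that level). Assumes the
+ * SDK's cvmx_write_csr() accessor.
+ */
+static inline void example_sso_pp_qos_pri(unsigned int pp)
+{
+	cvmx_sso_ppx_qos_pri_t pri;
+
+	pri.u64 = 0;
+	pri.s.qos0_pri = 0x0;	/* highest priority */
+	pri.s.qos1_pri = 0x1;	/* contiguous with the level above */
+	pri.s.qos2_pri = 0xf;
+	pri.s.qos3_pri = 0xf;
+	pri.s.qos4_pri = 0xf;
+	pri.s.qos5_pri = 0xf;
+	pri.s.qos6_pri = 0xf;
+	pri.s.qos7_pri = 0xf;
+	cvmx_write_csr(CVMX_SSO_PPX_QOS_PRI(pp), pri.u64);
+}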
+
+/**
+ * cvmx_sso_pp#_s#_grpmsk#
+ *
+ * These registers select which group or groups a core belongs to. There are 2 sets of masks per
+ * core, each with 1 register corresponding to 64 groups.
+ */
+union cvmx_sso_ppx_sx_grpmskx {
+	u64 u64;
+	struct cvmx_sso_ppx_sx_grpmskx_s {
+		u64 grp_msk : 64;
+	} s;
+	struct cvmx_sso_ppx_sx_grpmskx_s cn73xx;
+	struct cvmx_sso_ppx_sx_grpmskx_s cn78xx;
+	struct cvmx_sso_ppx_sx_grpmskx_s cn78xxp1;
+	struct cvmx_sso_ppx_sx_grpmskx_s cnf75xx;
+};
+
+typedef union cvmx_sso_ppx_sx_grpmskx cvmx_sso_ppx_sx_grpmskx_t;
+
+/**
+ * cvmx_sso_pp_strict
+ *
+ * SSO_PP_STRICT = SSO Strict Priority
+ *
+ * This register controls getting work from the input queues.  If the bit
+ * corresponding to a PP is set, that PP will not take work off the input
+ * queues until it is known that there is no higher-priority work available.
+ *
+ * Setting SSO_PP_STRICT may incur a performance penalty if highest-priority
+ * work is not found early.
+ *
+ * It is possible to starve a PP of work with SSO_PP_STRICT.  If the
+ * SSO_PPX_GRP_MSK for a PP masks-out much of the work added to the input
+ * queues that are higher-priority for that PP, and if there is a constant
+ * stream of work through one or more of those higher-priority input queues,
+ * then that PP may not accept work from lower-priority input queues.  This can
+ * be alleviated by ensuring that most or all the work added to the
+ * higher-priority input queues for a PP with SSO_PP_STRICT set are in a group
+ * acceptable to that PP.
+ *
+ * It is also possible to neglect work in an input queue if SSO_PP_STRICT is
+ * used.  If an input queue is a lower-priority queue for all PPs, and if all
+ * the PPs have their corresponding bit in SSO_PP_STRICT set, then work may
+ * never be taken (or be seldom taken) from that queue.  This can be alleviated
+ * by ensuring that work in all input queues can be serviced by one or more PPs
+ * that do not have SSO_PP_STRICT set, or that the input queue is the
+ * highest-priority input queue for one or more PPs that do have SSO_PP_STRICT
+ * set.
+ */
+union cvmx_sso_pp_strict {
+	u64 u64;
+	struct cvmx_sso_pp_strict_s {
+		u64 reserved_32_63 : 32;
+		u64 pp_strict : 32;
+	} s;
+	struct cvmx_sso_pp_strict_s cn68xx;
+	struct cvmx_sso_pp_strict_s cn68xxp1;
+};
+
+typedef union cvmx_sso_pp_strict cvmx_sso_pp_strict_t;
+
+/**
+ * cvmx_sso_qos#_rnd
+ *
+ * CSR align addresses: ===========================================================================================================
+ * SSO_QOS(0..7)_RND = SSO QOS Issue Round Register
+ *                (one per IQ)
+ *
+ * The number of arbitration rounds each QOS level participates in.
+ */
+union cvmx_sso_qosx_rnd {
+	u64 u64;
+	struct cvmx_sso_qosx_rnd_s {
+		u64 reserved_8_63 : 56;
+		u64 rnds_qos : 8;
+	} s;
+	struct cvmx_sso_qosx_rnd_s cn68xx;
+	struct cvmx_sso_qosx_rnd_s cn68xxp1;
+};
+
+typedef union cvmx_sso_qosx_rnd cvmx_sso_qosx_rnd_t;
+
+/**
+ * cvmx_sso_qos_thr#
+ *
+ * CSR reserved addresses: (24): 0xa040..0xa0f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_QOS_THRX = SSO QOS Threshold Register
+ *                (one per QOS level)
+ *
+ * Contains the thresholds for allocating SSO internal storage buffers.  If the
+ * number of remaining free buffers drops below the minimum threshold (MIN_THR)
+ * or the number of allocated buffers for this QOS level rises above the
+ * maximum threshold (MAX_THR), future incoming work queue entries will be
+ * buffered externally rather than internally.  This register also contains the
+ * number of internal buffers currently allocated to this QOS level (BUF_CNT).
+ */
+union cvmx_sso_qos_thrx {
+	u64 u64;
+	struct cvmx_sso_qos_thrx_s {
+		u64 reserved_40_63 : 24;
+		u64 buf_cnt : 12;
+		u64 reserved_26_27 : 2;
+		u64 max_thr : 12;
+		u64 reserved_12_13 : 2;
+		u64 min_thr : 12;
+	} s;
+	struct cvmx_sso_qos_thrx_s cn68xx;
+	struct cvmx_sso_qos_thrx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_qos_thrx cvmx_sso_qos_thrx_t;
+
+/**
+ * cvmx_sso_qos_we
+ *
+ * SSO_QOS_WE = SSO WE Buffers
+ *
+ * This register contains a read-only count of the current number of free
+ * buffers (FREE_CNT) and the total number of tag chain heads on the de-schedule list
+ * (DES_CNT) (which is not the same as the total number of entries on all of the descheduled
+ * tag chains.)
+ */
+union cvmx_sso_qos_we {
+	u64 u64;
+	struct cvmx_sso_qos_we_s {
+		u64 reserved_26_63 : 38;
+		u64 des_cnt : 12;
+		u64 reserved_12_13 : 2;
+		u64 free_cnt : 12;
+	} s;
+	struct cvmx_sso_qos_we_s cn68xx;
+	struct cvmx_sso_qos_we_s cn68xxp1;
+};
+
+typedef union cvmx_sso_qos_we cvmx_sso_qos_we_t;
+
+/**
+ * cvmx_sso_reset
+ *
+ * Writing a 1 to SSO_RESET[RESET] resets the SSO. After receiving a store to this CSR, the SSO
+ * must not be sent any other operations for 2500 coprocessor (SCLK) cycles. Note that the
+ * contents of this register are reset along with the rest of the SSO.
+ */
+union cvmx_sso_reset {
+	u64 u64;
+	struct cvmx_sso_reset_s {
+		u64 busy : 1;
+		u64 reserved_1_62 : 62;
+		u64 reset : 1;
+	} s;
+	struct cvmx_sso_reset_cn68xx {
+		u64 reserved_1_63 : 63;
+		u64 reset : 1;
+	} cn68xx;
+	struct cvmx_sso_reset_s cn73xx;
+	struct cvmx_sso_reset_s cn78xx;
+	struct cvmx_sso_reset_s cn78xxp1;
+	struct cvmx_sso_reset_s cnf75xx;
+};
+
+typedef union cvmx_sso_reset cvmx_sso_reset_t;
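+
+/*
+ * Illustrative sketch, not part of the imported header: issuing a
+ * reset per the rule above. The 2500-SCLK quiet period is modeled here
+ * with the SDK's cvmx_wait() spin helper, which is an assumption; a
+ * real driver would derive the delay from the coprocessor clock.
+ */
+static inline void example_sso_reset(void)
+{
+	cvmx_sso_reset_t rst;
+
+	rst.u64 = 0;
+	rst.s.reset = 1;
+	cvmx_write_csr(CVMX_SSO_RESET, rst.u64);
+	cvmx_wait(2500);	/* no SSO operations during this window */
+}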
+
+/**
+ * cvmx_sso_rwq_head_ptr#
+ *
+ * CSR reserved addresses: (24): 0xb040..0xb0f8
+ * CSR align addresses: ===========================================================================================================
+ * SSO_RWQ_HEAD_PTRX = SSO Remote Queue Head Register
+ *                (one per QOS level)
+ * Contains the ptr to the first entry of the remote linked list(s) for a particular
+ * QoS level. SW should initialize the remote linked list(s) by programming
+ * SSO_RWQ_HEAD_PTRX and SSO_RWQ_TAIL_PTRX to identical values.
+ */
+union cvmx_sso_rwq_head_ptrx {
+	u64 u64;
+	struct cvmx_sso_rwq_head_ptrx_s {
+		u64 reserved_38_63 : 26;
+		u64 ptr : 31;
+		u64 reserved_5_6 : 2;
+		u64 rctr : 5;
+	} s;
+	struct cvmx_sso_rwq_head_ptrx_s cn68xx;
+	struct cvmx_sso_rwq_head_ptrx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_rwq_head_ptrx cvmx_sso_rwq_head_ptrx_t;
+
+/**
+ * cvmx_sso_rwq_pop_fptr
+ *
+ * SSO_RWQ_POP_FPTR = SSO Pop Free Pointer
+ *
+ * This register is used by SW to remove pointers for buffer-reallocation and diagnostics, and
+ * should only be used when SSO is idle.
+ *
+ * To remove ALL pointers, software must ensure that there is a multiple of 16
+ * pointers in the FPA.  To do this, SSO_CFG.RWQ_BYP_DIS must be set, the FPA
+ * pointer count read, and enough fake buffers pushed via SSO_RWQ_PSH_FPTR to
+ * bring the FPA pointer count up to a multiple of 16.
+ */
+union cvmx_sso_rwq_pop_fptr {
+	u64 u64;
+	struct cvmx_sso_rwq_pop_fptr_s {
+		u64 val : 1;
+		u64 reserved_38_62 : 25;
+		u64 fptr : 31;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_sso_rwq_pop_fptr_s cn68xx;
+	struct cvmx_sso_rwq_pop_fptr_s cn68xxp1;
+};
+
+typedef union cvmx_sso_rwq_pop_fptr cvmx_sso_rwq_pop_fptr_t;
+
+/**
+ * cvmx_sso_rwq_psh_fptr
+ *
+ * CSR reserved addresses: (56): 0xc240..0xc3f8
+ * SSO_RWQ_PSH_FPTR = SSO Free Pointer FIFO
+ *
+ * This register is used by SW to initialize the SSO with a pool of free
+ * pointers by writing the FPTR field whenever FULL = 0. Free pointers are
+ * fetched/released from/to the pool when accessing WQE entries stored remotely
+ * (in remote linked lists).  Free pointers should be 128 byte aligned, each of
+ * 256 bytes. This register should only be used when SSO is idle.
+ *
+ * Software needs to set aside buffering for
+ *      8 + 48 + ROUNDUP(N/26)
+ *
+ * where as many as N DRAM work queue entries may be used.  The first 8 buffers
+ * are used to setup the SSO_RWQ_HEAD_PTR and SSO_RWQ_TAIL_PTRs, and the
+ * remainder are pushed via this register.
+ *
+ * IMPLEMENTATION NOTES--NOT FOR SPEC:
+ *      48 avoids false out of buffer error due to (16) FPA and in-sso FPA buffering (32)
+ *      26 is number of WAE's per 256B buffer
+ */
+union cvmx_sso_rwq_psh_fptr {
+	u64 u64;
+	struct cvmx_sso_rwq_psh_fptr_s {
+		u64 full : 1;
+		u64 reserved_38_62 : 25;
+		u64 fptr : 31;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_sso_rwq_psh_fptr_s cn68xx;
+	struct cvmx_sso_rwq_psh_fptr_s cn68xxp1;
+};
+
+typedef union cvmx_sso_rwq_psh_fptr cvmx_sso_rwq_psh_fptr_t;
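That buffering formula translates directly into code; a minimal sketch (the helper name is hypothetical):

	/* 256-byte buffers to set aside when up to n_wqe DRAM WQEs may be used */
	static inline int sso_rwq_num_bufs(int n_wqe)
	{
		/* 8 head/tail setup + 48 slack + ROUNDUP(n_wqe / 26) */
		return 8 + 48 + (n_wqe + 25) / 26;
	}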
+
+/**
+ * cvmx_sso_rwq_tail_ptr#
+ *
+ * CSR reserved addresses: (56): 0xc040..0xc1f8
+ * SSO_RWQ_TAIL_PTRX = SSO Remote Queue Tail Register
+ *                (one per QOS level)
+ * Contains the ptr to the last entry of the remote linked list(s) for a particular
+ * QoS level. SW must initialize the remote linked list(s) by programming
+ * SSO_RWQ_HEAD_PTRX and SSO_RWQ_TAIL_PTRX to identical values.
+ */
+union cvmx_sso_rwq_tail_ptrx {
+	u64 u64;
+	struct cvmx_sso_rwq_tail_ptrx_s {
+		u64 reserved_38_63 : 26;
+		u64 ptr : 31;
+		u64 reserved_5_6 : 2;
+		u64 rctr : 5;
+	} s;
+	struct cvmx_sso_rwq_tail_ptrx_s cn68xx;
+	struct cvmx_sso_rwq_tail_ptrx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_rwq_tail_ptrx cvmx_sso_rwq_tail_ptrx_t;
+
+/**
+ * cvmx_sso_sl_pp#_links
+ *
+ * Returns status of each core.
+ *
+ */
+union cvmx_sso_sl_ppx_links {
+	u64 u64;
+	struct cvmx_sso_sl_ppx_links_s {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_38_47 : 10;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_0_25 : 26;
+	} s;
+	struct cvmx_sso_sl_ppx_links_cn73xx {
+		u64 tailc : 1;
+		u64 reserved_58_62 : 5;
+		u64 index : 10;
+		u64 reserved_36_47 : 12;
+		u64 grp : 8;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_21_25 : 5;
+		u64 revlink_index : 10;
+		u64 link_index_vld : 1;
+		u64 link_index : 10;
+	} cn73xx;
+	struct cvmx_sso_sl_ppx_links_cn78xx {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_38_47 : 10;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_25_25 : 1;
+		u64 revlink_index : 12;
+		u64 link_index_vld : 1;
+		u64 link_index : 12;
+	} cn78xx;
+	struct cvmx_sso_sl_ppx_links_cn78xx cn78xxp1;
+	struct cvmx_sso_sl_ppx_links_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_sl_ppx_links cvmx_sso_sl_ppx_links_t;
+
+/**
+ * cvmx_sso_sl_pp#_pendtag
+ *
+ * Returns status of each core.
+ *
+ */
+union cvmx_sso_sl_ppx_pendtag {
+	u64 u64;
+	struct cvmx_sso_sl_ppx_pendtag_s {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 pend_gw_insert : 1;
+		u64 reserved_34_55 : 22;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s;
+	struct cvmx_sso_sl_ppx_pendtag_s cn73xx;
+	struct cvmx_sso_sl_ppx_pendtag_s cn78xx;
+	struct cvmx_sso_sl_ppx_pendtag_s cn78xxp1;
+	struct cvmx_sso_sl_ppx_pendtag_s cnf75xx;
+};
+
+typedef union cvmx_sso_sl_ppx_pendtag cvmx_sso_sl_ppx_pendtag_t;
+
+/**
+ * cvmx_sso_sl_pp#_pendwqp
+ *
+ * Returns status of each core.
+ *
+ */
+union cvmx_sso_sl_ppx_pendwqp {
+	u64 u64;
+	struct cvmx_sso_sl_ppx_pendwqp_s {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_56_56 : 1;
+		u64 pend_index : 12;
+		u64 reserved_42_43 : 2;
+		u64 pend_wqp : 42;
+	} s;
+	struct cvmx_sso_sl_ppx_pendwqp_cn73xx {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_54_56 : 3;
+		u64 pend_index : 10;
+		u64 reserved_42_43 : 2;
+		u64 pend_wqp : 42;
+	} cn73xx;
+	struct cvmx_sso_sl_ppx_pendwqp_s cn78xx;
+	struct cvmx_sso_sl_ppx_pendwqp_s cn78xxp1;
+	struct cvmx_sso_sl_ppx_pendwqp_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_sl_ppx_pendwqp cvmx_sso_sl_ppx_pendwqp_t;
+
+/**
+ * cvmx_sso_sl_pp#_tag
+ *
+ * Returns status of each core.
+ *
+ */
+union cvmx_sso_sl_ppx_tag {
+	u64 u64;
+	struct cvmx_sso_sl_ppx_tag_s {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_46_47 : 2;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s;
+	struct cvmx_sso_sl_ppx_tag_cn73xx {
+		u64 tailc : 1;
+		u64 reserved_58_62 : 5;
+		u64 index : 10;
+		u64 reserved_44_47 : 4;
+		u64 grp : 8;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tt : 2;
+		u64 tag : 32;
+	} cn73xx;
+	struct cvmx_sso_sl_ppx_tag_s cn78xx;
+	struct cvmx_sso_sl_ppx_tag_s cn78xxp1;
+	struct cvmx_sso_sl_ppx_tag_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_sl_ppx_tag cvmx_sso_sl_ppx_tag_t;
+
+/**
+ * cvmx_sso_sl_pp#_wqp
+ *
+ * Returns status of each core.
+ *
+ */
+union cvmx_sso_sl_ppx_wqp {
+	u64 u64;
+	struct cvmx_sso_sl_ppx_wqp_s {
+		u64 reserved_58_63 : 6;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s;
+	struct cvmx_sso_sl_ppx_wqp_cn73xx {
+		u64 reserved_56_63 : 8;
+		u64 grp : 8;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} cn73xx;
+	struct cvmx_sso_sl_ppx_wqp_s cn78xx;
+	struct cvmx_sso_sl_ppx_wqp_s cn78xxp1;
+	struct cvmx_sso_sl_ppx_wqp_cn73xx cnf75xx;
+};
+
+typedef union cvmx_sso_sl_ppx_wqp cvmx_sso_sl_ppx_wqp_t;
+
+/**
+ * cvmx_sso_taq#_link
+ *
+ * Returns TAQ status for a given line.
+ *
+ */
+union cvmx_sso_taqx_link {
+	u64 u64;
+	struct cvmx_sso_taqx_link_s {
+		u64 reserved_11_63 : 53;
+		u64 next : 11;
+	} s;
+	struct cvmx_sso_taqx_link_s cn73xx;
+	struct cvmx_sso_taqx_link_s cn78xx;
+	struct cvmx_sso_taqx_link_s cn78xxp1;
+	struct cvmx_sso_taqx_link_s cnf75xx;
+};
+
+typedef union cvmx_sso_taqx_link cvmx_sso_taqx_link_t;
+
+/**
+ * cvmx_sso_taq#_wae#_tag
+ *
+ * Returns TAQ status for a given line and WAE within that line.
+ *
+ */
+union cvmx_sso_taqx_waex_tag {
+	u64 u64;
+	struct cvmx_sso_taqx_waex_tag_s {
+		u64 reserved_34_63 : 30;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s;
+	struct cvmx_sso_taqx_waex_tag_s cn73xx;
+	struct cvmx_sso_taqx_waex_tag_s cn78xx;
+	struct cvmx_sso_taqx_waex_tag_s cn78xxp1;
+	struct cvmx_sso_taqx_waex_tag_s cnf75xx;
+};
+
+typedef union cvmx_sso_taqx_waex_tag cvmx_sso_taqx_waex_tag_t;
+
+/**
+ * cvmx_sso_taq#_wae#_wqp
+ *
+ * Returns TAQ status for a given line and WAE within that line.
+ *
+ */
+union cvmx_sso_taqx_waex_wqp {
+	u64 u64;
+	struct cvmx_sso_taqx_waex_wqp_s {
+		u64 reserved_42_63 : 22;
+		u64 wqp : 42;
+	} s;
+	struct cvmx_sso_taqx_waex_wqp_s cn73xx;
+	struct cvmx_sso_taqx_waex_wqp_s cn78xx;
+	struct cvmx_sso_taqx_waex_wqp_s cn78xxp1;
+	struct cvmx_sso_taqx_waex_wqp_s cnf75xx;
+};
+
+typedef union cvmx_sso_taqx_waex_wqp cvmx_sso_taqx_waex_wqp_t;
+
+/**
+ * cvmx_sso_taq_add
+ */
+union cvmx_sso_taq_add {
+	u64 u64;
+	struct cvmx_sso_taq_add_s {
+		u64 reserved_29_63 : 35;
+		u64 rsvd_free : 13;
+		u64 reserved_0_15 : 16;
+	} s;
+	struct cvmx_sso_taq_add_s cn73xx;
+	struct cvmx_sso_taq_add_s cn78xx;
+	struct cvmx_sso_taq_add_s cn78xxp1;
+	struct cvmx_sso_taq_add_s cnf75xx;
+};
+
+typedef union cvmx_sso_taq_add cvmx_sso_taq_add_t;
+
+/**
+ * cvmx_sso_taq_cnt
+ */
+union cvmx_sso_taq_cnt {
+	u64 u64;
+	struct cvmx_sso_taq_cnt_s {
+		u64 reserved_27_63 : 37;
+		u64 rsvd_free : 11;
+		u64 reserved_11_15 : 5;
+		u64 free_cnt : 11;
+	} s;
+	struct cvmx_sso_taq_cnt_s cn73xx;
+	struct cvmx_sso_taq_cnt_s cn78xx;
+	struct cvmx_sso_taq_cnt_s cn78xxp1;
+	struct cvmx_sso_taq_cnt_s cnf75xx;
+};
+
+typedef union cvmx_sso_taq_cnt cvmx_sso_taq_cnt_t;
+
+/**
+ * cvmx_sso_tiaq#_status
+ *
+ * Returns TAQ inbound status indexed by group.
+ *
+ */
+union cvmx_sso_tiaqx_status {
+	u64 u64;
+	struct cvmx_sso_tiaqx_status_s {
+		u64 wae_head : 4;
+		u64 wae_tail : 4;
+		u64 reserved_47_55 : 9;
+		u64 wae_used : 15;
+		u64 reserved_23_31 : 9;
+		u64 ent_head : 11;
+		u64 reserved_11_11 : 1;
+		u64 ent_tail : 11;
+	} s;
+	struct cvmx_sso_tiaqx_status_s cn73xx;
+	struct cvmx_sso_tiaqx_status_s cn78xx;
+	struct cvmx_sso_tiaqx_status_s cn78xxp1;
+	struct cvmx_sso_tiaqx_status_s cnf75xx;
+};
+
+typedef union cvmx_sso_tiaqx_status cvmx_sso_tiaqx_status_t;
+
+/**
+ * cvmx_sso_toaq#_status
+ *
+ * Returns TAQ outbound status indexed by group.
+ *
+ */
+union cvmx_sso_toaqx_status {
+	u64 u64;
+	struct cvmx_sso_toaqx_status_s {
+		u64 reserved_62_63 : 2;
+		u64 ext_vld : 1;
+		u64 partial : 1;
+		u64 wae_tail : 4;
+		u64 reserved_43_55 : 13;
+		u64 cl_used : 11;
+		u64 reserved_23_31 : 9;
+		u64 ent_head : 11;
+		u64 reserved_11_11 : 1;
+		u64 ent_tail : 11;
+	} s;
+	struct cvmx_sso_toaqx_status_s cn73xx;
+	struct cvmx_sso_toaqx_status_s cn78xx;
+	struct cvmx_sso_toaqx_status_s cn78xxp1;
+	struct cvmx_sso_toaqx_status_s cnf75xx;
+};
+
+typedef union cvmx_sso_toaqx_status cvmx_sso_toaqx_status_t;
+
+/**
+ * cvmx_sso_ts_pc
+ *
+ * SSO_TS_PC = SSO Tag Switch Performance Counter
+ *
+ * Counts the number of tag switch requests.
+ * Counter rolls over through zero when max value exceeded.
+ */
+union cvmx_sso_ts_pc {
+	u64 u64;
+	struct cvmx_sso_ts_pc_s {
+		u64 ts_pc : 64;
+	} s;
+	struct cvmx_sso_ts_pc_s cn68xx;
+	struct cvmx_sso_ts_pc_s cn68xxp1;
+};
+
+typedef union cvmx_sso_ts_pc cvmx_sso_ts_pc_t;
+
+/**
+ * cvmx_sso_wa_com_pc
+ *
+ * SSO_WA_COM_PC = SSO Work Add Combined Performance Counter
+ *
+ * Counts the number of add new work requests for all QOS levels.
+ * Counter rolls over through zero when max value exceeded.
+ */
+union cvmx_sso_wa_com_pc {
+	u64 u64;
+	struct cvmx_sso_wa_com_pc_s {
+		u64 wa_pc : 64;
+	} s;
+	struct cvmx_sso_wa_com_pc_s cn68xx;
+	struct cvmx_sso_wa_com_pc_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wa_com_pc cvmx_sso_wa_com_pc_t;
+
+/**
+ * cvmx_sso_wa_pc#
+ *
+ * CSR reserved addresses: (64): 0x4200..0x43f8
+ * SSO_WA_PCX = SSO Work Add Performance Counter
+ *             (one per QOS level)
+ *
+ * Counts the number of add new work requests for each QOS level.
+ * Counter rolls over through zero when max value exceeded.
+ */
+union cvmx_sso_wa_pcx {
+	u64 u64;
+	struct cvmx_sso_wa_pcx_s {
+		u64 wa_pc : 64;
+	} s;
+	struct cvmx_sso_wa_pcx_s cn68xx;
+	struct cvmx_sso_wa_pcx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wa_pcx cvmx_sso_wa_pcx_t;
+
+/**
+ * cvmx_sso_wq_int
+ *
+ * Note, the old POW offsets ran from 0x0 to 0x3f8, leaving the next available slot at 0x400.
+ * To ensure no overlap, start on 4k boundary: 0x1000.
+ * SSO_WQ_INT = SSO Work Queue Interrupt Register
+ *
+ * Contains the bits (one per group) that set work queue interrupts and are
+ * used to clear these interrupts.  For more information regarding this
+ * register, see the interrupt section of the SSO spec.
+ */
+union cvmx_sso_wq_int {
+	u64 u64;
+	struct cvmx_sso_wq_int_s {
+		u64 wq_int : 64;
+	} s;
+	struct cvmx_sso_wq_int_s cn68xx;
+	struct cvmx_sso_wq_int_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wq_int cvmx_sso_wq_int_t;
+
+/**
+ * cvmx_sso_wq_int_cnt#
+ *
+ * CSR reserved addresses: (64): 0x7200..0x73f8
+ * SSO_WQ_INT_CNTX = SSO Work Queue Interrupt Count Register
+ *                   (one per group)
+ *
+ * Contains a read-only copy of the counts used to trigger work queue
+ * interrupts.  For more information regarding this register, see the interrupt
+ * section.
+ */
+union cvmx_sso_wq_int_cntx {
+	u64 u64;
+	struct cvmx_sso_wq_int_cntx_s {
+		u64 reserved_32_63 : 32;
+		u64 tc_cnt : 4;
+		u64 reserved_26_27 : 2;
+		u64 ds_cnt : 12;
+		u64 reserved_12_13 : 2;
+		u64 iq_cnt : 12;
+	} s;
+	struct cvmx_sso_wq_int_cntx_s cn68xx;
+	struct cvmx_sso_wq_int_cntx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wq_int_cntx cvmx_sso_wq_int_cntx_t;
+
+/**
+ * cvmx_sso_wq_int_pc
+ *
+ * Contains the threshold value for the work-executable interrupt periodic counter and also a
+ * read-only copy of the periodic counter. For more information on this register, refer to
+ * Interrupts.
+ */
+union cvmx_sso_wq_int_pc {
+	u64 u64;
+	struct cvmx_sso_wq_int_pc_s {
+		u64 reserved_60_63 : 4;
+		u64 pc : 28;
+		u64 reserved_28_31 : 4;
+		u64 pc_thr : 20;
+		u64 reserved_0_7 : 8;
+	} s;
+	struct cvmx_sso_wq_int_pc_s cn68xx;
+	struct cvmx_sso_wq_int_pc_s cn68xxp1;
+	struct cvmx_sso_wq_int_pc_s cn73xx;
+	struct cvmx_sso_wq_int_pc_s cn78xx;
+	struct cvmx_sso_wq_int_pc_s cn78xxp1;
+	struct cvmx_sso_wq_int_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_wq_int_pc cvmx_sso_wq_int_pc_t;
+
+/**
+ * cvmx_sso_wq_int_thr#
+ *
+ * CSR reserved addresses: (96): 0x6100..0x63f8
+ * SSO_WQ_INT_THR(0..63) = SSO Work Queue Interrupt Threshold Registers
+ *                         (one per group)
+ *
+ * Contains the thresholds for enabling and setting work queue interrupts.  For
+ * more information, see the interrupt section.
+ *
+ * Note: Up to 16 of the SSO's internal storage buffers can be allocated
+ * for hardware use and are therefore not available for incoming work queue
+ * entries.  Additionally, any WS that is not in the EMPTY state consumes a
+ * buffer.  Thus in a 32 PP system, it is not advisable to set either IQ_THR or
+ * DS_THR to greater than 2048 - 16 - 32*2 = 1968.  Doing so may prevent the
+ * interrupt from ever triggering.
+ *
+ * Priorities for QOS levels 0..7
+ */
+union cvmx_sso_wq_int_thrx {
+	u64 u64;
+	struct cvmx_sso_wq_int_thrx_s {
+		u64 reserved_33_63 : 31;
+		u64 tc_en : 1;
+		u64 tc_thr : 4;
+		u64 reserved_26_27 : 2;
+		u64 ds_thr : 12;
+		u64 reserved_12_13 : 2;
+		u64 iq_thr : 12;
+	} s;
+	struct cvmx_sso_wq_int_thrx_s cn68xx;
+	struct cvmx_sso_wq_int_thrx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wq_int_thrx cvmx_sso_wq_int_thrx_t;
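The sizing rule from the note above can be captured in a small helper; a sketch with a hypothetical name, using the constants from the comment:

	/* highest safe IQ_THR/DS_THR value for a system with num_pp cores */
	static inline int sso_max_wq_int_thr(int num_pp)
	{
		return 2048 - 16 - 2 * num_pp;	/* 1968 for 32 PPs */
	}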
+
+/**
+ * cvmx_sso_wq_iq_dis
+ *
+ * CSR reserved addresses: (1): 0x1008..0x1008
+ * SSO_WQ_IQ_DIS = SSO Input Queue Interrupt Temporary Disable Mask
+ *
+ * Contains the input queue interrupt temporary disable bits (one per group).
+ * For more information regarding this register, see the interrupt section.
+ */
+union cvmx_sso_wq_iq_dis {
+	u64 u64;
+	struct cvmx_sso_wq_iq_dis_s {
+		u64 iq_dis : 64;
+	} s;
+	struct cvmx_sso_wq_iq_dis_s cn68xx;
+	struct cvmx_sso_wq_iq_dis_s cn68xxp1;
+};
+
+typedef union cvmx_sso_wq_iq_dis cvmx_sso_wq_iq_dis_t;
+
+/**
+ * cvmx_sso_ws_cfg
+ *
+ * This register contains various SSO work-slot configuration bits.
+ *
+ */
+union cvmx_sso_ws_cfg {
+	u64 u64;
+	struct cvmx_sso_ws_cfg_s {
+		u64 reserved_56_63 : 8;
+		u64 ocla_bp : 8;
+		u64 reserved_7_47 : 41;
+		u64 aw_clk_dis : 1;
+		u64 gw_clk_dis : 1;
+		u64 disable_pw : 1;
+		u64 arbc_step_en : 1;
+		u64 ncbo_step_en : 1;
+		u64 soc_ccam_dis : 1;
+		u64 sso_cclk_dis : 1;
+	} s;
+	struct cvmx_sso_ws_cfg_s cn73xx;
+	struct cvmx_sso_ws_cfg_cn78xx {
+		u64 reserved_56_63 : 8;
+		u64 ocla_bp : 8;
+		u64 reserved_5_47 : 43;
+		u64 disable_pw : 1;
+		u64 arbc_step_en : 1;
+		u64 ncbo_step_en : 1;
+		u64 soc_ccam_dis : 1;
+		u64 sso_cclk_dis : 1;
+	} cn78xx;
+	struct cvmx_sso_ws_cfg_cn78xx cn78xxp1;
+	struct cvmx_sso_ws_cfg_s cnf75xx;
+};
+
+typedef union cvmx_sso_ws_cfg cvmx_sso_ws_cfg_t;
+
+/**
+ * cvmx_sso_ws_eco
+ */
+union cvmx_sso_ws_eco {
+	u64 u64;
+	struct cvmx_sso_ws_eco_s {
+		u64 reserved_8_63 : 56;
+		u64 eco_rw : 8;
+	} s;
+	struct cvmx_sso_ws_eco_s cn73xx;
+	struct cvmx_sso_ws_eco_s cnf75xx;
+};
+
+typedef union cvmx_sso_ws_eco cvmx_sso_ws_eco_t;
+
+/**
+ * cvmx_sso_ws_pc#
+ *
+ * CSR reserved addresses: (225): 0x3100..0x3800
+ * SSO_WS_PCX = SSO Work Schedule Performance Counter
+ *              (one per group)
+ *
+ * Counts the number of work schedules for each group.
+ * Counter rolls over through zero when max value exceeded.
+ */
+union cvmx_sso_ws_pcx {
+	u64 u64;
+	struct cvmx_sso_ws_pcx_s {
+		u64 ws_pc : 64;
+	} s;
+	struct cvmx_sso_ws_pcx_s cn68xx;
+	struct cvmx_sso_ws_pcx_s cn68xxp1;
+};
+
+typedef union cvmx_sso_ws_pcx cvmx_sso_ws_pcx_t;
+
+/**
+ * cvmx_sso_xaq#_head_next
+ *
+ * These registers contain the pointer to the next buffer to become the head when the final cache
+ * line in this buffer is read.
+ */
+union cvmx_sso_xaqx_head_next {
+	u64 u64;
+	struct cvmx_sso_xaqx_head_next_s {
+		u64 reserved_42_63 : 22;
+		u64 ptr : 35;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_sso_xaqx_head_next_s cn73xx;
+	struct cvmx_sso_xaqx_head_next_s cn78xx;
+	struct cvmx_sso_xaqx_head_next_s cn78xxp1;
+	struct cvmx_sso_xaqx_head_next_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaqx_head_next cvmx_sso_xaqx_head_next_t;
+
+/**
+ * cvmx_sso_xaq#_head_ptr
+ *
+ * These registers contain the pointer to the first entry of the external linked list(s) for a
+ * particular group. Software must initialize the external linked list(s) by programming
+ * SSO_XAQ()_HEAD_PTR, SSO_XAQ()_HEAD_NEXT, SSO_XAQ()_TAIL_PTR and
+ * SSO_XAQ()_TAIL_NEXT to identical values.
+ */
+union cvmx_sso_xaqx_head_ptr {
+	u64 u64;
+	struct cvmx_sso_xaqx_head_ptr_s {
+		u64 reserved_42_63 : 22;
+		u64 ptr : 35;
+		u64 reserved_5_6 : 2;
+		u64 cl : 5;
+	} s;
+	struct cvmx_sso_xaqx_head_ptr_s cn73xx;
+	struct cvmx_sso_xaqx_head_ptr_s cn78xx;
+	struct cvmx_sso_xaqx_head_ptr_s cn78xxp1;
+	struct cvmx_sso_xaqx_head_ptr_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaqx_head_ptr cvmx_sso_xaqx_head_ptr_t;
+
+/**
+ * cvmx_sso_xaq#_tail_next
+ *
+ * These registers contain the pointer to the next buffer to become the tail when the final cache
+ * line in this buffer is written.  Register fields are identical to those in
+ * SSO_XAQ()_HEAD_NEXT above.
+ */
+union cvmx_sso_xaqx_tail_next {
+	u64 u64;
+	struct cvmx_sso_xaqx_tail_next_s {
+		u64 reserved_42_63 : 22;
+		u64 ptr : 35;
+		u64 reserved_0_6 : 7;
+	} s;
+	struct cvmx_sso_xaqx_tail_next_s cn73xx;
+	struct cvmx_sso_xaqx_tail_next_s cn78xx;
+	struct cvmx_sso_xaqx_tail_next_s cn78xxp1;
+	struct cvmx_sso_xaqx_tail_next_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaqx_tail_next cvmx_sso_xaqx_tail_next_t;
+
+/**
+ * cvmx_sso_xaq#_tail_ptr
+ *
+ * These registers contain the pointer to the last entry of the external linked list(s) for a
+ * particular group.  Register fields are identical to those in SSO_XAQ()_HEAD_PTR above.
+ * Software must initialize the external linked list(s) by programming
+ * SSO_XAQ()_HEAD_PTR, SSO_XAQ()_HEAD_NEXT, SSO_XAQ()_TAIL_PTR and
+ * SSO_XAQ()_TAIL_NEXT to identical values.
+ */
+union cvmx_sso_xaqx_tail_ptr {
+	u64 u64;
+	struct cvmx_sso_xaqx_tail_ptr_s {
+		u64 reserved_42_63 : 22;
+		u64 ptr : 35;
+		u64 reserved_5_6 : 2;
+		u64 cl : 5;
+	} s;
+	struct cvmx_sso_xaqx_tail_ptr_s cn73xx;
+	struct cvmx_sso_xaqx_tail_ptr_s cn78xx;
+	struct cvmx_sso_xaqx_tail_ptr_s cn78xxp1;
+	struct cvmx_sso_xaqx_tail_ptr_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaqx_tail_ptr cvmx_sso_xaqx_tail_ptr_t;
+
+/**
+ * cvmx_sso_xaq_aura
+ */
+union cvmx_sso_xaq_aura {
+	u64 u64;
+	struct cvmx_sso_xaq_aura_s {
+		u64 reserved_12_63 : 52;
+		u64 node : 2;
+		u64 laura : 10;
+	} s;
+	struct cvmx_sso_xaq_aura_s cn73xx;
+	struct cvmx_sso_xaq_aura_s cn78xx;
+	struct cvmx_sso_xaq_aura_s cn78xxp1;
+	struct cvmx_sso_xaq_aura_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaq_aura cvmx_sso_xaq_aura_t;
+
+/**
+ * cvmx_sso_xaq_latency_pc
+ */
+union cvmx_sso_xaq_latency_pc {
+	u64 u64;
+	struct cvmx_sso_xaq_latency_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_sso_xaq_latency_pc_s cn73xx;
+	struct cvmx_sso_xaq_latency_pc_s cn78xx;
+	struct cvmx_sso_xaq_latency_pc_s cn78xxp1;
+	struct cvmx_sso_xaq_latency_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaq_latency_pc cvmx_sso_xaq_latency_pc_t;
+
+/**
+ * cvmx_sso_xaq_req_pc
+ */
+union cvmx_sso_xaq_req_pc {
+	u64 u64;
+	struct cvmx_sso_xaq_req_pc_s {
+		u64 count : 64;
+	} s;
+	struct cvmx_sso_xaq_req_pc_s cn73xx;
+	struct cvmx_sso_xaq_req_pc_s cn78xx;
+	struct cvmx_sso_xaq_req_pc_s cn78xxp1;
+	struct cvmx_sso_xaq_req_pc_s cnf75xx;
+};
+
+typedef union cvmx_sso_xaq_req_pc cvmx_sso_xaq_req_pc_t;
+
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 33/50] mips: octeon: Add misc remaining header files
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (31 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 32/50] mips: octeon: Add cvmx-sso-defs.h " Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 34/50] mips: octeon: Misc changes required because of the newly added headers Stefan Roese
                   ` (19 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import misc remaining header files from 2013 U-Boot. These will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 .../mach-octeon/include/mach/cvmx-address.h   |  209 ++
 .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +++
 .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
 arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
 .../mach-octeon/include/mach/cvmx-error.h     |  456 +++
 arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 ++++
 .../include/mach/cvmx-global-resources.h      |  213 ++
 arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
 .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++++
 .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 ++++
 arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
 arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 ++
 .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
 .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 ++
 arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++++++
 .../include/mach/cvmx-pki-resources.h         |  157 +
 arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++++++
 .../mach/cvmx-pko-internal-ports-range.h      |   43 +
 .../include/mach/cvmx-pko3-queue.h            |  175 +
 arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 +++++++++++++++++
 arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 ++
 .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
 arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 ++++++++
 25 files changed, 11656 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-address.h b/arch/mips/mach-octeon/include/mach/cvmx-address.h
new file mode 100644
index 0000000000..984f574a75
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-address.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Typedefs and defines for working with Octeon physical addresses.
+ */
+
+#ifndef __CVMX_ADDRESS_H__
+#define __CVMX_ADDRESS_H__
+
+typedef enum {
+	CVMX_MIPS_SPACE_XKSEG = 3LL,
+	CVMX_MIPS_SPACE_XKPHYS = 2LL,
+	CVMX_MIPS_SPACE_XSSEG = 1LL,
+	CVMX_MIPS_SPACE_XUSEG = 0LL
+} cvmx_mips_space_t;
+
+typedef enum {
+	CVMX_MIPS_XKSEG_SPACE_KSEG0 = 0LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG1 = 1LL,
+	CVMX_MIPS_XKSEG_SPACE_SSEG = 2LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG3 = 3LL
+} cvmx_mips_xkseg_space_t;
+
+/* decodes <14:13> of a kseg3 window address */
+typedef enum {
+	CVMX_ADD_WIN_SCR = 0L,
+	CVMX_ADD_WIN_DMA = 1L,
+	CVMX_ADD_WIN_UNUSED = 2L,
+	CVMX_ADD_WIN_UNUSED2 = 3L
+} cvmx_add_win_dec_t;
+
+/* decode within DMA space */
+typedef enum {
+	CVMX_ADD_WIN_DMA_ADD = 0L,
+	CVMX_ADD_WIN_DMA_SENDMEM = 1L,
+	/* store data must be normal DRAM memory space address in this case */
+	CVMX_ADD_WIN_DMA_SENDDMA = 2L,
+	/* see CVMX_ADD_WIN_DMA_SEND_DEC for data contents */
+	CVMX_ADD_WIN_DMA_SENDIO = 3L,
+	/* store data must be normal IO space address in this case */
+	CVMX_ADD_WIN_DMA_SENDSINGLE = 4L,
+	/* no write buffer data needed/used */
+} cvmx_add_win_dma_dec_t;
+
+/**
+ *   Physical Address Decode
+ *
+ * Octeon-I HW never interprets this X (<39:36> reserved
+ * for future expansion), software should set to 0.
+ *
+ *  - 0x0 XXX0 0000 0000 to      DRAM         Cached
+ *  - 0x0 XXX0 0FFF FFFF
+ *
+ *  - 0x0 XXX0 1000 0000 to      Boot Bus     Uncached  (Converted to 0x1 00X0 1000 0000
+ *  - 0x0 XXX0 1FFF FFFF         + EJTAG                           to 0x1 00X0 1FFF FFFF)
+ *
+ *  - 0x0 XXX0 2000 0000 to      DRAM         Cached
+ *  - 0x0 XXXF FFFF FFFF
+ *
+ *  - 0x1 00X0 0000 0000 to      Boot Bus     Uncached
+ *  - 0x1 00XF FFFF FFFF
+ *
+ *  - 0x1 01X0 0000 0000 to      Other NCB    Uncached
+ *  - 0x1 FFXF FFFF FFFF         devices
+ *
+ * Decode of all Octeon addresses
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 offset : 62;
+	} sva;
+
+	struct {
+		u64 zeroes : 33;
+		u64 offset : 31;
+	} suseg;
+
+	struct {
+		u64 ones : 33;
+		cvmx_mips_xkseg_space_t sp : 2;
+		u64 offset : 29;
+	} sxkseg;
+
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 cca : 3;
+		u64 mbz : 10;
+		u64 pa : 49;
+	} sxkphys;
+
+	struct {
+		u64 mbz : 15;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sphys;
+
+	struct {
+		u64 zeroes : 24;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} smem;
+
+	struct {
+		u64 mem_region : 2;
+		u64 mbz : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sio;
+
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 addr : 13;
+	} sscr;
+
+	/* there should only be stores to IOBDMA space, no loads */
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 unused2 : 3;
+		cvmx_add_win_dma_dec_t type : 3;
+		u64 addr : 7;
+	} sdma;
+
+	struct {
+		u64 didspace : 24;
+		u64 unused : 40;
+	} sfilldidspace;
+} cvmx_addr_t;
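As a usage sketch, the union lets software decode an address without manual shifting and masking; fields are declared most-significant first for big-endian Octeon, and the sample address here is hypothetical:

	cvmx_addr_t addr;

	addr.u64 = 0x8000000010000000ull;	/* an XKPHYS cached address */
	if (addr.sva.R == CVMX_MIPS_SPACE_XKPHYS)
		printf("physical address 0x%llx\n",
		       (unsigned long long)addr.sxkphys.pa);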
+
+/* These macros are used by 32-bit applications */
+
+#define CVMX_MIPS32_SPACE_KSEG0	     1l
+#define CVMX_ADD_SEG32(segment, add) (((s32)segment << 31) | (s32)(add))
+
+/*
+ * Currently all IOs are performed using XKPHYS addressing. Linux uses the
+ * CvmMemCtl register to enable XKPHYS addressing to IO space from user mode.
+ * Future OSes may need to change the upper bits of IO addresses. The
+ * following define controls the upper two bits for all IO addresses generated
+ * by the simple executive library
+ */
+#define CVMX_IO_SEG CVMX_MIPS_SPACE_XKPHYS
+
+/* These macros simplify the process of creating common IO addresses */
+#define CVMX_ADD_SEG(segment, add) ((((u64)segment) << 62) | (add))
+
+#define CVMX_ADD_IO_SEG(add) (add)
+
+#define CVMX_ADDR_DIDSPACE(did)	   (((CVMX_IO_SEG) << 22) | ((1ULL) << 8) | (did))
+#define CVMX_ADDR_DID(did)	   (CVMX_ADDR_DIDSPACE(did) << 40)
+#define CVMX_FULL_DID(did, subdid) (((did) << 3) | (subdid))
+
+/* from include/ncb_rsl_id.v */
+#define CVMX_OCT_DID_MIS  0ULL /* misc stuff */
+#define CVMX_OCT_DID_GMX0 1ULL
+#define CVMX_OCT_DID_GMX1 2ULL
+#define CVMX_OCT_DID_PCI  3ULL
+#define CVMX_OCT_DID_KEY  4ULL
+#define CVMX_OCT_DID_FPA  5ULL
+#define CVMX_OCT_DID_DFA  6ULL
+#define CVMX_OCT_DID_ZIP  7ULL
+#define CVMX_OCT_DID_RNG  8ULL
+#define CVMX_OCT_DID_IPD  9ULL
+#define CVMX_OCT_DID_PKT  10ULL
+#define CVMX_OCT_DID_TIM  11ULL
+#define CVMX_OCT_DID_TAG  12ULL
+/* the rest are not on the IO bus */
+#define CVMX_OCT_DID_L2C  16ULL
+#define CVMX_OCT_DID_LMC  17ULL
+#define CVMX_OCT_DID_SPX0 18ULL
+#define CVMX_OCT_DID_SPX1 19ULL
+#define CVMX_OCT_DID_PIP  20ULL
+#define CVMX_OCT_DID_ASX0 22ULL
+#define CVMX_OCT_DID_ASX1 23ULL
+#define CVMX_OCT_DID_IOB  30ULL
+
+#define CVMX_OCT_DID_PKT_SEND	 CVMX_FULL_DID(CVMX_OCT_DID_PKT, 2ULL)
+#define CVMX_OCT_DID_TAG_SWTAG	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 0ULL)
+#define CVMX_OCT_DID_TAG_TAG1	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 1ULL)
+#define CVMX_OCT_DID_TAG_TAG2	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 2ULL)
+#define CVMX_OCT_DID_TAG_TAG3	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 3ULL)
+#define CVMX_OCT_DID_TAG_NULL_RD CVMX_FULL_DID(CVMX_OCT_DID_TAG, 4ULL)
+#define CVMX_OCT_DID_TAG_TAG5	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 5ULL)
+#define CVMX_OCT_DID_TAG_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL)
+#define CVMX_OCT_DID_FAU_FAI	 CVMX_FULL_DID(CVMX_OCT_DID_IOB, 0ULL)
+#define CVMX_OCT_DID_TIM_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TIM, 0ULL)
+#define CVMX_OCT_DID_KEY_RW	 CVMX_FULL_DID(CVMX_OCT_DID_KEY, 0ULL)
+#define CVMX_OCT_DID_PCI_6	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 6ULL)
+#define CVMX_OCT_DID_MIS_BOO	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 0ULL)
+#define CVMX_OCT_DID_PCI_RML	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 0ULL)
+#define CVMX_OCT_DID_IPD_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_IPD, 7ULL)
+#define CVMX_OCT_DID_DFA_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_DFA, 7ULL)
+#define CVMX_OCT_DID_MIS_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 7ULL)
+#define CVMX_OCT_DID_ZIP_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_ZIP, 0ULL)
+
+/* Cast to unsigned long long, mainly for use in printfs. */
+#define CAST_ULL(v) ((unsigned long long)(v))
+
+#define UNMAPPED_PTR(x) ((1ULL << 63) | (x))
+
+#endif /* __CVMX_ADDRESS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
new file mode 100644
index 0000000000..ddc294348c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
@@ -0,0 +1,441 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Support functions for managing command queues used for
+ * various hardware blocks.
+ *
+ * The common command queue infrastructure abstracts out the
+ * software necessary for adding to Octeon's chained queue
+ * structures. These structures are used for commands to the
+ * PKO, ZIP, DFA, RAID, HNA, and DMA engine blocks. Although each
+ * hardware unit takes commands and CSRs of different types,
+ * they all use basic linked command buffers to store the
+ * pending request. In general, users of the CVMX API don't
+ * call cvmx-cmd-queue functions directly. Instead the hardware
+ * unit specific wrapper should be used. The wrappers perform
+ * unit specific validation and CSR writes to submit the
+ * commands.
+ *
+ * Even though most software will never directly interact with
+ * cvmx-cmd-queue, knowledge of its internal workings can help
+ * in diagnosing performance problems and help with debugging.
+ *
+ * Command queue pointers are stored in a global named block
+ * called "cvmx_cmd_queues". Except for the PKO queues, each
+ * hardware queue is stored in its own cache line to reduce SMP
+ * contention on spin locks. The PKO queues are stored such that
+ * every 16th queue is adjacent in memory. This scheme
+ * allows queues to be in separate cache lines when there
+ * are few queues per port. With 16 queues per port,
+ * the first queue for each port is in the same cache area. The
+ * second queues for each port are in another area, etc. This
+ * allows software to implement very efficient lockless PKO with
+ * 16 queues per port using a minimum of cache lines per core.
+ * All queues for a given core will be isolated in the same
+ * cache area.
+ *
+ * In addition to the memory pointer layout, cvmx-cmd-queue
+ * provides an optimized fair ll/sc locking mechanism for the
+ * queues. The lock uses a "ticket / now serving" model to
+ * maintain fair order on contended locks. In addition, it uses
+ * predicted locking time to limit cache contention. When a core
+ * knows it must wait in line for a lock, it spins on the
+ * internal cycle counter to completely eliminate any causes of
+ * bus traffic.
+ */
+
+#ifndef __CVMX_CMD_QUEUE_H__
+#define __CVMX_CMD_QUEUE_H__
+
+/**
+ * By default we disable the max depth support. Most programs
+ * don't use it and it slows down the command queue processing
+ * significantly.
+ */
+#ifndef CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH
+#define CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH 0
+#endif
+
+/**
+ * Enumeration representing all hardware blocks that use command
+ * queues. Each hardware block has up to 65536 sub identifiers for
+ * multiple command queues. Not all chips support all hardware
+ * units.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_PKO_BASE = 0x00000,
+#define CVMX_CMD_QUEUE_PKO(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_PKO_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_ZIP = 0x10000,
+#define CVMX_CMD_QUEUE_ZIP_QUE(queue)                                                              \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_ZIP + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_DFA = 0x20000,
+	CVMX_CMD_QUEUE_RAID = 0x30000,
+	CVMX_CMD_QUEUE_DMA_BASE = 0x40000,
+#define CVMX_CMD_QUEUE_DMA(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_DMA_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_BCH = 0x50000,
+#define CVMX_CMD_QUEUE_BCH(queue) ((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_BCH + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_HNA = 0x60000,
+	CVMX_CMD_QUEUE_END = 0x70000,
+} cvmx_cmd_queue_id_t;
+
+#define CVMX_CMD_QUEUE_ZIP3_QUE(node, queue)                                                       \
+	((cvmx_cmd_queue_id_t)((node) << 24 | CVMX_CMD_QUEUE_ZIP | (0xffff & (queue))))
+
+/**
+ * Command write operations can fail if the command queue needs
+ * a new buffer and the associated FPA pool is empty. It can also
+ * fail if the number of queued command words reaches the maximum
+ * set at initialization.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_SUCCESS = 0,
+	CVMX_CMD_QUEUE_NO_MEMORY = -1,
+	CVMX_CMD_QUEUE_FULL = -2,
+	CVMX_CMD_QUEUE_INVALID_PARAM = -3,
+	CVMX_CMD_QUEUE_ALREADY_SETUP = -4,
+} cvmx_cmd_queue_result_t;
+
+typedef struct {
+	/* First 64-bit word: */
+	u64 fpa_pool : 16;
+	u64 base_paddr : 48;
+	s32 index;
+	u16 max_depth;
+	u16 pool_size_m1;
+} __cvmx_cmd_queue_state_t;
+
+/**
+ * command-queue locking uses a fair ticket spinlock algo,
+ * with 64-bit tickets for endianness-neutrality and
+ * counter overflow protection.
+ * Lock is free when both counters are of equal value.
+ */
+typedef struct {
+	u64 ticket;
+	u64 now_serving;
+} __cvmx_cmd_queue_lock_t;
+
+/**
+ * @INTERNAL
+ * This structure contains the global state of all command queues.
+ * It is stored in a bootmem named block and shared by all
+ * applications running on Octeon. Tickets are stored in a different
+ * cache line than the queue information to reduce contention on the
+ * ll/sc used to get a ticket; otherwise, updates of the queue
+ * state would cause the ll/sc to fail quite often.
+ */
+typedef struct {
+	__cvmx_cmd_queue_lock_t lock[(CVMX_CMD_QUEUE_END >> 16) * 256];
+	__cvmx_cmd_queue_state_t state[(CVMX_CMD_QUEUE_END >> 16) * 256];
+} __cvmx_cmd_queue_all_state_t;
+
+extern __cvmx_cmd_queue_all_state_t *__cvmx_cmd_queue_state_ptrs[CVMX_MAX_NODES];
+
+/**
+ * @INTERNAL
+ * Internal function to handle the corner cases
+ * of adding command words to a queue when the current
+ * block is getting full.
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_write_raw(cvmx_cmd_queue_id_t queue_id,
+						   __cvmx_cmd_queue_state_t *qptr, int cmd_count,
+						   const u64 *cmds);
+
+/**
+ * Initialize a command queue for use. The initial FPA buffer is
+ * allocated and the hardware unit is configured to point to the
+ * new command queue.
+ *
+ * @param queue_id  Hardware command queue to initialize.
+ * @param max_depth Maximum outstanding commands that can be queued.
+ * @param fpa_pool  FPA pool the command queues should come from.
+ * @param pool_size Size of each buffer in the FPA pool (bytes)
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_initialize(cvmx_cmd_queue_id_t queue_id, int max_depth,
+						  int fpa_pool, int pool_size);
+
+/**
+ * Shut down a queue and free its command buffers to the FPA. The
+ * hardware connected to the queue must be stopped before this
+ * function is called.
+ *
+ * @param queue_id Queue to shutdown
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_shutdown(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the number of command words pending in the queue. This
+ * function may be relatively slow for some hardware units.
+ *
+ * @param queue_id Hardware command queue to query
+ *
+ * @return Number of outstanding commands
+ */
+int cvmx_cmd_queue_length(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the command buffer to be written to. The purpose of this
+ * function is to allow CVMX routine access to the low level buffer
+ * for initial hardware setup. User applications should not call this
+ * function directly.
+ *
+ * @param queue_id Command queue to query
+ *
+ * @return Command buffer or NULL on failure
+ */
+void *cvmx_cmd_queue_buffer(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * @INTERNAL
+ * Retrieve or allocate command queue state named block
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(unsigned int node);
+
+/**
+ * @INTERNAL
+ * Get the index into the state arrays for the supplied queue id.
+ *
+ * @param queue_id Queue ID to get an index for
+ *
+ * @return Index into the state arrays
+ */
+static inline unsigned int __cvmx_cmd_queue_get_index(cvmx_cmd_queue_id_t queue_id)
+{
+	/* Warning: This code currently only works with devices that have 256
+	 * queues or fewer.  Devices with more than 16 queues are laid out in
+	 * memory to allow cores quick access to every 16th queue. This reduces
+	 * cache thrashing when running 16 queues per port to support
+	 * lockless operation.
+	 */
+	unsigned int unit = (queue_id >> 16) & 0xff;
+	unsigned int q = (queue_id >> 4) & 0xf;
+	unsigned int core = queue_id & 0xf;
+
+	return (unit << 8) | (core << 4) | q;
+}
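A worked example of that layout: PKO queue 17 decodes to unit 0, q 1, core 1, so

	unsigned int idx = __cvmx_cmd_queue_get_index(CVMX_CMD_QUEUE_PKO(17));
	/* idx == (0 << 8) | (1 << 4) | 1 == 0x011 */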
+
+static inline int __cvmx_cmd_queue_get_node(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int node = queue_id >> 24;
+	return node;
+}
+
+/**
+ * @INTERNAL
+ * Lock the supplied queue so nobody else is updating it at the same
+ * time as us.
+ *
+ * @param queue_id Queue ID to lock
+ *
+ */
+static inline void __cvmx_cmd_queue_lock(cvmx_cmd_queue_id_t queue_id)
+{
+}
+
+/**
+ * @INTERNAL
+ * Unlock the queue, flushing all writes.
+ *
+ * @param queue_id Queue ID to unlock
+ *
+ */
+static inline void __cvmx_cmd_queue_unlock(cvmx_cmd_queue_id_t queue_id)
+{
+	CVMX_SYNCWS; /* nudge out the unlock. */
+}
+
+/**
+ * @INTERNAL
+ * Initialize a command-queue lock to "unlocked" state.
+ */
+static inline void __cvmx_cmd_queue_lock_init(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index = __cvmx_cmd_queue_get_index(queue_id);
+	unsigned int node = __cvmx_cmd_queue_get_node(queue_id);
+
+	__cvmx_cmd_queue_state_ptrs[node]->lock[index] = (__cvmx_cmd_queue_lock_t){ 0, 0 };
+	CVMX_SYNCWS;
+}
+
+/**
+ * @INTERNAL
+ * Get the queue state structure for the given queue id
+ *
+ * @param queue_id Queue id to get
+ *
+ * @return Queue structure or NULL on failure
+ */
+static inline __cvmx_cmd_queue_state_t *__cvmx_cmd_queue_get_state(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index;
+	unsigned int node;
+	__cvmx_cmd_queue_state_t *qptr;
+
+	node = __cvmx_cmd_queue_get_node(queue_id);
+	index = __cvmx_cmd_queue_get_index(queue_id);
+
+	if (cvmx_unlikely(!__cvmx_cmd_queue_state_ptrs[node]))
+		__cvmx_cmd_queue_init_state_ptr(node);
+
+	qptr = &__cvmx_cmd_queue_state_ptrs[node]->state[index];
+	return qptr;
+}
+
+/**
+ * Write an arbitrary number of command words to a command queue.
+ * This is a generic function; the fixed number of command word
+ * functions yield higher performance.
+ *
+ * @param queue_id  Hardware command queue to write to
+ * @param use_locking
+ *                  Use internal locking to ensure exclusive access for queue
+ *                  updates. If you don't use this locking you must ensure
+ *                  exclusivity some other way. Locking is strongly recommended.
+ * @param cmd_count Number of command words to write
+ * @param cmds      Array of commands to write
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write(cvmx_cmd_queue_id_t queue_id, bool use_locking, int cmd_count, const u64 *cmds)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	/* Most of the time there are lots of free words in the current block */
+	if (cvmx_unlikely((qptr->index + cmd_count) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, cmd_count, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		/* Loop easy for compiler to unroll for the likely case */
+		while (cmd_count > 0) {
+			cmd_ptr[qptr->index++] = *cmds++;
+			cmd_count--;
+		}
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
+/**
+ * Simple function to write two command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t cvmx_cmd_queue_write2(cvmx_cmd_queue_id_t queue_id,
+							    bool use_locking, u64 cmd1, u64 cmd2)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 2) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		u64 cmds[2];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 2, cmds);
+	} else {
+		/* Likely case to work fast */
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 2;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
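A sketch of typical use (the FPA pool number and command words are hypothetical, and passing max_depth 0 assumes depth checking stays disabled via CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH; real callers normally go through the hardware-unit wrappers described at the top of this file):

	cvmx_cmd_queue_result_t ret;

	ret = cvmx_cmd_queue_initialize(CVMX_CMD_QUEUE_DMA(0), 0,
					fpa_pool, 1024);
	if (ret == CVMX_CMD_QUEUE_SUCCESS)
		ret = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_DMA(0), true,
					    cmd_word0, cmd_word1);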
+
+/**
+ * Simple function to write three command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ * @param cmd3     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write3(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2, u64 cmd3)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+	u64 *cmd_ptr;
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 3) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		u64 cmds[3];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		cmds[2] = cmd3;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 3, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 3;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+		cmd_ptr[2] = cmd3;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
+#endif /* __CVMX_CMD_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
new file mode 100644
index 0000000000..a8625b4228
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Definitions for enumerations used with Octeon CSRs.
+ */
+
+#ifndef __CVMX_CSR_ENUMS_H__
+#define __CVMX_CSR_ENUMS_H__
+
+typedef enum {
+	CVMX_IPD_OPC_MODE_STT = 0LL,
+	CVMX_IPD_OPC_MODE_STF = 1LL,
+	CVMX_IPD_OPC_MODE_STF1_STT = 2LL,
+	CVMX_IPD_OPC_MODE_STF2_STT = 3LL
+} cvmx_ipd_mode_t;
+
+/**
+ * Enumeration representing the amount of packet processing
+ * and validation performed by the input hardware.
+ */
+typedef enum {
+	CVMX_PIP_PORT_CFG_MODE_NONE = 0ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPL2 = 1ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPIP = 2ull
+} cvmx_pip_port_parse_mode_t;
+
+/**
+ * This enumeration controls how a QoS watcher matches a packet.
+ *
+ * @deprecated  This enumeration was used with cvmx_pip_config_watcher which has
+ *              been deprecated.
+ */
+typedef enum {
+	CVMX_PIP_QOS_WATCH_DISABLE = 0ull,
+	CVMX_PIP_QOS_WATCH_PROTNH = 1ull,
+	CVMX_PIP_QOS_WATCH_TCP = 2ull,
+	CVMX_PIP_QOS_WATCH_UDP = 3ull
+} cvmx_pip_qos_watch_types;
+
+/**
+ * This enumeration is used in PIP tag config to control how
+ * POW tags are generated by the hardware.
+ */
+typedef enum {
+	CVMX_PIP_TAG_MODE_TUPLE = 0ull,
+	CVMX_PIP_TAG_MODE_MASK = 1ull,
+	CVMX_PIP_TAG_MODE_IP_OR_MASK = 2ull,
+	CVMX_PIP_TAG_MODE_TUPLE_XOR_MASK = 3ull
+} cvmx_pip_tag_mode_t;
+
+/**
+ * Tag type definitions
+ */
+typedef enum {
+	CVMX_POW_TAG_TYPE_ORDERED = 0L,
+	CVMX_POW_TAG_TYPE_ATOMIC = 1L,
+	CVMX_POW_TAG_TYPE_NULL = 2L,
+	CVMX_POW_TAG_TYPE_NULL_NULL = 3L
+} cvmx_pow_tag_type_t;
+
+/**
+ * LCR bits 0 and 1 control the number of bits per character. See the following table for encodings:
+ *
+ * - 00 = 5 bits (bits 0-4 sent)
+ * - 01 = 6 bits (bits 0-5 sent)
+ * - 10 = 7 bits (bits 0-6 sent)
+ * - 11 = 8 bits (all bits sent)
+ */
+typedef enum {
+	CVMX_UART_BITS5 = 0,
+	CVMX_UART_BITS6 = 1,
+	CVMX_UART_BITS7 = 2,
+	CVMX_UART_BITS8 = 3
+} cvmx_uart_bits_t;
+
+typedef enum {
+	CVMX_UART_IID_NONE = 1,
+	CVMX_UART_IID_RX_ERROR = 6,
+	CVMX_UART_IID_RX_DATA = 4,
+	CVMX_UART_IID_RX_TIMEOUT = 12,
+	CVMX_UART_IID_TX_EMPTY = 2,
+	CVMX_UART_IID_MODEM = 0,
+	CVMX_UART_IID_BUSY = 7
+} cvmx_uart_iid_t;
+
+#endif /* __CVMX_CSR_ENUMS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr.h b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
new file mode 100644
index 0000000000..730d54bb92
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) address and type definitions for
+ * Octeon.
+ */
+
+#ifndef __CVMX_CSR_H__
+#define __CVMX_CSR_H__
+
+#include "cvmx-csr-enums.h"
+#include "cvmx-pip-defs.h"
+
+typedef cvmx_pip_prt_cfgx_t cvmx_pip_port_cfg_t;
+
+/* The CSRs for bootbus region zero used to be independent of
+    regions 1-7. As of SDK 1.7.0 these were combined. These macros
+    are for backwards compatibility */
+#define CVMX_MIO_BOOT_REG_CFG0 CVMX_MIO_BOOT_REG_CFGX(0)
+#define CVMX_MIO_BOOT_REG_TIM0 CVMX_MIO_BOOT_REG_TIMX(0)
+
+/* The CN3XXX and CN58XX chips used to not have an LMC number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_LMC_BIST_CTL	  CVMX_LMCX_BIST_CTL(0)
+#define CVMX_LMC_BIST_RESULT	  CVMX_LMCX_BIST_RESULT(0)
+#define CVMX_LMC_COMP_CTL	  CVMX_LMCX_COMP_CTL(0)
+#define CVMX_LMC_CTL		  CVMX_LMCX_CTL(0)
+#define CVMX_LMC_CTL1		  CVMX_LMCX_CTL1(0)
+#define CVMX_LMC_DCLK_CNT_HI	  CVMX_LMCX_DCLK_CNT_HI(0)
+#define CVMX_LMC_DCLK_CNT_LO	  CVMX_LMCX_DCLK_CNT_LO(0)
+#define CVMX_LMC_DCLK_CTL	  CVMX_LMCX_DCLK_CTL(0)
+#define CVMX_LMC_DDR2_CTL	  CVMX_LMCX_DDR2_CTL(0)
+#define CVMX_LMC_DELAY_CFG	  CVMX_LMCX_DELAY_CFG(0)
+#define CVMX_LMC_DLL_CTL	  CVMX_LMCX_DLL_CTL(0)
+#define CVMX_LMC_DUAL_MEMCFG	  CVMX_LMCX_DUAL_MEMCFG(0)
+#define CVMX_LMC_ECC_SYND	  CVMX_LMCX_ECC_SYND(0)
+#define CVMX_LMC_FADR		  CVMX_LMCX_FADR(0)
+#define CVMX_LMC_IFB_CNT_HI	  CVMX_LMCX_IFB_CNT_HI(0)
+#define CVMX_LMC_IFB_CNT_LO	  CVMX_LMCX_IFB_CNT_LO(0)
+#define CVMX_LMC_MEM_CFG0	  CVMX_LMCX_MEM_CFG0(0)
+#define CVMX_LMC_MEM_CFG1	  CVMX_LMCX_MEM_CFG1(0)
+#define CVMX_LMC_OPS_CNT_HI	  CVMX_LMCX_OPS_CNT_HI(0)
+#define CVMX_LMC_OPS_CNT_LO	  CVMX_LMCX_OPS_CNT_LO(0)
+#define CVMX_LMC_PLL_BWCTL	  CVMX_LMCX_PLL_BWCTL(0)
+#define CVMX_LMC_PLL_CTL	  CVMX_LMCX_PLL_CTL(0)
+#define CVMX_LMC_PLL_STATUS	  CVMX_LMCX_PLL_STATUS(0)
+#define CVMX_LMC_READ_LEVEL_CTL	  CVMX_LMCX_READ_LEVEL_CTL(0)
+#define CVMX_LMC_READ_LEVEL_DBG	  CVMX_LMCX_READ_LEVEL_DBG(0)
+#define CVMX_LMC_READ_LEVEL_RANKX CVMX_LMCX_READ_LEVEL_RANKX(0)
+#define CVMX_LMC_RODT_COMP_CTL	  CVMX_LMCX_RODT_COMP_CTL(0)
+#define CVMX_LMC_RODT_CTL	  CVMX_LMCX_RODT_CTL(0)
+#define CVMX_LMC_WODT_CTL	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL0	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL1	  CVMX_LMCX_WODT_CTL1(0)
+
+/* The CN3XXX and CN58XX chips used to not have a TWSI bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_MIO_TWS_INT	 CVMX_MIO_TWSX_INT(0)
+#define CVMX_MIO_TWS_SW_TWSI	 CVMX_MIO_TWSX_SW_TWSI(0)
+#define CVMX_MIO_TWS_SW_TWSI_EXT CVMX_MIO_TWSX_SW_TWSI_EXT(0)
+#define CVMX_MIO_TWS_TWSI_SW	 CVMX_MIO_TWSX_TWSI_SW(0)
+
+/* The CN3XXX and CN58XX chips used to not have an SMI/MDIO bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_SMI_CLK	CVMX_SMIX_CLK(0)
+#define CVMX_SMI_CMD	CVMX_SMIX_CMD(0)
+#define CVMX_SMI_EN	CVMX_SMIX_EN(0)
+#define CVMX_SMI_RD_DAT CVMX_SMIX_RD_DAT(0)
+#define CVMX_SMI_WR_DAT CVMX_SMIX_WR_DAT(0)
+
+#endif /* __CVMX_CSR_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-error.h b/arch/mips/mach-octeon/include/mach/cvmx-error.h
new file mode 100644
index 0000000000..9a13ed4224
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-error.h
@@ -0,0 +1,456 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the Octeon extended error status.
+ */
+
+#ifndef __CVMX_ERROR_H__
+#define __CVMX_ERROR_H__
+
+/**
+ * There are generally many error status bits associated with a
+ * single logical group. The enumeration below is used to
+ * communicate high-level groups to the error infrastructure so
+ * error status bits can be enabled or disabled in large groups.
+ */
+typedef enum {
+	CVMX_ERROR_GROUP_INTERNAL,
+	CVMX_ERROR_GROUP_L2C,
+	CVMX_ERROR_GROUP_ETHERNET,
+	CVMX_ERROR_GROUP_MGMT_PORT,
+	CVMX_ERROR_GROUP_PCI,
+	CVMX_ERROR_GROUP_SRIO,
+	CVMX_ERROR_GROUP_USB,
+	CVMX_ERROR_GROUP_LMC,
+	CVMX_ERROR_GROUP_ILK,
+	CVMX_ERROR_GROUP_DFM,
+	CVMX_ERROR_GROUP_ILA,
+} cvmx_error_group_t;
+
+/**
+ * Flags representing special handling for some error registers.
+ * These flags are passed to cvmx_error_initialize() to control
+ * the handling of bits where the same flags were passed to the
+ * added cvmx_error_info_t.
+ */
+typedef enum {
+	CVMX_ERROR_TYPE_NONE = 0,
+	CVMX_ERROR_TYPE_SBE = 1 << 0,
+	CVMX_ERROR_TYPE_DBE = 1 << 1,
+} cvmx_error_type_t;
+
+/**
+ * When registering for interest in an error status register, the
+ * type of the register needs to be known by cvmx-error. Most
+ * registers are either IO64 or IO32, but some blocks contain
+ * registers that can't be directly accessed. A good example
+ * would be the PCIe extended error state stored in config space.
+ */
+typedef enum {
+	__CVMX_ERROR_REGISTER_NONE,
+	CVMX_ERROR_REGISTER_IO64,
+	CVMX_ERROR_REGISTER_IO32,
+	CVMX_ERROR_REGISTER_PCICONFIG,
+	CVMX_ERROR_REGISTER_SRIOMAINT,
+} cvmx_error_register_t;
+
+struct cvmx_error_info;
+/**
+ * Error handling functions must have the following prototype.
+ */
+typedef int (*cvmx_error_func_t)(const struct cvmx_error_info *info);
+
+/**
+ * This structure is passed to all error handling functions.
+ */
+typedef struct cvmx_error_info {
+	cvmx_error_register_t reg_type;
+	u64 status_addr;
+	u64 status_mask;
+	u64 enable_addr;
+	u64 enable_mask;
+	cvmx_error_type_t flags;
+	cvmx_error_group_t group;
+	int group_index;
+	cvmx_error_func_t func;
+	u64 user_info;
+	struct {
+		cvmx_error_register_t reg_type;
+		u64 status_addr;
+		u64 status_mask;
+	} parent;
+} cvmx_error_info_t;
+
+/**
+ * Initialize the error status system. This should be called once
+ * before any other functions are called. This function adds default
+ * handlers for almost all error events but does not enable them. Later
+ * calls to cvmx_error_enable() are needed.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize(void);
+
+/**
+ * Poll the error status registers and call the appropriate error
+ * handlers. This should be called in the RSL interrupt handler
+ * for your application or operating system.
+ *
+ * @return Number of error handlers called. Zero means this call
+ *         found no errors and was spurious.
+ */
+int cvmx_error_poll(void);
+
+/**
+ * Register to be called when an error status bit is set. Most users
+ * will not need to call this function as cvmx_error_initialize()
+ * registers default handlers for most error conditions. This function
+ * is normally used to add more handlers without changing the existing
+ * handlers.
+ *
+ * @param new_info Information about the handler for a error register. The
+ *                 structure passed is copied and can be destroyed after the
+ *                 call. All members of the structure must be populated, even the
+ *                 parent information.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_add(const cvmx_error_info_t *new_info);
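For illustration, registering an additional handler could look like this sketch (the CSR addresses, bit position, and my_handler are hypothetical; note that the parent fields are populated too, as required above):

	int my_handler(const cvmx_error_info_t *info);	/* hypothetical */

	cvmx_error_info_t info = {
		.reg_type    = CVMX_ERROR_REGISTER_IO64,
		.status_addr = status_csr,	/* hypothetical CSR address */
		.status_mask = 1ull << 3,	/* hypothetical error bit */
		.enable_addr = enable_csr,
		.enable_mask = 1ull << 3,
		.flags       = CVMX_ERROR_TYPE_NONE,
		.group       = CVMX_ERROR_GROUP_ETHERNET,
		.group_index = 0,
		.func        = my_handler,
		.user_info   = 0,
		.parent      = { __CVMX_ERROR_REGISTER_NONE, 0, 0 },
	};

	if (cvmx_error_add(&info) < 0)
		printf("cvmx_error_add failed\n");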
+
+/**
+ * Remove all handlers for a status register and mask. Normally
+ * this function should not be called. Instead a new handler should be
+ * installed to replace the existing handler. In the event that all
+ * reporting of an error bit should be removed, use this
+ * function.
+ *
+ * @param reg_type Type of the status register to remove
+ * @param status_addr
+ *                 Status register to remove.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 removed.
+ * @param old_info If not NULL, this is filled with information about the handler
+ *                 that was removed.
+ *
+ * @return Zero on success, negative on failure (not found).
+ */
+int cvmx_error_remove(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+		      cvmx_error_info_t *old_info);
+
+/**
+ * Change the function and user_info for an existing error status
+ * register. This function should be used to replace the default
+ * handler with an application specific version as needed.
+ *
+ * @param reg_type Type of the status register to change
+ * @param status_addr
+ *                 Status register to change.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 changed.
+ * @param new_func New function to use to handle the error status
+ * @param new_user_info
+ *                 New user info parameter for the function
+ * @param old_func If not NULL, the old function is returned. Useful for restoring
+ *                 the old handler.
+ * @param old_user_info
+ *                 If not NULL, the old user info parameter.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_error_change_handler(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+			      cvmx_error_func_t new_func, u64 new_user_info,
+			      cvmx_error_func_t *old_func, u64 *old_user_info);
+
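+/*
+ * Example sketch: swap in an application handler and keep the old one so
+ * it can be restored later. my_handler, MY_STATUS_CSR and the bit mask
+ * are hypothetical placeholders.
+ *
+ *	cvmx_error_func_t old_func;
+ *	u64 old_info;
+ *
+ *	cvmx_error_change_handler(CVMX_ERROR_REGISTER_IO64, MY_STATUS_CSR,
+ *				  1ull << 3, my_handler, 0,
+ *				  &old_func, &old_info);
+ *	... later, restore the previous handler ...
+ *	cvmx_error_change_handler(CVMX_ERROR_REGISTER_IO64, MY_STATUS_CSR,
+ *				  1ull << 3, old_func, old_info, NULL, NULL);
+ */
+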
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalize the calls throughout the executive to not enable
+ * interrupts in U-Boot, simply make the enable function do nothing.
+ */
+static inline int cvmx_error_enable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Disable all error registers for a logical group. This should be
+ * called whenever a logical group is brought offline. Many blocks
+ * will report spurious errors when offline unless this function
+ * is called.
+ *
+ * @param group  Logical group to disable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalize the calls throughout the executive to not disable
+ * interrupts in U-Boot, simply make the disable function do nothing.
+ */
+static inline int cvmx_error_disable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Enable all handlers for a specific status register mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 enabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_enable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * Disable all handlers for a specific status register and mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 disabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_disable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * @INTERNAL
+ * Function for processing non leaf error status registers. This function
+ * calls all handlers for this passed register and all children linked
+ * to it.
+ *
+ * @param info   Error register to check
+ *
+ * @return Number of error status bits found or zero if no bits were set.
+ */
+int __cvmx_error_decode(const cvmx_error_info_t *info);
+
+/**
+ * @INTERNAL
+ * This error bit handler simply prints a message and clears the status bit
+ *
+ * @param info   Error register to check
+ *
+ * @return Non-zero if the error was displayed and the status bit cleared
+ */
+int __cvmx_error_display(const cvmx_error_info_t *info);
+
+/**
+ * Find the handler for a specific status register
+ *
+ * @param status_addr
+ *                Status register address
+ *
+ * @return  Return the handler on success or NULL on failure.
+ */
+cvmx_error_info_t *cvmx_error_get_index(u64 status_addr);
+
+void __cvmx_install_gmx_error_handler_for_xaui(void);
+
+/**
+ * 78xx related
+ */
+/**
+ * Compare two INTSN values.
+ *
+ * @param key INTSN value to search for
+ * @param data current entry from the searched array
+ *
+ * @return Negative, zero or positive when key is less than, equal
+ *		to or greater than data, respectively.
+ */
+int cvmx_error_intsn_cmp(const void *key, const void *data);
+
+/**
+ * @INTERNAL
+ * Display the error information for the given interrupt source number.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number to display
+ *
+ * @return Zero on success, -1 on error
+ */
+int cvmx_error_intsn_display_v3(int node, u32 intsn);
+
+/**
+ * Initialize the error status system for cn78xx. This should be called once
+ * before any other functions are called. This function enables the interrupts
+ * described in the array.
+ *
+ * @param node Node number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize_cn78xx(int node);
+
+/**
+ * Enable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_enable_v3(int node, u32 intsn);
+
+/**
+ * Disable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_disable_v3(int node, u32 intsn);
+
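+/*
+ * Per-INTSN control sketch for cn78xx-class parts; MY_INTSN is a
+ * hypothetical interrupt source number on node 0.
+ *
+ *	cvmx_error_initialize_cn78xx(0);
+ *	cvmx_error_intsn_enable_v3(0, MY_INTSN);
+ *	... handle traffic ...
+ *	cvmx_error_intsn_disable_v3(0, MY_INTSN);
+ */
+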
+/**
+ * Clear interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_clear_v3(int node, u32 intsn);
+
+/**
+ * Enable interrupts for a specific CSR (all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_csr_enable_v3(int node, u64 csr_address);
+
+/**
+ * Disable interrupts for a specific CSR (all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero
+ */
+int cvmx_error_csr_disable_v3(int node, u64 csr_address);
+
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Disable all error registers for a logical group.
+ *
+ * @param group  Logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param type   Category in a logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				    int xipd_port);
+
+/**
+ * Disable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to disable
+ * @param type   Category in a logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				     int xipd_port);
+
+/**
+ * Clear all error registers for a logical group.
+ *
+ * @param group  Logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_clear_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to enable
+ *
+ *@return Zero.
+ */
+int cvmx_error_enable_type_v3(int node, cvmx_error_type_t type);
+
+/**
+ * Disable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to disable
+ *
+ *@return Zero.
+ */
+int cvmx_error_disable_type_v3(int node, cvmx_error_type_t type);
+
+void cvmx_octeon_hang(void) __attribute__((__noreturn__));
+
+/**
+ * @INTERNAL
+ *
+ * Process L2C single and multi-bit ECC errors
+ *
+ */
+int __cvmx_cn7xxx_l2c_l2d_ecc_error_display(int node, int intsn);
+
+/**
+ * Handle L2 cache TAG ECC errors and noway errors
+ *
+ * @param	node	CCPI node
+ * @param	intsn	intsn from error array.
+ * @param	remote	true for remote node (cn78xx only)
+ *
+ * @return	1 if handled, 0 if not handled
+ */
+int __cvmx_cn7xxx_l2c_tag_error_display(int node, int intsn, bool remote);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
new file mode 100644
index 0000000000..297fb3f4a2
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator.
+ */
+
+#ifndef __CVMX_FPA_H__
+#define __CVMX_FPA_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa1.h"
+#include "cvmx-fpa3.h"
+
+#define CVMX_FPA_MIN_BLOCK_SIZE 128
+#define CVMX_FPA_ALIGNMENT	128
+#define CVMX_FPA_POOL_NAME_LEN	16
+
+/* On CN78XX in backward-compatible mode, pool is mapped to AURA */
+#define CVMX_FPA_NUM_POOLS                                                                         \
+	(octeon_has_feature(OCTEON_FEATURE_FPA3) ? cvmx_fpa3_num_auras() : CVMX_FPA1_NUM_POOLS)
+
+/**
+ * Structure to store FPA pool configuration parameters.
+ */
+struct cvmx_fpa_pool_config {
+	s64 pool_num;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+typedef struct cvmx_fpa_pool_config cvmx_fpa_pool_config_t;
+
+/**
+ * Return the name of the pool
+ *
+ * @param pool_num   Pool to get the name of
+ * @return The name
+ */
+const char *cvmx_fpa_get_name(int pool_num);
+
+/**
+ * Initialize FPA per node
+ */
+int cvmx_fpa_global_init_node(int node);
+
+/**
+ * Enable the FPA
+ */
+static inline void cvmx_fpa_enable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_enable();
+	else
+		cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * Disable the FPA
+ */
+static inline void cvmx_fpa_disable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_disable();
+	/* FPA3 does not have a disable function */
+}
+
+/**
+ * @INTERNAL
+ * @deprecated OBSOLETE
+ *
+ * Kept for transition assistance only
+ */
+static inline void cvmx_fpa_global_initialize(void)
+{
+	cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * @INTERNAL
+ *
+ * Convert FPA1 style POOL into FPA3 AURA in
+ * backward compatibility mode.
+ */
+static inline cvmx_fpa3_gaura_t cvmx_fpa1_pool_to_fpa3_aura(cvmx_fpa1_pool_t pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
+		unsigned int node = cvmx_get_node_num();
+		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(node, pool);
+		return aura;
+	}
+	return CVMX_FPA3_INVALID_GAURA;
+}
+
+/**
+ * Get a new block from the FPA
+ *
+ * @param pool   Pool to get the block from
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_alloc(u64 pool)
+{
+	/* FPA3 is handled differently */
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		return cvmx_fpa3_alloc(cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_alloc(pool);
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa_async_alloc(u64 scr_addr, u64 pool)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa3_async_alloc(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		cvmx_fpa1_async_alloc(scr_addr, pool);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_async_alloc_finish(u64 scr_addr, u64 pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		return cvmx_fpa3_async_alloc_finish(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_async_alloc_finish(scr_addr, pool);
+}
+
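+/*
+ * Async allocation pattern sketch: issue the IOBDMA request, overlap
+ * other work, then collect the result. SCR_OFF and POOL are hypothetical;
+ * the scratch offset must be 8 byte aligned.
+ *
+ *	cvmx_fpa_async_alloc(SCR_OFF, POOL);
+ *	... do unrelated work while the allocation is in flight ...
+ *	void *buf = cvmx_fpa_async_alloc_finish(SCR_OFF, POOL);
+ */
+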
+/**
+ * Free a block allocated with a FPA pool.
+ * Does NOT provide memory ordering in cases where the memory block was
+ * modified by the core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free_nosync(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	/* FPA3 is handled differently */
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free_nosync(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free_nosync(ptr, pool, num_cache_lines);
+}
+
+/**
+ * Free a block allocated with a FPA pool.  Provides required memory
+ * ordering in cases where memory block was modified by core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free(ptr, pool, num_cache_lines);
+}
+
+/**
+ * Setup a FPA pool to control a new block of memory.
+ * This can only be called once per pool. Make sure proper
+ * locking enforces this.
+ *
+ * @param pool       Pool to initialize
+ * @param name       Constant character string to name this pool.
+ *                   String is not copied.
+ * @param buffer     Pointer to the block of memory to use. This must be
+ *                   accessible by all processors and external hardware.
+ * @param block_size Size for each block controlled by the FPA
+ * @param num_blocks Number of blocks
+ *
+ * @return the pool number on Success,
+ *         -1 on failure
+ */
+int cvmx_fpa_setup_pool(int pool, const char *name, void *buffer, u64 block_size, u64 num_blocks);
+
+int cvmx_fpa_shutdown_pool(int pool);
+
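+/*
+ * Pool lifecycle sketch. The pool number, buffer memory and sizes are
+ * hypothetical; the buffer must be accessible by all processors and
+ * external hardware, and sized for num_blocks * block_size.
+ *
+ *	int pool = cvmx_fpa_setup_pool(3, "my-pool", buffer, 1024, 256);
+ *	void *block = cvmx_fpa_alloc(pool);
+ *	if (block)
+ *		cvmx_fpa_free(block, pool, 0);
+ *	cvmx_fpa_shutdown_pool(pool);
+ */
+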
+/**
+ * Gets the block size of buffer in specified pool
+ * @param pool	 Pool to get the block size from
+ * @return       Size of buffer in specified pool
+ */
+unsigned int cvmx_fpa_get_block_size(int pool);
+
+int cvmx_fpa_is_pool_available(int pool_num);
+u64 cvmx_fpa_get_pool_owner(int pool_num);
+int cvmx_fpa_get_max_pools(void);
+int cvmx_fpa_get_current_count(int pool_num);
+int cvmx_fpa_validate_pool(int pool);
+
+#endif /* __CVMX_FPA_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
new file mode 100644
index 0000000000..6985083a5d
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator on Octeon chips.
+ * These are the legacy models, i.e. prior to CN78XX/CN76XX.
+ */
+
+#ifndef __CVMX_FPA1_HW_H__
+#define __CVMX_FPA1_HW_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa3.h"
+
+/* Legacy pool range is 0..7, plus pool 8 on CN68XX */
+typedef int cvmx_fpa1_pool_t;
+
+#define CVMX_FPA1_NUM_POOLS    8
+#define CVMX_FPA1_INVALID_POOL ((cvmx_fpa1_pool_t)-1)
+#define CVMX_FPA1_NAME_SIZE    16
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+} cvmx_fpa1_iobdma_data_t;
+
+/**
+ * Allocate or reserve the specified FPA pool.
+ *
+ * @param pool	  FPA pool to allocate/reserve. If -1 it
+ *                finds an empty pool to allocate.
+ * @return        Allocated pool number or CVMX_FPA1_INVALID_POOL
+ *                if it fails to allocate the pool
+ */
+cvmx_fpa1_pool_t cvmx_fpa1_reserve_pool(cvmx_fpa1_pool_t pool);
+
+/**
+ * Free the specified FPA pool.
+ * @param pool	   Pool to free
+ * @return         0 on success, -1 on failure
+ */
+int cvmx_fpa1_release_pool(cvmx_fpa1_pool_t pool);
+
+static inline void cvmx_fpa1_free(void *ptr, cvmx_fpa1_pool_t pool, u64 num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer.  This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+static inline void cvmx_fpa1_free_nosync(void *ptr, cvmx_fpa1_pool_t pool,
+					 unsigned int num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Prevent GCC from reordering around free */
+	asm volatile("" : : : "memory");
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+/**
+ * Enable the FPA for use. Must be performed after any CSR
+ * configuration but before any other FPA functions.
+ */
+static inline void cvmx_fpa1_enable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	if (status.s.enb) {
+		/*
+		 * CN68XXP1 should not reset the FPA (doing so may break
+		 * the SSO), so we may end up enabling it more than once.
+		 * Just return and don't spew messages.
+		 */
+		return;
+	}
+
+	status.u64 = 0;
+	status.s.enb = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+/**
+ * Reset FPA to disable. Make sure buffers from all FPA pools are freed
+ * before disabling FPA.
+ */
+static inline void cvmx_fpa1_disable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
+		return;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	status.s.reset = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+static inline void *cvmx_fpa1_alloc(cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	for (;;) {
+		address = csr_rd(CVMX_ADDR_DID(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool)));
+		if (cvmx_likely(address)) {
+			return cvmx_phys_to_ptr(address);
+		} else {
+			if (csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool)) > 0)
+				udelay(50);
+			else
+				return NULL;
+		}
+	}
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ * @INTERNAL
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa1_async_alloc(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	cvmx_fpa1_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.s.scraddr = scr_addr >> 3;
+	data.s.len = 1;
+	data.s.did = CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool);
+	data.s.addr = 0;
+
+	cvmx_scratch_write64(scr_addr, 0ull);
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ * @INTERNAL
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa1_async_alloc_finish(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		return cvmx_fpa1_alloc(pool);
+}
+
+static inline u64 cvmx_fpa1_get_available(cvmx_fpa1_pool_t pool)
+{
+	return csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool));
+}
+
+#endif /* __CVMX_FPA1_HW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
new file mode 100644
index 0000000000..229982b831
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
@@ -0,0 +1,566 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the CN78XX Free Pool Allocator, a.k.a. FPA3
+ */
+
+#include "cvmx-address.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-scratch.h"
+
+#ifndef __CVMX_FPA3_H__
+#define __CVMX_FPA3_H__
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 2;
+	unsigned lpool : 6;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_pool_t;
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 6;
+	unsigned laura : 10;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_gaura_t;
+
+#define CVMX_FPA3_VALID_MAGIC	0xf9a3
+#define CVMX_FPA3_INVALID_GAURA ((cvmx_fpa3_gaura_t){ 0, 0, 0, 0, 0 })
+#define CVMX_FPA3_INVALID_POOL	((cvmx_fpa3_pool_t){ 0, 0, 0, 0, 0 })
+
+static inline bool __cvmx_fpa3_aura_valid(cvmx_fpa3_gaura_t aura)
+{
+	if (aura.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline bool __cvmx_fpa3_pool_valid(cvmx_fpa3_pool_t pool)
+{
+	if (pool.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline cvmx_fpa3_gaura_t __cvmx_fpa3_gaura(int node, int laura)
+{
+	cvmx_fpa3_gaura_t aura;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (laura < 0)
+		return CVMX_FPA3_INVALID_GAURA;
+
+	aura.node = node;
+	aura.laura = laura;
+	aura.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return aura;
+}
+
+static inline cvmx_fpa3_pool_t __cvmx_fpa3_pool(int node, int lpool)
+{
+	cvmx_fpa3_pool_t pool;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (lpool < 0)
+		return CVMX_FPA3_INVALID_POOL;
+
+	pool.node = node;
+	pool.lpool = lpool;
+	pool.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return pool;
+}
+
+#undef CVMX_FPA3_VALID_MAGIC
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	} cn78xx;
+} cvmx_fpa3_iobdma_data_t;
+
+/**
+ * Struct describing load allocate operation addresses for FPA pool.
+ */
+union cvmx_fpa3_load_data {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	};
+};
+
+typedef union cvmx_fpa3_load_data cvmx_fpa3_load_data_t;
+
+/**
+ * Struct describing store free operation addresses from FPA pool.
+ */
+union cvmx_fpa3_store_addr {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved2 : 10;
+		u64 aura : 10;
+		u64 fabs : 1;
+		u64 reserved3 : 3;
+		u64 dwb_count : 9;
+		u64 reserved4 : 3;
+	};
+};
+
+typedef union cvmx_fpa3_store_addr cvmx_fpa3_store_addr_t;
+
+enum cvmx_fpa3_pool_alignment_e {
+	FPA_NATURAL_ALIGNMENT,
+	FPA_OFFSET_ALIGNMENT,
+	FPA_OPAQUE_ALIGNMENT
+};
+
+#define CVMX_FPA3_AURAX_LIMIT_MAX ((1ull << 40) - 1)
+
+/**
+ * @INTERNAL
+ * Accessor functions to return number of POOLS in an FPA3
+ * depending on SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_pools(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 32;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 32;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * @INTERNAL
+ * Accessor functions to return number of AURAS in an FPA3
+ * depending on SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_auras(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1024;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 512;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 512;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * Get the FPA3 POOL underneath FPA3 AURA, containing all its buffers
+ *
+ */
+static inline cvmx_fpa3_pool_t cvmx_fpa3_aura_to_pool(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_aurax_pool_t aurax_pool;
+
+	aurax_pool.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_POOL(aura.laura));
+
+	pool = __cvmx_fpa3_pool(aura.node, aurax_pool.s.pool);
+	return pool;
+}
+
+/**
+ * Get a new block from the FPA pool
+ *
+ * @param aura  - aura number
+ * @return pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_alloc(cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+	cvmx_fpa3_load_data_t load_addr;
+
+	load_addr.u64 = 0;
+	load_addr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	load_addr.io = 1;
+	load_addr.did = 0x29; /* Device ID. Indicates FPA. */
+	load_addr.node = aura.node;
+	load_addr.red = 0; /* Perform RED on allocation.
+				  * FIXME to use config option
+				  */
+	load_addr.aura = aura.laura;
+
+	address = cvmx_read64_uint64(load_addr.u64);
+	if (!address)
+		return NULL;
+	return cvmx_phys_to_ptr(address);
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param aura     Global aura to get the block from
+ */
+static inline void cvmx_fpa3_async_alloc(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.cn78xx.scraddr = scr_addr >> 3;
+	data.cn78xx.len = 1;
+	data.cn78xx.did = 0x29;
+	data.cn78xx.node = aura.node;
+	data.cn78xx.aura = aura.laura;
+	cvmx_scratch_write64(scr_addr, 0ull);
+
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa3_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param aura Global aura the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_async_alloc_finish(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		/* Try regular alloc if async failed */
+		return cvmx_fpa3_alloc(aura);
+}
+
+/**
+ * Free a pointer back to the pool.
+ *
+ * @param ptr    pointer to the block to free
+ * @param aura   global aura number
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free(void *ptr, cvmx_fpa3_gaura_t aura, unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer. This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+/**
+ * Free a pointer back to the pool without flushing the write buffer.
+ *
+ * @param ptr    pointer to the block to free
+ * @param aura   global aura number
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free_nosync(void *ptr, cvmx_fpa3_gaura_t aura,
+					 unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Prevent GCC from reordering writes to (*ptr) */
+	asm volatile("" : : : "memory");
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+static inline int cvmx_fpa3_pool_is_enabled(cvmx_fpa3_pool_t pool)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	if (!__cvmx_fpa3_pool_valid(pool))
+		return -1;
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	return pool_cfg.cn78xx.ena;
+}
+
+static inline int cvmx_fpa3_config_red_params(unsigned int node, int qos_avg_en, int red_lvl_dly,
+					      int avg_dly)
+{
+	cvmx_fpa_gen_cfg_t fpa_cfg;
+	cvmx_fpa_red_delay_t red_delay;
+
+	fpa_cfg.u64 = cvmx_read_csr_node(node, CVMX_FPA_GEN_CFG);
+	fpa_cfg.s.avg_en = qos_avg_en;
+	fpa_cfg.s.lvl_dly = red_lvl_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_GEN_CFG, fpa_cfg.u64);
+
+	red_delay.u64 = cvmx_read_csr_node(node, CVMX_FPA_RED_DELAY);
+	red_delay.s.avg_dly = avg_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_RED_DELAY, red_delay.u64);
+	return 0;
+}
+
+/**
+ * Gets the buffer size of the pool backing the specified aura.
+ *
+ * @param aura Global aura number
+ * @return Returns size of the buffers in the specified pool.
+ */
+static inline int cvmx_fpa3_get_aura_buf_size(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+	int block_size;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	block_size = pool_cfg.cn78xx.buf_size << 7;
+	return block_size;
+}
+
+/**
+ * Return the number of available buffers in an AURA
+ *
+ * @param aura Aura to receive the count for
+ * @return available buffer count
+ */
+static inline long long cvmx_fpa3_get_available(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_available_t avail_reg;
+	cvmx_fpa_aurax_cnt_t cnt_reg;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	long long ret;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	/* Get POOL available buffer count */
+	avail_reg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_AVAILABLE(pool.lpool));
+
+	/* Get AURA current available count */
+	cnt_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT(aura.laura));
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (limit_reg.cn78xx.limit < cnt_reg.cn78xx.cnt)
+		return 0;
+
+	/* Calculate AURA-based buffer allowance */
+	ret = limit_reg.cn78xx.limit - cnt_reg.cn78xx.cnt;
+
+	/* Use POOL real buffer availability when less than allowance */
+	if (ret > (long long)avail_reg.cn78xx.count)
+		ret = avail_reg.cn78xx.count;
+
+	return ret;
+}
+
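+/*
+ * Example sketch: check the remaining buffer allowance before a burst
+ * allocation. The node and aura numbers are hypothetical.
+ *
+ *	cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(0, 5);
+ *	if (cvmx_fpa3_get_available(aura) < 16)
+ *		printf("aura is running low on buffers\n");
+ */
+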
+/**
+ * Configure the QoS parameters of an FPA3 AURA
+ *
+ * @param aura is the FPA3 AURA handle
+ * @param ena_red enables random early discard when outstanding count exceeds 'pass_thresh'
+ * @param pass_thresh is the maximum count to invoke flow control
+ * @param drop_thresh is the count threshold to begin dropping packets
+ * @param ena_bp enables backpressure when outstanding count exceeds 'bp_thresh'
+ * @param bp_thresh is the back-pressure threshold
+ *
+ */
+static inline void cvmx_fpa3_setup_aura_qos(cvmx_fpa3_gaura_t aura, bool ena_red, u64 pass_thresh,
+					    u64 drop_thresh, bool ena_bp, u64 bp_thresh)
+{
+	unsigned int shift = 0;
+	u64 shift_thresh;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	cvmx_fpa_aurax_cnt_levels_t aura_level;
+
+	if (!__cvmx_fpa3_aura_valid(aura))
+		return;
+
+	/* Get AURAX count limit for validation */
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (pass_thresh < 256)
+		pass_thresh = 255;
+
+	if (drop_thresh <= pass_thresh || drop_thresh > limit_reg.cn78xx.limit)
+		drop_thresh = limit_reg.cn78xx.limit;
+
+	if (bp_thresh < 256 || bp_thresh > limit_reg.cn78xx.limit)
+		bp_thresh = limit_reg.cn78xx.limit >> 1;
+
+	shift_thresh = (bp_thresh > drop_thresh) ? bp_thresh : drop_thresh;
+
+	/* Calculate shift so that the largest threshold fits in 8 bits */
+	for (shift = 0; shift < (1 << 6); shift++) {
+		if (0 == ((shift_thresh >> shift) & ~0xffull))
+			break;
+	}
+
+	aura_level.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura));
+	aura_level.s.pass = pass_thresh >> shift;
+	aura_level.s.drop = drop_thresh >> shift;
+	aura_level.s.bp = bp_thresh >> shift;
+	aura_level.s.shift = shift;
+	aura_level.s.red_ena = ena_red;
+	aura_level.s.bp_ena = ena_bp;
+	cvmx_write_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura), aura_level.u64);
+}
+
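+/*
+ * QoS configuration sketch with hypothetical thresholds: enable RED once
+ * 2048 buffers are outstanding, drop at 4096 and assert backpressure at
+ * 3072 (argument order follows the prototype above).
+ *
+ *	cvmx_fpa3_setup_aura_qos(aura, true, 2048, 4096, true, 3072);
+ */
+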
+cvmx_fpa3_gaura_t cvmx_fpa3_reserve_aura(int node, int desired_aura_num);
+int cvmx_fpa3_release_aura(cvmx_fpa3_gaura_t aura);
+cvmx_fpa3_pool_t cvmx_fpa3_reserve_pool(int node, int desired_pool_num);
+int cvmx_fpa3_release_pool(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_is_aura_available(int node, int aura_num);
+int cvmx_fpa3_is_pool_available(int node, int pool_num);
+
+cvmx_fpa3_pool_t cvmx_fpa3_setup_fill_pool(int node, int desired_pool, const char *name,
+					   unsigned int block_size, unsigned int num_blocks,
+					   void *buffer);
+
+/**
+ * Function to attach an aura to an existing pool
+ *
+ * @param pool - configured pool to attach the aura to (the pool handle
+ *               carries the node)
+ * @param desired_aura - aura to use, set to -1 to allocate
+ * @param name - name to register
+ * @param block_size - size of buffers to use
+ * @param num_blocks - number of blocks to allocate
+ *
+ * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on failure
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_set_aura_for_pool(cvmx_fpa3_pool_t pool, int desired_aura,
+					      const char *name, unsigned int block_size,
+					      unsigned int num_blocks);
+
+/**
+ * Function to setup and initialize a pool.
+ *
+ * @param node - configure fpa on this node
+ * @param desired_aura - aura to use, -1 for dynamic allocation
+ * @param name - name to register
+ * @param buffer - memory to use for the pool buffers
+ * @param block_size - size of buffers in pool
+ * @param num_blocks - max number of buffers allowed
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_setup_aura_and_pool(int node, int desired_aura, const char *name,
+						void *buffer, unsigned int block_size,
+						unsigned int num_blocks);
+
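+/*
+ * Combined aura+pool setup sketch; all numbers are hypothetical and
+ * buffer must be large enough for num_blocks buffers of block_size.
+ *
+ *	cvmx_fpa3_gaura_t aura =
+ *		cvmx_fpa3_setup_aura_and_pool(0, -1, "pkt-aura", buffer,
+ *					      2048, 1024);
+ *	if (!__cvmx_fpa3_aura_valid(aura))
+ *		printf("FPA3 aura/pool setup failed\n");
+ */
+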
+int cvmx_fpa3_shutdown_aura_and_pool(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_aura(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_pool(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_pool_name(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_get_pool_buf_size(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_aura_name(cvmx_fpa3_gaura_t aura);
+
+/* FIXME: Need a different macro for stage2 of u-boot */
+
+static inline void cvmx_fpa3_stage2_init(int aura, int pool, u64 stack_paddr, int stacklen,
+					 int buffer_sz, int buf_cnt)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	/* Configure pool stack */
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), stack_paddr + stacklen);
+
+	/* Configure pool with buffer size */
+	pool_cfg.u64 = 0;
+	pool_cfg.cn78xx.nat_align = 1;
+	pool_cfg.cn78xx.buf_size = buffer_sz >> 7;
+	pool_cfg.cn78xx.l_type = 0x2;
+	pool_cfg.cn78xx.ena = 0;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+	/* Reset pool before starting */
+	pool_cfg.cn78xx.ena = 1;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CFG(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CNT_ADD(aura), buf_cnt);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), (u64)pool);
+}
+
+static inline void cvmx_fpa3_stage2_disable(int aura, int pool)
+{
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), 0);
+}
+
+#endif /* __CVMX_FPA3_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
new file mode 100644
index 0000000000..28c32ddbe1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CVMX_GLOBAL_RESOURCES_T_
+#define _CVMX_GLOBAL_RESOURCES_T_
+
+#define CVMX_GLOBAL_RESOURCES_DATA_NAME "cvmx-global-resources"
+
+/* In the macros below, the abbreviation GR stands for global resources. */
+#define CVMX_GR_TAG_INVALID                                                                        \
+	cvmx_get_gr_tag('i', 'n', 'v', 'a', 'l', 'i', 'd', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+/* Tag for PKO queue table range. */
+#define CVMX_GR_TAG_PKO_QUEUES                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'q', 'u', 'e', 'u', 's', '.', '.', \
+			'.')
+/* Tag for a PKO internal ports range. */
+#define CVMX_GR_TAG_PKO_IPORTS                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'i', 'p', 'o', 'r', 't', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FPA                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FAU                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'a', 'u', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_SSO_GRP(n)                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 's', 'o', '_', '0', (n) + '0', '.', '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_TIM(n)                                                                         \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 't', 'i', 'm', '_', (n) + '0', '.', '.', '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTERS(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'u', 's', 't', 'e', 'r', '_', (x + '0'),     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTER_GRP(x)                                                                 \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'g', 'r', 'p', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_STYLE(x)                                                                       \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 't', 'y', 'l', 'e', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_QPG_ENTRY(x)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'q', 'p', 'g', 'e', 't', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_BPID(x)                                                                        \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'b', 'p', 'i', 'd', 's', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_MTAG_IDX(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'm', 't', 'a', 'g', 'x', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_PCAM(x, y, z)                                                                  \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'c', 'a', 'm', '_', (x + '0'), (y + '0'),         \
+			(z + '0'), '.', '.', '.', '.')
+
+#define CVMX_GR_TAG_CIU3_IDT(_n)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 'i', 'd',  \
+			't', '.', '.')
+
+/* Allocation of the 512 SW INTSTs (in the  12 bit SW INTSN space) */
+#define CVMX_GR_TAG_CIU3_SWINTSN(_n)                                                               \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 's', 'w',  \
+			'i', 's', 'n')
+
+#define TAG_INIT_PART(A, B, C, D, E, F, G, H)                                                      \
+	((((u64)(A) & 0xff) << 56) | (((u64)(B) & 0xff) << 48) | (((u64)(C) & 0xff) << 40) |             \
+	 (((u64)(D) & 0xff) << 32) | (((u64)(E) & 0xff) << 24) | (((u64)(F) & 0xff) << 16) |             \
+	 (((u64)(G) & 0xff) << 8) | (((u64)(H) & 0xff)))
+
+struct global_resource_tag {
+	u64 lo;
+	u64 hi;
+};
+
+enum cvmx_resource_err { CVMX_RESOURCE_ALLOC_FAILED = -1, CVMX_RESOURCE_ALREADY_RESERVED = -2 };
+
+/*
+ * @INTERNAL
+ * Creates a tag from the specified characters.
+ */
+static inline struct global_resource_tag cvmx_get_gr_tag(char a, char b, char c, char d, char e,
+							 char f, char g, char h, char i, char j,
+							 char k, char l, char m, char n, char o,
+							 char p)
+{
+	struct global_resource_tag tag;
+
+	tag.lo = TAG_INIT_PART(a, b, c, d, e, f, g, h);
+	tag.hi = TAG_INIT_PART(i, j, k, l, m, n, o, p);
+	return tag;
+}
+
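+/*
+ * Example sketch: an application-private tag built in the same style as
+ * the CVMX_GR_TAG_* macros above ("myapp" is a hypothetical name).
+ *
+ *	#define MY_APP_TAG                                                 \
+ *		cvmx_get_gr_tag('m', 'y', 'a', 'p', 'p', '.', '.', '.',    \
+ *				'.', '.', '.', '.', '.', '.', '.', '.')
+ */
+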
+static inline int cvmx_gr_same_tag(struct global_resource_tag gr1, struct global_resource_tag gr2)
+{
+	return (gr1.hi == gr2.hi) && (gr1.lo == gr2.lo);
+}
+
+/*
+ * @INTERNAL
+ * Creates a global resource range that can hold the specified number of
+ * elements
+ * @param tag is the tag of the range. The tag is created using the method
+ * cvmx_get_gr_tag()
+ * @param nelements is the number of elements to be held in the resource range.
+ */
+int cvmx_create_global_resource_range(struct global_resource_tag tag, int nelements);
+
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag. It
+ * is assumed that prior
+ * to calling this the global resource range has already been created using
+ * cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param alignment specifies the required alignment of the returned base number.
+ * @return returns the base of the allocated range. -1 return value indicates
+ * failure.
+ */
+int cvmx_allocate_global_resource_range(struct global_resource_tag tag, u64 owner, int nelements,
+					int alignment);
+
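+/*
+ * Usage sketch: create a 64-entry range once, then carve out an aligned
+ * block of 8 entries. MY_APP_TAG and my_owner_id are hypothetical.
+ *
+ *	cvmx_create_global_resource_range(MY_APP_TAG, 64);
+ *	int base = cvmx_allocate_global_resource_range(MY_APP_TAG,
+ *						       my_owner_id, 8, 8);
+ *	if (base < 0)
+ *		printf("global resource allocation failed\n");
+ */
+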
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag.
+ * The elements allocated need not be contiguous. It is assumed that prior to
+ * calling this the global resource range has already
+ * been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of the allocated
+ * elements.
+ * @param allocated_elements returns the indexes of the allocated entries.
+ * @return returns 0 on success and -1 on failure.
+ */
+int cvmx_resource_alloc_many(struct global_resource_tag tag, u64 owner, int nelements,
+			     int allocated_elements[]);
+int cvmx_resource_alloc_reverse(struct global_resource_tag, u64 owner);
+/*
+ * @INTERNAL
+ * Reserve nelements starting from base in the global resource range with the
+ * specified tag.
+ * It is assumed that prior to calling this the global resource range has
+ * already been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param base specifies the base start of nelements.
+ * @return returns the base of the allocated range. -1 return value indicates
+ * failure.
+ */
+int cvmx_reserve_global_resource_range(struct global_resource_tag tag, u64 owner, int base,
+				       int nelements);
+/*
+ * @INTERNAL
+ * Free nelements starting at base in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param base is the base number
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_base(struct global_resource_tag tag, int base,
+					      int nelements);
+
+/*
+ * @INTERNAL
+ * Free nelements with the bases specified in bases[] with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param bases is an array containing the bases to be freed.
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_multiple(struct global_resource_tag tag, int bases[],
+					     int nelements);
+/*
+ * @INTERNAL
+ * Free elements from the specified owner in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param owner is the owner of resources that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_owner(struct global_resource_tag tag, int owner);
+
+/*
+ * @INTERNAL
+ * Frees all the global resources that have been created.
+ * For use only from the bootloader, when it shuts down and boots the
+ * application or kernel.
+ */
+int free_global_resources(void);
+
+u64 cvmx_get_global_resource_owner(struct global_resource_tag tag, int base);
+/*
+ * @INTERNAL
+ * Shows the global resource range with the specified tag. Use mainly for debug.
+ */
+void cvmx_show_global_resource_range(struct global_resource_tag tag);
+
+/*
+ * @INTERNAL
+ * Shows all the global resources. Used mainly for debug.
+ */
+void cvmx_global_resources_show(void);
+
+u64 cvmx_allocate_app_id(void);
+u64 cvmx_get_app_id(void);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmx.h b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
new file mode 100644
index 0000000000..2df7da102a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the GMX hardware.
+ */
+
+#ifndef __CVMX_GMX_H__
+#define __CVMX_GMX_H__
+
+/* CSR typedefs have been moved to cvmx-gmx-defs.h */
+
+int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask);
+int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask);
+
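+/*
+ * Example sketch, assuming one mask bit per port: ignore backpressure
+ * from the first four ports of GMX interface 0.
+ *
+ *	cvmx_gmx_set_backpressure_override(0, 0xf);
+ */
+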
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
new file mode 100644
index 0000000000..59772190aa
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ */
+
+/**
+ * @file
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ *
+ */
+
+#ifndef __CVMX_HWFAU_H__
+#define __CVMX_HWFAU_H__
+
+typedef int cvmx_fau_reg64_t;
+typedef int cvmx_fau_reg32_t;
+typedef int cvmx_fau_reg16_t;
+typedef int cvmx_fau_reg8_t;
+
+#define CVMX_FAU_REG_ANY -1
+
+/*
+ * Octeon Fetch and Add Unit (FAU)
+ */
+
+#define CVMX_FAU_LOAD_IO_ADDRESS cvmx_build_io_address(0x1e, 0)
+#define CVMX_FAU_BITS_SCRADDR	 63, 56
+#define CVMX_FAU_BITS_LEN	 55, 48
+#define CVMX_FAU_BITS_INEVAL	 35, 14
+#define CVMX_FAU_BITS_TAGWAIT	 13, 13
+#define CVMX_FAU_BITS_NOADD	 13, 13
+#define CVMX_FAU_BITS_SIZE	 12, 11
+#define CVMX_FAU_BITS_REGISTER	 10, 0
+
+#define CVMX_FAU_MAX_REGISTERS_8 (2048)
+
+typedef enum {
+	CVMX_FAU_OP_SIZE_8 = 0,
+	CVMX_FAU_OP_SIZE_16 = 1,
+	CVMX_FAU_OP_SIZE_32 = 2,
+	CVMX_FAU_OP_SIZE_64 = 3
+} cvmx_fau_op_size_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s64 value : 63;
+} cvmx_fau_tagwait64_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s32 value : 31;
+} cvmx_fau_tagwait32_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s16 value : 15;
+} cvmx_fau_tagwait16_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	int8_t value : 7;
+} cvmx_fau_tagwait8_t;
+
+/**
+ * Asynchronous tagwait return definition. If a timeout occurs,
+ * the error bit will be set. Otherwise the value of the
+ * register before the update will be returned.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 invalid : 1;
+		u64 data : 63; /* unpredictable if invalid is set */
+	} s;
+} cvmx_fau_async_tagwait_result_t;
+
+#define SWIZZLE_8  0
+#define SWIZZLE_16 0
+#define SWIZZLE_32 0
+
+/**
+ * @INTERNAL
+ * Builds a store I/O address for writing to the FAU
+ *
+ * @param noadd  0 = Store value is atomically added to the current value
+ *               1 = Store value is atomically written over the current value
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ *               - Step by 4 for 32 bit access.
+ *               - Step by 8 for 64 bit access.
+ * @return Address to store for atomic update
+ */
+static inline u64 __cvmx_hwfau_store_address(u64 noadd, u64 reg)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_NOADD, noadd) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * @INTERNAL
+ * Builds a I/O address for accessing the FAU
+ *
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @return Address to read from for atomic update
+ */
+static inline u64 __cvmx_hwfau_atomic_address(u64 tagwait, u64 reg, s64 value)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s64 cvmx_hwfau_fetch_and_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	return cvmx_read64_int64(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
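+/*
+ * Example sketch: use FAU register 0 as a 64-bit packet counter; each
+ * call atomically adds 1 and returns the previous value.
+ *
+ *	s64 prev = cvmx_hwfau_fetch_and_add64(0, 1);
+ */
+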
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s32 cvmx_hwfau_fetch_and_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	return cvmx_read64_int32(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline s16 cvmx_hwfau_fetch_and_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	return cvmx_read64_int16(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline int8_t cvmx_hwfau_fetch_and_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	return cvmx_read64_int8(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 64 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 8 for 64 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait64_t cvmx_hwfau_tagwait_fetch_and_add64(cvmx_fau_reg64_t reg,
+								      s64 value)
+{
+	union {
+		u64 i64;
+		cvmx_fau_tagwait64_t t;
+	} result;
+	result.i64 = cvmx_read64_int64(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 32 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 4 for 32 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait32_t cvmx_hwfau_tagwait_fetch_and_add32(cvmx_fau_reg32_t reg,
+								      s32 value)
+{
+	union {
+		u64 i32;
+		cvmx_fau_tagwait32_t t;
+	} result;
+	reg ^= SWIZZLE_32;
+	result.i32 = cvmx_read64_int32(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 16 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait16_t cvmx_hwfau_tagwait_fetch_and_add16(cvmx_fau_reg16_t reg,
+								      s16 value)
+{
+	union {
+		u64 i16;
+		cvmx_fau_tagwait16_t t;
+	} result;
+	reg ^= SWIZZLE_16;
+	result.i16 = cvmx_read64_int16(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 8 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait8_t cvmx_hwfau_tagwait_fetch_and_add8(cvmx_fau_reg8_t reg,
+								    int8_t value)
+{
+	union {
+		u64 i8;
+		cvmx_fau_tagwait8_t t;
+	} result;
+	reg ^= SWIZZLE_8;
+	result.i8 = cvmx_read64_int8(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * @INTERNAL
+ * Builds I/O data for async operations
+ *
+ * @param scraddr Scratch pad byte address to write to.  Must be 8 byte aligned
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param size    The size of the operation:
+ *                - CVMX_FAU_OP_SIZE_8  (0) = 8 bits
+ *                - CVMX_FAU_OP_SIZE_16 (1) = 16 bits
+ *                - CVMX_FAU_OP_SIZE_32 (2) = 32 bits
+ *                - CVMX_FAU_OP_SIZE_64 (3) = 64 bits
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @return Data to write using cvmx_send_single
+ */
+static inline u64 __cvmx_fau_iobdma_data(u64 scraddr, s64 value, u64 tagwait,
+					 cvmx_fau_op_size_t size, u64 reg)
+{
+	return (CVMX_FAU_LOAD_IO_ADDRESS | cvmx_build_bits(CVMX_FAU_BITS_SCRADDR, scraddr >> 3) |
+		cvmx_build_bits(CVMX_FAU_BITS_LEN, 1) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_SIZE, size) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an async atomic 64 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_64, reg));
+}
+
+/**
+ * Perform an async atomic 32 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg, s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg, s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg, int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_8, reg));
+}
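+
+/*
+ * Usage sketch (illustrative only): an asynchronous fetch-and-add
+ * followed by an IOBDMA sync and a scratchpad read. This assumes the
+ * CVMX_SYNCIOBDMA barrier and cvmx_scratch_read64() from the companion
+ * headers; scratch byte offset 0 is an arbitrary example choice.
+ *
+ *	cvmx_hwfau_async_fetch_and_add64(0, reg, 1);
+ *	CVMX_SYNCIOBDMA;
+ *	s64 old = (s64)cvmx_scratch_read64(0);
+ */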
+
+/**
+ * Perform an async atomic 64 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg,
+							    s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_64, reg));
+}
+
+/**
+ * Perform an async atomic 32 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg,
+							    s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg,
+							    s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg,
+							   int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_8, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 64 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/** Allocates a 64bit FAU register.
+ *  @return the base address of the allocated FAU register
+ */
+int cvmx_fau64_alloc(int reserve);
+
+/** Allocates a 32bit FAU register.
+ *  @return the base address of the allocated FAU register
+ */
+int cvmx_fau32_alloc(int reserve);
+
+/** Allocates a 16bit FAU register.
+ *  @return the base address of the allocated FAU register
+ */
+int cvmx_fau16_alloc(int reserve);
+
+/** Allocates an 8bit FAU register.
+ *  @return the base address of the allocated FAU register
+ */
+int cvmx_fau8_alloc(int reserve);
+
+/** Frees the specified FAU register.
+ *  @param address Base address of register to release.
+ *  @return 0 on success; -1 on failure
+ */
+int cvmx_fau_free(int address);
+
+/** Display the FAU register array
+ */
+void cvmx_fau_show(void);
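+
+/*
+ * Usage sketch (illustrative only): allocating a 64 bit FAU register,
+ * counting with it, and releasing it again. The reserve argument of 0
+ * is an example value; error handling is minimal.
+ *
+ *	int reg = cvmx_fau64_alloc(0);
+ *	if (reg >= 0) {
+ *		cvmx_hwfau_atomic_write64(reg, 0);
+ *		cvmx_hwfau_atomic_add64(reg, 1);
+ *		cvmx_fau_free(reg);
+ *	}
+ */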
+
+#endif /* __CVMX_HWFAU_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
new file mode 100644
index 0000000000..459c19bbc0
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Output unit.
+ *
+ * Starting with SDK 1.7.0, the PKO output functions now support
+ * two types of locking. CVMX_PKO_LOCK_ATOMIC_TAG continues to
+ * function similarly to previous SDKs by using POW atomic tags
+ * to preserve ordering and exclusivity. As a new option, you
+ * can now pass CVMX_PKO_LOCK_CMD_QUEUE which uses a ll/sc
+ * memory based locking instead. This locking has the advantage
+ * of not affecting the tag state but doesn't preserve packet
+ * ordering. CVMX_PKO_LOCK_CMD_QUEUE is appropriate in most
+ * generic code while CVMX_PKO_LOCK_NONE should be used
+ * with hand tuned fast path code.
+ *
+ * Some other SDK differences visible with command queuing:
+ * - PKO indexes are no longer stored in the FAU. A large
+ *   percentage of the FAU register block used to be tied up
+ *   maintaining PKO queue pointers. These are now stored in a
+ *   global named block.
+ * - The PKO <b>use_locking</b> parameter can now have a global
+ *   effect. Since all applications use the same named block,
+ *   queue locking correctly applies across all operating
+ *   systems when using CVMX_PKO_LOCK_CMD_QUEUE.
+ * - PKO 3 word commands are now supported. Use
+ *   cvmx_pko_send_packet_finish3().
+ */
+
+#ifndef __CVMX_HWPKO_H__
+#define __CVMX_HWPKO_H__
+
+#include "cvmx-hwfau.h"
+#include "cvmx-fpa.h"
+#include "cvmx-pow.h"
+#include "cvmx-cmd-queue.h"
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+
+/* Adjust the command buffer size by 1 word so that, when only two word PKO
+** commands are used, no command words straddle buffers. The useful values
+** for this are 0 and 1. */
+#define CVMX_PKO_COMMAND_BUFFER_SIZE_ADJUST (1)
+
+#define CVMX_PKO_MAX_OUTPUT_QUEUES_STATIC 256
+#define CVMX_PKO_MAX_OUTPUT_QUEUES                                                                 \
+	((OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) ? 256 : 128)
+#define CVMX_PKO_NUM_OUTPUT_PORTS                                                                  \
+	((OCTEON_IS_MODEL(OCTEON_CN63XX)) ? 44 : (OCTEON_IS_MODEL(OCTEON_CN66XX) ? 48 : 40))
+#define CVMX_PKO_MEM_QUEUE_PTRS_ILLEGAL_PID 63
+#define CVMX_PKO_QUEUE_STATIC_PRIORITY	    9
+#define CVMX_PKO_ILLEGAL_QUEUE		    0xFFFF
+#define CVMX_PKO_MAX_QUEUE_DEPTH	    0
+
+typedef enum {
+	CVMX_PKO_SUCCESS,
+	CVMX_PKO_INVALID_PORT,
+	CVMX_PKO_INVALID_QUEUE,
+	CVMX_PKO_INVALID_PRIORITY,
+	CVMX_PKO_NO_MEMORY,
+	CVMX_PKO_PORT_ALREADY_SETUP,
+	CVMX_PKO_CMD_QUEUE_INIT_ERROR
+} cvmx_pko_return_value_t;
+
+/**
+ * This enumeration represents the different locking modes supported by PKO.
+ */
+typedef enum {
+	CVMX_PKO_LOCK_NONE = 0,
+	CVMX_PKO_LOCK_ATOMIC_TAG = 1,
+	CVMX_PKO_LOCK_CMD_QUEUE = 2,
+} cvmx_pko_lock_t;
+
+typedef struct cvmx_pko_port_status {
+	u32 packets;
+	u64 octets;
+	u64 doorbell;
+} cvmx_pko_port_status_t;
+
+/**
+ * This structure defines the address to use on a packet enqueue
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t mem_space : 2;
+		u64 reserved : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved2 : 4;
+		u64 reserved3 : 15;
+		u64 port : 9;
+		u64 queue : 9;
+		u64 reserved4 : 3;
+	} s;
+} cvmx_pko_doorbell_address_t;
+
+/**
+ * Structure of the first packet output command word.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_fau_op_size_t size1 : 2;
+		cvmx_fau_op_size_t size0 : 2;
+		u64 subone1 : 1;
+		u64 reg1 : 11;
+		u64 subone0 : 1;
+		u64 reg0 : 11;
+		u64 le : 1;
+		u64 n2 : 1;
+		u64 wqp : 1;
+		u64 rsp : 1;
+		u64 gather : 1;
+		u64 ipoffp1 : 7;
+		u64 ignore_i : 1;
+		u64 dontfree : 1;
+		u64 segs : 6;
+		u64 total_bytes : 16;
+	} s;
+} cvmx_pko_command_word0_t;
+
+/**
+ * Call before any other calls to initialize the packet
+ * output system.
+ */
+
+void cvmx_pko_hw_init(u8 pool, unsigned int bufsize);
+
+/**
+ * Enables the packet output hardware. It must already be
+ * configured.
+ */
+void cvmx_pko_enable(void);
+
+/**
+ * Disables the packet output. Does not affect any configuration.
+ */
+void cvmx_pko_disable(void);
+
+/**
+ * Shutdown and free resources required by packet output.
+ */
+
+void cvmx_pko_shutdown(void);
+
+/**
+ * Configure an output port and the associated queues for use.
+ *
+ * @param port       Port to configure.
+ * @param base_queue First queue number to associate with this port.
+ * @param num_queues Number of queues to associate with this port
+ * @param priority   Array of priority levels for each queue. Values are
+ *                   allowed to be 1-8. A value of 8 gets 8 times the traffic
+ *                   of a value of 1. There must be num_queues elements in the
+ *                   array.
+ */
+cvmx_pko_return_value_t cvmx_pko_config_port(int port, int base_queue, int num_queues,
+					     const u8 priority[]);
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_doorbell_pkoid() directly if the PKO port identifier is
+ * known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell(u64 ipd_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+	u64 pko_port;
+
+	pko_port = ipd_port;
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Prepare to send a packet.  This may initiate a tag switch to
+ * get exclusive access to the output queue structure, and
+ * performs other prep work for the packet send operation.
+ *
+ * cvmx_pko_send_packet_finish() MUST be called after this function is called,
+ * and must be called with the same port/queue/use_locking arguments.
+ *
+ * The use_locking parameter allows the caller to use three
+ * possible locking modes.
+ * - CVMX_PKO_LOCK_NONE
+ *      - PKO doesn't do any locking. It is the responsibility
+ *          of the application to make sure that no other core
+ *          is accessing the same queue at the same time.
+ * - CVMX_PKO_LOCK_ATOMIC_TAG
+ *      - PKO performs an atomic tag switch to ensure exclusive
+ *          access to the output queue. This will maintain
+ *          packet ordering on output.
+ * - CVMX_PKO_LOCK_CMD_QUEUE
+ *      - PKO uses the common command queue locks to ensure
+ *          exclusive access to the output queue. This is a
+ *          memory based ll/sc. This is the most portable
+ *          locking mechanism.
+ *
+ * NOTE: If atomic locking is used, the POW entry CANNOT be
+ * descheduled, as it does not contain a valid WQE pointer.
+ *
+ * @param port   Port to send it on, this can be either IPD port or PKO
+ *		 port.
+ * @param queue  Queue to use
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ */
+static inline void cvmx_pko_send_packet_prepare(u64 port __attribute__((unused)), u64 queue,
+						cvmx_pko_lock_t use_locking)
+{
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG) {
+		/*
+		 * Must do a full switch here to handle all cases.  We use a
+		 * fake WQE pointer, as the POW does not access this memory.
+		 * The WQE pointer and group are only used if this work is
+		 * descheduled, which is not supported by the
+		 * cvmx_pko_send_packet_prepare/cvmx_pko_send_packet_finish
+		 * combination. Note that this is a special case in which these
+		 * fake values can be used - this is not a general technique.
+		 */
+		u32 tag = CVMX_TAG_SW_BITS_INTERNAL << CVMX_TAG_SW_SHIFT |
+			  CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT |
+			  (CVMX_TAG_SUBGROUP_MASK & queue);
+		cvmx_pow_tag_sw_full((cvmx_wqe_t *)cvmx_phys_to_ptr(0x80), tag,
+				     CVMX_POW_TAG_TYPE_ATOMIC, 0);
+	}
+}
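+
+/*
+ * Usage sketch (illustrative only): the prepare/finish pair for a two
+ * word PKO command. port and queue are hypothetical values; real code
+ * derives the queue with cvmx_pko_get_base_queue(port).
+ *
+ *	cvmx_pko_send_packet_prepare(port, queue, CVMX_PKO_LOCK_CMD_QUEUE);
+ *	... build pko_command and packet ...
+ *	if (cvmx_hwpko_send_packet_finish(port, queue, pko_command, packet,
+ *					  CVMX_PKO_LOCK_CMD_QUEUE) !=
+ *	    CVMX_PKO_SUCCESS)
+ *		... handle transmit failure ...
+ */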
+
+#define cvmx_pko_send_packet_prepare_pkoid cvmx_pko_send_packet_prepare
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			      cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish3_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry, or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			       cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Get the first pko_port for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_base_pko_port(int interface, int index);
+
+/**
+ * Get the number of pko_ports for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_num_pko_ports(int interface, int index);
+
+/**
+ * For a given port number, return the base pko output queue
+ * for the port.
+ *
+ * @param port   IPD port number
+ * @return Base output queue
+ */
+int cvmx_pko_get_base_queue(int port);
+
+/**
+ * For a given port number, return the number of pko output queues.
+ *
+ * @param port   IPD port number
+ * @return Number of output queues
+ */
+int cvmx_pko_get_num_queues(int port);
+
+/**
+ * Sets the internal FPA pool data structure for the PKO command queue.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ *
+ * @note the caller is responsible for setting up the pool with
+ * an appropriate buffer size and sufficient buffer count.
+ */
+void cvmx_pko_set_cmd_que_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param ipd_port Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ *
+ * Note:
+ *     - Only the doorbell for the base queue of the ipd_port is
+ *       collected.
+ *     - Retrieving the stats involves writing the index through
+ *       CVMX_PKO_REG_READ_IDX and reading the stat CSRs, in that
+ *       order. It is not MP-safe and caller should guarantee
+ *       atomicity.
+ */
+void cvmx_pko_get_port_status(u64 ipd_port, u64 clear, cvmx_pko_port_status_t *status);
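+
+/*
+ * Usage sketch (illustrative only): read-and-clear the counters of an
+ * IPD port. The caller must serialize against other readers as noted
+ * above.
+ *
+ *	cvmx_pko_port_status_t st;
+ *	cvmx_pko_get_port_status(ipd_port, 1, &st);
+ *	printf("packets=%u octets=%llu\n", st.packets,
+ *	       (unsigned long long)st.octets);
+ */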
+
+/**
+ * Rate limit a PKO port to a max packets/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port      Port to rate limit
+ * @param packets_s Maximum packet/sec
+ * @param burst     Maximum number of packets to burst in a row before rate
+ *                  limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_packets(int port, int packets_s, int burst);
+
+/**
+ * Rate limit a PKO port to a max bits/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port   Port to rate limit
+ * @param bits_s PKO rate limit in bits/sec
+ * @param burst  Maximum number of bits to burst before rate
+ *               limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_bits(int port, u64 bits_s, int burst);
+
+/**
+ * @INTERNAL
+ *
+ * Retrieve the PKO pipe number for a port
+ *
+ * @param interface
+ * @param index
+ *
+ * @return negative on error.
+ *
+ * This applies only to the non-loopback interfaces.
+ *
+ */
+int __cvmx_pko_get_pipe(int interface, int index);
+
+/**
+ * For a given PKO port number, return the base output queue
+ * for the port.
+ *
+ * @param pko_port   PKO port number
+ * @return           Base output queue
+ */
+int cvmx_pko_get_base_queue_pkoid(int pko_port);
+
+/**
+ * For a given PKO port number, return the number of output queues
+ * for the port.
+ *
+ * @param pko_port	PKO port number
+ * @return		the number of output queues
+ */
+int cvmx_pko_get_num_queues_pkoid(int pko_port);
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * @param pko_port   Port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell_pkoid(u64 pko_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   Port to send it on
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish_pkoid(int pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				    cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   The PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry, or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3_pkoid(u64 pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				     cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/*
+ * Obtain the number of PKO commands pending in a queue
+ *
+ * @param queue is the queue identifier to be queried
+ * @return the number of commands pending transmission or -1 on error
+ */
+int cvmx_pko_queue_pend_count(cvmx_cmd_queue_id_t queue);
+
+void cvmx_pko_set_cmd_queue_pool_buffer_count(u64 buffer_count);
+
+#endif /* __CVMX_HWPKO_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
new file mode 100644
index 0000000000..727298352c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file contains defines for the ILK interface
+ */
+
+#ifndef __CVMX_ILK_H__
+#define __CVMX_ILK_H__
+
+/* CSR typedefs have been moved to cvmx-ilk-defs.h */
+
+/*
+ * Note: the value returned here must match the first ilk port in the
+ * ipd_port_map_68xx[] and ipd_port_map_78xx[] arrays.
+ */
+static inline int CVMX_ILK_GBL_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 5;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 6;
+	return -1;
+}
+
+static inline int CVMX_ILK_QLM_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 1;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 4;
+	return -1;
+}
+
+typedef struct {
+	int intf_en : 1;
+	int la_mode : 1;
+	int reserved : 14; /* unused */
+	int lane_speed : 16;
+	/* add more here */
+} cvmx_ilk_intf_t;
+
+#define CVMX_NUM_ILK_INTF 2
+static inline int CVMX_ILK_MAX_LANES(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 8;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 16;
+	return -1;
+}
+
+extern unsigned short cvmx_ilk_lane_mask[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int pipe;
+	unsigned int chan;
+} cvmx_ilk_pipe_chan_t;
+
+#define CVMX_ILK_MAX_PIPES 45
+/* Max number of channels allowed */
+#define CVMX_ILK_MAX_CHANS 256
+
+extern int cvmx_ilk_chans[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int chan;
+	unsigned int pknd;
+} cvmx_ilk_chan_pknd_t;
+
+#define CVMX_ILK_MAX_PKNDS 16 /* must be <45 */
+
+typedef struct {
+	int *chan_list; /* for discrete channels. or, must be null */
+	unsigned int num_chans;
+
+	unsigned int chan_start; /* for continuous channels */
+	unsigned int chan_end;
+	unsigned int chan_step;
+
+	unsigned int clr_on_rd;
+} cvmx_ilk_stats_ctrl_t;
+
+#define CVMX_ILK_MAX_CAL      288
+#define CVMX_ILK_MAX_CAL_IDX  (CVMX_ILK_MAX_CAL / 8)
+#define CVMX_ILK_TX_MIN_CAL   1
+#define CVMX_ILK_RX_MIN_CAL   1
+#define CVMX_ILK_CAL_GRP_SZ   8
+#define CVMX_ILK_PIPE_BPID_SZ 7
+#define CVMX_ILK_ENT_CTRL_SZ  2
+#define CVMX_ILK_RX_FIFO_WM   0x200
+
+typedef enum { PIPE_BPID = 0, LINK, XOFF, XON } cvmx_ilk_cal_ent_ctrl_t;
+
+typedef struct {
+	unsigned char pipe_bpid;
+	cvmx_ilk_cal_ent_ctrl_t ent_ctrl;
+} cvmx_ilk_cal_entry_t;
+
+typedef enum { CVMX_ILK_LPBK_DISA = 0, CVMX_ILK_LPBK_ENA } cvmx_ilk_lpbk_ena_t;
+
+typedef enum { CVMX_ILK_LPBK_INT = 0, CVMX_ILK_LPBK_EXT } cvmx_ilk_lpbk_mode_t;
+
+/**
+ * This header is placed in front of all received ILK look-aside mode packets
+ */
+typedef union {
+	u64 u64;
+
+	struct {
+		u32 reserved_63_57 : 7;	  /* bits 63...57 */
+		u32 nsp_cmd : 5;	  /* bits 56...52 */
+		u32 nsp_flags : 4;	  /* bits 51...48 */
+		u32 nsp_grp_id_upper : 6; /* bits 47...42 */
+		u32 reserved_41_40 : 2;	  /* bits 41...40 */
+		/* Protocol type, 1 for LA mode packet */
+		u32 la_mode : 1;	  /* bit  39      */
+		u32 nsp_grp_id_lower : 2; /* bits 38...37 */
+		u32 nsp_xid_upper : 4;	  /* bits 36...33 */
+		/* ILK channel number, 0 or 1 */
+		u32 ilk_channel : 1;   /* bit  32      */
+		u32 nsp_xid_lower : 8; /* bits 31...24 */
+		/* Unpredictable, may be any value */
+		u32 reserved_23_0 : 24; /* bits 23...0  */
+	} s;
+} cvmx_ilk_la_nsp_compact_hdr_t;
+
+typedef struct cvmx_ilk_LA_mode_struct {
+	int ilk_LA_mode;
+	int ilk_LA_mode_cal_ena;
+} cvmx_ilk_LA_mode_t;
+
+extern cvmx_ilk_LA_mode_t cvmx_ilk_LA_mode[CVMX_NUM_ILK_INTF];
+
+int cvmx_ilk_use_la_mode(int interface, int channel);
+int cvmx_ilk_start_interface(int interface, unsigned short num_lanes);
+int cvmx_ilk_start_interface_la(int interface, unsigned char num_lanes);
+int cvmx_ilk_set_pipe(int interface, int pipe_base, unsigned int pipe_len);
+int cvmx_ilk_tx_set_channel(int interface, cvmx_ilk_pipe_chan_t *pch, unsigned int num_chs);
+int cvmx_ilk_rx_set_pknd(int interface, cvmx_ilk_chan_pknd_t *chpknd, unsigned int num_pknd);
+int cvmx_ilk_enable(int interface);
+int cvmx_ilk_disable(int interface);
+int cvmx_ilk_get_intf_ena(int interface);
+int cvmx_ilk_get_chan_info(int interface, unsigned char **chans, unsigned char *num_chan);
+cvmx_ilk_la_nsp_compact_hdr_t cvmx_ilk_enable_la_header(int ipd_port, int mode);
+void cvmx_ilk_show_stats(int interface, cvmx_ilk_stats_ctrl_t *pstats);
+int cvmx_ilk_cal_setup_rx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent, int hi_wm,
+			  unsigned char cal_ena);
+int cvmx_ilk_cal_setup_tx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent,
+			  unsigned char cal_ena);
+int cvmx_ilk_lpbk(int interface, cvmx_ilk_lpbk_ena_t enable, cvmx_ilk_lpbk_mode_t mode);
+int cvmx_ilk_la_mode_enable_rx_calendar(int interface);
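+
+/*
+ * Usage sketch (illustrative only): minimal bring-up of ILK interface 0
+ * with 8 lanes. Both values are examples; the channel and pknd
+ * programming that normally follows is elided.
+ *
+ *	if (cvmx_ilk_start_interface(0, 8) == 0)
+ *		cvmx_ilk_enable(0);
+ */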
+
+#endif /* __CVMX_ILK_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
new file mode 100644
index 0000000000..cdff36fffb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Input Packet Data unit.
+ */
+
+#ifndef __CVMX_IPD_H__
+#define __CVMX_IPD_H__
+
+#include "cvmx-pki.h"
+
+/* CSR typedefs have been moved to cvmx-ipd-defs.h */
+
+typedef cvmx_ipd_1st_mbuff_skip_t cvmx_ipd_mbuff_not_first_skip_t;
+typedef cvmx_ipd_1st_next_ptr_back_t cvmx_ipd_second_next_ptr_back_t;
+
+typedef struct cvmx_ipd_tag_fields {
+	u64 ipv6_src_ip : 1;
+	u64 ipv6_dst_ip : 1;
+	u64 ipv6_src_port : 1;
+	u64 ipv6_dst_port : 1;
+	u64 ipv6_next_header : 1;
+	u64 ipv4_src_ip : 1;
+	u64 ipv4_dst_ip : 1;
+	u64 ipv4_src_port : 1;
+	u64 ipv4_dst_port : 1;
+	u64 ipv4_protocol : 1;
+	u64 input_port : 1;
+} cvmx_ipd_tag_fields_t;
+
+typedef struct cvmx_pip_port_config {
+	u64 parse_mode;
+	u64 tag_type;
+	u64 tag_mode;
+	cvmx_ipd_tag_fields_t tag_fields;
+} cvmx_pip_port_config_t;
+
+typedef struct cvmx_ipd_config_struct {
+	u64 first_mbuf_skip;
+	u64 not_first_mbuf_skip;
+	u64 ipd_enable;
+	u64 enable_len_M8_fix;
+	u64 cache_mode;
+	cvmx_fpa_pool_config_t packet_pool;
+	cvmx_fpa_pool_config_t wqe_pool;
+	cvmx_pip_port_config_t port_config;
+} cvmx_ipd_config_t;
+
+extern cvmx_ipd_config_t cvmx_ipd_cfg;
+
+/**
+ * Gets the fpa pool number of packet pool
+ */
+static inline s64 cvmx_fpa_get_packet_pool(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of packet pool buffer
+ */
+static inline u64 cvmx_fpa_get_packet_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of packet pool
+ */
+static inline u64 cvmx_fpa_get_packet_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_count);
+}
+
+/**
+ * Gets the fpa pool number of wqe pool
+ */
+static inline s64 cvmx_fpa_get_wqe_pool(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of wqe pool buffer
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of wqe pool
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_count);
+}
+
+/**
+ * Sets the IPD related configuration in an internal structure which is then
+ * used for setting up the IPD hardware block.
+ */
+int cvmx_ipd_set_config(cvmx_ipd_config_t ipd_config);
+
+/**
+ * Gets the ipd related configuration from internal structure.
+ */
+void cvmx_ipd_get_config(cvmx_ipd_config_t *ipd_config);
+
+/**
+ * Sets the internal FPA pool data structure for packet buffer pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Sets the internal FPA pool data structure for wqe pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
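+
+/*
+ * Usage sketch (illustrative only): describing the packet and WQE pools
+ * before IPD setup. Pool numbers, buffer sizes and counts are example
+ * values and must match the board's FPA configuration.
+ *
+ *	cvmx_ipd_set_packet_pool_config(0, 2048, 1024);
+ *	cvmx_ipd_set_wqe_pool_config(1, 128, 1024);
+ */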
+
+/**
+ * Gets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_get_packet_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.packet_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.packet_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.packet_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_packet_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Gets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_get_wqe_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.wqe_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.wqe_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.wqe_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_wqe_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Configure IPD
+ *
+ * @param mbuff_size Packet buffer size in 8 byte words
+ * @param first_mbuff_skip
+ *                   Number of 8 byte words to skip in the first buffer
+ * @param not_first_mbuff_skip
+ *                   Number of 8 byte words to skip in each following buffer
+ * @param first_back Must be same as first_mbuff_skip / 128
+ * @param second_back
+ *                   Must be same as not_first_mbuff_skip / 128
+ * @param wqe_fpa_pool
+ *                   FPA pool to get work entries from
+ * @param cache_mode
+ * @param back_pres_enable_flag
+ *                   Enable or disable port back pressure at a global level.
+ *                   This should always be 1 as more accurate control can be
+ *                   found in IPD_PORTX_BP_PAGE_CNT[BP_ENB].
+ */
+void cvmx_ipd_config(u64 mbuff_size, u64 first_mbuff_skip, u64 not_first_mbuff_skip, u64 first_back,
+		     u64 second_back, u64 wqe_fpa_pool, cvmx_ipd_mode_t cache_mode,
+		     u64 back_pres_enable_flag);
+/**
+ * Enable IPD
+ */
+void cvmx_ipd_enable(void);
+
+/**
+ * Disable IPD
+ */
+void cvmx_ipd_disable(void);
+
+void __cvmx_ipd_free_ptr(void);
+
+void cvmx_ipd_set_packet_pool_buffer_count(u64 buffer_count);
+void cvmx_ipd_set_wqe_pool_buffer_count(u64 buffer_count);
+
+/**
+ * Setup Random Early Drop on a specific input queue
+ *
+ * @param queue  Input queue to set up RED on (0-7)
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red_queue(int queue, int pass_thresh, int drop_thresh);
+
+/**
+ * Setup Random Early Drop to automatically begin dropping packets.
+ *
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red(int pass_thresh, int drop_thresh);
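+
+/*
+ * Usage sketch (illustrative only): enable global RED with example
+ * thresholds. Dropping starts gradually below 1024 free buffers and
+ * becomes total below 128.
+ *
+ *	if (cvmx_ipd_setup_red(1024, 128) != 0)
+ *		... RED not supported in this configuration ...
+ */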
+
+#endif /*  __CVMX_IPD_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-packet.h b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
new file mode 100644
index 0000000000..f3cfe9c64f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Packet buffer defines.
+ */
+
+#ifndef __CVMX_PACKET_H__
+#define __CVMX_PACKET_H__
+
+union cvmx_buf_ptr_pki {
+	u64 u64;
+	struct {
+		u64 size : 16;
+		u64 packet_outside_wqe : 1;
+		u64 rsvd0 : 5;
+		u64 addr : 42;
+	};
+};
+
+typedef union cvmx_buf_ptr_pki cvmx_buf_ptr_pki_t;
+
+/**
+ * This structure defines a buffer pointer on Octeon
+ */
+union cvmx_buf_ptr {
+	void *ptr;
+	u64 u64;
+	struct {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 addr : 40;
+	} s;
+};
+
+typedef union cvmx_buf_ptr cvmx_buf_ptr_t;
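+
+/*
+ * Usage sketch (illustrative only): recovering the start of the
+ * underlying buffer from a cvmx_buf_ptr_t. "back" counts 128 byte
+ * blocks from the (128 byte aligned) data address back to the buffer
+ * start; cvmx_phys_to_ptr() is assumed from the companion headers.
+ *
+ *	u64 start_phys = (((u64)buf.s.addr >> 7) - buf.s.back) << 7;
+ *	void *start = cvmx_phys_to_ptr(start_phys);
+ */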
+
+#endif /*  __CVMX_PACKET_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcie.h b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
new file mode 100644
index 0000000000..a819196c02
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PCIE_H__
+#define __CVMX_PCIE_H__
+
+#define CVMX_PCIE_MAX_PORTS 4
+#define CVMX_PCIE_PORTS                                                                            \
+	((OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) ?                      \
+		       CVMX_PCIE_MAX_PORTS :                                                             \
+		       (OCTEON_IS_MODEL(OCTEON_CN70XX) ? 3 : 2))
+
+/*
+ * The physical memory base mapped by BAR1.  256MB at the end of the
+ * first 4GB.
+ */
+#define CVMX_PCIE_BAR1_PHYS_BASE ((1ull << 32) - (1ull << 28))
+#define CVMX_PCIE_BAR1_PHYS_SIZE BIT_ULL(28)
+
+/*
+ * The RC base of BAR1.  gen1 has a 39-bit BAR2, gen2 has 41-bit BAR2,
+ * place BAR1 so it is the same for both.
+ */
+#define CVMX_PCIE_BAR1_RC_BASE BIT_ULL(41)
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 1 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 reserved_29_31 : 3;	 /* Must be zero */
+		u64 ty : 1;
+		u64 bus : 8;
+		u64 dev : 5;
+		u64 func : 3;
+		u64 reg : 12;
+	} config;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 2 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 address : 32;	 /* PCIe IO address */
+	} io;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 3-6 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 address : 36;	 /* PCIe Mem address */
+	} mem;
+} cvmx_pcie_address_t;
+
+/**
+ * Return the Core virtual base address for PCIe IO access. IOs are
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_io_base_address(int pcie_port);
+
+/**
+ * Size of the IO address region returned at address
+ * cvmx_pcie_get_io_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the IO window
+ */
+u64 cvmx_pcie_get_io_size(int pcie_port);
+
+/**
+ * Return the Core virtual base address for PCIe MEM access. Memory is
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_mem_base_address(int pcie_port);
+
+/**
+ * Size of the Mem address region returned at address
+ * cvmx_pcie_get_mem_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the Mem window
+ */
+u64 cvmx_pcie_get_mem_size(int pcie_port);
+
+/**
+ * Initialize a PCIe port for use in host (RC) mode. It doesn't enumerate the bus.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_initialize(int pcie_port);
+
+/**
+ * Shutdown a PCIe port and put it in reset
+ *
+ * @param pcie_port PCIe port to shutdown
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_shutdown(int pcie_port);
+
+/**
+ * Read 8bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u8 cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 16bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u16 cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 32bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u32 cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Write 8bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, u8 val);
+
+/**
+ * Write 16bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, u16 val);
+
+/**
+ * Write 32bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, u32 val);
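+
+/*
+ * Usage sketch (illustrative only): bring up a root complex and read
+ * the vendor/device ID word of the device at bus 0, dev 0, func 0
+ * (config offset 0).
+ *
+ *	if (cvmx_pcie_rc_initialize(pcie_port) == 0) {
+ *		u32 id = cvmx_pcie_config_read32(pcie_port, 0, 0, 0, 0);
+ *		...
+ *	}
+ */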
+
+/**
+ * Read a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to read from
+ * @param cfg_offset Address to read
+ *
+ * @return Value read
+ */
+u32 cvmx_pcie_cfgx_read(int pcie_port, u32 cfg_offset);
+u32 cvmx_pcie_cfgx_read_node(int node, int pcie_port, u32 cfg_offset);
+
+/**
+ * Write a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to write to
+ * @param cfg_offset Address to write
+ * @param val        Value to write
+ */
+void cvmx_pcie_cfgx_write(int pcie_port, u32 cfg_offset, u32 val);
+void cvmx_pcie_cfgx_write_node(int node, int pcie_port, u32 cfg_offset, u32 val);
+
+/**
+ * Write a 32bit value to the Octeon NPEI register space
+ *
+ * @param address Address to write to
+ * @param val     Value to write
+ */
+static inline void cvmx_pcie_npei_write32(u64 address, u32 val)
+{
+	cvmx_write64_uint32(address ^ 4, val);
+	cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Read a 32bit value from the Octeon NPEI register space
+ *
+ * @param address Address to read
+ * @return The result
+ */
+static inline u32 cvmx_pcie_npei_read32(u64 address)
+{
+	return cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Initialize a PCIe port for use in target (EP) mode.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_ep_initialize(int pcie_port);
+
+/**
+ * Wait for posted PCIe read/writes to reach the other side of
+ * the internal PCIe switch. This will ensure that core
+ * read/writes are posted before anything after this function
+ * is called. This may be necessary when writing to memory that
+ * will later be read using the DMA/PKT engines.
+ *
+ * @param pcie_port PCIe port to wait for
+ */
+void cvmx_pcie_wait_for_pending(int pcie_port);
+
+/**
+ * Returns whether a PCIe port is in host or target mode.
+ *
+ * @param pcie_port PCIe port number (PEM number)
+ *
+ * @return 0 if PCIe port is in target mode, !0 if in host mode.
+ */
+int cvmx_pcie_is_host_mode(int pcie_port);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip.h b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
new file mode 100644
index 0000000000..013f533fb7
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
@@ -0,0 +1,1080 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Processing unit.
+ */
+
+#ifndef __CVMX_PIP_H__
+#define __CVMX_PIP_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pki.h"
+#include "cvmx-helper-pki.h"
+
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-pki-resources.h"
+
+#define CVMX_PIP_NUM_INPUT_PORTS 46
+#define CVMX_PIP_NUM_WATCHERS	 8
+
+/*
+ * Encodes the different error and exception codes
+ */
+typedef enum {
+	CVMX_PIP_L4_NO_ERR = 0ull,
+	/*        1  = TCP (UDP) packet not long enough to cover TCP (UDP) header */
+	CVMX_PIP_L4_MAL_ERR = 1ull,
+	/*        2  = TCP/UDP checksum failure */
+	CVMX_PIP_CHK_ERR = 2ull,
+	/*        3  = TCP/UDP length check (TCP/UDP length does not match IP length) */
+	CVMX_PIP_L4_LENGTH_ERR = 3ull,
+	/*        4  = illegal TCP/UDP port (either source or dest port is zero) */
+	CVMX_PIP_BAD_PRT_ERR = 4ull,
+	/*        8  = TCP flags = FIN only */
+	CVMX_PIP_TCP_FLG8_ERR = 8ull,
+	/*        9  = TCP flags = 0 */
+	CVMX_PIP_TCP_FLG9_ERR = 9ull,
+	/*        10 = TCP flags = FIN+RST+* */
+	CVMX_PIP_TCP_FLG10_ERR = 10ull,
+	/*        11 = TCP flags = SYN+URG+* */
+	CVMX_PIP_TCP_FLG11_ERR = 11ull,
+	/*        12 = TCP flags = SYN+RST+* */
+	CVMX_PIP_TCP_FLG12_ERR = 12ull,
+	/*        13 = TCP flags = SYN+FIN+* */
+	CVMX_PIP_TCP_FLG13_ERR = 13ull
+} cvmx_pip_l4_err_t;
+
+typedef enum {
+	CVMX_PIP_IP_NO_ERR = 0ull,
+	/*        1 = not IPv4 or IPv6 */
+	CVMX_PIP_NOT_IP = 1ull,
+	/*        2 = IPv4 header checksum violation */
+	CVMX_PIP_IPV4_HDR_CHK = 2ull,
+	/*        3 = malformed (packet not long enough to cover IP hdr) */
+	CVMX_PIP_IP_MAL_HDR = 3ull,
+	/*        4 = malformed (packet not long enough to cover len in IP hdr) */
+	CVMX_PIP_IP_MAL_PKT = 4ull,
+	/*        5 = TTL / hop count equal zero */
+	CVMX_PIP_TTL_HOP = 5ull,
+	/*        6 = IPv4 options / IPv6 early extension headers */
+	CVMX_PIP_OPTS = 6ull
+} cvmx_pip_ip_exc_t;
+
+/**
+ * NOTES
+ *       late collision (data received before collision)
+ *            late collisions cannot be detected by the receiver
+ *            they would appear as JAM bits which would appear as bad FCS
+ *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
+ */
+typedef enum {
+	/**
+	 * No error
+	 */
+	CVMX_PIP_RX_NO_ERR = 0ull,
+
+	/* RGM+SPI: partially received packet (buffering/bandwidth not adequate) */
+	CVMX_PIP_PARTIAL_ERR = 1ull,
+	/* RGM+SPI: receive packet too large and truncated */
+	CVMX_PIP_JABBER_ERR = 2ull,
+	/* RGM: max frame error (pkt len > max frame len) (with FCS error) */
+	CVMX_PIP_OVER_FCS_ERR = 3ull,
+	/* RGM+SPI: max frame error (pkt len > max frame len) */
+	CVMX_PIP_OVER_ERR = 4ull,
+	/* RGM: nibble error (data not byte multiple - 100M and 10M only) */
+	CVMX_PIP_ALIGN_ERR = 5ull,
+	/* RGM: min frame error (pkt len < min frame len) (with FCS error) */
+	CVMX_PIP_UNDER_FCS_ERR = 6ull,
+	/* RGM: FCS error */
+	CVMX_PIP_GMX_FCS_ERR = 7ull,
+	/* RGM+SPI: min frame error (pkt len < min frame len) */
+	CVMX_PIP_UNDER_ERR = 8ull,
+	/* RGM: frame carrier extend error */
+	CVMX_PIP_EXTEND_ERR = 9ull,
+	/* XAUI: packet was terminated with an idle cycle */
+	CVMX_PIP_TERMINATE_ERR = 9ull,
+	/* RGM: length mismatch (len did not match len in L2 length/type) */
+	CVMX_PIP_LENGTH_ERR = 10ull,
+	/* RGM: frame error (some or all data bits marked err) */
+	CVMX_PIP_DAT_ERR = 11ull,
+	/* SPI: DIP4 error */
+	CVMX_PIP_DIP_ERR = 11ull,
+	/* RGM: packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_SKIP_ERR = 12ull,
+	/* RGM: studder error (data not repeated - 100M and 10M only) */
+	CVMX_PIP_NIBBLE_ERR = 13ull,
+	/* RGM+SPI: FCS error */
+	CVMX_PIP_PIP_FCS = 16ull,
+	/* RGM+SPI+PCI: packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_PIP_SKIP_ERR = 17ull,
+	/* RGM+SPI+PCI: malformed L2 (packet not long enough to cover L2 hdr) */
+	CVMX_PIP_PIP_L2_MAL_HDR = 18ull,
+	/* SGMII: PUNY error (packet was 4B or less when FCS stripping is enabled) */
+	CVMX_PIP_PUNY_ERR = 47ull
+} cvmx_pip_rcv_err_t;
+
+/**
+ * This defines the err_code field errors in the work Q entry
+ */
+typedef union {
+	cvmx_pip_l4_err_t l4_err;
+	cvmx_pip_ip_exc_t ip_exc;
+	cvmx_pip_rcv_err_t rcv_err;
+} cvmx_pip_err_t;
+
+/**
+ * Status statistics for a port
+ */
+typedef struct {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+} cvmx_pip_port_status_t;
+
+/**
+ * Definition of the PIP custom header that can be prepended
+ * to a packet by external hardware.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 rawfull : 1;
+		u64 reserved0 : 5;
+		cvmx_pip_port_parse_mode_t parse_mode : 2;
+		u64 reserved1 : 1;
+		u64 skip_len : 7;
+		u64 grpext : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 rs : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pip_pkt_inst_hdr_t;
+
+enum cvmx_pki_pcam_match {
+	CVMX_PKI_PCAM_MATCH_IP,
+	CVMX_PKI_PCAM_MATCH_IPV4,
+	CVMX_PKI_PCAM_MATCH_IPV6,
+	CVMX_PKI_PCAM_MATCH_TCP
+};
+
+/* CSR typedefs have been moved to cvmx-pip-defs.h */
+static inline int cvmx_pip_config_watcher(int index, int type, u16 match, u16 mask, int grp,
+					  int qos)
+{
+	if (index >= CVMX_PIP_NUM_WATCHERS) {
+		debug("ERROR: pip watcher %d exceeds the supported maximum\n", index);
+		return -1;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		/* Store in software for now; program the entry only when the watcher is enabled */
+		if (type == CVMX_PIP_QOS_WATCH_PROTNH) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L3_FLAGS;
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_TCP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x060000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_UDP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x110000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == 0x4 /*CVMX_PIP_QOS_WATCH_ETHERTYPE*/) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+			if (match == 0x8100) {
+				debug("ERROR: default vlan entry already exists, can't set watcher\n");
+				return -1;
+			}
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 4;
+		} else {
+			debug("ERROR: Unsupported watcher type %d\n", type);
+			return -1;
+		}
+		if (grp >= 32) {
+			debug("ERROR: grp %d out of range for backward compat 78xx\n", grp);
+			return -1;
+		}
+		qos_watcher[index].sso_grp = (u8)(grp << 3 | qos);
+		qos_watcher[index].configured = 1;
+	} else {
+		/* Implement it later */
+	}
+	return 0;
+}
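+
+/*
+ * Usage sketch (values are illustrative, not from the original source):
+ * steer TCP packets with destination port 80 into group 2 / QoS 1 via
+ * watcher 0:
+ *
+ *	cvmx_pip_config_watcher(0, CVMX_PIP_QOS_WATCH_TCP, 80, 0xffff, 2, 1);
+ */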
+
+static inline int __cvmx_pip_set_tag_type(int node, int style, int tag_type, int field)
+{
+	struct cvmx_pki_style_config style_cfg;
+	int style_num;
+	int pcam_offset;
+	int bank;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+
+	/* All other style parameters remain same except tag type */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tag_type;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to set tag type\n");
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	if (field == CVMX_PKI_PCAM_MATCH_IP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+		/* Legacy will write to all clusters */
+		bank = 0;
+		pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+							CVMX_PKI_CLUSTER_ALL);
+		if (pcam_offset < 0) {
+			debug("ERROR: pcam entry not available to enable qos watcher\n");
+			cvmx_pki_style_free(node, style_num);
+			return -1;
+		}
+		pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+		pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+		pcam_action.style_add = (u8)(style_num - style);
+		cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input,
+					  pcam_action);
+		field = CVMX_PKI_PCAM_MATCH_IPV6;
+	}
+	if (field == CVMX_PKI_PCAM_MATCH_IPV4) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_IPV6) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x86dd0000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_TCP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_L4_PORT;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x60000;
+		pcam_input.data_mask = 0xff0000;
+		pcam_action.pointer_advance = 0;
+	}
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	bank = pcam_input.field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		return -1;
+	}
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return style_num;
+}
+
+/* Only for legacy internal use */
+static inline int __cvmx_pip_enable_watcher_78xx(int node, int index, int style)
+{
+	struct cvmx_pki_style_config style_cfg;
+	struct cvmx_pki_qpg_config qpg_cfg;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+	int style_num;
+	int qpg_offset;
+	int pcam_offset;
+	int bank;
+
+	if (!qos_watcher[index].configured) {
+		debug("ERROR: qos watcher %d should be configured before enable\n", index);
+		return -1;
+	}
+	/* All other style parameters remain same except grp and qos and qps base */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	cvmx_pki_read_qpg_entry(node, style_cfg.parm_cfg.qpg_base, &qpg_cfg);
+	qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+	qpg_cfg.grp_ok = qos_watcher[index].sso_grp;
+	qpg_cfg.grp_bad = qos_watcher[index].sso_grp;
+	qpg_offset = cvmx_helper_pki_set_qpg_entry(node, &qpg_cfg);
+	if (qpg_offset == -1) {
+		debug("Warning: no new qpg entry available to enable watcher\n");
+		return -1;
+	}
+	/*
+	 * Try to reserve the style; if it is not configured already,
+	 * reserve and configure it.
+	 */
+	style_cfg.parm_cfg.qpg_base = qpg_offset;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to enable qos watcher\n");
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	/* Legacy will write to all clusters */
+	bank = qos_watcher[index].field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	pcam_input.field = qos_watcher[index].field;
+	pcam_input.field_mask = 0xff;
+	pcam_input.data = qos_watcher[index].data;
+	pcam_input.data_mask = qos_watcher[index].data_mask;
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	pcam_action.pointer_advance = qos_watcher[index].advance;
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return 0;
+}
+
+/**
+ * Configure an ethernet input port
+ *
+ * @param ipd_port Port number to configure
+ * @param port_cfg Port hardware configuration
+ * @param port_tag_cfg Port POW tagging configuration
+ */
+static inline void cvmx_pip_config_port(u64 ipd_port, cvmx_pip_prt_cfgx_t port_cfg,
+					cvmx_pip_prt_tagx_t port_tag_cfg)
+{
+	struct cvmx_pki_qpg_config qpg_cfg;
+	int qpg_offset;
+	u8 tcp_tag = 0xff;
+	u8 ip_tag = 0xaa;
+	int style, nstyle, n4style, n6style;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		struct cvmx_pki_port_config pki_prt_cfg;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		cvmx_pki_get_port_config(ipd_port, &pki_prt_cfg);
+		style = pki_prt_cfg.pkind_cfg.initial_style;
+		if (port_cfg.s.ih_pri || port_cfg.s.vlan_len || port_cfg.s.pad_len)
+			debug("Warning: 78xx: use different config for this option\n");
+		pki_prt_cfg.style_cfg.parm_cfg.minmax_sel = port_cfg.s.len_chk_sel;
+		pki_prt_cfg.style_cfg.parm_cfg.lenerr_en = port_cfg.s.lenerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.maxerr_en = port_cfg.s.maxerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.minerr_en = port_cfg.s.minerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.fcs_chk = port_cfg.s.crc_en;
+		if (port_cfg.s.grp_wat || port_cfg.s.qos_wat || port_cfg.s.grp_wat_47 ||
+		    port_cfg.s.qos_wat_47) {
+			u8 group_mask = (u8)(port_cfg.s.grp_wat | (u8)(port_cfg.s.grp_wat_47 << 4));
+			u8 qos_mask = (u8)(port_cfg.s.qos_wat | (u8)(port_cfg.s.qos_wat_47 << 4));
+			int i;
+
+			for (i = 0; i < CVMX_PIP_NUM_WATCHERS; i++) {
+				if ((group_mask & (1 << i)) || (qos_mask & (1 << i)))
+					__cvmx_pip_enable_watcher_78xx(xp.node, i, style);
+			}
+		}
+		if (port_tag_cfg.s.tag_mode) {
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+				cvmx_printf("Warning: mask tag is not supported in 78xx pass1\n");
+			/* TODO: mask tag still needs to be implemented for 78xx */
+		}
+		if (port_cfg.s.tag_inc)
+			debug("Warning: 78xx uses a different method for tag generation\n");
+		pki_prt_cfg.style_cfg.parm_cfg.rawdrp = port_cfg.s.rawdrp;
+		pki_prt_cfg.pkind_cfg.parse_en.inst_hdr = port_cfg.s.inst_hdr;
+		if (port_cfg.s.hg_qos)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_HIGIG;
+		else if (port_cfg.s.qos_vlan)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
+		else if (port_cfg.s.qos_diff)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_DIFFSERV;
+		if (port_cfg.s.qos_vod)
+			debug("Warning: 78xx needs pcam entries installed to achieve qos_vod\n");
+		if (port_cfg.s.qos) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= port_cfg.s.qos;
+			qpg_cfg.grp_bad |= port_cfg.s.qos;
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify qos\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		if (port_tag_cfg.s.grp != pki_dflt_sso_grp[xp.node].group) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_cfg.grp_bad |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify group\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		pki_prt_cfg.pkind_cfg.parse_en.dsa_en = port_cfg.s.dsa_en;
+		pki_prt_cfg.pkind_cfg.parse_en.hg_en = port_cfg.s.higig_en;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_src =
+			port_tag_cfg.s.ip6_src_flag | port_tag_cfg.s.ip4_src_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_dst =
+			port_tag_cfg.s.ip6_dst_flag | port_tag_cfg.s.ip4_dst_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.ip_prot_nexthdr =
+			port_tag_cfg.s.ip6_nxth_flag | port_tag_cfg.s.ip4_pctl_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_src =
+			port_tag_cfg.s.ip6_sprt_flag | port_tag_cfg.s.ip4_sprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_dst =
+			port_tag_cfg.s.ip6_dprt_flag | port_tag_cfg.s.ip4_dprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.input_port = port_tag_cfg.s.inc_prt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.first_vlan = port_tag_cfg.s.inc_vlan;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.second_vlan = port_tag_cfg.s.inc_vs;
+
+		if (port_tag_cfg.s.tcp6_tag_type == port_tag_cfg.s.tcp4_tag_type)
+			tcp_tag = port_tag_cfg.s.tcp6_tag_type;
+		if (port_tag_cfg.s.ip6_tag_type == port_tag_cfg.s.ip4_tag_type)
+			ip_tag = port_tag_cfg.s.ip6_tag_type;
+		pki_prt_cfg.style_cfg.parm_cfg.tag_type =
+			(enum cvmx_sso_tag_type)port_tag_cfg.s.non_tag_type;
+		if (tcp_tag == ip_tag && tcp_tag == port_tag_cfg.s.non_tag_type)
+			pki_prt_cfg.style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tcp_tag;
+		else if (tcp_tag == ip_tag) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for ip6 && ip4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, ip_tag, CVMX_PKI_PCAM_MATCH_IP);
+		} else if (ip_tag == port_tag_cfg.s.non_tag_type) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for tcp6 & tcp4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, tcp_tag, CVMX_PKI_PCAM_MATCH_TCP);
+		} else {
+			if (ip_tag != 0xaa) {
+				nstyle = __cvmx_pip_set_tag_type(xp.node, style, ip_tag,
+								 CVMX_PKI_PCAM_MATCH_IP);
+				if (tcp_tag != 0xff)
+					__cvmx_pip_set_tag_type(xp.node, nstyle, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				else {
+					n4style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV4);
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					n6style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV6);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			} else {
+				n4style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip4_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV4);
+				n6style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip6_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV6);
+				if (tcp_tag != 0xff) {
+					__cvmx_pip_set_tag_type(xp.node, n4style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				} else {
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			}
+		}
+		pki_prt_cfg.style_cfg.parm_cfg.qpg_dis_padd = !port_tag_cfg.s.portadd_en;
+
+		if (port_cfg.s.mode == 0x1)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
+		else if (port_cfg.s.mode == 0x2)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LC_TO_LG;
+		else
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_NOTHING;
+		/*
+		 * This is only for backward compatibility; not all the
+		 * parameters are supported in 78xx.
+		 */
+		cvmx_pki_set_port_config(ipd_port, &pki_prt_cfg);
+	} else {
+		if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+			int interface, index, pknd;
+
+			interface = cvmx_helper_get_interface_num(ipd_port);
+			index = cvmx_helper_get_interface_index_num(ipd_port);
+			pknd = cvmx_helper_get_pknd(interface, index);
+
+			ipd_port = pknd; /* overload port_num with pknd */
+		}
+		csr_wr(CVMX_PIP_PRT_CFGX(ipd_port), port_cfg.u64);
+		csr_wr(CVMX_PIP_PRT_TAGX(ipd_port), port_tag_cfg.u64);
+	}
+}
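+
+/*
+ * Usage sketch (legacy, pre-PKI chips; ipd_port is a hypothetical port
+ * number): enable CRC checking on a port while keeping its other settings:
+ *
+ *	cvmx_pip_prt_cfgx_t cfg;
+ *	cvmx_pip_prt_tagx_t tag;
+ *
+ *	cfg.u64 = csr_rd(CVMX_PIP_PRT_CFGX(ipd_port));
+ *	tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(ipd_port));
+ *	cfg.s.crc_en = 1;
+ *	cvmx_pip_config_port(ipd_port, cfg, tag);
+ */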
+
+/**
+ * Configure the VLAN priority to QoS queue mapping.
+ *
+ * @param vlan_priority
+ *               VLAN priority (0-7)
+ * @param qos    QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_vlan_qos(u64 vlan_priority, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_vlanx_t pip_qos_vlanx;
+
+		pip_qos_vlanx.u64 = 0;
+		pip_qos_vlanx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_VLANX(vlan_priority), pip_qos_vlanx.u64);
+	}
+}
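+
+/*
+ * Example (illustrative mapping): map all eight VLAN priorities so that
+ * the highest priority lands in the lowest-numbered QoS queue:
+ *
+ *	u64 prio;
+ *
+ *	for (prio = 0; prio < 8; prio++)
+ *		cvmx_pip_config_vlan_qos(prio, 7 - prio);
+ */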
+
+/**
+ * Configure the Diffserv to QoS queue mapping.
+ *
+ * @param diffserv Diffserv field value (0-63)
+ * @param qos      QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_diffserv_qos(u64 diffserv, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_diffx_t pip_qos_diffx;
+
+		pip_qos_diffx.u64 = 0;
+		pip_qos_diffx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_DIFFX(diffserv), pip_qos_diffx.u64);
+	}
+}
+
+/**
+ * Get the status counters for a port for older non PKI chips.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_stats(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	cvmx_pip_stat_ctl_t pip_stat_ctl;
+	cvmx_pip_stat0_prtx_t stat0;
+	cvmx_pip_stat1_prtx_t stat1;
+	cvmx_pip_stat2_prtx_t stat2;
+	cvmx_pip_stat3_prtx_t stat3;
+	cvmx_pip_stat4_prtx_t stat4;
+	cvmx_pip_stat5_prtx_t stat5;
+	cvmx_pip_stat6_prtx_t stat6;
+	cvmx_pip_stat7_prtx_t stat7;
+	cvmx_pip_stat8_prtx_t stat8;
+	cvmx_pip_stat9_prtx_t stat9;
+	cvmx_pip_stat10_x_t stat10;
+	cvmx_pip_stat11_x_t stat11;
+	cvmx_pip_stat_inb_pktsx_t pip_stat_inb_pktsx;
+	cvmx_pip_stat_inb_octsx_t pip_stat_inb_octsx;
+	cvmx_pip_stat_inb_errsx_t pip_stat_inb_errsx;
+	int interface = cvmx_helper_get_interface_num(port_num);
+	int index = cvmx_helper_get_interface_index_num(port_num);
+
+	pip_stat_ctl.u64 = 0;
+	pip_stat_ctl.s.rdclr = clear;
+	csr_wr(CVMX_PIP_STAT_CTL, pip_stat_ctl.u64);
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+		/*
+		 * PIP_STAT_CTL[MODE] 0 means pkind.
+		 */
+		stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
+		stat1.u64 = csr_rd(CVMX_PIP_STAT1_X(pknd));
+		stat2.u64 = csr_rd(CVMX_PIP_STAT2_X(pknd));
+		stat3.u64 = csr_rd(CVMX_PIP_STAT3_X(pknd));
+		stat4.u64 = csr_rd(CVMX_PIP_STAT4_X(pknd));
+		stat5.u64 = csr_rd(CVMX_PIP_STAT5_X(pknd));
+		stat6.u64 = csr_rd(CVMX_PIP_STAT6_X(pknd));
+		stat7.u64 = csr_rd(CVMX_PIP_STAT7_X(pknd));
+		stat8.u64 = csr_rd(CVMX_PIP_STAT8_X(pknd));
+		stat9.u64 = csr_rd(CVMX_PIP_STAT9_X(pknd));
+		stat10.u64 = csr_rd(CVMX_PIP_STAT10_X(pknd));
+		stat11.u64 = csr_rd(CVMX_PIP_STAT11_X(pknd));
+	} else {
+		if (port_num >= 40) {
+			stat0.u64 = csr_rd(CVMX_PIP_XSTAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_XSTAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_XSTAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_XSTAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_XSTAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_XSTAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_XSTAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_XSTAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_XSTAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_XSTAT9_PRTX(port_num));
+			if (OCTEON_IS_MODEL(OCTEON_CN6XXX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_XSTAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_XSTAT11_PRTX(port_num));
+			}
+		} else {
+			stat0.u64 = csr_rd(CVMX_PIP_STAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_STAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_STAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_STAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_STAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_STAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_STAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_STAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_STAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_STAT9_PRTX(port_num));
+			if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_STAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_STAT11_PRTX(port_num));
+			}
+		}
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTS_PKNDX(pknd));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTS_PKNDX(pknd));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRS_PKNDX(pknd));
+	} else {
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTSX(port_num));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTSX(port_num));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port_num));
+	}
+
+	status->dropped_octets = stat0.s.drp_octs;
+	status->dropped_packets = stat0.s.drp_pkts;
+	status->octets = stat1.s.octs;
+	status->pci_raw_packets = stat2.s.raw;
+	status->packets = stat2.s.pkts;
+	status->multicast_packets = stat3.s.mcst;
+	status->broadcast_packets = stat3.s.bcst;
+	status->len_64_packets = stat4.s.h64;
+	status->len_65_127_packets = stat4.s.h65to127;
+	status->len_128_255_packets = stat5.s.h128to255;
+	status->len_256_511_packets = stat5.s.h256to511;
+	status->len_512_1023_packets = stat6.s.h512to1023;
+	status->len_1024_1518_packets = stat6.s.h1024to1518;
+	status->len_1519_max_packets = stat7.s.h1519;
+	status->fcs_align_err_packets = stat7.s.fcs;
+	status->runt_packets = stat8.s.undersz;
+	status->runt_crc_packets = stat8.s.frag;
+	status->oversize_packets = stat9.s.oversz;
+	status->oversize_crc_packets = stat9.s.jabber;
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		status->mcast_l2_red_packets = stat10.s.mcast;
+		status->bcast_l2_red_packets = stat10.s.bcast;
+		status->mcast_l3_red_packets = stat11.s.mcast;
+		status->bcast_l3_red_packets = stat11.s.bcast;
+	}
+	status->inb_packets = pip_stat_inb_pktsx.s.pkts;
+	status->inb_octets = pip_stat_inb_octsx.s.octs;
+	status->inb_errors = pip_stat_inb_errsx.s.errs;
+}
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_status(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		unsigned int node = cvmx_get_node_num();
+
+		cvmx_pki_get_port_stats(node, port_num, (struct cvmx_pki_port_stats *)status);
+	} else {
+		cvmx_pip_get_port_stats(port_num, clear, status);
+	}
+}
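+
+/*
+ * Example (hypothetical port number): read and clear the counters for
+ * ipd_port 0 and print the packet count:
+ *
+ *	cvmx_pip_port_status_t st;
+ *
+ *	cvmx_pip_get_port_status(0, 1, &st);
+ *	printf("rx packets: %llu\n", (unsigned long long)st.packets);
+ */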
+
+/**
+ * Configure the hardware CRC engine
+ *
+ * @param interface Interface to configure (0 or 1)
+ * @param invert_result
+ *                 Invert the result of the CRC
+ * @param reflect  Reflect
+ * @param initialization_vector
+ *                 CRC initialization vector
+ */
+static inline void cvmx_pip_config_crc(u64 interface, u64 invert_result, u64 reflect,
+				       u32 initialization_vector)
+{
+	/* Only CN38XX & CN58XX; intentionally a no-op in this port */
+}
+
+/**
+ * Clear all bits in a tag mask. This should be called on
+ * startup before any calls to cvmx_pip_tag_mask_set. Each bit
+ * set in the final mask represent a byte used in the packet for
+ * tag generation.
+ *
+ * @param mask_index Which tag mask to clear (0..3)
+ */
+static inline void cvmx_pip_tag_mask_clear(u64 mask_index)
+{
+	u64 index;
+	cvmx_pip_tag_incx_t pip_tag_incx;
+
+	pip_tag_incx.u64 = 0;
+	pip_tag_incx.s.en = 0;
+	for (index = mask_index * 16; index < (mask_index + 1) * 16; index++)
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+}
+
+/**
+ * Sets a range of bits in the tag mask. The tag mask is used
+ * when the cvmx_pip_port_tag_cfg_t tag_mode is non zero.
+ * There are four separate masks that can be configured.
+ *
+ * @param mask_index Which tag mask to modify (0..3)
+ * @param offset     Offset into the bitmask to set bits at. Use the GCC macro
+ *                   offsetof() to determine the offsets into packet headers.
+ *                   For example, offsetof(ethhdr, protocol) returns the offset
+ *                   of the ethernet protocol field.  The bitmask selects which bytes
+ *                   to include in the tag, with bit offset X selecting the
+ *                   byte at offset X from the beginning of the packet data.
+ * @param len        Number of bytes to include. Usually this is the sizeof()
+ *                   the field.
+ */
+static inline void cvmx_pip_tag_mask_set(u64 mask_index, u64 offset, u64 len)
+{
+	while (len--) {
+		cvmx_pip_tag_incx_t pip_tag_incx;
+		u64 index = mask_index * 16 + offset / 8;
+
+		pip_tag_incx.u64 = csr_rd(CVMX_PIP_TAG_INCX(index));
+		pip_tag_incx.s.en |= 0x80 >> (offset & 0x7);
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+		offset++;
+	}
+}
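+
+/*
+ * Usage sketch, following the offsetof() note above (the ethhdr layout is
+ * assumed, not defined in this file): include the Ethernet type field in
+ * tag mask 0:
+ *
+ *	cvmx_pip_tag_mask_clear(0);
+ *	cvmx_pip_tag_mask_set(0, offsetof(struct ethhdr, h_proto), 2);
+ */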
+
+/**
+ * Set the byte count for the max-sized frame check.
+ *
+ * @param interface   Which interface to set the limit
+ * @param max_size    Byte count for Max-Size frame check
+ */
+static inline void cvmx_pip_set_frame_check(int interface, u32 max_size)
+{
+	cvmx_pip_frm_len_chkx_t frm_len;
+
+	/* If max_size is passed as 0 (or below the minimum), reset it to the default */
+	if (max_size < 1536)
+		max_size = 1536;
+
+	/*
+	 * On CN68XX the frame check is enabled per pkind, and
+	 * PIP_PRT_CFG[len_chk_sel] selects which set of
+	 * MAXLEN/MINLEN to use.
+	 */
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int port;
+		int num_ports = cvmx_helper_ports_on_interface(interface);
+
+		for (port = 0; port < num_ports; port++) {
+			if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+				int ipd_port;
+
+				ipd_port = cvmx_helper_get_ipd_port(interface, port);
+				cvmx_pki_set_max_frm_len(ipd_port, max_size);
+			} else {
+				int pknd;
+				int sel;
+				cvmx_pip_prt_cfgx_t config;
+
+				pknd = cvmx_helper_get_pknd(interface, port);
+				config.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
+				sel = config.s.len_chk_sel;
+				frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(sel));
+				frm_len.s.maxlen = max_size;
+				csr_wr(CVMX_PIP_FRM_LEN_CHKX(sel), frm_len.u64);
+			}
+		}
+	} else if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		/*
+		 * On cn6xxx and cn7xxx models, PIP_FRM_LEN_CHK0 applies
+		 * to all incoming traffic.
+		 */
+		frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(0));
+		frm_len.s.maxlen = max_size;
+		csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
+	}
+}
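+
+/*
+ * Example: allow jumbo frames of up to 9000 bytes on interface 0:
+ *
+ *	cvmx_pip_set_frame_check(0, 9000);
+ */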
+
+/**
+ * Initialize Bit Select Extractor config. There are 8 bit positions and
+ * valid bits to be used with the corresponding extractor.
+ *
+ * @param bit     Bit Select Extractor to use
+ * @param pos     Which position to update
+ * @param val     The value to update the position with
+ */
+static inline void cvmx_pip_set_bsel_pos(int bit, int pos, int val)
+{
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	if (bit < 0 || bit > 3) {
+		debug("ERROR: cvmx_pip_set_bsel_pos: Invalid Bit-Select Extractor (%d) passed\n",
+		      bit);
+		return;
+	}
+
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+	switch (pos) {
+	case 0:
+		bsel_pos.s.pos0_val = 1;
+		bsel_pos.s.pos0 = val & 0x7f;
+		break;
+	case 1:
+		bsel_pos.s.pos1_val = 1;
+		bsel_pos.s.pos1 = val & 0x7f;
+		break;
+	case 2:
+		bsel_pos.s.pos2_val = 1;
+		bsel_pos.s.pos2 = val & 0x7f;
+		break;
+	case 3:
+		bsel_pos.s.pos3_val = 1;
+		bsel_pos.s.pos3 = val & 0x7f;
+		break;
+	case 4:
+		bsel_pos.s.pos4_val = 1;
+		bsel_pos.s.pos4 = val & 0x7f;
+		break;
+	case 5:
+		bsel_pos.s.pos5_val = 1;
+		bsel_pos.s.pos5 = val & 0x7f;
+		break;
+	case 6:
+		bsel_pos.s.pos6_val = 1;
+		bsel_pos.s.pos6 = val & 0x7f;
+		break;
+	case 7:
+		bsel_pos.s.pos7_val = 1;
+		bsel_pos.s.pos7 = val & 0x7f;
+		break;
+	default:
+		debug("Warning: cvmx_pip_set_bsel_pos: Invalid pos(%d)\n", pos);
+		break;
+	}
+	csr_wr(CVMX_PIP_BSEL_EXT_POSX(bit), bsel_pos.u64);
+}
+
+/**
+ * Initialize offset and skip values to use by bit select extractor.
+
+ * @param bit	Bit Select Extractor to use
+ * @param offset	Offset to add to extractor mem addr to get final address
+ *			to lookup table.
+ *			into the lookup table.
+ */
+static inline void cvmx_pip_bsel_config(int bit, int offset, int skip)
+{
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_cfg.s.offset = offset;
+	bsel_cfg.s.skip = skip;
+	csr_wr(CVMX_PIP_BSEL_EXT_CFGX(bit), bsel_cfg.u64);
+}
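+
+/*
+ * Usage sketch (illustrative values): point extractor 0 at table offset
+ * 0x40 and skip the 14-byte Ethernet header before extracting bits:
+ *
+ *	cvmx_pip_bsel_config(0, 0x40, 14);
+ *	cvmx_pip_set_bsel_pos(0, 0, 0);
+ */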
+
+/**
+ * Get the entry for the Bit Select Extractor Table.
+ * @param work   pointer to work queue entry
+ * @return       Index of the Bit Select Extractor Table
+ */
+static inline int cvmx_pip_get_bsel_table_index(cvmx_wqe_t *work)
+{
+	int bit = cvmx_wqe_get_port(work) & 0x3;
+	/* Get the Bit select table index. */
+	int index = 0;
+	int y;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+
+	for (y = 0; y < 8; y++) {
+		char *ptr = (char *)cvmx_phys_to_ptr(work->packet_ptr.s.addr);
+		int bit_loc = 0;
+		int bit_val;
+
+		ptr += bsel_cfg.s.skip;
+		switch (y) {
+		case 0:
+			ptr += (bsel_pos.s.pos0 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos0 & 0x3);
+			break;
+		case 1:
+			ptr += (bsel_pos.s.pos1 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos1 & 0x3);
+			break;
+		case 2:
+			ptr += (bsel_pos.s.pos2 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos2 & 0x3);
+			break;
+		case 3:
+			ptr += (bsel_pos.s.pos3 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos3 & 0x3);
+			break;
+		case 4:
+			ptr += (bsel_pos.s.pos4 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos4 & 0x3);
+			break;
+		case 5:
+			ptr += (bsel_pos.s.pos5 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos5 & 0x3);
+			break;
+		case 6:
+			ptr += (bsel_pos.s.pos6 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos6 & 0x3);
+			break;
+		case 7:
+			ptr += (bsel_pos.s.pos7 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos7 & 0x3);
+			break;
+		}
+		bit_val = (*ptr >> bit_loc) & 1;
+		index |= bit_val << y;
+	}
+	index += bsel_cfg.s.offset;
+	index &= 0x1ff;
+	return index;
+}
+
+static inline int cvmx_pip_get_bsel_qos(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.qos;
+}
+
+static inline int cvmx_pip_get_bsel_grp(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.grp;
+}
+
+static inline int cvmx_pip_get_bsel_tt(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.tt;
+}
+
+static inline int cvmx_pip_get_bsel_tag(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	int port = cvmx_wqe_get_port(work);
+	int bit = port & 0x3;
+	int upper_tag = 0;
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_prt_tagx_t prt_tag;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+
+	prt_tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(port));
+	if (prt_tag.s.inc_prt_flag == 0)
+		upper_tag = bsel_cfg.s.upper_tag;
+	return bsel_tbl.s.tag | ((bsel_cfg.s.tag << 8) & 0xff00) | ((upper_tag << 16) & 0xffff0000);
+}
+
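+/*
+ * Usage sketch: classify a received work queue entry through the bit
+ * select extractor table (the work pointer is assumed to come from the
+ * SSO/POW receive path):
+ *
+ *	int qos = cvmx_pip_get_bsel_qos(work);
+ *	int grp = cvmx_pip_get_bsel_grp(work);
+ *	int tag = cvmx_pip_get_bsel_tag(work);
+ */
+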
+#endif /*  __CVMX_PIP_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
new file mode 100644
index 0000000000..79b99b0bd7
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Resource management for PKI resources.
+ */
+
+#ifndef __CVMX_PKI_RESOURCES_H__
+#define __CVMX_PKI_RESOURCES_H__
+
+/**
+ * This function allocates/reserves a style from the pool of global styles per node.
+ * @param node	 node to allocate style from.
+ * @param style	 style to allocate; if -1, the first available style from
+ *		 the style resource will be allocated. If a positive number
+ *		 in range, it will try to allocate the specified style.
+ * @return	 style number on success, -1 on failure.
+ */
+int cvmx_pki_style_alloc(int node, int style);
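+
+/*
+ * Usage sketch: allocate any available style and release it again (see
+ * cvmx_pki_style_free() further below):
+ *
+ *	int style = cvmx_pki_style_alloc(node, -1);
+ *
+ *	if (style >= 0)
+ *		cvmx_pki_style_free(node, style);
+ */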
+
+/**
+ * This function allocates/reserves a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to allocate cluster group from.
+ * @param cl_grp	cluster group to allocate/reserve; if -1,
+ *			allocate any available cluster group.
+ * @return		cluster group number or -1 on failure
+ */
+int cvmx_pki_cluster_grp_alloc(int node, int cl_grp);
+
+/**
+ * This function allocates/reserves clusters from the per node
+ * cluster resources.
+ * @param node		node to allocate clusters from.
+ * @param num_clusters	number of clusters to allocate.
+ * @param cluster_mask	mask of clusters to allocate/reserve; if -1,
+ *			allocate any available clusters.
+ */
+int cvmx_pki_cluster_alloc(int node, int num_clusters, u64 *cluster_mask);
+
+/**
+ * This function allocates/reserves a pcam entry from node
+ * @param node		node to allocate pcam entry from.
+ * @param index		index of pcam entry (0-191); if -1,
+ *			allocate any available pcam entry.
+ * @param bank		pcam bank to allocate/reserve the pcam entry from
+ * @param cluster_mask  mask of clusters from which pcam entry is needed.
+ * @return		pcam entry index or -1 on failure
+ */
+int cvmx_pki_pcam_entry_alloc(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base_offset in qpg table. If -1, first available
+ *			qpg base_offset will be allocated. If base_offset is a
+ *			positive number and in range, it will try to allocate
+ *			the specified base_offset.
+ * @param count		number of consecutive qpg entries to allocate,
+ *			starting from the base offset.
+ * @return		qpg table base offset number on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_alloc(int node, int base_offset, int count);
+
+/**
+ * This function frees a style from the pool of global styles per node.
+ * @param node	 node to free style from.
+ * @param style	 style to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_style_free(int node, int style);
+
+/**
+ * This function frees a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to free cluster group from.
+ * @param cl_grp	cluster group to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_grp_free(int node, int cl_grp);
+
+/**
+ * This function frees QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base_offset in the qpg table from which entries
+ *			are freed.
+ * @param count		number of consecutive qpg entries to free,
+ *			starting from the base offset.
+ * @return		0 on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_free(int node, int base_offset, int count);
+
+/**
+ * This function frees clusters from the per node
+ * cluster resources.
+ * @param node		node to free clusters from.
+ * @param cluster_mask  mask of clusters to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_free(int node, u64 cluster_mask);
+
+/**
+ * This function frees a pcam entry on a node.
+ * @param node		node to free the pcam entry from.
+ * @param index		index of the pcam entry (0-191) to be freed.
+ * @param bank		pcam bank to free the pcam entry from
+ * @param cluster_mask  mask of clusters from which the pcam entry is freed.
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_pcam_entry_free(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves a bpid from the pool of global bpids per node.
+ * @param node	node to allocate bpid from.
+ * @param bpid	bpid to allocate; if -1, the first available bpid from
+ *		the bpid resource will be allocated. If a positive number
+ *		in range, it will try to allocate the specified bpid.
+ * @return	bpid number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_bpid_alloc(int node, int bpid);
+
+/**
+ * This function frees a bpid from the pool of global bpids per node.
+ * @param node	 node to free bpid from.
+ * @param bpid	 bpid to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_bpid_free(int node, int bpid);
+
+/**
+ * This function allocates/reserves an index from the pool of global MTAG-IDX per node.
+ * @param node	node to allocate index from.
+ * @param idx	index to allocate; if -1, the first available index will
+ *		be allocated.
+ * @return	MTAG index number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_mtag_idx_alloc(int node, int idx);
+
+/**
+ * This function frees an index from the pool of global MTAG-IDX per node.
+ * @param node	 node to free the index from.
+ * @param idx	 index to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_mtag_idx_free(int node, int idx);
+
+/**
+ * This function frees all the PKI software resources
+ * (clusters, styles, qpg_entry, pcam_entry etc) for the specified node.
+ */
+void __cvmx_pki_global_rsrc_free(int node);
+
+#endif /*  __CVMX_PKI_RESOURCES_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
new file mode 100644
index 0000000000..c1feb55a1f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
@@ -0,0 +1,970 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Data unit.
+ */
+
+#ifndef __CVMX_PKI_H__
+#define __CVMX_PKI_H__
+
+#include "cvmx-fpa3.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+#include "cvmx-error.h"
+
+/* PKI AURA and BPID count are equal to FPA AURA count */
+#define CVMX_PKI_NUM_AURA	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_BPID	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_SSO_GROUP	       (cvmx_sso_num_xgrp())
+#define CVMX_PKI_NUM_CLUSTER_GROUP_MAX 1
+#define CVMX_PKI_NUM_CLUSTER_GROUP     (cvmx_pki_num_cl_grp())
+#define CVMX_PKI_NUM_CLUSTER	       (cvmx_pki_num_clusters())
+
+/* FIXME: Reduce some of these values, convert to routines XXX */
+#define CVMX_PKI_NUM_CHANNEL	    4096
+#define CVMX_PKI_NUM_PKIND	    64
+#define CVMX_PKI_NUM_INTERNAL_STYLE 256
+#define CVMX_PKI_NUM_FINAL_STYLE    64
+#define CVMX_PKI_NUM_QPG_ENTRY	    2048
+#define CVMX_PKI_NUM_MTAG_IDX	    (32 / 4) /* 32 registers grouped by 4 */
+#define CVMX_PKI_NUM_LTYPE	    32
+#define CVMX_PKI_NUM_PCAM_BANK	    2
+#define CVMX_PKI_NUM_PCAM_ENTRY	    192
+#define CVMX_PKI_NUM_FRAME_CHECK    2
+#define CVMX_PKI_NUM_BELTYPE	    32
+#define CVMX_PKI_MAX_FRAME_SIZE	    65535
+#define CVMX_PKI_FIND_AVAL_ENTRY    (-1)
+#define CVMX_PKI_CLUSTER_ALL	    0xf
+
+#ifdef CVMX_SUPPORT_SEPARATE_CLUSTER_CONFIG
+#define CVMX_PKI_TOTAL_PCAM_ENTRY                                                                  \
+	((CVMX_PKI_NUM_CLUSTER) * (CVMX_PKI_NUM_PCAM_BANK) * (CVMX_PKI_NUM_PCAM_ENTRY))
+#else
+#define CVMX_PKI_TOTAL_PCAM_ENTRY (CVMX_PKI_NUM_PCAM_BANK * CVMX_PKI_NUM_PCAM_ENTRY)
+#endif
+
+static inline unsigned int cvmx_pki_num_clusters(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 2;
+	return 4;
+}
+
+static inline unsigned int cvmx_pki_num_cl_grp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1;
+	return 0;
+}
+
+enum cvmx_pki_pkind_parse_mode {
+	CVMX_PKI_PARSE_LA_TO_LG = 0,  /* Parse LA(L2) to LG */
+	CVMX_PKI_PARSE_LB_TO_LG = 1,  /* Parse LB(custom) to LG */
+	CVMX_PKI_PARSE_LC_TO_LG = 3,  /* Parse LC(L3) to LG */
+	CVMX_PKI_PARSE_LG = 0x3f,     /* Parse LG */
+	CVMX_PKI_PARSE_NOTHING = 0x7f /* Parse nothing */
+};
+
+enum cvmx_pki_parse_mode_chg {
+	CVMX_PKI_PARSE_NO_CHG = 0x0,
+	CVMX_PKI_PARSE_SKIP_TO_LB = 0x1,
+	CVMX_PKI_PARSE_SKIP_TO_LC = 0x3,
+	CVMX_PKI_PARSE_SKIP_TO_LD = 0x7,
+	CVMX_PKI_PARSE_SKIP_TO_LG = 0x3f,
+	CVMX_PKI_PARSE_SKIP_ALL = 0x7f,
+};
+
+enum cvmx_pki_l2_len_mode { PKI_L2_LENCHK_EQUAL_GREATER = 0, PKI_L2_LENCHK_EQUAL_ONLY };
+
+enum cvmx_pki_cache_mode {
+	CVMX_PKI_OPC_MODE_STT = 0LL,	  /* All blocks write through to DRAM */
+	CVMX_PKI_OPC_MODE_STF = 1LL,	  /* All blocks into L2 */
+	CVMX_PKI_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
+	CVMX_PKI_OPC_MODE_STF2_STT = 3LL  /* 1st, 2nd blocks L2, rest DRAM */
+};
+
+/**
+ * Tag type definitions
+ */
+enum cvmx_sso_tag_type {
+	CVMX_SSO_TAG_TYPE_ORDERED = 0L,
+	CVMX_SSO_TAG_TYPE_ATOMIC = 1L,
+	CVMX_SSO_TAG_TYPE_UNTAGGED = 2L,
+	CVMX_SSO_TAG_TYPE_EMPTY = 3L
+};
+
+enum cvmx_pki_qpg_qos {
+	CVMX_PKI_QPG_QOS_NONE = 0,
+	CVMX_PKI_QPG_QOS_VLAN,
+	CVMX_PKI_QPG_QOS_MPLS,
+	CVMX_PKI_QPG_QOS_DSA_SRC,
+	CVMX_PKI_QPG_QOS_DIFFSERV,
+	CVMX_PKI_QPG_QOS_HIGIG,
+};
+
+enum cvmx_pki_wqe_vlan { CVMX_PKI_USE_FIRST_VLAN = 0, CVMX_PKI_USE_SECOND_VLAN };
+
+/**
+ * Controls how the PKI statistics counters are handled.
+ * The PKI_STAT*_X registers can be indexed either by port kind (pkind), or
+ * final style. (Does not apply to the PKI_STAT_INB* registers.)
+ *    0 = X represents the packet's pkind
+ *    1 = X represents the low 6-bits of the packet's final style
+ */
+enum cvmx_pki_stats_mode { CVMX_PKI_STAT_MODE_PKIND, CVMX_PKI_STAT_MODE_STYLE };
+
+enum cvmx_pki_fpa_wait { CVMX_PKI_DROP_PKT, CVMX_PKI_WAIT_PKT };
+
+#define PKI_BELTYPE_E__NONE_M 0x0
+#define PKI_BELTYPE_E__MISC_M 0x1
+#define PKI_BELTYPE_E__IP4_M  0x2
+#define PKI_BELTYPE_E__IP6_M  0x3
+#define PKI_BELTYPE_E__TCP_M  0x4
+#define PKI_BELTYPE_E__UDP_M  0x5
+#define PKI_BELTYPE_E__SCTP_M 0x6
+#define PKI_BELTYPE_E__SNAP_M 0x7
+
+/* PKI_BELTYPE_E_t */
+enum cvmx_pki_beltype {
+	CVMX_PKI_BELTYPE_NONE = PKI_BELTYPE_E__NONE_M,
+	CVMX_PKI_BELTYPE_MISC = PKI_BELTYPE_E__MISC_M,
+	CVMX_PKI_BELTYPE_IP4 = PKI_BELTYPE_E__IP4_M,
+	CVMX_PKI_BELTYPE_IP6 = PKI_BELTYPE_E__IP6_M,
+	CVMX_PKI_BELTYPE_TCP = PKI_BELTYPE_E__TCP_M,
+	CVMX_PKI_BELTYPE_UDP = PKI_BELTYPE_E__UDP_M,
+	CVMX_PKI_BELTYPE_SCTP = PKI_BELTYPE_E__SCTP_M,
+	CVMX_PKI_BELTYPE_SNAP = PKI_BELTYPE_E__SNAP_M,
+	CVMX_PKI_BELTYPE_MAX = CVMX_PKI_BELTYPE_SNAP
+};
+
+struct cvmx_pki_frame_len {
+	u16 maxlen;
+	u16 minlen;
+};
+
+struct cvmx_pki_tag_fields {
+	u64 layer_g_src : 1;
+	u64 layer_f_src : 1;
+	u64 layer_e_src : 1;
+	u64 layer_d_src : 1;
+	u64 layer_c_src : 1;
+	u64 layer_b_src : 1;
+	u64 layer_g_dst : 1;
+	u64 layer_f_dst : 1;
+	u64 layer_e_dst : 1;
+	u64 layer_d_dst : 1;
+	u64 layer_c_dst : 1;
+	u64 layer_b_dst : 1;
+	u64 input_port : 1;
+	u64 mpls_label : 1;
+	u64 first_vlan : 1;
+	u64 second_vlan : 1;
+	u64 ip_prot_nexthdr : 1;
+	u64 tag_sync : 1;
+	u64 tag_spi : 1;
+	u64 tag_gtp : 1;
+	u64 tag_vni : 1;
+};
+
+struct cvmx_pki_pkind_parse {
+	u64 mpls_en : 1;
+	u64 inst_hdr : 1;
+	u64 lg_custom : 1;
+	u64 fulc_en : 1;
+	u64 dsa_en : 1;
+	u64 hg2_en : 1;
+	u64 hg_en : 1;
+};
+
+struct cvmx_pki_pool_config {
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+struct cvmx_pki_qpg_config {
+	int qpg_base;
+	int port_add;
+	int aura_num;
+	int grp_ok;
+	int grp_bad;
+	int grptag_ok;
+	int grptag_bad;
+};
+
+struct cvmx_pki_aura_config {
+	int aura_num;
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa3_gaura_t aura;
+	int buffer_count;
+};
+
+struct cvmx_pki_cluster_grp_config {
+	int grp_num;
+	u64 cluster_mask; /* Bit mask of cluster assigned to this cluster group */
+};
+
+struct cvmx_pki_sso_grp_config {
+	int group;
+	int priority;
+	int weight;
+	int affinity;
+	u64 core_mask;
+	u8 core_mask_set;
+};
+
+/* This is the per style structure for configuring port parameters;
+ * it is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that modifying
+ * that style will modify the respective parameters for all the ports
+ * which are using this style.
+ */
+struct cvmx_pki_style_parm {
+	bool ip6_udp_opt;
+	bool lenerr_en;
+	bool maxerr_en;
+	bool minerr_en;
+	u8 lenerr_eqpad;
+	u8 minmax_sel;
+	bool qpg_dis_grptag;
+	bool fcs_strip;
+	bool fcs_chk;
+	bool rawdrp;
+	bool force_drop;
+	bool nodrop;
+	bool qpg_dis_padd;
+	bool qpg_dis_grp;
+	bool qpg_dis_aura;
+	u16 qpg_base;
+	enum cvmx_pki_qpg_qos qpg_qos;
+	u8 qpg_port_sh;
+	u8 qpg_port_msb;
+	u8 apad_nip;
+	u8 wqe_vs;
+	enum cvmx_sso_tag_type tag_type;
+	bool pkt_lend;
+	u8 wqe_hsz;
+	u16 wqe_skip;
+	u16 first_skip;
+	u16 later_skip;
+	enum cvmx_pki_cache_mode cache_mode;
+	u8 dis_wq_dat;
+	u64 mbuff_size;
+	bool len_lg;
+	bool len_lf;
+	bool len_le;
+	bool len_ld;
+	bool len_lc;
+	bool len_lb;
+	bool csum_lg;
+	bool csum_lf;
+	bool csum_le;
+	bool csum_ld;
+	bool csum_lc;
+	bool csum_lb;
+};
+
+/* This is the per style structure for a port's tag configuration;
+ * it is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that modifying
+ * that style will modify the respective parameters for all the ports
+ * which are using this style.
+ */
+enum cvmx_pki_mtag_ptrsel {
+	CVMX_PKI_MTAG_PTRSEL_SOP = 0,
+	CVMX_PKI_MTAG_PTRSEL_LA = 8,
+	CVMX_PKI_MTAG_PTRSEL_LB = 9,
+	CVMX_PKI_MTAG_PTRSEL_LC = 10,
+	CVMX_PKI_MTAG_PTRSEL_LD = 11,
+	CVMX_PKI_MTAG_PTRSEL_LE = 12,
+	CVMX_PKI_MTAG_PTRSEL_LF = 13,
+	CVMX_PKI_MTAG_PTRSEL_LG = 14,
+	CVMX_PKI_MTAG_PTRSEL_VL = 15,
+};
+
+struct cvmx_pki_mask_tag {
+	bool enable;
+	int base;   /* CVMX_PKI_MTAG_PTRSEL_XXX */
+	int offset; /* Offset from base. */
+	u64 val;    /* Bitmask: 1 = enabled, 0 = disabled,
+		     * for each byte in the 64-byte array */
+};
+
+struct cvmx_pki_style_tag_cfg {
+	struct cvmx_pki_tag_fields tag_fields;
+	struct cvmx_pki_mask_tag mask_tag[4];
+};
+
+struct cvmx_pki_style_config {
+	struct cvmx_pki_style_parm parm_cfg;
+	struct cvmx_pki_style_tag_cfg tag_cfg;
+};
+
+struct cvmx_pki_pkind_config {
+	u8 cluster_grp;
+	bool fcs_pres;
+	struct cvmx_pki_pkind_parse parse_en;
+	enum cvmx_pki_pkind_parse_mode initial_parse_mode;
+	u8 fcs_skip;
+	u8 inst_skip;
+	int initial_style;
+	bool custom_l2_hdr;
+	u8 l2_scan_offset;
+	u64 lg_scan_offset;
+};
+
+struct cvmx_pki_port_config {
+	struct cvmx_pki_pkind_config pkind_cfg;
+	struct cvmx_pki_style_config style_cfg;
+};
+
+struct cvmx_pki_global_parse {
+	u64 virt_pen : 1;
+	u64 clg_pen : 1;
+	u64 cl2_pen : 1;
+	u64 l4_pen : 1;
+	u64 il3_pen : 1;
+	u64 l3_pen : 1;
+	u64 mpls_pen : 1;
+	u64 fulc_pen : 1;
+	u64 dsa_pen : 1;
+	u64 hg_pen : 1;
+};
+
+struct cvmx_pki_tag_sec {
+	u16 dst6;
+	u16 src6;
+	u16 dst;
+	u16 src;
+};
+
+struct cvmx_pki_global_config {
+	u64 cluster_mask[CVMX_PKI_NUM_CLUSTER_GROUP_MAX];
+	enum cvmx_pki_stats_mode stat_mode;
+	enum cvmx_pki_fpa_wait fpa_wait;
+	struct cvmx_pki_global_parse gbl_pen;
+	struct cvmx_pki_tag_sec tag_secret;
+	struct cvmx_pki_frame_len frm_len[CVMX_PKI_NUM_FRAME_CHECK];
+	enum cvmx_pki_beltype ltype_map[CVMX_PKI_NUM_BELTYPE];
+	int pki_enable;
+};
+
+#define CVMX_PKI_PCAM_TERM_E_NONE_M	 0x0
+#define CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M 0x2
+#define CVMX_PKI_PCAM_TERM_E_HIGIGD_M	 0x4
+#define CVMX_PKI_PCAM_TERM_E_HIGIG_M	 0x5
+#define CVMX_PKI_PCAM_TERM_E_SMACH_M	 0x8
+#define CVMX_PKI_PCAM_TERM_E_SMACL_M	 0x9
+#define CVMX_PKI_PCAM_TERM_E_DMACH_M	 0xA
+#define CVMX_PKI_PCAM_TERM_E_DMACL_M	 0xB
+#define CVMX_PKI_PCAM_TERM_E_GLORT_M	 0x12
+#define CVMX_PKI_PCAM_TERM_E_DSA_M	 0x13
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M	 0x18
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M	 0x19
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M	 0x1A
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M	 0x1B
+#define CVMX_PKI_PCAM_TERM_E_MPLS0_M	 0x1E
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M	 0x1F
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M	 0x20
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPML_M	 0x21
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M	 0x22
+#define CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M	 0x23
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M	 0x24
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M	 0x25
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPML_M	 0x26
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M	 0x27
+#define CVMX_PKI_PCAM_TERM_E_LD_VNI_M	 0x28
+#define CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M 0x2B
+#define CVMX_PKI_PCAM_TERM_E_LF_SPI_M	 0x2E
+#define CVMX_PKI_PCAM_TERM_E_L4_SPORT_M	 0x2f
+#define CVMX_PKI_PCAM_TERM_E_L4_PORT_M	 0x30
+#define CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M 0x39
+
+enum cvmx_pki_term {
+	CVMX_PKI_PCAM_TERM_NONE = CVMX_PKI_PCAM_TERM_E_NONE_M,
+	CVMX_PKI_PCAM_TERM_L2_CUSTOM = CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M,
+	CVMX_PKI_PCAM_TERM_HIGIGD = CVMX_PKI_PCAM_TERM_E_HIGIGD_M,
+	CVMX_PKI_PCAM_TERM_HIGIG = CVMX_PKI_PCAM_TERM_E_HIGIG_M,
+	CVMX_PKI_PCAM_TERM_SMACH = CVMX_PKI_PCAM_TERM_E_SMACH_M,
+	CVMX_PKI_PCAM_TERM_SMACL = CVMX_PKI_PCAM_TERM_E_SMACL_M,
+	CVMX_PKI_PCAM_TERM_DMACH = CVMX_PKI_PCAM_TERM_E_DMACH_M,
+	CVMX_PKI_PCAM_TERM_DMACL = CVMX_PKI_PCAM_TERM_E_DMACL_M,
+	CVMX_PKI_PCAM_TERM_GLORT = CVMX_PKI_PCAM_TERM_E_GLORT_M,
+	CVMX_PKI_PCAM_TERM_DSA = CVMX_PKI_PCAM_TERM_E_DSA_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE0 = CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE1 = CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE2 = CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE3 = CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M,
+	CVMX_PKI_PCAM_TERM_MPLS0 = CVMX_PKI_PCAM_TERM_E_MPLS0_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPHH = CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPMH = CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPML = CVMX_PKI_PCAM_TERM_E_L3_SIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPLL = CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M,
+	CVMX_PKI_PCAM_TERM_L3_FLAGS = CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPHH = CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPMH = CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPML = CVMX_PKI_PCAM_TERM_E_L3_DIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPLL = CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M,
+	CVMX_PKI_PCAM_TERM_LD_VNI = CVMX_PKI_PCAM_TERM_E_LD_VNI_M,
+	CVMX_PKI_PCAM_TERM_IL3_FLAGS = CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_LF_SPI = CVMX_PKI_PCAM_TERM_E_LF_SPI_M,
+	CVMX_PKI_PCAM_TERM_L4_PORT = CVMX_PKI_PCAM_TERM_E_L4_PORT_M,
+	CVMX_PKI_PCAM_TERM_L4_SPORT = CVMX_PKI_PCAM_TERM_E_L4_SPORT_M,
+	CVMX_PKI_PCAM_TERM_LG_CUSTOM = CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M
+};
+
+#define CVMX_PKI_DMACH_SHIFT	  32
+#define CVMX_PKI_DMACH_MASK	  cvmx_build_mask(16)
+#define CVMX_PKI_DMACL_MASK	  CVMX_PKI_DATA_MASK_32
+#define CVMX_PKI_DATA_MASK_32	  cvmx_build_mask(32)
+#define CVMX_PKI_DATA_MASK_16	  cvmx_build_mask(16)
+#define CVMX_PKI_DMAC_MATCH_EXACT cvmx_build_mask(48)
+
+struct cvmx_pki_pcam_input {
+	u64 style;
+	u64 style_mask; /* bits: 1 = match, 0 = don't care */
+	enum cvmx_pki_term field;
+	u32 field_mask; /* bits: 1 = match, 0 = don't care */
+	u64 data;
+	u64 data_mask; /* bits: 1 = match, 0 = don't care */
+};
+
+struct cvmx_pki_pcam_action {
+	enum cvmx_pki_parse_mode_chg parse_mode_chg;
+	enum cvmx_pki_layer_type layer_type_set;
+	int style_add;
+	int parse_flag_set;
+	int pointer_advance;
+};
+
+struct cvmx_pki_pcam_config {
+	int in_use;
+	int entry_num;
+	u64 cluster_mask;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+};
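+
+/*
+ * Example (mirrors the legacy helper in cvmx-pip.h): a PCAM input/action
+ * pair that matches the IPv4 Ethertype 0x0800 and advances the parse
+ * pointer past it:
+ *
+ *	struct cvmx_pki_pcam_input in = { 0 };
+ *	struct cvmx_pki_pcam_action act = { 0 };
+ *
+ *	in.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+ *	in.field_mask = 0xff;
+ *	in.data = 0x08000000;
+ *	in.data_mask = 0xffff0000;
+ *	act.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+ *	act.pointer_advance = 4;
+ */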
+
+/**
+ * Status statistics for a port
+ */
+struct cvmx_pki_port_stats {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+};
+
+/**
+ * PKI Packet Instruction Header Structure (PKI_INST_HDR_S)
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 w : 1;    /* INST_HDR size: 0 = 2 bytes, 1 = 4 or 8 bytes */
+		u64 raw : 1;  /* RAW packet indicator in WQE[RAW]: 1 = enable */
+		u64 utag : 1; /* Use INST_HDR[TAG] to compute WQE[TAG]: 1 = enable */
+		u64 uqpg : 1; /* Use INST_HDR[QPG] to compute QPG: 1 = enable */
+		u64 rsvd1 : 1;
+		u64 pm : 3; /* Packet parsing mode. Legal values = 0x0..0x7 */
+		u64 sl : 8; /* Number of bytes in INST_HDR. */
+		/* The following fields are not present, if INST_HDR[W] = 0: */
+		u64 utt : 1; /* Use INST_HDR[TT] to compute WQE[TT]: 1 = enable */
+		u64 tt : 2;  /* INST_HDR[TT] => WQE[TT], if INST_HDR[UTT] = 1 */
+		u64 rsvd2 : 2;
+		u64 qpg : 11; /* INST_HDR[QPG] => QPG, if INST_HDR[UQPG] = 1 */
+		u64 tag : 32; /* INST_HDR[TAG] => WQE[TAG], if INST_HDR[UTAG] = 1 */
+	} s;
+} cvmx_pki_inst_hdr_t;
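+
+/*
+ * Example (illustrative values; the 4-byte size is an assumption): a wide
+ * instruction header that supplies the WQE tag and leaves QPG selection
+ * to the parser:
+ *
+ *	cvmx_pki_inst_hdr_t ihdr;
+ *
+ *	ihdr.u64 = 0;
+ *	ihdr.s.w = 1;       (wide 4/8-byte INST_HDR)
+ *	ihdr.s.sl = 4;      (assumed 4-byte header)
+ *	ihdr.s.utag = 1;    (use INST_HDR[TAG] for WQE[TAG])
+ *	ihdr.s.tag = 0xcafe;
+ */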
+
+/**
+ * This function assigns clusters to a group; a pkind can later be
+ * configured to use that group, depending on the number of clusters the
+ * pkind would use. A given cluster can only be enabled in a single
+ * cluster group. The number of clusters assigned to a group determines
+ * how many engines can work in parallel on a packet. Each cluster can
+ * process x MPPS.
+ *
+ * @param node	Node
+ * @param cluster_group Group to attach clusters to.
+ * @param cluster_mask The mask of clusters which needs to be assigned to the group.
+ */
+static inline int cvmx_pki_attach_cluster_to_group(int node, u64 cluster_group, u64 cluster_mask)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cluster_group >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: config cluster group %d\n", (int)cluster_group);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group));
+	pki_cl_grp.s.clusters = cluster_mask;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group), pki_cl_grp.u64);
+	return 0;
+}
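+
+/*
+ * Usage sketch (mask assumed for a two-cluster part such as CN73XX):
+ * attach both clusters to cluster group 0, then enable parsing with
+ * cvmx_pki_parse_enable() below:
+ *
+ *	cvmx_pki_attach_cluster_to_group(node, 0, 0x3);
+ *	cvmx_pki_parse_enable(node, 0);
+ */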
+
+static inline void cvmx_pki_write_global_parse(int node, struct cvmx_pki_global_parse gbl_pen)
+{
+	cvmx_pki_gbl_pen_t gbl_pen_reg;
+
+	gbl_pen_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_GBL_PEN);
+	gbl_pen_reg.s.virt_pen = gbl_pen.virt_pen;
+	gbl_pen_reg.s.clg_pen = gbl_pen.clg_pen;
+	gbl_pen_reg.s.cl2_pen = gbl_pen.cl2_pen;
+	gbl_pen_reg.s.l4_pen = gbl_pen.l4_pen;
+	gbl_pen_reg.s.il3_pen = gbl_pen.il3_pen;
+	gbl_pen_reg.s.l3_pen = gbl_pen.l3_pen;
+	gbl_pen_reg.s.mpls_pen = gbl_pen.mpls_pen;
+	gbl_pen_reg.s.fulc_pen = gbl_pen.fulc_pen;
+	gbl_pen_reg.s.dsa_pen = gbl_pen.dsa_pen;
+	gbl_pen_reg.s.hg_pen = gbl_pen.hg_pen;
+	cvmx_write_csr_node(node, CVMX_PKI_GBL_PEN, gbl_pen_reg.u64);
+}
+
+static inline void cvmx_pki_write_tag_secret(int node, struct cvmx_pki_tag_sec tag_secret)
+{
+	cvmx_pki_tag_secret_t tag_secret_reg;
+
+	tag_secret_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_TAG_SECRET);
+	tag_secret_reg.s.dst6 = tag_secret.dst6;
+	tag_secret_reg.s.src6 = tag_secret.src6;
+	tag_secret_reg.s.dst = tag_secret.dst;
+	tag_secret_reg.s.src = tag_secret.src;
+	cvmx_write_csr_node(node, CVMX_PKI_TAG_SECRET, tag_secret_reg.u64);
+}
+
+static inline void cvmx_pki_write_ltype_map(int node, enum cvmx_pki_layer_type layer,
+					    enum cvmx_pki_beltype backend)
+{
+	cvmx_pki_ltypex_map_t ltype_map;
+
+	if (layer > CVMX_PKI_LTYPE_E_MAX || backend > CVMX_PKI_BELTYPE_MAX) {
+		debug("ERROR: invalid ltype beltype mapping\n");
+		return;
+	}
+	ltype_map.u64 = cvmx_read_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer));
+	ltype_map.s.beltype = backend;
+	cvmx_write_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer), ltype_map.u64);
+}
+
+/**
+ * This function enables the cluster group to start parsing.
+ *
+ * @param node    Node number.
+ * @param cl_grp  Cluster group to enable parsing.
+ */
+static inline int cvmx_pki_parse_enable(int node, unsigned int cl_grp)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cl_grp >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: pki parse en group %d", (int)cl_grp);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp));
+	pki_cl_grp.s.pena = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp), pki_cl_grp.u64);
+	return 0;
+}
+
+/**
+ * This function enables the PKI to send bpid level backpressure to CN78XX inputs.
+ *
+ * @param node Node number.
+ */
+static inline void cvmx_pki_enable_backpressure(int node)
+{
+	cvmx_pki_buf_ctl_t pki_buf_ctl;
+
+	pki_buf_ctl.u64 = cvmx_read_csr_node(node, CVMX_PKI_BUF_CTL);
+	pki_buf_ctl.s.pbp_en = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_BUF_CTL, pki_buf_ctl.u64);
+}
+
+/**
+ * Clear the statistics counters for a port.
+ *
+ * @param node Node number.
+ * @param port Port number (ipd_port) to clear statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ */
+void cvmx_pki_clear_port_stats(int node, u64 port);
+
+/**
+ * Get the statistics counters for an index from PKI.
+ *
+ * @param node	  Node number.
+ * @param index   PKIND number, if PKI_STATS_CTL:mode = 0 or
+ *     style(flow) number, if PKI_STATS_CTL:mode = 1
+ * @param status  Where to put the results.
+ */
+void cvmx_pki_get_stats(int node, int index, struct cvmx_pki_port_stats *status);
+
+/**
+ * Get the statistics counters for a port.
+ *
+ * @param node	 Node number
+ * @param port   Port number (ipd_port) to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_port_stats(int node, u64 port, struct cvmx_pki_port_stats *status)
+{
+	int xipd = cvmx_helper_node_to_ipd_port(node, port);
+	int xiface = cvmx_helper_get_interface_num(xipd);
+	int index = cvmx_helper_get_interface_index_num(port);
+	int pknd = cvmx_helper_get_pknd(xiface, index);
+
+	cvmx_pki_get_stats(node, pknd, status);
+}
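+
+/*
+ * Illustrative sketch (not part of the original header): dumping the
+ * receive counters of port 0 on node 0, assuming PKI_STATS_CTL:mode has
+ * been left at 0 (per-port/pkind statistics).
+ *
+ *	struct cvmx_pki_port_stats st;
+ *
+ *	cvmx_pki_get_port_stats(0, 0, &st);
+ *	printf("rx: %llu packets, %llu octets, %llu errors\n",
+ *	       (unsigned long long)st.packets,
+ *	       (unsigned long long)st.octets,
+ *	       (unsigned long long)st.inb_errors);
+ */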
+
+/**
+ * Get the statistics counters for a flow represented by style in PKI.
+ *
+ * @param node Node number.
+ * @param style_num Style number to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 1 for collecting per style/flow stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_flow_stats(int node, u64 style_num,
+					   struct cvmx_pki_port_stats *status)
+{
+	cvmx_pki_get_stats(node, style_num, status);
+}
+
+/**
+ * Show integrated PKI configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_config_dump(unsigned int node);
+
+/**
+ * Show integrated PKI statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_stats_dump(unsigned int node);
+
+/**
+ * Clear PKI statistics.
+ *
+ * @param node	   node number
+ */
+void cvmx_pki_stats_clear(unsigned int node);
+
+/**
+ * This function enables PKI.
+ *
+ * @param node	 node to enable pki in.
+ */
+void cvmx_pki_enable(int node);
+
+/**
+ * This function disables PKI.
+ *
+ * @param node	node to disable pki in.
+ */
+void cvmx_pki_disable(int node);
+
+/**
+ * This function soft resets PKI.
+ *
+ * @param node	node to reset pki in.
+ */
+void cvmx_pki_reset(int node);
+
+/**
+ * This function sets the clusters in PKI.
+ *
+ * @param node	node to set clusters in.
+ */
+int cvmx_pki_setup_clusters(int node);
+
+/**
+ * This function reads global configuration of PKI block.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to read global configuration
+ */
+void cvmx_pki_read_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
+
+/**
+ * This function writes global configuration of PKI into hw.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to global configuration
+ */
+void cvmx_pki_write_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
+
+/**
+ * This function reads per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing pkind configuration read
+ *     from hardware.
+ */
+int cvmx_pki_read_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
+
+/**
+ * This function writes per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing the pkind configuration to
+ *     be written to hardware.
+ */
+int cvmx_pki_write_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
+
+/**
+ * This function reads parameters associated with tag configuration in hardware.
+ *
+ * @param node	 Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_read_tag_config(int node, int style, u64 cluster_mask,
+			      struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function writes/configures parameters associated with tag
+ * configuration in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_write_tag_config(int node, int style, u64 cluster_mask,
+			       struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function reads parameters associated with style in hardware.
+ *
+ * @param node	Node number.
+ * @param style  Style to read from.
+ * @param cluster_mask  Mask of clusters style belongs to.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_read_style_config(int node, int style, u64 cluster_mask,
+				struct cvmx_pki_style_config *style_cfg);
+
+/**
+ * This function writes/configures parameters associated with style in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_write_style_config(int node, u64 style, u64 cluster_mask,
+				 struct cvmx_pki_style_config *style_cfg);
+/**
+ * This function reads qpg entry at specified offset from qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to read from.
+ * @param qpg_cfg  Pointer to structure containing qpg values
+ */
+int cvmx_pki_read_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function writes qpg entry at specified offset in qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to write to.
+ * @param qpg_cfg  Pointer to structure containing qpg values.
+ */
+void cvmx_pki_write_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function writes pcam entry at given offset in pcam table in hardware
+ *
+ * @param node  Node number.
+ * @param index	 Offset in pcam table.
+ * @param cluster_mask  Mask of clusters in which to write pcam entry.
+ * @param input  Input keys to pcam match passed as struct.
+ * @param action  PCAM match action passed as struct
+ */
+int cvmx_pki_pcam_write_entry(int node, int index, u64 cluster_mask,
+			      struct cvmx_pki_pcam_input input, struct cvmx_pki_pcam_action action);
+/**
+ * Configures the channel which will receive backpressure from the specified bpid.
+ * Each channel listens for backpressure on a specific bpid.
+ * Each bpid can backpressure multiple channels.
+ * @param node  Node number.
+ * @param bpid  BPID from which channel will receive backpressure.
+ * @param channel  Channel number to receive backpressure.
+ */
+int cvmx_pki_write_channel_bpid(int node, int channel, int bpid);
+
+/**
+ * Configures the bpid on which the specified aura will
+ * assert backpressure.
+ * Each bpid receives backpressure from auras.
+ * Multiple auras can backpressure a single bpid.
+ * @param node  Node number.
+ * @param aura  Aura which will assert backpressure on that bpid.
+ * @param bpid  BPID to assert backpressure on.
+ */
+int cvmx_pki_write_aura_bpid(int node, int aura, int bpid);
+
+/**
+ * Enables/disables QoS (RED drop, tail drop & backpressure) for the PKI aura.
+ *
+ * @param node  Node number
+ * @param aura  Aura to enable/disable QoS on.
+ * @param ena_red  Enable/disable RED drop between pass and drop levels
+ *    1-enable 0-disable
+ * @param ena_drop  Enable/disable tail drop when the max drop level is exceeded
+ *    1-enable 0-disable
+ * @param ena_bp  Enable/disable asserting backpressure on the bpid when
+ *    the max drop level is exceeded.
+ *    1-enable 0-disable
+ */
+int cvmx_pki_enable_aura_qos(int node, int aura, bool ena_red, bool ena_drop, bool ena_bp);
+
+/**
+ * This function returns the initial style used by the pkind.
+ *
+ * @param node  Node number.
+ * @param pkind  PKIND number.
+ */
+int cvmx_pki_get_pkind_style(int node, int pkind);
+
+/**
+ * This function sets the WQE buffer mode. The first packet data buffer can
+ * reside either in the same buffer as the WQE, or in a separate buffer. If the
+ * latter mode is used, make sure software allocates enough buffers to hold the
+ * WQE separately from the packet data.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param pkt_outside_wqe
+ *    0 = The packet link pointer will be at word [FIRST_SKIP] immediately
+ *    followed by packet data, in the same buffer as the work queue entry.
+ *    1 = The packet link pointer will be at word [FIRST_SKIP] in a new
+ *    buffer separate from the work queue entry. Words following the
+ *    WQE in the same cache line will be zeroed, other lines in the
+ *    buffer will not be modified and will retain stale data (from the
+ *    buffer's previous use). This setting may decrease the peak PKI
+ *    performance by up to half on small packets.
+ */
+void cvmx_pki_set_wqe_mode(int node, u64 style, bool pkt_outside_wqe);
+
+/**
+ * This function sets the packet mode of all ports and styles to little-endian.
+ * It changes write operations of packet data to L2C to be in little-endian.
+ * It does not change the WQE header format, which is properly endian-neutral.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ */
+void cvmx_pki_set_little_endian(int node, u64 style);
+
+/**
+ * Enables/disables the L2 length error check and the max & min frame length checks.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to enable/disable error checks for.
+ * @param l2len_err	 L2 length error check enable.
+ * @param maxframe_err	Max frame error check enable.
+ * @param minframe_err	Min frame error check enable.
+ *    1 -- Enable error checks
+ *    0 -- Disable error checks
+ */
+void cvmx_pki_endis_l2_errs(int node, int pknd, bool l2len_err, bool maxframe_err,
+			    bool minframe_err);
+
+/**
+ * Enables/Disables fcs check and fcs stripping on the pkind.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to apply settings on.
+ * @param fcs_chk  Enable/disable fcs check.
+ *    1 -- enable fcs error check.
+ *    0 -- disable fcs error check.
+ * @param fcs_strip	 Strip L2 FCS bytes from packet, decrease WQE[LEN] by 4 bytes
+ *    1 -- strip L2 FCS.
+ *    0 -- Do not strip L2 FCS.
+ */
+void cvmx_pki_endis_fcs_check(int node, int pknd, bool fcs_chk, bool fcs_strip);
+
+/**
+ * This function shows the qpg table entries, read directly from hardware.
+ *
+ * @param node  Node number.
+ * @param num_entry  Number of entries to print.
+ */
+void cvmx_pki_show_qpg_entries(int node, u16 num_entry);
+
+/**
+ * This function shows the pcam table in raw format read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_pcam_entries(int node);
+
+/**
+ * This function shows the valid entries in readable format,
+ * read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_valid_pcam_entries(int node);
+
+/**
+ * This function shows the pkind attributes in readable format,
+ * read directly from hardware.
+ * @param node  Node number.
+ * @param pkind  PKIND number to print.
+ */
+void cvmx_pki_show_pkind_attributes(int node, int pkind);
+
+/**
+ * @INTERNAL
+ * This function is called by cvmx_helper_shutdown() to extract all FPA buffers
+ * out of the PKI. After this function completes, all FPA buffers that were
+ * prefetched by PKI will be in the appropriate FPA pool.
+ * This function does not reset the PKI.
+ * WARNING: It is very important that PKI be reset soon after a call to this function.
+ *
+ * @param node  Node number.
+ */
+void __cvmx_pki_free_ptr(int node);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
new file mode 100644
index 0000000000..1fb49b3fb6
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_INTERNAL_PORTS_RANGE__
+#define __CVMX_INTERNAL_PORTS_RANGE__
+
+/*
+ * Allocates a block of internal ports for the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ * @param  count      the number of internal ports requested
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_alloc(int interface, int port, u64 count);
+
+/*
+ * Free the internal ports associated with the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_free(int interface, int port);
+
+/*
+ * Frees up all the allocated internal ports.
+ */
+void cvmx_pko_internal_ports_range_free_all(void);
+
+void cvmx_pko_internal_ports_range_show(void);
+
+int __cvmx_pko_internal_ports_range_init(void);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
new file mode 100644
index 0000000000..5f83989049
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PKO3_QUEUE_H__
+#define __CVMX_PKO3_QUEUE_H__
+
+/**
+ * @INTERNAL
+ *
+ * Find or allocate the global port/dq map table,
+ * which is a named table containing entries for
+ * all possible OCI nodes.
+ *
+ * The table's global pointer is stored in a core-local variable
+ * so that every core will call this function once, on first use.
+ */
+int __cvmx_pko3_dq_table_setup(void);
+
+/*
+ * Get the base Descriptor Queue number for an IPD port on the local node
+ */
+int cvmx_pko3_get_queue_base(int ipd_port);
+
+/*
+ * Get the number of Descriptor Queues assigned for an IPD port
+ */
+int cvmx_pko3_get_queue_num(int ipd_port);
+
+/**
+ * Get L1/Port Queue number assigned to interface port.
+ *
+ * @param xiface is interface number.
+ * @param index is port index.
+ */
+int cvmx_pko3_get_port_queue(int xiface, int index);
+
+/*
+ * Configure L3 through L5 Scheduler Queues and Descriptor Queues
+ *
+ * The Scheduler Queues in Levels 3 to 5 and Descriptor Queues are
+ * configured one-to-one or many-to-one to a single parent Scheduler
+ * Queue. The level of the parent SQ is specified in an argument,
+ * as well as the number of children to attach to the specific parent.
+ * The children can have fair round-robin or priority-based scheduling
+ * when multiple children are assigned a single parent.
+ *
+ * @param node is the OCI node location for the queues to be configured
+ * @param parent_level is the level of the parent queue, 2 to 5.
+ * @param parent_queue is the number of the parent Scheduler Queue
+ * @param child_base is the number of the first child SQ or DQ to assign to
+ *        the parent
+ * @param child_count is the number of consecutive children to assign
+ * @param stat_prio_count is the priority setting for the children L2 SQs
+ *
+ * If <stat_prio_count> is -1, the Ln children will have an equal Round-Robin
+ * relationship with each other. If <stat_prio_count> is 0, all Ln children
+ * will be arranged in Weighted-Round-Robin, with the first having the most
+ * precedence. If <stat_prio_count> is between 1 and 8, it indicates how
+ * many children will have static priority settings (with the first having
+ * the most precedence), with the remaining Ln children having WRR scheduling.
+ *
+ * @returns 0 on success, -1 on failure.
+ *
+ * Note: this function supports the configuration of the node-local unit.
+ */
+int cvmx_pko3_sq_config_children(unsigned int node, unsigned int parent_level,
+				 unsigned int parent_queue, unsigned int child_base,
+				 unsigned int child_count, int stat_prio_count);
+
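+/*
+ * Illustrative sketch (not part of the original header): attaching eight
+ * consecutive DQs (0..7) to L5 Scheduler Queue 0 on node 0 with plain
+ * round-robin scheduling among them. The queue numbers are placeholders;
+ * a real topology comes from the board configuration.
+ *
+ *	cvmx_pko3_sq_config_children(0, 5, 0, 0, 8, -1);
+ */
+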
+/*
+ * @INTERNAL
+ * Register a range of Descriptor Queues with an interface port
+ *
+ * This function populates the DQ-to-IPD translation table
+ * used by the application to retrieve the DQ range (typically ordered
+ * by priority) for a given IPD-port, which is either a physical port,
+ * or a channel on a channelized interface (i.e. ILK).
+ *
+ * @param xiface is the physical interface number
+ * @param index is either a physical port on an interface
+ *        or a channel of an ILK interface
+ * @param dq_base is the first Descriptor Queue number in a consecutive range
+ * @param dq_count is the number of consecutive Descriptor Queues leading
+ *        to the same channel or port.
+ *
+ * Only a consecutive range of Descriptor Queues can be associated with any
+ * given channel/port, and usually they are ordered from most to least
+ * in terms of scheduling priority.
+ *
+ * Note: this function only populates the node-local translation table.
+ *
+ * @returns 0 on success, -1 on failure.
+ */
+int __cvmx_pko3_ipd_dq_register(int xiface, int index, unsigned int dq_base, unsigned int dq_count);
+
+/**
+ * @INTERNAL
+ *
+ * Unregister DQs associated with CHAN_E (IPD port)
+ */
+int __cvmx_pko3_ipd_dq_unregister(int xiface, int index);
+
+/*
+ * Map channel number in PKO
+ *
+ * @param node is to specify the node to which this configuration is applied.
+ * @param pq_num specifies the Port Queue (i.e. L1) queue number.
+ * @param l2_l3_q_num  specifies L2/L3 queue number.
+ * @param channel specifies the channel number to map to the queue.
+ *
+ * The channel assignment applies to L2 or L3 Shaper Queues depending
+ * on the setting of channel credit level.
+ *
+ * @return none.
+ */
+void cvmx_pko3_map_channel(unsigned int node, unsigned int pq_num, unsigned int l2_l3_q_num,
+			   u16 channel);
+
+int cvmx_pko3_pq_config(unsigned int node, unsigned int mac_num, unsigned int pq_num);
+
+int cvmx_pko3_port_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			   unsigned int burst_bytes, int adj_bytes);
+int cvmx_pko3_dq_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+int cvmx_pko3_dq_pir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+typedef enum {
+	CVMX_PKO3_SHAPE_RED_STALL,
+	CVMX_PKO3_SHAPE_RED_DISCARD,
+	CVMX_PKO3_SHAPE_RED_PASS
+} red_action_t;
+
+void cvmx_pko3_dq_red(unsigned int node, unsigned int dq_num, red_action_t red_act,
+		      int8_t len_adjust);
+
+/**
+ * Macros to deal with short floating-point numbers,
+ * where an unsigned exponent and an unsigned normalized
+ * mantissa are each represented with a defined field width.
+ *
+ */
+#define CVMX_SHOFT_MANT_BITS 8
+#define CVMX_SHOFT_EXP_BITS  4
+
+/**
+ * Convert short-float to an unsigned integer
+ * Note that it will lose precision.
+ */
+#define CVMX_SHOFT_TO_U64(m, e)                                                                    \
+	((((1ull << CVMX_SHOFT_MANT_BITS) | (m)) << (e)) >> CVMX_SHOFT_MANT_BITS)
+
+/**
+ * Convert to short-float from an unsigned integer
+ */
+#define CVMX_SHOFT_FROM_U64(ui, m, e)                                                              \
+	do {                                                                                       \
+		unsigned long long u;                                                              \
+		unsigned int k;                                                                    \
+		k = (1ull << (CVMX_SHOFT_MANT_BITS + 1)) - 1;                                      \
+		(e) = 0;                                                                           \
+		u = (ui) << CVMX_SHOFT_MANT_BITS;                                                  \
+		while ((u) > k) {                                                                  \
+			u >>= 1;                                                                   \
+			(e)++;                                                                     \
+		}                                                                                  \
+		(m) = u & (k >> 1);                                                                \
+	} while (0)
+
+#define CVMX_SHOFT_MAX()                                                                           \
+	CVMX_SHOFT_TO_U64((1 << CVMX_SHOFT_MANT_BITS) - 1, (1 << CVMX_SHOFT_EXP_BITS) - 1)
+#define CVMX_SHOFT_MIN() CVMX_SHOFT_TO_U64(0, 0)
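+
+/*
+ * Worked example (not part of the original header): encoding 10000 with
+ * CVMX_SHOFT_FROM_U64() yields m = 0x38, e = 13, since 10000 << 8 must be
+ * shifted right 13 times to fit the 9-bit normalized value (312 = 0x138).
+ * Converting back, CVMX_SHOFT_TO_U64(0x38, 13) = (0x138 << 13) >> 8 = 9984,
+ * illustrating the precision loss noted above.
+ *
+ *	unsigned int m, e;
+ *	unsigned long long v;
+ *
+ *	CVMX_SHOFT_FROM_U64(10000, m, e);
+ *	v = CVMX_SHOFT_TO_U64(m, e);	// 9984
+ */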
+
+#endif /* __CVMX_PKO3_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow.h b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
new file mode 100644
index 0000000000..0680ca258f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
@@ -0,0 +1,2991 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Scheduling unit.
+ *
+ * Starting with SDK 1.7.0, cvmx-pow supports a number of
+ * extended consistency checks. The define
+ * CVMX_ENABLE_POW_CHECKS controls the runtime insertion of POW
+ * internal state checks to find common programming errors. If
+ * CVMX_ENABLE_POW_CHECKS is not defined, the checks are enabled
+ * by default. For example, cvmx-pow will check for the following
+ * programming errors or POW state inconsistencies:
+ * - Requesting a POW operation with an active tag switch in
+ *   progress.
+ * - Waiting for a tag switch to complete for an excessively
+ *   long period. This is normally a sign of an error in locking
+ *   causing deadlock.
+ * - Illegal tag switches from NULL_NULL.
+ * - Illegal tag switches from NULL.
+ * - Illegal deschedule request.
+ * - WQE pointer not matching the one attached to the core by
+ *   the POW.
+ */
+
+#ifndef __CVMX_POW_H__
+#define __CVMX_POW_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pow-defs.h"
+#include "cvmx-sso-defs.h"
+#include "cvmx-address.h"
+#include "cvmx-coremask.h"
+
+/* Default to having all POW consistency checks turned on */
+#ifndef CVMX_ENABLE_POW_CHECKS
+#define CVMX_ENABLE_POW_CHECKS 1
+#endif
+
+/*
+ * Special type for CN78XX style SSO groups (0..255),
+ * for distinction from legacy-style groups (0..15)
+ */
+typedef union {
+	u8 xgrp;
+	/* Fields that map XGRP for backwards compatibility */
+	struct __attribute__((__packed__)) {
+		u8 group : 5;
+		u8 qus : 3;
+	};
+} cvmx_xgrp_t;
+
+/*
+ * Software-only structure to convey a return value
+ * containing multiple information fields about a work queue entry
+ */
+typedef struct {
+	u32 tag;
+	u16 index;
+	u8 grp; /* Legacy group # (0..15) */
+	u8 tag_type;
+} cvmx_pow_tag_info_t;
+
+/**
+ * Wait flag values for pow functions.
+ */
+typedef enum {
+	CVMX_POW_WAIT = 1,
+	CVMX_POW_NO_WAIT = 0,
+} cvmx_pow_wait_t;
+
+/**
+ *  POW tag operations.  These are used in the data stored to the POW.
+ */
+typedef enum {
+	CVMX_POW_TAG_OP_SWTAG = 0L,
+	CVMX_POW_TAG_OP_SWTAG_FULL = 1L,
+	CVMX_POW_TAG_OP_SWTAG_DESCH = 2L,
+	CVMX_POW_TAG_OP_DESCH = 3L,
+	CVMX_POW_TAG_OP_ADDWQ = 4L,
+	CVMX_POW_TAG_OP_UPDATE_WQP_GRP = 5L,
+	CVMX_POW_TAG_OP_SET_NSCHED = 6L,
+	CVMX_POW_TAG_OP_CLR_NSCHED = 7L,
+	CVMX_POW_TAG_OP_NOP = 15L
+} cvmx_pow_tag_op_t;
+
+/**
+ * This structure defines the store data on a store to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 no_sched : 1;
+		u64 unused : 2;
+		u64 index : 13;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused2 : 2;
+		u64 qos : 3;
+		u64 grp : 4;
+		cvmx_pow_tag_type_t type : 3;
+		u64 tag : 32;
+	} s_cn38xx;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 4;
+		u64 index : 11;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_clr;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 12;
+		u64 qos : 3;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_add;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 16;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_other;
+	struct {
+		u64 rsvd_62_63 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t type : 2;
+		u64 no_sched : 1;
+		u64 rsvd_48 : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 rsvd_42_43 : 2;
+		u64 wqp : 42;
+	} s_cn78xx_other;
+
+} cvmx_pow_tag_req_t;
+
+union cvmx_pow_tag_req_addr {
+	u64 u64;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 tag : 32;
+		u64 reserved_0_3 : 4;
+	} s_cn78xx;
+};
+
+/**
+ * This structure describes the address to load stuff from POW
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Address for new work request loads (did<2:0> == 0)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_4_39 : 36;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved_32_35 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 reserved_16_28 : 13;
+		u64 index : 12;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork_78xx;
+	/**
+	 * Address for loads to get POW internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_10_39 : 30;
+		u64 coreid : 4;
+		u64 get_rev : 1;
+		u64 get_cur : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} sstatus;
+	/**
+	 * Address for loads to get 68XX SS0 internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_14_39 : 26;
+		u64 coreid : 5;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sstatus_cn68xx;
+	/**
+	 * Address for memory loads to get POW internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_16_39 : 24;
+		u64 index : 11;
+		u64 get_des : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} smemload;
+	/**
+	 * Address for memory loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_20_39 : 20;
+		u64 index : 11;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} smemload_cn68xx;
+	/**
+	 * Address for index/pointer loads
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_9_39 : 31;
+		u64 qosgrp : 4;
+		u64 get_des_get_tail : 1;
+		u64 get_rmt : 1;
+		u64 reserved_0_2 : 3;
+	} sindexload;
+	/**
+	 * Address for a Index/Pointer loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_15_39 : 25;
+		u64 qos_grp : 6;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sindexload_cn68xx;
+	/**
+	 * Address for NULL_RD request (did<2:0> == 4).
+	 * When this is read, HW attempts to change the state to NULL if it is NULL_NULL
+	 * (the hardware cannot switch from NULL_NULL to NULL if a POW entry is not available -
+	 * software may need to recover by finishing another piece of work before a POW
+	 * entry can ever become available.)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_0_39 : 40;
+	} snull_rd;
+} cvmx_pow_load_addr_t;
+
+/**
+ * This structure defines the response to a load/SENDSINGLE to POW (except CSR reads)
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Response to new work request loads
+	 */
+	struct {
+		u64 no_work : 1;
+		u64 pend_switch : 1;
+		u64 tt : 2;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 addr : 42;
+	} s_work;
+
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 reserved_34_35 : 2;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDTAG)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_48_56 : 9;
+		u64 pend_index : 11;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0_cn68xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 pend_wqp : 36;
+	} s_sstatus1;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDWQP)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_51_56 : 6;
+		u64 pend_index : 11;
+		u64 reserved_38_39 : 2;
+		u64 pend_wqp : 38;
+	} s_sstatus1_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_56 : 1;
+		u64 prep_index : 12;
+		u64 reserved_42_43 : 2;
+		u64 pend_tag : 42;
+	} s_sso_ppx_pendwqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_TAG)
+	 */
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_46_47 : 2;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ppx_tag_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus3;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_WQP)
+	 */
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 index : 11;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_sstatus3_cn68xx;
+
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 tag : 42;
+	} s_sso_ppx_wqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus4;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_LINKS)
+	 */
+	struct {
+		u64 reserved_46_63 : 18;
+		u64 index : 11;
+		u64 reserved_34 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_24_25 : 2;
+		u64 revlink_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 link_index : 11;
+	} s_sstatus4_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_38_47 : 10;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_25 : 1;
+		u64 revlink_index : 12;
+		u64 link_index_vld : 1;
+		u64 link_index : 12;
+	} s_sso_ppx_links_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus5;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 0)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 reserved_35 : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_TAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0_cn68xx;
+
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sso_iaq_ppx_tag_cn78xx;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_smemload1;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_WQPGRP)
+	 */
+	struct {
+		u64 reserved_48_63 : 16;
+		u64 nosched : 1;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_smemload1_cn68xx;
+
+	/**
+	 * Entry structures for the CN7XXX chips.
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tailc : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ientx_tag_cn78xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_56_59 : 4;
+		u64 grp : 8;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn73xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn78xx;
+
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ientx_pendtag_cn78xx;
+
+	struct {
+		u64 reserved_26_63 : 38;
+		u64 prev_index : 10;
+		u64 reserved_11_15 : 5;
+		u64 next_index_vld : 1;
+		u64 next_index : 10;
+	} s_sso_ientx_links_cn73xx;
+
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 prev_index : 12;
+		u64 reserved_13_15 : 3;
+		u64 next_index_vld : 1;
+		u64 next_index : 12;
+	} s_sso_ientx_links_cn78xx;
+
+	/**
+	 * Result For POW Memory Load (get_des == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 fwd_index : 11;
+		u64 grp : 4;
+		u64 nosched : 1;
+		u64 pend_switch : 1;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_PENTAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_34_56 : 23;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ppx_pendtag_cn78xx;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_LINKS)
+	 */
+	struct {
+		u64 reserved_24_63 : 40;
+		u64 fwd_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 next_index : 11;
+	} s_smemload3_cn68xx;
+
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 free_val : 1;
+		u64 free_one : 1;
+		u64 reserved_49 : 1;
+		u64 free_head : 11;
+		u64 reserved_37 : 1;
+		u64 free_tail : 11;
+		u64 loc_val : 1;
+		u64 loc_one : 1;
+		u64 reserved_23 : 1;
+		u64 loc_head : 11;
+		u64 reserved_11 : 1;
+		u64 loc_tail : 11;
+	} sindexload0;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode ==
+	 * IPL_IQ/IPL_DESCHED/IPL_NOSCHED)
+	 */
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload0_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 nosched_val : 1;
+		u64 nosched_one : 1;
+		u64 reserved_49 : 1;
+		u64 nosched_head : 11;
+		u64 reserved_37 : 1;
+		u64 nosched_tail : 11;
+		u64 des_val : 1;
+		u64 des_one : 1;
+		u64 reserved_23 : 1;
+		u64 des_head : 11;
+		u64 reserved_11 : 1;
+		u64 des_tail : 11;
+	} sindexload1;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode == IPL_FREE0/IPL_FREE1/IPL_FREE2)
+	 */
+	struct {
+		u64 reserved_60_63 : 4;
+		u64 qnum_head : 2;
+		u64 qnum_tail : 2;
+		u64 reserved_28_55 : 28;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload1_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_head : 36;
+	} sindexload2;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_tail : 36;
+	} sindexload3;
+	/**
+	 * Response to NULL_RD request loads
+	 */
+	struct {
+		u64 unused : 62;
+		u64 state : 2;
+	} s_null_rd;
+
+} cvmx_pow_tag_load_resp_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pow_sl_tag_resp_t;
+
+/**
+ * This structure describes the address used for stores to the POW.
+ *  The store address is meaningful on stores to the POW.  The hardware assumes that an aligned
+ *  64-bit store was used for all these stores.
+ *  Note the assumption that the work queue entry is aligned on an 8-byte
+ *  boundary (since the low-order 3 address bits must be zero).
+ *  Note that not all fields are used by all operations.
+ *
+ *  NOTE: The following is the behavior of the pending switch bit at the PP
+ *       for POW stores (i.e. when did<7:3> == 0xc)
+ *     - did<2:0> == 0      => pending switch bit is set
+ *     - did<2:0> == 1      => no effect on the pending switch bit
+ *     - did<2:0> == 3      => pending switch bit is cleared
+ *     - did<2:0> == 7      => no effect on the pending switch bit
+ *     - did<2:0> == others => must not be used
+ *     - No other loads/stores have an effect on the pending switch bit
+ *     - The switch bus from POW can clear the pending switch bit
+ *
+ *  NOTE: did<2:0> == 2 is used by the HW for a special single-cycle ADDWQ command
+ *  (that only contains the pointer). SW must never use did<2:0> == 2.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 mem_reg : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} stag;
+} cvmx_pow_tag_store_addr_t; /* FIXME- this type is unused */
+
+/**
+ * Decode of the store data when an IOBDMA SENDSINGLE is sent to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 unused : 36;
+		u64 wait : 1;
+		u64 unused2 : 3;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 unused1 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 unused2 : 13;
+		u64 index_grp_mask : 12;
+		u64 wait : 1;
+		u64 unused3 : 3;
+	} s_cn78xx;
+} cvmx_pow_iobdma_store_t;
+
+/* CSR typedefs have been moved to cvmx-pow-defs.h */
+
+/* Enum for group priority parameters which need modification */
+enum cvmx_sso_group_modify_mask {
+	CVMX_SSO_MODIFY_GROUP_PRIORITY = 0x01,
+	CVMX_SSO_MODIFY_GROUP_WEIGHT = 0x02,
+	CVMX_SSO_MODIFY_GROUP_AFFINITY = 0x04
+};
+
+/**
+ * @INTERNAL
+ * Return the number of SSO groups for a given SoC model
+ */
+static inline unsigned int cvmx_sso_num_xgrp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 256;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 64;
+	printf("ERROR: %s: Unknown model\n", __func__);
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of POW groups on current model.
+ * In case of CN78XX/CN73XX this is the number of equivalent
+ * "legacy groups" on the chip when it is used in backward
+ * compatible mode.
+ */
+static inline unsigned int cvmx_pow_num_groups(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return cvmx_sso_num_xgrp() >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		return 64;
+	else
+		return 16;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of mask-set registers.
+ */
+static inline unsigned int cvmx_sso_num_maskset(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return 2;
+	else
+		return 1;
+}
+
+/**
+ * Get the POW tag for this core. This returns the current
+ * tag type, tag, group, and POW entry index associated with
+ * this core. Index is only valid if the tag type isn't NULL_NULL.
+ * If a tag switch is pending this routine returns the tag before
+ * the tag switch, not after.
+ *
+ * @return Current tag
+ */
+static inline cvmx_pow_tag_info_t cvmx_pow_get_current_tag(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_info_t result;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_tag_t sl_ppx_tag;
+		cvmx_xgrp_t xgrp;
+		int node, core;
+
+		CVMX_SYNCS;
+		node = cvmx_get_node_num();
+		core = cvmx_get_local_core_num();
+		sl_ppx_tag.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_TAG(core));
+		result.index = sl_ppx_tag.s.index;
+		result.tag_type = sl_ppx_tag.s.tt;
+		result.tag = sl_ppx_tag.s.tag;
+
+		/* Get native XGRP value */
+		xgrp.xgrp = sl_ppx_tag.s.grp;
+
+		/* Return legacy style group 0..15 */
+		result.grp = xgrp.group;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_pow_sl_tag_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 3;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s.grp;
+		result.index = load_resp.s.index;
+		result.tag_type = load_resp.s.tag_type;
+		result.tag = load_resp.s.tag;
+	} else {
+		cvmx_pow_tag_load_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s_sstatus2.grp;
+		result.index = load_resp.s_sstatus2.index;
+		result.tag_type = load_resp.s_sstatus2.tag_type;
+		result.tag = load_resp.s_sstatus2.tag;
+	}
+	return result;
+}
+
+/**
+ * Get the POW WQE for this core. This returns the work queue
+ * entry currently associated with this core.
+ *
+ * @return WQE pointer
+ */
+static inline cvmx_wqe_t *cvmx_pow_get_current_wqp(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_load_resp_t load_resp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_wqp_t sso_wqp;
+		int node = cvmx_get_node_num();
+		int core = cvmx_get_local_core_num();
+
+		sso_wqp.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_WQP(core));
+		if (sso_wqp.s.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(sso_wqp.s.wqp);
+		return (cvmx_wqe_t *)0;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 4;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		if (load_resp.s_sstatus3_cn68xx.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus3_cn68xx.wqp);
+		else
+			return (cvmx_wqe_t *)0;
+	} else {
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_addr.sstatus.get_wqp = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus4.wqp);
+	}
+}
+
+/**
+ * @INTERNAL
+ * Print a warning if a tag switch is pending for this core
+ *
+ * @param function Function name checking for a pending tag switch
+ */
+static inline void __cvmx_pow_warn_if_pending_switch(const char *function)
+{
+	u64 switch_complete;
+
+	CVMX_MF_CHORD(switch_complete);
+	cvmx_warn_if(!switch_complete, "%s called with tag switch in progress\n", function);
+}
+
+/**
+ * Waits for a tag switch to complete by polling the completion bit.
+ * Note that switches to NULL complete immediately and do not need
+ * to be waited for.
+ */
+static inline void cvmx_pow_tag_sw_wait(void)
+{
+	const u64 TIMEOUT_MS = 10; /* 10ms timeout */
+	u64 switch_complete;
+	u64 start_cycle;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		start_cycle = get_timer(0);
+
+	while (1) {
+		CVMX_MF_CHORD(switch_complete);
+		if (cvmx_likely(switch_complete))
+			break;
+
+		if (CVMX_ENABLE_POW_CHECKS) {
+			if (cvmx_unlikely(get_timer(start_cycle) > TIMEOUT_MS)) {
+				debug("WARNING: %s: Tag switch is taking a long time, possible deadlock\n",
+				      __func__);
+			}
+		}
+	}
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync_nocheck(cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.node = cvmx_get_node_num();
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork_78xx.wait = wait;
+	} else {
+		ptr.swork.mem_region = CVMX_IO_SEG;
+		ptr.swork.is_io = 1;
+		ptr.swork.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork.wait = wait;
+	}
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync(cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	return (cvmx_pow_work_request_sync_nocheck(wait));
+}
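+
+/*
+ * Illustrative sketch (not part of the original header): a minimal
+ * synchronous dispatch loop. process_work() is a hypothetical
+ * application handler.
+ *
+ *	for (;;) {
+ *		cvmx_wqe_t *wqe = cvmx_pow_work_request_sync(CVMX_POW_WAIT);
+ *
+ *		if (!wqe)
+ *			continue;	// timed out; no work available
+ *		process_work(wqe);
+ *	}
+ */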
+
+/**
+ * Synchronous null_rd request.  Requests a switch out of NULL_NULL POW state.
+ * This function waits for any previous tag switch to complete before
+ * requesting the null_rd.
+ *
+ * @return Returns the POW state of type cvmx_pow_tag_type_t.
+ */
+static inline cvmx_pow_tag_type_t cvmx_pow_work_request_null_rd(void)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_NULL_RD;
+		ptr.swork_78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.snull_rd.mem_region = CVMX_IO_SEG;
+		ptr.snull_rd.is_io = 1;
+		ptr.snull_rd.did = CVMX_OCT_DID_TAG_NULL_RD;
+	}
+	result.u64 = csr_rd(ptr.u64);
+	return (cvmx_pow_tag_type_t)result.s_null_rd.state;
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async_nocheck(int scr_addr, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		data.s_cn78xx.node = cvmx_get_node_num();
+		data.s_cn78xx.scraddr = scr_addr >> 3;
+		data.s_cn78xx.len = 1;
+		data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s_cn78xx.wait = wait;
+	} else {
+		data.s.scraddr = scr_addr >> 3;
+		data.s.len = 1;
+		data.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s.wait = wait;
+	}
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async(int scr_addr, cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_work_request_async_nocheck(scr_addr, wait);
+}
+
+/**
+ * Gets result of asynchronous work request.  Performs a IOBDMA sync
+ * to wait for the response.
+ *
+ * @param scr_addr Scratch memory address to get result from
+ *                  Byte address, must be 8 byte aligned.
+ * @return Returns the WQE from the scratch register, or NULL if no work was
+ *         available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_response_async(int scr_addr)
+{
+	cvmx_pow_tag_load_resp_t result;
+
+	CVMX_SYNCIOBDMA;
+	result.u64 = cvmx_scratch_read64(scr_addr);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
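+
+/*
+ * Illustrative sketch (not part of the original header): overlapping the
+ * request for the next WQE with processing of the current one. Scratchpad
+ * offset 0 is an assumption; any 8-byte-aligned offset works.
+ *
+ *	cvmx_pow_work_request_async(0, CVMX_POW_WAIT);
+ *	for (;;) {
+ *		cvmx_wqe_t *wqe = cvmx_pow_work_response_async(0);
+ *
+ *		cvmx_pow_work_request_async_nocheck(0, CVMX_POW_WAIT);
+ *		if (wqe)
+ *			process_work(wqe);	// hypothetical handler
+ *	}
+ */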
+
+/**
+ * Checks if a work queue entry pointer returned by a work
+ * request is valid.  It may be invalid due to no work
+ * being available or due to a timeout.
+ *
+ * @param wqe_ptr pointer to a work queue entry returned by the POW
+ *
+ * @return 0 if pointer is valid
+ *         1 if invalid (no work was returned)
+ */
+static inline u64 cvmx_pow_work_invalid(cvmx_wqe_t *wqe_ptr)
+{
+	return (!wqe_ptr); /* FIXME: improve */
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details.
+	 * It is the application's responsibility to keep track of the
+	 * current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = tag_type;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+	}
+	/* Once this store arrives at the POW, it will attempt the switch.
+	 * Software must wait for the switch to complete separately. */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function waits for any previous tag switch to complete, and also
+ * displays an error on tag switches to NULL.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details. It is the application's responsibility to keep
+	 * track of the current tag value if that is important.
+	 */
+
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_nocheck(tag, tag_type);
+}
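+
+/*
+ * Illustrative sketch (documentation only): a minimal critical section
+ * built from a tag switch to an ATOMIC tag. The tag value is caller
+ * supplied; the pattern, not the values, is the point here.
+ */
+static inline void cvmx_pow_example_atomic_section(u32 tag)
+{
+	cvmx_pow_tag_sw(tag, CVMX_POW_TAG_TYPE_ATOMIC);
+	/* The switch was only requested above; wait for it to complete
+	 * before touching state protected by the ATOMIC tag. */
+	cvmx_pow_tag_sw_wait();
+	/* ...critical section... */
+}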
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param wqp      pointer to work queue entry to submit.  This entry is
+ *                 updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param group    group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full_nocheck(cvmx_wqe_t *wqp, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u64 group)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	unsigned int node = cvmx_get_node_num();
+	u64 wqp_phys = cvmx_ptr_to_phys(wqp);
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not
+	 * read from DRAM once the WQE is in flight.  See hardware manual
+	 * for complete details. It is the application's responsibility to
+	 * keep track of the current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int xgrp;
+
+		if (wqp_phys != 0x80) {
+			/* If WQE is valid, use its XGRP:
+			 * WQE GRP is 10 bits, and is mapped
+			 * to legacy GRP + QoS, includes node number.
+			 */
+			xgrp = wqp->word1.cn78xx.grp;
+			/* Use XGRP[node] too */
+			node = xgrp >> 8;
+			/* Modify XGRP with legacy group # from arg */
+			xgrp &= ~0xf8;
+			xgrp |= 0xf8 & (group << 3);
+
+		} else {
+			/* If no WQE, build XGRP with QoS=0 and current node */
+			xgrp = group << 3;
+			xgrp |= node << 8;
+		}
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = xgrp;
+		tag_req.s_cn78xx_other.wqp = wqp_phys;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s.addr = wqp_phys;
+	}
+	/* Once this store arrives at the POW, it will attempt the switch.
+	 * Software must wait for the switch to complete separately. */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.
+ * Completion for the tag switch must be checked for separately.
+ * This function does NOT update the work queue entry in dram to match tag value
+ * and type, so the application must keep track of these if they are important
+ * to the application. This tag switch command must not be used for switches
+ * to NULL, as the tag switch pending bit will be set by the switch request,
+ * but never cleared by the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function waits for any pending tag switches to complete
+ * before requesting the tag switch.
+ *
+ * @param wqp      Pointer to work queue entry to submit.
+ *     This entry is updated to match the other parameters
+ * @param tag      Tag value to be assigned to work queue entry
+ * @param tag_type Type of tag
+ * @param group    Group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 group)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_full_nocheck(wqp, tag, tag_type, group);
+}
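+
+/*
+ * Illustrative sketch (documentation only): re-acquiring a tag from the
+ * NULL state, where a plain cvmx_pow_tag_sw() is not allowed. The
+ * ORDERED tag type and group 0 are example choices.
+ */
+static inline void cvmx_pow_example_retag_from_null(cvmx_wqe_t *wqp, u32 tag)
+{
+	cvmx_pow_tag_sw_full(wqp, tag, CVMX_POW_TAG_TYPE_ORDERED, 0);
+}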
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that any previous tag switches have completed.
+ */
+static inline void cvmx_pow_tag_sw_null_nocheck(void)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called when we already have a NULL tag\n", __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.type = CVMX_POW_TAG_TYPE_NULL;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function waits for any pending tag switches to complete
+ * before requesting the switch to NULL.
+ */
+static inline void cvmx_pow_tag_sw_null(void)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_null_nocheck();
+}
+
+/**
+ * Submits work to an input queue.
+ * This function updates the work queue entry in DRAM to match the arguments given.
+ * Note that the tag provided is for the work queue entry submitted, and
+ * is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp      pointer to work queue entry to submit.
+ *                 This entry is updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param qos      Input queue to add to.
+ * @param grp      group value for the work queue entry.
+ */
+static inline void cvmx_pow_work_submit(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 qos, u64 grp)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	tag_req.u64 = 0;
+	ptr.u64 = 0;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		unsigned int xgrp;
+
+		xgrp = (grp & 0x1f) << 3;
+		xgrp |= (qos & 7);
+		xgrp |= 0x300 & (node << 8);
+
+		wqp->word1.cn78xx.rsvd_0 = 0;
+		wqp->word1.cn78xx.rsvd_1 = 0;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = xgrp;
+		CVMX_SYNCWS;
+
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+		tag_req.s_cn78xx_other.grp = xgrp;
+
+		ptr.s_cn78xx.did = 0x66; /* CVMX_OCT_DID_TAG_TAG6 */
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		/* Reset all reserved bits */
+		wqp->word1.cn68xx.zero_0 = 0;
+		wqp->word1.cn68xx.zero_1 = 0;
+		wqp->word1.cn68xx.zero_2 = 0;
+		wqp->word1.cn68xx.qos = qos;
+		wqp->word1.cn68xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn68xx_add.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn68xx_add.type = tag_type;
+		tag_req.s_cn68xx_add.tag = tag;
+		tag_req.s_cn68xx_add.qos = qos;
+		tag_req.s_cn68xx_add.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	} else {
+		/* Reset all reserved bits */
+		wqp->word1.cn38xx.zero_2 = 0;
+		wqp->word1.cn38xx.qos = qos;
+		wqp->word1.cn38xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.qos = qos;
+		tag_req.s_cn38xx.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	}
+	/* SYNC write to memory before the work submit.
+	 * This is necessary as POW may read values from DRAM at this time */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
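+
+/*
+ * Illustrative sketch (documentation only): hand a software-generated
+ * WQE to the scheduler. The WQE is assumed to have been allocated and
+ * filled in by the caller; QoS 0 and group 0 are arbitrary example
+ * values.
+ */
+static inline void cvmx_pow_example_submit(cvmx_wqe_t *wqp, u32 tag)
+{
+	cvmx_pow_work_submit(wqp, tag, CVMX_POW_TAG_TYPE_ORDERED, 0, 0);
+}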
+
+/**
+ * This function sets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from. The number
+ * of groups available depends on the Octeon model (see below).
+ *
+ * @param core_num   core to apply mask to
+ * @param mask   Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards compatibility mode allows up to 32 groups,
+ * so the 'mask' argument has one bit for each of the legacy groups.
+ * A '1' in the mask enables, for the given processor core, the total
+ * of 8 native groups that share the legacy group number (one per QoS
+ * level). A '0' in the mask disables the core from receiving work
+ * from the associated group.
+ */
+static inline void cvmx_pow_set_group_mask(u64 core_num, u64 mask)
+{
+	u64 valid_mask;
+	int num_groups = cvmx_pow_num_groups();
+
+	if (num_groups >= 64)
+		valid_mask = ~0ull;
+	else
+		valid_mask = (1ull << num_groups) - 1;
+
+	if ((mask & valid_mask) == 0) {
+		printf("ERROR: %s empty group mask disables work on core# %llu, ignored.\n",
+		       __func__, (unsigned long long)core_num);
+		return;
+	}
+	cvmx_warn_if(mask & (~valid_mask), "%s group number range exceeded: %#llx\n", __func__,
+		     (unsigned long long)mask);
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int mask_set;
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node;
+		unsigned int rix;  /* Register index */
+		unsigned int grp;  /* Legacy group # */
+		unsigned int bit;  /* bit index */
+		unsigned int xgrp; /* native group # */
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = 0; rix < (cvmx_sso_num_xgrp() >> 6); rix++) {
+			grp_msk.u64 = 0;
+			for (bit = 0; bit < 64; bit++) {
+				/* 8-bit native XGRP number */
+				xgrp = (rix << 6) | bit;
+				/* Legacy 5-bit group number */
+				grp = (xgrp >> 3) & 0x1f;
+				/* Inspect legacy mask by legacy group */
+				if (mask & (1ull << grp))
+					grp_msk.s.grp_msk |= 1ull << bit;
+				/* Pre-set to all 0's */
+			}
+			for (mask_set = 0; mask_set < cvmx_sso_num_maskset(); mask_set++) {
+				csr_wr_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, mask_set, rix),
+					    grp_msk.u64);
+			}
+		}
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.s.grp_msk = mask;
+		csr_wr(CVMX_SSO_PPX_GRP_MSK(core_num), grp_msk.u64);
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.grp_msk = mask & 0xffff;
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
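+
+/*
+ * Illustrative sketch (documentation only): restrict a core to work
+ * from groups 0 and 1. The group choice is an example; the mask has
+ * one bit per (legacy) group.
+ */
+static inline void cvmx_pow_example_accept_groups_0_1(u64 core_num)
+{
+	cvmx_pow_set_group_mask(core_num, (1ull << 0) | (1ull << 1));
+}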
+
+/**
+ * This function gets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from.
+ *
+ * @param core_num   core to get the group mask for
+ * @return	Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards compatibility mode allows up to 32 groups,
+ * so the returned mask has one bit for each of the legacy groups.
+ * A '1' in the mask means the total of 8 native groups sharing the
+ * legacy group number (one per QoS level) is enabled for the given
+ * processor core. A '0' means the core does not receive work from
+ * the associated group.
+ */
+static inline u64 cvmx_pow_get_group_mask(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read only mask_set=0 (both sets are written the same) */
+			grp_msk.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			/* ASSUME: (this is how mask bits got written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* DO: mask[7:0] = grp_mask.u64[56,48,40,32,24,16,8,0] */
+			for (i = 0; i < 8; i++)
+				mask |= (grp_msk.u64 & ((u64)1 << (i * 8))) >> (7 * i);
+			/* we collected 8 MSBs in mask[7:0], <<=8 and continue */
+			if (cvmx_likely(rix != 0))
+				mask <<= 8;
+		}
+		return mask & 0xFFFFFFFF;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_SSO_PPX_GRP_MSK(core_num));
+		return grp_msk.u64;
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		return grp_msk.u64 & 0xffff;
+	}
+}
+
+/*
+ * Returns 0 if the 78xx (73xx, 75xx) SSO is not programmed in legacy
+ * compatible mode.
+ * Returns 1 if it is programmed in legacy compatible mode, or if the
+ * Octeon model is not a 78xx (73xx, 75xx).
+ */
+static inline u64 cvmx_pow_is_legacy78mode(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk0, grp_msk1;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		/* 1) for the 78_SSO to be in legacy compatible mode,
+		 * both mask_sets must be programmed the same */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read both mask sets; in legacy mode they are written the same */
+			grp_msk0.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			grp_msk1.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 1, rix));
+			if (grp_msk0.u64 != grp_msk1.u64) {
+				return 0;
+			}
+			/* (this is how mask bits should be written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* 2) for the 78_SSO to be in legacy compatible mode,
+			 * the above must hold (test only mask_set=0) */
+			for (i = 0; i < 8; i++) {
+				mask = (grp_msk0.u64 >> (i << 3)) & 0xFF;
+				if (!(mask == 0 || mask == 0xFF)) {
+					return 0;
+				}
+			}
+		}
+		/* if we come here, the 78_SSO is in legacy compatible mode */
+	}
+	return 1; /* the SSO/POW is in legacy (or compatible) mode */
+}
+
+/**
+ * This function sets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param core_num   core to apply priorities to
+ * @param priority   Vector of 8 priorities, one per POW Input Queue (0-7).
+ *                   Highest priority is 0 and lowest is 7. A priority value
+ *                   of 0xF instructs POW to skip the Input Queue when
+ *                   scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_set_priority(u64 core_num, const u8 priority[])
+{
+	/* Detect gaps between priorities and flag error */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR: POW static priorities should be contiguous (0x%llx)\n",
+			      (unsigned long long)prio_mask);
+			return;
+		}
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/*
+		 * grp_pri.s.weight (0x3f) and grp_pri.s.affinity (0xf) need
+		 * not be preset here; both are overwritten by the
+		 * csr_rd_node() below.
+		 */
+
+		for (group = 0; group < cvmx_sso_num_xgrp(); group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			grp_pri.s.pri = priority[group & 0x7];
+			csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grp_pri.u64);
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		qos_pri.s.qos0_pri = priority[0];
+		qos_pri.s.qos1_pri = priority[1];
+		qos_pri.s.qos2_pri = priority[2];
+		qos_pri.s.qos3_pri = priority[3];
+		qos_pri.s.qos4_pri = priority[4];
+		qos_pri.s.qos5_pri = priority[5];
+		qos_pri.s.qos6_pri = priority[6];
+		qos_pri.s.qos7_pri = priority[7];
+		csr_wr(CVMX_SSO_PPX_QOS_PRI(core_num), qos_pri.u64);
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.qos0_pri = priority[0];
+		grp_msk.s.qos1_pri = priority[1];
+		grp_msk.s.qos2_pri = priority[2];
+		grp_msk.s.qos3_pri = priority[3];
+		grp_msk.s.qos4_pri = priority[4];
+		grp_msk.s.qos5_pri = priority[5];
+		grp_msk.s.qos6_pri = priority[6];
+		grp_msk.s.qos7_pri = priority[7];
+
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
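+
+/*
+ * Illustrative sketch (documentation only): give input queue 0 the
+ * highest priority and keep all other queues one level below, which
+ * satisfies the "no gaps" rule documented above.
+ */
+static inline void cvmx_pow_example_prioritize_queue0(u64 core_num)
+{
+	const u8 prio[8] = { 0, 1, 1, 1, 1, 1, 1, 1 };
+
+	cvmx_pow_set_priority(core_num, prio);
+}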
+
+/**
+ * This function gets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param[in]  core_num core to get priorities for
+ * @param[out] priority Pointer to u8[] where to return priorities
+ *			Vector of 8 priorities, one per POW Input Queue (0-7).
+ *			Highest priority is 0 and lowest is 7. A priority value
+ *			of 0xF instructs POW to skip the Input Queue when
+ *			scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_get_priority(u64 core_num, u8 priority[])
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/* Read the priority only from the first 8 groups; the
+		 * remaining groups are programmed the same (periodically) */
+		for (group = 0; group < 8 /*cvmx_sso_num_xgrp() */; group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			priority[group /* & 0x7 */] = grp_pri.s.pri;
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		priority[0] = qos_pri.s.qos0_pri;
+		priority[1] = qos_pri.s.qos1_pri;
+		priority[2] = qos_pri.s.qos2_pri;
+		priority[3] = qos_pri.s.qos3_pri;
+		priority[4] = qos_pri.s.qos4_pri;
+		priority[5] = qos_pri.s.qos5_pri;
+		priority[6] = qos_pri.s.qos6_pri;
+		priority[7] = qos_pri.s.qos7_pri;
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		priority[0] = grp_msk.s.qos0_pri;
+		priority[1] = grp_msk.s.qos1_pri;
+		priority[2] = grp_msk.s.qos2_pri;
+		priority[3] = grp_msk.s.qos3_pri;
+		priority[4] = grp_msk.s.qos4_pri;
+		priority[5] = grp_msk.s.qos5_pri;
+		priority[6] = grp_msk.s.qos6_pri;
+		priority[7] = grp_msk.s.qos7_pri;
+	}
+
+	/* Detect gaps between priorities and flag error - (optional) */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR:%s: POW static priorities should be contiguous (0x%llx)\n",
+			      __func__, (unsigned long long)prio_mask);
+			return;
+		}
+	}
+}
+
+static inline void cvmx_sso_get_group_priority(int node, cvmx_xgrp_t xgrp, int *priority,
+					       int *weight, int *affinity)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	*affinity = grp_pri.s.affinity;
+	*priority = grp_pri.s.pri;
+	*weight = grp_pri.s.weight;
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function does NOT wait for any prior tag switches to complete, so the
+ * calling code must do this.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+						   u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_t *wqp = cvmx_pow_get_current_wqp();
+
+		if (!wqp) {
+			debug("ERROR: Failed to get WQE, %s\n", __func__);
+			return;
+		}
+		group &= 0x1f;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = group << 3;
+		CVMX_SYNCWS;
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = group << 3;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		group &= 0x3f;
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		group &= 0x0f;
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function waits for any prior tag switches to complete, so the
+ * calling code may call this function with a pending tag switch.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+					   u64 no_sched)
+{
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/* Ensure that there is not a pending tag switch, as a tag switch cannot be started
+	 * if a previous switch is still pending.  */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_desched_nocheck(tag, tag_type, group, no_sched);
+}
+
+/**
+ * Deschedules the current work queue entry.
+ *
+ * @param no_sched no schedule flag value to be set on the work queue entry.
+ *     If this is set the entry will not be rescheduled.
+ */
+static inline void cvmx_pow_desched(u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not expected from NULL state\n",
+			     __func__);
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/******************************************************************************/
+/* OCTEON3-specific functions.                                                */
+/******************************************************************************/
+/**
+ * This function sets the affinity of a group to the cores in the 78xx.
+ * It sets up all the cores in core_mask to accept work from the specified group.
+ *
+ * @param xgrp	Group to accept work from, 0 - 255.
+ * @param core_mask	Mask of all the cores which will accept work from this group
+ * @param mask_set	Every core has a set of 2 masks which can be set to accept work
+ *     from 256 groups. At the time of get_work, cores can choose which mask_set
+ *     to get work from. 'mask_set' values range from 0 to 3, where each of the
+ *     two bits represents a mask set. Cores will be added to the mask set with
+ *     the corresponding bit set, and removed from the mask set with the
+ *     corresponding bit clear.
+ * Note: cores can only accept work from SSO groups on the same node,
+ * so the node number for the group is derived from the core number.
+ */
+static inline void cvmx_sso_set_group_core_affinity(cvmx_xgrp_t xgrp,
+						    const struct cvmx_coremask *core_mask,
+						    u8 mask_set)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	int core;
+	int grp_index = xgrp.xgrp >> 6;
+	int bit_pos = xgrp.xgrp % 64;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	cvmx_coremask_for_each_core(core, core_mask)
+	{
+		unsigned int node, ncore;
+		u64 reg_addr;
+
+		node = cvmx_coremask_core_to_node(core);
+		ncore = cvmx_coremask_core_on_node(core);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 0, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 1)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 1, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 2)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+	}
+}
+
+/**
+ * This function sets the priority and group affinity arbitration for each group.
+ *
+ * @param node		Node number
+ * @param xgrp	Group 0 - 255 to apply mask parameters to
+ * @param priority	Priority of the group relative to other groups
+ *     0x0 - highest priority
+ *     0x7 - lowest priority
+ * @param weight	Cross-group arbitration weight to apply to this group.
+ *     valid values are 1-63
+ *     h/w default is 0x3f
+ * @param affinity	Processor affinity arbitration weight to apply to this group.
+ *     If zero, affinity is disabled.
+ *     valid values are 0-15
+ *     h/w default which is 0xf.
+ * @param modify_mask   mask of the parameters which needs to be modified.
+ *     enum cvmx_sso_group_modify_mask
+ *     to modify only priority -- set bit0
+ *     to modify only weight   -- set bit1
+ *     to modify only affinity -- set bit2
+ */
+static inline void cvmx_sso_set_group_priority(int node, cvmx_xgrp_t xgrp, int priority, int weight,
+					       int affinity,
+					       enum cvmx_sso_group_modify_mask modify_mask)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	if (weight <= 0)
+		weight = 0x3f; /* Force HW default when out of range */
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	if (grp_pri.s.weight == 0)
+		grp_pri.s.weight = 0x3f;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_PRIORITY)
+		grp_pri.s.pri = priority;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_WEIGHT)
+		grp_pri.s.weight = weight;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_AFFINITY)
+		grp_pri.s.affinity = affinity;
+	csr_wr_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp), grp_pri.u64);
+}
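+
+/*
+ * Illustrative sketch (documentation only): adjust only the arbitration
+ * weight of one group, leaving priority and affinity untouched via the
+ * modify mask. The weight of 32 is an arbitrary mid-range example.
+ */
+static inline void cvmx_sso_example_set_weight(int node, cvmx_xgrp_t xgrp)
+{
+	cvmx_sso_set_group_priority(node, xgrp, 0, 32, 0,
+				    CVMX_SSO_MODIFY_GROUP_WEIGHT);
+}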
+
+/**
+ * Asynchronous work request.
+ * Only works on CN78XX style SSO.
+ *
+ * Work is requested from the SSO unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param xgrp  Group to receive work for (0-255).
+ * @param wait
+ *     1 to cause response to wait for work to become available (or timeout)
+ *     0 to cause response to return immediately
+ */
+static inline void cvmx_sso_work_request_grp_async_nocheck(int scr_addr, cvmx_xgrp_t xgrp,
+							   cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+	unsigned int node = cvmx_get_node_num();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	data.s_cn78xx.scraddr = scr_addr >> 3;
+	data.s_cn78xx.len = 1;
+	data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	data.s_cn78xx.grouped = 1;
+	data.s_cn78xx.index_grp_mask = (node << 8) | xgrp.xgrp;
+	data.s_cn78xx.wait = wait;
+	data.s_cn78xx.node = node;
+
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Synchronous work request from the node-local SSO without verifying
+ * pending tag switch. It requests work from a specific SSO group.
+ *
+ * @param lgrp The local group number (within the SSO of the node of the caller)
+ *     from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from SSO.
+ *     Returns NULL if no work was available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync_nocheck(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+	unsigned int node = cvmx_get_node_num() & 3;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	ptr.u64 = 0;
+	ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+	ptr.swork_78xx.is_io = 1;
+	ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.swork_78xx.node = node;
+	ptr.swork_78xx.grouped = 1;
+	ptr.swork_78xx.index = (lgrp & 0xff) | node << 8;
+	ptr.swork_78xx.wait = wait;
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request from the node-local SSO.
+ * It requests work from a specific SSO group.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param lgrp The node-local group number from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return The WQE pointer or NULL, if work is not available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_tag_sw_wait();
+	return cvmx_sso_work_request_grp_sync_nocheck(lgrp, wait);
+}
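+
+/*
+ * Illustrative sketch (documentation only): non-blocking poll of one
+ * node-local SSO group, assuming the usual CVMX_POW_NO_WAIT enumerator
+ * of cvmx_pow_wait_t.
+ */
+static inline cvmx_wqe_t *cvmx_sso_example_poll_group(unsigned int lgrp)
+{
+	return (cvmx_wqe_t *)cvmx_sso_work_request_grp_sync(lgrp, CVMX_POW_NO_WAIT);
+}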
+
+/**
+ * This function sets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to apply mask to.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Group mask array.
+ *     Total number of groups is divided into a number of
+ *     64-bit mask sets. Each bit in the mask, if set, enables
+ *     the core to accept work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ *
+ * If the 'mask_set' argument is 3, both mask sets are configured
+ * with the same value (which is not typically the intention),
+ * so keep in mind the function needs to be called twice
+ * to set a different value into each of the mask sets,
+ * once with 'mask_set=1' and a second time with 'mask_set=2'.
+ */
+static inline void cvmx_pow_set_xgrp_mask(u64 core_num, u8 mask_set, const u64 xgrp_mask[])
+{
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_warn_if(((mask_set < 1) || (mask_set > 3)), "Invalid mask set");
+	}
+
+	if ((mask_set < 1) || (mask_set > 3))
+		mask_set = 3;
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < (cvmx_sso_num_xgrp() >> 6); grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+	}
+}
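+
+/*
+ * Illustrative sketch (documentation only): accept work only from native
+ * group 0 through mask set 1. Four 64-bit words cover the worst case of
+ * 256 groups; the function itself iterates only over the sets the chip
+ * actually has.
+ */
+static inline void cvmx_pow_example_xgrp0_only(u64 core_num)
+{
+	const u64 xgrp_mask[4] = { 1ull, 0, 0, 0 };
+
+	cvmx_pow_set_xgrp_mask(core_num, 1, xgrp_mask);
+}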
+
+/**
+ * This function gets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to apply mask to.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Provide pointer to u64 mask[8] output array.
+ *     Total number of groups is divided into a number of
+ *     64-bit mask sets. Each set bit in the mask indicates that
+ *     the core accepts work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ */
+static inline void cvmx_pow_get_xgrp_mask(u64 core_num, u8 mask_set, u64 *xgrp_mask)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_warn_if(mask_set != 1 && mask_set != 2, "Invalid mask set");
+	}
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp() >> 6; grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+	}
+}
+
+/**
+ * Executes SSO SWTAG command.
+ * This is similar to cvmx_pow_tag_sw() function, but uses linear
+ * (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+	tag_req.s_cn78xx_other.type = tag_type;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes SSO SWTAG_FULL command.
+ * This is similar to cvmx_pow_tag_sw_full() function, but
+ * uses linear (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_full_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 gxgrp;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/* Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending. */
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+	gxgrp = node;
+	gxgrp = gxgrp << 8 | xgrp;
+	wqp->word1.cn78xx.grp = gxgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = gxgrp;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Submits work to an SSO group on any OCI node.
+ * This function updates the work queue entry in DRAM to match
+ * the arguments given.
+ * Note that the tag provided is for the work queue entry submitted,
+ * and is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp pointer to work queue entry to submit.
+ * This entry is updated to match the other parameters
+ * @param tag tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param xgrp native CN78XX group in the range 0..255
+ * @param node The OCI node number for the target group
+ *
+ * Note: this function is only implemented for the CN78XX-style SSO.
+ * On models prior to CN78XX, which do not support OCI nodes, it logs
+ * an error and returns without submitting any work.
+ */
+static inline void cvmx_pow_work_submit_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+	tag_req.s_cn78xx_other.grp = group;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.did = 0x66; /* CVMX_OCT_DID_TAG_TAG6 */
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+
+	/* SYNC write to memory before the work submit.  This is necessary
+	 * as POW may read values from DRAM at this time */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
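+
+/*
+ * Illustrative sketch (documentation only): submit a WQE to a group on
+ * a remote OCI node. Node 1 is assumed to be present in a multi-chip
+ * configuration.
+ */
+static inline void cvmx_pow_example_submit_remote(cvmx_wqe_t *wqp, u32 tag,
+						  u8 xgrp)
+{
+	cvmx_pow_work_submit_node(wqp, tag, CVMX_POW_TAG_TYPE_ORDERED, xgrp, 1);
+}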
+
+/**
+ * Executes the SSO SWTAG_DESCHED operation.
+ * This is similar to the cvmx_pow_tag_sw_desched() function, but
+ * uses linear (vs. unified group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_desched_node(cvmx_wqe_t *wqe, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u8 xgrp, u64 no_sched,
+						u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqe->word1.cn78xx.tag = tag;
+	wqe->word1.cn78xx.tag_type = tag_type;
+	wqe->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = group;
+	tag_req.s_cn78xx_other.no_sched = no_sched;
+
+	ptr.u64 = 0;
+	ptr.s.mem_region = CVMX_IO_SEG;
+	ptr.s.is_io = 1;
+	ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes the UPD_WQP_GRP SSO operation.
+ *
+ * @param wqp  Pointer to the new work queue entry to switch to.
+ * @param xgrp SSO group in the range 0..255
+ *
+ * NOTE: The operation can be performed only on the local node.
+ */
+static inline void cvmx_sso_update_wqp_group(cvmx_wqe_t *wqp, u8 xgrp)
+{
+	union cvmx_pow_tag_req_addr addr;
+	cvmx_pow_tag_req_t data;
+	int node = cvmx_get_node_num();
+	int group = node << 8 | xgrp;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	data.u64 = 0;
+	data.s_cn78xx_other.op = CVMX_POW_TAG_OP_UPDATE_WQP_GRP;
+	data.s_cn78xx_other.grp = group;
+	data.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	addr.u64 = 0;
+	addr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	addr.s_cn78xx.is_io = 1;
+	addr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+	addr.s_cn78xx.node = node;
+	cvmx_write_io(addr.u64, data.u64);
+}
+
+/******************************************************************************/
+/* Define usage of bits within the 32 bit tag values.                         */
+/******************************************************************************/
+/*
+ * Number of bits of the tag used by software.  The SW bits
+ * are always a contiguous block of the high bits, starting at bit 31.
+ * The hardware bits are always the low bits.  By default, the top 8 bits
+ * of the tag are reserved for software, and the low 24 are set by the IPD unit.
+ */
+#define CVMX_TAG_SW_BITS  (8)
+#define CVMX_TAG_SW_SHIFT (32 - CVMX_TAG_SW_BITS)
+
+/* Below is the list of values for the top 8 bits of the tag. */
+/*
+ * Tag values with top byte of this value are reserved for internal executive
+ * uses
+ */
+#define CVMX_TAG_SW_BITS_INTERNAL 0x1
+
+/*
+ * The executive divides the remaining 24 bits as follows:
+ * the upper 8 bits (bits 23 - 16 of the tag) define a subgroup,
+ * the lower 16 bits (bits 15 - 0 of the tag) are the value within
+ * the subgroup. Note that this section describes the format of tags generated
+ * by software - refer to the hardware documentation for a description of the
+ * tag values generated by the packet input hardware.
+ * Subgroups are defined below.
+ */
+
+/* Mask for the value portion of the tag */
+#define CVMX_TAG_SUBGROUP_MASK	0xFFFF
+#define CVMX_TAG_SUBGROUP_SHIFT 16
+#define CVMX_TAG_SUBGROUP_PKO	0x1
+
+/* End of executive tag subgroup definitions */
+
+/* The remaining software bit values 0x2 - 0xff are available
+ * for application use */
+
+/**
+ * This function creates a 32 bit tag value from the two values provided.
+ *
+ * @param sw_bits The upper bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the hw_bits parameter.
+ * @param hw_bits The lower bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the sw_bits parameter.
+ *
+ * @return 32 bit value of the combined hw and sw bits.
+ */
+static inline u32 cvmx_pow_tag_compose(u64 sw_bits, u64 hw_bits)
+{
+	return (((sw_bits & cvmx_build_mask(CVMX_TAG_SW_BITS)) << CVMX_TAG_SW_SHIFT) |
+		(hw_bits & cvmx_build_mask(32 - CVMX_TAG_SW_BITS)));
+}
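+
+/*
+ * Illustrative sketch, not part of the original API: how a software tag
+ * for the executive-internal PKO subgroup could be composed from the
+ * definitions above. The function name is hypothetical.
+ */
+static inline u32 cvmx_pow_tag_compose_pko_example(u16 value)
+{
+	/* hw_bits: subgroup in bits 23..16, value in bits 15..0 */
+	u64 hw_bits = ((u64)CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT) |
+		      (value & CVMX_TAG_SUBGROUP_MASK);
+
+	/* sw_bits: mark the tag as reserved for internal executive use */
+	return cvmx_pow_tag_compose(CVMX_TAG_SW_BITS_INTERNAL, hw_bits);
+}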
+
+/**
+ * Extracts the bits allocated for software use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return N bit software tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_sw_bits(u64 tag)
+{
+	return ((tag >> (32 - CVMX_TAG_SW_BITS)) & cvmx_build_mask(CVMX_TAG_SW_BITS));
+}
+
+/**
+ *
+ * Extracts the bits allocated for hardware use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return (32 - N) bit software tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_hw_bits(u64 tag)
+{
+	return (tag & cvmx_build_mask(32 - CVMX_TAG_SW_BITS));
+}
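+
+/*
+ * Round-trip sketch (illustrative values): with the default
+ * CVMX_TAG_SW_BITS of 8, cvmx_pow_tag_compose(0x1, 0x123456) yields
+ * 0x01123456; cvmx_pow_tag_get_sw_bits() then recovers 0x1 and
+ * cvmx_pow_tag_get_hw_bits() recovers 0x123456.
+ */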
+
+static inline u64 cvmx_sso3_get_wqe_count(int node)
+{
+	cvmx_sso_grpx_aq_cnt_t aq_cnt;
+	unsigned int grp = 0;
+	u64 cnt = 0;
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp(); grp++) {
+		aq_cnt.u64 = csr_rd_node(node, CVMX_SSO_GRPX_AQ_CNT(grp));
+		cnt += aq_cnt.s.aq_cnt;
+	}
+	return cnt;
+}
+
+static inline u64 cvmx_sso_get_total_wqe_count(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int node = cvmx_get_node_num();
+
+		return cvmx_sso3_get_wqe_count(node);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		cvmx_sso_iq_com_cnt_t sso_iq_com_cnt;
+
+		sso_iq_com_cnt.u64 = csr_rd(CVMX_SSO_IQ_COM_CNT);
+		return (sso_iq_com_cnt.s.iq_cnt);
+	} else {
+		cvmx_pow_iq_com_cnt_t pow_iq_com_cnt;
+
+		pow_iq_com_cnt.u64 = csr_rd(CVMX_POW_IQ_COM_CNT);
+		return (pow_iq_com_cnt.s.iq_cnt);
+	}
+}
+
+/**
+ * Store the current POW internal state into the supplied
+ * buffer. It is recommended that you pass a buffer of at least
+ * 128KB. The format of the capture may change based on SDK
+ * version and Octeon chip.
+ *
+ * @param buffer Buffer to store capture into
+ * @param buffer_size The size of the supplied buffer
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pow_capture(void *buffer, int buffer_size);
+
+/**
+ * Dump a POW capture to the console in a human readable format.
+ *
+ * @param buffer POW capture from cvmx_pow_capture()
+ * @param buffer_size Size of the buffer
+ */
+void cvmx_pow_display(void *buffer, int buffer_size);
+
+/**
+ * Return the number of POW entries supported by this chip
+ *
+ * @return Number of POW entries
+ */
+int cvmx_pow_get_num_entries(void);
+int cvmx_pow_get_dump_size(void);
+
+/**
+ * This allocates 'count' SSO groups on the specified node to the
+ * calling application. These groups will be for exclusive use of the
+ * application until they are freed.
+ * @param node The numa node for the allocation.
+ * @param base_group Pointer to the initial group, -1 to allocate anywhere.
+ * @param count  The number of consecutive groups to allocate.
+ * @return 0 on success and -1 on failure.
+ */
+int cvmx_sso_reserve_group_range(int node, int *base_group, int count);
+#define cvmx_sso_allocate_group_range cvmx_sso_reserve_group_range
+int cvmx_sso_reserve_group(int node);
+#define cvmx_sso_allocate_group cvmx_sso_reserve_group
+int cvmx_sso_release_group_range(int node, int base_group, int count);
+int cvmx_sso_release_group(int node, int group);
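+
+/*
+ * Usage sketch (illustrative only, function name hypothetical): reserve
+ * two consecutive SSO groups anywhere on the local node and release them
+ * again. Return conventions follow the declarations above (0 on success,
+ * -1 on failure).
+ */
+static inline int cvmx_sso_group_range_example(void)
+{
+	int node = cvmx_get_node_num();
+	int base_group = -1;	/* -1: let the allocator choose */
+
+	if (cvmx_sso_reserve_group_range(node, &base_group, 2) != 0)
+		return -1;
+
+	/* ... groups base_group and base_group + 1 are now reserved ... */
+
+	return cvmx_sso_release_group_range(node, base_group, 2);
+}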
+
+/**
+ * Show integrated SSO configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_config_dump(unsigned int node);
+
+/**
+ * Show integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_dump(unsigned int node);
+
+/**
+ * Clear integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_clear(unsigned int node);
+
+/**
+ * Show SSO core-group affinity and priority per node (multi-node systems)
+ */
+void cvmx_pow_mask_priority_dump_node(unsigned int node, struct cvmx_coremask *avail_coremask);
+
+/**
+ * Show POW/SSO core-group affinity and priority (legacy, single-node systems)
+ */
+static inline void cvmx_pow_mask_priority_dump(struct cvmx_coremask *avail_coremask)
+{
+	cvmx_pow_mask_priority_dump_node(0 /*node */, avail_coremask);
+}
+
+/**
+ * Show SSO performance counters (multi-node systems)
+ */
+void cvmx_pow_show_perf_counters_node(unsigned int node);
+
+/**
+ * Show POW/SSO performance counters (legacy, single-node systems)
+ */
+static inline void cvmx_pow_show_perf_counters(void)
+{
+	cvmx_pow_show_perf_counters_node(0 /*node */);
+}
+
+#endif /* __CVMX_POW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-qlm.h b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
new file mode 100644
index 0000000000..19915eb82c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_QLM_H__
+#define __CVMX_QLM_H__
+
+/*
+ * Interface 0 on the 78xx can be connected to qlm 0 or qlm 2. When interface
+ * 0 is connected to qlm 0, this macro must be set to 0. When interface 0 is
+ * connected to qlm 2, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE0 0
+
+/*
+ * Interface 1 on the 78xx can be connected to qlm 1 or qlm 3. When interface
+ * 1 is connected to qlm 1, this macro must be set to 0. When interface 1 is
+ * connected to qlm 3, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE1 0
+
+/* Uncomment this line to print QLM JTAG state */
+/* #define CVMX_QLM_DUMP_STATE 1 */
+
+typedef struct {
+	const char *name;
+	int stop_bit;
+	int start_bit;
+} __cvmx_qlm_jtag_field_t;
+
+/**
+ * Return the number of QLMs supported by the chip
+ *
+ * @return  Number of QLMs
+ */
+int cvmx_qlm_get_num(void);
+
+/**
+ * Return the qlm number based on the interface
+ *
+ * @param xiface  Interface to look up
+ */
+int cvmx_qlm_interface(int xiface);
+
+/**
+ * Return the qlm number for a port in the interface
+ *
+ * @param xiface  interface to look up
+ * @param index  index in an interface
+ *
+ * @return the qlm number based on the xiface
+ */
+int cvmx_qlm_lmac(int xiface, int index);
+
+/**
+ * Return which DLM muxes (DLM5, DLM6, or DLM5+DLM6) are used by a BGX
+ *
+ * @param bgx  BGX to search for.
+ *
+ * @return muxes used 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
+ */
+int cvmx_qlm_mux_interface(int bgx);
+
+/**
+ * Return number of lanes for a given qlm
+ *
+ * @param qlm QLM block to query
+ *
+ * @return  Number of lanes
+ */
+int cvmx_qlm_get_lanes(int qlm);
+
+/**
+ * Get the QLM JTAG fields for the supported chips, based on the Octeon model.
+ *
+ * @return  __cvmx_qlm_jtag_field_t structure
+ */
+const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void);
+
+/**
+ * Get the QLM JTAG length by going through qlm_jtag_field for each
+ * Octeon model that is supported.
+ *
+ * @return The length.
+ */
+int cvmx_qlm_jtag_get_length(void);
+
+/**
+ * Initialize the QLM layer
+ */
+void cvmx_qlm_init(void);
+
+/**
+ * Get a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to get
+ * @param lane   Lane in QLM to get
+ * @param name   String name of field
+ *
+ * @return JTAG field value
+ */
+u64 cvmx_qlm_jtag_get(int qlm, int lane, const char *name);
+
+/**
+ * Set a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to set
+ * @param lane   Lane in QLM to set, or -1 for all lanes
+ * @param name   String name of field
+ * @param value  Value of the field
+ */
+void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, u64 value);
+
+/**
+ * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
+ * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
+ * JTAG settings for the QLMs to run better at 5 and 6.25 GHz.
+ */
+void __cvmx_qlm_speed_tweak(void);
+
+/**
+ * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
+ * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
+ * This function tweaks the JTAG settings for the QLMs so that PCIe runs better.
+ */
+void __cvmx_qlm_pcie_idle_dac_tweak(void);
+
+void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane);
+
+/**
+ * Get the speed (Gbaud) of the QLM in MHz.
+ *
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz(int qlm);
+/**
+ * Get the speed (Gbaud) of the QLM in MHz on a specific node.
+ *
+ * @param node   Target QLM node
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm);
+
+enum cvmx_qlm_mode {
+	CVMX_QLM_MODE_DISABLED = -1,
+	CVMX_QLM_MODE_SGMII = 1,
+	CVMX_QLM_MODE_XAUI,
+	CVMX_QLM_MODE_RXAUI,
+	CVMX_QLM_MODE_PCIE,	/* gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X2, /* 1x2 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_2X1, /* 2x1 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X1, /* 1x1 gen2 / gen1 */
+	CVMX_QLM_MODE_SRIO_1X4, /* 1x4 short / long */
+	CVMX_QLM_MODE_SRIO_2X2, /* 2x2 short / long */
+	CVMX_QLM_MODE_SRIO_4X1, /* 4x1 short / long */
+	CVMX_QLM_MODE_ILK,
+	CVMX_QLM_MODE_QSGMII,
+	CVMX_QLM_MODE_SGMII_SGMII,
+	CVMX_QLM_MODE_SGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_SGMII,
+	CVMX_QLM_MODE_SGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_QSGMII,
+	CVMX_QLM_MODE_QSGMII_SGMII,
+	CVMX_QLM_MODE_RXAUI_1X2,
+	CVMX_QLM_MODE_SATA_2X1,
+	CVMX_QLM_MODE_XLAUI,
+	CVMX_QLM_MODE_XFI,
+	CVMX_QLM_MODE_10G_KR,
+	CVMX_QLM_MODE_40G_KR4,
+	CVMX_QLM_MODE_PCIE_1X8, /* 1x8 gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_RGMII_SGMII,
+	CVMX_QLM_MODE_RGMII_XFI,
+	CVMX_QLM_MODE_RGMII_10G_KR,
+	CVMX_QLM_MODE_RGMII_RXAUI,
+	CVMX_QLM_MODE_RGMII_XAUI,
+	CVMX_QLM_MODE_RGMII_XLAUI,
+	CVMX_QLM_MODE_RGMII_40G_KR4,
+	CVMX_QLM_MODE_MIXED,		/* BGX2 is mixed mode, DLM5(SGMII) & DLM6(XFI) */
+	CVMX_QLM_MODE_SGMII_2X1,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_10G_KR_1X2,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_XFI_1X2,		/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_RGMII_SGMII_1X1,	/* Configure BGX2, applies to DLM5 */
+	CVMX_QLM_MODE_RGMII_SGMII_2X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_10G_KR_1X1, /* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_XFI_1X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_SDL,		/* RMAC Pipe */
+	CVMX_QLM_MODE_CPRI,		/* RMAC */
+	CVMX_QLM_MODE_OCI
+};
+
+enum cvmx_gmx_inf_mode {
+	CVMX_GMX_INF_MODE_DISABLED = 0,
+	CVMX_GMX_INF_MODE_SGMII = 1,  /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_QSGMII = 2, /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_RXAUI = 3,  /* Only interface 0, interface 1 must be DISABLED */
+};
+
+/**
+ * Eye diagram captures are stored in the following structure
+ */
+typedef struct {
+	int width;	   /* Width in the x direction (time) */
+	int height;	   /* Height in the y direction (voltage) */
+	u32 data[64][128]; /* Error count at location; saturates at max */
+} cvmx_qlm_eye_t;
+
+/**
+ * These apply to DLM1 and DLM2 if they are not in SATA mode.
+ * The manual refers to lanes as follows:
+ *  DLM 0 lane 0 == GSER0 lane 0
+ *  DLM 0 lane 1 == GSER0 lane 1
+ *  DLM 1 lane 2 == GSER1 lane 0
+ *  DLM 1 lane 3 == GSER1 lane 1
+ *  DLM 2 lane 4 == GSER2 lane 0
+ *  DLM 2 lane 5 == GSER2 lane 1
+ */
+enum cvmx_pemx_cfg_mode {
+	CVMX_PEM_MD_GEN2_2LANE = 0, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN2_1LANE = 1, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN2_4LANE = 2, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+	CVMX_PEM_MD_GEN1_2LANE = 4, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN1_1LANE = 5, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN1_4LANE = 6, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+};
+
+/*
+ * Read QLM and return mode.
+ */
+enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int dlm_mode, int interface);
+void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier);
+
+void cvmx_qlm_display_registers(int qlm);
+
+int cvmx_qlm_measure_clock(int qlm);
+
+/**
+ * Measure the reference clock of a QLM on a multi-node setup
+ *
+ * @param node   node to measure
+ * @param qlm    QLM to measure
+ *
+ * @return Clock rate in Hz
+ */
+int cvmx_qlm_measure_clock_node(int node, int qlm);
+
+/*
+ * Perform RX equalization on a QLM
+ *
+ * @param node	Node the QLM is on
+ * @param qlm	QLM to perform RX equalization on
+ * @param lane	Lane to use, or -1 for all lanes
+ *
+ * @return Zero on success, negative if any lane failed RX equalization
+ */
+int __cvmx_qlm_rx_equalization(int node, int qlm, int lane);
+
+/**
+ * Errata GSER-27882 -GSER 10GBASE-KR Transmit Equalizer
+ * Training may not update PHY Tx Taps. This function is not static
+ * so we can share it with BGX KR
+ *
+ * @param node	Node to apply errata workaround
+ * @param qlm	QLM to apply errata workaround
+ * @param lane	Lane to apply the errata
+ */
+int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane);
+
+void cvmx_qlm_gser_errata_25992(int node, int qlm);
+
+#ifdef CVMX_DUMP_GSER
+/**
+ * Dump GSER configuration for node 0
+ */
+int cvmx_dump_gser_config(unsigned int gser);
+/**
+ * Dump GSER status for node 0
+ */
+int cvmx_dump_gser_status(unsigned int gser);
+/**
+ * Dump GSER configuration
+ */
+int cvmx_dump_gser_config_node(unsigned int node, unsigned int gser);
+/**
+ * Dump GSER status
+ */
+int cvmx_dump_gser_status_node(unsigned int node, unsigned int gser);
+#endif
+
+int cvmx_qlm_eye_display(int node, int qlm, int qlm_lane, int format, const cvmx_qlm_eye_t *eye);
+
+void cvmx_prbs_process_cmd(int node, int qlm, int mode);
+
+#endif /* __CVMX_QLM_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-scratch.h b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
new file mode 100644
index 0000000000..d567a8453b
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file provides support for the processor local scratch memory.
+ * Scratch memory is byte addressable - all addresses are byte addresses.
+ */
+
+#ifndef __CVMX_SCRATCH_H__
+#define __CVMX_SCRATCH_H__
+
+/* Note: This define must be a long, not a long long, in order to compile
+ * without warnings for both 32-bit and 64-bit.
+ */
+#define CVMX_SCRATCH_BASE (-32768l) /* 0xffffffffffff8000 */
+
+/* Scratch line for LMTST/LMTDMA on Octeon3 models */
+#ifdef CVMX_CAVIUM_OCTEON3
+#define CVMX_PKO_LMTLINE 2ull
+#endif
+
+/**
+ * Reads an 8 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u8 cvmx_scratch_read8(u64 address)
+{
+	return *CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 16 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u16 cvmx_scratch_read16(u64 address)
+{
+	return *CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 32 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u32 cvmx_scratch_read32(u64 address)
+{
+	return *CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 64 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u64 cvmx_scratch_read64(u64 address)
+{
+	return *CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Writes an 8 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write8(u64 address, u64 value)
+{
+	*CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address) = (u8)value;
+}
+
+/**
+ * Writes a 16 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write16(u64 address, u64 value)
+{
+	*CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address) = (u16)value;
+}
+
+/**
+ * Writes a 32 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write32(u64 address, u64 value)
+{
+	*CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address) = (u32)value;
+}
+
+/**
+ * Writes a 64 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write64(u64 address, u64 value)
+{
+	*CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address) = value;
+}
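+
+/*
+ * Usage sketch (illustrative only; assumes the chosen offset is free for
+ * software use): scratch addresses are plain byte offsets, so a 64-bit
+ * value written at an offset can be read back from the same offset.
+ */
+static inline u64 cvmx_scratch_example(void)
+{
+	cvmx_scratch_write64(8, 0x0123456789abcdefull);
+	return cvmx_scratch_read64(8); /* 0x0123456789abcdef */
+}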
+
+#endif /* __CVMX_SCRATCH_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-wqe.h b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
new file mode 100644
index 0000000000..c9e3c8312a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
@@ -0,0 +1,1462 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This header file defines the work queue entry (wqe) data structure.
+ * Since this is a commonly used structure that depends on structures
+ * from several hardware blocks, those definitions have been placed
+ * in this file to create a single point of definition of the wqe
+ * format.
+ * Data structures are still named according to the block that they
+ * relate to.
+ */
+
+#ifndef __CVMX_WQE_H__
+#define __CVMX_WQE_H__
+
+#include "cvmx-packet.h"
+#include "cvmx-csr-enums.h"
+#include "cvmx-pki-defs.h"
+#include "cvmx-pip-defs.h"
+#include "octeon-feature.h"
+
+#define OCT_TAG_TYPE_STRING(x)						\
+	(((x) == CVMX_POW_TAG_TYPE_ORDERED) ?				\
+	 "ORDERED" :							\
+	 (((x) == CVMX_POW_TAG_TYPE_ATOMIC) ?				\
+	  "ATOMIC" :							\
+	  (((x) == CVMX_POW_TAG_TYPE_NULL) ? "NULL" : "NULL_NULL")))
+
+/* Error levels in WQE WORD2 (ERRLEV).*/
+#define PKI_ERRLEV_E__RE_M 0x0
+#define PKI_ERRLEV_E__LA_M 0x1
+#define PKI_ERRLEV_E__LB_M 0x2
+#define PKI_ERRLEV_E__LC_M 0x3
+#define PKI_ERRLEV_E__LD_M 0x4
+#define PKI_ERRLEV_E__LE_M 0x5
+#define PKI_ERRLEV_E__LF_M 0x6
+#define PKI_ERRLEV_E__LG_M 0x7
+
+enum cvmx_pki_errlevel {
+	CVMX_PKI_ERRLEV_E_RE = PKI_ERRLEV_E__RE_M,
+	CVMX_PKI_ERRLEV_E_LA = PKI_ERRLEV_E__LA_M,
+	CVMX_PKI_ERRLEV_E_LB = PKI_ERRLEV_E__LB_M,
+	CVMX_PKI_ERRLEV_E_LC = PKI_ERRLEV_E__LC_M,
+	CVMX_PKI_ERRLEV_E_LD = PKI_ERRLEV_E__LD_M,
+	CVMX_PKI_ERRLEV_E_LE = PKI_ERRLEV_E__LE_M,
+	CVMX_PKI_ERRLEV_E_LF = PKI_ERRLEV_E__LF_M,
+	CVMX_PKI_ERRLEV_E_LG = PKI_ERRLEV_E__LG_M
+};
+
+#define CVMX_PKI_ERRLEV_MAX BIT(3) /* The size of WORD2:ERRLEV field.*/
+
+/* Error code in WQE WORD2 (OPCODE).*/
+#define CVMX_PKI_OPCODE_RE_NONE	      0x0
+#define CVMX_PKI_OPCODE_RE_PARTIAL    0x1
+#define CVMX_PKI_OPCODE_RE_JABBER     0x2
+#define CVMX_PKI_OPCODE_RE_FCS	      0x7
+#define CVMX_PKI_OPCODE_RE_FCS_RCV    0x8
+#define CVMX_PKI_OPCODE_RE_TERMINATE  0x9
+#define CVMX_PKI_OPCODE_RE_RX_CTL     0xb
+#define CVMX_PKI_OPCODE_RE_SKIP	      0xc
+#define CVMX_PKI_OPCODE_RE_DMAPKT     0xf
+#define CVMX_PKI_OPCODE_RE_PKIPAR     0x13
+#define CVMX_PKI_OPCODE_RE_PKIPCAM    0x14
+#define CVMX_PKI_OPCODE_RE_MEMOUT     0x15
+#define CVMX_PKI_OPCODE_RE_BUFS_OFLOW 0x16
+#define CVMX_PKI_OPCODE_L2_FRAGMENT   0x20
+#define CVMX_PKI_OPCODE_L2_OVERRUN    0x21
+#define CVMX_PKI_OPCODE_L2_PFCS	      0x22
+#define CVMX_PKI_OPCODE_L2_PUNY	      0x23
+#define CVMX_PKI_OPCODE_L2_MAL	      0x24
+#define CVMX_PKI_OPCODE_L2_OVERSIZE   0x25
+#define CVMX_PKI_OPCODE_L2_UNDERSIZE  0x26
+#define CVMX_PKI_OPCODE_L2_LENMISM    0x27
+#define CVMX_PKI_OPCODE_IP_NOT	      0x41
+#define CVMX_PKI_OPCODE_IP_CHK	      0x42
+#define CVMX_PKI_OPCODE_IP_MAL	      0x43
+#define CVMX_PKI_OPCODE_IP_MALD	      0x44
+#define CVMX_PKI_OPCODE_IP_HOP	      0x45
+#define CVMX_PKI_OPCODE_L4_MAL	      0x61
+#define CVMX_PKI_OPCODE_L4_CHK	      0x62
+#define CVMX_PKI_OPCODE_L4_LEN	      0x63
+#define CVMX_PKI_OPCODE_L4_PORT	      0x64
+#define CVMX_PKI_OPCODE_TCP_FLAG      0x65
+
+#define CVMX_PKI_OPCODE_MAX BIT(8) /* The size of WORD2:OPCODE field.*/
+
+/* Layer types in pki */
+#define CVMX_PKI_LTYPE_E_NONE_M	      0x0
+#define CVMX_PKI_LTYPE_E_ENET_M	      0x1
+#define CVMX_PKI_LTYPE_E_VLAN_M	      0x2
+#define CVMX_PKI_LTYPE_E_SNAP_PAYLD_M 0x5
+#define CVMX_PKI_LTYPE_E_ARP_M	      0x6
+#define CVMX_PKI_LTYPE_E_RARP_M	      0x7
+#define CVMX_PKI_LTYPE_E_IP4_M	      0x8
+#define CVMX_PKI_LTYPE_E_IP4_OPT_M    0x9
+#define CVMX_PKI_LTYPE_E_IP6_M	      0xA
+#define CVMX_PKI_LTYPE_E_IP6_OPT_M    0xB
+#define CVMX_PKI_LTYPE_E_IPSEC_ESP_M  0xC
+#define CVMX_PKI_LTYPE_E_IPFRAG_M     0xD
+#define CVMX_PKI_LTYPE_E_IPCOMP_M     0xE
+#define CVMX_PKI_LTYPE_E_TCP_M	      0x10
+#define CVMX_PKI_LTYPE_E_UDP_M	      0x11
+#define CVMX_PKI_LTYPE_E_SCTP_M	      0x12
+#define CVMX_PKI_LTYPE_E_UDP_VXLAN_M  0x13
+#define CVMX_PKI_LTYPE_E_GRE_M	      0x14
+#define CVMX_PKI_LTYPE_E_NVGRE_M      0x15
+#define CVMX_PKI_LTYPE_E_GTP_M	      0x16
+#define CVMX_PKI_LTYPE_E_SW28_M	      0x1C
+#define CVMX_PKI_LTYPE_E_SW29_M	      0x1D
+#define CVMX_PKI_LTYPE_E_SW30_M	      0x1E
+#define CVMX_PKI_LTYPE_E_SW31_M	      0x1F
+
+enum cvmx_pki_layer_type {
+	CVMX_PKI_LTYPE_E_NONE = CVMX_PKI_LTYPE_E_NONE_M,
+	CVMX_PKI_LTYPE_E_ENET = CVMX_PKI_LTYPE_E_ENET_M,
+	CVMX_PKI_LTYPE_E_VLAN = CVMX_PKI_LTYPE_E_VLAN_M,
+	CVMX_PKI_LTYPE_E_SNAP_PAYLD = CVMX_PKI_LTYPE_E_SNAP_PAYLD_M,
+	CVMX_PKI_LTYPE_E_ARP = CVMX_PKI_LTYPE_E_ARP_M,
+	CVMX_PKI_LTYPE_E_RARP = CVMX_PKI_LTYPE_E_RARP_M,
+	CVMX_PKI_LTYPE_E_IP4 = CVMX_PKI_LTYPE_E_IP4_M,
+	CVMX_PKI_LTYPE_E_IP4_OPT = CVMX_PKI_LTYPE_E_IP4_OPT_M,
+	CVMX_PKI_LTYPE_E_IP6 = CVMX_PKI_LTYPE_E_IP6_M,
+	CVMX_PKI_LTYPE_E_IP6_OPT = CVMX_PKI_LTYPE_E_IP6_OPT_M,
+	CVMX_PKI_LTYPE_E_IPSEC_ESP = CVMX_PKI_LTYPE_E_IPSEC_ESP_M,
+	CVMX_PKI_LTYPE_E_IPFRAG = CVMX_PKI_LTYPE_E_IPFRAG_M,
+	CVMX_PKI_LTYPE_E_IPCOMP = CVMX_PKI_LTYPE_E_IPCOMP_M,
+	CVMX_PKI_LTYPE_E_TCP = CVMX_PKI_LTYPE_E_TCP_M,
+	CVMX_PKI_LTYPE_E_UDP = CVMX_PKI_LTYPE_E_UDP_M,
+	CVMX_PKI_LTYPE_E_SCTP = CVMX_PKI_LTYPE_E_SCTP_M,
+	CVMX_PKI_LTYPE_E_UDP_VXLAN = CVMX_PKI_LTYPE_E_UDP_VXLAN_M,
+	CVMX_PKI_LTYPE_E_GRE = CVMX_PKI_LTYPE_E_GRE_M,
+	CVMX_PKI_LTYPE_E_NVGRE = CVMX_PKI_LTYPE_E_NVGRE_M,
+	CVMX_PKI_LTYPE_E_GTP = CVMX_PKI_LTYPE_E_GTP_M,
+	CVMX_PKI_LTYPE_E_SW28 = CVMX_PKI_LTYPE_E_SW28_M,
+	CVMX_PKI_LTYPE_E_SW29 = CVMX_PKI_LTYPE_E_SW29_M,
+	CVMX_PKI_LTYPE_E_SW30 = CVMX_PKI_LTYPE_E_SW30_M,
+	CVMX_PKI_LTYPE_E_SW31 = CVMX_PKI_LTYPE_E_SW31_M,
+	CVMX_PKI_LTYPE_E_MAX = CVMX_PKI_LTYPE_E_SW31
+};
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 ptr_vlan : 8;
+		u64 ptr_layer_g : 8;
+		u64 ptr_layer_f : 8;
+		u64 ptr_layer_e : 8;
+		u64 ptr_layer_d : 8;
+		u64 ptr_layer_c : 8;
+		u64 ptr_layer_b : 8;
+		u64 ptr_layer_a : 8;
+	};
+} cvmx_pki_wqe_word4_t;
+
+/**
+ * HW decode / err_code in work queue entry
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 4;
+		u64 unassigned2 : 4;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn38xx;
+	struct {
+		u64 unused1 : 16;
+		u64 vlan : 16;
+		u64 unused2 : 32;
+	} svlan;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 8;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn38xx;
+} cvmx_pip_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 software : 1;
+		u64 lg_hdr_type : 5;
+		u64 lf_hdr_type : 5;
+		u64 le_hdr_type : 5;
+		u64 ld_hdr_type : 5;
+		u64 lc_hdr_type : 5;
+		u64 lb_hdr_type : 5;
+		u64 is_la_ether : 1;
+		u64 rsvd_0 : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 stat_inc : 1;
+		u64 pcam_flag4 : 1;
+		u64 pcam_flag3 : 1;
+		u64 pcam_flag2 : 1;
+		u64 pcam_flag1 : 1;
+		u64 is_frag : 1;
+		u64 is_l3_bcast : 1;
+		u64 is_l3_mcast : 1;
+		u64 is_l2_bcast : 1;
+		u64 is_l2_mcast : 1;
+		u64 is_raw : 1;
+		u64 err_level : 3;
+		u64 err_code : 8;
+	};
+} cvmx_pki_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	cvmx_pki_wqe_word2_t pki;
+	cvmx_pip_wqe_word2_t pip;
+} cvmx_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u16 hw_chksum;
+		u8 unused;
+		u64 next_ptr : 40;
+	} cn38xx;
+	struct {
+		u64 l4ptr : 8;	  /* 56..63 */
+		u64 unused0 : 8;  /* 48..55 */
+		u64 l3ptr : 8;	  /* 40..47 */
+		u64 l2ptr : 8;	  /* 32..39 */
+		u64 unused1 : 18; /* 14..31 */
+		u64 bpid : 6;	  /* 8..13 */
+		u64 unused2 : 2;  /* 6..7 */
+		u64 pknd : 6;	  /* 0..5 */
+	} cn68xx;
+} cvmx_pip_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 4;
+		u64 aura : 12;
+		u64 rsvd_1 : 1;
+		u64 apad : 3;
+		u64 channel : 12;
+		u64 bufs : 8;
+		u64 style : 8;
+		u64 rsvd_2 : 10;
+		u64 pknd : 6;
+	};
+} cvmx_pki_wqe_word0_t;
+
+/* Use reserved bit, set by HW to 0, to indicate buf_ptr legacy translation */
+#define pki_wqe_translated word0.rsvd_1
+
+typedef union {
+	u64 u64;
+	cvmx_pip_wqe_word0_t pip;
+	cvmx_pki_wqe_word0_t pki;
+	struct {
+		u64 unused : 24;
+		u64 next_ptr : 40; /* On cn68xx this is unused as well */
+	} raw;
+} cvmx_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 rsvd_0 : 2;
+		u64 rsvd_1 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+} cvmx_pki_wqe_word1_t;
+
+#define pki_errata20776 word1.rsvd_0
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 varies : 14;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+	cvmx_pki_wqe_word1_t cn78xx;
+	struct {
+		u64 len : 16;
+		u64 zero_0 : 1;
+		u64 qos : 3;
+		u64 zero_1 : 1;
+		u64 grp : 6;
+		u64 zero_2 : 3;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn68xx;
+	struct {
+		u64 len : 16;
+		u64 ipprt : 6;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 zero_2 : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn38xx;
+} cvmx_wqe_word1_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 8;
+		u64 hwerr : 8;
+		u64 rsvd_1 : 24;
+		u64 sqid : 8;
+		u64 rsvd_2 : 4;
+		u64 vfnum : 12;
+	};
+} cvmx_wqe_word3_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 21;
+		u64 sqfc : 11;
+		u64 rsvd_1 : 5;
+		u64 sqtail : 11;
+		u64 rsvd_2 : 3;
+		u64 sqhead : 13;
+	};
+} cvmx_wqe_word4_t;
+
+/**
+ * Work queue entry format.
+ * Must be 8-byte aligned.
+ */
+typedef struct cvmx_wqe_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 * conditions.
+	 */
+	cvmx_pip_wqe_word2_t word2;
+
+	/* Pointer to the first segment of the packet. */
+	cvmx_buf_ptr_t packet_ptr;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts (except that the IPv4 header is padded for
+	 * appropriate alignment) writing here where the IP header starts.
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[96];
+
+	/* If desired, SW can make the work Q entry any length. For the purposes
+	 * of discussion here, assume 128B always, as this is all that the hardware
+	 * deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_t;
+
+/**
+ * Work queue entry format for NQM
+ * Must be 8-byte aligned
+ */
+typedef struct cvmx_wqe_nqm_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Reserved */
+	u64 word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word3_t word3;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word4_t word4;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts (except that the IPv4 header is padded for
+	 * appropriate alignment) writing here where the IP header starts.
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[88];
+
+	/* If desired, SW can make the work Q entry any length.
+	 * For the purposes of discussion here, assume 128B always, as this is
+	 * all that the hardware deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_nqm_t;
+
+/**
+ * Work queue entry format for 78XX.
+ * In 78XX, packet data always resides in the WQE buffer unless the option
+ * DIS_WQ_DAT=1 is set in PKI_STYLE_BUF, which causes packet data to use a
+ * separate buffer.
+ *
+ * Must be 8-byte aligned.
+ */
+typedef struct {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 * conditions.
+	 */
+	cvmx_pki_wqe_word2_t word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Pointer to the first segment of the packet.*/
+	cvmx_buf_ptr_pki_t packet_ptr;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled in by hardware when a
+	 * packet arrives. They contain byte pointers to the start of Layers
+	 * A/B/C/D/E/F/G, relative to the start of the packet.
+	 */
+	cvmx_pki_wqe_word4_t word4;
+
+	/*-------------------------------------------------------------------*/
+	/* WORDs 5/6/7 may be extended there, if WQE_HSZ is set.             */
+	/*-------------------------------------------------------------------*/
+	u64 wqe_data[11];
+
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_78xx_t;
+
+/* Node LS-bit position in the WQE[grp] or PKI_QPG_TBL[grp_ok]. */
+#define CVMX_WQE_GRP_NODE_SHIFT 8
+
+/*
+ * This is an accessor function into the WQE that retrieves the
+ * ingress port number, which can also be used as a destination
+ * port number for the same port.
+ *
+ * @param work - Work Queue Entry pointer
+ * @return the normalized port number, also known as the "ipd" port
+ */
+static inline int cvmx_wqe_get_port(cvmx_wqe_t *work)
+{
+	int port;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* In 78xx the WQE entry has a channel number, not a port */
+		port = work->word0.pki.channel;
+		/* For BGX interfaces (0x800 - 0xdff) the 4 LSBs indicate
+		 * the PFC channel and must be cleared to normalize to "ipd"
+		 */
+		if (port & 0x800)
+			port &= 0xff0;
+		/* Node number is in AURA field, make it part of port # */
+		port |= (work->word0.pki.aura >> 10) << 12;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		port = work->word2.s_cn68xx.port;
+	} else {
+		port = work->word1.cn38xx.ipprt;
+	}
+
+	return port;
+}
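+
+/*
+ * Worked example (values for illustration only): on a CN78XX-style WQE
+ * with channel 0x812 (a BGX channel, PFC in the 4 LSBs) and node 1 in
+ * the upper AURA bits, the accessor above returns
+ * (0x812 & 0xff0) | (1 << 12) = 0x1810.
+ */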
+
+static inline void cvmx_wqe_set_port(cvmx_wqe_t *work, int port)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = port;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word2.s_cn68xx.port = port;
+	else
+		work->word1.cn38xx.ipprt = port;
+}
+
+static inline int cvmx_wqe_get_grp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		/* legacy: GRP[0..2] :=QOS */
+		grp = (0xff & work->word1.cn78xx.grp) >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_xgrp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.grp = grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.grp = grp;
+	else
+		work->word1.cn38xx.grp = grp;
+}
+
+static inline int cvmx_wqe_get_xgrp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		grp = work->word1.cn78xx.grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_grp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		/* Legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= 0x7;
+		work->word1.cn78xx.grp |= 0xff & (grp << 3);
+		work->word1.cn78xx.grp |= (node << 8);
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.grp = grp;
+	} else {
+		work->word1.cn38xx.grp = grp;
+	}
+}
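+
+/*
+ * Layout sketch, derived from the shifts and masks in the accessors
+ * above: on CN78XX, WQE[grp] bits [2:0] carry the legacy QOS value,
+ * bits [7:3] the group, and the node starts at bit 8
+ * (CVMX_WQE_GRP_NODE_SHIFT). E.g. group 5, QOS 2 on node 1 packs to
+ * (1 << 8) | (5 << 3) | 2 = 0x12a.
+ */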
+
+static inline int cvmx_wqe_get_qos(cvmx_wqe_t *work)
+{
+	int qos;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* Legacy: GRP[0..2] :=QOS */
+		qos = work->word1.cn78xx.grp & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		qos = work->word1.cn68xx.qos;
+	} else {
+		qos = work->word1.cn38xx.qos;
+	}
+
+	return qos;
+}
+
+static inline void cvmx_wqe_set_qos(cvmx_wqe_t *work, int qos)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= ~0x7;
+		work->word1.cn78xx.grp |= qos & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.qos = qos;
+	} else {
+		work->word1.cn38xx.qos = qos;
+	}
+}
+
+static inline int cvmx_wqe_get_len(cvmx_wqe_t *work)
+{
+	int len;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		len = work->word1.cn78xx.len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		len = work->word1.cn68xx.len;
+	else
+		len = work->word1.cn38xx.len;
+
+	return len;
+}
+
+static inline void cvmx_wqe_set_len(cvmx_wqe_t *work, int len)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.len = len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.len = len;
+	else
+		work->word1.cn38xx.len = len;
+}
+
+/**
+ * This function returns whether L1/L2 errors were detected in the packet.
+ *
+ * @param work	pointer to work queue entry
+ *
+ * @return	0 if packet had no error, non-zero to indicate error code.
+ *
+ * Please refer to the HRM for the specific model for a full enumeration of error codes.
+ * With Octeon1/Octeon2 models, the returned code indicates L1/L2 errors.
+ * On CN73XX/CN78XX, the return code is the value of PKI_OPCODE_E,
+ * if it is non-zero; otherwise the returned code will be derived from
+ * PKI_ERRLEV_E such that an error indicated in LayerA will return 0x20,
+ * LayerB - 0x30, LayerC - 0x40 and so forth.
+ */
+static inline int cvmx_wqe_get_rcv_err(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_RE || wqe->word2.err_code != 0)
+			return wqe->word2.err_code;
+		else
+			return (wqe->word2.err_level << 4) + 0x10;
+	} else if (work->word2.snoip.rcv_error) {
+		return work->word2.snoip.err_code;
+	}
+
+	return 0;
+}
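+
+/*
+ * Worked example (illustrative): with err_code == 0 and
+ * err_level == CVMX_PKI_ERRLEV_E_LA (0x1), the function above returns
+ * (0x1 << 4) + 0x10 = 0x20; LayerB (0x2) yields 0x30, and so forth,
+ * matching the mapping described in the comment.
+ */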
+
+static inline u32 cvmx_wqe_get_tag(cvmx_wqe_t *work)
+{
+	return work->word1.tag;
+}
+
+static inline void cvmx_wqe_set_tag(cvmx_wqe_t *work, u32 tag)
+{
+	work->word1.tag = tag;
+}
+
+static inline int cvmx_wqe_get_tt(cvmx_wqe_t *work)
+{
+	return work->word1.tag_type;
+}
+
+static inline void cvmx_wqe_set_tt(cvmx_wqe_t *work, int tt)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word1.cn78xx.tag_type = (cvmx_pow_tag_type_t)tt;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn68xx.zero_2 = 0;
+	} else {
+		work->word1.cn38xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn38xx.zero_2 = 0;
+	}
+}
+
+static inline u8 cvmx_wqe_get_unused8(cvmx_wqe_t *work)
+{
+	u8 bits;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		bits = wqe->word2.rsvd_0;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		bits = work->word0.pip.cn68xx.unused1;
+	} else {
+		bits = work->word0.pip.cn38xx.unused;
+	}
+
+	return bits;
+}
+
+static inline void cvmx_wqe_set_unused8(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.rsvd_0 = v;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word0.pip.cn68xx.unused1 = v;
+	} else {
+		work->word0.pip.cn38xx.unused = v;
+	}
+}
+
+static inline u8 cvmx_wqe_get_user_flags(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return work->word0.pki.rsvd_2;
+	else
+		return 0;
+}
+
+static inline void cvmx_wqe_set_user_flags(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.rsvd_2 = v;
+}
+
+static inline int cvmx_wqe_get_channel(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.channel);
+	else
+		return cvmx_wqe_get_port(work);
+}
+
+static inline void cvmx_wqe_set_channel(cvmx_wqe_t *work, int channel)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = channel;
+	else
+		debug("%s: ERROR: not supported for model\n", __func__);
+}
+
+static inline int cvmx_wqe_get_aura(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.aura);
+	else
+		return (work->packet_ptr.s.pool);
+}
+
+static inline void cvmx_wqe_set_aura(cvmx_wqe_t *work, int aura)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.aura = aura;
+	else
+		work->packet_ptr.s.pool = aura;
+}
+
+static inline int cvmx_wqe_get_style(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.style);
+	return 0;
+}
+
+static inline void cvmx_wqe_set_style(cvmx_wqe_t *work, int style)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.style = style;
+}
+
+static inline int cvmx_wqe_is_l3_ip(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match all 4 values for v4/v6 with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return !work->word2.s_cn38xx.not_IP;
+	}
+}
+
+static inline int cvmx_wqe_is_l3_ipv4(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			!work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline int cvmx_wqe_is_l3_ipv6(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline bool cvmx_wqe_is_l4_udp_or_tcp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_TCP)
+			return true;
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_UDP)
+			return true;
+		return false;
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return false;
+
+	return (work->word2.s_cn38xx.tcp_or_udp != 0);
+}
+
+static inline int cvmx_wqe_is_l2_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_bcast;
+	} else {
+		return work->word2.s_cn38xx.is_bcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l2_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_mcast;
+	} else {
+		return work->word2.s_cn38xx.is_mcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_bcast(cvmx_wqe_t *work, bool bcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_bcast = bcast;
+	} else {
+		work->word2.s_cn38xx.is_bcast = bcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_mcast(cvmx_wqe_t *work, bool mcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_mcast = mcast;
+	} else {
+		work->word2.s_cn38xx.is_mcast = mcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l3_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_bcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+static inline int cvmx_wqe_is_l3_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_mcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+/**
+ * This function returns whether an IP error was detected in the packet.
+ * For 78XX it does not flag IPv4 options and IPv6 extensions.
+ * For older chips, if PIP_GBL_CTL was provisioned to flag IPv4 options and
+ * IPv6 extensions, they will be flagged.
+ * @param work	pointer to work queue entry
+ * @return	1 -- If IP error was found in packet
+ *          0 -- If no IP error was found in packet.
+ */
+static inline int cvmx_wqe_is_ip_exception(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LC)
+			return 1;
+		else
+			return 0;
+	}
+
+	return work->word2.s.IP_exc;
+}
+
+static inline int cvmx_wqe_is_l4_error(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LF)
+			return 1;
+		else
+			return 0;
+	} else {
+		return work->word2.s.L4_error;
+	}
+}
+
+static inline void cvmx_wqe_set_vlan(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.vlan_valid = set;
+	} else {
+		work->word2.s.vlan_valid = set;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_valid;
+	} else {
+		return work->word2.s.vlan_valid;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan_stacked(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_stacked;
+	} else {
+		return work->word2.s.vlan_stacked;
+	}
+}
+
+/**
+ * Extract packet data buffer pointer from work queue entry.
+ *
+ * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
+ * for the linked buffer list.
+ * On CN78XX, the native buffer pointer structure is converted into
+ * the legacy format.
+ * The legacy buf_ptr is then stored in the WQE, and word0 reserved
+ * field is set to indicate that the buffer pointers were translated.
+ * If the packet data is only found inside the work queue entry,
+ * a standard buffer pointer structure is created for it.
+ */
+cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work);
+
+static inline int cvmx_wqe_get_bufs(cvmx_wqe_t *work)
+{
+	int bufs;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		bufs = work->word0.pki.bufs;
+	} else {
+		/* Adjust for packet-in-WQE cases */
+		if (cvmx_unlikely(work->word2.s_cn38xx.bufs == 0 && !work->word2.s.software))
+			(void)cvmx_wqe_get_packet_ptr(work);
+		bufs = work->word2.s_cn38xx.bufs;
+	}
+	return bufs;
+}
+
+/**
+ * Free Work Queue Entry memory
+ *
+ * Will return the WQE buffer to its pool, unless the WQE contains
+ * non-redundant packet data.
+ * This function is intended to be called AFTER the packet data
+ * has been passed along to PKO for transmission and release.
+ * It can also follow a call to cvmx_helper_free_packet_data()
+ * to release the WQE after associated data was released.
+ */
+void cvmx_wqe_free(cvmx_wqe_t *work);
+
+/**
+ * Check if a work entry has been initiated by software
+ *
+ */
+static inline bool cvmx_wqe_is_soft(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.software;
+	} else {
+		return work->word2.s.software;
+	}
+}
+
+/**
+ * Allocate a work-queue entry for delivering software-initiated
+ * event notifications.
+ * The application data is copied into the work-queue entry,
+ * if the space is sufficient.
+ */
+cvmx_wqe_t *cvmx_wqe_soft_create(void *data_p, unsigned int data_sz);
+
+/* Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped
+ * CN78XX pass 1.x has a bug where the packet pointer in each segment is
+ * written in the opposite endianness of the configured mode. Fix these here.
+ */
+static inline void cvmx_wqe_pki_errata_20776(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && !wqe->pki_errata20776) {
+		u64 bufs;
+		cvmx_buf_ptr_pki_t buffer_next;
+
+		bufs = wqe->word0.bufs;
+		buffer_next = wqe->packet_ptr;
+		while (bufs > 1) {
+			cvmx_buf_ptr_pki_t next;
+			void *nextaddr = cvmx_phys_to_ptr(buffer_next.addr - 8);
+
+			memcpy(&next, nextaddr, sizeof(next));
+			next.u64 = __builtin_bswap64(next.u64);
+			memcpy(nextaddr, &next, sizeof(next));
+			buffer_next = next;
+			bufs--;
+		}
+		wqe->pki_errata20776 = 1;
+	}
+}
+
+/**
+ * @INTERNAL
+ *
+ * Extract the native PKI-specific buffer pointer from WQE.
+ *
+ * NOTE: Provisional, may be superseded.
+ */
+static inline cvmx_buf_ptr_pki_t cvmx_wqe_get_pki_pkt_ptr(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_buf_ptr_pki_t x = { 0 };
+		return x;
+	}
+
+	cvmx_wqe_pki_errata_20776(work);
+	return wqe->packet_ptr;
+}
+
+/**
+ * Set the buffer segment count for a packet.
+ *
+ * @return Returns the actual resulting value in the WQE field.
+ */
+static inline unsigned int cvmx_wqe_set_bufs(cvmx_wqe_t *work, unsigned int bufs)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word0.pki.bufs = bufs;
+		return work->word0.pki.bufs;
+	}
+
+	work->word2.s.bufs = bufs;
+	return work->word2.s.bufs;
+}
+
+/**
+ * Get the offset of Layer-3 header,
+ * only supported when Layer-3 protocol is IPv4 or IPv6.
+ *
+ * @return Returns the offset, or 0 if the offset is not known or unsupported.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_get_l3_offset(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return wqe->word4.ptr_layer_c;
+	} else {
+		return work->word2.s.ip_offset;
+	}
+
+	return 0;
+}
+
+/**
+ * Set the offset of Layer-3 header in a packet.
+ * Typically used when an IP packet is generated by software
+ * or when the Layer-2 header length is modified, and
+ * a subsequent recalculation of checksums is anticipated.
+ *
+ * @return Returns the actual value of the work entry offset field.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_set_l3_offset(cvmx_wqe_t *work, unsigned int ip_off)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			wqe->word4.ptr_layer_c = ip_off;
+	} else {
+		work->word2.s.ip_offset = ip_off;
+	}
+
+	return cvmx_wqe_get_l3_offset(work);
+}
+
+/**
+ * Set the indication that the packet contains an IPv4 Layer-3 header.
+ * Use 'cvmx_wqe_set_l3_ipv6()' if the protocol is IPv6.
+ * When 'set' is false, the call will result in an indication
+ * that the Layer-3 protocol is neither IPv4 nor IPv6.
+ *
+ * FIXME: Add IPV4_OPT handling based on L3 header length.
+ */
+static inline void cvmx_wqe_set_l3_ipv4(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP4;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 0;
+	}
+}
+
+/**
+ * Set packet Layer-3 protocol to IPv6.
+ *
+ * FIXME: Add IPV6_OPT handling based on presence of extended headers.
+ */
+static inline void cvmx_wqe_set_l3_ipv6(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP6;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s_cn38xx.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 1;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to UDP.
+ */
+static inline void cvmx_wqe_set_l4_udp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_UDP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to TCP.
+ */
+static inline void cvmx_wqe_set_l4_tcp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_TCP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set the "software" flag in a work entry.
+ */
+static inline void cvmx_wqe_set_soft(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.software = set;
+	} else {
+		work->word2.s.software = set;
+	}
+}
+
+/**
+ * Return true if the packet is an IP fragment.
+ */
+static inline bool cvmx_wqe_is_l3_frag(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.is_frag != 0);
+	}
+
+	if (!work->word2.s_cn38xx.not_IP)
+		return (work->word2.s.is_frag != 0);
+
+	return false;
+}
+
+/**
+ * Set the indicator that the packet is a fragmented IP packet.
+ */
+static inline void cvmx_wqe_set_l3_frag(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_frag = set;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s.is_frag = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to RARP.
+ */
+static inline void cvmx_wqe_set_l3_rarp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_RARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_rarp = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to ARP.
+ */
+static inline void cvmx_wqe_set_l3_arp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_ARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_arp = set;
+	}
+}
+
+/**
+ * Return true if the packet Layer-3 protocol is ARP.
+ */
+static inline bool cvmx_wqe_is_l3_arp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.lc_hdr_type == CVMX_PKI_LTYPE_E_ARP);
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return (work->word2.snoip.is_arp != 0);
+
+	return false;
+}
+
+#endif /* __CVMX_WQE_H__ */
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 34/50] mips: octeon: Misc changes required because of the newly added headers
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (32 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 33/50] mips: octeon: Add misc remaining header files Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 35/50] mips: octeon: Move cvmx-lmcx-defs.h from mach/cvmx to mach Stefan Roese
                   ` (18 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

The newly added headers and their restructuring (i.e. which macro is
defined where) require some changes in the already existing Octeon
files. This patch implements those changes.
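
As an illustrative example (not part of the change itself), the new
32-bit CSR accessors added to cvmx-regs.h are meant to be used like
this, with REG_ADDR standing in for any 32-bit CSR address:

	u32 val;

	val = csr_rd32(REG_ADDR);
	csr_wr32(REG_ADDR, val | BIT(0));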

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/bootoctlinux.c          |   1 +
 arch/mips/mach-octeon/cvmx-bootmem.c          |   6 -
 arch/mips/mach-octeon/cvmx-coremask.c         |   1 +
 .../mips/mach-octeon/include/mach/cvmx-regs.h | 330 +++++++++++++++++-
 .../mach-octeon/include/mach/octeon-feature.h |   2 +
 .../mach-octeon/include/mach/octeon-model.h   |   2 +
 .../mach-octeon/include/mach/octeon_ddr.h     | 187 +---------
 drivers/ram/octeon/octeon3_lmc.c              |  28 +-
 drivers/ram/octeon/octeon_ddr.c               |  22 +-
 9 files changed, 343 insertions(+), 236 deletions(-)

diff --git a/arch/mips/mach-octeon/bootoctlinux.c b/arch/mips/mach-octeon/bootoctlinux.c
index 26136902f3..e6eefc6103 100644
--- a/arch/mips/mach-octeon/bootoctlinux.c
+++ b/arch/mips/mach-octeon/bootoctlinux.c
@@ -24,6 +24,7 @@
 #include <mach/octeon-model.h>
 #include <mach/octeon-feature.h>
 #include <mach/bootoct_cmd.h>
+#include <mach/cvmx-ciu-defs.h>
 
 DECLARE_GLOBAL_DATA_PTR;
 
diff --git a/arch/mips/mach-octeon/cvmx-bootmem.c b/arch/mips/mach-octeon/cvmx-bootmem.c
index 80bb7ac6c8..4b10effefb 100644
--- a/arch/mips/mach-octeon/cvmx-bootmem.c
+++ b/arch/mips/mach-octeon/cvmx-bootmem.c
@@ -21,12 +21,6 @@
 
 DECLARE_GLOBAL_DATA_PTR;
 
-#define CVMX_MIPS32_SPACE_KSEG0		1L
-#define CVMX_MIPS_SPACE_XKPHYS		2LL
-
-#define CVMX_ADD_SEG(seg, add)		((((u64)(seg)) << 62) | (add))
-#define CVMX_ADD_SEG32(seg, add)	(((u32)(seg) << 31) | (u32)(add))
-
 /**
  * This is the physical location of a struct cvmx_bootmem_desc
  * structure in Octeon's memory. Note that due to addressing
diff --git a/arch/mips/mach-octeon/cvmx-coremask.c b/arch/mips/mach-octeon/cvmx-coremask.c
index cff8c08b97..ed673e4993 100644
--- a/arch/mips/mach-octeon/cvmx-coremask.c
+++ b/arch/mips/mach-octeon/cvmx-coremask.c
@@ -14,6 +14,7 @@
 #include <mach/cvmx-fuse.h>
 #include <mach/octeon-model.h>
 #include <mach/octeon-feature.h>
+#include <mach/cvmx-ciu-defs.h>
 
 struct cvmx_coremask *get_coremask_override(struct cvmx_coremask *pcm)
 {
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-regs.h b/arch/mips/mach-octeon/include/mach/cvmx-regs.h
index b84fc9fd57..56528bc1bf 100644
--- a/arch/mips/mach-octeon/include/mach/cvmx-regs.h
+++ b/arch/mips/mach-octeon/include/mach/cvmx-regs.h
@@ -9,6 +9,7 @@
 #include <linux/bitfield.h>
 #include <linux/bitops.h>
 #include <linux/io.h>
+#include <mach/cvmx-address.h>
 
 /* General defines */
 #define CVMX_MAX_CORES		48
@@ -26,48 +27,116 @@
 
 #define MAX_CORE_TADS		8
 
-#define CAST_ULL(v)		((unsigned long long)(v))
 #define CASTPTR(type, v)	((type *)(long)(v))
+#define CAST64(v)		((long long)(long)(v))
 
 /* Regs */
-#define CVMX_CIU_PP_RST		0x0001010000000100ULL
 #define CVMX_CIU3_NMI		0x0001010000000160ULL
-#define CVMX_CIU_FUSE		0x00010100000001a0ULL
-#define CVMX_CIU_NMI		0x0001070000000718ULL
 
 #define CVMX_MIO_BOOT_LOC_CFGX(x) (0x0001180000000080ULL + ((x) & 1) * 8)
-#define MIO_BOOT_LOC_CFG_BASE		GENMASK_ULL(27, 3)
-#define MIO_BOOT_LOC_CFG_EN		BIT_ULL(31)
+#define MIO_BOOT_LOC_CFG_BASE	GENMASK_ULL(27, 3)
+#define MIO_BOOT_LOC_CFG_EN	BIT_ULL(31)
 
 #define CVMX_MIO_BOOT_LOC_ADR	0x0001180000000090ULL
-#define MIO_BOOT_LOC_ADR_ADR		GENMASK_ULL(7, 3)
+#define MIO_BOOT_LOC_ADR_ADR	GENMASK_ULL(7, 3)
 
 #define CVMX_MIO_BOOT_LOC_DAT	0x0001180000000098ULL
 
 #define CVMX_MIO_FUS_DAT2	0x0001180000001410ULL
-#define MIO_FUS_DAT2_NOCRYPTO		BIT_ULL(26)
-#define MIO_FUS_DAT2_NOMUL		BIT_ULL(27)
-#define MIO_FUS_DAT2_DORM_CRYPTO	BIT_ULL(34)
+#define MIO_FUS_DAT2_NOCRYPTO	BIT_ULL(26)
+#define MIO_FUS_DAT2_NOMUL	BIT_ULL(27)
+#define MIO_FUS_DAT2_DORM_CRYPTO BIT_ULL(34)
 
 #define CVMX_MIO_FUS_RCMD	0x0001180000001500ULL
-#define MIO_FUS_RCMD_ADDR		GENMASK_ULL(7, 0)
-#define MIO_FUS_RCMD_PEND		BIT_ULL(12)
-#define MIO_FUS_RCMD_DAT		GENMASK_ULL(23, 16)
+#define MIO_FUS_RCMD_ADDR	GENMASK_ULL(7, 0)
+#define MIO_FUS_RCMD_PEND	BIT_ULL(12)
+#define MIO_FUS_RCMD_DAT	GENMASK_ULL(23, 16)
 
 #define CVMX_RNM_CTL_STATUS	0x0001180040000000ULL
-#define RNM_CTL_STATUS_EER_VAL		BIT_ULL(9)
+#define RNM_CTL_STATUS_EER_VAL	BIT_ULL(9)
+
+#define CVMX_IOBDMA_ORDERED_IO_ADDR 0xffffffffffffa200ull
 
 /* turn the variable name into a string */
 #define CVMX_TMP_STR(x)		CVMX_TMP_STR2(x)
 #define CVMX_TMP_STR2(x)	#x
 
+#define CVMX_RDHWR(result, regstr)					\
+	asm volatile("rdhwr %[rt],$" CVMX_TMP_STR(regstr) : [rt] "=d"(result))
 #define CVMX_RDHWRNV(result, regstr)					\
-	asm volatile ("rdhwr %[rt],$" CVMX_TMP_STR(regstr) : [rt] "=d" (result))
+	asm("rdhwr %[rt],$" CVMX_TMP_STR(regstr) : [rt] "=d"(result))
+#define CVMX_POP(result, input)						\
+	asm("pop %[rd],%[rs]" : [rd] "=d"(result) : [rs] "d"(input))
+
+#define CVMX_SYNCW  asm volatile("syncw\nsyncw\n" : : : "memory")
+#define CVMX_SYNCS  asm volatile("syncs\n" : : : "memory")
+#define CVMX_SYNCWS asm volatile("syncws\n" : : : "memory")
+
+#define CVMX_CACHE_LINE_SIZE	128			   // In bytes
+#define CVMX_CACHE_LINE_MASK	(CVMX_CACHE_LINE_SIZE - 1) // In bytes
+#define CVMX_CACHE_LINE_ALIGNED __aligned(CVMX_CACHE_LINE_SIZE)
+
+#define CVMX_SYNCIOBDMA		asm volatile("synciobdma" : : : "memory")
+
+#define CVMX_MF_CHORD(dest)	CVMX_RDHWR(dest, 30)
+
+/*
+ * The macros cvmx_likely and cvmx_unlikely use the
+ * __builtin_expect GCC operation to control branch
+ * probabilities for a conditional. For example, an "if"
+ * statement in the code that will almost always be
+ * executed should be written as "if (cvmx_likely(...))".
+ * If the "else" section of an if statement is more
+ * probable, use "if (cvmx_unlikely(...))".
+ */
+#define cvmx_likely(x)	 __builtin_expect(!!(x), 1)
+#define cvmx_unlikely(x) __builtin_expect(!!(x), 0)
+
+#define CVMX_WAIT_FOR_FIELD64(address, type, field, op, value, to_us)	\
+	({								\
+		int result;						\
+		do {							\
+			u64 done = get_timer(0);			\
+			type c;						\
+			while (1) {					\
+				c.u64 = csr_rd(address);		\
+				if ((c.s.field)op(value)) {		\
+					result = 0;			\
+					break;				\
+				} else if (get_timer(done) > ((to_us) / 1000)) { \
+					result = -1;			\
+					break;				\
+				} else					\
+					udelay(100);			\
+			}						\
+		} while (0);						\
+		result;							\
+	})
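+
+/*
+ * Illustrative use (register and union name are placeholders): poll a
+ * CSR field for up to 10ms and report a timeout:
+ *
+ *	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_PHY_CTL(qlm),
+ *				  cvmx_gserx_phy_ctl_t, phy_reset,
+ *				  ==, 0, 10000))
+ *		printf("PHY reset did not de-assert\n");
+ */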
 
-#define CVMX_SYNCW					\
-	asm volatile ("syncw\nsyncw\n" : : : "memory")
+#define CVMX_WAIT_FOR_FIELD64_NODE(node, address, type, field, op, value, to_us) \
+	({								\
+		int result;						\
+		do {							\
+			u64 done = get_timer(0);			\
+			type c;						\
+			while (1) {					\
+				c.u64 = csr_rd(address);		\
+				if ((c.s.field)op(value)) {		\
+					result = 0;			\
+					break;				\
+				} else if (get_timer(done) > ((to_us) / 1000)) { \
+					result = -1;			\
+					break;				\
+				} else					\
+					udelay(100);			\
+			}						\
+		} while (0);						\
+		result;							\
+	})
 
 /* ToDo: Currently only node = 0 supported */
+#define cvmx_get_node_num()	0
+
 static inline u64 csr_rd_node(int node, u64 addr)
 {
 	void __iomem *base;
@@ -76,11 +145,24 @@ static inline u64 csr_rd_node(int node, u64 addr)
 	return ioread64(base);
 }
 
+static inline u32 csr_rd32_node(int node, u64 addr)
+{
+	void __iomem *base;
+
+	base = ioremap_nocache(addr, 0x100);
+	return ioread32(base);
+}
+
 static inline u64 csr_rd(u64 addr)
 {
 	return csr_rd_node(0, addr);
 }
 
+static inline u32 csr_rd32(u64 addr)
+{
+	return csr_rd32_node(0, addr);
+}
+
 static inline void csr_wr_node(int node, u64 addr, u64 val)
 {
 	void __iomem *base;
@@ -89,11 +171,24 @@ static inline void csr_wr_node(int node, u64 addr, u64 val)
 	iowrite64(val, base);
 }
 
+static inline void csr_wr32_node(int node, u64 addr, u32 val)
+{
+	void __iomem *base;
+
+	base = ioremap_nocache(addr, 0x100);
+	iowrite32(val, base);
+}
+
 static inline void csr_wr(u64 addr, u64 val)
 {
 	csr_wr_node(0, addr, val);
 }
 
+static inline void csr_wr32(u64 addr, u32 val)
+{
+	csr_wr32_node(0, addr, val);
+}
+
 /*
  * We need to use the volatile access here, otherwise the IO accessor
  * functions might swap the bytes
@@ -103,21 +198,173 @@ static inline u64 cvmx_read64_uint64(u64 addr)
 	return *(volatile u64 *)addr;
 }
 
+static inline s64 cvmx_read64_int64(u64 addr)
+{
+	return *(volatile s64 *)addr;
+}
+
 static inline void cvmx_write64_uint64(u64 addr, u64 val)
 {
 	*(volatile u64 *)addr = val;
 }
 
+static inline void cvmx_write64_int64(u64 addr, s64 val)
+{
+	*(volatile s64 *)addr = val;
+}
+
 static inline u32 cvmx_read64_uint32(u64 addr)
 {
 	return *(volatile u32 *)addr;
 }
 
+static inline s32 cvmx_read64_int32(u64 addr)
+{
+	return *(volatile s32 *)addr;
+}
+
 static inline void cvmx_write64_uint32(u64 addr, u32 val)
 {
 	*(volatile u32 *)addr = val;
 }
 
+static inline void cvmx_write64_int32(u64 addr, s32 val)
+{
+	*(volatile s32 *)addr = val;
+}
+
+static inline void cvmx_write64_int16(u64 addr, s16 val)
+{
+	*(volatile s16 *)addr = val;
+}
+
+static inline void cvmx_write64_uint16(u64 addr, u16 val)
+{
+	*(volatile u16 *)addr = val;
+}
+
+static inline void cvmx_write64_int8(u64 addr, int8_t val)
+{
+	*(volatile int8_t *)addr = val;
+}
+
+static inline void cvmx_write64_uint8(u64 addr, u8 val)
+{
+	*(volatile u8 *)addr = val;
+}
+
+static inline s16 cvmx_read64_int16(u64 addr)
+{
+	return *(volatile s16 *)addr;
+}
+
+static inline u16 cvmx_read64_uint16(u64 addr)
+{
+	return *(volatile u16 *)addr;
+}
+
+static inline int8_t cvmx_read64_int8(u64 addr)
+{
+	return *(volatile int8_t *)addr;
+}
+
+static inline u8 cvmx_read64_uint8(u64 addr)
+{
+	return *(volatile u8 *)addr;
+}
+
+static inline void cvmx_send_single(u64 data)
+{
+	cvmx_write64_uint64(CVMX_IOBDMA_ORDERED_IO_ADDR, data);
+}
+
+/**
+ * Perform a 64-bit write to an IO address
+ *
+ * @param io_addr	I/O address to write to
+ * @param val		64-bit value to write
+ */
+static inline void cvmx_write_io(u64 io_addr, u64 val)
+{
+	cvmx_write64_uint64(io_addr, val);
+}
+
+/**
+ * Builds a memory address for I/O based on the Major and Sub DID.
+ *
+ * @param major_did 5 bit major did
+ * @param sub_did   3 bit sub did
+ * @return I/O base address
+ */
+static inline u64 cvmx_build_io_address(u64 major_did, u64 sub_did)
+{
+	return ((0x1ull << 48) | (major_did << 43) | (sub_did << 40));
+}
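+
+/*
+ * For example, the RNG unit (major DID 8, sub DID 0) resolves to
+ * cvmx_build_io_address(8, 0) == 0x0001040000000000ull.
+ */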
+
+/**
+ * Builds a bit mask given the required size in bits.
+ *
+ * @param bits   Number of bits in the mask
+ * @return The mask
+ */
+static inline u64 cvmx_build_mask(u64 bits)
+{
+	if (bits == 64)
+		return -1;
+
+	return ~((~0x0ull) << bits);
+}
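+
+/* Example: cvmx_build_mask(8) == 0xff; cvmx_build_mask(64) == ~0ull */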
+
+/**
+ * Extract bits out of a number
+ *
+ * @param input  Number to extract from
+ * @param lsb    Starting bit, least significant (0-63)
+ * @param width  Width in bits (1-64)
+ *
+ * @return Extracted number
+ */
+static inline u64 cvmx_bit_extract(u64 input, int lsb, int width)
+{
+	u64 result = input >> lsb;
+
+	result &= cvmx_build_mask(width);
+
+	return result;
+}
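+
+/*
+ * Example: cvmx_bit_extract(0x12345678, 8, 12) == 0x456, i.e. shift
+ * right by 8, then mask with cvmx_build_mask(12) == 0xfff.
+ */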
+
+/**
+ * Perform mask and shift to place the supplied value into
+ * the supplied bit range.
+ *
+ * Example: cvmx_build_bits(39,24,value)
+ * <pre>
+ * 6       5       4       3       3       2       1
+ * 3       5       7       9       1       3       5       7      0
+ * +-------+-------+-------+-------+-------+-------+-------+------+
+ * 000000000000000000000000___________value000000000000000000000000
+ * </pre>
+ *
+ * @param high_bit Highest bit value can occupy (inclusive) 0-63
+ * @param low_bit  Lowest bit value can occupy inclusive 0-high_bit
+ * @param value    Value to use
+ * @return Value masked and shifted
+ */
+static inline u64 cvmx_build_bits(u64 high_bit, u64 low_bit, u64 value)
+{
+	return ((value & cvmx_build_mask(high_bit - low_bit + 1)) << low_bit);
+}
+
+static inline u64 cvmx_mask_to_localaddr(u64 addr)
+{
+	return (addr & 0xffffffffff);
+}
+
+static inline u64 cvmx_addr_on_node(u64 node, u64 addr)
+{
+	return (node << 40) | cvmx_mask_to_localaddr(addr);
+}
+
 static inline void *cvmx_phys_to_ptr(u64 addr)
 {
 	return (void *)CKSEG0ADDR(addr);
@@ -141,4 +388,53 @@ static inline unsigned int cvmx_get_core_num(void)
 	return core_num;
 }
 
+/**
+ * Node-local number of the core on which the program is currently running.
+ *
+ * @return core number on local node
+ */
+static inline unsigned int cvmx_get_local_core_num(void)
+{
+	unsigned int core_num, core_mask;
+
+	CVMX_RDHWRNV(core_num, 0);
+	/* note that MAX_CORES may not be a power of 2 */
+	core_mask = (1 << CVMX_NODE_NO_SHIFT) - 1;
+
+	return core_num & core_mask;
+}
+
+/**
+ * Returns the number of bits set in the provided value.
+ * Simple wrapper for POP instruction.
+ *
+ * @param val    32 bit value to count set bits in
+ *
+ * @return Number of bits set
+ */
+static inline u32 cvmx_pop(u32 val)
+{
+	u32 pop;
+
+	CVMX_POP(pop, val);
+
+	return pop;
+}
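+
+/* Example: cvmx_pop(0xf0f0) == 8 */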
+
+#define cvmx_read_csr_node(node, addr)	     csr_rd(addr)
+#define cvmx_write_csr_node(node, addr, val) csr_wr(addr, val)
+
+#define cvmx_printf  printf
+#define cvmx_vprintf vprintf
+
+#if defined(DEBUG)
+void cvmx_warn(const char *format, ...) __printf(1, 2);
+#else
+void cvmx_warn(const char *format, ...);
+#endif
+
+#define cvmx_warn_if(expression, format, ...)				\
+	do {								\
+		if (expression)						\
+			cvmx_warn(format, ##__VA_ARGS__);		\
+	} while (0)
+
 #endif /* __CVMX_REGS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon-feature.h b/arch/mips/mach-octeon/include/mach/octeon-feature.h
index 1202716ba5..2eb1714e90 100644
--- a/arch/mips/mach-octeon/include/mach/octeon-feature.h
+++ b/arch/mips/mach-octeon/include/mach/octeon-feature.h
@@ -6,6 +6,8 @@
 #ifndef __OCTEON_FEATURE_H__
 #define __OCTEON_FEATURE_H__
 
+#include "cvmx-fuse.h"
+
 /*
  * Octeon models are declared after the macros in octeon-model.h with the
  * suffix _FEATURE. The individual features are declared with the
diff --git a/arch/mips/mach-octeon/include/mach/octeon-model.h b/arch/mips/mach-octeon/include/mach/octeon-model.h
index 22d6df6a9e..9164a4cfd6 100644
--- a/arch/mips/mach-octeon/include/mach/octeon-model.h
+++ b/arch/mips/mach-octeon/include/mach/octeon-model.h
@@ -28,6 +28,8 @@
  * use only, and may change without notice.
  */
 
+#include <asm/mipsregs.h>
+
 #define OCTEON_FAMILY_MASK      0x00ffff00
 #define OCTEON_PRID_MASK	0x00ffffff
 
diff --git a/arch/mips/mach-octeon/include/mach/octeon_ddr.h b/arch/mips/mach-octeon/include/mach/octeon_ddr.h
index 4473be4d44..e630dc5ae3 100644
--- a/arch/mips/mach-octeon/include/mach/octeon_ddr.h
+++ b/arch/mips/mach-octeon/include/mach/octeon_ddr.h
@@ -12,12 +12,8 @@
 #include <linux/io.h>
 #include <mach/octeon-model.h>
 #include <mach/cvmx/cvmx-lmcx-defs.h>
-
-/* Mapping is done starting from 0x11800.80000000 */
-#define CVMX_L2C_CTL		0x00800000
-#define CVMX_L2C_BIG_CTL	0x00800030
-#define CVMX_L2C_TADX_INT(i)	(0x00a00028 + (((i) & 7) * 0x40000))
-#define CVMX_L2C_MCIX_INT(i)	(0x00c00028 + (((i) & 3) * 0x40000))
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-l2c-defs.h>
 
 /* Some "external" (non-LMC) registers */
 #define CVMX_IPD_CLK_COUNT		0x00014F0000000338
@@ -68,34 +64,6 @@ static inline void l2c_wr(struct ddr_priv *priv, u64 addr, u64 val)
 	iowrite64(val, priv->l2c_base + addr);
 }
 
-/* Access other CSR registers not located inside the LMC address space */
-static inline u64 csr_rd(u64 addr)
-{
-	void __iomem *base;
-
-	base = ioremap_nocache(addr, 0x100);
-	return ioread64(base);
-}
-
-static inline void csr_wr(u64 addr, u64 val)
-{
-	void __iomem *base;
-
-	base = ioremap_nocache(addr, 0x100);
-	return iowrite64(val, base);
-}
-
-/* "Normal" access, without any offsets and/or mapping */
-static inline u64 cvmx_read64_uint64(u64 addr)
-{
-	return readq((void *)addr);
-}
-
-static inline void cvmx_write64_uint64(u64 addr, u64 val)
-{
-	writeq(val, (void *)addr);
-}
-
 /* Failsafe mode */
 #define FLAG_FAILSAFE_MODE		0x01000
 /* Note that the DDR clock initialized flags must be contiguous */
@@ -167,157 +135,6 @@ static inline int ddr_verbose(void)
 #define CVMX_DCACHE_INVALIDATE					\
 	{ CVMX_SYNC; asm volatile ("cache 9, 0($0)" : : ); }
 
-/**
- * cvmx_l2c_cfg
- *
- * Specify the RSL base addresses for the block
- *
- *                  L2C_CFG = L2C Configuration
- *
- * Description:
- */
-union cvmx_l2c_cfg {
-	u64 u64;
-	struct cvmx_l2c_cfg_s {
-		uint64_t reserved_20_63:44;
-		uint64_t bstrun:1;
-		uint64_t lbist:1;
-		uint64_t xor_bank:1;
-		uint64_t dpres1:1;
-		uint64_t dpres0:1;
-		uint64_t dfill_dis:1;
-		uint64_t fpexp:4;
-		uint64_t fpempty:1;
-		uint64_t fpen:1;
-		uint64_t idxalias:1;
-		uint64_t mwf_crd:4;
-		uint64_t rsp_arb_mode:1;
-		uint64_t rfb_arb_mode:1;
-		uint64_t lrf_arb_mode:1;
-	} s;
-};
-
-/**
- * cvmx_l2c_ctl
- *
- * L2C_CTL = L2C Control
- *
- *
- * Notes:
- * (1) If MAXVAB is != 0, VAB_THRESH should be less than MAXVAB.
- *
- * (2) L2DFDBE and L2DFSBE allows software to generate L2DSBE, L2DDBE, VBFSBE,
- * and VBFDBE errors for the purposes of testing error handling code.  When
- * one (or both) of these bits are set a PL2 which misses in the L2 will fill
- * with the appropriate error in the first 2 OWs of the fill. Software can
- * determine which OW pair gets the error by choosing the desired fill order
- * (address<6:5>).  A PL2 which hits in the L2 will not inject any errors.
- * Therefore sending a WBIL2 prior to the PL2 is recommended to make a miss
- * likely (if multiple processors are involved software must be careful to be
- * sure no other processor or IO device can bring the block into the L2).
- *
- * To generate a VBFSBE or VBFDBE, software must first get the cache block
- * into the cache with an error using a PL2 which misses the L2.  Then a
- * store partial to a portion of the cache block without the error must
- * change the block to dirty.  Then, a subsequent WBL2/WBIL2/victim will
- * trigger the VBFSBE/VBFDBE error.
- */
-union cvmx_l2c_ctl {
-	u64 u64;
-	struct cvmx_l2c_ctl_s {
-		uint64_t reserved_29_63:35;
-		uint64_t rdf_fast:1;
-		uint64_t disstgl2i:1;
-		uint64_t l2dfsbe:1;
-		uint64_t l2dfdbe:1;
-		uint64_t discclk:1;
-		uint64_t maxvab:4;
-		uint64_t maxlfb:4;
-		uint64_t rsp_arb_mode:1;
-		uint64_t xmc_arb_mode:1;
-		uint64_t reserved_2_13:12;
-		uint64_t disecc:1;
-		uint64_t disidxalias:1;
-	} s;
-
-	struct cvmx_l2c_ctl_cn73xx {
-		uint64_t reserved_32_63:32;
-		uint64_t ocla_qos:3;
-		uint64_t reserved_28_28:1;
-		uint64_t disstgl2i:1;
-		uint64_t reserved_25_26:2;
-		uint64_t discclk:1;
-		uint64_t reserved_16_23:8;
-		uint64_t rsp_arb_mode:1;
-		uint64_t xmc_arb_mode:1;
-		uint64_t rdf_cnt:8;
-		uint64_t reserved_4_5:2;
-		uint64_t disldwb:1;
-		uint64_t dissblkdty:1;
-		uint64_t disecc:1;
-		uint64_t disidxalias:1;
-	} cn73xx;
-
-	struct cvmx_l2c_ctl_cn73xx cn78xx;
-};
-
-/**
- * cvmx_l2c_big_ctl
- *
- * L2C_BIG_CTL = L2C Big memory control register
- *
- *
- * Notes:
- * (1) BIGRD interrupts can occur during normal operation as the PP's are
- * allowed to prefetch to non-existent memory locations.  Therefore,
- * BIGRD is for informational purposes only.
- *
- * (2) When HOLEWR/BIGWR blocks a store L2C_VER_ID, L2C_VER_PP, L2C_VER_IOB,
- * and L2C_VER_MSC will be loaded just like a store which is blocked by VRTWR.
- * Additionally, L2C_ERR_XMC will be loaded.
- */
-union cvmx_l2c_big_ctl {
-	u64 u64;
-	struct cvmx_l2c_big_ctl_s {
-		uint64_t reserved_8_63:56;
-		uint64_t maxdram:4;
-		uint64_t reserved_0_3:4;
-	} s;
-	struct cvmx_l2c_big_ctl_cn61xx {
-		uint64_t reserved_8_63:56;
-		uint64_t maxdram:4;
-		uint64_t reserved_1_3:3;
-		uint64_t disable:1;
-	} cn61xx;
-	struct cvmx_l2c_big_ctl_cn61xx cn63xx;
-	struct cvmx_l2c_big_ctl_cn61xx cn66xx;
-	struct cvmx_l2c_big_ctl_cn61xx cn68xx;
-	struct cvmx_l2c_big_ctl_cn61xx cn68xxp1;
-	struct cvmx_l2c_big_ctl_cn70xx {
-		uint64_t reserved_8_63:56;
-		uint64_t maxdram:4;
-		uint64_t reserved_1_3:3;
-		uint64_t disbig:1;
-	} cn70xx;
-	struct cvmx_l2c_big_ctl_cn70xx cn70xxp1;
-	struct cvmx_l2c_big_ctl_cn70xx cn73xx;
-	struct cvmx_l2c_big_ctl_cn70xx cn78xx;
-	struct cvmx_l2c_big_ctl_cn70xx cn78xxp1;
-	struct cvmx_l2c_big_ctl_cn61xx cnf71xx;
-	struct cvmx_l2c_big_ctl_cn70xx cnf75xx;
-};
-
-struct rlevel_byte_data {
-	int delay;
-	int loop_total;
-	int loop_count;
-	int best;
-	u64 bm;
-	int bmerrs;
-	int sqerrs;
-	int bestsq;
-};
-
 #define DEBUG_VALIDATE_BITMASK 0
 #if DEBUG_VALIDATE_BITMASK
 #define debug_bitmask_print printf
diff --git a/drivers/ram/octeon/octeon3_lmc.c b/drivers/ram/octeon/octeon3_lmc.c
index 327cdc5873..349abc179f 100644
--- a/drivers/ram/octeon/octeon3_lmc.c
+++ b/drivers/ram/octeon/octeon3_lmc.c
@@ -17,14 +17,8 @@
 
 /* Random number generator stuff */
 
-#define CVMX_RNM_CTL_STATUS	0x0001180040000000
 #define CVMX_OCT_DID_RNG	8ULL
 
-static u64 cvmx_build_io_address(u64 major_did, u64 sub_did)
-{
-	return ((0x1ull << 48) | (major_did << 43) | (sub_did << 40));
-}
-
 static u64 cvmx_rng_get_random64(void)
 {
 	return csr_rd(cvmx_build_io_address(CVMX_OCT_DID_RNG, 0));
@@ -285,10 +279,10 @@ static int test_dram_byte64(struct ddr_priv *priv, int lmc, u64 p,
 	int node = 0;
 
 	// Force full cacheline write-backs to boost traffic
-	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 	saved_dissblkdty = l2c_ctl.cn78xx.dissblkdty;
 	l2c_ctl.cn78xx.dissblkdty = 1;
-	l2c_wr(priv, CVMX_L2C_CTL, l2c_ctl.u64);
+	l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_ctl.u64);
 
 	if (octeon_is_cpuid(OCTEON_CN73XX) || octeon_is_cpuid(OCTEON_CNF75XX))
 		kbitno = 18;
@@ -489,9 +483,9 @@ static int test_dram_byte64(struct ddr_priv *priv, int lmc, u64 p,
 	}
 
 	// Restore original setting that could enable partial cacheline writes
-	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 	l2c_ctl.cn78xx.dissblkdty = saved_dissblkdty;
-	l2c_wr(priv, CVMX_L2C_CTL, l2c_ctl.u64);
+	l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_ctl.u64);
 
 	return errors;
 }
@@ -6315,17 +6309,17 @@ static void lmc_final(struct ddr_priv *priv)
 	lmc_rd(priv, CVMX_LMCX_INT(if_num));
 
 	for (tad = 0; tad < num_tads; tad++) {
-		l2c_wr(priv, CVMX_L2C_TADX_INT(tad),
-		       l2c_rd(priv, CVMX_L2C_TADX_INT(tad)));
+		l2c_wr(priv, CVMX_L2C_TADX_INT_REL(tad),
+		       l2c_rd(priv, CVMX_L2C_TADX_INT_REL(tad)));
 		debug("%-45s : (%d) 0x%08llx\n", "CVMX_L2C_TAD_INT", tad,
-		      l2c_rd(priv, CVMX_L2C_TADX_INT(tad)));
+		      l2c_rd(priv, CVMX_L2C_TADX_INT_REL(tad)));
 	}
 
 	for (mci = 0; mci < num_mcis; mci++) {
-		l2c_wr(priv, CVMX_L2C_MCIX_INT(mci),
-		       l2c_rd(priv, CVMX_L2C_MCIX_INT(mci)));
+		l2c_wr(priv, CVMX_L2C_MCIX_INT_REL(mci),
+		       l2c_rd(priv, CVMX_L2C_MCIX_INT_REL(mci)));
 		debug("%-45s : (%d) 0x%08llx\n", "L2C_MCI_INT", mci,
-		      l2c_rd(priv, CVMX_L2C_MCIX_INT(mci)));
+		      l2c_rd(priv, CVMX_L2C_MCIX_INT_REL(mci)));
 	}
 
 	debug("%-45s : 0x%08llx\n", "LMC_INT",
@@ -9827,7 +9821,7 @@ static void cvmx_dram_address_extract_info(struct ddr_priv *priv, u64 address,
 		address -= ADDRESS_HOLE;
 
 	/* Determine the LMC controllers */
-	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+	l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 
 	/* xbits depends on number of LMCs */
 	xbits = cvmx_dram_get_num_lmc(priv) >> 1;	// 4->2, 2->1, 1->0
diff --git a/drivers/ram/octeon/octeon_ddr.c b/drivers/ram/octeon/octeon_ddr.c
index aaff9c3687..98f9646487 100644
--- a/drivers/ram/octeon/octeon_ddr.c
+++ b/drivers/ram/octeon/octeon_ddr.c
@@ -144,7 +144,7 @@ static void cvmx_l2c_set_big_size(struct ddr_priv *priv, u64 mem_size, int mode)
 		big_ctl.u64 = 0;
 		big_ctl.s.maxdram = bits - 9;
 		big_ctl.cn61xx.disable = mode;
-		l2c_wr(priv, CVMX_L2C_BIG_CTL, big_ctl.u64);
+		l2c_wr(priv, CVMX_L2C_BIG_CTL_REL, big_ctl.u64);
 	}
 }
 
@@ -2273,15 +2273,15 @@ static int octeon_ddr_initialize(struct ddr_priv *priv, u32 cpu_hertz,
 		printf("Disabling L2 ECC based on disable_l2_ecc environment variable\n");
 		union cvmx_l2c_ctl l2c_val;
 
-		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 		l2c_val.s.disecc = 1;
-		l2c_wr(priv, CVMX_L2C_CTL, l2c_val.u64);
+		l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_val.u64);
 	} else {
 		union cvmx_l2c_ctl l2c_val;
 
-		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 		l2c_val.s.disecc = 0;
-		l2c_wr(priv, CVMX_L2C_CTL, l2c_val.u64);
+		l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_val.u64);
 	}
 
 	/*
@@ -2294,17 +2294,17 @@ static int octeon_ddr_initialize(struct ddr_priv *priv, u32 cpu_hertz,
 
 		puts("L2 index aliasing disabled.\n");
 
-		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 		l2c_val.s.disidxalias = 1;
-		l2c_wr(priv, CVMX_L2C_CTL, l2c_val.u64);
+		l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_val.u64);
 	} else {
 		union cvmx_l2c_ctl l2c_val;
 
 		/* Enable L2C index aliasing */
 
-		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+		l2c_val.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 		l2c_val.s.disidxalias = 0;
-		l2c_wr(priv, CVMX_L2C_CTL, l2c_val.u64);
+		l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_val.u64);
 	}
 
 	if (OCTEON_IS_OCTEON3()) {
@@ -2320,7 +2320,7 @@ static int octeon_ddr_initialize(struct ddr_priv *priv, u32 cpu_hertz,
 		u64 rdf_cnt;
 		char *s;
 
-		l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL);
+		l2c_ctl.u64 = l2c_rd(priv, CVMX_L2C_CTL_REL);
 
 		/*
 		 * It is more convenient to compute the ratio using clock
@@ -2337,7 +2337,7 @@ static int octeon_ddr_initialize(struct ddr_priv *priv, u32 cpu_hertz,
 		debug("%-45s : %d, cpu_hertz:%d, ddr_hertz:%d\n",
 		      "EARLY FILL COUNT  ", l2c_ctl.cn78xx.rdf_cnt, cpu_hertz,
 		      ddr_hertz);
-		l2c_wr(priv, CVMX_L2C_CTL, l2c_ctl.u64);
+		l2c_wr(priv, CVMX_L2C_CTL_REL, l2c_ctl.u64);
 	}
 
 	/* Check for lower DIMM socket populated */
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 35/50] mips: octeon: Move cvmx-lmcx-defs.h from mach/cvmx to mach
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (33 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 34/50] mips: octeon: Misc changes required because of the newly added headers Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 36/50] mips: octeon: Add cvmx-helper-cfg.c Stefan Roese
                   ` (17 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

To match all other cvmx-* headers, this patch moves the already existing
cvmx-lmcx-defs.h header one directory up.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/include/mach/{cvmx => }/cvmx-lmcx-defs.h | 0
 arch/mips/mach-octeon/include/mach/octeon_ddr.h                | 2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename arch/mips/mach-octeon/include/mach/{cvmx => }/cvmx-lmcx-defs.h (100%)

diff --git a/arch/mips/mach-octeon/include/mach/cvmx/cvmx-lmcx-defs.h b/arch/mips/mach-octeon/include/mach/cvmx-lmcx-defs.h
similarity index 100%
rename from arch/mips/mach-octeon/include/mach/cvmx/cvmx-lmcx-defs.h
rename to arch/mips/mach-octeon/include/mach/cvmx-lmcx-defs.h
diff --git a/arch/mips/mach-octeon/include/mach/octeon_ddr.h b/arch/mips/mach-octeon/include/mach/octeon_ddr.h
index e630dc5ae3..97e7b554ff 100644
--- a/arch/mips/mach-octeon/include/mach/octeon_ddr.h
+++ b/arch/mips/mach-octeon/include/mach/octeon_ddr.h
@@ -11,7 +11,7 @@
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <mach/octeon-model.h>
-#include <mach/cvmx/cvmx-lmcx-defs.h>
+#include <mach/cvmx-lmcx-defs.h>
 #include <mach/cvmx-regs.h>
 #include <mach/cvmx-l2c-defs.h>
 
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 36/50] mips: octeon: Add cvmx-helper-cfg.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (34 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 35/50] mips: octeon: Move cvmx-lmcx-defs.h from mach/cvmx to mach Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:05 ` [PATCH v1 37/50] mips: octeon: Add cvmx-helper-fdt.c Stefan Roese
                   ` (16 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-helper-cfg.c from the 2013 U-Boot version. It will be used
by the drivers added later in this series to support PCIe and networking
on the MIPS Octeon II / III platforms.
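
For illustration only (not part of this patch), a consumer would query
the resulting PKO queue layout roughly as follows; the ipd_port value
here is a placeholder that depends on the board and interface:

	int pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
	int qbase = __cvmx_helper_cfg_pko_queue_base(pko_port);
	int qnum = __cvmx_helper_cfg_pko_queue_num(pko_port);

	printf("PKO port %d uses queues %d..%d\n",
	       pko_port, qbase, qbase + qnum - 1);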

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-helper-cfg.c | 1914 +++++++++++++++++++++++
 1 file changed, 1914 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-cfg.c

diff --git a/arch/mips/mach-octeon/cvmx-helper-cfg.c b/arch/mips/mach-octeon/cvmx-helper-cfg.c
new file mode 100644
index 0000000000..6b7dd8ac4d
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-helper-cfg.c
@@ -0,0 +1,1914 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper Functions for the Configuration Framework
+ */
+
+#include <log.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-csr.h>
+#include <mach/cvmx-bootmem.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-coremask.h>
+
+#include <mach/cvmx-agl-defs.h>
+#include <mach/cvmx-bgxx-defs.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-ipd-defs.h>
+#include <mach/cvmx-pki-defs.h>
+
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-board.h>
+#include <mach/cvmx-helper-fdt.h>
+#include <mach/cvmx-helper-bgx.h>
+#include <mach/cvmx-helper-cfg.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-helper-pki.h>
+
+#include <mach/cvmx-global-resources.h>
+#include <mach/cvmx-pko-internal-ports-range.h>
+#include <mach/cvmx-ilk.h>
+#include <mach/cvmx-pip.h>
+
+DECLARE_GLOBAL_DATA_PTR;
+
+int cvmx_npi_max_pknds;
+static bool port_cfg_data_initialized;
+
+struct cvmx_cfg_port_param cvmx_cfg_port[CVMX_MAX_NODES][CVMX_HELPER_MAX_IFACE]
+					[CVMX_HELPER_CFG_MAX_PORT_PER_IFACE];
+/*
+ * Indexed by the pko_port number
+ */
+static int __cvmx_cfg_pko_highest_queue;
+struct cvmx_cfg_pko_port_param
+cvmx_pko_queue_table[CVMX_HELPER_CFG_MAX_PKO_PORT] = {
+	[0 ... CVMX_HELPER_CFG_MAX_PKO_PORT - 1] = {
+		CVMX_HELPER_CFG_INVALID_VALUE,
+		CVMX_HELPER_CFG_INVALID_VALUE
+	}
+};
+
+cvmx_user_static_pko_queue_config_t
+__cvmx_pko_queue_static_config[CVMX_MAX_NODES];
+
+struct cvmx_cfg_pko_port_map
+cvmx_cfg_pko_port_map[CVMX_HELPER_CFG_MAX_PKO_PORT] = {
+	[0 ... CVMX_HELPER_CFG_MAX_PKO_PORT - 1] = {
+		CVMX_HELPER_CFG_INVALID_VALUE,
+		CVMX_HELPER_CFG_INVALID_VALUE,
+		CVMX_HELPER_CFG_INVALID_VALUE
+	}
+};
+
+/*
+ * This array assists translation from ipd_port to pko_port.
+ * The "16" is the rounded value for the 3rd 4-bit value of
+ * ipd_port, used to differentiate "interfaces".
+ */
+static struct cvmx_cfg_pko_port_pair
+ipd2pko_port_cache[16][CVMX_HELPER_CFG_MAX_PORT_PER_IFACE] = {
+	[0 ... 15] = {
+		[0 ... CVMX_HELPER_CFG_MAX_PORT_PER_IFACE - 1] = {
+			CVMX_HELPER_CFG_INVALID_VALUE,
+			CVMX_HELPER_CFG_INVALID_VALUE
+		}
+	}
+};
+
+/*
+ * Options
+ *
+ * Each array-elem's initial value is also the option's default value.
+ */
+static u64 cvmx_cfg_opts[CVMX_HELPER_CFG_OPT_MAX] = {
+	[0 ... CVMX_HELPER_CFG_OPT_MAX - 1] = 1
+};
+
+/*
+ * MISC
+ */
+
+static int cvmx_cfg_max_pko_engines; /* # of PKO DMA engines allocated */
+static int cvmx_pko_queue_alloc(u64 port, int count);
+static void cvmx_init_port_cfg(void);
+static const int dbg;
+
+int __cvmx_helper_cfg_pknd(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int pkind;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	/*
+	 * Only 8 PKNDs are assigned to ILK channels. The channels are wrapped
+	 * if more than 8 channels are configured; fix the index accordingly.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		if (cvmx_helper_interface_get_mode(xiface) ==
+		    CVMX_HELPER_INTERFACE_MODE_ILK)
+			index %= 8;
+	}
+
+	pkind = cvmx_cfg_port[xi.node][xi.interface][index].ccpp_pknd;
+	return pkind;
+}
+
+int __cvmx_helper_cfg_bpid(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	/*
+	 * Only 8 BPIDs are assigned to ILK channels. The channels are wrapped
+	 * if more than 8 channels are configured; fix the index accordingly.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		if (cvmx_helper_interface_get_mode(xiface) ==
+		    CVMX_HELPER_INTERFACE_MODE_ILK)
+			index %= 8;
+	}
+
+	return cvmx_cfg_port[xi.node][xi.interface][index].ccpp_bpid;
+}
+
+int __cvmx_helper_cfg_pko_port_base(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	return cvmx_cfg_port[xi.node][xi.interface][index].ccpp_pko_port_base;
+}
+
+int __cvmx_helper_cfg_pko_port_num(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	return cvmx_cfg_port[xi.node][xi.interface][index].ccpp_pko_num_ports;
+}
+
+int __cvmx_helper_cfg_pko_queue_num(int pko_port)
+{
+	return cvmx_pko_queue_table[pko_port].ccppp_num_queues;
+}
+
+int __cvmx_helper_cfg_pko_queue_base(int pko_port)
+{
+	return cvmx_pko_queue_table[pko_port].ccppp_queue_base;
+}
+
+int __cvmx_helper_cfg_pko_max_queue(void)
+{
+	return __cvmx_cfg_pko_highest_queue;
+}
+
+int __cvmx_helper_cfg_pko_max_engine(void)
+{
+	return cvmx_cfg_max_pko_engines;
+}
+
+int cvmx_helper_cfg_opt_set(cvmx_helper_cfg_option_t opt, uint64_t val)
+{
+	if (opt >= CVMX_HELPER_CFG_OPT_MAX)
+		return -1;
+
+	cvmx_cfg_opts[opt] = val;
+
+	return 0;
+}
+
+uint64_t cvmx_helper_cfg_opt_get(cvmx_helper_cfg_option_t opt)
+{
+	if (opt >= CVMX_HELPER_CFG_OPT_MAX)
+		return (uint64_t)CVMX_HELPER_CFG_INVALID_VALUE;
+
+	return cvmx_cfg_opts[opt];
+}
+
+/*
+ * Initialize the queue allocation list. The existing static allocation result
+ * is used as a starting point to ensure backward compatibility.
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_queue_grp_alloc(u64 start, uint64_t end, uint64_t count)
+{
+	u64 port;
+	int ret_val;
+
+	for (port = start; port < end; port++) {
+		ret_val = cvmx_pko_queue_alloc(port, count);
+		if (ret_val == -1) {
+			printf("ERROR: %sL Failed to allocate queue for port=%d count=%d\n",
+			       __func__, (int)port, (int)count);
+			return ret_val;
+		}
+	}
+	return 0;
+}
+
+int cvmx_pko_queue_init_from_cvmx_config_non_pknd(void)
+{
+	int ret_val = -1;
+	u64 count, start, end;
+
+	start = 0;
+	end = __cvmx_pko_queue_static_config[0].non_pknd.pko_ports_per_interface[0];
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_interface[0];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = 16;
+	end = start + __cvmx_pko_queue_static_config[0].non_pknd.pko_ports_per_interface[1];
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_interface[1];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		/* Interface 4: AGL, PKO port 24 only, DPI 32-35 */
+		start = 24;
+		end = start + 1;
+		count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_interface[4];
+		ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+
+		if (ret_val != 0)
+			return -1;
+		end = 32; /* DPI first PKO port */
+	}
+
+	start = end;
+	end = 36;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_pci;
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = end;
+	end = 40;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_loop;
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = end;
+	end = 42;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_srio[0];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = end;
+	end = 44;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_srio[1];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = end;
+	end = 46;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_srio[2];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+
+	start = end;
+	end = 48;
+	count = __cvmx_pko_queue_static_config[0].non_pknd.pko_queues_per_port_srio[3];
+	ret_val = cvmx_pko_queue_grp_alloc(start, end, count);
+	if (ret_val != 0)
+		return -1;
+	return 0;
+}
+
+int cvmx_helper_pko_queue_config_get(int node, cvmx_user_static_pko_queue_config_t *cfg)
+{
+	*cfg = __cvmx_pko_queue_static_config[node];
+	return 0;
+}
+
+int cvmx_helper_pko_queue_config_set(int node, cvmx_user_static_pko_queue_config_t *cfg)
+{
+	__cvmx_pko_queue_static_config[node] = *cfg;
+	return 0;
+}
+
+static int queue_range_init;
+
+int init_cvmx_pko_que_range(void)
+{
+	int rv = 0;
+
+	if (queue_range_init)
+		return 0;
+	queue_range_init = 1;
+	rv = cvmx_create_global_resource_range(CVMX_GR_TAG_PKO_QUEUES,
+					       CVMX_HELPER_CFG_MAX_PKO_QUEUES);
+	if (rv != 0)
+		printf("ERROR: %s: Failed to initialize pko queues range\n", __func__);
+
+	return rv;
+}
+
+/*
+ * get a block of "count" queues for "port"
+ *
+ * @param  port   the port for which the queues are requested
+ * @param  count  the number of queues requested
+ *
+ * @return  0 on success (the existing queue base is returned if the
+ *          port already has exactly "count" queues)
+ *         -1 on failure
+ */
+static int cvmx_pko_queue_alloc(u64 port, int count)
+{
+	int ret_val = -1;
+	int highest_queue;
+
+	init_cvmx_pko_que_range();
+
+	if (cvmx_pko_queue_table[port].ccppp_num_queues == count)
+		return cvmx_pko_queue_table[port].ccppp_queue_base;
+
+	if (cvmx_pko_queue_table[port].ccppp_num_queues > 0) {
+		printf("WARNING: %s port=%d already %d queues\n",
+		       __func__, (int)port,
+		       (int)cvmx_pko_queue_table[port].ccppp_num_queues);
+		return -1;
+	}
+
+	if (port >= CVMX_HELPER_CFG_MAX_PKO_QUEUES) {
+		printf("ERROR: %s port=%d > %d\n", __func__, (int)port,
+		       CVMX_HELPER_CFG_MAX_PKO_QUEUES);
+		return -1;
+	}
+
+	ret_val = cvmx_allocate_global_resource_range(CVMX_GR_TAG_PKO_QUEUES,
+						      port, count, 1);
+
+	debug("%s: pko_e_port=%i q_base=%i q_count=%i\n",
+	      __func__, (int)port, ret_val, (int)count);
+
+	if (ret_val == -1)
+		return ret_val;
+	cvmx_pko_queue_table[port].ccppp_queue_base = ret_val;
+	cvmx_pko_queue_table[port].ccppp_num_queues = count;
+
+	highest_queue = ret_val + count - 1;
+	if (highest_queue > __cvmx_cfg_pko_highest_queue)
+		__cvmx_cfg_pko_highest_queue = highest_queue;
+	return 0;
+}
+
+/*
+ * Free the queues allocated to "port"
+ *
+ * @param  port   the port whose queues are freed
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_queue_free(uint64_t port)
+{
+	int ret_val = -1;
+
+	init_cvmx_pko_que_range();
+	if (port >= CVMX_HELPER_CFG_MAX_PKO_QUEUES) {
+		debug("ERROR: %s port=%d > %d", __func__, (int)port,
+		      CVMX_HELPER_CFG_MAX_PKO_QUEUES);
+		return -1;
+	}
+
+	ret_val = cvmx_free_global_resource_range_with_base(
+		CVMX_GR_TAG_PKO_QUEUES, cvmx_pko_queue_table[port].ccppp_queue_base,
+		cvmx_pko_queue_table[port].ccppp_num_queues);
+	if (ret_val != 0)
+		return ret_val;
+
+	cvmx_pko_queue_table[port].ccppp_num_queues = 0;
+	cvmx_pko_queue_table[port].ccppp_queue_base = CVMX_HELPER_CFG_INVALID_VALUE;
+	ret_val = 0;
+	return ret_val;
+}
+
+void cvmx_pko_queue_free_all(void)
+{
+	int i;
+
+	for (i = 0; i < CVMX_HELPER_CFG_MAX_PKO_PORT; i++)
+		if (cvmx_pko_queue_table[i].ccppp_queue_base !=
+		    CVMX_HELPER_CFG_INVALID_VALUE)
+			cvmx_pko_queue_free(i);
+}
+
+void cvmx_pko_queue_show(void)
+{
+	int i;
+
+	cvmx_show_global_resource_range(CVMX_GR_TAG_PKO_QUEUES);
+	for (i = 0; i < CVMX_HELPER_CFG_MAX_PKO_PORT; i++)
+		if (cvmx_pko_queue_table[i].ccppp_queue_base !=
+		    CVMX_HELPER_CFG_INVALID_VALUE)
+			debug("port=%d que_base=%d que_num=%d\n", i,
+			      (int)cvmx_pko_queue_table[i].ccppp_queue_base,
+			      (int)cvmx_pko_queue_table[i].ccppp_num_queues);
+}
+
+void cvmx_helper_cfg_show_cfg(void)
+{
+	int i, j;
+
+	for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+		debug("%s: interface%d mode %10s nports%4d\n", __func__, i,
+		      cvmx_helper_interface_mode_to_string(cvmx_helper_interface_get_mode(i)),
+		      cvmx_helper_interface_enumerate(i));
+
+		for (j = 0; j < cvmx_helper_interface_enumerate(i); j++) {
+			debug("\tpknd[%i][%d]%d", i, j,
+			      __cvmx_helper_cfg_pknd(i, j));
+			debug(" pko_port_base[%i][%d]%d", i, j,
+			      __cvmx_helper_cfg_pko_port_base(i, j));
+			debug(" pko_port_num[%i][%d]%d\n", i, j,
+			      __cvmx_helper_cfg_pko_port_num(i, j));
+		}
+	}
+
+	for (i = 0; i < CVMX_HELPER_CFG_MAX_PKO_PORT; i++) {
+		if (__cvmx_helper_cfg_pko_queue_base(i) !=
+		    CVMX_HELPER_CFG_INVALID_VALUE) {
+			debug("%s: pko_port%d qbase%d nqueues%d interface%d index%d\n",
+			      __func__, i, __cvmx_helper_cfg_pko_queue_base(i),
+			      __cvmx_helper_cfg_pko_queue_num(i),
+			      __cvmx_helper_cfg_pko_port_interface(i),
+			      __cvmx_helper_cfg_pko_port_index(i));
+		}
+	}
+}
+
+/*
+ * initialize cvmx_cfg_pko_port_map
+ */
+void cvmx_helper_cfg_init_pko_port_map(void)
+{
+	int i, j, k;
+	int pko_eid;
+	int pko_port_base, pko_port_max;
+	cvmx_helper_interface_mode_t mode;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	/*
+	 * One pko_eid is allocated to each port except for ILK, NPI, and
+	 * LOOP. Each of the three has one eid.
+	 */
+	pko_eid = 0;
+	for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+		mode = cvmx_helper_interface_get_mode(i);
+		for (j = 0; j < cvmx_helper_interface_enumerate(i); j++) {
+			pko_port_base = cvmx_cfg_port[0][i][j].ccpp_pko_port_base;
+			pko_port_max = pko_port_base + cvmx_cfg_port[0][i][j].ccpp_pko_num_ports;
+			if (!octeon_has_feature(OCTEON_FEATURE_PKO3)) {
+				cvmx_helper_cfg_assert(pko_port_base !=
+						       CVMX_HELPER_CFG_INVALID_VALUE);
+				cvmx_helper_cfg_assert(pko_port_max >= pko_port_base);
+			}
+			for (k = pko_port_base; k < pko_port_max; k++) {
+				cvmx_cfg_pko_port_map[k].ccppl_interface = i;
+				cvmx_cfg_pko_port_map[k].ccppl_index = j;
+				cvmx_cfg_pko_port_map[k].ccppl_eid = pko_eid;
+			}
+
+			if (!(mode == CVMX_HELPER_INTERFACE_MODE_NPI ||
+			      mode == CVMX_HELPER_INTERFACE_MODE_LOOP ||
+			      mode == CVMX_HELPER_INTERFACE_MODE_ILK))
+				pko_eid++;
+		}
+
+		if (mode == CVMX_HELPER_INTERFACE_MODE_NPI ||
+		    mode == CVMX_HELPER_INTERFACE_MODE_LOOP ||
+		    mode == CVMX_HELPER_INTERFACE_MODE_ILK)
+			pko_eid++;
+	}
+
+	/*
+	 * Legal pko_eids [0, 0x13] should not be exhausted.
+	 */
+	if (!octeon_has_feature(OCTEON_FEATURE_PKO3))
+		cvmx_helper_cfg_assert(pko_eid <= 0x14);
+
+	cvmx_cfg_max_pko_engines = pko_eid;
+}
+
+void cvmx_helper_cfg_set_jabber_and_frame_max(void)
+{
+	int interface, port;
+	/* Set the frame max size and jabber size to 65535. */
+	const unsigned int max_frame = 65535;
+
+	// FIXME: should support node argument for remote node init
+	if (octeon_has_feature(OCTEON_FEATURE_BGX)) {
+		int ipd_port;
+		int node = cvmx_get_node_num();
+
+		for (interface = 0;
+		     interface < cvmx_helper_get_number_of_interfaces();
+		     interface++) {
+			int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+			cvmx_helper_interface_mode_t imode = cvmx_helper_interface_get_mode(xiface);
+			int num_ports = cvmx_helper_ports_on_interface(xiface);
+
+			// FIXME: should be an easier way to determine
+			// that an interface is Ethernet/BGX
+			switch (imode) {
+			case CVMX_HELPER_INTERFACE_MODE_SGMII:
+			case CVMX_HELPER_INTERFACE_MODE_XAUI:
+			case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+			case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+			case CVMX_HELPER_INTERFACE_MODE_XFI:
+			case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+			case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+				for (port = 0; port < num_ports; port++) {
+					ipd_port = cvmx_helper_get_ipd_port(xiface, port);
+					cvmx_pki_set_max_frm_len(ipd_port, max_frame);
+					cvmx_helper_bgx_set_jabber(xiface, port, max_frame);
+				}
+				break;
+			default:
+				break;
+			}
+		}
+	} else {
+		/* Set the frame max size and jabber size to 65535. */
+		for (interface = 0; interface < cvmx_helper_get_number_of_interfaces();
+		     interface++) {
+			int xiface = cvmx_helper_node_interface_to_xiface(cvmx_get_node_num(),
+									  interface);
+			/*
+			 * Set the frame max size and jabber size to 65535, as the defaults
+			 * are too small.
+			 */
+			cvmx_helper_interface_mode_t imode = cvmx_helper_interface_get_mode(xiface);
+			int num_ports = cvmx_helper_ports_on_interface(xiface);
+
+			switch (imode) {
+			case CVMX_HELPER_INTERFACE_MODE_SGMII:
+			case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+			case CVMX_HELPER_INTERFACE_MODE_XAUI:
+			case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+				for (port = 0; port < num_ports; port++)
+					csr_wr(CVMX_GMXX_RXX_JABBER(port, interface), 65535);
+				/* Set max and min value for frame check */
+				cvmx_pip_set_frame_check(interface, -1);
+				break;
+
+			case CVMX_HELPER_INTERFACE_MODE_RGMII:
+			case CVMX_HELPER_INTERFACE_MODE_GMII:
+				/* Set max and min value for frame check */
+				cvmx_pip_set_frame_check(interface, -1);
+				for (port = 0; port < num_ports; port++) {
+					csr_wr(CVMX_GMXX_RXX_FRM_MAX(port, interface), 65535);
+					csr_wr(CVMX_GMXX_RXX_JABBER(port, interface), 65535);
+				}
+				break;
+			case CVMX_HELPER_INTERFACE_MODE_ILK:
+				/* Set max and min value for frame check */
+				cvmx_pip_set_frame_check(interface, -1);
+				for (port = 0; port < num_ports; port++) {
+					int ipd_port = cvmx_helper_get_ipd_port(interface, port);
+
+					cvmx_ilk_enable_la_header(ipd_port, 0);
+				}
+				break;
+			case CVMX_HELPER_INTERFACE_MODE_SRIO:
+				/* Set max and min value for frame check */
+				cvmx_pip_set_frame_check(interface, -1);
+				break;
+			case CVMX_HELPER_INTERFACE_MODE_AGL:
+				/* Set max and min value for frame check */
+				cvmx_pip_set_frame_check(interface, -1);
+				csr_wr(CVMX_AGL_GMX_RXX_FRM_MAX(0), 65535);
+				csr_wr(CVMX_AGL_GMX_RXX_JABBER(0), 65535);
+				break;
+			default:
+				break;
+			}
+		}
+	}
+}
+
+/**
+ * Enable storing short packets only in the WQE
+ * unless NO_WPTR is set, which already has the same effect
+ */
+void cvmx_helper_cfg_store_short_packets_in_wqe(void)
+{
+	int interface, port;
+	cvmx_ipd_ctl_status_t ipd_ctl_status;
+	unsigned int dyn_rs = 1;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI))
+		return;
+
+	/* NO_WPTR combines WQE with 1st MBUF, RS is redundant */
+	ipd_ctl_status.u64 = csr_rd(CVMX_IPD_CTL_STATUS);
+	if (ipd_ctl_status.s.no_wptr) {
+		dyn_rs = 0;
+		/* Note: consider also setting 'ignrs' when NO_WPTR is set */
+	}
+
+	for (interface = 0; interface < cvmx_helper_get_number_of_interfaces(); interface++) {
+		int num_ports = cvmx_helper_ports_on_interface(interface);
+
+		for (port = 0; port < num_ports; port++) {
+			cvmx_pip_port_cfg_t port_cfg;
+			int pknd = port;
+
+			if (octeon_has_feature(OCTEON_FEATURE_PKND))
+				pknd = cvmx_helper_get_pknd(interface, port);
+			else
+				pknd = cvmx_helper_get_ipd_port(interface, port);
+			port_cfg.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
+			port_cfg.s.dyn_rs = dyn_rs;
+			csr_wr(CVMX_PIP_PRT_CFGX(pknd), port_cfg.u64);
+		}
+	}
+}
+
+int __cvmx_helper_cfg_pko_port_interface(int pko_port)
+{
+	return cvmx_cfg_pko_port_map[pko_port].ccppl_interface;
+}
+
+int __cvmx_helper_cfg_pko_port_index(int pko_port)
+{
+	return cvmx_cfg_pko_port_map[pko_port].ccppl_index;
+}
+
+int __cvmx_helper_cfg_pko_port_eid(int pko_port)
+{
+	return cvmx_cfg_pko_port_map[pko_port].ccppl_eid;
+}
+
+#define IPD2PKO_CACHE_Y(ipd_port) ((ipd_port) >> 8)
+#define IPD2PKO_CACHE_X(ipd_port) ((ipd_port) & 0xff)
+
+static inline int __cvmx_helper_cfg_ipd2pko_cachex(int ipd_port)
+{
+	int ipd_x = IPD2PKO_CACHE_X(ipd_port);
+
+	if (ipd_port & 0x800)
+		ipd_x = (ipd_x >> 4) & 3;
+	return ipd_x;
+}
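+
+/*
+ * Example: ipd_port 0x840 has bit 11 set, so it maps to cache indices
+ * y = 0x840 >> 8 = 8 and x = ((0x40 >> 4) & 3) = 0.
+ */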
+
+/*
+ * ipd_port to pko_port translation cache
+ */
+int __cvmx_helper_cfg_init_ipd2pko_cache(void)
+{
+	int i, j, n;
+	int ipd_y, ipd_x, ipd_port;
+
+	for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+		n = cvmx_helper_interface_enumerate(i);
+
+		for (j = 0; j < n; j++) {
+			ipd_port = cvmx_helper_get_ipd_port(i, j);
+			ipd_y = IPD2PKO_CACHE_Y(ipd_port);
+			ipd_x = __cvmx_helper_cfg_ipd2pko_cachex(ipd_port);
+			ipd2pko_port_cache[ipd_y][ipd_x] = (struct cvmx_cfg_pko_port_pair){
+				__cvmx_helper_cfg_pko_port_base(i, j),
+				__cvmx_helper_cfg_pko_port_num(i, j)
+			};
+		}
+	}
+
+	return 0;
+}
+
+int cvmx_helper_cfg_ipd2pko_port_base(int ipd_port)
+{
+	int ipd_y, ipd_x;
+
+	/* Internal PKO ports are not present in PKO3 */
+	if (octeon_has_feature(OCTEON_FEATURE_PKI))
+		return ipd_port;
+
+	ipd_y = IPD2PKO_CACHE_Y(ipd_port);
+	ipd_x = __cvmx_helper_cfg_ipd2pko_cachex(ipd_port);
+
+	return ipd2pko_port_cache[ipd_y][ipd_x].ccppp_base_port;
+}
+
+int cvmx_helper_cfg_ipd2pko_port_num(int ipd_port)
+{
+	int ipd_y, ipd_x;
+
+	ipd_y = IPD2PKO_CACHE_Y(ipd_port);
+	ipd_x = __cvmx_helper_cfg_ipd2pko_cachex(ipd_port);
+
+	return ipd2pko_port_cache[ipd_y][ipd_x].ccppp_nports;
+}
+
+/**
+ * Return the number of queues to be assigned to this pko_port
+ *
+ * @param pko_port
+ * @return the number of queues for this pko_port
+ *
+ */
+static int cvmx_helper_cfg_dft_nqueues(int pko_port)
+{
+	cvmx_helper_interface_mode_t mode;
+	int interface;
+	int n;
+	int ret;
+
+	interface = __cvmx_helper_cfg_pko_port_interface(pko_port);
+	mode = cvmx_helper_interface_get_mode(interface);
+
+	n = NUM_ELEMENTS(__cvmx_pko_queue_static_config[0].pknd.pko_cfg_iface);
+
+	if (mode == CVMX_HELPER_INTERFACE_MODE_LOOP) {
+		ret = __cvmx_pko_queue_static_config[0].pknd.pko_cfg_loop.queues_per_port;
+	} else if (mode == CVMX_HELPER_INTERFACE_MODE_NPI) {
+		ret = __cvmx_pko_queue_static_config[0].pknd.pko_cfg_npi.queues_per_port;
+	} else if (interface >= 0 && interface < n) {
+		ret = __cvmx_pko_queue_static_config[0].pknd.pko_cfg_iface[interface].queues_per_port;
+	} else {
+		/* Should never be called */
+		ret = 1;
+	}
+	/* Override for sanity in case of empty static config table */
+	if (ret == 0)
+		ret = 1;
+	return ret;
+}
+
+static int cvmx_helper_cfg_init_pko_iports_and_queues_using_static_config(void)
+{
+	int pko_port_base = 0;
+	int cvmx_cfg_default_pko_nports = 1;
+	int i, j, n, k;
+	int rv = 0;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	/* When not using config file, each port is assigned one internal pko port*/
+	for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+		n = cvmx_helper_interface_enumerate(i);
+		for (j = 0; j < n; j++) {
+			cvmx_cfg_port[0][i][j].ccpp_pko_port_base = pko_port_base;
+			cvmx_cfg_port[0][i][j].ccpp_pko_num_ports = cvmx_cfg_default_pko_nports;
+			/*
+			 * Initialize interface early here so that the
+			 * cvmx_helper_cfg_dft_nqueues() below
+			 * can get the interface number corresponding to the
+			 * pko port
+			 */
+			for (k = pko_port_base; k < pko_port_base + cvmx_cfg_default_pko_nports;
+			     k++) {
+				cvmx_cfg_pko_port_map[k].ccppl_interface = i;
+			}
+			pko_port_base += cvmx_cfg_default_pko_nports;
+		}
+	}
+	cvmx_helper_cfg_assert(pko_port_base <= CVMX_HELPER_CFG_MAX_PKO_PORT);
+
+	/* Assigning queues per pko */
+	for (i = 0; i < pko_port_base; i++) {
+		int base;
+
+		n = cvmx_helper_cfg_dft_nqueues(i);
+		base = cvmx_pko_queue_alloc(i, n);
+		if (base == -1) {
+			printf("ERROR: %s: failed to alloc %d queues for pko port=%d\n", __func__,
+			       n, i);
+			rv = -1;
+		}
+	}
+	return rv;
+}
+
+/**
+ * Returns if port is valid for a given interface
+ *
+ * @param xiface  interface to check
+ * @param index      port index in the interface
+ *
+ * @return status of the port present or not.
+ */
+int cvmx_helper_is_port_valid(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].valid;
+}
+
+void cvmx_helper_set_port_valid(int xiface, int index, bool valid)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].valid = valid;
+}
+
+void cvmx_helper_set_mac_phy_mode(int xiface, int index, bool valid)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].sgmii_phy_mode = valid;
+}
+
+bool cvmx_helper_get_mac_phy_mode(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].sgmii_phy_mode;
+}
+
+void cvmx_helper_set_1000x_mode(int xiface, int index, bool valid)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].sgmii_1000x_mode = valid;
+}
+
+bool cvmx_helper_get_1000x_mode(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].sgmii_1000x_mode;
+}
+
+void cvmx_helper_set_agl_rx_clock_delay_bypass(int xiface, int index, bool valid)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].agl_rx_clk_delay_bypass = valid;
+}
+
+bool cvmx_helper_get_agl_rx_clock_delay_bypass(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].agl_rx_clk_delay_bypass;
+}
+
+void cvmx_helper_set_agl_rx_clock_skew(int xiface, int index, uint8_t value)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].agl_rx_clk_skew = value;
+}
+
+uint8_t cvmx_helper_get_agl_rx_clock_skew(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].agl_rx_clk_skew;
+}
+
+void cvmx_helper_set_agl_refclk_sel(int xiface, int index, uint8_t value)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].agl_refclk_sel = value;
+}
+
+uint8_t cvmx_helper_get_agl_refclk_sel(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].agl_refclk_sel;
+}
+
+void cvmx_helper_set_port_force_link_up(int xiface, int index, bool value)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].force_link_up = value;
+}
+
+bool cvmx_helper_get_port_force_link_up(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].force_link_up;
+}
+
+void cvmx_helper_set_port_phy_present(int xiface, int index, bool value)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].phy_present = value;
+}
+
+bool cvmx_helper_get_port_phy_present(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].phy_present;
+}
+
+int __cvmx_helper_init_port_valid(void)
+{
+	int i, j, node;
+	bool valid;
+	static void *fdt_addr;
+	int rc;
+	struct cvmx_coremask pcm;
+
+	octeon_get_available_coremask(&pcm);
+
+	if (!fdt_addr)
+		fdt_addr = __cvmx_phys_addr_to_ptr((u64)gd->fdt_blob, 128 * 1024);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	if (octeon_has_feature(OCTEON_FEATURE_BGX)) {
+		rc = __cvmx_helper_parse_bgx_dt(fdt_addr);
+		if (!rc)
+			rc = __cvmx_fdt_parse_vsc7224(fdt_addr);
+		if (!rc)
+			rc = __cvmx_fdt_parse_avsp5410(fdt_addr);
+		if (!rc && octeon_has_feature(OCTEON_FEATURE_BGX_XCV))
+			rc = __cvmx_helper_parse_bgx_rgmii_dt(fdt_addr);
+
+		/* Some ports are not in sequence and the device tree does
+		 * not mark them invalid, so clear any ports that are not
+		 * defined in the device tree. Apply this to each node.
+		 */
+		for (node = 0; node < CVMX_MAX_NODES; node++) {
+			if (!cvmx_coremask_get64_node(&pcm, node))
+				continue;
+			for (i = 0; i < CVMX_HELPER_MAX_GMX; i++) {
+				int xiface = cvmx_helper_node_interface_to_xiface(node, i);
+
+				for (j = 0; j < cvmx_helper_interface_enumerate(i); j++) {
+					cvmx_bgxx_cmrx_config_t cmr_config;
+
+					cmr_config.u64 =
+						csr_rd_node(node, CVMX_BGXX_CMRX_CONFIG(j, i));
+					if ((cmr_config.s.lane_to_sds == 0xe4 &&
+					     cmr_config.s.lmac_type != 4 &&
+					     cmr_config.s.lmac_type != 1 &&
+					     cmr_config.s.lmac_type != 5) ||
+					    ((cvmx_helper_get_port_fdt_node_offset(xiface, j) ==
+					      CVMX_HELPER_CFG_INVALID_VALUE)))
+						cvmx_helper_set_port_valid(xiface, j, false);
+				}
+			}
+		}
+		return rc;
+	}
+
+	/* TODO: Update this to behave more like 78XX */
+	for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+		int n = cvmx_helper_interface_enumerate(i);
+
+		for (j = 0; j < n; j++) {
+			int ipd_port = cvmx_helper_get_ipd_port(i, j);
+
+			valid = (__cvmx_helper_board_get_port_from_dt(fdt_addr, ipd_port) == 1);
+			cvmx_helper_set_port_valid(i, j, valid);
+		}
+	}
+	return 0;
+}
+
+typedef int (*cvmx_import_config_t)(void);
+cvmx_import_config_t cvmx_import_app_config;
+
+int __cvmx_helper_init_port_config_data_local(void)
+{
+	int rv = 0;
+	int dbg = 0;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	/* The application config import is common to both paths */
+	if (cvmx_import_app_config) {
+		rv = (*cvmx_import_app_config)();
+		if (rv != 0) {
+			debug("failed to import config\n");
+			return -1;
+		}
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_helper_cfg_init_pko_port_map();
+		__cvmx_helper_cfg_init_ipd2pko_cache();
+	}
+	if (dbg) {
+		cvmx_helper_cfg_show_cfg();
+		cvmx_pko_queue_show();
+	}
+	return rv;
+}
+
+/*
+ * This call is made from Linux octeon_ethernet driver
+ * to setup the PKO with a specific queue count and
+ * internal port count configuration.
+ */
+int cvmx_pko_alloc_iport_and_queues(int interface, int port, int port_cnt, int queue_cnt)
+{
+	int rv, p, port_start, cnt;
+
+	if (dbg)
+		debug("%s: intf %d/%d pcnt %d qcnt %d\n", __func__, interface, port, port_cnt,
+		      queue_cnt);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		rv = cvmx_pko_internal_ports_alloc(interface, port, port_cnt);
+		if (rv < 0) {
+			printf("ERROR: %s: failed to allocate internal ports forinterface=%d port=%d cnt=%d\n",
+			       __func__, interface, port, port_cnt);
+			return -1;
+		}
+		port_start = __cvmx_helper_cfg_pko_port_base(interface, port);
+		cnt = __cvmx_helper_cfg_pko_port_num(interface, port);
+	} else {
+		port_start = cvmx_helper_get_ipd_port(interface, port);
+		cnt = 1;
+	}
+
+	for (p = port_start; p < port_start + cnt; p++) {
+		rv = cvmx_pko_queue_alloc(p, queue_cnt);
+		if (rv < 0) {
+			printf("ERROR: %s: failed to allocate queues for port=%d cnt=%d\n",
+			       __func__, p, queue_cnt);
+			return -1;
+		}
+	}
+	return 0;
+}
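+/*
+ * Illustrative example (a sketch, not called anywhere): a caller such
+ * as the octeon_ethernet driver could reserve one internal PKO port
+ * with two queues for interface 0, port 0 like this. The counts are
+ * example values only.
+ */
+static inline int __cvmx_cfg_example_alloc_pko(void)
+{
+	return cvmx_pko_alloc_iport_and_queues(0, 0, 1, 2);
+}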
+
+static void cvmx_init_port_cfg(void)
+{
+	int node, i, j;
+
+	if (port_cfg_data_initialized)
+		return;
+
+	for (node = 0; node < CVMX_MAX_NODES; node++) {
+		for (i = 0; i < CVMX_HELPER_MAX_IFACE; i++) {
+			for (j = 0; j < CVMX_HELPER_CFG_MAX_PORT_PER_IFACE; j++) {
+				struct cvmx_cfg_port_param *pcfg;
+				struct cvmx_srio_port_param *sr;
+
+				pcfg = &cvmx_cfg_port[node][i][j];
+				memset(pcfg, 0, sizeof(*pcfg));
+
+				pcfg->port_fdt_node = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->phy_fdt_node = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->phy_info = NULL;
+				pcfg->ccpp_pknd = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->ccpp_bpid = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->ccpp_pko_port_base = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->ccpp_pko_num_ports = CVMX_HELPER_CFG_INVALID_VALUE;
+				pcfg->agl_rx_clk_skew = 0;
+				pcfg->valid = true;
+				pcfg->sgmii_phy_mode = false;
+				pcfg->sgmii_1000x_mode = false;
+				pcfg->agl_rx_clk_delay_bypass = false;
+				pcfg->force_link_up = false;
+				pcfg->disable_an = false;
+				pcfg->link_down_pwr_dn = false;
+				pcfg->phy_present = false;
+				pcfg->tx_clk_delay_bypass = false;
+				pcfg->rgmii_tx_clk_delay = 0;
+				pcfg->enable_fec = false;
+				sr = &pcfg->srio_short;
+				sr->srio_rx_ctle_agc_override = false;
+				sr->srio_rx_ctle_zero = 0x6;
+				sr->srio_rx_agc_pre_ctle = 0x5;
+				sr->srio_rx_agc_post_ctle = 0x4;
+				sr->srio_tx_swing_override = false;
+				sr->srio_tx_swing = 0x7;
+				sr->srio_tx_premptap_override = false;
+				sr->srio_tx_premptap_pre = 0;
+				sr->srio_tx_premptap_post = 0xF;
+				sr->srio_tx_gain_override = false;
+				sr->srio_tx_gain = 0x3;
+				sr->srio_tx_vboost_override = 0;
+				sr->srio_tx_vboost = true;
+				sr = &pcfg->srio_long;
+				sr->srio_rx_ctle_agc_override = false;
+				sr->srio_rx_ctle_zero = 0x6;
+				sr->srio_rx_agc_pre_ctle = 0x5;
+				sr->srio_rx_agc_post_ctle = 0x4;
+				sr->srio_tx_swing_override = false;
+				sr->srio_tx_swing = 0x7;
+				sr->srio_tx_premptap_override = false;
+				sr->srio_tx_premptap_pre = 0;
+				sr->srio_tx_premptap_post = 0xF;
+				sr->srio_tx_gain_override = false;
+				sr->srio_tx_gain = 0x3;
+				sr->srio_tx_vboost_override = 0;
+				sr->srio_tx_vboost = true;
+				pcfg->agl_refclk_sel = 0;
+				pcfg->sfp_of_offset = -1;
+				pcfg->vsc7224_chan = NULL;
+			}
+		}
+	}
+	port_cfg_data_initialized = true;
+}
+
+int __cvmx_helper_init_port_config_data(int node)
+{
+	int rv = 0;
+	int i, j, n;
+	int num_interfaces, interface;
+	int pknd = 0, bpid = 0;
+	const int use_static_config = 1;
+
+	if (dbg)
+		printf("%s:\n", __func__);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* PKO3: only the BPID and PKND need to be set up here,
+		 * while the rest of the PKO3 init is done in
+		 * cvmx-helper-pko3.c
+		 */
+		pknd = 0;
+		bpid = 0;
+		for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+			int xiface = cvmx_helper_node_interface_to_xiface(node, i);
+
+			n = cvmx_helper_interface_enumerate(xiface);
+			/*
+			 * Assign 8 pknds to ILK interface, these pknds will be
+			 * distributed among the channels configured
+			 */
+			if (cvmx_helper_interface_get_mode(xiface) ==
+			    CVMX_HELPER_INTERFACE_MODE_ILK) {
+				if (n > 8)
+					n = 8;
+			}
+			if (cvmx_helper_interface_get_mode(xiface) !=
+			    CVMX_HELPER_INTERFACE_MODE_NPI) {
+				for (j = 0; j < n; j++) {
+					struct cvmx_cfg_port_param *pcfg;
+
+					pcfg = &cvmx_cfg_port[node][i][j];
+					pcfg->ccpp_pknd = pknd++;
+					pcfg->ccpp_bpid = bpid++;
+				}
+			} else {
+				for (j = 0; j < n; j++) {
+					if (j == n / cvmx_npi_max_pknds) {
+						pknd++;
+						bpid++;
+					}
+					cvmx_cfg_port[node][i][j].ccpp_pknd = pknd;
+					cvmx_cfg_port[node][i][j].ccpp_bpid = bpid;
+				}
+				pknd++;
+				bpid++;
+			}
+		} /* for i=0 */
+		cvmx_helper_cfg_assert(pknd <= CVMX_HELPER_CFG_MAX_PIP_PKND);
+		cvmx_helper_cfg_assert(bpid <= CVMX_HELPER_CFG_MAX_PIP_BPID);
+	} else if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		if (use_static_config)
+			cvmx_helper_cfg_init_pko_iports_and_queues_using_static_config();
+
+		/* Initialize pknd and bpid */
+		for (i = 0; i < cvmx_helper_get_number_of_interfaces(); i++) {
+			n = cvmx_helper_interface_enumerate(i);
+			for (j = 0; j < n; j++) {
+				cvmx_cfg_port[0][i][j].ccpp_pknd = pknd++;
+				cvmx_cfg_port[0][i][j].ccpp_bpid = bpid++;
+			}
+		}
+		cvmx_helper_cfg_assert(pknd <= CVMX_HELPER_CFG_MAX_PIP_PKND);
+		cvmx_helper_cfg_assert(bpid <= CVMX_HELPER_CFG_MAX_PIP_BPID);
+	} else {
+		if (use_static_config)
+			cvmx_pko_queue_init_from_cvmx_config_non_pknd();
+	}
+
+	/* Remainder not used for PKO3 */
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return 0;
+
+	/* Init any ports and queues which are not yet initialized */
+	num_interfaces = cvmx_helper_get_number_of_interfaces();
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int num_ports = __cvmx_helper_early_ports_on_interface(interface);
+		int port, port_base, queue;
+
+		for (port = 0; port < num_ports; port++) {
+			bool init_req = false;
+
+			if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+				port_base = __cvmx_helper_cfg_pko_port_base(interface, port);
+				if (port_base == CVMX_HELPER_CFG_INVALID_VALUE)
+					init_req = true;
+			} else {
+				port_base = cvmx_helper_get_ipd_port(interface, port);
+				queue = __cvmx_helper_cfg_pko_queue_base(port_base);
+				if (queue == CVMX_HELPER_CFG_INVALID_VALUE)
+					init_req = true;
+			}
+
+			if (init_req) {
+				rv = cvmx_pko_alloc_iport_and_queues(interface, port, 1, 1);
+				if (rv < 0) {
+					debug("cvm_pko_alloc_iport_and_queues failed.\n");
+					return rv;
+				}
+			}
+		}
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_helper_cfg_init_pko_port_map();
+		__cvmx_helper_cfg_init_ipd2pko_cache();
+	}
+
+	if (dbg) {
+		cvmx_helper_cfg_show_cfg();
+		cvmx_pko_queue_show();
+	}
+	return rv;
+}
+
+/**
+ * @INTERNAL
+ * Store the FDT node offset in the device tree of a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param node_offset	node offset to store
+ */
+void cvmx_helper_set_port_fdt_node_offset(int xiface, int index, int node_offset)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].port_fdt_node = node_offset;
+}
+
+/**
+ * @INTERNAL
+ * Return the FDT node offset in the device tree of a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @return		node offset of port or -1 if invalid
+ */
+int cvmx_helper_get_port_fdt_node_offset(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].port_fdt_node;
+}
+
+/**
+ * Search for a port based on its FDT node offset
+ *
+ * @param	of_offset	Node offset of port to search for
+ * @param[out]	xiface		xinterface of match
+ * @param[out]	index		port index of match
+ *
+ * @return	0 if found, -1 if not found
+ */
+int cvmx_helper_cfg_get_xiface_index_by_fdt_node_offset(int of_offset, int *xiface, int *index)
+{
+	int iface;
+	int i;
+	int node;
+	struct cvmx_cfg_port_param *pcfg = NULL;
+	*xiface = -1;
+	*index = -1;
+
+	for (node = 0; node < CVMX_MAX_NODES; node++) {
+		for (iface = 0; iface < CVMX_HELPER_MAX_IFACE; iface++) {
+			for (i = 0; i < CVMX_HELPER_CFG_MAX_PORT_PER_IFACE; i++) {
+				pcfg = &cvmx_cfg_port[node][iface][i];
+				if (pcfg->valid && pcfg->port_fdt_node == of_offset) {
+					*xiface = cvmx_helper_node_interface_to_xiface(node, iface);
+					*index = i;
+					return 0;
+				}
+			}
+		}
+	}
+	return -1;
+}
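+/*
+ * Illustrative example (a sketch, not called anywhere): resolving the
+ * IPD port for a device tree node offset, e.g. one obtained from a
+ * phandle lookup. The helper name is hypothetical.
+ */
+static inline int __cvmx_cfg_example_port_from_fdt(int of_offset)
+{
+	int xiface, index;
+
+	if (cvmx_helper_cfg_get_xiface_index_by_fdt_node_offset(of_offset,
+								&xiface,
+								&index))
+		return -1;	/* no valid port references this node */
+	return cvmx_helper_get_ipd_port(xiface, index);
+}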
+
+/**
+ * @INTERNAL
+ * Store the FDT node offset in the device tree of a phy
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param node_offset	node offset to store
+ */
+void cvmx_helper_set_phy_fdt_node_offset(int xiface, int index, int node_offset)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].phy_fdt_node = node_offset;
+}
+
+/**
+ * @INTERNAL
+ * Return the FDT node offset in the device tree of a phy
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @return		node offset of phy or -1 if invalid
+ */
+int cvmx_helper_get_phy_fdt_node_offset(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].phy_fdt_node;
+}
+
+/**
+ * @INTERNAL
+ * Override default autonegotiation for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param enable	true to enable autonegotiation, false to force full
+ *			duplex, full speed.
+ */
+void cvmx_helper_set_port_autonegotiation(int xiface, int index, bool enable)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].disable_an = !enable;
+}
+
+/**
+ * @INTERNAL
+ * Returns whether autonegotiation is enabled.
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return false if autonegotiation is disabled, true if enabled.
+ */
+bool cvmx_helper_get_port_autonegotiation(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return !cvmx_cfg_port[xi.node][xi.interface][index].disable_an;
+}
+
+/**
+ * @INTERNAL
+ * Override default forward error correction for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param enable	true to enable fec, false to disable it
+ */
+void cvmx_helper_set_port_fec(int xiface, int index, bool enable)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].enable_fec = enable;
+}
+
+/**
+ * @INTERNAL
+ * Returns whether forward error correction is enabled.
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return false if fec is disabled, true if enabled.
+ */
+bool cvmx_helper_get_port_fec(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].enable_fec;
+}
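+/*
+ * Illustrative example (a sketch, not called anywhere): a typical
+ * 10G-KR style override disables autonegotiation and enables FEC on a
+ * port before the interface is initialized.
+ */
+static inline void __cvmx_cfg_example_an_fec(int xiface, int index)
+{
+	cvmx_helper_set_port_autonegotiation(xiface, index, false);
+	cvmx_helper_set_port_fec(xiface, index, true);
+}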
+
+/**
+ * @INTERNAL
+ * Configure the SRIO RX interface AGC settings for host mode
+ *
+ * @param xiface	node and interface
+ * @param index		lane
+ * @param long_run	true for long run, false for short run
+ * @param ctle_zero_override	true to override the CTLE zero setting
+ * @param ctle_zero	RX equalizer peaking control (default 0x6)
+ * @param agc_override	true to put AGC in manual mode
+ * @param agc_pre_ctle	AGC pre-CTLE gain (default 0x5)
+ * @param agc_post_ctle	AGC post-CTLE gain (default 0x4)
+ *
+ * NOTE: This must be called before SRIO is initialized to take effect
+ */
+void cvmx_helper_set_srio_rx(int xiface, int index, bool long_run, bool ctle_zero_override,
+			     u8 ctle_zero, bool agc_override, uint8_t agc_pre_ctle,
+			     uint8_t agc_post_ctle)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_cfg_port_param *pcfg = &cvmx_cfg_port[xi.node][xi.interface][index];
+	struct cvmx_srio_port_param *sr = long_run ? &pcfg->srio_long : &pcfg->srio_short;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	sr->srio_rx_ctle_zero_override = ctle_zero_override;
+	sr->srio_rx_ctle_zero = ctle_zero;
+	sr->srio_rx_ctle_agc_override = agc_override;
+	sr->srio_rx_agc_pre_ctle = agc_pre_ctle;
+	sr->srio_rx_agc_post_ctle = agc_post_ctle;
+}
+
+/**
+ * @INTERNAL
+ * Get the SRIO RX interface AGC settings for host mode
+ *
+ * @param xiface	node and interface
+ * @param index		lane
+ * @param long_run	true for long run, false for short run
+ * @param[out] ctle_zero_override	true if the CTLE zero setting is overridden
+ * @param[out] ctle_zero	RX equalizer peaking control (default 0x6)
+ * @param[out] agc_override	true if AGC is in manual mode
+ * @param[out] agc_pre_ctle	AGC pre-CTLE gain (default 0x5)
+ * @param[out] agc_post_ctle	AGC post-CTLE gain (default 0x4)
+ */
+void cvmx_helper_get_srio_rx(int xiface, int index, bool long_run, bool *ctle_zero_override,
+			     u8 *ctle_zero, bool *agc_override, uint8_t *agc_pre_ctle,
+			     uint8_t *agc_post_ctle)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_cfg_port_param *pcfg = &cvmx_cfg_port[xi.node][xi.interface][index];
+	struct cvmx_srio_port_param *sr = long_run ? &pcfg->srio_long : &pcfg->srio_short;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	if (ctle_zero_override)
+		*ctle_zero_override = sr->srio_rx_ctle_zero_override;
+	if (ctle_zero)
+		*ctle_zero = sr->srio_rx_ctle_zero;
+	if (agc_override)
+		*agc_override = sr->srio_rx_ctle_agc_override;
+	if (agc_pre_ctle)
+		*agc_pre_ctle = sr->srio_rx_agc_pre_ctle;
+	if (agc_post_ctle)
+		*agc_post_ctle = sr->srio_rx_agc_post_ctle;
+}
+
+/**
+ * @INTERNAL
+ * Configure the SRIO TX interface for host mode
+ *
+ * @param xiface		node and interface
+ * @param index			lane
+ * @param long_run		true for long run, false for short run
+ * @param tx_swing		tx swing value to use (default 0x7), -1 to not
+ *				override.
+ * @param tx_gain		PCS SDS TX gain (default 0x3), -1 to not
+ *				override
+ * @param tx_premptap_override	true to override preemphasis control
+ * @param tx_premptap_pre	preemphasis pre tap value (default 0x0)
+ * @param tx_premptap_post	preemphasis post tap value (default 0xF)
+ * @param tx_vboost		vboost enable (1 = enable, -1 = don't override)
+ *				hardware default is 1.
+ *
+ * NOTE: This must be called before SRIO is initialized to take effect
+ */
+void cvmx_helper_set_srio_tx(int xiface, int index, bool long_run, int tx_swing, int tx_gain,
+			     bool tx_premptap_override, uint8_t tx_premptap_pre,
+			     u8 tx_premptap_post, int tx_vboost)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_cfg_port_param *pcfg = &cvmx_cfg_port[xi.node][xi.interface][index];
+	struct cvmx_srio_port_param *sr = long_run ? &pcfg->srio_long : &pcfg->srio_short;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	sr->srio_tx_swing_override = (tx_swing != -1);
+	sr->srio_tx_swing = tx_swing != -1 ? tx_swing : 0x7;
+	sr->srio_tx_gain_override = (tx_gain != -1);
+	sr->srio_tx_gain = tx_gain != -1 ? tx_gain : 0x3;
+	sr->srio_tx_premptap_override = tx_premptap_override;
+	sr->srio_tx_premptap_pre = tx_premptap_override ? tx_premptap_pre : 0;
+	sr->srio_tx_premptap_post = tx_premptap_override ? tx_premptap_post : 0xF;
+	sr->srio_tx_vboost_override = tx_vboost != -1;
+	sr->srio_tx_vboost = (tx_vboost != -1) ? tx_vboost : 1;
+}
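+/*
+ * Illustrative example (a sketch, not called anywhere): overriding only
+ * the SRIO TX swing for the long-run profile while leaving gain,
+ * preemphasis and vboost at their defaults (-1 and false mean "do not
+ * override"). The swing value 0xa is an example only; this must run
+ * before SRIO is initialized.
+ */
+static inline void __cvmx_cfg_example_srio_tx(int xiface, int lane)
+{
+	cvmx_helper_set_srio_tx(xiface, lane, true, 0xa, -1, false, 0, 0xF,
+				-1);
+}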
+
+/**
+ * @INTERNAL
+ * Get the SRIO TX interface settings for host mode
+ *
+ * @param xiface			node and interface
+ * @param index				lane
+ * @param long_run			true for long run, false for short run
+ * @param[out] tx_swing_override	true to override pcs_sds_txX_swing
+ * @param[out] tx_swing			tx swing value to use (default 0x7)
+ * @param[out] tx_gain_override		true to override default gain
+ * @param[out] tx_gain			PCS SDS TX gain (default 0x3)
+ * @param[out] tx_premptap_override	true to override preemphasis control
+ * @param[out] tx_premptap_pre		preemphasis pre tap value (default 0x0)
+ * @param[out] tx_premptap_post		preemphasis post tap value (default 0xF)
+ * @param[out] tx_vboost_override	override vboost setting
+ * @param[out] tx_vboost		vboost enable (default true)
+ */
+void cvmx_helper_get_srio_tx(int xiface, int index, bool long_run, bool *tx_swing_override,
+			     u8 *tx_swing, bool *tx_gain_override, uint8_t *tx_gain,
+			     bool *tx_premptap_override, uint8_t *tx_premptap_pre,
+			     u8 *tx_premptap_post, bool *tx_vboost_override, bool *tx_vboost)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_cfg_port_param *pcfg = &cvmx_cfg_port[xi.node][xi.interface][index];
+	struct cvmx_srio_port_param *sr = long_run ? &pcfg->srio_long : &pcfg->srio_short;
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+
+	if (tx_swing_override)
+		*tx_swing_override = sr->srio_tx_swing_override;
+	if (tx_swing)
+		*tx_swing = sr->srio_tx_swing;
+	if (tx_gain_override)
+		*tx_gain_override = sr->srio_tx_gain_override;
+	if (tx_gain)
+		*tx_gain = sr->srio_tx_gain;
+	if (tx_premptap_override)
+		*tx_premptap_override = sr->srio_tx_premptap_override;
+	if (tx_premptap_pre)
+		*tx_premptap_pre = sr->srio_tx_premptap_pre;
+	if (tx_premptap_post)
+		*tx_premptap_post = sr->srio_tx_premptap_post;
+	if (tx_vboost_override)
+		*tx_vboost_override = sr->srio_tx_vboost_override;
+	if (tx_vboost)
+		*tx_vboost = sr->srio_tx_vboost;
+}
+
+/**
+ * @INTERNAL
+ * Sets the PHY info data structure
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param[in] phy_info	phy information data structure pointer
+ */
+void cvmx_helper_set_port_phy_info(int xiface, int index, struct cvmx_phy_info *phy_info)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].phy_info = phy_info;
+}
+
+/**
+ * @INTERNAL
+ * Returns the PHY information data structure for a port
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to PHY information data structure or NULL if not set
+ */
+struct cvmx_phy_info *cvmx_helper_get_port_phy_info(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].phy_info;
+}
+
+/**
+ * @INTERNAL
+ * Returns a pointer to the PHY LED configuration (if local GPIOs drive them)
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to the PHY LED information data structure or NULL if not
+ *	   present
+ */
+struct cvmx_phy_gpio_leds *cvmx_helper_get_port_phy_leds(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].gpio_leds;
+}
+
+/**
+ * @INTERNAL
+ * Sets a pointer to the PHY LED configuration (if local GPIOs drive them)
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param leds		pointer to led data structure
+ */
+void cvmx_helper_set_port_phy_leds(int xiface, int index, struct cvmx_phy_gpio_leds *leds)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].gpio_leds = leds;
+}
+
+/**
+ * @INTERNAL
+ * Sets the RGMII TX clock bypass and delay value
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param bypass	Set true to enable the clock bypass, false to
+ *			drive clock and data synchronously.
+ *			Default is false.
+ * @param clk_delay	Delay value to skew TXC from TXD
+ */
+void cvmx_helper_cfg_set_rgmii_tx_clk_delay(int xiface, int index, bool bypass, int clk_delay)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].tx_clk_delay_bypass = bypass;
+	cvmx_cfg_port[xi.node][xi.interface][index].rgmii_tx_clk_delay = clk_delay;
+}
+
+/**
+ * @INTERNAL
+ * Gets RGMII TX clock bypass and delay value
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param[out] bypass	Clock bypass setting (default false)
+ * @param[out] clk_delay	Delay value to skew TXC from TXD (default 0)
+ */
+void cvmx_helper_cfg_get_rgmii_tx_clk_delay(int xiface, int index, bool *bypass, int *clk_delay)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	*bypass = cvmx_cfg_port[xi.node][xi.interface][index].tx_clk_delay_bypass;
+
+	*clk_delay = cvmx_cfg_port[xi.node][xi.interface][index].rgmii_tx_clk_delay;
+}
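+/*
+ * Illustrative example (a sketch, not called anywhere): adding two
+ * units of TXC-vs-TXD skew on an RGMII port and reading the setting
+ * back. The delay value is an example only.
+ */
+static inline void __cvmx_cfg_example_rgmii_delay(int xiface, int index)
+{
+	bool bypass;
+	int delay;
+
+	cvmx_helper_cfg_set_rgmii_tx_clk_delay(xiface, index, false, 2);
+	cvmx_helper_cfg_get_rgmii_tx_clk_delay(xiface, index, &bypass,
+					       &delay);
+}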
+
+/**
+ * @INTERNAL
+ * Retrieve the SFP node offset in the device tree
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return offset in device tree or -1 if error or not defined.
+ */
+int cvmx_helper_cfg_get_sfp_fdt_offset(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].sfp_of_offset;
+}
+
+/**
+ * @INTERNAL
+ * Sets the SFP node offset
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ * @param sfp_of_offset	Offset of SFP node in device tree
+ */
+void cvmx_helper_cfg_set_sfp_fdt_offset(int xiface, int index, int sfp_of_offset)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].sfp_of_offset = sfp_of_offset;
+}
+
+/**
+ * Get data structure defining the Microsemi VSC7224 channel info
+ * or NULL if not present
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to vsc7224 data structure or NULL if not present
+ */
+struct cvmx_vsc7224_chan *cvmx_helper_cfg_get_vsc7224_chan_info(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].vsc7224_chan;
+}
+
+/**
+ * Sets the Microsemi VSC7224 channel info data structure
+ *
+ * @param	xiface	node and interface
+ * @param	index	port index
+ * @param[in]	vsc7224_chan_info	Microsemi VSC7224 channel data structure
+ */
+void cvmx_helper_cfg_set_vsc7224_chan_info(int xiface, int index,
+					   struct cvmx_vsc7224_chan *vsc7224_chan_info)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].vsc7224_chan = vsc7224_chan_info;
+}
+
+/**
+ * Get data structure defining the Avago AVSP5410 phy info
+ * or NULL if not present
+ *
+ * @param xiface	node and interface
+ * @param index		port index
+ *
+ * @return pointer to avsp5410 data structure or NULL if not present
+ */
+struct cvmx_avsp5410 *cvmx_helper_cfg_get_avsp5410_info(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].avsp5410;
+}
+
+/**
+ * Sets the Avago AVSP5410 phy info data structure
+ *
+ * @param	xiface	node and interface
+ * @param	index	port index
+ * @param[in]	avsp5410_info	Avago AVSP5410 data structure
+ */
+void cvmx_helper_cfg_set_avsp5410_info(int xiface, int index, struct cvmx_avsp5410 *avsp5410_info)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].avsp5410 = avsp5410_info;
+}
+
+/**
+ * Gets the SFP data associated with a port
+ *
+ * @param	xiface	node and interface
+ * @param	index	port index
+ *
+ * @return	pointer to SFP data structure or NULL if none
+ */
+struct cvmx_fdt_sfp_info *cvmx_helper_cfg_get_sfp_info(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].sfp_info;
+}
+
+/**
+ * Sets the SFP data associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ * @param[in]	sfp_info	port SFP data or NULL for none
+ */
+void cvmx_helper_cfg_set_sfp_info(int xiface, int index, struct cvmx_fdt_sfp_info *sfp_info)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].sfp_info = sfp_info;
+}
+
+/**
+ * Returns a pointer to the phy device associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ *
+ * @return	pointer to phy device or NULL if none
+ */
+struct phy_device *cvmx_helper_cfg_get_phy_device(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	return cvmx_cfg_port[xi.node][xi.interface][index].phydev;
+}
+
+/**
+ * Sets the phy device associated with a port
+ *
+ * @param	xiface		node and interface
+ * @param	index		port index
+ * @param[in]	phydev		phy device to associate
+ */
+void cvmx_helper_cfg_set_phy_device(int xiface, int index, struct phy_device *phydev)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (!port_cfg_data_initialized)
+		cvmx_init_port_cfg();
+	cvmx_cfg_port[xi.node][xi.interface][index].phydev = phydev;
+}
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 37/50] mips: octeon: Add cvmx-helper-fdt.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (35 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 36/50] mips: octeon: Add cvmx-helper-cfg.c Stefan Roese
@ 2020-12-11 16:05 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 38/50] mips: octeon: Add cvmx-helper-jtag.c Stefan Roese
                   ` (15 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:05 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-helper-fdt.c from 2013 U-Boot. It will be used by the later
added drivers to support PCIe and networking on the MIPS Octeon II / III
platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-helper-fdt.c | 970 ++++++++++++++++++++++++
 1 file changed, 970 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-fdt.c

diff --git a/arch/mips/mach-octeon/cvmx-helper-fdt.c b/arch/mips/mach-octeon/cvmx-helper-fdt.c
new file mode 100644
index 0000000000..87bc6d2adc
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-helper-fdt.c
@@ -0,0 +1,970 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * FDT helper functions similar to those provided by U-Boot.
+ */
+
+#include <log.h>
+#include <malloc.h>
+#include <net.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-csr.h>
+#include <mach/cvmx-bootmem.h>
+#include <mach/octeon-model.h>
+#include <mach/octeon_fdt.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-board.h>
+#include <mach/cvmx-helper-cfg.h>
+#include <mach/cvmx-helper-fdt.h>
+#include <mach/cvmx-helper-gpio.h>
+
+/** Structure used to get type of GPIO from device tree */
+struct gpio_compat {
+	char *compatible;	  /** Compatible string */
+	enum cvmx_gpio_type type; /** Type */
+	int8_t size;		  /** (max) Number of pins */
+};
+
+#define GPIO_REG_PCA953X_IN	0
+#define GPIO_REG_PCA953X_OUT	1
+#define GPIO_REG_PCA953X_INVERT 2
+#define GPIO_REG_PCA953X_DIR	3
+
+#define GPIO_REG_PCA957X_IN	0
+#define GPIO_REG_PCA957X_INVERT 1
+#define GPIO_REG_PCA957X_CFG	4
+#define GPIO_REG_PCA957X_OUT	5
+
+enum cvmx_i2c_mux_type { I2C_MUX, I2C_SWITCH };
+
+/** Structure used to get type of GPIO from device tree */
+struct mux_compat {
+	char *compatible;		 /** Compatible string */
+	enum cvmx_i2c_bus_type type;	 /** Mux chip type */
+	enum cvmx_i2c_mux_type mux_type; /** Type of mux */
+	u8 enable;			 /** Enable bit for mux */
+	u8 size;			 /** (max) Number of channels */
+};
+
+/**
+ * Local allocator that works in both SE and U-Boot and zeroes out the memory
+ *
+ * @param	size	number of bytes to allocate
+ *
+ * @return	pointer to allocated memory or NULL if out of memory.
+ *		Alignment is set to 8-bytes.
+ */
+void *__cvmx_fdt_alloc(size_t size)
+{
+	return calloc(size, 1);
+}
+
+/**
+ * Free allocated memory.
+ *
+ * @param	ptr	pointer to memory to free
+ *
+ * NOTE: This only works in U-Boot since SE does not really have a freeing
+ *	 mechanism.  In SE the memory is zeroed out.
+ */
+void __cvmx_fdt_free(void *ptr, size_t size)
+{
+	free(ptr);
+}
+
+/**
+ * Look up a phandle and follow it to its node then return the offset of that
+ * node.
+ *
+ * @param[in]	fdt_addr	pointer to FDT blob
+ * @param	node		node to read phandle from
+ * @param[in]	prop_name	name of property to find
+ * @param[in,out] lenp		On input, max number of entries in @nodes;
+ *				on output, number of phandles found
+ * @param[out]	nodes		Array of phandle nodes
+ *
+ * @return	-ve error code on error or 0 for success
+ */
+int cvmx_fdt_lookup_phandles(const void *fdt_addr, int node,
+			     const char *prop_name, int *lenp,
+			     int *nodes)
+{
+	const u32 *phandles;
+	int count;
+	int i;
+
+	phandles = fdt_getprop(fdt_addr, node, prop_name, &count);
+	if (!phandles || count < 0)
+		return -FDT_ERR_NOTFOUND;
+
+	count /= 4;
+	if (count > *lenp)
+		count = *lenp;
+
+	for (i = 0; i < count; i++)
+		nodes[i] = fdt_node_offset_by_phandle(fdt_addr,
+						      fdt32_to_cpu(phandles[i]));
+	*lenp = count;
+	return 0;
+}
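+/*
+ * Illustrative example (a sketch, not called anywhere): fetching up to
+ * four phandle targets from a node. The "gpio-handles" property name is
+ * hypothetical.
+ */
+static inline int __cvmx_fdt_example_phandles(const void *fdt_addr, int node)
+{
+	int nodes[4];
+	int len = 4;
+
+	if (cvmx_fdt_lookup_phandles(fdt_addr, node, "gpio-handles", &len,
+				     nodes))
+		return -1;
+	return len;	/* number of phandles actually resolved */
+}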
+
+/**
+ * Given a FDT node return the CPU node number
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	node		FDT node number
+ *
+ * @return	CPU node number or error if negative
+ */
+int cvmx_fdt_get_cpu_node(const void *fdt_addr, int node)
+{
+	int parent = node;
+	const u32 *ranges;
+	int len = 0;
+
+	while (fdt_node_check_compatible(fdt_addr, parent, "simple-bus") != 0) {
+		parent = fdt_parent_offset(fdt_addr, parent);
+		if (parent < 0)
+			return parent;
+	}
+	ranges = fdt_getprop(fdt_addr, parent, "ranges", &len);
+	if (!ranges)
+		return len;
+
+	if (len == 0)
+		return 0;
+
+	if (len < 24)
+		return -FDT_ERR_TRUNCATED;
+
+	return fdt32_to_cpu(ranges[2]) / 0x10;
+}
+
+/**
+ * Get the total size of the flat device tree
+ *
+ * @param[in]	fdt_addr	Address of FDT
+ *
+ * @return	Size of flat device tree in bytes or error if negative.
+ */
+int cvmx_fdt_get_fdt_size(const void *fdt_addr)
+{
+	int rc;
+
+	rc = fdt_check_header(fdt_addr);
+	if (rc)
+		return rc;
+	return fdt_totalsize(fdt_addr);
+}
+
+/**
+ * Check whether a node is compatible with one of the items in the string list
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	node		Node offset to check
+ * @param[in]	strlist		Array of FDT device compatibility strings,
+ *				must end with NULL or empty string.
+ *
+ * @return	0 if at least one item matches, 1 if none match
+ */
+int cvmx_fdt_node_check_compatible_list(const void *fdt_addr, int node, const char *const *strlist)
+{
+	while (*strlist && **strlist) {
+		if (!fdt_node_check_compatible(fdt_addr, node, *strlist))
+			return 0;
+		strlist++;
+	}
+	return 1;
+}
+
+/**
+ * Given a FDT node, return the next compatible node.
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	startoffset	Starting node offset or -1 to find the first
+ * @param	strlist		Array of FDT device compatibility strings, must
+ *				end with NULL or empty string.
+ *
+ * @return	next matching node or -1 if no more matches.
+ */
+int cvmx_fdt_node_offset_by_compatible_list(const void *fdt_addr, int startoffset,
+					    const char *const *strlist)
+{
+	int offset;
+
+	for (offset = fdt_next_node(fdt_addr, startoffset, NULL); offset >= 0;
+	     offset = fdt_next_node(fdt_addr, offset, NULL)) {
+		if (!cvmx_fdt_node_check_compatible_list(fdt_addr, offset, strlist))
+			return offset;
+	}
+	return -1;
+}
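+/*
+ * Illustrative example (a sketch, not called anywhere): counting every
+ * node compatible with one of several strings. The compatible values
+ * shown are the two parts parsed later in this file.
+ */
+static inline int __cvmx_fdt_example_walk(const void *fdt_addr)
+{
+	static const char * const compat[] = {
+		"vitesse,vsc7224", "avago,avsp-5410", NULL
+	};
+	int offset = -1;
+	int count = 0;
+
+	while ((offset = cvmx_fdt_node_offset_by_compatible_list(fdt_addr,
+								 offset,
+								 compat)) >= 0)
+		count++;
+	return count;
+}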
+
+/**
+ * Attaches a PHY to a SFP or QSFP.
+ *
+ * @param	sfp		sfp to attach PHY to
+ * @param	phy_info	phy descriptor to attach or NULL to detach
+ */
+void cvmx_sfp_attach_phy(struct cvmx_fdt_sfp_info *sfp, struct cvmx_phy_info *phy_info)
+{
+	sfp->phy_info = phy_info;
+	if (phy_info)
+		phy_info->sfp_info = sfp;
+}
+
+/**
+ * Assigns an IPD port to a SFP slot
+ *
+ * @param	sfp		Handle to SFP data structure
+ * @param	ipd_port	Port to assign it to
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_sfp_set_ipd_port(struct cvmx_fdt_sfp_info *sfp, int ipd_port)
+{
+	int i;
+
+	if (sfp->is_qsfp) {
+		int xiface;
+		cvmx_helper_interface_mode_t mode;
+
+		xiface = cvmx_helper_get_interface_num(ipd_port);
+		mode = cvmx_helper_interface_get_mode(xiface);
+		sfp->ipd_port[0] = ipd_port;
+
+		switch (mode) {
+		case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		case CVMX_HELPER_INTERFACE_MODE_XFI:
+		case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+			for (i = 1; i < 4; i++)
+				sfp->ipd_port[i] = cvmx_helper_get_ipd_port(xiface, i);
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+		case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+			sfp->ipd_port[0] = ipd_port;
+			for (i = 1; i < 4; i++)
+				sfp->ipd_port[i] = -1;
+			break;
+		default:
+			debug("%s: Interface mode %s for interface 0x%x, ipd_port %d not supported for QSFP\n",
+			      __func__, cvmx_helper_interface_mode_to_string(mode), xiface,
+			      ipd_port);
+			return -1;
+		}
+	} else {
+		sfp->ipd_port[0] = ipd_port;
+		for (i = 1; i < 4; i++)
+			sfp->ipd_port[i] = -1;
+	}
+	return 0;
+}
+
+/**
+ * Parses all of the channels assigned to a VSC7224 device
+ *
+ * @param[in]		fdt_addr	Address of flat device tree
+ * @param		of_offset	Offset of vsc7224 node
+ * @param[in,out]	vsc7224		Data structure to hold the data
+ *
+ * @return	0 for success, -1 on error
+ */
+static int cvmx_fdt_parse_vsc7224_channels(const void *fdt_addr, int of_offset,
+					   struct cvmx_vsc7224 *vsc7224)
+{
+	int parent_offset = of_offset;
+	int err = 0;
+	int reg;
+	int num_chan = 0;
+	struct cvmx_vsc7224_chan *channel;
+	struct cvmx_fdt_sfp_info *sfp_info;
+	int len;
+	int num_taps;
+	int i;
+	const u32 *tap_values;
+	int of_mac;
+	int xiface, index;
+	bool is_tx;
+	bool is_qsfp;
+	const char *mac_str;
+
+	debug("%s(%p, %d, %s)\n", __func__, fdt_addr, of_offset, vsc7224->name);
+	do {
+		/* Walk through all channels */
+		of_offset = fdt_node_offset_by_compatible(fdt_addr, of_offset,
+							  "vitesse,vsc7224-channel");
+		if (of_offset == -FDT_ERR_NOTFOUND) {
+			break;
+		} else if (of_offset < 0) {
+			debug("%s: Failed finding compatible channel\n",
+			      __func__);
+			err = -1;
+			break;
+		}
+		if (fdt_parent_offset(fdt_addr, of_offset) != parent_offset)
+			break;
+		reg = cvmx_fdt_get_int(fdt_addr, of_offset, "reg", -1);
+		if (reg < 0 || reg > 3) {
+			debug("%s: channel reg is either not present or out of range\n",
+			      __func__);
+			err = -1;
+			break;
+		}
+		is_tx = cvmx_fdt_get_bool(fdt_addr, of_offset, "direction-tx");
+
+		debug("%s(%s): Adding %cx channel %d\n",
+		      __func__, vsc7224->name, is_tx ? 't' : 'r',
+		      reg);
+		tap_values = (const uint32_t *)fdt_getprop(fdt_addr, of_offset, "taps", &len);
+		if (!tap_values) {
+			debug("%s: Error: no taps defined for vsc7224 channel %d\n",
+			      __func__, reg);
+			err = -1;
+			break;
+		}
+
+		if (vsc7224->channel[reg]) {
+			debug("%s: Error: channel %d already assigned at %p\n",
+			      __func__, reg,
+			      vsc7224->channel[reg]);
+			err = -1;
+			break;
+		}
+		if (len % 16) {
+			debug("%s: Error: tap format error for channel %d\n",
+			      __func__, reg);
+			err = -1;
+			break;
+		}
+		num_taps = len / 16;
+		debug("%s: Adding %d taps\n", __func__, num_taps);
+
+		channel = __cvmx_fdt_alloc(sizeof(*channel) +
+					   num_taps * sizeof(struct cvmx_vsc7224_tap));
+		if (!channel) {
+			debug("%s: Out of memory\n", __func__);
+			err = -1;
+			break;
+		}
+		vsc7224->channel[reg] = channel;
+		num_chan++;	/* bound the loop to the four channel slots */
+		channel->num_taps = num_taps;
+		channel->lane = reg;
+		channel->of_offset = of_offset;
+		channel->is_tx = is_tx;
+		channel->pretap_disable = cvmx_fdt_get_bool(fdt_addr, of_offset, "pretap-disable");
+		channel->posttap_disable =
+			cvmx_fdt_get_bool(fdt_addr, of_offset, "posttap-disable");
+		channel->vsc7224 = vsc7224;
+		/* Read all the tap values */
+		for (i = 0; i < num_taps; i++) {
+			channel->taps[i].len = fdt32_to_cpu(tap_values[i * 4 + 0]);
+			channel->taps[i].main_tap = fdt32_to_cpu(tap_values[i * 4 + 1]);
+			channel->taps[i].pre_tap = fdt32_to_cpu(tap_values[i * 4 + 2]);
+			channel->taps[i].post_tap = fdt32_to_cpu(tap_values[i * 4 + 3]);
+			debug("%s: tap %d: len: %d, main_tap: 0x%x, pre_tap: 0x%x, post_tap: 0x%x\n",
+			      __func__, i, channel->taps[i].len, channel->taps[i].main_tap,
+			      channel->taps[i].pre_tap, channel->taps[i].post_tap);
+		}
+		/* Now find out which interface it's mapped to */
+		channel->ipd_port = -1;
+
+		mac_str = "sfp-mac";
+		if (fdt_getprop(fdt_addr, of_offset, mac_str, NULL)) {
+			is_qsfp = false;
+		} else if (fdt_getprop(fdt_addr, of_offset, "qsfp-mac", NULL)) {
+			is_qsfp = true;
+			mac_str = "qsfp-mac";
+		} else {
+			debug("%s: Error: MAC not found for %s channel %d\n", __func__,
+			      vsc7224->name, reg);
+			return -1;
+		}
+		of_mac = cvmx_fdt_lookup_phandle(fdt_addr, of_offset, mac_str);
+		if (of_mac < 0) {
+			debug("%s: Error %d with MAC %s phandle for %s\n", __func__, of_mac,
+			      mac_str, vsc7224->name);
+			return -1;
+		}
+
+		debug("%s: Found mac at offset %d\n", __func__, of_mac);
+		err = cvmx_helper_cfg_get_xiface_index_by_fdt_node_offset(of_mac, &xiface, &index);
+		if (!err) {
+			channel->xiface = xiface;
+			channel->index = index;
+			channel->ipd_port = cvmx_helper_get_ipd_port(xiface, index);
+
+			debug("%s: Found MAC, xiface: 0x%x, index: %d, ipd port: %d\n", __func__,
+			      xiface, index, channel->ipd_port);
+			if (channel->ipd_port >= 0) {
+				cvmx_helper_cfg_set_vsc7224_chan_info(xiface, index, channel);
+				debug("%s: Storing config channel for xiface 0x%x, index %d\n",
+				      __func__, xiface, index);
+			}
+			sfp_info = cvmx_helper_cfg_get_sfp_info(xiface, index);
+			if (!sfp_info) {
+				debug("%s: Warning: no (Q)SFP+ slot found for xinterface 0x%x, index %d for channel %d\n",
+				      __func__, xiface, index, channel->lane);
+				continue;
+			}
+
+			/* Link it */
+			channel->next = sfp_info->vsc7224_chan;
+			if (sfp_info->vsc7224_chan)
+				sfp_info->vsc7224_chan->prev = channel;
+			sfp_info->vsc7224_chan = channel;
+			sfp_info->is_vsc7224 = true;
+			debug("%s: Registering VSC7224 %s channel %d with SFP %s\n", __func__,
+			      vsc7224->name, channel->lane, sfp_info->name);
+			if (!sfp_info->mod_abs_changed) {
+				debug("%s: Registering cvmx_sfp_vsc7224_mod_abs_changed at %p for xinterface 0x%x, index %d\n",
+				      __func__, &cvmx_sfp_vsc7224_mod_abs_changed, xiface, index);
+				cvmx_sfp_register_mod_abs_changed(
+					sfp_info,
+					&cvmx_sfp_vsc7224_mod_abs_changed,
+					NULL);
+			}
+		}
+	} while (!err && num_chan < 4);
+
+	return err;
+}
+
+/**
+ * @INTERNAL
+ * Parses all instances of the Vitesse VSC7224 reclocking chip
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ *
+ * @return	0 for success, error otherwise
+ */
+int __cvmx_fdt_parse_vsc7224(const void *fdt_addr)
+{
+	int of_offset = -1;
+	struct cvmx_vsc7224 *vsc7224 = NULL;
+	struct cvmx_fdt_gpio_info *gpio_info = NULL;
+	int err = 0;
+	int of_parent;
+	static bool parsed;
+
+	debug("%s(%p)\n", __func__, fdt_addr);
+
+	if (parsed) {
+		debug("%s: Already parsed\n", __func__);
+		return 0;
+	}
+	do {
+		of_offset = fdt_node_offset_by_compatible(fdt_addr, of_offset,
+							  "vitesse,vsc7224");
+		debug("%s: of_offset: %d\n", __func__, of_offset);
+		if (of_offset == -FDT_ERR_NOTFOUND) {
+			break;
+		} else if (of_offset < 0) {
+			err = -1;
+			debug("%s: Error %d parsing FDT\n",
+			      __func__, of_offset);
+			break;
+		}
+
+		vsc7224 = __cvmx_fdt_alloc(sizeof(*vsc7224));
+
+		if (!vsc7224) {
+			debug("%s: Out of memory!\n", __func__);
+			return -1;
+		}
+		vsc7224->of_offset = of_offset;
+		vsc7224->i2c_addr = cvmx_fdt_get_int(fdt_addr, of_offset,
+						     "reg", -1);
+		of_parent = fdt_parent_offset(fdt_addr, of_offset);
+		vsc7224->i2c_bus = cvmx_fdt_get_i2c_bus(fdt_addr, of_parent);
+		if (vsc7224->i2c_addr < 0) {
+			debug("%s: Error: reg field missing\n", __func__);
+			err = -1;
+			break;
+		}
+		if (!vsc7224->i2c_bus) {
+			debug("%s: Error getting i2c bus\n", __func__);
+			err = -1;
+			break;
+		}
+		vsc7224->name = fdt_get_name(fdt_addr, of_offset, NULL);
+		debug("%s: Adding %s\n", __func__, vsc7224->name);
+		if (fdt_getprop(fdt_addr, of_offset, "reset", NULL)) {
+			gpio_info = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "reset");
+			vsc7224->reset_gpio = gpio_info;
+		}
+		if (fdt_getprop(fdt_addr, of_offset, "los", NULL)) {
+			gpio_info = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "los");
+			vsc7224->los_gpio = gpio_info;
+		}
+		debug("%s: Parsing channels\n", __func__);
+		err = cvmx_fdt_parse_vsc7224_channels(fdt_addr, of_offset, vsc7224);
+		if (err) {
+			debug("%s: Error parsing VSC7224 channels\n", __func__);
+			break;
+		}
+	} while (of_offset > 0);
+
+	if (err) {
+		debug("%s(): Error\n", __func__);
+		if (vsc7224) {
+			if (vsc7224->reset_gpio)
+				__cvmx_fdt_free(vsc7224->reset_gpio, sizeof(*vsc7224->reset_gpio));
+			if (vsc7224->los_gpio)
+				__cvmx_fdt_free(vsc7224->los_gpio, sizeof(*vsc7224->los_gpio));
+			if (vsc7224->i2c_bus)
+				cvmx_fdt_free_i2c_bus(vsc7224->i2c_bus);
+			__cvmx_fdt_free(vsc7224, sizeof(*vsc7224));
+		}
+	}
+	if (!err)
+		parsed = true;
+
+	return err;
+}
+
+/**
+ * @INTERNAL
+ * Parses all instances of the Avago AVSP5410 gearbox phy
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ *
+ * @return	0 for success, error otherwise
+ */
+int __cvmx_fdt_parse_avsp5410(const void *fdt_addr)
+{
+	int of_offset = -1;
+	struct cvmx_avsp5410 *avsp5410 = NULL;
+	struct cvmx_fdt_sfp_info *sfp_info;
+	int err = 0;
+	int of_parent;
+	static bool parsed;
+	int of_mac;
+	int xiface, index;
+	bool is_qsfp;
+	const char *mac_str;
+
+	debug("%s(%p)\n", __func__, fdt_addr);
+
+	if (parsed) {
+		debug("%s: Already parsed\n", __func__);
+		return 0;
+	}
+
+	do {
+		of_offset = fdt_node_offset_by_compatible(fdt_addr, of_offset,
+							  "avago,avsp-5410");
+		debug("%s: of_offset: %d\n", __func__, of_offset);
+		if (of_offset == -FDT_ERR_NOTFOUND) {
+			break;
+		} else if (of_offset < 0) {
+			err = -1;
+			debug("%s: Error %d parsing FDT\n", __func__, of_offset);
+			break;
+		}
+
+		avsp5410 = __cvmx_fdt_alloc(sizeof(*avsp5410));
+
+		if (!avsp5410) {
+			debug("%s: Out of memory!\n", __func__);
+			return -1;
+		}
+		avsp5410->of_offset = of_offset;
+		avsp5410->i2c_addr = cvmx_fdt_get_int(fdt_addr, of_offset,
+						      "reg", -1);
+		of_parent = fdt_parent_offset(fdt_addr, of_offset);
+		avsp5410->i2c_bus = cvmx_fdt_get_i2c_bus(fdt_addr, of_parent);
+		if (avsp5410->i2c_addr < 0) {
+			debug("%s: Error: reg field missing\n", __func__);
+			err = -1;
+			break;
+		}
+		if (!avsp5410->i2c_bus) {
+			debug("%s: Error getting i2c bus\n", __func__);
+			err = -1;
+			break;
+		}
+		avsp5410->name = fdt_get_name(fdt_addr, of_offset, NULL);
+		debug("%s: Adding %s\n", __func__, avsp5410->name);
+
+		/* Now find out which interface it's mapped to */
+		avsp5410->ipd_port = -1;
+
+		mac_str = "sfp-mac";
+		if (fdt_getprop(fdt_addr, of_offset, mac_str, NULL)) {
+			is_qsfp = false;
+		} else if (fdt_getprop(fdt_addr, of_offset, "qsfp-mac", NULL)) {
+			is_qsfp = true;
+			mac_str = "qsfp-mac";
+		} else {
+			debug("%s: Error: MAC not found for %s\n", __func__, avsp5410->name);
+			return -1;
+		}
+		of_mac = cvmx_fdt_lookup_phandle(fdt_addr, of_offset, mac_str);
+		if (of_mac < 0) {
+			debug("%s: Error %d with MAC %s phandle for %s\n", __func__, of_mac,
+			      mac_str, avsp5410->name);
+			return -1;
+		}
+
+		debug("%s: Found mac at offset %d\n", __func__, of_mac);
+		err = cvmx_helper_cfg_get_xiface_index_by_fdt_node_offset(of_mac, &xiface, &index);
+		if (!err) {
+			avsp5410->xiface = xiface;
+			avsp5410->index = index;
+			avsp5410->ipd_port = cvmx_helper_get_ipd_port(xiface, index);
+
+			debug("%s: Found MAC, xiface: 0x%x, index: %d, ipd port: %d\n", __func__,
+			      xiface, index, avsp5410->ipd_port);
+			if (avsp5410->ipd_port >= 0) {
+				cvmx_helper_cfg_set_avsp5410_info(xiface, index, avsp5410);
+				debug("%s: Storing config phy for xiface 0x%x, index %d\n",
+				      __func__, xiface, index);
+			}
+			sfp_info = cvmx_helper_cfg_get_sfp_info(xiface, index);
+			if (!sfp_info) {
+				debug("%s: Warning: no (Q)SFP+ slot found for xinterface 0x%x, index %d\n",
+				      __func__, xiface, index);
+				continue;
+			}
+
+			sfp_info->is_avsp5410 = true;
+			sfp_info->avsp5410 = avsp5410;
+			debug("%s: Registering AVSP5410 %s with SFP %s\n", __func__, avsp5410->name,
+			      sfp_info->name);
+			if (!sfp_info->mod_abs_changed) {
+				debug("%s: Registering cvmx_sfp_avsp5410_mod_abs_changed at %p for xinterface 0x%x, index %d\n",
+				      __func__, &cvmx_sfp_avsp5410_mod_abs_changed, xiface, index);
+				cvmx_sfp_register_mod_abs_changed(
+					sfp_info,
+					&cvmx_sfp_avsp5410_mod_abs_changed,
+					NULL);
+			}
+		}
+	} while (of_offset > 0);
+
+	if (err) {
+		debug("%s(): Error\n", __func__);
+		if (avsp5410) {
+			if (avsp5410->i2c_bus)
+				cvmx_fdt_free_i2c_bus(avsp5410->i2c_bus);
+			__cvmx_fdt_free(avsp5410, sizeof(*avsp5410));
+		}
+	}
+	if (!err)
+		parsed = true;
+
+	return err;
+}
+
+/**
+ * Parse QSFP GPIOs for SFP
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	of_offset	Offset of QSFP node
+ * @param[out]	sfp_info	Pointer to sfp info to fill in
+ *
+ * @return	0 for success
+ */
+static int cvmx_parse_qsfp(const void *fdt_addr, int of_offset, struct cvmx_fdt_sfp_info *sfp_info)
+{
+	sfp_info->select = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "select");
+	sfp_info->mod_abs = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "mod_prs");
+	sfp_info->reset = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "reset");
+	sfp_info->interrupt = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "interrupt");
+	sfp_info->lp_mode = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "lp_mode");
+	return 0;
+}
+
+/**
+ * Parse SFP GPIOs for SFP
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	of_offset	Offset of SFP node
+ * @param[out]	sfp_info	Pointer to sfp info to fill in
+ *
+ * @return	0 for success
+ */
+static int cvmx_parse_sfp(const void *fdt_addr, int of_offset, struct cvmx_fdt_sfp_info *sfp_info)
+{
+	sfp_info->mod_abs = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "mod_abs");
+	sfp_info->rx_los = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "rx_los");
+	sfp_info->tx_disable = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "tx_disable");
+	sfp_info->tx_error = cvmx_fdt_gpio_get_info_phandle(fdt_addr, of_offset, "tx_error");
+	return 0;
+}
+
+/**
+ * Parse SFP/QSFP EEPROM and diag
+ *
+ * @param[in]	fdt_addr	Pointer to flat device tree
+ * @param	of_offset	Offset of SFP node
+ * @param[out]	sfp_info	Pointer to sfp info to fill in
+ *
+ * @return	0 for success, -1 on error
+ */
+static int cvmx_parse_sfp_eeprom(const void *fdt_addr, int of_offset,
+				 struct cvmx_fdt_sfp_info *sfp_info)
+{
+	int of_eeprom;
+	int of_diag;
+
+	debug("%s(%p, %d, %s)\n", __func__, fdt_addr, of_offset, sfp_info->name);
+	of_eeprom = cvmx_fdt_lookup_phandle(fdt_addr, of_offset, "eeprom");
+	if (of_eeprom < 0) {
+		debug("%s: Missing \"eeprom\" from device tree for %s\n", __func__, sfp_info->name);
+		return -1;
+	}
+
+	sfp_info->i2c_bus = cvmx_fdt_get_i2c_bus(fdt_addr, fdt_parent_offset(fdt_addr, of_eeprom));
+	sfp_info->i2c_eeprom_addr = cvmx_fdt_get_int(fdt_addr, of_eeprom, "reg", 0x50);
+
+	debug("%s(%p, %d, %s, %d)\n", __func__, fdt_addr, of_offset, sfp_info->name,
+	      sfp_info->i2c_eeprom_addr);
+
+	if (!sfp_info->i2c_bus) {
+		debug("%s: Error: could not determine i2c bus for eeprom for %s\n", __func__,
+		      sfp_info->name);
+		return -1;
+	}
+	of_diag = cvmx_fdt_lookup_phandle(fdt_addr, of_offset, "diag");
+	if (of_diag >= 0)
+		sfp_info->i2c_diag_addr = cvmx_fdt_get_int(fdt_addr, of_diag, "reg", 0x51);
+	else
+		sfp_info->i2c_diag_addr = 0x51;
+	return 0;
+}
+
+/**
+ * Parse SFP information from device tree
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ *
+ * @return pointer to sfp info or NULL if error
+ */
+struct cvmx_fdt_sfp_info *cvmx_helper_fdt_parse_sfp_info(const void *fdt_addr, int of_offset)
+{
+	struct cvmx_fdt_sfp_info *sfp_info = NULL;
+	int err = -1;
+	bool is_qsfp;
+
+	if (!fdt_node_check_compatible(fdt_addr, of_offset, "ethernet,sfp-slot")) {
+		is_qsfp = false;
+	} else if (!fdt_node_check_compatible(fdt_addr, of_offset, "ethernet,qsfp-slot")) {
+		is_qsfp = true;
+	} else {
+		debug("%s: Error: incompatible sfp/qsfp slot, compatible=%s\n", __func__,
+		      (char *)fdt_getprop(fdt_addr, of_offset, "compatible", NULL));
+		goto error_exit;
+	}
+
+	debug("%s: %ssfp module found at offset %d\n", __func__, is_qsfp ? "q" : "", of_offset);
+	sfp_info = __cvmx_fdt_alloc(sizeof(*sfp_info));
+	if (!sfp_info) {
+		debug("%s: Error: out of memory\n", __func__);
+		goto error_exit;
+	}
+	sfp_info->name = fdt_get_name(fdt_addr, of_offset, NULL);
+	sfp_info->of_offset = of_offset;
+	sfp_info->is_qsfp = is_qsfp;
+	sfp_info->last_mod_abs = -1;
+	sfp_info->last_rx_los = -1;
+
+	if (is_qsfp)
+		err = cvmx_parse_qsfp(fdt_addr, of_offset, sfp_info);
+	else
+		err = cvmx_parse_sfp(fdt_addr, of_offset, sfp_info);
+	if (err) {
+		debug("%s: Error in %s parsing %ssfp GPIO info\n", __func__, sfp_info->name,
+		      is_qsfp ? "q" : "");
+		goto error_exit;
+	}
+	debug("%s: Parsing %ssfp module eeprom\n", __func__, is_qsfp ? "q" : "");
+	err = cvmx_parse_sfp_eeprom(fdt_addr, of_offset, sfp_info);
+	if (err) {
+		debug("%s: Error parsing eeprom info for %s\n", __func__, sfp_info->name);
+		goto error_exit;
+	}
+
+	/* Register default check for mod_abs changed */
+	if (!err)
+		cvmx_sfp_register_check_mod_abs(sfp_info, cvmx_sfp_check_mod_abs, NULL);
+
+error_exit:
+	/* Note: we don't free any data structures on error since it gets
+	 * rather complicated with i2c buses and whatnot.
+	 */
+	return err ? NULL : sfp_info;
+}
+
+/**
+ * @INTERNAL
+ * Parse a slice of the Inphi/Cortina CS4343 in the device tree
+ *
+ * @param[in]	fdt_addr	Address of flat device tree
+ * @param	of_offset	fdt offset of slice
+ * @param	phy_info	phy_info data structure
+ *
+ * @return	Slice number (non-negative) on success, negative on error
+ */
+static int cvmx_fdt_parse_cs4343_slice(const void *fdt_addr, int of_offset,
+				       struct cvmx_phy_info *phy_info)
+{
+	struct cvmx_cs4343_slice_info *slice;
+	int reg;
+	int reg_offset;
+
+	reg = cvmx_fdt_get_int(fdt_addr, of_offset, "reg", -1);
+	reg_offset = cvmx_fdt_get_int(fdt_addr, of_offset, "slice_offset", -1);
+
+	if (reg < 0 || reg >= 4) {
+		debug("%s(%p, %d, %p): Error: reg %d undefined or out of range\n", __func__,
+		      fdt_addr, of_offset, phy_info, reg);
+		return -1;
+	}
+	if (reg_offset % 0x1000 || reg_offset > 0x3000 || reg_offset < 0) {
+		debug("%s(%p, %d, %p): Error: reg_offset 0x%x undefined or out of range\n",
+		      __func__, fdt_addr, of_offset, phy_info, reg_offset);
+		return -1;
+	}
+	if (!phy_info->cs4343_info) {
+		debug("%s: Error: phy info cs4343 datastructure is NULL\n", __func__);
+		return -1;
+	}
+	debug("%s(%p, %d, %p): %s, reg: %d, slice offset: 0x%x\n", __func__, fdt_addr, of_offset,
+	      phy_info, fdt_get_name(fdt_addr, of_offset, NULL), reg, reg_offset);
+	slice = &phy_info->cs4343_info->slice[reg];
+	slice->name = fdt_get_name(fdt_addr, of_offset, NULL);
+	slice->mphy = phy_info->cs4343_info;
+	slice->phy_info = phy_info;
+	slice->of_offset = of_offset;
+	slice->slice_no = reg;
+	slice->reg_offset = reg_offset;
+	/* SR settings */
+	slice->sr_stx_cmode_res = cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-cmode-res", 3);
+	slice->sr_stx_drv_lower_cm =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-drv-lower-cm", 8);
+	slice->sr_stx_level = cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-level", 0x1c);
+	slice->sr_stx_pre_peak = cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-pre-peak", 1);
+	slice->sr_stx_muxsubrate_sel =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-muxsubrate-sel", 0);
+	slice->sr_stx_post_peak = cvmx_fdt_get_int(fdt_addr, of_offset, "sr-stx-post-peak", 8);
+	/* CX settings */
+	slice->cx_stx_cmode_res = cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-cmode-res", 3);
+	slice->cx_stx_drv_lower_cm =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-drv-lower-cm", 8);
+	slice->cx_stx_level = cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-level", 0x1c);
+	slice->cx_stx_pre_peak = cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-pre-peak", 1);
+	slice->cx_stx_muxsubrate_sel =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-muxsubrate-sel", 0);
+	slice->cx_stx_post_peak = cvmx_fdt_get_int(fdt_addr, of_offset, "cx-stx-post-peak", 0xC);
+	/* 1000Base-X settings */
+	/* CX settings */
+	slice->basex_stx_cmode_res =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "basex-stx-cmode-res", 3);
+	slice->basex_stx_drv_lower_cm =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "basex-stx-drv-lower-cm", 8);
+	slice->basex_stx_level = cvmx_fdt_get_int(fdt_addr, of_offset,
+						  "basex-stx-level", 0x1c);
+	slice->basex_stx_pre_peak = cvmx_fdt_get_int(fdt_addr, of_offset,
+						     "basex-stx-pre-peak", 1);
+	slice->basex_stx_muxsubrate_sel =
+		cvmx_fdt_get_int(fdt_addr, of_offset,
+				 "basex-stx-muxsubrate-sel", 0);
+	slice->basex_stx_post_peak =
+		cvmx_fdt_get_int(fdt_addr, of_offset, "basex-stx-post-peak", 8);
+	/* Get the link LED gpio pin */
+	slice->link_gpio = cvmx_fdt_get_int(fdt_addr, of_offset,
+					    "link-led-gpio", -1);
+	slice->error_gpio = cvmx_fdt_get_int(fdt_addr, of_offset,
+					     "error-led-gpio", -1);
+	slice->los_gpio = cvmx_fdt_get_int(fdt_addr, of_offset,
+					   "los-input-gpio", -1);
+	slice->link_inverted = cvmx_fdt_get_bool(fdt_addr, of_offset,
+						 "link-led-gpio-inverted");
+	slice->error_inverted = cvmx_fdt_get_bool(fdt_addr, of_offset,
+						  "error-led-gpio-inverted");
+	slice->los_inverted = cvmx_fdt_get_bool(fdt_addr, of_offset,
+						"los-input-gpio-inverted");
+	/* Convert GPIOs to be die based if they're not already */
+	if (slice->link_gpio > 4 && slice->link_gpio <= 8)
+		slice->link_gpio -= 4;
+	if (slice->error_gpio > 4 && slice->error_gpio <= 8)
+		slice->error_gpio -= 4;
+	if (slice->los_gpio > 4 && slice->los_gpio <= 8)
+		slice->los_gpio -= 4;
+
+	return reg;
+}
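+
+/*
+ * For illustration, a CS4343 slice node of the form parsed above might
+ * look roughly like this (the property names follow the code; values
+ * are examples only, and most sr-/cx-/basex- tuning properties are
+ * optional since defaults are provided above):
+ *
+ *	cs4343-slice@0 {
+ *		compatible = "cortina,cs4343-slice";
+ *		reg = <0>;
+ *		slice_offset = <0x0000>;
+ *		sr-stx-level = <0x1c>;
+ *		link-led-gpio = <1>;
+ *		los-input-gpio = <2>;
+ *	};
+ */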
+
+/**
+ * @INTERNAL
+ * Parses either a CS4343 phy or a slice of the phy from the device tree
+ * @param[in]	fdt_addr	Address of FDT
+ * @param	of_offset	offset of slice or phy in device tree
+ * @param	phy_info	phy_info data structure to fill in
+ *
+ * @return	0 for success, -1 on error
+ */
+int cvmx_fdt_parse_cs4343(const void *fdt_addr, int of_offset, struct cvmx_phy_info *phy_info)
+{
+	int of_slice = -1;
+	struct cvmx_cs4343_info *cs4343;
+	int err = -1;
+	int reg;
+
+	debug("%s(%p, %d, %p): %s (%s)\n", __func__,
+	      fdt_addr, of_offset, phy_info,
+	      fdt_get_name(fdt_addr, of_offset, NULL),
+	      (const char *)fdt_getprop(fdt_addr, of_offset, "compatible", NULL));
+
+	if (!phy_info->cs4343_info)
+		phy_info->cs4343_info = __cvmx_fdt_alloc(sizeof(struct cvmx_cs4343_info));
+	if (!phy_info->cs4343_info) {
+		debug("%s: Error: out of memory!\n", __func__);
+		return -1;
+	}
+	cs4343 = phy_info->cs4343_info;
+	/* If we're passed a slice, process only that slice */
+	if (!fdt_node_check_compatible(fdt_addr, of_offset, "cortina,cs4343-slice")) {
+		err = 0;
+		of_slice = of_offset;
+		of_offset = fdt_parent_offset(fdt_addr, of_offset);
+		reg = cvmx_fdt_parse_cs4343_slice(fdt_addr, of_slice, phy_info);
+		if (reg >= 0)
+			phy_info->cs4343_slice_info = &cs4343->slice[reg];
+		else
+			err = reg;
+	} else if (!fdt_node_check_compatible(fdt_addr, of_offset,
+					      "cortina,cs4343")) {
+		/* Walk through and process all of the slices */
+		of_slice =
+			fdt_node_offset_by_compatible(fdt_addr, of_offset, "cortina,cs4343-slice");
+		while (of_slice > 0 && fdt_parent_offset(fdt_addr, of_slice) ==
+		       of_offset) {
+			debug("%s: Parsing slice %s\n", __func__,
+			      fdt_get_name(fdt_addr, of_slice, NULL));
+			err = cvmx_fdt_parse_cs4343_slice(fdt_addr, of_slice,
+							  phy_info);
+			if (err < 0)
+				break;
+			of_slice = fdt_node_offset_by_compatible(fdt_addr,
+								 of_slice,
+								 "cortina,cs4343-slice");
+		}
+	} else {
+		debug("%s: Error: unknown compatible string %s for %s\n", __func__,
+		      (const char *)fdt_getprop(fdt_addr, of_offset,
+						"compatible", NULL),
+		      fdt_get_name(fdt_addr, of_offset, NULL));
+	}
+
+	if (err >= 0) {
+		cs4343->name = fdt_get_name(fdt_addr, of_offset, NULL);
+		cs4343->phy_info = phy_info;
+		cs4343->of_offset = of_offset;
+	}
+
+	return err < 0 ? -1 : 0;
+}
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 38/50] mips: octeon: Add cvmx-helper-jtag.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (36 preceding siblings ...)
  2020-12-11 16:05 ` [PATCH v1 37/50] mips: octeon: Add cvmx-helper-fdt.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 39/50] mips: octeon: Add cvmx-helper-util.c Stefan Roese
                   ` (14 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-helper-jtag.c from 2013 U-Boot. It will be used by the
later-added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-helper-jtag.c | 172 +++++++++++++++++++++++
 1 file changed, 172 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-jtag.c

diff --git a/arch/mips/mach-octeon/cvmx-helper-jtag.c b/arch/mips/mach-octeon/cvmx-helper-jtag.c
new file mode 100644
index 0000000000..a6fa69b4c5
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-helper-jtag.c
@@ -0,0 +1,172 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper utilities for qlm_jtag.
+ */
+
+#include <log.h>
+#include <asm/global_data.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-ciu-defs.h>
+
+DECLARE_GLOBAL_DATA_PTR;
+
+/**
+ * Initialize the internal QLM JTAG logic to allow programming
+ * of the JTAG chain by the cvmx_helper_qlm_jtag_*() functions.
+ * These functions should only be used at the direction of Cavium
+ * Networks. Programming incorrect values into the JTAG chain
+ * can cause chip damage.
+ */
+void cvmx_helper_qlm_jtag_init(void)
+{
+	union cvmx_ciu_qlm_jtgc jtgc;
+	int clock_div = 0;
+	int divisor;
+
+	divisor = gd->bus_clk / (1000000 * (OCTEON_IS_MODEL(OCTEON_CN68XX) ? 10 : 25));
+
+	divisor = (divisor - 1) >> 2;
+	/* Convert the divisor into a power of 2 shift */
+	while (divisor) {
+		clock_div++;
+		divisor >>= 1;
+	}
+
+	/*
+	 * Clock divider for QLM JTAG operations.  sclk is divided by
+	 * 2^(CLK_DIV + 2)
+	 */
+	jtgc.u64 = 0;
+	jtgc.s.clk_div = clock_div;
+	jtgc.s.mux_sel = 0;
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX) || OCTEON_IS_MODEL(OCTEON_CN66XX))
+		jtgc.s.bypass = 0x7;
+	else
+		jtgc.s.bypass = 0xf;
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		jtgc.s.bypass_ext = 1;
+	csr_wr(CVMX_CIU_QLM_JTGC, jtgc.u64);
+	csr_rd(CVMX_CIU_QLM_JTGC);
+}
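+
+/*
+ * Worked example of the divisor math above (numbers illustrative only):
+ * with an 800 MHz sclk on a non-CN68XX part the target is 25 MHz, so
+ * divisor = 800 / 25 = 32, then (32 - 1) >> 2 = 7, which takes three
+ * right shifts to reach zero, giving clock_div = 3 and an effective
+ * JTAG clock of sclk / 2^(3 + 2) = 800 / 32 = 25 MHz.
+ */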
+
+/**
+ * Write up to 32bits into the QLM jtag chain. Bits are shifted
+ * into the MSB and out the LSB, so you should shift in the low
+ * order bits followed by the high order bits. The JTAG chain for
+ * CN52XX and CN56XX is 4 * 268 bits long, or 1072. The JTAG chain
+ * for CN63XX is 4 * 300 bits long, or 1200.
+ *
+ * @param qlm    QLM to shift value into
+ * @param bits   Number of bits to shift in (1-32).
+ * @param data   Data to shift in. Bit 0 enters the chain first, followed by
+ *               bit 1, etc.
+ *
+ * @return The low order bits of the JTAG chain that shifted out of the
+ *         circle.
+ */
+uint32_t cvmx_helper_qlm_jtag_shift(int qlm, int bits, uint32_t data)
+{
+	union cvmx_ciu_qlm_jtgc jtgc;
+	union cvmx_ciu_qlm_jtgd jtgd;
+
+	jtgc.u64 = csr_rd(CVMX_CIU_QLM_JTGC);
+	jtgc.s.mux_sel = qlm;
+	csr_wr(CVMX_CIU_QLM_JTGC, jtgc.u64);
+	csr_rd(CVMX_CIU_QLM_JTGC);
+
+	jtgd.u64 = 0;
+	jtgd.s.shift = 1;
+	jtgd.s.shft_cnt = bits - 1;
+	jtgd.s.shft_reg = data;
+	jtgd.s.select = 1 << qlm;
+	csr_wr(CVMX_CIU_QLM_JTGD, jtgd.u64);
+	do {
+		jtgd.u64 = csr_rd(CVMX_CIU_QLM_JTGD);
+	} while (jtgd.s.shift);
+	return jtgd.s.shft_reg >> (32 - bits);
+}
+
+/**
+ * Shift long sequences of zeros into the QLM JTAG chain. It is
+ * common to need to shift more than 32 bits of zeros into the
+ * chain. This function is a convenience wrapper around
+ * cvmx_helper_qlm_jtag_shift() to shift more than 32 bits of
+ * zeros at a time.
+ *
+ * @param qlm    QLM to shift zeros into
+ * @param bits   Number of bits of zeros to shift in (may exceed 32)
+ */
+void cvmx_helper_qlm_jtag_shift_zeros(int qlm, int bits)
+{
+	while (bits > 0) {
+		int n = bits;
+
+		if (n > 32)
+			n = 32;
+		cvmx_helper_qlm_jtag_shift(qlm, n, 0);
+		bits -= n;
+	}
+}
+
+/**
+ * Program the QLM JTAG chain into all lanes of the QLM. You must
+ * have already shifted in the proper number of bits into the
+ * JTAG chain. Updating invalid values can possibly cause chip damage.
+ *
+ * @param qlm    QLM to program
+ */
+void cvmx_helper_qlm_jtag_update(int qlm)
+{
+	union cvmx_ciu_qlm_jtgc jtgc;
+	union cvmx_ciu_qlm_jtgd jtgd;
+
+	jtgc.u64 = csr_rd(CVMX_CIU_QLM_JTGC);
+	jtgc.s.mux_sel = qlm;
+
+	csr_wr(CVMX_CIU_QLM_JTGC, jtgc.u64);
+	csr_rd(CVMX_CIU_QLM_JTGC);
+
+	/* Update the new data */
+	jtgd.u64 = 0;
+	jtgd.s.update = 1;
+	jtgd.s.select = 1 << qlm;
+	csr_wr(CVMX_CIU_QLM_JTGD, jtgd.u64);
+	do {
+		jtgd.u64 = csr_rd(CVMX_CIU_QLM_JTGD);
+	} while (jtgd.s.update);
+}
+
+/**
+ * Load the QLM JTAG chain with data from all lanes of the QLM.
+ *
+ * @param qlm    QLM to program
+ */
+void cvmx_helper_qlm_jtag_capture(int qlm)
+{
+	union cvmx_ciu_qlm_jtgc jtgc;
+	union cvmx_ciu_qlm_jtgd jtgd;
+
+	jtgc.u64 = csr_rd(CVMX_CIU_QLM_JTGC);
+	jtgc.s.mux_sel = qlm;
+
+	csr_wr(CVMX_CIU_QLM_JTGC, jtgc.u64);
+	csr_rd(CVMX_CIU_QLM_JTGC);
+
+	jtgd.u64 = 0;
+	jtgd.s.capture = 1;
+	jtgd.s.select = 1 << qlm;
+	csr_wr(CVMX_CIU_QLM_JTGD, jtgd.u64);
+	do {
+		jtgd.u64 = csr_rd(CVMX_CIU_QLM_JTGD);
+	} while (jtgd.s.capture);
+}
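+
+/*
+ * Example (illustrative sketch only, assuming the chain length is a
+ * multiple of 32): dumping the captured JTAG chain of a QLM. Shifting
+ * the bits out does not change the lane settings as long as no update
+ * is performed afterwards:
+ *
+ *	cvmx_helper_qlm_jtag_init();
+ *	cvmx_helper_qlm_jtag_capture(qlm);
+ *	for (i = 0; i < chain_bits; i += 32)
+ *		debug("%08x\n", cvmx_helper_qlm_jtag_shift(qlm, 32, 0));
+ */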
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 39/50] mips: octeon: Add cvmx-helper-util.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (37 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 38/50] mips: octeon: Add cvmx-helper-jtag.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 40/50] mips: octeon: Add cvmx-helper.c Stefan Roese
                   ` (13 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-helper-util.c from 2013 U-Boot. It will be used by the
later-added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-helper-util.c | 1225 ++++++++++++++++++++++
 1 file changed, 1225 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper-util.c

diff --git a/arch/mips/mach-octeon/cvmx-helper-util.c b/arch/mips/mach-octeon/cvmx-helper-util.c
new file mode 100644
index 0000000000..4625b4591b
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-helper-util.c
@@ -0,0 +1,1225 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Small helper utilities.
+ */
+
+#include <log.h>
+#include <time.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-csr-enums.h>
+#include <mach/octeon-model.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-ipd-defs.h>
+#include <mach/cvmx-pko-defs.h>
+#include <mach/cvmx-ipd.h>
+#include <mach/cvmx-hwpko.h>
+#include <mach/cvmx-pki.h>
+#include <mach/cvmx-pip.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-helper-pki.h>
+
+/**
+ * @INTERNAL
+ * These are the interface types needed to convert interface numbers to ipd
+ * ports.
+ *
+ * @param GMII
+ *	This type is used for sgmii, rgmii, xaui and rxaui interfaces.
+ * @param ILK
+ *	This type is used for ilk interfaces.
+ * @param SRIO
+ *	This type is used for serial-RapidIo interfaces.
+ * @param NPI
+ *	This type is used for npi interfaces.
+ * @param LB
+ *	This type is used for loopback interfaces.
+ * @param INVALID_IF_TYPE
+ *	This type indicates the interface hasn't been configured.
+ */
+enum port_map_if_type { INVALID_IF_TYPE = 0, GMII, ILK, SRIO, NPI, LB };
+
+/**
+ * @INTERNAL
+ * This structure is used to map interface numbers to ipd ports.
+ *
+ * @param type
+ *	Interface type
+ * @param first_ipd_port
+ *	First IPD port number assigned to this interface.
+ * @param last_ipd_port
+ *	Last IPD port number assigned to this interface.
+ * @param ipd_port_adj
+ *	Different octeon chips require different ipd ports for the
+ *	same interface port/mode configuration. This value is used
+ *	to account for that difference.
+ */
+struct ipd_port_map {
+	enum port_map_if_type type;
+	int first_ipd_port;
+	int last_ipd_port;
+	int ipd_port_adj;
+};
+
+/**
+ * @INTERNAL
+ * Interface number to ipd port map for the octeon 68xx.
+ */
+static const struct ipd_port_map ipd_port_map_68xx[CVMX_HELPER_MAX_IFACE] = {
+	{ GMII, 0x800, 0x8ff, 0x40 }, /* Interface 0 */
+	{ GMII, 0x900, 0x9ff, 0x40 }, /* Interface 1 */
+	{ GMII, 0xa00, 0xaff, 0x40 }, /* Interface 2 */
+	{ GMII, 0xb00, 0xbff, 0x40 }, /* Interface 3 */
+	{ GMII, 0xc00, 0xcff, 0x40 }, /* Interface 4 */
+	{ ILK, 0x400, 0x4ff, 0x00 },  /* Interface 5 */
+	{ ILK, 0x500, 0x5ff, 0x00 },  /* Interface 6 */
+	{ NPI, 0x100, 0x120, 0x00 },  /* Interface 7 */
+	{ LB, 0x000, 0x008, 0x00 },   /* Interface 8 */
+};
+
+/**
+ * @INTERNAL
+ * Interface number to ipd port map for the octeon 78xx.
+ *
+ * This mapping corresponds to WQE(CHAN) enumeration in
+ * HRM Sections 11.15, PKI_CHAN_E, Section 11.6
+ *
+ */
+static const struct ipd_port_map ipd_port_map_78xx[CVMX_HELPER_MAX_IFACE] = {
+	{ GMII, 0x800, 0x83f, 0x00 }, /* Interface 0 - BGX0 */
+	{ GMII, 0x900, 0x93f, 0x00 }, /* Interface 1 - BGX1 */
+	{ GMII, 0xa00, 0xa3f, 0x00 }, /* Interface 2 - BGX2 */
+	{ GMII, 0xb00, 0xb3f, 0x00 }, /* Interface 3 - BGX3 */
+	{ GMII, 0xc00, 0xc3f, 0x00 }, /* Interface 4 - BGX4 */
+	{ GMII, 0xd00, 0xd3f, 0x00 }, /* Interface 5 - BGX5 */
+	{ ILK, 0x400, 0x4ff, 0x00 },  /* Interface 6 - ILK0 */
+	{ ILK, 0x500, 0x5ff, 0x00 },  /* Interface 7 - ILK1 */
+	{ NPI, 0x100, 0x13f, 0x00 },  /* Interface 8 - DPI */
+	{ LB, 0x000, 0x03f, 0x00 },   /* Interface 9 - LOOPBACK */
+};
+
+/**
+ * @INTERNAL
+ * Interface number to ipd port map for the octeon 73xx.
+ */
+static const struct ipd_port_map ipd_port_map_73xx[CVMX_HELPER_MAX_IFACE] = {
+	{ GMII, 0x800, 0x83f, 0x00 }, /* Interface 0 - BGX(0,0-3) */
+	{ GMII, 0x900, 0x93f, 0x00 }, /* Interface 1 - BGX(1,0-3) */
+	{ GMII, 0xa00, 0xa3f, 0x00 }, /* Interface 2 - BGX(2,0-3) */
+	{ NPI, 0x100, 0x17f, 0x00 },  /* Interface 3 - DPI */
+	{ LB, 0x000, 0x03f, 0x00 },   /* Interface 4 - LOOPBACK */
+};
+
+/**
+ * @INTERNAL
+ * Interface number to ipd port map for the octeon 75xx.
+ */
+static const struct ipd_port_map ipd_port_map_75xx[CVMX_HELPER_MAX_IFACE] = {
+	{ GMII, 0x800, 0x83f, 0x00 }, /* Interface 0 - BGX0 */
+	{ SRIO, 0x240, 0x241, 0x00 }, /* Interface 1 - SRIO 0 */
+	{ SRIO, 0x242, 0x243, 0x00 }, /* Interface 2 - SRIO 1 */
+	{ NPI, 0x100, 0x13f, 0x00 },  /* Interface 3 - DPI */
+	{ LB, 0x000, 0x03f, 0x00 },   /* Interface 4 - LOOPBACK */
+};
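+
+/*
+ * As a worked example of these maps (see cvmx_helper_get_ipd_port()
+ * below): on the 68xx, GMII-type interface 0 starts at IPD port 0x800
+ * with port indexes spaced 16 apart, so interface 0 index 1 maps to
+ * IPD port 0x810; in XAUI/RXAUI mode the 0x40 ipd_port_adj is added
+ * instead, giving 0x840.
+ */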
+
+/**
+ * Convert a interface mode into a human readable string
+ *
+ * @param mode   Mode to convert
+ *
+ * @return String
+ */
+const char *cvmx_helper_interface_mode_to_string(cvmx_helper_interface_mode_t mode)
+{
+	switch (mode) {
+	case CVMX_HELPER_INTERFACE_MODE_DISABLED:
+		return "DISABLED";
+	case CVMX_HELPER_INTERFACE_MODE_RGMII:
+		return "RGMII";
+	case CVMX_HELPER_INTERFACE_MODE_GMII:
+		return "GMII";
+	case CVMX_HELPER_INTERFACE_MODE_SPI:
+		return "SPI";
+	case CVMX_HELPER_INTERFACE_MODE_PCIE:
+		return "PCIE";
+	case CVMX_HELPER_INTERFACE_MODE_XAUI:
+		return "XAUI";
+	case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		return "RXAUI";
+	case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		return "SGMII";
+	case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+		return "QSGMII";
+	case CVMX_HELPER_INTERFACE_MODE_PICMG:
+		return "PICMG";
+	case CVMX_HELPER_INTERFACE_MODE_NPI:
+		return "NPI";
+	case CVMX_HELPER_INTERFACE_MODE_LOOP:
+		return "LOOP";
+	case CVMX_HELPER_INTERFACE_MODE_SRIO:
+		return "SRIO";
+	case CVMX_HELPER_INTERFACE_MODE_ILK:
+		return "ILK";
+	case CVMX_HELPER_INTERFACE_MODE_AGL:
+		return "AGL";
+	case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+		return "XLAUI";
+	case CVMX_HELPER_INTERFACE_MODE_XFI:
+		return "XFI";
+	case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+		return "40G_KR4";
+	case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+		return "10G_KR";
+	case CVMX_HELPER_INTERFACE_MODE_MIXED:
+		return "MIXED";
+	}
+	return "UNKNOWN";
+}
+
+/**
+ * Debug routine to dump the packet structure to the console
+ *
+ * @param work   Work queue entry containing the packet to dump
+ * @return 0 on success, -1 on error
+ */
+int cvmx_helper_dump_packet(cvmx_wqe_t *work)
+{
+	u64 count;
+	u64 remaining_bytes;
+	union cvmx_buf_ptr buffer_ptr;
+	cvmx_buf_ptr_pki_t bptr;
+	cvmx_wqe_78xx_t *wqe = (void *)work;
+	u64 start_of_buffer;
+	u8 *data_address;
+	u8 *end_of_data;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_pki_dump_wqe(wqe);
+		cvmx_wqe_pki_errata_20776(work);
+	} else {
+		debug("WORD0 = %lx\n", (unsigned long)work->word0.u64);
+		debug("WORD1 = %lx\n", (unsigned long)work->word1.u64);
+		debug("WORD2 = %lx\n", (unsigned long)work->word2.u64);
+		debug("Packet Length:   %u\n", cvmx_wqe_get_len(work));
+		debug("    Input Port:  %u\n", cvmx_wqe_get_port(work));
+		debug("    QoS:         %u\n", cvmx_wqe_get_qos(work));
+		debug("    Buffers:     %u\n", cvmx_wqe_get_bufs(work));
+	}
+
+	if (cvmx_wqe_get_bufs(work) == 0) {
+		int wqe_pool;
+
+		if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+			debug("%s: ERROR: Unexpected bufs==0 in WQE\n", __func__);
+			return -1;
+		}
+		wqe_pool = (int)cvmx_fpa_get_wqe_pool();
+		buffer_ptr.u64 = 0;
+		buffer_ptr.s.pool = wqe_pool;
+
+		buffer_ptr.s.size = 128;
+		buffer_ptr.s.addr = cvmx_ptr_to_phys(work->packet_data);
+		if (cvmx_likely(!work->word2.s.not_IP)) {
+			union cvmx_pip_ip_offset pip_ip_offset;
+
+			pip_ip_offset.u64 = csr_rd(CVMX_PIP_IP_OFFSET);
+			buffer_ptr.s.addr +=
+				(pip_ip_offset.s.offset << 3) - work->word2.s.ip_offset;
+			buffer_ptr.s.addr += (work->word2.s.is_v6 ^ 1) << 2;
+		} else {
+			/*
+			 * WARNING: This code assumes that the packet
+			 * is not RAW. If it was, we would use
+			 * PIP_GBL_CFG[RAW_SHF] instead of
+			 * PIP_GBL_CFG[NIP_SHF].
+			 */
+			union cvmx_pip_gbl_cfg pip_gbl_cfg;
+
+			pip_gbl_cfg.u64 = csr_rd(CVMX_PIP_GBL_CFG);
+			buffer_ptr.s.addr += pip_gbl_cfg.s.nip_shf;
+		}
+	} else {
+		buffer_ptr = work->packet_ptr;
+	}
+
+	remaining_bytes = cvmx_wqe_get_len(work);
+
+	while (remaining_bytes) {
+		/* native cn78xx buffer format, unless legacy-translated */
+		if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE) && !wqe->pki_wqe_translated) {
+			bptr.u64 = buffer_ptr.u64;
+			/* XXX- assumes cache-line aligned buffer */
+			start_of_buffer = (bptr.addr >> 7) << 7;
+			debug("    Buffer Start:%llx\n", (unsigned long long)start_of_buffer);
+			debug("    Buffer Data: %llx\n", (unsigned long long)bptr.addr);
+			debug("    Buffer Size: %u\n", bptr.size);
+			data_address = (uint8_t *)cvmx_phys_to_ptr(bptr.addr);
+			end_of_data = data_address + bptr.size;
+		} else {
+			start_of_buffer = ((buffer_ptr.s.addr >> 7) - buffer_ptr.s.back) << 7;
+			debug("    Buffer Start:%llx\n", (unsigned long long)start_of_buffer);
+			debug("    Buffer I   : %u\n", buffer_ptr.s.i);
+			debug("    Buffer Back: %u\n", buffer_ptr.s.back);
+			debug("    Buffer Pool: %u\n", buffer_ptr.s.pool);
+			debug("    Buffer Data: %llx\n", (unsigned long long)buffer_ptr.s.addr);
+			debug("    Buffer Size: %u\n", buffer_ptr.s.size);
+			data_address = (uint8_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr);
+			end_of_data = data_address + buffer_ptr.s.size;
+		}
+
+		debug("\t\t");
+		count = 0;
+		while (data_address < end_of_data) {
+			if (remaining_bytes == 0)
+				break;
+
+			remaining_bytes--;
+			debug("%02x", (unsigned int)*data_address);
+			data_address++;
+			if (remaining_bytes && count == 7) {
+				debug("\n\t\t");
+				count = 0;
+			} else {
+				count++;
+			}
+		}
+		debug("\n");
+
+		if (remaining_bytes) {
+			if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE) &&
+			    !wqe->pki_wqe_translated)
+				buffer_ptr.u64 = *(uint64_t *)cvmx_phys_to_ptr(bptr.addr - 8);
+			else
+				buffer_ptr.u64 =
+					*(uint64_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr - 8);
+		}
+	}
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ *
+ * Extract NO_WPTR mode from PIP/IPD register
+ */
+static int __cvmx_ipd_mode_no_wptr(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_NO_WPTR)) {
+		cvmx_ipd_ctl_status_t ipd_ctl_status;
+
+		ipd_ctl_status.u64 = csr_rd(CVMX_IPD_CTL_STATUS);
+		return ipd_ctl_status.s.no_wptr;
+	}
+	return 0;
+}
+
+static cvmx_buf_ptr_t __cvmx_packet_short_ptr[4];
+static int8_t __cvmx_wqe_pool = -1;
+
+/**
+ * @INTERNAL
+ * Prepare packet pointer templates for dynamic short packets.
+ */
+static void cvmx_packet_short_ptr_calculate(void)
+{
+	unsigned int i, off;
+	union cvmx_pip_gbl_cfg pip_gbl_cfg;
+	union cvmx_pip_ip_offset pip_ip_offset;
+
+	/* Fill in the common values for all cases */
+	for (i = 0; i < 4; i++) {
+		if (__cvmx_ipd_mode_no_wptr())
+			/* packet pool, set to 0 in hardware */
+			__cvmx_wqe_pool = 0;
+		else
+			/* WQE pool as configured */
+			__cvmx_wqe_pool = csr_rd(CVMX_IPD_WQE_FPA_QUEUE) & 7;
+
+		__cvmx_packet_short_ptr[i].s.pool = __cvmx_wqe_pool;
+		__cvmx_packet_short_ptr[i].s.size = cvmx_fpa_get_block_size(__cvmx_wqe_pool);
+		__cvmx_packet_short_ptr[i].s.size -= 32;
+		__cvmx_packet_short_ptr[i].s.addr = 32;
+	}
+
+	pip_gbl_cfg.u64 = csr_rd(CVMX_PIP_GBL_CFG);
+	pip_ip_offset.u64 = csr_rd(CVMX_PIP_IP_OFFSET);
+
+	/* RAW_FULL: index = 0 */
+	i = 0;
+	off = pip_gbl_cfg.s.raw_shf;
+	__cvmx_packet_short_ptr[i].s.addr += off;
+	__cvmx_packet_short_ptr[i].s.size -= off;
+	__cvmx_packet_short_ptr[i].s.back += off >> 7;
+
+	/* NON-IP: index = 1 */
+	i = 1;
+	off = pip_gbl_cfg.s.nip_shf;
+	__cvmx_packet_short_ptr[i].s.addr += off;
+	__cvmx_packet_short_ptr[i].s.size -= off;
+	__cvmx_packet_short_ptr[i].s.back += off >> 7;
+
+	/* IPv4: index = 2 */
+	i = 2;
+	off = (pip_ip_offset.s.offset << 3) + 4;
+	__cvmx_packet_short_ptr[i].s.addr += off;
+	__cvmx_packet_short_ptr[i].s.size -= off;
+	__cvmx_packet_short_ptr[i].s.back += off >> 7;
+
+	/* IPv6: index = 3 */
+	i = 3;
+	off = (pip_ip_offset.s.offset << 3) + 0;
+	__cvmx_packet_short_ptr[i].s.addr += off;
+	__cvmx_packet_short_ptr[i].s.size -= off;
+	__cvmx_packet_short_ptr[i].s.back += off >> 7;
+
+	/* For IPv4/IPv6: subtract work->word2.s.ip_offset
+	 * from addr, if it is smaller than IP_OFFSET[OFFSET]*8
+	 * which is stored in __cvmx_packet_short_ptr[3].s.addr
+	 */
+}
+
+/**
+ * Extract packet data buffer pointer from work queue entry.
+ *
+ * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
+ * for the linked buffer list.
+ * On CN78XX, the native buffer pointer structure is converted into
+ * the legacy format.
+ * The legacy buf_ptr is then stored in the WQE, and a reserved word0
+ * field is set to indicate that the buffer pointers were translated.
+ * If the packet data is only found inside the work queue entry,
+ * a standard buffer pointer structure is created for it.
+ */
+cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (void *)work;
+		cvmx_buf_ptr_t optr, lptr;
+		cvmx_buf_ptr_pki_t nptr;
+		unsigned int pool, bufs;
+		int node = cvmx_get_node_num();
+
+		/* In case of repeated calls of this function */
+		if (wqe->pki_wqe_translated || wqe->word2.software) {
+			optr.u64 = wqe->packet_ptr.u64;
+			return optr;
+		}
+
+		bufs = wqe->word0.bufs;
+		pool = wqe->word0.aura;
+		nptr.u64 = wqe->packet_ptr.u64;
+
+		optr.u64 = 0;
+		optr.s.pool = pool;
+		optr.s.addr = nptr.addr;
+		if (bufs == 1) {
+			optr.s.size = pki_dflt_pool[node].buffer_size -
+				      pki_dflt_style[node].parm_cfg.first_skip - 8 -
+				      wqe->word0.apad;
+		} else {
+			optr.s.size = nptr.size;
+		}
+
+		/* Calculate the "back" offset */
+		if (!nptr.packet_outside_wqe) {
+			optr.s.back = (nptr.addr -
+				       cvmx_ptr_to_phys(wqe)) >> 7;
+		} else {
+			optr.s.back =
+				(pki_dflt_style[node].parm_cfg.first_skip +
+				 8 + wqe->word0.apad) >> 7;
+		}
+		lptr = optr;
+
+		/* Follow pointer and convert all linked pointers */
+		while (bufs > 1) {
+			void *vptr;
+
+			vptr = cvmx_phys_to_ptr(lptr.s.addr);
+
+			memcpy(&nptr, vptr - 8, 8);
+			/*
+			 * Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped
+			 * CN78XX pass 1.x has a bug where the packet pointer
+			 * in each segment is written in the opposite
+			 * endianness of the configured mode. Fix these here
+			 */
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+				nptr.u64 = __builtin_bswap64(nptr.u64);
+			lptr.u64 = 0;
+			lptr.s.pool = pool;
+			lptr.s.addr = nptr.addr;
+			lptr.s.size = nptr.size;
+			lptr.s.back = (pki_dflt_style[0].parm_cfg.later_skip + 8) >>
+				      7; /* TBD: not guaranteed !! */
+
+			memcpy(vptr - 8, &lptr, 8);
+			bufs--;
+		}
+		/* Store translated bufptr in WQE, and set indicator */
+		wqe->pki_wqe_translated = 1;
+		wqe->packet_ptr.u64 = optr.u64;
+		return optr;
+
+	} else {
+		unsigned int i;
+		unsigned int off = 0;
+		cvmx_buf_ptr_t bptr;
+
+		if (cvmx_likely(work->word2.s.bufs > 0))
+			return work->packet_ptr;
+
+		if (cvmx_unlikely(work->word2.s.software))
+			return work->packet_ptr;
+
+		/* first packet, precalculate packet_ptr templates */
+		if (cvmx_unlikely(__cvmx_packet_short_ptr[0].u64 == 0))
+			cvmx_packet_short_ptr_calculate();
+
+		/* calculate template index */
+		i = work->word2.s_cn38xx.not_IP | work->word2.s_cn38xx.rcv_error;
+		i = 2 ^ (i << 1);
+
+		/* IPv4/IPv6: Adjust IP offset */
+		if (cvmx_likely(i & 2)) {
+			i |= work->word2.s.is_v6;
+			off = work->word2.s.ip_offset;
+		} else {
+			/* RAWFULL/RAWSCHED should be handled here */
+			i = 1; /* not-IP */
+			off = 0;
+		}
+
+		/* Get the right template */
+		bptr = __cvmx_packet_short_ptr[i];
+		bptr.s.addr -= off;
+		bptr.s.back = bptr.s.addr >> 7;
+
+		/* Add actual WQE paddr to the template offset */
+		bptr.s.addr += cvmx_ptr_to_phys(work);
+
+		/* Adjust word2.bufs so that _free_data() handles it
+		 * in the same way as PKO
+		 */
+		work->word2.s.bufs = 1;
+
+		/* Store the new buffer pointer back into WQE */
+		work->packet_ptr = bptr;
+
+		/* Return the synthetic buffer pointer */
+		return bptr;
+	}
+}
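+
+/*
+ * Example (illustrative sketch only) of walking the buffer list via the
+ * legacy pointer returned above; the next-pointer convention (stored
+ * 8 bytes before the buffer data) matches cvmx_helper_dump_packet():
+ *
+ *	cvmx_buf_ptr_t ptr = cvmx_wqe_get_packet_ptr(work);
+ *	int bufs = cvmx_wqe_get_bufs(work);
+ *
+ *	while (bufs-- > 0) {
+ *		u8 *data = cvmx_phys_to_ptr(ptr.s.addr);
+ *		... consume up to ptr.s.size bytes at data ...
+ *		if (bufs)
+ *			ptr.u64 = *(u64 *)cvmx_phys_to_ptr(ptr.s.addr - 8);
+ *	}
+ */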
+
+void cvmx_wqe_free(cvmx_wqe_t *work)
+{
+	unsigned int bufs, ncl = 1;
+	u64 paddr, paddr1;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (void *)work;
+		cvmx_fpa3_gaura_t aura;
+		cvmx_buf_ptr_pki_t bptr;
+
+		bufs = wqe->word0.bufs;
+
+		if (!wqe->pki_wqe_translated && bufs != 0) {
+			/* Handle cn78xx native untranslated WQE */
+
+			bptr = wqe->packet_ptr;
+
+			/* Do nothing - first packet buffer shares WQE buffer */
+			if (!bptr.packet_outside_wqe)
+				return;
+		} else if (cvmx_likely(bufs != 0)) {
+			/* Handle translated 78XX WQE */
+			paddr = (work->packet_ptr.s.addr & (~0x7full)) -
+				(work->packet_ptr.s.back << 7);
+			paddr1 = cvmx_ptr_to_phys(work);
+
+			/* do not free WQE if it contains the first data buffer */
+			if (paddr == paddr1)
+				return;
+		}
+
+		/* WQE is separate from packet buffer, free it */
+		aura = __cvmx_fpa3_gaura(wqe->word0.aura >> 10, wqe->word0.aura & 0x3ff);
+
+		cvmx_fpa3_free(work, aura, ncl);
+	} else {
+		/* handle legacy WQE */
+		bufs = work->word2.s_cn38xx.bufs;
+
+		if (cvmx_likely(bufs != 0)) {
+			/* Check if the first data buffer is inside WQE */
+			paddr = (work->packet_ptr.s.addr & (~0x7full)) -
+				(work->packet_ptr.s.back << 7);
+			paddr1 = cvmx_ptr_to_phys(work);
+
+			/* do not free WQE if it contains the first data buffer */
+			if (paddr == paddr1)
+				return;
+		}
+
+		/* precalculate packet_ptr, WQE pool number */
+		if (cvmx_unlikely(__cvmx_wqe_pool < 0))
+			cvmx_packet_short_ptr_calculate();
+		cvmx_fpa1_free(work, __cvmx_wqe_pool, ncl);
+	}
+}
+
+/**
+ * Free the packet buffers contained in a work queue entry.
+ * The work queue entry is also freed if it contains packet data.
+ * If however the packet starts outside the WQE, the WQE will
+ * not be freed. The application should call cvmx_wqe_free()
+ * to free the WQE buffer that contains no packet data.
+ *
+ * @param work   Work queue entry with packet to free
+ */
+void cvmx_helper_free_packet_data(cvmx_wqe_t *work)
+{
+	u64 number_buffers;
+	u64 start_of_buffer;
+	u64 next_buffer_ptr;
+	cvmx_fpa3_gaura_t aura;
+	unsigned int ncl;
+	cvmx_buf_ptr_t buffer_ptr;
+	cvmx_buf_ptr_pki_t bptr;
+	cvmx_wqe_78xx_t *wqe = (void *)work;
+	int o3_pki_wqe = 0;
+
+	number_buffers = cvmx_wqe_get_bufs(work);
+
+	buffer_ptr.u64 = work->packet_ptr.u64;
+
+	/* Zero-out WQE WORD3 so that the WQE is freed by cvmx_wqe_free() */
+	work->packet_ptr.u64 = 0;
+
+	if (number_buffers == 0)
+		return;
+
+	/* Interpret PKI-style bufptr unless it has been translated */
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE) &&
+	    !wqe->pki_wqe_translated) {
+		o3_pki_wqe = 1;
+		cvmx_wqe_pki_errata_20776(work);
+		aura = __cvmx_fpa3_gaura(wqe->word0.aura >> 10,
+					 wqe->word0.aura & 0x3ff);
+	} else {
+		start_of_buffer = ((buffer_ptr.s.addr >> 7) -
+				   buffer_ptr.s.back) << 7;
+		next_buffer_ptr =
+			*(uint64_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr - 8);
+		/*
+		 * Since the number of buffers is not zero, we know this is not
+		 * a dynamic short packet. We need to check if it is a packet
+		 * received with IPD_CTL_STATUS[NO_WPTR]. If this is true,
+		 * we need to free all buffers except for the first one.
+		 * The caller doesn't expect their WQE pointer to be freed
+		 */
+		if (cvmx_ptr_to_phys(work) == start_of_buffer) {
+			buffer_ptr.u64 = next_buffer_ptr;
+			number_buffers--;
+		}
+	}
+	while (number_buffers--) {
+		if (o3_pki_wqe) {
+			bptr.u64 = buffer_ptr.u64;
+
+			ncl = (bptr.size + CVMX_CACHE_LINE_SIZE - 1) /
+				CVMX_CACHE_LINE_SIZE;
+
+			/* XXX- assumes the buffer is cache-line aligned */
+			start_of_buffer = (bptr.addr >> 7) << 7;
+
+			/*
+			 * Read pointer to next buffer before we free the
+			 * current buffer.
+			 */
+			next_buffer_ptr = *(uint64_t *)cvmx_phys_to_ptr(bptr.addr - 8);
+			/* FPA AURA comes from WQE, includes node */
+			cvmx_fpa3_free(cvmx_phys_to_ptr(start_of_buffer),
+				       aura, ncl);
+		} else {
+			ncl = (buffer_ptr.s.size + CVMX_CACHE_LINE_SIZE - 1) /
+				      CVMX_CACHE_LINE_SIZE +
+			      buffer_ptr.s.back;
+			/*
+			 * Calculate buffer start using "back" offset,
+			 * Remember the back pointer is in cache lines,
+			 * not 64bit words
+			 */
+			start_of_buffer = ((buffer_ptr.s.addr >> 7) -
+					   buffer_ptr.s.back) << 7;
+			/*
+			 * Read pointer to next buffer before we free
+			 * the current buffer.
+			 */
+			next_buffer_ptr =
+				*(uint64_t *)cvmx_phys_to_ptr(buffer_ptr.s.addr - 8);
+			/* FPA pool comes from buf_ptr itself */
+			if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+				aura = cvmx_fpa1_pool_to_fpa3_aura(buffer_ptr.s.pool);
+				cvmx_fpa3_free(cvmx_phys_to_ptr(start_of_buffer),
+					       aura, ncl);
+			} else {
+				cvmx_fpa1_free(cvmx_phys_to_ptr(start_of_buffer),
+					       buffer_ptr.s.pool, ncl);
+			}
+		}
+		buffer_ptr.u64 = next_buffer_ptr;
+	}
+}
+
+void cvmx_helper_setup_legacy_red(int pass_thresh, int drop_thresh)
+{
+	unsigned int node = cvmx_get_node_num();
+	int aura, bpid;
+	int buf_cnt;
+	bool ena_red = 0, ena_drop = 0, ena_bp = 0;
+
+#define FPA_RED_AVG_DLY 1
+#define FPA_RED_LVL_DLY 3
+#define FPA_QOS_AVRG	0
+	/* Trying to make it backward compatible with older chips */
+
+	/* Setting up avg_dly and prb_dly, enable bits */
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3)) {
+		cvmx_fpa3_config_red_params(node, FPA_QOS_AVRG,
+					    FPA_RED_LVL_DLY, FPA_RED_AVG_DLY);
+	}
+
+	/* Disable backpressure on queued buffers, which is the aura on 78xx */
+	/*
+	 * Assumption is that all packets from all interface and ports goes
+	 * in same poolx/aurax for backward compatibility
+	 */
+	aura = cvmx_fpa_get_packet_pool();
+	buf_cnt = cvmx_fpa_get_packet_pool_buffer_count();
+	pass_thresh = buf_cnt - pass_thresh;
+	drop_thresh = buf_cnt - drop_thresh;
+	/* Map aura to bpid 0 */
+	bpid = 0;
+	cvmx_pki_write_aura_bpid(node, aura, bpid);
+	/* Don't enable back pressure */
+	ena_bp = 0;
+	/* enable RED */
+	ena_red = 1;
+	/*
+	 * This will enable RED on all interfaces since
+	 * they all have packet buffers coming from the same aura
+	 */
+	cvmx_helper_setup_aura_qos(node, aura, ena_red, ena_drop, pass_thresh,
+				   drop_thresh, ena_bp, 0);
+}
+
+/**
+ * Setup Random Early Drop to automatically begin dropping packets.
+ *
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_helper_setup_red(int pass_thresh, int drop_thresh)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKI))
+		cvmx_helper_setup_legacy_red(pass_thresh, drop_thresh);
+	else
+		cvmx_ipd_setup_red(pass_thresh, drop_thresh);
+	return 0;
+}
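+
+/*
+ * For example, with a 1024-buffer packet pool a caller might use
+ * (illustrative values only; pass_thresh must be larger than
+ * drop_thresh since both are free-buffer counts):
+ *
+ *	cvmx_helper_setup_red(256, 128);
+ */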
+
+/**
+ * @INTERNAL
+ * Setup the common GMX settings that determine the number of
+ * ports. These settings apply to almost all configurations of all
+ * chips.
+ *
+ * @param xiface Interface to configure
+ * @param num_ports Number of ports on the interface
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_setup_gmx(int xiface, int num_ports)
+{
+	union cvmx_gmxx_tx_prts gmx_tx_prts;
+	union cvmx_gmxx_rx_prts gmx_rx_prts;
+	union cvmx_pko_reg_gmx_port_mode pko_mode;
+	union cvmx_gmxx_txx_thresh gmx_tx_thresh;
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int index;
+
+	/*
+	 * The common BGX settings are already done in the appropriate
+	 * enable functions, nothing to do here.
+	 */
+	if (octeon_has_feature(OCTEON_FEATURE_BGX))
+		return 0;
+
+	/* Tell GMX the number of TX ports on this interface */
+	gmx_tx_prts.u64 = csr_rd(CVMX_GMXX_TX_PRTS(xi.interface));
+	gmx_tx_prts.s.prts = num_ports;
+	csr_wr(CVMX_GMXX_TX_PRTS(xi.interface), gmx_tx_prts.u64);
+
+	/*
+	 * Tell GMX the number of RX ports on this interface.  This only applies
+	 * to *GMII and XAUI ports.
+	 */
+	switch (cvmx_helper_interface_get_mode(xiface)) {
+	case CVMX_HELPER_INTERFACE_MODE_RGMII:
+	case CVMX_HELPER_INTERFACE_MODE_SGMII:
+	case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+	case CVMX_HELPER_INTERFACE_MODE_GMII:
+	case CVMX_HELPER_INTERFACE_MODE_XAUI:
+	case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		if (num_ports > 4) {
+			debug("%s: Illegal num_ports\n", __func__);
+			return -1;
+		}
+
+		gmx_rx_prts.u64 = csr_rd(CVMX_GMXX_RX_PRTS(xi.interface));
+		gmx_rx_prts.s.prts = num_ports;
+		csr_wr(CVMX_GMXX_RX_PRTS(xi.interface), gmx_rx_prts.u64);
+		break;
+
+	default:
+		break;
+	}
+
+	/*
+	 * Skip setting CVMX_PKO_REG_GMX_PORT_MODE on 30XX, 31XX, 50XX,
+	 * and 68XX.
+	 */
+	if (!OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		/* Tell PKO the number of ports on this interface */
+		pko_mode.u64 = csr_rd(CVMX_PKO_REG_GMX_PORT_MODE);
+		if (xi.interface == 0) {
+			if (num_ports == 1)
+				pko_mode.s.mode0 = 4;
+			else if (num_ports == 2)
+				pko_mode.s.mode0 = 3;
+			else if (num_ports <= 4)
+				pko_mode.s.mode0 = 2;
+			else if (num_ports <= 8)
+				pko_mode.s.mode0 = 1;
+			else
+				pko_mode.s.mode0 = 0;
+		} else {
+			if (num_ports == 1)
+				pko_mode.s.mode1 = 4;
+			else if (num_ports == 2)
+				pko_mode.s.mode1 = 3;
+			else if (num_ports <= 4)
+				pko_mode.s.mode1 = 2;
+			else if (num_ports <= 8)
+				pko_mode.s.mode1 = 1;
+			else
+				pko_mode.s.mode1 = 0;
+		}
+		csr_wr(CVMX_PKO_REG_GMX_PORT_MODE, pko_mode.u64);
+	}
+
+	/*
+	 * Set GMX to buffer as much data as possible before starting
+	 * transmit. This reduces the chances that we have a TX under run
+	 * due to memory contention. Any packet that fits entirely in the
+	 * GMX FIFO can never have an under run regardless of memory load.
+	 */
+	gmx_tx_thresh.u64 = csr_rd(CVMX_GMXX_TXX_THRESH(0, xi.interface));
+	/* ccn - common count numerator */
+	int ccn = 0x100;
+
+	/* Choose the max value for the number of ports */
+	if (num_ports <= 1)
+		gmx_tx_thresh.s.cnt = ccn / 1;
+	else if (num_ports == 2)
+		gmx_tx_thresh.s.cnt = ccn / 2;
+	else
+		gmx_tx_thresh.s.cnt = ccn / 4;
+
+	/*
+	 * SPI and XAUI can have lots of ports but the GMX hardware
+	 * only ever has a max of 4
+	 */
+	if (num_ports > 4)
+		num_ports = 4;
+	for (index = 0; index < num_ports; index++)
+		csr_wr(CVMX_GMXX_TXX_THRESH(index, xi.interface), gmx_tx_thresh.u64);
+
+	/*
+	 * For o68, we need to setup the pipes
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX) && xi.interface < CVMX_HELPER_MAX_GMX) {
+		union cvmx_gmxx_txx_pipe config;
+
+		for (index = 0; index < num_ports; index++) {
+			config.u64 = 0;
+
+			if (__cvmx_helper_cfg_pko_port_base(xiface, index) >= 0) {
+				config.u64 = csr_rd(CVMX_GMXX_TXX_PIPE(index,
+								       xi.interface));
+				config.s.nump = __cvmx_helper_cfg_pko_port_num(xiface,
+									       index);
+				config.s.base = __cvmx_helper_cfg_pko_port_base(xiface,
+										index);
+				csr_wr(CVMX_GMXX_TXX_PIPE(index, xi.interface),
+				       config.u64);
+			}
+		}
+	}
+
+	return 0;
+}
+
+int cvmx_helper_get_pko_port(int interface, int port)
+{
+	return cvmx_pko_get_base_pko_port(interface, port);
+}
+
+int cvmx_helper_get_ipd_port(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		const struct ipd_port_map *port_map;
+		int ipd_port;
+
+		if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+			port_map = ipd_port_map_68xx;
+			ipd_port = 0;
+		} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+			port_map = ipd_port_map_78xx;
+			ipd_port = cvmx_helper_node_to_ipd_port(xi.node, 0);
+		} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+			port_map = ipd_port_map_73xx;
+			ipd_port = 0;
+		} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+			port_map = ipd_port_map_75xx;
+			ipd_port = 0;
+		} else {
+			return -1;
+		}
+
+		ipd_port += port_map[xi.interface].first_ipd_port;
+		if (port_map[xi.interface].type == GMII) {
+			cvmx_helper_interface_mode_t mode;
+
+			mode = cvmx_helper_interface_get_mode(xiface);
+			if (mode == CVMX_HELPER_INTERFACE_MODE_XAUI ||
+			    (mode == CVMX_HELPER_INTERFACE_MODE_RXAUI &&
+			     OCTEON_IS_MODEL(OCTEON_CN68XX))) {
+				ipd_port += port_map[xi.interface].ipd_port_adj;
+				return ipd_port;
+			} else {
+				return ipd_port + (index * 16);
+			}
+		} else if (port_map[xi.interface].type == ILK) {
+			return ipd_port + index;
+		} else if (port_map[xi.interface].type == NPI) {
+			return ipd_port + index;
+		} else if (port_map[xi.interface].type == SRIO) {
+			return ipd_port + index;
+		} else if (port_map[xi.interface].type == LB) {
+			return ipd_port + index;
+		}
+
+		debug("ERROR: %s: interface %u:%u bad mode\n",
+		      __func__, xi.node, xi.interface);
+		return -1;
+	} else if (cvmx_helper_interface_get_mode(xiface) ==
+		   CVMX_HELPER_INTERFACE_MODE_AGL) {
+		return 24;
+	}
+
+	switch (xi.interface) {
+	case 0:
+		return index;
+	case 1:
+		return index + 16;
+	case 2:
+		return index + 32;
+	case 3:
+		return index + 36;
+	case 4:
+		return index + 40;
+	case 5:
+		return index + 42;
+	case 6:
+		return index + 44;
+	case 7:
+		return index + 46;
+	}
+	return -1;
+}
+
+int cvmx_helper_get_pknd(int xiface, int index)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		return __cvmx_helper_cfg_pknd(xiface, index);
+
+	return CVMX_INVALID_PKND;
+}
+
+int cvmx_helper_get_bpid(int interface, int port)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		return __cvmx_helper_cfg_bpid(interface, port);
+
+	return CVMX_INVALID_BPID;
+}
+
+/**
+ * Display interface statistics.
+ *
+ * @param port IPD/PKO port number
+ *
+ * @return none
+ */
+void cvmx_helper_show_stats(int port)
+{
+	cvmx_pip_port_status_t status;
+	cvmx_pko_port_status_t pko_status;
+
+	/* ILK stats */
+	if (octeon_has_feature(OCTEON_FEATURE_ILK))
+		__cvmx_helper_ilk_show_stats();
+
+	/* PIP stats */
+	cvmx_pip_get_port_stats(port, 0, &status);
+	debug("port %d: the number of packets - ipd: %d\n", port,
+	      (int)status.packets);
+
+	/* PKO stats */
+	cvmx_pko_get_port_status(port, 0, &pko_status);
+	debug("port %d: the number of packets - pko: %d\n", port,
+	      (int)pko_status.packets);
+
+	/* TODO: other stats */
+}
+
+/**
+ * Returns the interface number for an IPD/PKO port number.
+ *
+ * @param ipd_port IPD/PKO port number
+ *
+ * @return Interface number
+ */
+int cvmx_helper_get_interface_num(int ipd_port)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		const struct ipd_port_map *port_map;
+		int i;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		port_map = ipd_port_map_68xx;
+		for (i = 0; i < CVMX_HELPER_MAX_IFACE; i++) {
+			if (xp.port >= port_map[i].first_ipd_port &&
+			    xp.port <= port_map[i].last_ipd_port)
+				return i;
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		const struct ipd_port_map *port_map;
+		int i;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		port_map = ipd_port_map_78xx;
+		for (i = 0; i < CVMX_HELPER_MAX_IFACE; i++) {
+			if (xp.port >= port_map[i].first_ipd_port &&
+			    xp.port <= port_map[i].last_ipd_port)
+				return cvmx_helper_node_interface_to_xiface(xp.node, i);
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		const struct ipd_port_map *port_map;
+		int i;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		port_map = ipd_port_map_73xx;
+		for (i = 0; i < CVMX_HELPER_MAX_IFACE; i++) {
+			if (xp.port >= port_map[i].first_ipd_port &&
+			    xp.port <= port_map[i].last_ipd_port)
+				return i;
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		const struct ipd_port_map *port_map;
+		int i;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		port_map = ipd_port_map_75xx;
+		for (i = 0; i < CVMX_HELPER_MAX_IFACE; i++) {
+			if (xp.port >= port_map[i].first_ipd_port &&
+			    xp.port <= port_map[i].last_ipd_port)
+				return i;
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN70XX) && ipd_port == 24) {
+		return 4;
+	}
+
+	if (ipd_port < 16)
+		return 0;
+	else if (ipd_port < 32)
+		return 1;
+	else if (ipd_port < 36)
+		return 2;
+	else if (ipd_port < 40)
+		return 3;
+	else if (ipd_port < 42)
+		return 4;
+	else if (ipd_port < 44)
+		return 5;
+	else if (ipd_port < 46)
+		return 6;
+	else if (ipd_port < 48)
+		return 7;
+
+	debug("%s: Illegal IPD port number %d\n", __func__, ipd_port);
+	return -1;
+}
+
+/**
+ * Returns the interface index number for an IPD/PKO port
+ * number.
+ *
+ * @param ipd_port IPD/PKO port number
+ *
+ * @return Interface index number
+ */
+int cvmx_helper_get_interface_index_num(int ipd_port)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		const struct ipd_port_map *port_map;
+		int port;
+		enum port_map_if_type type = INVALID_IF_TYPE;
+		int i;
+		int num_interfaces;
+
+		if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+			port_map = ipd_port_map_68xx;
+		} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+			struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+			port_map = ipd_port_map_78xx;
+			ipd_port = xp.port;
+		} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+			struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+			port_map = ipd_port_map_73xx;
+			ipd_port = xp.port;
+		} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+			struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+			port_map = ipd_port_map_75xx;
+			ipd_port = xp.port;
+		} else {
+			return -1;
+		}
+
+		num_interfaces = cvmx_helper_get_number_of_interfaces();
+
+		/* Get the interface type of the ipd port */
+		for (i = 0; i < num_interfaces; i++) {
+			if (ipd_port >= port_map[i].first_ipd_port &&
+			    ipd_port <= port_map[i].last_ipd_port) {
+				type = port_map[i].type;
+				break;
+			}
+		}
+
+		/* Convert the ipd port to the interface port */
+		switch (type) {
+		/* Ethernet interfaces have a channel in the lower 4 bits
+		 * that does not discriminate traffic and is ignored.
+		 */
+		case GMII:
+			port = ipd_port - port_map[i].first_ipd_port;
+
+			/* CN68XX adds 0x40 to IPD_PORT when in XAUI/RXAUI
+			 * mode of operation, adjust for that case
+			 */
+			if (port >= port_map[i].ipd_port_adj)
+				port -= port_map[i].ipd_port_adj;
+
+			port >>= 4;
+			return port;
+
+		/*
+		 * These interfaces do not have physical ports,
+		 * but have logical channels instead that separate
+		 * traffic into logical streams
+		 */
+		case ILK:
+		case SRIO:
+		case NPI:
+		case LB:
+			port = ipd_port - port_map[i].first_ipd_port;
+			return port;
+
+		default:
+			printf("ERROR: %s: Illegal IPD port number %#x\n",
+			       __func__, ipd_port);
+			return -1;
+		}
+	}
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return ipd_port & 3;
+	if (ipd_port < 32)
+		return ipd_port & 15;
+	else if (ipd_port < 40)
+		return ipd_port & 3;
+	else if (ipd_port < 48)
+		return ipd_port & 1;
+
+	debug("%s: Illegal IPD port number\n", __func__);
+
+	return -1;
+}
+
+/**
+ * Prints out a buffer with the address, hex bytes, and ASCII
+ *
+ * @param	addr	Start address to print on the left
+ * @param[in]	buffer	array of bytes to print
+ * @param	count	Number of bytes to print
+ */
+void cvmx_print_buffer_u8(unsigned int addr, const uint8_t *buffer,
+			  size_t count)
+{
+	uint i;
+
+	while (count) {
+		unsigned int linelen = count < 16 ? count : 16;
+
+		debug("%08x:", addr);
+
+		for (i = 0; i < linelen; i++)
+			debug(" %0*x", 2, buffer[i]);
+
+		while (i++ < 17)
+			debug("   ");
+
+		for (i = 0; i < linelen; i++) {
+			if (buffer[i] >= 0x20 && buffer[i] < 0x7f)
+				debug("%c", buffer[i]);
+			else
+				debug(".");
+		}
+		debug("\n");
+		addr += linelen;
+		buffer += linelen;
+		count -= linelen;
+	}
+}
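+
+/*
+ * For example, cvmx_print_buffer_u8(0x1000, buf, 12) on the ASCII
+ * string "Hello world!" would emit debug output shaped roughly like:
+ *
+ *	00001000: 48 65 6c 6c 6f 20 77 6f 72 6c 64 21   Hello world!
+ */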
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 40/50] mips: octeon: Add cvmx-helper.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (38 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 39/50] mips: octeon: Add cvmx-helper-util.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 41/50] mips: octeon: Add cvmx-pcie.c Stefan Roese
                   ` (12 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-helper.c from 2013 U-Boot. It will be used by the
later-added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-helper.c | 2611 +++++++++++++++++++++++++++
 1 file changed, 2611 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-helper.c

diff --git a/arch/mips/mach-octeon/cvmx-helper.c b/arch/mips/mach-octeon/cvmx-helper.c
new file mode 100644
index 0000000000..529e03a147
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-helper.c
@@ -0,0 +1,2611 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper functions for common, but complicated tasks.
+ */
+
+#include <log.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-csr.h>
+#include <mach/cvmx-bootmem.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-coremask.h>
+
+#include <mach/cvmx-agl-defs.h>
+#include <mach/cvmx-asxx-defs.h>
+#include <mach/cvmx-bgxx-defs.h>
+#include <mach/cvmx-dbg-defs.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-gserx-defs.h>
+#include <mach/cvmx-ipd-defs.h>
+#include <mach/cvmx-l2c-defs.h>
+#include <mach/cvmx-npi-defs.h>
+#include <mach/cvmx-pcsx-defs.h>
+#include <mach/cvmx-pexp-defs.h>
+#include <mach/cvmx-pki-defs.h>
+#include <mach/cvmx-pko-defs.h>
+#include <mach/cvmx-smix-defs.h>
+#include <mach/cvmx-sriox-defs.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-board.h>
+#include <mach/cvmx-helper-fdt.h>
+#include <mach/cvmx-helper-bgx.h>
+#include <mach/cvmx-helper-cfg.h>
+#include <mach/cvmx-helper-ipd.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-helper-pki.h>
+#include <mach/cvmx-helper-pko.h>
+#include <mach/cvmx-helper-pko3.h>
+#include <mach/cvmx-global-resources.h>
+#include <mach/cvmx-pko-internal-ports-range.h>
+#include <mach/cvmx-pko3-queue.h>
+#include <mach/cvmx-gmx.h>
+#include <mach/cvmx-hwpko.h>
+#include <mach/cvmx-ilk.h>
+#include <mach/cvmx-ipd.h>
+#include <mach/cvmx-pip.h>
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by an interface.
+ *
+ * @param mode		Interface mode.
+ *
+ * @param enumerate	Method to get the number of interface ports.
+ *
+ * @param probe		Method to probe an interface to get the number of
+ *			connected ports.
+ *
+ * @param enable	Method to enable an interface
+ *
+ * @param link_get	Method to get the state of an interface link.
+ *
+ * @param link_set	Method to configure an interface link to the specified
+ *			state.
+ *
+ * @param loopback	Method to configure a port in loopback.
+ */
+struct iface_ops {
+	cvmx_helper_interface_mode_t mode;
+	int (*enumerate)(int xiface);
+	int (*probe)(int xiface);
+	int (*enable)(int xiface);
+	cvmx_helper_link_info_t (*link_get)(int ipd_port);
+	int (*link_set)(int ipd_port, cvmx_helper_link_info_t link_info);
+	int (*loopback)(int ipd_port, int en_in, int en_ex);
+};
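+
+/*
+ * The per-mode tables below are dispatched through this structure; a
+ * caller holding the ops pointer for an interface would typically do
+ * something like (sketch only; the lookup itself is omitted):
+ *
+ *	const struct iface_ops *ops = ...;
+ *	int nports = ops->probe(xiface);
+ *	if (ops->enable)
+ *		ops->enable(xiface);
+ *	cvmx_helper_link_info_t link = ops->link_get(ipd_port);
+ */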
+
+/**
+ * @INTERNAL
+ * This structure is used by disabled interfaces.
+ */
+static const struct iface_ops iface_ops_dis = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_DISABLED,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as gmii.
+ */
+static const struct iface_ops iface_ops_gmii = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_GMII,
+	.enumerate = __cvmx_helper_rgmii_probe,
+	.probe = __cvmx_helper_rgmii_probe,
+	.enable = __cvmx_helper_rgmii_enable,
+	.link_get = __cvmx_helper_gmii_link_get,
+	.link_set = __cvmx_helper_rgmii_link_set,
+	.loopback = __cvmx_helper_rgmii_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as rgmii.
+ */
+static const struct iface_ops iface_ops_rgmii = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_RGMII,
+	.enumerate = __cvmx_helper_rgmii_probe,
+	.probe = __cvmx_helper_rgmii_probe,
+	.enable = __cvmx_helper_rgmii_enable,
+	.link_get = __cvmx_helper_rgmii_link_get,
+	.link_set = __cvmx_helper_rgmii_link_set,
+	.loopback = __cvmx_helper_rgmii_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as sgmii that use the gmx mac.
+ */
+static const struct iface_ops iface_ops_sgmii = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_SGMII,
+	.enumerate = __cvmx_helper_sgmii_enumerate,
+	.probe = __cvmx_helper_sgmii_probe,
+	.enable = __cvmx_helper_sgmii_enable,
+	.link_get = __cvmx_helper_sgmii_link_get,
+	.link_set = __cvmx_helper_sgmii_link_set,
+	.loopback = __cvmx_helper_sgmii_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as sgmii that use the bgx mac.
+ */
+static const struct iface_ops iface_ops_bgx_sgmii = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_SGMII,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_sgmii_enable,
+	.link_get = __cvmx_helper_bgx_sgmii_link_get,
+	.link_set = __cvmx_helper_bgx_sgmii_link_set,
+	.loopback = __cvmx_helper_bgx_sgmii_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as qsgmii.
+ */
+static const struct iface_ops iface_ops_qsgmii = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_QSGMII,
+	.enumerate = __cvmx_helper_sgmii_enumerate,
+	.probe = __cvmx_helper_sgmii_probe,
+	.enable = __cvmx_helper_sgmii_enable,
+	.link_get = __cvmx_helper_sgmii_link_get,
+	.link_set = __cvmx_helper_sgmii_link_set,
+	.loopback = __cvmx_helper_sgmii_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as xaui using the gmx mac.
+ */
+static const struct iface_ops iface_ops_xaui = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_XAUI,
+	.enumerate = __cvmx_helper_xaui_enumerate,
+	.probe = __cvmx_helper_xaui_probe,
+	.enable = __cvmx_helper_xaui_enable,
+	.link_get = __cvmx_helper_xaui_link_get,
+	.link_set = __cvmx_helper_xaui_link_set,
+	.loopback = __cvmx_helper_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as xaui using the bgx mac.
+ */
+static const struct iface_ops iface_ops_bgx_xaui = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_XAUI,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as rxaui.
+ */
+static const struct iface_ops iface_ops_rxaui = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_RXAUI,
+	.enumerate = __cvmx_helper_xaui_enumerate,
+	.probe = __cvmx_helper_xaui_probe,
+	.enable = __cvmx_helper_xaui_enable,
+	.link_get = __cvmx_helper_xaui_link_get,
+	.link_set = __cvmx_helper_xaui_link_set,
+	.loopback = __cvmx_helper_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as rxaui using the bgx mac.
+ */
+static const struct iface_ops iface_ops_bgx_rxaui = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_RXAUI,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as xlaui.
+ */
+static const struct iface_ops iface_ops_bgx_xlaui = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_XLAUI,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as xfi.
+ */
+static const struct iface_ops iface_ops_bgx_xfi = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_XFI,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
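+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as 10G_KR.
+ */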
+static const struct iface_ops iface_ops_bgx_10G_KR = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_10G_KR,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
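+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as 40G_KR4.
+ */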
+static const struct iface_ops iface_ops_bgx_40G_KR4 = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_40G_KR4,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_xaui_enable,
+	.link_get = __cvmx_helper_bgx_xaui_link_get,
+	.link_set = __cvmx_helper_bgx_xaui_link_set,
+	.loopback = __cvmx_helper_bgx_xaui_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as ilk.
+ */
+static const struct iface_ops iface_ops_ilk = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_ILK,
+	.enumerate = __cvmx_helper_ilk_enumerate,
+	.probe = __cvmx_helper_ilk_probe,
+	.enable = __cvmx_helper_ilk_enable,
+	.link_get = __cvmx_helper_ilk_link_get,
+	.link_set = __cvmx_helper_ilk_link_set,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as npi.
+ */
+static const struct iface_ops iface_ops_npi = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_NPI,
+	.enumerate = __cvmx_helper_npi_probe,
+	.probe = __cvmx_helper_npi_probe,
+	.enable = __cvmx_helper_npi_enable,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as srio.
+ */
+static const struct iface_ops iface_ops_srio = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_SRIO,
+	.enumerate = __cvmx_helper_srio_probe,
+	.probe = __cvmx_helper_srio_probe,
+	.enable = __cvmx_helper_srio_enable,
+	.link_get = __cvmx_helper_srio_link_get,
+	.link_set = __cvmx_helper_srio_link_set,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as agl.
+ */
+static const struct iface_ops iface_ops_agl = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_AGL,
+	.enumerate = __cvmx_helper_agl_enumerate,
+	.probe = __cvmx_helper_agl_probe,
+	.enable = __cvmx_helper_agl_enable,
+	.link_get = __cvmx_helper_agl_link_get,
+	.link_set = __cvmx_helper_agl_link_set,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as mixed mode, some ports are sgmii and some are xfi.
+ */
+static const struct iface_ops iface_ops_bgx_mixed = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_MIXED,
+	.enumerate = __cvmx_helper_bgx_enumerate,
+	.probe = __cvmx_helper_bgx_probe,
+	.enable = __cvmx_helper_bgx_mixed_enable,
+	.link_get = __cvmx_helper_bgx_mixed_link_get,
+	.link_set = __cvmx_helper_bgx_mixed_link_set,
+	.loopback = __cvmx_helper_bgx_mixed_configure_loopback,
+};
+
+/**
+ * @INTERNAL
+ * This structure specifies the interface methods used by interfaces
+ * configured as loop.
+ */
+static const struct iface_ops iface_ops_loop = {
+	.mode = CVMX_HELPER_INTERFACE_MODE_LOOP,
+	.enumerate = __cvmx_helper_loop_enumerate,
+	.probe = __cvmx_helper_loop_probe,
+};
+
+const struct iface_ops *iface_node_ops[CVMX_MAX_NODES][CVMX_HELPER_MAX_IFACE];
+#define iface_ops iface_node_ops[0]
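+/*
+ * Note: on single-node systems iface_ops[i] is simply shorthand for
+ * iface_node_ops[0][i]; multi-node code must index by node explicitly.
+ */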
+
+struct cvmx_iface {
+	int cvif_ipd_nports;
+	int cvif_has_fcs; /* PKO fcs for this interface. */
+	enum cvmx_pko_padding cvif_padding;
+	cvmx_helper_link_info_t *cvif_ipd_port_link_info;
+};
+
+/*
+ * This has to be static as U-Boot expects to probe an interface and
+ * then get the number of its ports.
+ */
+static struct cvmx_iface cvmx_interfaces[CVMX_MAX_NODES][CVMX_HELPER_MAX_IFACE];
+
+int __cvmx_helper_get_num_ipd_ports(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_iface *piface;
+
+	if (xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return -1;
+
+	piface = &cvmx_interfaces[xi.node][xi.interface];
+	return piface->cvif_ipd_nports;
+}
+
+enum cvmx_pko_padding __cvmx_helper_get_pko_padding(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_iface *piface;
+
+	if (xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return CVMX_PKO_PADDING_NONE;
+
+	piface = &cvmx_interfaces[xi.node][xi.interface];
+	return piface->cvif_padding;
+}
+
+int __cvmx_helper_init_interface(int xiface, int num_ipd_ports, int has_fcs,
+				 enum cvmx_pko_padding pad)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_iface *piface;
+	cvmx_helper_link_info_t *p;
+	int i;
+	int sz;
+	u64 addr;
+	char name[32];
+
+	if (xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return -1;
+
+	piface = &cvmx_interfaces[xi.node][xi.interface];
+	piface->cvif_ipd_nports = num_ipd_ports;
+	piface->cvif_padding = pad;
+
+	piface->cvif_has_fcs = has_fcs;
+
+	/*
+	 * allocate the per-ipd_port link_info structure
+	 */
+	sz = piface->cvif_ipd_nports * sizeof(cvmx_helper_link_info_t);
+	snprintf(name, sizeof(name), "__int_%d_link_info", xi.interface);
+	addr = CAST64(cvmx_bootmem_alloc_named_range_once(sz, 0, 0,
+							  __alignof(cvmx_helper_link_info_t),
+							  name, NULL));
+	piface->cvif_ipd_port_link_info =
+		(cvmx_helper_link_info_t *)__cvmx_phys_addr_to_ptr(addr, sz);
+	if (!piface->cvif_ipd_port_link_info) {
+		if (sz != 0)
+			debug("iface %d failed to alloc link info\n", xi.interface);
+		return -1;
+	}
+
+	/* Initialize the per-port link info to all-zero (link down) */
+	p = piface->cvif_ipd_port_link_info;
+	for (i = 0; i < piface->cvif_ipd_nports; i++)
+		p[i].u64 = 0;
+	return 0;
+}
+
+/*
+ * Shut down the interfaces; free the resources.
+ * @INTERNAL
+ */
+void __cvmx_helper_shutdown_interfaces_node(unsigned int node)
+{
+	int i;
+	int nifaces; /* number of interfaces */
+	struct cvmx_iface *piface;
+
+	nifaces = cvmx_helper_get_number_of_interfaces();
+	for (i = 0; i < nifaces; i++) {
+		piface = &cvmx_interfaces[node][i];
+
+		/*
+		 * For SE apps, bootmem was meant to be allocated and never
+		 * freed.
+		 */
+		piface->cvif_ipd_port_link_info = NULL;
+	}
+}
+
+void __cvmx_helper_shutdown_interfaces(void)
+{
+	unsigned int node = cvmx_get_node_num();
+
+	__cvmx_helper_shutdown_interfaces_node(node);
+}
+
+int __cvmx_helper_set_link_info(int xiface, int index, cvmx_helper_link_info_t link_info)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_iface *piface;
+
+	if (xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return -1;
+
+	piface = &cvmx_interfaces[xi.node][xi.interface];
+
+	if (piface->cvif_ipd_port_link_info) {
+		piface->cvif_ipd_port_link_info[index] = link_info;
+		return 0;
+	}
+
+	return -1;
+}
+
+cvmx_helper_link_info_t __cvmx_helper_get_link_info(int xiface, int port)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_iface *piface;
+	cvmx_helper_link_info_t err;
+
+	err.u64 = 0;
+
+	if (xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return err;
+	piface = &cvmx_interfaces[xi.node][xi.interface];
+
+	if (piface->cvif_ipd_port_link_info)
+		return piface->cvif_ipd_port_link_info[port];
+
+	return err;
+}
+
+/**
+ * Returns whether FCS is enabled for the specified interface
+ *
+ * @param xiface Interface to check
+ *
+ * @return Zero if FCS is not used, non-zero if FCS is used.
+ */
+int __cvmx_helper_get_has_fcs(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	return cvmx_interfaces[xi.node][xi.interface].cvif_has_fcs;
+}
+
+u64 cvmx_rgmii_backpressure_dis = 1;
+
+typedef int (*cvmx_export_config_t)(void);
+cvmx_export_config_t cvmx_export_app_config;
+
+void cvmx_rgmii_set_back_pressure(uint64_t backpressure_dis)
+{
+	cvmx_rgmii_backpressure_dis = backpressure_dis;
+}
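+
+/*
+ * Note: the value set here is sampled by
+ * __cvmx_helper_global_setup_backpressure() during packet IO
+ * initialization, so cvmx_rgmii_set_back_pressure() only takes effect
+ * when called before cvmx_helper_initialize_packet_io_global().
+ */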
+
+/*
+ * Internal functions that are not exported in the .h file but must be
+ * declared to make gcc happy.
+ */
+extern cvmx_helper_link_info_t __cvmx_helper_get_link_info(int interface, int port);
+
+/**
+ * cvmx_override_iface_phy_mode(int interface, int index) is a function pointer.
+ * It is meant to allow customization of interfaces which do not have a PHY.
+ *
+ * @return 0 if the MAC decides TX_CONFIG_REG, or 1 if the PHY decides TX_CONFIG_REG.
+ *
+ * If this function pointer is NULL then it defaults to the MAC.
+ */
+int (*cvmx_override_iface_phy_mode)(int interface, int index);
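+
+/*
+ * Illustrative sketch (not part of the original code): an application
+ * could install an override so that a hypothetical PHY-less interface
+ * lets the MAC decide TX_CONFIG_REG:
+ *
+ *	static int my_phy_mode_override(int interface, int index)
+ *	{
+ *		return 0;	// MAC decides TX_CONFIG_REG
+ *	}
+ *	...
+ *	cvmx_override_iface_phy_mode = my_phy_mode_override;
+ */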
+
+/**
+ * cvmx_override_ipd_port_setup(int ipd_port) is a function
+ * pointer. It is meant to allow customization of the IPD
+ * port/port kind setup before packet input/output comes online.
+ * It is called after cvmx-helper does the default IPD configuration,
+ * but before IPD is enabled. Users should set this pointer to a
+ * function before calling any cvmx-helper operations.
+ */
+void (*cvmx_override_ipd_port_setup)(int ipd_port) = NULL;
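+
+/*
+ * Illustrative sketch (not part of the original code): per the comment
+ * above, an application would hook the IPD port setup like this, with
+ * my_ipd_port_setup() being a hypothetical application callback:
+ *
+ *	static void my_ipd_port_setup(int ipd_port)
+ *	{
+ *		// adjust port/port-kind configuration before IPD is enabled
+ *	}
+ *	...
+ *	cvmx_override_ipd_port_setup = my_ipd_port_setup;
+ */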
+
+/**
+ * Return the number of interfaces the chip has. Each interface
+ * may have multiple ports. The number of interfaces varies with the
+ * Octeon model, as enumerated in the model checks below.
+ *
+ * @return Number of interfaces on chip
+ */
+int cvmx_helper_get_number_of_interfaces(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 9;
+	else if (OCTEON_IS_MODEL(OCTEON_CN66XX))
+		if (OCTEON_IS_MODEL(OCTEON_CN66XX_PASS1_0))
+			return 7;
+		else
+			return 8;
+	else if (OCTEON_IS_MODEL(OCTEON_CN63XX))
+		return 6;
+	else if (OCTEON_IS_MODEL(OCTEON_CN61XX) || OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		return 4;
+	else if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return 5;
+	else if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 10;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 5;
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 5;
+	else
+		return 3;
+}
+
+int __cvmx_helper_early_ports_on_interface(int interface)
+{
+	int ports;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		return cvmx_helper_interface_enumerate(interface);
+
+	ports = cvmx_helper_interface_enumerate(interface);
+	ports = __cvmx_helper_board_interface_probe(interface, ports);
+
+	return ports;
+}
+
+/**
+ * Return the number of ports on an interface. Depending on the
+ * chip and configuration, this can be 1-16. A value of 0
+ * specifies that the interface doesn't exist or isn't usable.
+ *
+ * @param xiface Interface to get the port count for
+ *
+ * @return Number of ports on the interface. Can be zero.
+ */
+int cvmx_helper_ports_on_interface(int xiface)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		return cvmx_helper_interface_enumerate(xiface);
+	else
+		return __cvmx_helper_get_num_ipd_ports(xiface);
+}
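+
+/*
+ * Usage sketch (illustrative, not part of the original code): walk all
+ * interfaces on the local node and count their ports:
+ *
+ *	int node = cvmx_get_node_num();
+ *	int iface, xiface, total = 0;
+ *
+ *	for (iface = 0; iface < cvmx_helper_get_number_of_interfaces(); iface++) {
+ *		xiface = cvmx_helper_node_interface_to_xiface(node, iface);
+ *		total += cvmx_helper_ports_on_interface(xiface);
+ *	}
+ */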
+
+/**
+ * @INTERNAL
+ * Return interface mode for CN70XX.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cn70xx(int interface)
+{
+	/* SGMII/RXAUI/QSGMII */
+	if (interface < 2) {
+		enum cvmx_qlm_mode qlm_mode =
+			cvmx_qlm_get_dlm_mode(0, interface);
+
+		if (qlm_mode == CVMX_QLM_MODE_SGMII)
+			iface_ops[interface] = &iface_ops_sgmii;
+		else if (qlm_mode == CVMX_QLM_MODE_QSGMII)
+			iface_ops[interface] = &iface_ops_qsgmii;
+		else if (qlm_mode == CVMX_QLM_MODE_RXAUI)
+			iface_ops[interface] = &iface_ops_rxaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} else if (interface == 2) { /* DPI */
+		iface_ops[interface] = &iface_ops_npi;
+	} else if (interface == 3) { /* LOOP */
+		iface_ops[interface] = &iface_ops_loop;
+	} else if (interface == 4) { /* RGMII (AGL) */
+		cvmx_agl_prtx_ctl_t prtx_ctl;
+
+		prtx_ctl.u64 = csr_rd(CVMX_AGL_PRTX_CTL(0));
+		if (prtx_ctl.s.mode == 0)
+			iface_ops[interface] = &iface_ops_agl;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} else {
+		iface_ops[interface] = &iface_ops_dis;
+	}
+
+	return iface_ops[interface]->mode;
+}
+
+/**
+ * @INTERNAL
+ * Return interface mode for CN78XX.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cn78xx(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	/* SGMII/RXAUI/XAUI */
+	if (xi.interface < 6) {
+		int qlm = cvmx_qlm_lmac(xiface, 0);
+		enum cvmx_qlm_mode qlm_mode;
+
+		if (qlm == -1) {
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_dis;
+			return iface_node_ops[xi.node][xi.interface]->mode;
+		}
+		qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, qlm);
+
+		if (qlm_mode == CVMX_QLM_MODE_SGMII)
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_bgx_sgmii;
+		else if (qlm_mode == CVMX_QLM_MODE_XAUI)
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_bgx_xaui;
+		else if (qlm_mode == CVMX_QLM_MODE_XLAUI)
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_bgx_xlaui;
+		else if (qlm_mode == CVMX_QLM_MODE_XFI)
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_bgx_xfi;
+		else if (qlm_mode == CVMX_QLM_MODE_RXAUI)
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_bgx_rxaui;
+		else
+			iface_node_ops[xi.node][xi.interface] = &iface_ops_dis;
+	} else if (xi.interface < 8) {
+		enum cvmx_qlm_mode qlm_mode;
+		int found = 0;
+		int i;
+		int intf, lane_mask;
+
+		if (xi.interface == 6) {
+			intf = 6;
+			lane_mask = cvmx_ilk_lane_mask[xi.node][0];
+		} else {
+			intf = 7;
+			lane_mask = cvmx_ilk_lane_mask[xi.node][1];
+		}
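+		/*
+		 * Each nibble of lane_mask selects one QLM, starting at
+		 * QLM4 for mask 0xf: 0xf0 is QLM5, 0xf00 is QLM6 and
+		 * 0xf000 is QLM7. ILK is enabled only if every selected
+		 * QLM is configured for ILK mode.
+		 */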
+		switch (lane_mask) {
+		default:
+		case 0x0:
+			iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xf:
+			qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, 4);
+			if (qlm_mode == CVMX_QLM_MODE_ILK)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xff:
+			found = 0;
+			for (i = 4; i < 6; i++) {
+				qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, i);
+				if (qlm_mode == CVMX_QLM_MODE_ILK)
+					found++;
+			}
+			if (found == 2)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xfff:
+			found = 0;
+			for (i = 4; i < 7; i++) {
+				qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, i);
+				if (qlm_mode == CVMX_QLM_MODE_ILK)
+					found++;
+			}
+			if (found == 3)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xff00:
+			found = 0;
+			for (i = 6; i < 8; i++) {
+				qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, i);
+				if (qlm_mode == CVMX_QLM_MODE_ILK)
+					found++;
+			}
+			if (found == 2)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xf0:
+			qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, 5);
+			if (qlm_mode == CVMX_QLM_MODE_ILK)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xf00:
+			qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, 6);
+			if (qlm_mode == CVMX_QLM_MODE_ILK)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xf000:
+			qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, 7);
+			if (qlm_mode == CVMX_QLM_MODE_ILK)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		case 0xfff0:
+			found = 0;
+			for (i = 5; i < 8; i++) {
+				qlm_mode = cvmx_qlm_get_mode_cn78xx(xi.node, i);
+				if (qlm_mode == CVMX_QLM_MODE_ILK)
+					found++;
+			}
+			if (found == 3)
+				iface_node_ops[xi.node][intf] = &iface_ops_ilk;
+			else
+				iface_node_ops[xi.node][intf] = &iface_ops_dis;
+			break;
+		}
+	} else if (xi.interface == 8) { /* DPI */
+		int qlm;
+
+		for (qlm = 0; qlm < 5; qlm++) {
+			/* if GSERX_CFG[pcie] == 1, then enable npi */
+			if (csr_rd_node(xi.node, CVMX_GSERX_CFG(qlm)) & 0x1) {
+				iface_node_ops[xi.node][xi.interface] =
+					&iface_ops_npi;
+				return iface_node_ops[xi.node][xi.interface]->mode;
+			}
+		}
+		iface_node_ops[xi.node][xi.interface] = &iface_ops_dis;
+	} else if (xi.interface == 9) { /* LOOP */
+		iface_node_ops[xi.node][xi.interface] = &iface_ops_loop;
+	} else {
+		iface_node_ops[xi.node][xi.interface] = &iface_ops_dis;
+	}
+
+	return iface_node_ops[xi.node][xi.interface]->mode;
+}
+
+/**
+ * @INTERNAL
+ * Return interface mode for CN73XX.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cn73xx(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int interface = xi.interface;
+
+	/* SGMII/XAUI/XLAUI/XFI */
+	if (interface < 3) {
+		int qlm = cvmx_qlm_lmac(xiface, 0);
+		enum cvmx_qlm_mode qlm_mode;
+
+		if (qlm == -1) {
+			iface_ops[interface] = &iface_ops_dis;
+			return iface_ops[interface]->mode;
+		}
+		qlm_mode = cvmx_qlm_get_mode(qlm);
+
+		switch (qlm_mode) {
+		case CVMX_QLM_MODE_SGMII:
+		case CVMX_QLM_MODE_SGMII_2X1:
+		case CVMX_QLM_MODE_RGMII_SGMII:
+		case CVMX_QLM_MODE_RGMII_SGMII_1X1:
+			iface_ops[interface] = &iface_ops_bgx_sgmii;
+			break;
+		case CVMX_QLM_MODE_XAUI:
+		case CVMX_QLM_MODE_RGMII_XAUI:
+			iface_ops[interface] = &iface_ops_bgx_xaui;
+			break;
+		case CVMX_QLM_MODE_RXAUI:
+		case CVMX_QLM_MODE_RXAUI_1X2:
+		case CVMX_QLM_MODE_RGMII_RXAUI:
+			iface_ops[interface] = &iface_ops_bgx_rxaui;
+			break;
+		case CVMX_QLM_MODE_XLAUI:
+		case CVMX_QLM_MODE_RGMII_XLAUI:
+			iface_ops[interface] = &iface_ops_bgx_xlaui;
+			break;
+		case CVMX_QLM_MODE_XFI:
+		case CVMX_QLM_MODE_XFI_1X2:
+		case CVMX_QLM_MODE_RGMII_XFI:
+			iface_ops[interface] = &iface_ops_bgx_xfi;
+			break;
+		case CVMX_QLM_MODE_10G_KR:
+		case CVMX_QLM_MODE_10G_KR_1X2:
+		case CVMX_QLM_MODE_RGMII_10G_KR:
+			iface_ops[interface] = &iface_ops_bgx_10G_KR;
+			break;
+		case CVMX_QLM_MODE_40G_KR4:
+		case CVMX_QLM_MODE_RGMII_40G_KR4:
+			iface_ops[interface] = &iface_ops_bgx_40G_KR4;
+			break;
+		case CVMX_QLM_MODE_MIXED:
+			iface_ops[interface] = &iface_ops_bgx_mixed;
+			break;
+		default:
+			iface_ops[interface] = &iface_ops_dis;
+			break;
+		}
+	} else if (interface == 3) { /* DPI */
+		iface_ops[interface] = &iface_ops_npi;
+	} else if (interface == 4) { /* LOOP */
+		iface_ops[interface] = &iface_ops_loop;
+	} else {
+		iface_ops[interface] = &iface_ops_dis;
+	}
+
+	return iface_ops[interface]->mode;
+}
+
+/**
+ * @INTERNAL
+ * Return interface mode for CNF75XX.
+ *
+ * CNF75XX has a single BGX block, which is attached to two DLMs:
+ * the first, GSER4, only supports SGMII mode, while the second,
+ * GSER5, supports the 1G/10G single-lane modes, i.e. SGMII, XFI and 10G-KR.
+ * Each half-BGX is thus treated as a separate interface with two ports.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cnf75xx(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int interface = xi.interface;
+
+	/* BGX0: SGMII (DLM4/DLM5)/XFI(DLM5)  */
+	if (interface < 1) {
+		enum cvmx_qlm_mode qlm_mode;
+		int qlm = cvmx_qlm_lmac(xiface, 0);
+
+		if (qlm == -1) {
+			iface_ops[interface] = &iface_ops_dis;
+			return iface_ops[interface]->mode;
+		}
+		qlm_mode = cvmx_qlm_get_mode(qlm);
+
+		switch (qlm_mode) {
+		case CVMX_QLM_MODE_SGMII:
+		case CVMX_QLM_MODE_SGMII_2X1:
+			iface_ops[interface] = &iface_ops_bgx_sgmii;
+			break;
+		case CVMX_QLM_MODE_XFI_1X2:
+			iface_ops[interface] = &iface_ops_bgx_xfi;
+			break;
+		case CVMX_QLM_MODE_10G_KR_1X2:
+			iface_ops[interface] = &iface_ops_bgx_10G_KR;
+			break;
+		case CVMX_QLM_MODE_MIXED:
+			iface_ops[interface] = &iface_ops_bgx_mixed;
+			break;
+		default:
+			iface_ops[interface] = &iface_ops_dis;
+			break;
+		}
+	} else if ((interface < 3) && OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_sriox_status_reg_t sriox_status_reg;
+		int srio_port = interface - 1;
+
+		sriox_status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(srio_port));
+
+		if (sriox_status_reg.s.srio)
+			iface_ops[interface] = &iface_ops_srio;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} else if (interface == 3) { /* DPI */
+		iface_ops[interface] = &iface_ops_npi;
+	} else if (interface == 4) { /* LOOP */
+		iface_ops[interface] = &iface_ops_loop;
+	} else {
+		iface_ops[interface] = &iface_ops_dis;
+	}
+
+	return iface_ops[interface]->mode;
+}
+
+/**
+ * @INTERNAL
+ * Return interface mode for CN68xx.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cn68xx(int interface)
+{
+	union cvmx_mio_qlmx_cfg qlm_cfg;
+
+	switch (interface) {
+	case 0:
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(0));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (qlm_cfg.s.qlm_cfg == 7)
+			iface_ops[interface] = &iface_ops_rxaui;
+		else if (qlm_cfg.s.qlm_cfg == 2)
+			iface_ops[interface] = &iface_ops_sgmii;
+		else if (qlm_cfg.s.qlm_cfg == 3)
+			iface_ops[interface] = &iface_ops_xaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+		break;
+
+	case 1:
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(0));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (qlm_cfg.s.qlm_cfg == 7)
+			iface_ops[interface] = &iface_ops_rxaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+		break;
+
+	case 2:
+	case 3:
+	case 4:
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(interface));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (qlm_cfg.s.qlm_cfg == 2)
+			iface_ops[interface] = &iface_ops_sgmii;
+		else if (qlm_cfg.s.qlm_cfg == 3)
+			iface_ops[interface] = &iface_ops_xaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+		break;
+
+	case 5:
+	case 6:
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(interface - 4));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (qlm_cfg.s.qlm_cfg == 1)
+			iface_ops[interface] = &iface_ops_ilk;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+		break;
+
+	case 7: {
+		union cvmx_mio_qlmx_cfg qlm_cfg1;
+		/* Check if PCIe0/PCIe1 is configured for PCIe */
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(3));
+		qlm_cfg1.u64 = csr_rd(CVMX_MIO_QLMX_CFG(1));
+		/* QLM is disabled when QLM SPD is 15. */
+		if ((qlm_cfg.s.qlm_spd != 15 && qlm_cfg.s.qlm_cfg == 0) ||
+		    (qlm_cfg1.s.qlm_spd != 15 && qlm_cfg1.s.qlm_cfg == 0))
+			iface_ops[interface] = &iface_ops_npi;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} break;
+
+	case 8:
+		iface_ops[interface] = &iface_ops_loop;
+		break;
+
+	default:
+		iface_ops[interface] = &iface_ops_dis;
+		break;
+	}
+
+	return iface_ops[interface]->mode;
+}
+
+/**
+ * @INTERNAL
+ * Return interface mode for an Octeon II
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_octeon2(int interface)
+{
+	union cvmx_gmxx_inf_mode mode;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return __cvmx_get_mode_cn68xx(interface);
+
+	if (interface == 2) {
+		iface_ops[interface] = &iface_ops_npi;
+	} else if (interface == 3) {
+		iface_ops[interface] = &iface_ops_loop;
+	} else if ((OCTEON_IS_MODEL(OCTEON_CN63XX) &&
+		    (interface == 4 || interface == 5)) ||
+		   (OCTEON_IS_MODEL(OCTEON_CN66XX) && interface >= 4 &&
+		    interface <= 7)) {
+		/* Only present in CN63XX & CN66XX Octeon model */
+		union cvmx_sriox_status_reg sriox_status_reg;
+
+		/* cn66xx pass1.0 has only 2 SRIO interfaces. */
+		if ((interface == 5 || interface == 7) &&
+		    OCTEON_IS_MODEL(OCTEON_CN66XX_PASS1_0)) {
+			iface_ops[interface] = &iface_ops_dis;
+		} else if (interface == 5 && OCTEON_IS_MODEL(OCTEON_CN66XX)) {
+			/*
+			 * Later passes of cn66xx support SRIO0 - x4/x2/x1,
+			 * SRIO2 - x2/x1, SRIO3 - x1
+			 */
+			iface_ops[interface] = &iface_ops_dis;
+		} else {
+			sriox_status_reg.u64 =
+				csr_rd(CVMX_SRIOX_STATUS_REG(interface - 4));
+			if (sriox_status_reg.s.srio)
+				iface_ops[interface] = &iface_ops_srio;
+			else
+				iface_ops[interface] = &iface_ops_dis;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN66XX)) {
+		union cvmx_mio_qlmx_cfg mio_qlm_cfg;
+
+		/* QLM2 is SGMII0 and QLM1 is SGMII1 */
+		if (interface == 0) {
+			mio_qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(2));
+		} else if (interface == 1) {
+			mio_qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(1));
+		} else {
+			iface_ops[interface] = &iface_ops_dis;
+			return iface_ops[interface]->mode;
+		}
+
+		if (mio_qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (mio_qlm_cfg.s.qlm_cfg == 9)
+			iface_ops[interface] = &iface_ops_sgmii;
+		else if (mio_qlm_cfg.s.qlm_cfg == 11)
+			iface_ops[interface] = &iface_ops_xaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN61XX)) {
+		union cvmx_mio_qlmx_cfg qlm_cfg;
+
+		if (interface == 0) {
+			qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(2));
+		} else if (interface == 1) {
+			qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(0));
+		} else {
+			iface_ops[interface] = &iface_ops_dis;
+			return iface_ops[interface]->mode;
+		}
+
+		if (qlm_cfg.s.qlm_spd == 15)
+			iface_ops[interface] = &iface_ops_dis;
+		else if (qlm_cfg.s.qlm_cfg == 2)
+			iface_ops[interface] = &iface_ops_sgmii;
+		else if (qlm_cfg.s.qlm_cfg == 3)
+			iface_ops[interface] = &iface_ops_xaui;
+		else
+			iface_ops[interface] = &iface_ops_dis;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF71XX)) {
+		if (interface == 0) {
+			union cvmx_mio_qlmx_cfg qlm_cfg;
+
+			qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(0));
+			if (qlm_cfg.s.qlm_cfg == 2)
+				iface_ops[interface] = &iface_ops_sgmii;
+			else
+				iface_ops[interface] = &iface_ops_dis;
+		} else {
+			iface_ops[interface] = &iface_ops_dis;
+		}
+	} else if (interface == 1 && OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+		iface_ops[interface] = &iface_ops_dis;
+	} else {
+		mode.u64 = csr_rd(CVMX_GMXX_INF_MODE(interface));
+
+		if (OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+			switch (mode.cn63xx.mode) {
+			case 0:
+				iface_ops[interface] = &iface_ops_sgmii;
+				break;
+
+			case 1:
+				iface_ops[interface] = &iface_ops_xaui;
+				break;
+
+			default:
+				iface_ops[interface] = &iface_ops_dis;
+				break;
+			}
+		} else {
+			if (!mode.s.en)
+				iface_ops[interface] = &iface_ops_dis;
+			else if (mode.s.type)
+				iface_ops[interface] = &iface_ops_gmii;
+			else
+				iface_ops[interface] = &iface_ops_rgmii;
+		}
+	}
+
+	return iface_ops[interface]->mode;
+}
+
+/**
+ * Get the operating mode of an interface. Depending on the Octeon
+ * chip and configuration, this function returns an enumeration
+ * of the type of packet I/O supported by an interface.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Mode of the interface. Unknown or unsupported interfaces return
+ *         DISABLED.
+ */
+cvmx_helper_interface_mode_t cvmx_helper_interface_get_mode(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (xi.interface < 0 ||
+	    xi.interface >= cvmx_helper_get_number_of_interfaces())
+		return CVMX_HELPER_INTERFACE_MODE_DISABLED;
+
+	/*
+	 * Check if the interface mode has been already cached. If it has,
+	 * simply return it. Otherwise, fall through the rest of the code to
+	 * determine the interface mode and cache it in iface_ops.
+	 */
+	if (iface_node_ops[xi.node][xi.interface]) {
+		cvmx_helper_interface_mode_t mode;
+
+		mode = iface_node_ops[xi.node][xi.interface]->mode;
+		return mode;
+	}
+
+	/*
+	 * OCTEON III models
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return __cvmx_get_mode_cn70xx(xi.interface);
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return __cvmx_get_mode_cn78xx(xiface);
+
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_helper_interface_mode_t mode;
+
+		mode = __cvmx_get_mode_cnf75xx(xiface);
+		return mode;
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		cvmx_helper_interface_mode_t mode;
+
+		mode = __cvmx_get_mode_cn73xx(xiface);
+		return mode;
+	}
+
+	/*
+	 * Octeon II models
+	 */
+	if (OCTEON_IS_OCTEON2())
+		return __cvmx_get_mode_octeon2(xi.interface);
+
+	/*
+	 * Octeon and Octeon Plus models
+	 */
+	if (xi.interface == 2) {
+		iface_ops[xi.interface] = &iface_ops_npi;
+	} else if (xi.interface == 3) {
+		iface_ops[xi.interface] = &iface_ops_dis;
+	} else {
+		union cvmx_gmxx_inf_mode mode;
+
+		mode.u64 = csr_rd(CVMX_GMXX_INF_MODE(xi.interface));
+
+		if (!mode.s.en)
+			iface_ops[xi.interface] = &iface_ops_dis;
+		else if (mode.s.type)
+			iface_ops[xi.interface] = &iface_ops_gmii;
+		else
+			iface_ops[xi.interface] = &iface_ops_rgmii;
+	}
+
+	return iface_ops[xi.interface]->mode;
+}
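+
+/*
+ * Usage sketch (illustrative, not part of the original code): query and
+ * print the mode of one interface:
+ *
+ *	cvmx_helper_interface_mode_t mode;
+ *
+ *	mode = cvmx_helper_interface_get_mode(xiface);
+ *	printf("interface mode: %s\n",
+ *	       cvmx_helper_interface_mode_to_string(mode));
+ */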
+
+/**
+ * Determine the actual number of hardware ports connected to an
+ * interface. It doesn't setup the ports or enable them.
+ *
+ * @param xiface Interface to enumerate
+ *
+ * @return The number of ports on the interface, negative on failure
+ */
+int cvmx_helper_interface_enumerate(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int result = 0;
+
+	cvmx_helper_interface_get_mode(xiface);
+	if (iface_node_ops[xi.node][xi.interface]->enumerate)
+		result = iface_node_ops[xi.node][xi.interface]->enumerate(xiface);
+
+	return result;
+}
+
+/**
+ * This function probes an interface to determine the actual number of
+ * hardware ports connected to it. It does some setup of the ports but
+ * doesn't enable them. The main goal here is to set the global
+ * interface_port_count[interface] correctly. Final hardware setup of
+ * the ports will be performed later.
+ *
+ * @param xiface Interface to probe
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_interface_probe(int xiface)
+{
+	/*
+	 * At this stage in the game we don't want packets to be
+	 * moving yet.  The following probe calls should perform
+	 * hardware setup needed to determine port counts. Receive
+	 * must still be disabled.
+	 */
+	int nports;
+	int has_fcs;
+	enum cvmx_pko_padding padding = CVMX_PKO_PADDING_NONE;
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	nports = -1;
+	has_fcs = 0;
+
+	cvmx_helper_interface_get_mode(xiface);
+	if (iface_node_ops[xi.node][xi.interface]->probe)
+		nports = iface_node_ops[xi.node][xi.interface]->probe(xiface);
+
+	switch (iface_node_ops[xi.node][xi.interface]->mode) {
+		/* These types don't support ports to IPD/PKO */
+	case CVMX_HELPER_INTERFACE_MODE_DISABLED:
+	case CVMX_HELPER_INTERFACE_MODE_PCIE:
+		nports = 0;
+		break;
+		/* XAUI is a single high speed port */
+	case CVMX_HELPER_INTERFACE_MODE_XAUI:
+	case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+	case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+	case CVMX_HELPER_INTERFACE_MODE_XFI:
+	case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+	case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+	case CVMX_HELPER_INTERFACE_MODE_MIXED:
+		has_fcs = 1;
+		padding = CVMX_PKO_PADDING_60;
+		break;
+		/*
+		 * RGMII/GMII/MII are all treated about the same. Most
+		 * functions refer to these ports as RGMII.
+		 */
+	case CVMX_HELPER_INTERFACE_MODE_RGMII:
+	case CVMX_HELPER_INTERFACE_MODE_GMII:
+		padding = CVMX_PKO_PADDING_60;
+		break;
+		/*
+		 * SPI4 can have 1-16 ports depending on the device at
+		 * the other end.
+		 */
+	case CVMX_HELPER_INTERFACE_MODE_SPI:
+		padding = CVMX_PKO_PADDING_60;
+		break;
+		/*
+		 * SGMII can have 1-4 ports depending on how many are
+		 * hooked up.
+		 */
+	case CVMX_HELPER_INTERFACE_MODE_SGMII:
+	case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+		padding = CVMX_PKO_PADDING_60;
+		fallthrough;
+	case CVMX_HELPER_INTERFACE_MODE_PICMG:
+		has_fcs = 1;
+		break;
+		/* PCI target Network Packet Interface */
+	case CVMX_HELPER_INTERFACE_MODE_NPI:
+		break;
+		/*
+		 * Special loopback only ports. These are not the same
+		 * as other ports in loopback mode.
+		 */
+	case CVMX_HELPER_INTERFACE_MODE_LOOP:
+		break;
+		/* SRIO has 2^N ports, where N is the number of interfaces */
+	case CVMX_HELPER_INTERFACE_MODE_SRIO:
+		break;
+	case CVMX_HELPER_INTERFACE_MODE_ILK:
+		padding = CVMX_PKO_PADDING_60;
+		has_fcs = 1;
+		break;
+	case CVMX_HELPER_INTERFACE_MODE_AGL:
+		has_fcs = 1;
+		break;
+	}
+
+	if (nports == -1)
+		return -1;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND))
+		has_fcs = 0;
+
+	nports = __cvmx_helper_board_interface_probe(xiface, nports);
+	__cvmx_helper_init_interface(xiface, nports, has_fcs, padding);
+	/* Make sure all global variables propagate to other cores */
+	CVMX_SYNCWS;
+
+	return 0;
+}
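+
+/*
+ * Usage sketch (illustrative, not part of the original code): probe an
+ * interface, then query the port count it discovered:
+ *
+ *	if (cvmx_helper_interface_probe(xiface) == 0)
+ *		nports = cvmx_helper_ports_on_interface(xiface);
+ */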
+
+/**
+ * @INTERNAL
+ * Setup backpressure.
+ *
+ * @return Zero on success, negative on failure
+ */
+static int __cvmx_helper_global_setup_backpressure(int node)
+{
+	cvmx_qos_proto_t qos_proto;
+	cvmx_qos_pkt_mode_t qos_mode;
+	int port, xipdport;
+	unsigned int bpmask;
+	int interface, xiface, ports;
+	int num_interfaces = cvmx_helper_get_number_of_interfaces();
+
+	if (cvmx_rgmii_backpressure_dis) {
+		qos_proto = CVMX_QOS_PROTO_NONE;
+		qos_mode = CVMX_QOS_PKT_MODE_DROP;
+	} else {
+		qos_proto = CVMX_QOS_PROTO_PAUSE;
+		qos_mode = CVMX_QOS_PKT_MODE_HWONLY;
+	}
+
+	for (interface = 0; interface < num_interfaces; interface++) {
+		xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+		ports = cvmx_helper_ports_on_interface(xiface);
+
+		switch (cvmx_helper_interface_get_mode(xiface)) {
+		case CVMX_HELPER_INTERFACE_MODE_DISABLED:
+		case CVMX_HELPER_INTERFACE_MODE_PCIE:
+		case CVMX_HELPER_INTERFACE_MODE_SRIO:
+		case CVMX_HELPER_INTERFACE_MODE_ILK:
+		case CVMX_HELPER_INTERFACE_MODE_NPI:
+		case CVMX_HELPER_INTERFACE_MODE_PICMG:
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_LOOP:
+		case CVMX_HELPER_INTERFACE_MODE_XAUI:
+		case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+		case CVMX_HELPER_INTERFACE_MODE_XFI:
+		case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+		case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+			bpmask = (cvmx_rgmii_backpressure_dis) ? 0xF : 0;
+			if (octeon_has_feature(OCTEON_FEATURE_BGX)) {
+				for (port = 0; port < ports; port++) {
+					xipdport = cvmx_helper_get_ipd_port(xiface, port);
+					cvmx_bgx_set_flowctl_mode(xipdport, qos_proto, qos_mode);
+				}
+				cvmx_bgx_set_backpressure_override(xiface, bpmask);
+			}
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_RGMII:
+		case CVMX_HELPER_INTERFACE_MODE_GMII:
+		case CVMX_HELPER_INTERFACE_MODE_SPI:
+		case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+		case CVMX_HELPER_INTERFACE_MODE_MIXED:
+			bpmask = (cvmx_rgmii_backpressure_dis) ? 0xF : 0;
+			if (octeon_has_feature(OCTEON_FEATURE_BGX)) {
+				for (port = 0; port < ports; port++) {
+					xipdport = cvmx_helper_get_ipd_port(xiface, port);
+					cvmx_bgx_set_flowctl_mode(xipdport, qos_proto, qos_mode);
+				}
+				cvmx_bgx_set_backpressure_override(xiface, bpmask);
+			} else {
+				cvmx_gmx_set_backpressure_override(interface, bpmask);
+			}
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_AGL:
+			bpmask = (cvmx_rgmii_backpressure_dis) ? 0x1 : 0;
+			cvmx_agl_set_backpressure_override(interface, bpmask);
+			break;
+		}
+	}
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Verify that the per-port IPD backpressure is aligned properly.
+ * @return Zero if aligned properly, non-zero if misaligned
+ */
+int __cvmx_helper_backpressure_is_misaligned(void)
+{
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Enable packet input/output from the hardware. This function is
+ * called after all internal setup is complete and IPD is enabled.
+ * After this function completes, packets will be accepted from the
+ * hardware ports. PKO should still be disabled to make sure packets
+ * aren't sent out of partially set-up hardware.
+ *
+ * @param xiface Interface to enable
+ *
+ * @return Zero on success, negative on failure
+ */
+int __cvmx_helper_packet_hardware_enable(int xiface)
+{
+	int result = 0;
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (iface_node_ops[xi.node][xi.interface]->enable)
+		result = iface_node_ops[xi.node][xi.interface]->enable(xiface);
+	result |= __cvmx_helper_board_hardware_enable(xiface);
+	return result;
+}
+
+int cvmx_helper_ipd_and_packet_input_enable(void)
+{
+	return cvmx_helper_ipd_and_packet_input_enable_node(cvmx_get_node_num());
+}
+
+/**
+ * Called after all internal packet IO paths are setup. This
+ * function enables IPD/PIP and begins packet input and output.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_ipd_and_packet_input_enable_node(int node)
+{
+	int num_interfaces;
+	int interface;
+	int num_ports;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		cvmx_helper_pki_enable(node);
+	} else {
+		/* Enable IPD */
+		cvmx_ipd_enable();
+	}
+
+	/*
+	 * Time to enable packet input and output on the hardware ports.
+	 * Note that at this point IPD/PIP must be fully functional and
+	 * PKO must be disabled.
+	 */
+	num_interfaces = cvmx_helper_get_number_of_interfaces();
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+
+		num_ports = cvmx_helper_ports_on_interface(xiface);
+		if (num_ports > 0)
+			__cvmx_helper_packet_hardware_enable(xiface);
+	}
+
+	/* Finally enable PKO now that the entire path is up and running */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		cvmx_pko_enable();
+	/* On CN78XX-class chips PKO3 (cvmx_pko_enable_78xx) is already enabled */
+
+	return 0;
+}
+
+/**
+ * Initialize the PIP, IPD, and PKO hardware to support
+ * simple priority based queues for the ethernet ports. Each
+ * port is configured with a number of priority queues based
+ * on CVMX_PKO_QUEUES_PER_PORT_* where each queue is lower
+ * priority than the previous.
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_node(unsigned int node)
+{
+	int result = 0;
+	int interface;
+	int xiface;
+	union cvmx_l2c_cfg l2c_cfg;
+	union cvmx_smix_en smix_en;
+	const int num_interfaces = cvmx_helper_get_number_of_interfaces();
+
+	/*
+	 * Tell L2 to give the IOB statically higher priority compared
+	 * to the cores. This avoids conditions where IO blocks might
+	 * be starved under very high L2 loads.
+	 */
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_OCTEON3()) {
+		union cvmx_l2c_ctl l2c_ctl;
+
+		l2c_ctl.u64 = csr_rd_node(node, CVMX_L2C_CTL);
+		l2c_ctl.s.rsp_arb_mode = 1;
+		l2c_ctl.s.xmc_arb_mode = 0;
+		csr_wr_node(node, CVMX_L2C_CTL, l2c_ctl.u64);
+	} else {
+		l2c_cfg.u64 = csr_rd(CVMX_L2C_CFG);
+		l2c_cfg.s.lrf_arb_mode = 0;
+		l2c_cfg.s.rfb_arb_mode = 0;
+		csr_wr(CVMX_L2C_CFG, l2c_cfg.u64);
+	}
+
+	int smi_inf;
+	int i;
+
+	/* Newer chips have more than one SMI/MDIO interface */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX) || OCTEON_IS_MODEL(OCTEON_CN78XX))
+		smi_inf = 4;
+	else
+		smi_inf = 2;
+
+	for (i = 0; i < smi_inf; i++) {
+		/* Make sure SMI/MDIO is enabled so we can query PHYs */
+		smix_en.u64 = csr_rd_node(node, CVMX_SMIX_EN(i));
+		if (!smix_en.s.en) {
+			smix_en.s.en = 1;
+			csr_wr_node(node, CVMX_SMIX_EN(i), smix_en.u64);
+		}
+	}
+
+	/* TODO: check whether this needs to be modified for multi-node */
+	__cvmx_helper_init_port_valid();
+
+	for (interface = 0; interface < num_interfaces; interface++) {
+		xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+		result |= cvmx_helper_interface_probe(xiface);
+	}
+
+	/* PKO3 init precedes that of interfaces */
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		__cvmx_helper_init_port_config_data(node);
+		result = cvmx_helper_pko3_init_global(node);
+	} else {
+		result = cvmx_helper_pko_init();
+	}
+
+	/* Errata SSO-29000, Disabling power saving SSO conditional clocking */
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ws_cfg_t cfg;
+
+		cfg.u64 = csr_rd_node(node, CVMX_SSO_WS_CFG);
+		cfg.s.sso_cclk_dis = 1;
+		csr_wr_node(node, CVMX_SSO_WS_CFG, cfg.u64);
+	}
+
+	if (result < 0)
+		return result;
+
+	for (interface = 0; interface < num_interfaces; interface++) {
+		xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+		/* Skip invalid/disabled interfaces */
+		if (cvmx_helper_ports_on_interface(xiface) <= 0)
+			continue;
+		printf("Node %d Interface %d has %d ports (%s)\n", node, interface,
+		       cvmx_helper_ports_on_interface(xiface),
+		       cvmx_helper_interface_mode_to_string(
+			       cvmx_helper_interface_get_mode(xiface)));
+
+		result |= __cvmx_helper_ipd_setup_interface(xiface);
+		if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+			result |= cvmx_helper_pko3_init_interface(xiface);
+		else
+			result |= __cvmx_helper_interface_setup_pko(interface);
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI))
+		result |= __cvmx_helper_pki_global_setup(node);
+	else
+		result |= __cvmx_helper_ipd_global_setup();
+
+	/* Enable any flow control and backpressure */
+	result |= __cvmx_helper_global_setup_backpressure(node);
+
+	/* export app config if set */
+	if (cvmx_export_app_config)
+		result |= (*cvmx_export_app_config)();
+
+	if (cvmx_ipd_cfg.ipd_enable && cvmx_pki_dflt_init[node])
+		result |= cvmx_helper_ipd_and_packet_input_enable_node(node);
+	return result;
+}
+
+/**
+ * Initialize the PIP, IPD, and PKO hardware to support
+ * simple priority based queues for the ethernet ports. Each
+ * port is configured with a number of priority queues based
+ * on CVMX_PKO_QUEUES_PER_PORT_* where each queue is lower
+ * priority than the previous.
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_global(void)
+{
+	unsigned int node = cvmx_get_node_num();
+
+	return cvmx_helper_initialize_packet_io_node(node);
+}
+
+/**
+ * Does core local initialization for packet io
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_initialize_packet_io_local(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		__cvmx_pko3_dq_table_setup();
+
+	return 0;
+}
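+
+/*
+ * Typical bring-up order (sketch, not part of the original code):
+ * global init once, then per-core local init on every core that does
+ * packet IO:
+ *
+ *	cvmx_helper_initialize_packet_io_global();
+ *	// on each participating core:
+ *	cvmx_helper_initialize_packet_io_local();
+ */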
+
+struct cvmx_buffer_list {
+	struct cvmx_buffer_list *next;
+};
+
+/**
+ * Disables the sending of flow control (pause) frames on the specified
+ * GMX port(s).
+ *
+ * @param interface Which interface (0 or 1)
+ * @param port_mask Mask (4 bits) of which ports on the interface to disable
+ *                  backpressure on.
+ *                  1 => disable backpressure
+ *                  0 => enable backpressure
+ *
+ * @return 0 on success
+ *         -1 on error
+ */
+int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask)
+{
+	union cvmx_gmxx_tx_ovr_bp gmxx_tx_ovr_bp;
+	/* Check for valid arguments */
+	if (port_mask & ~0xf || interface & ~0x1)
+		return -1;
+	if (interface >= CVMX_HELPER_MAX_GMX)
+		return -1;
+
+	gmxx_tx_ovr_bp.u64 = 0;
+	gmxx_tx_ovr_bp.s.en = port_mask;       /* Per port Enable back pressure override */
+	gmxx_tx_ovr_bp.s.ign_full = port_mask; /* Ignore the RX FIFO full when computing BP */
+	csr_wr(CVMX_GMXX_TX_OVR_BP(interface), gmxx_tx_ovr_bp.u64);
+	return 0;
+}
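+
+/*
+ * Usage sketch (illustrative, not part of the original code): disable
+ * backpressure on ports 0 and 1 of interface 0 while keeping it
+ * enabled on ports 2 and 3:
+ *
+ *	cvmx_gmx_set_backpressure_override(0, 0x3);
+ */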
+
+/**
+ * Disables the sending of flow control (pause) frames on the specified
+ * AGL (RGMII) port(s).
+ *
+ * @param interface Which interface (0 or 1)
+ * @param port_mask Mask (4 bits) of which ports on the interface to disable
+ *                  backpressure on.
+ *                  1 => disable backpressure
+ *                  0 => enable backpressure
+ *
+ * @return 0 on success
+ *         -1 on error
+ */
+int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask)
+{
+	union cvmx_agl_gmx_tx_ovr_bp agl_gmx_tx_ovr_bp;
+	int port = cvmx_helper_agl_get_port(interface);
+
+	/* Check for a valid port */
+	if (port == -1)
+		return -1;
+
+	agl_gmx_tx_ovr_bp.u64 = 0;
+	/* Per port Enable back pressure override */
+	agl_gmx_tx_ovr_bp.s.en = port_mask;
+	/* Ignore the RX FIFO full when computing BP */
+	agl_gmx_tx_ovr_bp.s.ign_full = port_mask;
+	csr_wr(CVMX_GMXX_TX_OVR_BP(port), agl_gmx_tx_ovr_bp.u64);
+	return 0;
+}
+
+/**
+ * Helper function for global packet IO shutdown
+ */
+int cvmx_helper_shutdown_packet_io_global_cn78xx(int node)
+{
+	int num_interfaces = cvmx_helper_get_number_of_interfaces();
+	cvmx_wqe_t *work;
+	int interface;
+	int result = 0;
+
+	/* Shut down all interfaces and disable TX and RX on all ports */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+		int index;
+		int num_ports = cvmx_helper_ports_on_interface(xiface);
+
+		if (num_ports > 4)
+			num_ports = 4;
+
+		cvmx_bgx_set_backpressure_override(xiface, 0);
+		for (index = 0; index < num_ports; index++) {
+			cvmx_helper_link_info_t link_info;
+
+			if (!cvmx_helper_is_port_valid(xiface, index))
+				continue;
+
+			cvmx_helper_bgx_shutdown_port(xiface, index);
+
+			/* Turn off link LEDs */
+			link_info.u64 = 0;
+			cvmx_helper_update_link_led(xiface, index, link_info);
+		}
+	}
+
+	/* Stop input first */
+	cvmx_helper_pki_shutdown(node);
+
+	/* Retrieve all packets from the SSO and free them */
+	result = 0;
+	while ((work = cvmx_pow_work_request_sync(CVMX_POW_WAIT))) {
+		cvmx_helper_free_pki_pkt_data(work);
+		cvmx_wqe_pki_free(work);
+		result++;
+	}
+
+	if (result > 0)
+		debug("%s: Purged %d packets from SSO\n", __func__, result);
+
+	/*
+	 * No need to wait for PKO queues to drain,
+	 * dq_close() drains the queues to NULL.
+	 */
+
+	/* Shutdown PKO interfaces */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+
+		cvmx_helper_pko3_shut_interface(xiface);
+	}
+
+	/* Disable MAC address filtering */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+
+		switch (cvmx_helper_interface_get_mode(xiface)) {
+		case CVMX_HELPER_INTERFACE_MODE_XAUI:
+		case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		case CVMX_HELPER_INTERFACE_MODE_XLAUI:
+		case CVMX_HELPER_INTERFACE_MODE_XFI:
+		case CVMX_HELPER_INTERFACE_MODE_10G_KR:
+		case CVMX_HELPER_INTERFACE_MODE_40G_KR4:
+		case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		case CVMX_HELPER_INTERFACE_MODE_MIXED: {
+			int index;
+			int num_ports = cvmx_helper_ports_on_interface(xiface);
+
+			for (index = 0; index < num_ports; index++) {
+				if (!cvmx_helper_is_port_valid(xiface, index))
+					continue;
+
+				/* Reset MAC filtering */
+				cvmx_helper_bgx_rx_adr_ctl(node, interface, index, 0, 0, 0);
+			}
+			break;
+		}
+		default:
+			break;
+		}
+	}
+
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int index;
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+		int num_ports = cvmx_helper_ports_on_interface(xiface);
+
+		for (index = 0; index < num_ports; index++) {
+			/* Doing this twice should clear it since no packets
+			 * can be received.
+			 */
+			cvmx_update_rx_activity_led(xiface, index, false);
+			cvmx_update_rx_activity_led(xiface, index, false);
+		}
+	}
+
+	/* Shutdown the PKO unit */
+	result = cvmx_helper_pko3_shutdown(node);
+
+	/* Release interface structures */
+	__cvmx_helper_shutdown_interfaces();
+
+	return result;
+}
+
+/**
+ * Undo the initialization performed in
+ * cvmx_helper_initialize_packet_io_global(). After calling this routine and the
+ * local version on each core, packet IO for Octeon will be disabled and placed
+ * in the initial reset state. It will then be safe to call the initialization
+ * functions again later on. Note that this routine does not empty the FPA
+ * pools. It frees all buffers used by the packet IO hardware back to the FPA,
+ * so a function emptying the FPA after shutdown should find all packet buffers
+ * in the FPA.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_helper_shutdown_packet_io_global(void)
+{
+	const int timeout = 5; /* Wait up to 5 seconds for timeouts */
+	int result = 0;
+	int num_interfaces = cvmx_helper_get_number_of_interfaces();
+	int interface;
+	int num_ports;
+	int index;
+	struct cvmx_buffer_list *pool0_buffers;
+	struct cvmx_buffer_list *pool0_buffers_tail;
+	cvmx_wqe_t *work;
+	union cvmx_ipd_ctl_status ipd_ctl_status;
+	int wqe_pool = (int)cvmx_fpa_get_wqe_pool();
+	int node = cvmx_get_node_num();
+	cvmx_pcsx_mrx_control_reg_t control_reg;
+
+	if (octeon_has_feature(OCTEON_FEATURE_BGX))
+		return cvmx_helper_shutdown_packet_io_global_cn78xx(node);
+
+	/* Step 1: Disable all backpressure */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		cvmx_helper_interface_mode_t mode =
+			cvmx_helper_interface_get_mode(interface);
+
+		if (mode == CVMX_HELPER_INTERFACE_MODE_AGL)
+			cvmx_agl_set_backpressure_override(interface, 0x1);
+		else if (mode != CVMX_HELPER_INTERFACE_MODE_DISABLED)
+			cvmx_gmx_set_backpressure_override(interface, 0xf);
+	}
+
+	/* Step 2: Wait for the PKO queues to drain */
+	result = __cvmx_helper_pko_drain();
+	if (result < 0) {
+		debug("WARNING: %s: Failed to drain some PKO queues\n",
+		      __func__);
+	}
+
+	/* Step 3: Disable TX and RX on all ports */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node,
+								  interface);
+
+		switch (cvmx_helper_interface_get_mode(interface)) {
+		case CVMX_HELPER_INTERFACE_MODE_DISABLED:
+		case CVMX_HELPER_INTERFACE_MODE_PCIE:
+			/* Not a packet interface */
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_NPI:
+		case CVMX_HELPER_INTERFACE_MODE_SRIO:
+		case CVMX_HELPER_INTERFACE_MODE_ILK:
+			/*
+			 * We don't handle the NPI/NPEI/SRIO packet
+			 * engines. The caller must know these are
+			 * idle.
+			 */
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_LOOP:
+			/*
+			 * Nothing needed. Once PKO is idle, the
+			 * loopback devices must be idle.
+			 */
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_SPI:
+			/*
+			 * SPI cannot be disabled from Octeon. It is
+			 * the responsibility of the caller to make
+			 * sure SPI is idle before doing shutdown.
+			 *
+			 * Fall through and do the same processing as
+			 * RGMII/GMII.
+			 */
+			fallthrough;
+		case CVMX_HELPER_INTERFACE_MODE_GMII:
+		case CVMX_HELPER_INTERFACE_MODE_RGMII:
+			/* Disable outermost RX at the ASX block */
+			csr_wr(CVMX_ASXX_RX_PRT_EN(interface), 0);
+			num_ports = cvmx_helper_ports_on_interface(xiface);
+			if (num_ports > 4)
+				num_ports = 4;
+			for (index = 0; index < num_ports; index++) {
+				union cvmx_gmxx_prtx_cfg gmx_cfg;
+
+				if (!cvmx_helper_is_port_valid(interface, index))
+					continue;
+				gmx_cfg.u64 = csr_rd(CVMX_GMXX_PRTX_CFG(index, interface));
+				gmx_cfg.s.en = 0;
+				csr_wr(CVMX_GMXX_PRTX_CFG(index, interface), gmx_cfg.u64);
+				/* Poll the GMX state machine waiting for it to become idle */
+				csr_wr(CVMX_NPI_DBG_SELECT,
+				       interface * 0x800 + index * 0x100 + 0x880);
+				if (CVMX_WAIT_FOR_FIELD64(CVMX_DBG_DATA, union cvmx_dbg_data,
+							  data & 7, ==, 0, timeout * 1000000)) {
+					debug("GMX RX path timeout waiting for idle\n");
+					result = -1;
+				}
+				if (CVMX_WAIT_FOR_FIELD64(CVMX_DBG_DATA, union cvmx_dbg_data,
+							  data & 0xf, ==, 0, timeout * 1000000)) {
+					debug("GMX TX path timeout waiting for idle\n");
+					result = -1;
+				}
+			}
+			/* Disable outermost TX at the ASX block */
+			csr_wr(CVMX_ASXX_TX_PRT_EN(interface), 0);
+			/* Disable interrupts for interface */
+			csr_wr(CVMX_ASXX_INT_EN(interface), 0);
+			csr_wr(CVMX_GMXX_TX_INT_EN(interface), 0);
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_XAUI:
+		case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+		case CVMX_HELPER_INTERFACE_MODE_PICMG:
+			num_ports = cvmx_helper_ports_on_interface(xiface);
+			if (num_ports > 4)
+				num_ports = 4;
+			for (index = 0; index < num_ports; index++) {
+				union cvmx_gmxx_prtx_cfg gmx_cfg;
+
+				if (!cvmx_helper_is_port_valid(interface, index))
+					continue;
+				gmx_cfg.u64 = csr_rd(CVMX_GMXX_PRTX_CFG(index, interface));
+				gmx_cfg.s.en = 0;
+				csr_wr(CVMX_GMXX_PRTX_CFG(index, interface), gmx_cfg.u64);
+				if (CVMX_WAIT_FOR_FIELD64(CVMX_GMXX_PRTX_CFG(index, interface),
+							  union cvmx_gmxx_prtx_cfg, rx_idle, ==, 1,
+							  timeout * 1000000)) {
+					debug("GMX RX path timeout waiting for idle\n");
+					result = -1;
+				}
+				if (CVMX_WAIT_FOR_FIELD64(CVMX_GMXX_PRTX_CFG(index, interface),
+							  union cvmx_gmxx_prtx_cfg, tx_idle, ==, 1,
+							  timeout * 1000000)) {
+					debug("GMX TX path timeout waiting for idle\n");
+					result = -1;
+				}
+				/* For SGMII some PHYs require that the PCS
+				 * interface be powered down and reset (e.g.
+				 * Atheros/Qualcomm PHYs).
+				 */
+				if (cvmx_helper_interface_get_mode(interface) ==
+				    CVMX_HELPER_INTERFACE_MODE_SGMII) {
+					u64 reg;
+
+					reg = CVMX_PCSX_MRX_CONTROL_REG(index, interface);
+					/* Power down the interface */
+					control_reg.u64 = csr_rd(reg);
+					control_reg.s.pwr_dn = 1;
+					csr_wr(reg, control_reg.u64);
+					csr_rd(reg);
+				}
+			}
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_AGL: {
+			int port = cvmx_helper_agl_get_port(interface);
+			union cvmx_agl_gmx_prtx_cfg agl_gmx_cfg;
+
+			agl_gmx_cfg.u64 = csr_rd(CVMX_AGL_GMX_PRTX_CFG(port));
+			agl_gmx_cfg.s.en = 0;
+			csr_wr(CVMX_AGL_GMX_PRTX_CFG(port), agl_gmx_cfg.u64);
+			if (CVMX_WAIT_FOR_FIELD64(CVMX_AGL_GMX_PRTX_CFG(port),
+						  union cvmx_agl_gmx_prtx_cfg, rx_idle, ==, 1,
+						  timeout * 1000000)) {
+				debug("AGL RX path timeout waiting for idle\n");
+				result = -1;
+			}
+			if (CVMX_WAIT_FOR_FIELD64(CVMX_AGL_GMX_PRTX_CFG(port),
+						  union cvmx_agl_gmx_prtx_cfg, tx_idle, ==, 1,
+						  timeout * 1000000)) {
+				debug("AGL TX path timeout waiting for idle\n");
+				result = -1;
+			}
+		} break;
+		default:
+			break;
+		}
+	}
+
+	/* Step 4: Retrieve all packets from the POW and free them */
+	while ((work = cvmx_pow_work_request_sync(CVMX_POW_WAIT))) {
+		cvmx_helper_free_packet_data(work);
+		cvmx_fpa1_free(work, wqe_pool, 0);
+	}
+
+	/* Step 5: Disable IPD */
+	cvmx_ipd_disable();
+
+	/*
+	 * Step 6: Drain all prefetched buffers from IPD/PIP. Note that IPD/PIP
+	 * have not been reset yet
+	 */
+	__cvmx_ipd_free_ptr();
+
+	/* Step 7: Free the PKO command buffers and put PKO in reset */
+	cvmx_pko_shutdown();
+
+	/* Step 8: Disable MAC address filtering */
+	for (interface = 0; interface < num_interfaces; interface++) {
+		int xiface = cvmx_helper_node_interface_to_xiface(node, interface);
+
+		switch (cvmx_helper_interface_get_mode(interface)) {
+		case CVMX_HELPER_INTERFACE_MODE_DISABLED:
+		case CVMX_HELPER_INTERFACE_MODE_PCIE:
+		case CVMX_HELPER_INTERFACE_MODE_SRIO:
+		case CVMX_HELPER_INTERFACE_MODE_ILK:
+		case CVMX_HELPER_INTERFACE_MODE_NPI:
+		case CVMX_HELPER_INTERFACE_MODE_LOOP:
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_XAUI:
+		case CVMX_HELPER_INTERFACE_MODE_RXAUI:
+		case CVMX_HELPER_INTERFACE_MODE_GMII:
+		case CVMX_HELPER_INTERFACE_MODE_RGMII:
+		case CVMX_HELPER_INTERFACE_MODE_SPI:
+		case CVMX_HELPER_INTERFACE_MODE_SGMII:
+		case CVMX_HELPER_INTERFACE_MODE_QSGMII:
+		case CVMX_HELPER_INTERFACE_MODE_PICMG:
+			num_ports = cvmx_helper_ports_on_interface(xiface);
+			if (num_ports > 4)
+				num_ports = 4;
+			for (index = 0; index < num_ports; index++) {
+				if (!cvmx_helper_is_port_valid(interface, index))
+					continue;
+				csr_wr(CVMX_GMXX_RXX_ADR_CTL(index, interface), 1);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM_EN(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM0(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM1(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM2(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM3(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM4(index, interface), 0);
+				csr_wr(CVMX_GMXX_RXX_ADR_CAM5(index, interface), 0);
+			}
+			break;
+		case CVMX_HELPER_INTERFACE_MODE_AGL: {
+			int port = cvmx_helper_agl_get_port(interface);
+
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CTL(port), 1);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM_EN(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM0(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM1(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM2(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM3(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM4(port), 0);
+			csr_wr(CVMX_AGL_GMX_RXX_ADR_CAM5(port), 0);
+		} break;
+		default:
+			break;
+		}
+	}
+
+	/*
+	 * Step 9: Drain all FPA buffers out of pool 0 before we reset
+	 * IPD/PIP.  This is needed to keep IPD_QUE0_FREE_PAGE_CNT in
+	 * sync. We temporarily keep the buffers in the pool0_buffers
+	 * list.
+	 */
+	pool0_buffers = NULL;
+	pool0_buffers_tail = NULL;
+	while (1) {
+		struct cvmx_buffer_list *buffer = cvmx_fpa1_alloc(0);
+
+		if (buffer) {
+			buffer->next = NULL;
+
+			if (!pool0_buffers)
+				pool0_buffers = buffer;
+			else
+				pool0_buffers_tail->next = buffer;
+
+			pool0_buffers_tail = buffer;
+		} else {
+			break;
+		}
+	}
+
+	/* Step 10: Reset IPD and PIP */
+	ipd_ctl_status.u64 = csr_rd(CVMX_IPD_CTL_STATUS);
+	ipd_ctl_status.s.reset = 1;
+	csr_wr(CVMX_IPD_CTL_STATUS, ipd_ctl_status.u64);
+
+	/* Make sure IPD has finished reset. */
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		if (CVMX_WAIT_FOR_FIELD64(CVMX_IPD_CTL_STATUS, union cvmx_ipd_ctl_status, rst_done,
+					  ==, 0, 1000)) {
+			debug("IPD reset timeout waiting for idle\n");
+			result = -1;
+		}
+	}
+
+	/* Step 11: Restore the FPA buffers into pool 0 */
+	while (pool0_buffers) {
+		struct cvmx_buffer_list *n = pool0_buffers->next;
+
+		cvmx_fpa1_free(pool0_buffers, 0, 0);
+		pool0_buffers = n;
+	}
+
+	/* Step 12: Release interface structures */
+	__cvmx_helper_shutdown_interfaces();
+
+	return result;
+}
+
+/**
+ * Does core local shutdown of packet io
+ *
+ * @return Zero on success, non-zero on failure
+ */
+int cvmx_helper_shutdown_packet_io_local(void)
+{
+	/*
+	 * Currently there is nothing to do per core. This may change
+	 * in the future.
+	 */
+	return 0;
+}
+
+/**
+ * Auto configure an IPD/PKO port link state and speed. This
+ * function basically does the equivalent of:
+ * cvmx_helper_link_set(ipd_port, cvmx_helper_link_get(ipd_port));
+ *
+ * @param xipd_port IPD/PKO port to auto configure
+ *
+ * @return Link state after configure
+ */
+cvmx_helper_link_info_t cvmx_helper_link_autoconf(int xipd_port)
+{
+	cvmx_helper_link_info_t link_info;
+	int xiface = cvmx_helper_get_interface_num(xipd_port);
+	int index = cvmx_helper_get_interface_index_num(xipd_port);
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int interface = xi.interface;
+
+	if (interface == -1 || index == -1 || index >= cvmx_helper_ports_on_interface(xiface)) {
+		link_info.u64 = 0;
+		return link_info;
+	}
+
+	link_info = cvmx_helper_link_get(xipd_port);
+	if (link_info.u64 == (__cvmx_helper_get_link_info(xiface, index)).u64)
+		return link_info;
+
+	if (!link_info.s.link_up)
+		cvmx_error_disable_group(CVMX_ERROR_GROUP_ETHERNET, xipd_port);
+
+	/* If we fail to set the link speed, port_link_info will not change */
+	cvmx_helper_link_set(xipd_port, link_info);
+
+	if (link_info.s.link_up)
+		cvmx_error_enable_group(CVMX_ERROR_GROUP_ETHERNET, xipd_port);
+
+	return link_info;
+}
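+
+/*
+ * Illustrative use (not part of this patch): a board-level poll loop
+ * would typically call the autoconf helper per port and check the
+ * returned state, e.g.:
+ *
+ *	cvmx_helper_link_info_t li = cvmx_helper_link_autoconf(ipd_port);
+ *	if (li.s.link_up)
+ *		printf("Port %d up at %d Mbps\n", ipd_port, li.s.speed);
+ */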
+
+/**
+ * Return the link state of an IPD/PKO port as returned by
+ * auto negotiation. The result of this function may not match
+ * Octeon's link config if auto negotiation has changed since
+ * the last call to cvmx_helper_link_set().
+ *
+ * @param xipd_port IPD/PKO port to query
+ *
+ * @return Link state
+ */
+cvmx_helper_link_info_t cvmx_helper_link_get(int xipd_port)
+{
+	cvmx_helper_link_info_t result;
+	int xiface = cvmx_helper_get_interface_num(xipd_port);
+	int index = cvmx_helper_get_interface_index_num(xipd_port);
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	struct cvmx_fdt_sfp_info *sfp_info;
+
+	/*
+	 * The default result will be a down link unless the code
+	 * below changes it.
+	 */
+	result.u64 = 0;
+
+	if (__cvmx_helper_xiface_is_null(xiface) || index == -1 ||
+	    index >= cvmx_helper_ports_on_interface(xiface)) {
+		return result;
+	}
+
+	if (iface_node_ops[xi.node][xi.interface]->link_get)
+		result = iface_node_ops[xi.node][xi.interface]->link_get(xipd_port);
+
+	if (xipd_port >= 0) {
+		cvmx_helper_update_link_led(xiface, index, result);
+
+		sfp_info = cvmx_helper_cfg_get_sfp_info(xiface, index);
+
+		while (sfp_info) {
+			if (!result.s.link_up || sfp_info->last_mod_abs)
+				cvmx_sfp_check_mod_abs(sfp_info, sfp_info->mod_abs_data);
+			sfp_info = sfp_info->next_iface_sfp;
+		}
+	}
+
+	return result;
+}
+
+/**
+ * Configure an IPD/PKO port for the specified link state. This
+ * function does not influence auto negotiation at the PHY level.
+ * The passed link state must always match the link state returned
+ * by cvmx_helper_link_get(). It is normally best to use
+ * cvmx_helper_link_autoconf() instead.
+ *
+ * @param xipd_port  IPD/PKO port to configure
+ * @param link_info The new link state
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_helper_link_set(int xipd_port, cvmx_helper_link_info_t link_info)
+{
+	int result = -1;
+	int xiface = cvmx_helper_get_interface_num(xipd_port);
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int index = cvmx_helper_get_interface_index_num(xipd_port);
+
+	if (__cvmx_helper_xiface_is_null(xiface) || index == -1 ||
+	    index >= cvmx_helper_ports_on_interface(xiface))
+		return -1;
+
+	if (iface_node_ops[xi.node][xi.interface]->link_set)
+		result = iface_node_ops[xi.node][xi.interface]->link_set(xipd_port, link_info);
+
+	/*
+	 * Set the port_link_info here so that the link status is
+	 * updated no matter how cvmx_helper_link_set is called. We
+	 * don't change the value if link_set failed.
+	 */
+	if (result == 0)
+		__cvmx_helper_set_link_info(xiface, index, link_info);
+	return result;
+}
+
+/**
+ * Configure a port for internal and/or external loopback. Internal loopback
+ * causes packets sent by the port to be received by Octeon. External loopback
+ * causes packets received from the wire to be sent out again.
+ *
+ * @param xipd_port IPD/PKO port to loopback.
+ * @param enable_internal
+ *                 Non zero if you want internal loopback
+ * @param enable_external
+ *                 Non zero if you want external loopback
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_helper_configure_loopback(int xipd_port, int enable_internal, int enable_external)
+{
+	int result = -1;
+	int xiface = cvmx_helper_get_interface_num(xipd_port);
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+	int index = cvmx_helper_get_interface_index_num(xipd_port);
+
+	if (index >= cvmx_helper_ports_on_interface(xiface))
+		return -1;
+
+	cvmx_helper_interface_get_mode(xiface);
+	if (iface_node_ops[xi.node][xi.interface]->loopback)
+		result = iface_node_ops[xi.node][xi.interface]->loopback(xipd_port, enable_internal,
+									 enable_external);
+
+	return result;
+}
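+
+/*
+ * Example (illustrative only): enable internal loopback on a port so
+ * that everything it transmits is received back by Octeon:
+ *
+ *	cvmx_helper_configure_loopback(ipd_port, 1, 0);
+ */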
+
+void cvmx_helper_setup_simulator_io_buffer_counts(int node, int num_packet_buffers, int pko_buffers)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		cvmx_helper_pki_set_dflt_pool_buffer(node, num_packet_buffers);
+		cvmx_helper_pki_set_dflt_aura_buffer(node, num_packet_buffers);
+	} else {
+		cvmx_ipd_set_packet_pool_buffer_count(num_packet_buffers);
+		cvmx_ipd_set_wqe_pool_buffer_count(num_packet_buffers);
+		cvmx_pko_set_cmd_queue_pool_buffer_count(pko_buffers);
+	}
+}
+
+void *cvmx_helper_mem_alloc(int node, uint64_t alloc_size, uint64_t align)
+{
+	s64 paddr;
+
+	paddr = cvmx_bootmem_phy_alloc_range(alloc_size, align, cvmx_addr_on_node(node, 0ull),
+					     cvmx_addr_on_node(node, 0xffffffffff));
+	if (paddr <= 0ll) {
+		printf("ERROR: %s failed size %u\n", __func__, (unsigned int)alloc_size);
+		return NULL;
+	}
+	return cvmx_phys_to_ptr(paddr);
+}
+
+void cvmx_helper_mem_free(void *buffer, uint64_t size)
+{
+	__cvmx_bootmem_phy_free(cvmx_ptr_to_phys(buffer), size, 0);
+}
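+
+/*
+ * Illustrative pairing of the two helpers above (not part of this
+ * patch): allocate a 4 KiB, 128-byte aligned scratch buffer from node 0
+ * bootmem and release it again:
+ *
+ *	void *buf = cvmx_helper_mem_alloc(0, 4096, 128);
+ *
+ *	if (buf)
+ *		cvmx_helper_mem_free(buf, 4096);
+ */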
+
+int cvmx_helper_qos_config_init(cvmx_qos_proto_t qos_proto, cvmx_qos_config_t *qos_cfg)
+{
+	int i;
+
+	memset(qos_cfg, 0, sizeof(cvmx_qos_config_t));
+	qos_cfg->pkt_mode = CVMX_QOS_PKT_MODE_HWONLY; /* Process PAUSEs in hardware only.*/
+	qos_cfg->pool_mode = CVMX_QOS_POOL_PER_PORT;  /* One Pool per BGX:LMAC.*/
+	qos_cfg->pktbuf_size = 2048;		      /* Fit WQE + MTU in one buffer.*/
+	qos_cfg->aura_size = 1024;	/* 1K buffers typically enough for any application.*/
+	qos_cfg->pko_pfc_en = 1;	/* Enable PKO layout for PFC feature. */
+	qos_cfg->vlan_num = 1;		/* For Stacked VLAN, use 2nd VLAN in the QPG algorithm.*/
+	qos_cfg->qos_proto = qos_proto; /* Use PFC flow-control protocol.*/
+	qos_cfg->qpg_base = -1;		/* QPG Table index is undefined.*/
+	qos_cfg->p_time = 0x60;		/* PAUSE packets time window.*/
+	qos_cfg->p_interval = 0x10;	/* PAUSE packets interval.*/
+	for (i = 0; i < CVMX_QOS_NUM; i++) {
+		qos_cfg->groups[i] = i;	      /* SSO Groups = 0...7 */
+		qos_cfg->group_prio[i] = i;   /* SSO Group priority = QOS. */
+		qos_cfg->drop_thresh[i] = 99; /* 99% of the Aura size.*/
+		qos_cfg->red_thresh[i] = 90;  /* 90% of the Aura size.*/
+		qos_cfg->bp_thresh[i] = 70;   /* 70% of the Aura size.*/
+	}
+	return 0;
+}
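+
+/*
+ * Typical call sequence (a sketch; assumes cvmx_qos_proto_t provides a
+ * PFC value and that xipdport is a PFC-capable BGX port):
+ *
+ *	cvmx_qos_config_t qcfg;
+ *
+ *	cvmx_helper_qos_config_init(CVMX_QOS_PROTO_PFC, &qcfg);
+ *	cvmx_helper_qos_port_setup(xipdport, &qcfg);
+ *	cvmx_helper_qos_sso_setup(xipdport, &qcfg);
+ */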
+
+int cvmx_helper_qos_port_config_update(int xipdport, cvmx_qos_config_t *qos_cfg)
+{
+	cvmx_user_static_pko_queue_config_t pkocfg;
+	cvmx_xport_t xp = cvmx_helper_ipd_port_to_xport(xipdport);
+	int xiface = cvmx_helper_get_interface_num(xipdport);
+	cvmx_xiface_t xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	/* Configure PKO port for PFC SQ layout: */
+	cvmx_helper_pko_queue_config_get(xp.node, &pkocfg);
+	pkocfg.pknd.pko_cfg_iface[xi.interface].pfc_enable = 1;
+	cvmx_helper_pko_queue_config_set(xp.node, &pkocfg);
+	return 0;
+}
+
+int cvmx_helper_qos_port_setup(int xipdport, cvmx_qos_config_t *qos_cfg)
+{
+	const int channels = CVMX_QOS_NUM;
+	int bufsize = qos_cfg->pktbuf_size;
+	int aura_size = qos_cfg->aura_size;
+	cvmx_xport_t xp = cvmx_helper_ipd_port_to_xport(xipdport);
+	int node = xp.node;
+	int ipdport = xp.port;
+	int port = cvmx_helper_get_interface_index_num(xp.port);
+	int xiface = cvmx_helper_get_interface_num(xipdport);
+	cvmx_xiface_t xi = cvmx_helper_xiface_to_node_interface(xiface);
+	cvmx_fpa3_pool_t gpool;
+	cvmx_fpa3_gaura_t gaura;
+	cvmx_bgxx_cmr_rx_ovr_bp_t ovrbp;
+	struct cvmx_pki_qpg_config qpgcfg;
+	struct cvmx_pki_style_config stcfg, stcfg_dflt;
+	struct cvmx_pki_pkind_config pkcfg;
+	int chan, bpid, group, qpg;
+	int bpen, reden, dropen, passthr, dropthr, bpthr;
+	int nbufs, pkind, style;
+	char name[32];
+
+	if (qos_cfg->pool_mode == CVMX_QOS_POOL_PER_PORT) {
+		/* Allocate and setup packet Pool: */
+		nbufs = aura_size * channels;
+		sprintf(name, "QOS.P%d", ipdport);
+		gpool = cvmx_fpa3_setup_fill_pool(node, -1 /*auto*/, name, bufsize, nbufs, NULL);
+		if (!__cvmx_fpa3_pool_valid(gpool)) {
+			printf("%s: Failed to setup FPA Pool\n", __func__);
+			return -1;
+		}
+		for (chan = 0; chan < channels; chan++)
+			qos_cfg->gpools[chan] = gpool;
+	} else {
+		printf("%s: Invalid pool_mode %d\n", __func__, qos_cfg->pool_mode);
+		return -1;
+	}
+	/* Allocate QPG entries: */
+	qos_cfg->qpg_base = cvmx_pki_qpg_entry_alloc(node, -1 /*auto*/, channels);
+	if (qos_cfg->qpg_base < 0) {
+		printf("%s: Failed to allocate QPG entry\n", __func__);
+		return -1;
+	}
+	for (chan = 0; chan < channels; chan++) {
+		/* Allocate and setup Aura, setup BP threshold: */
+		gpool = qos_cfg->gpools[chan];
+		sprintf(name, "QOS.A%d", ipdport + chan);
+		gaura = cvmx_fpa3_set_aura_for_pool(gpool, -1 /*auto*/, name, bufsize, aura_size);
+		if (!__cvmx_fpa3_aura_valid(gaura)) {
+			printf("%s: Failed to setup FPA Aura for Channel %d\n", __func__, chan);
+			return -1;
+		}
+		qos_cfg->gauras[chan] = gaura;
+		bpen = 1;
+		reden = 1;
+		dropen = 1;
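+		/*
+		 * The thresholds are configured as percentages of the Aura
+		 * size: pct * 10 * aura_size / 1000 is simply pct% of
+		 * aura_size expressed in buffers.
+		 */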
+		dropthr = (qos_cfg->drop_thresh[chan] * 10 * aura_size) / 1000;
+		passthr = (qos_cfg->red_thresh[chan] * 10 * aura_size) / 1000;
+		bpthr = (qos_cfg->bp_thresh[chan] * 10 * aura_size) / 1000;
+		cvmx_fpa3_setup_aura_qos(gaura, reden, passthr, dropthr, bpen, bpthr);
+		cvmx_pki_enable_aura_qos(node, gaura.laura, reden, dropen, bpen);
+
+		/* Allocate BPID, link Aura and Channel using BPID: */
+		bpid = cvmx_pki_bpid_alloc(node, -1 /*auto*/);
+		if (bpid < 0) {
+			printf("%s: Failed to allocate BPID for channel %d\n",
+			       __func__, chan);
+			return -1;
+		}
+		qos_cfg->bpids[chan] = bpid;
+		cvmx_pki_write_aura_bpid(node, gaura.laura, bpid);
+		cvmx_pki_write_channel_bpid(node, ipdport + chan, bpid);
+
+		/* Setup QPG entries: */
+		group = qos_cfg->groups[chan];
+		qpg = qos_cfg->qpg_base + chan;
+		cvmx_pki_read_qpg_entry(node, qpg, &qpgcfg);
+		qpgcfg.port_add = chan;
+		qpgcfg.aura_num = gaura.laura;
+		qpgcfg.grp_ok = (node << CVMX_WQE_GRP_NODE_SHIFT) | group;
+		qpgcfg.grp_bad = (node << CVMX_WQE_GRP_NODE_SHIFT) | group;
+		qpgcfg.grptag_ok = (node << CVMX_WQE_GRP_NODE_SHIFT) | 0;
+		qpgcfg.grptag_bad = (node << CVMX_WQE_GRP_NODE_SHIFT) | 0;
+		cvmx_pki_write_qpg_entry(node, qpg, &qpgcfg);
+	}
+	/* Allocate and setup STYLE: */
+	cvmx_helper_pki_get_dflt_style(node, &stcfg_dflt);
+	style = cvmx_pki_style_alloc(node, -1 /*auto*/);
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &stcfg);
+	stcfg.tag_cfg = stcfg_dflt.tag_cfg;
+	stcfg.parm_cfg.tag_type = CVMX_POW_TAG_TYPE_ORDERED;
+	stcfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
+	stcfg.parm_cfg.qpg_base = qos_cfg->qpg_base;
+	stcfg.parm_cfg.qpg_port_msb = 0;
+	stcfg.parm_cfg.qpg_port_sh = 0;
+	stcfg.parm_cfg.qpg_dis_grptag = 1;
+	stcfg.parm_cfg.fcs_strip = 1;
+	stcfg.parm_cfg.mbuff_size = bufsize - 64; /* Do not use 100% of the buffer. */
+	stcfg.parm_cfg.force_drop = 0;
+	stcfg.parm_cfg.nodrop = 0;
+	stcfg.parm_cfg.rawdrp = 0;
+	stcfg.parm_cfg.cache_mode = 2; /* 1st buffer in L2 */
+	stcfg.parm_cfg.wqe_vs = qos_cfg->vlan_num;
+	cvmx_pki_write_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &stcfg);
+
+	/* Setup PKIND: */
+	pkind = cvmx_helper_get_pknd(xiface, port);
+	cvmx_pki_read_pkind_config(node, pkind, &pkcfg);
+	pkcfg.cluster_grp = 0; /* OCTEON3 has only one cluster group = 0 */
+	pkcfg.initial_style = style;
+	pkcfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
+	cvmx_pki_write_pkind_config(node, pkind, &pkcfg);
+
+	/* Setup parameters of the QOS packet and enable QOS flow-control: */
+	cvmx_bgx_set_pause_pkt_param(xipdport, 0, 0x0180c2000001, 0x8808, qos_cfg->p_time,
+				     qos_cfg->p_interval);
+	cvmx_bgx_set_flowctl_mode(xipdport, qos_cfg->qos_proto, qos_cfg->pkt_mode);
+
+	/* Enable PKI channel backpressure in the BGX: */
+	ovrbp.u64 = csr_rd_node(node, CVMX_BGXX_CMR_RX_OVR_BP(xi.interface));
+	ovrbp.s.en &= ~(1 << port);
+	ovrbp.s.ign_fifo_bp &= ~(1 << port);
+	csr_wr_node(node, CVMX_BGXX_CMR_RX_OVR_BP(xi.interface), ovrbp.u64);
+	return 0;
+}
+
+int cvmx_helper_qos_sso_setup(int xipdport, cvmx_qos_config_t *qos_cfg)
+{
+	const int channels = CVMX_QOS_NUM;
+	cvmx_sso_grpx_pri_t grppri;
+	int chan, qos, group;
+	cvmx_xport_t xp = cvmx_helper_ipd_port_to_xport(xipdport);
+	int node = xp.node;
+
+	for (chan = 0; chan < channels; chan++) {
+		qos = cvmx_helper_qos2prio(chan);
+		group = qos_cfg->groups[qos];
+		grppri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+		grppri.s.pri = qos_cfg->group_prio[chan];
+		csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grppri.u64);
+	}
+	return 0;
+}
+
+int cvmx_helper_get_chan_e_name(int chan, char *namebuf, int buflen)
+{
+	int n, dpichans;
+
+	if ((unsigned int)chan >= CVMX_PKO3_IPD_NUM_MAX) {
+		printf("%s: Channel %d is out of range (0..4095)\n", __func__, chan);
+		return -1;
+	}
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		dpichans = 64;
+	else
+		dpichans = 128;
+
+	if (chan >= 0 && chan < 64)
+		n = snprintf(namebuf, buflen, "LBK%d", chan);
+	else if (chan >= 0x100 && chan < (0x100 + dpichans))
+		n = snprintf(namebuf, buflen, "DPI%d", chan - 0x100);
+	else if (chan == 0x200)
+		n = snprintf(namebuf, buflen, "NQM");
+	else if (chan >= 0x240 && chan < (0x240 + (1 << 1) + 2))
+		n = snprintf(namebuf, buflen, "SRIO%d:%d", (chan - 0x240) >> 1,
+			     (chan - 0x240) & 0x1);
+	else if (chan >= 0x400 && chan < (0x400 + (1 << 8) + 256))
+		n = snprintf(namebuf, buflen, "ILK%d:%d", (chan - 0x400) >> 8,
+			     (chan - 0x400) & 0xFF);
+	else if (chan >= 0x800 && chan < (0x800 + (5 << 8) + (3 << 4) + 16))
+		n = snprintf(namebuf, buflen, "BGX%d:%d:%d", (chan - 0x800) >> 8,
+			     ((chan - 0x800) >> 4) & 0x3, (chan - 0x800) & 0xF);
+	else
+		n = snprintf(namebuf, buflen, "--");
+	return n;
+}
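+
+/*
+ * For example (illustrative): channel 0x803 decodes as BGX0, LMAC 0,
+ * channel 3:
+ *
+ *	char buf[16];
+ *
+ *	cvmx_helper_get_chan_e_name(0x803, buf, sizeof(buf));
+ *	(buf now holds "BGX0:0:3")
+ */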
+
+#ifdef CVMX_DUMP_DIAGNOSTICS
+void cvmx_helper_dump_for_diagnostics(int node)
+{
+	if (!(OCTEON_IS_OCTEON3() && !OCTEON_IS_MODEL(OCTEON_CN70XX))) {
+		printf("Diagnostics are not implemented for this model\n");
+		return;
+	}
+#ifdef CVMX_DUMP_GSER
+	{
+		int qlm, num_qlms;
+
+		num_qlms = cvmx_qlm_get_num();
+		for (qlm = 0; qlm < num_qlms; qlm++) {
+			cvmx_dump_gser_config_node(node, qlm);
+			cvmx_dump_gser_status_node(node, qlm);
+		}
+	}
+#endif
+#ifdef CVMX_DUMP_BGX
+	{
+		int bgx;
+
+		for (bgx = 0; bgx < CVMX_HELPER_MAX_GMX; bgx++) {
+			cvmx_dump_bgx_config_node(node, bgx);
+			cvmx_dump_bgx_status_node(node, bgx);
+		}
+	}
+#endif
+#ifdef CVMX_DUMP_PKI
+	cvmx_pki_config_dump(node);
+	cvmx_pki_stats_dump(node);
+#endif
+#ifdef CVMX_DUMP_PKO
+	cvmx_helper_pko3_config_dump(node);
+	cvmx_helper_pko3_stats_dump(node);
+#endif
+#ifdef CVMX_DUMP_SSO
+	cvmx_sso_config_dump(node);
+#endif
+}
+#endif
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 41/50] mips: octeon: Add cvmx-pcie.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (39 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 40/50] mips: octeon: Add cvmx-helper.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 42/50] mips: octeon: Add cvmx-qlm.c Stefan Roese
                   ` (11 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-pcie.c from 2013 U-Boot. It will be used by the later
added drivers to support PCIe and networking on the MIPS Octeon II / III
platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-pcie.c | 2487 +++++++++++++++++++++++++++++
 1 file changed, 2487 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-pcie.c

diff --git a/arch/mips/mach-octeon/cvmx-pcie.c b/arch/mips/mach-octeon/cvmx-pcie.c
new file mode 100644
index 0000000000..f42d44cbec
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-pcie.c
@@ -0,0 +1,2487 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to PCIe as a host(RC) or target(EP)
+ */
+
+#include <log.h>
+#include <linux/delay.h>
+#include <linux/libfdt.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+
+#include <mach/cvmx-helper-fdt.h>
+
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-error.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-bgxx-defs.h>
+#include <mach/cvmx-ciu-defs.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-gserx-defs.h>
+#include <mach/cvmx-mio-defs.h>
+#include <mach/cvmx-pciercx-defs.h>
+#include <mach/cvmx-pcieepx-defs.h>
+#include <mach/cvmx-pemx-defs.h>
+#include <mach/cvmx-pexp-defs.h>
+#include <mach/cvmx-rst-defs.h>
+#include <mach/cvmx-sata-defs.h>
+#include <mach/cvmx-sli-defs.h>
+#include <mach/cvmx-sriomaintx-defs.h>
+#include <mach/cvmx-sriox-defs.h>
+
+#include <mach/cvmx-dpi-defs.h>
+#include <mach/cvmx-dtx-defs.h>
+
+DECLARE_GLOBAL_DATA_PTR;
+
+#define MRRS_CN6XXX 3 /* 1024 byte Max Read Request Size */
+#define MPS_CN6XXX  0 /* 128 byte Max Packet Size (Limit of most PCs) */
+
+/* Endian swap mode. */
+#define _CVMX_PCIE_ES 1
+
+#define CVMX_READ_CSR(addr)		   csr_rd_node(node, addr)
+#define CVMX_WRITE_CSR(addr, val)	   csr_wr_node(node, addr, val)
+#define CVMX_PCIE_CFGX_READ(p, addr)	   cvmx_pcie_cfgx_read_node(node, p, addr)
+#define CVMX_PCIE_CFGX_WRITE(p, addr, val) cvmx_pcie_cfgx_write_node(node, p, addr, val)
+
+/* #define DEBUG_PCIE */
+
+/* Delay after link up, before issuing first configuration read */
+#define PCIE_DEVICE_READY_WAIT_DELAY_MICROSECONDS 700000
+
+/* Recommended Preset Vector: Drop Preset 10    */
+int pcie_preset_vec[4] = { 0x593, 0x593, 0x593, 0x593 };
+
+/* Number of LTSSM transitions to record, must be a power of 2 */
+#define LTSSM_HISTORY_SIZE 64
+#define MAX_RETRIES	   2
+
+bool pcie_link_initialized[CVMX_MAX_NODES][CVMX_PCIE_MAX_PORTS];
+int cvmx_primary_pcie_bus_number = 1;
+
+static uint32_t __cvmx_pcie_config_read32(int node, int pcie_port, int bus, int dev, int func,
+					  int reg, int lst);
+
+/**
+ * Return the Core virtual base address for PCIe IO access. IOs are
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+uint64_t cvmx_pcie_get_io_base_address(int pcie_port)
+{
+	cvmx_pcie_address_t pcie_addr;
+
+	pcie_addr.u64 = 0;
+	pcie_addr.io.upper = 0;
+	pcie_addr.io.io = 1;
+	pcie_addr.io.did = 3;
+	pcie_addr.io.subdid = 2;
+	pcie_addr.io.node = (pcie_port >> 4) & 0x3;
+	pcie_addr.io.es = _CVMX_PCIE_ES;
+	pcie_addr.io.port = (pcie_port & 0x3);
+	return pcie_addr.u64;
+}
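+
+/*
+ * Example (illustrative): on multi-node systems the node number is
+ * carried in bits <5:4> of the port argument, so port 2 on node 1 is
+ * addressed as:
+ *
+ *	u64 io_base = cvmx_pcie_get_io_base_address((1 << 4) | 2);
+ */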
+
+/**
+ * Size of the IO address region returned at address
+ * cvmx_pcie_get_io_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the IO window
+ */
+uint64_t cvmx_pcie_get_io_size(int pcie_port)
+{
+	return 1ull << 32;
+}
+
+/**
+ * Return the Core virtual base address for PCIe MEM access. Memory is
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+uint64_t cvmx_pcie_get_mem_base_address(int pcie_port)
+{
+	cvmx_pcie_address_t pcie_addr;
+
+	pcie_addr.u64 = 0;
+	pcie_addr.mem.upper = 0;
+	pcie_addr.mem.io = 1;
+	pcie_addr.mem.did = 3;
+	pcie_addr.mem.subdid = 3 + (pcie_port & 0x3);
+	pcie_addr.mem.node = (pcie_port >> 4) & 0x3;
+	return pcie_addr.u64;
+}
+
+/**
+ * Size of the Mem address region returned at address
+ * cvmx_pcie_get_mem_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the Mem window
+ */
+uint64_t cvmx_pcie_get_mem_size(int pcie_port)
+{
+	return 1ull << 36;
+}
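+
+/*
+ * Sketch of how a caller derives the usable MEM window from the two
+ * helpers above; the window for a port spans [base, base + size), with
+ * size fixed at 1 << 36 (64 GiB):
+ *
+ *	u64 base = cvmx_pcie_get_mem_base_address(pcie_port);
+ *	u64 size = cvmx_pcie_get_mem_size(pcie_port);
+ */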
+
+/**
+ * @INTERNAL
+ * Return the QLM number for the PCIe port.
+ *
+ * @param  node       Node the PCIe port is on.
+ * @param  pcie_port  PCIe port to return the QLM number for.
+ *
+ * @return QLM number, or negative if the port is disabled or invalid.
+ */
+static int __cvmx_pcie_get_qlm(int node, int pcie_port)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		cvmx_pemx_cfg_t pem_cfg;
+		cvmx_pemx_qlm_t pem_qlm;
+		cvmx_gserx_cfg_t gserx_cfg;
+
+		switch (pcie_port) {
+		case 0: /* PEM0 */
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(0));
+			if (gserx_cfg.s.pcie)
+				return 0; /* PEM0 is on QLM0 and possibly QLM1 */
+			else
+				return -1; /* PEM0 is disabled */
+		case 1:			   /* PEM1 */
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(0));
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(1));
+			if (!pem_cfg.cn78xx.lanes8 && gserx_cfg.s.pcie)
+				return 1; /* PEM1 is on QLM 1 */
+			else
+				return -1; /* PEM1 is disabled */
+		case 2:			   /* PEM2 */
+			pem_qlm.u64 = CVMX_READ_CSR(CVMX_PEMX_QLM(2));
+			if (pem_qlm.cn73xx.pemdlmsel == 1) {
+				gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(5));
+				if (gserx_cfg.s.pcie)
+					return 5; /* PEM2 is on DLM5 */
+				else
+					return -1; /* PEM2 is disabled */
+			}
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(2));
+			if (gserx_cfg.s.pcie)
+				return 2; /* PEM2 is on QLM2 and possibly QLM3 */
+			else
+				return -1; /* PEM2 is disabled */
+		case 3:			   /* PEM3 */
+			pem_qlm.u64 = CVMX_READ_CSR(CVMX_PEMX_QLM(3));
+			if (pem_qlm.cn73xx.pemdlmsel == 1) {
+				gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(6));
+				if (gserx_cfg.s.pcie)
+					return 6; /* PEM3 is on DLM6 */
+				else
+					return -1; /* PEM3 is disabled */
+			}
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(2));
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(3));
+			if (!pem_cfg.cn78xx.lanes8 && gserx_cfg.s.pcie)
+				return 3; /* PEM3 is on QLM3 */
+			else
+				return -1; /* PEM3 is disabled */
+		default:
+			printf("Invalid %d PCIe port\n", pcie_port);
+			return -2;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		cvmx_pemx_cfg_t pem_cfg;
+		cvmx_gserx_cfg_t gserx_cfg;
+
+		switch (pcie_port) {
+		case 0:
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(0));
+			if (gserx_cfg.s.pcie)
+				return 0; /* PEM0 is on QLM0 and possibly QLM1 */
+			else
+				return -1; /* PEM0 is disabled */
+		case 1:			   /* PEM1 */
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(0));
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(1));
+			if (!pem_cfg.cn78xx.lanes8 && gserx_cfg.s.pcie)
+				return 1; /* PEM1 is on QLM 1 */
+			else
+				return -1; /* PEM1 is disabled */
+		case 2:			   /* PEM2 */
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(2));
+			if (gserx_cfg.s.pcie)
+				return 2; /* PEM2 is on QLM2 and possibly QLM3 */
+			else
+				return -1; /* PEM2 is disabled */
+		case 3:			   /* PEM3 */
+		{
+			cvmx_gserx_cfg_t gser4_cfg;
+
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(2));
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(3));
+			gser4_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(4));
+			if (pem_cfg.cn78xx.lanes8) {
+				if (gser4_cfg.s.pcie)
+					return 4; /* PEM3 is on QLM4 */
+				else
+					return -1; /* PEM3 is disabled */
+			} else {
+				if (gserx_cfg.s.pcie)
+					return 3; /* PEM3 is on QLM3 */
+				else if (gser4_cfg.s.pcie)
+					return 4; /* PEM3 is on QLM4 */
+				else
+					return -1; /* PEM3 is disabled */
+			}
+		}
+		default:
+			printf("Invalid %d PCIe port\n", pcie_port);
+			return -1;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		enum cvmx_qlm_mode mode1 = cvmx_qlm_get_mode(1);
+		enum cvmx_qlm_mode mode2 = cvmx_qlm_get_mode(2);
+
+		switch (pcie_port) {
+		case 0: /* PCIe0 can be DLM1 with 1, 2 or 4 lanes */
+			if (mode1 == CVMX_QLM_MODE_PCIE ||     /* Using DLM 1-2 */
+			    mode1 == CVMX_QLM_MODE_PCIE_1X2 || /* Using DLM 1 */
+			    mode1 == CVMX_QLM_MODE_PCIE_2X1 || /* Using DLM 1, lane 0 */
+			    mode1 == CVMX_QLM_MODE_PCIE_1X1) /* Using DLM 1, l0, l1 not used */
+				return 1;
+			else
+				return -1;
+		case 1: /* PCIe1 can be DLM1 1 lane(1), DLM2 1 lane(0) or 2 lanes(0-1) */
+			if (mode1 == CVMX_QLM_MODE_PCIE_2X1)
+				return 1;
+			else if (mode2 == CVMX_QLM_MODE_PCIE_1X2)
+				return 2;
+			else if (mode2 == CVMX_QLM_MODE_PCIE_2X1)
+				return 2;
+			else
+				return -1;
+		case 2: /* PCIe2 can be DLM2 1 lane(1) */
+			if (mode2 == CVMX_QLM_MODE_PCIE_2X1)
+				return 2;
+			else
+				return -1;
+		default: /* Only three PEM blocks */
+			return -1;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_gserx_cfg_t gserx_cfg;
+
+		switch (pcie_port) {
+		case 0: /* PEM0 */
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(0));
+			if (gserx_cfg.s.pcie)
+				return 0; /* PEM0 is on QLM0 and possibly QLM1 */
+			else
+				return -1; /* PEM0 is disabled */
+		case 1:			   /* PEM1 */
+			gserx_cfg.u64 = CVMX_READ_CSR(CVMX_GSERX_CFG(1));
+			if (gserx_cfg.s.pcie)
+				return 1; /* PEM1 is on DLM1 */
+			else
+				return -1; /* PEM1 is disabled */
+		default:
+			return -1;
+		}
+	}
+	return -1;
+}
+
+/**
+ * @INTERNAL
+ * Initialize the RC config space CSRs
+ *
+ * @param node      node
+ * @param pcie_port PCIe port to initialize
+ */
+static void __cvmx_pcie_rc_initialize_config_space(int node, int pcie_port)
+{
+	/* Max Payload Size (PCIE*_CFG030[MPS]) */
+	/* Max Read Request Size (PCIE*_CFG030[MRRS]) */
+	/* Relaxed-order, no-snoop enables (PCIE*_CFG030[RO_EN,NS_EN] */
+	/* Error Message Enables (PCIE*_CFG030[CE_EN,NFE_EN,FE_EN,UR_EN]) */
+	{
+		cvmx_pciercx_cfg030_t pciercx_cfg030;
+
+		pciercx_cfg030.u32 = CVMX_PCIE_CFGX_READ(pcie_port,
+							 CVMX_PCIERCX_CFG030(pcie_port));
+		pciercx_cfg030.s.mps = MPS_CN6XXX;
+		pciercx_cfg030.s.mrrs = MRRS_CN6XXX;
+		/*
+		 * Enable relaxed order processing. This will allow devices
+		 * to affect read response ordering
+		 */
+		pciercx_cfg030.s.ro_en = 1;
+		/* Enable no snoop processing. Not used by Octeon */
+		pciercx_cfg030.s.ns_en = 1;
+		/* Correctable error reporting enable. */
+		pciercx_cfg030.s.ce_en = 1;
+		/* Non-fatal error reporting enable. */
+		pciercx_cfg030.s.nfe_en = 1;
+		/* Fatal error reporting enable. */
+		pciercx_cfg030.s.fe_en = 1;
+		/* Unsupported request reporting enable. */
+		pciercx_cfg030.s.ur_en = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG030(pcie_port),
+				     pciercx_cfg030.u32);
+	}
+
+	/*
+	 * Max Payload Size (DPI_SLI_PRTX_CFG[MPS]) must match
+	 * PCIE*_CFG030[MPS]
+	 */
+	/*
+	 * Max Read Request Size (DPI_SLI_PRTX_CFG[MRRS]) must not exceed
+	 * PCIE*_CFG030[MRRS]
+	 */
+	cvmx_dpi_sli_prtx_cfg_t prt_cfg;
+	cvmx_sli_s2m_portx_ctl_t sli_s2m_portx_ctl;
+
+	prt_cfg.u64 = CVMX_READ_CSR(CVMX_DPI_SLI_PRTX_CFG(pcie_port));
+	prt_cfg.s.mps = MPS_CN6XXX;
+	prt_cfg.s.mrrs = MRRS_CN6XXX;
+	/* Max outstanding load request. */
+	prt_cfg.s.molr = 32;
+	CVMX_WRITE_CSR(CVMX_DPI_SLI_PRTX_CFG(pcie_port), prt_cfg.u64);
+
+	sli_s2m_portx_ctl.u64 = CVMX_READ_CSR(CVMX_PEXP_SLI_S2M_PORTX_CTL(pcie_port));
+	if (!(OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX) ||
+	      OCTEON_IS_MODEL(OCTEON_CNF75XX)))
+		sli_s2m_portx_ctl.cn61xx.mrrs = MRRS_CN6XXX;
+	CVMX_WRITE_CSR(CVMX_PEXP_SLI_S2M_PORTX_CTL(pcie_port), sli_s2m_portx_ctl.u64);
+
+	/* ECRC Generation (PCIE*_CFG070[GE,CE]) */
+	{
+		cvmx_pciercx_cfg070_t pciercx_cfg070;
+
+		pciercx_cfg070.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG070(pcie_port));
+		pciercx_cfg070.s.ge = 1; /* ECRC generation enable. */
+		pciercx_cfg070.s.ce = 1; /* ECRC check enable. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG070(pcie_port), pciercx_cfg070.u32);
+	}
+
+	/* Access Enables (PCIE*_CFG001[MSAE,ME]) */
+	/* ME and MSAE should always be set. */
+	/* Interrupt Disable (PCIE*_CFG001[I_DIS]) */
+	/* System Error Message Enable (PCIE*_CFG001[SEE]) */
+	{
+		cvmx_pciercx_cfg001_t pciercx_cfg001;
+
+		pciercx_cfg001.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG001(pcie_port));
+		pciercx_cfg001.s.msae = 1;  /* Memory space enable. */
+		pciercx_cfg001.s.me = 1;    /* Bus master enable. */
+		pciercx_cfg001.s.i_dis = 1; /* INTx assertion disable. */
+		pciercx_cfg001.s.see = 1;   /* SERR# enable */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG001(pcie_port), pciercx_cfg001.u32);
+	}
+
+	/* Advanced Error Recovery Message Enables */
+	/* (PCIE*_CFG066,PCIE*_CFG067,PCIE*_CFG069) */
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG066(pcie_port), 0);
+	/* Use CVMX_PCIERCX_CFG067 hardware default */
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG069(pcie_port), 0);
+
+	/* Active State Power Management (PCIE*_CFG032[ASLPC]) */
+	{
+		cvmx_pciercx_cfg032_t pciercx_cfg032;
+
+		pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
+		pciercx_cfg032.s.aslpc = 0; /* Active state Link PM control. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG032(pcie_port), pciercx_cfg032.u32);
+	}
+
+	/* Link Width Mode (PCIERCn_CFG452[LME]) - Set during
+	 * cvmx_pcie_rc_initialize_link()
+	 */
+	/* Primary Bus Number (PCIERCn_CFG006[PBNUM]) */
+	{
+		/* We set the primary bus number to 1 so IDT bridges are happy.
+		 * They don't like zero
+		 */
+		cvmx_pciercx_cfg006_t pciercx_cfg006;
+
+		pciercx_cfg006.u32 = 0;
+		pciercx_cfg006.s.pbnum = cvmx_primary_pcie_bus_number;
+		pciercx_cfg006.s.sbnum = cvmx_primary_pcie_bus_number;
+		pciercx_cfg006.s.subbnum = cvmx_primary_pcie_bus_number;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG006(pcie_port), pciercx_cfg006.u32);
+	}
+
+	/* Memory-mapped I/O BAR (PCIERCn_CFG008) */
+	/* Most applications should disable the memory-mapped I/O BAR by */
+	/* setting PCIERCn_CFG008[ML_ADDR] < PCIERCn_CFG008[MB_ADDR] */
+	{
+		cvmx_pciercx_cfg008_t pciercx_cfg008;
+
+		pciercx_cfg008.u32 = 0;
+		pciercx_cfg008.s.mb_addr = 0x100;
+		pciercx_cfg008.s.ml_addr = 0;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG008(pcie_port), pciercx_cfg008.u32);
+	}
+
+	/* Prefetchable BAR (PCIERCn_CFG009,PCIERCn_CFG010,PCIERCn_CFG011) */
+	/* Most applications should disable the prefetchable BAR by setting */
+	/* PCIERCn_CFG011[UMEM_LIMIT],PCIERCn_CFG009[LMEM_LIMIT] < */
+	/* PCIERCn_CFG010[UMEM_BASE],PCIERCn_CFG009[LMEM_BASE] */
+	{
+		cvmx_pciercx_cfg009_t pciercx_cfg009;
+		cvmx_pciercx_cfg010_t pciercx_cfg010;
+		cvmx_pciercx_cfg011_t pciercx_cfg011;
+
+		pciercx_cfg009.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG009(pcie_port));
+		pciercx_cfg010.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG010(pcie_port));
+		pciercx_cfg011.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG011(pcie_port));
+		pciercx_cfg009.s.lmem_base = 0x100;
+		pciercx_cfg009.s.lmem_limit = 0;
+		pciercx_cfg010.s.umem_base = 0x100;
+		pciercx_cfg011.s.umem_limit = 0;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG009(pcie_port), pciercx_cfg009.u32);
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG010(pcie_port), pciercx_cfg010.u32);
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG011(pcie_port), pciercx_cfg011.u32);
+	}
+
+	/* System Error Interrupt Enables (PCIERCn_CFG035[SECEE,SEFEE,SENFEE]) */
+	/* PME Interrupt Enables (PCIERCn_CFG035[PMEIE]) */
+	{
+		cvmx_pciercx_cfg035_t pciercx_cfg035;
+
+		pciercx_cfg035.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG035(pcie_port));
+		pciercx_cfg035.s.secee = 1;  /* System error on correctable error enable. */
+		pciercx_cfg035.s.sefee = 1;  /* System error on fatal error enable. */
+		pciercx_cfg035.s.senfee = 1; /* System error on non-fatal error enable. */
+		pciercx_cfg035.s.pmeie = 1;  /* PME interrupt enable. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG035(pcie_port), pciercx_cfg035.u32);
+	}
+
+	/* Advanced Error Recovery Interrupt Enables */
+	/* (PCIERCn_CFG075[CERE,NFERE,FERE]) */
+	{
+		cvmx_pciercx_cfg075_t pciercx_cfg075;
+
+		pciercx_cfg075.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG075(pcie_port));
+		pciercx_cfg075.s.cere = 1;  /* Correctable error reporting enable. */
+		pciercx_cfg075.s.nfere = 1; /* Non-fatal error reporting enable. */
+		pciercx_cfg075.s.fere = 1;  /* Fatal error reporting enable. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG075(pcie_port), pciercx_cfg075.u32);
+	}
+
+	/* HP Interrupt Enables (PCIERCn_CFG034[HPINT_EN], */
+	/* PCIERCn_CFG034[DLLS_EN,CCINT_EN]) */
+	{
+		cvmx_pciercx_cfg034_t pciercx_cfg034;
+
+		pciercx_cfg034.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG034(pcie_port));
+		pciercx_cfg034.s.hpint_en = 1; /* Hot-plug interrupt enable. */
+		pciercx_cfg034.s.dlls_en = 1;  /* Data Link Layer state changed enable */
+		pciercx_cfg034.s.ccint_en = 1; /* Command completed interrupt enable. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG034(pcie_port), pciercx_cfg034.u32);
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		int qlm = __cvmx_pcie_get_qlm(node, pcie_port);
+		int speed = cvmx_qlm_get_gbaud_mhz(qlm);
+		cvmx_pemx_cfg_t pem_cfg;
+		cvmx_pciercx_cfg031_t cfg031;
+		cvmx_pciercx_cfg040_t cfg040;
+		cvmx_pciercx_cfg452_t cfg452;
+		cvmx_pciercx_cfg089_t cfg089;
+		cvmx_pciercx_cfg090_t cfg090;
+		cvmx_pciercx_cfg091_t cfg091;
+		cvmx_pciercx_cfg092_t cfg092;
+		cvmx_pciercx_cfg554_t cfg554;
+
+		/*
+		 * Make sure the PEM agrees with GSERX about the speed
+		 * it's going to try
+		 */
+		switch (speed) {
+		case 2500: /* Gen1 */
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+			pem_cfg.s.md = 0;
+			CVMX_WRITE_CSR(CVMX_PEMX_CFG(pcie_port), pem_cfg.u64);
+
+			/* Set the target link speed */
+			cfg040.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG040(pcie_port));
+			cfg040.s.tls = 1;
+			CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG040(pcie_port), cfg040.u32);
+			break;
+		case 5000: /* Gen2 */
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+			pem_cfg.s.md = 1;
+			CVMX_WRITE_CSR(CVMX_PEMX_CFG(pcie_port), pem_cfg.u64);
+
+			/* Set the target link speed */
+			cfg040.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG040(pcie_port));
+			cfg040.s.tls = 2;
+			CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG040(pcie_port), cfg040.u32);
+			break;
+		case 8000: /* Gen3 */
+			pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+			pem_cfg.s.md = 2;
+			CVMX_WRITE_CSR(CVMX_PEMX_CFG(pcie_port), pem_cfg.u64);
+
+			/* Set the target link speed */
+			cfg040.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG040(pcie_port));
+			cfg040.s.tls = 3;
+			CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG040(pcie_port), cfg040.u32);
+			break;
+		default:
+			break;
+		}
+
+		/* Link Width Mode (PCIERCn_CFG452[LME]) */
+		pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+		cfg452.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG452(pcie_port));
+		if (qlm >= 5)
+			cfg452.s.lme = 0x3;
+		else
+			cfg452.s.lme = (pem_cfg.cn78xx.lanes8) ? 0xf : 0x7;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG452(pcie_port), cfg452.u32);
+
+		/* Errata PEM-25990 - Disable ASLPMS */
+		cfg031.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG031(pcie_port));
+		cfg031.s.aslpms = 0;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG031(pcie_port), cfg031.u32);
+
+		/* CFG554.PRV default changed from 16'h7ff to 16'h593. */
+		cfg554.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG554(pcie_port));
+		cfg554.s.prv = pcie_preset_vec[pcie_port];
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG554(pcie_port), cfg554.u32);
+		/* Errata PEM-26189 - Disable the 2ms timer on all chips */
+		cfg554.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG554(pcie_port));
+		cfg554.s.p23td = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG554(pcie_port), cfg554.u32);
+
+		/* Errata PEM-21178 - Change the CFG[089-092] LxUTP & LxDTP defaults. */
+		cfg089.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG089(pcie_port));
+		cfg089.s.l1ddtp = 7;
+		cfg089.s.l1utp = 7;
+		cfg089.s.l0dtp = 7;
+		cfg089.s.l0utp = 7;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG089(pcie_port), cfg089.u32);
+		cfg090.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG090(pcie_port));
+		cfg090.s.l3dtp = 7;
+		cfg090.s.l3utp = 7;
+		cfg090.s.l2dtp = 7;
+		cfg090.s.l2utp = 7;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG090(pcie_port), cfg090.u32);
+		cfg091.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG091(pcie_port));
+		cfg091.s.l5dtp = 7;
+		cfg091.s.l5utp = 7;
+		cfg091.s.l4dtp = 7;
+		cfg091.s.l4utp = 7;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG091(pcie_port), cfg091.u32);
+		cfg092.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG092(pcie_port));
+		cfg092.s.l7dtp = 7;
+		cfg092.s.l7utp = 7;
+		cfg092.s.l6dtp = 7;
+		cfg092.s.l6utp = 7;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG092(pcie_port), cfg092.u32);
+	}
+}
+
+static void __cvmx_increment_ba(cvmx_sli_mem_access_subidx_t *pmas)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		pmas->cn68xx.ba++;
+	else
+		pmas->cn63xx.ba++;
+}
+
+/*
+ * milliseconds to retry PCIe cfg-space access:
+ * Value 32(unscaled) was recommended in HRM, but may be too small for
+ * some PCIe devices. This 200mS default should cover most devices,
+ * but can be extended by bootparam cvmx-pcie.cfg_timeout, or reduced
+ * to speed boot if it is known that no devices need so much time.
+ */
+static int cfg_timeout = 200;
+
+static int cfg_retries(void)
+{
+	static int cfg_ticks = -1;
+
+	if (cfg_ticks < 0) {
+		u64 nS = cfg_timeout * 1000000;
+		const int ceiling = 0xffff;
+
+		cfg_ticks = nS / (gd->bus_clk >> 16);
+		if (cfg_ticks > ceiling)
+			cfg_ticks = ceiling;
+	}
+
+	return cfg_ticks;
+}
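+
+/*
+ * Worked example of the scaling above (illustrative, assuming
+ * gd->bus_clk = 800 MHz): bus_clk >> 16 is ~12207, so the default
+ * 200 ms budget of 2e8 ns yields ~16384 ticks, comfortably below the
+ * 0xffff ceiling.
+ */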
+
+/**
+ * @INTERNAL
+ * Enable/Disable PEMX_PEMON.pemon based on the direction.
+ *
+ * @param node      node
+ * @param pcie_port PCIe port
+ * @param direction 0 to disable, 1 to enable
+ */
+static void __cvmx_pcie_config_pemon(int node, int pcie_port, bool direction)
+{
+	cvmx_pemx_on_t pemon;
+
+	pemon.u64 = CVMX_READ_CSR(CVMX_PEMX_ON(pcie_port));
+	pemon.s.pemon = direction;
+	CVMX_WRITE_CSR(CVMX_PEMX_ON(pcie_port), pemon.u64);
+	pemon.u64 = CVMX_READ_CSR(CVMX_PEMX_ON(pcie_port));
+}
+
+/**
+ * @INTERNAL
+ * De-assert GSER_PHY.phy_reset for a given qlm
+ *
+ * @param node       node
+ * @param qlm        qlm for a given PCIe port
+ */
+static void __cvmx_pcie_gser_phy_config(int node, int pcie_port, int qlm)
+{
+	cvmx_pemx_cfg_t pem_cfg;
+	cvmx_gserx_phy_ctl_t ctrl;
+	int has_8lanes = 0;
+	int is_gen3 = 0;
+
+	ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm));
+
+	/* Assert the reset */
+	ctrl.s.phy_reset = 1;
+	CVMX_WRITE_CSR(CVMX_GSERX_PHY_CTL(qlm), ctrl.u64);
+	pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+	udelay(10);
+
+	has_8lanes = pem_cfg.cn78xx.lanes8;
+	is_gen3 = pem_cfg.cn78xx.md >= 2;
+
+	if (has_8lanes) {
+		ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm + 1));
+		ctrl.s.phy_reset = 1;
+		CVMX_WRITE_CSR(CVMX_GSERX_PHY_CTL(qlm + 1), ctrl.u64);
+		ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm + 1));
+	}
+	ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm));
+	udelay(10);
+
+	/* Deassert the reset */
+	ctrl.s.phy_reset = 0;
+	CVMX_WRITE_CSR(CVMX_GSERX_PHY_CTL(qlm), ctrl.u64);
+	pem_cfg.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG(pcie_port));
+	udelay(500);
+
+	if (has_8lanes) {
+		ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm + 1));
+		ctrl.s.phy_reset = 0;
+		CVMX_WRITE_CSR(CVMX_GSERX_PHY_CTL(qlm + 1), ctrl.u64);
+	}
+	ctrl.u64 = CVMX_READ_CSR(CVMX_GSERX_PHY_CTL(qlm));
+	udelay(500);
+
+	/* Apply some erratas after PHY reset, only applies to PCIe GEN3 */
+	if (is_gen3) {
+		int i;
+		int high_qlm = has_8lanes ? qlm + 1 : qlm;
+
+		/* Apply workaround for Errata GSER-26150 */
+		if (OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_0)) {
+			for (i = qlm; i < high_qlm; i++) {
+				cvmx_gserx_glbl_pll_cfg_3_t pll_cfg_3;
+				cvmx_gserx_glbl_misc_config_1_t misc_config_1;
+				/* Update PLL parameters */
+				/*
+				 * Step 1: Set
+				 * GSER()_GLBL_PLL_CFG_3[PLL_VCTRL_SEL_LCVCO_VAL] = 0x2,
+				 * and
+				 * GSER()_GLBL_PLL_CFG_3[PCS_SDS_PLL_VCO_AMP] = 0
+				 */
+				pll_cfg_3.u64 = CVMX_READ_CSR(CVMX_GSERX_GLBL_PLL_CFG_3(i));
+				pll_cfg_3.s.pcs_sds_pll_vco_amp = 0;
+				pll_cfg_3.s.pll_vctrl_sel_lcvco_val = 2;
+				CVMX_WRITE_CSR(CVMX_GSERX_GLBL_PLL_CFG_3(i), pll_cfg_3.u64);
+
+				/*
+				 * Step 2: Set
+				 * GSER()_GLBL_MISC_CONFIG_1[PCS_SDS_TRIM_CHP_REG] = 0x2.
+				 */
+				misc_config_1.u64 = CVMX_READ_CSR(CVMX_GSERX_GLBL_MISC_CONFIG_1(i));
+				misc_config_1.s.pcs_sds_trim_chp_reg = 2;
+				CVMX_WRITE_CSR(CVMX_GSERX_GLBL_MISC_CONFIG_1(i), misc_config_1.u64);
+			}
+		}
+
+		/* Apply workaround for Errata GSER-25992 */
+		if (OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_X) ||
+		    OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)) {
+			for (i = qlm; i < high_qlm; i++)
+				cvmx_qlm_gser_errata_25992(node, i);
+		}
+	}
+}
+
+/* Get the PCIe LTSSM state for the given port
+ *
+ * @param node      Node to query
+ * @param pcie_port PEM to query
+ *
+ * @return LTSSM state
+ */
+static int __cvmx_pcie_rc_get_ltssm_state(int node, int pcie_port)
+{
+	u64 debug;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) && pcie_port == 0) {
+		CVMX_WRITE_CSR(CVMX_DTX_SPEM_SELX(0), 0);
+		CVMX_READ_CSR(CVMX_DTX_SPEM_SELX(0));
+		CVMX_WRITE_CSR(CVMX_DTX_SPEM_ENAX(0), 0xfffffffffull);
+		CVMX_READ_CSR(CVMX_DTX_SPEM_ENAX(0));
+
+		/* Read the value */
+		debug = CVMX_READ_CSR(CVMX_DTX_SPEM_DATX(0));
+
+		/* Disable the PEM from driving OCLA signals */
+		CVMX_WRITE_CSR(CVMX_DTX_SPEM_ENAX(0), 0);
+		CVMX_READ_CSR(CVMX_DTX_SPEM_ENAX(0));
+	} else {
+		/* LTSSM state is in debug select 0 */
+		CVMX_WRITE_CSR(CVMX_DTX_PEMX_SELX(0, pcie_port), 0);
+		CVMX_READ_CSR(CVMX_DTX_PEMX_SELX(0, pcie_port));
+		CVMX_WRITE_CSR(CVMX_DTX_PEMX_ENAX(0, pcie_port), 0xfffffffffull);
+		CVMX_READ_CSR(CVMX_DTX_PEMX_ENAX(0, pcie_port));
+
+		/* Read the value */
+		debug = CVMX_READ_CSR(CVMX_DTX_PEMX_DATX(0, pcie_port));
+
+		/* Disable the PEM from driving OCLA signals */
+		CVMX_WRITE_CSR(CVMX_DTX_PEMX_ENAX(0, pcie_port), 0);
+		CVMX_READ_CSR(CVMX_DTX_PEMX_ENAX(0, pcie_port));
+	}
+
+	/* DBGSEL = 0x0, bits[8:3] */
+	return cvmx_bit_extract(debug, 3, 6);
+}
+
+/**
+ * Convert a PCIe LTSSM state value to a human-readable string
+ *
+ * @param ltssm LTSSM state, as returned by __cvmx_pcie_rc_get_ltssm_state()
+ *
+ * @return Constant string naming the state
+ */
+static const char *cvmx_pcie_get_ltssm_string(int ltssm)
+{
+	switch (ltssm) {
+	case 0x00:
+		return "DETECT_QUIET";
+	case 0x01:
+		return "DETECT_ACT";
+	case 0x02:
+		return "POLL_ACTIVE";
+	case 0x03:
+		return "POLL_COMPLIANCE";
+	case 0x04:
+		return "POLL_CONFIG";
+	case 0x05:
+		return "PRE_DETECT_QUIET";
+	case 0x06:
+		return "DETECT_WAIT";
+	case 0x07:
+		return "CFG_LINKWD_START";
+	case 0x08:
+		return "CFG_LINKWD_ACEPT";
+	case 0x09:
+		return "CFG_LANENUM_WAIT";
+	case 0x0A:
+		return "CFG_LANENUM_ACEPT";
+	case 0x0B:
+		return "CFG_COMPLETE";
+	case 0x0C:
+		return "CFG_IDLE";
+	case 0x0D:
+		return "RCVRY_LOCK";
+	case 0x0E:
+		return "RCVRY_SPEED";
+	case 0x0F:
+		return "RCVRY_RCVRCFG";
+	case 0x10:
+		return "RCVRY_IDLE";
+	case 0x11:
+		return "L0";
+	case 0x12:
+		return "L0S";
+	case 0x13:
+		return "L123_SEND_EIDLE";
+	case 0x14:
+		return "L1_IDLE";
+	case 0x15:
+		return "L2_IDLE";
+	case 0x16:
+		return "L2_WAKE";
+	case 0x17:
+		return "DISABLED_ENTRY";
+	case 0x18:
+		return "DISABLED_IDLE";
+	case 0x19:
+		return "DISABLED";
+	case 0x1A:
+		return "LPBK_ENTRY";
+	case 0x1B:
+		return "LPBK_ACTIVE";
+	case 0x1C:
+		return "LPBK_EXIT";
+	case 0x1D:
+		return "LPBK_EXIT_TIMEOUT";
+	case 0x1E:
+		return "HOT_RESET_ENTRY";
+	case 0x1F:
+		return "HOT_RESET";
+	case 0x20:
+		return "RCVRY_EQ0";
+	case 0x21:
+		return "RCVRY_EQ1";
+	case 0x22:
+		return "RCVRY_EQ2";
+	case 0x23:
+		return "RCVRY_EQ3";
+	default:
+		return "Unknown";
+	}
+}
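+
+/*
+ * Debug usage sketch (not part of this patch): the two helpers above
+ * combine into a one-line link-training trace:
+ *
+ *	int st = __cvmx_pcie_rc_get_ltssm_state(node, pcie_port);
+ *
+ *	debug("PCIe%d LTSSM: %s\n", pcie_port, cvmx_pcie_get_ltssm_string(st));
+ */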
+
+/**
+ * During PCIe link initialization we need to make config requests to the
+ * attached device to verify its speed and width. These config accesses
+ * happen very early after the device is taken out of reset, so they may
+ * fail for some amount of time. This function automatically retries them.
+ * The normal builtin hardware retry isn't enough for this very early
+ * access.
+ *
+ * @param node      Node to read from
+ * @param pcie_port PCIe port to read from
+ * @param bus       PCIe bus number
+ * @param dev       PCIe device
+ * @param func      PCIe function on the device
+ * @param reg       Register to read
+ *
+ * @return Config register value, or all ones on failure
+ */
+static uint32_t cvmx_pcie_config_read32_retry(int node, int pcie_port, int bus, int dev, int func,
+					      int reg)
+{
+	/*
+	 * Read the PCI config register until we get a valid value. Some cards
+	 * require time after link up to return data. Wait at most 3 seconds.
+	 */
+	u64 timeout = 300;
+	u32 val;
+
+	do {
+		/* Read PCI capability pointer */
+		val = __cvmx_pcie_config_read32(node, pcie_port, bus, dev, func, reg, 0);
+
+		/* Check the read succeeded */
+		if (val != 0xffffffff)
+			return val;
+		/* Failed, wait a little and try again */
+		mdelay(10);
+	} while (--timeout);
+
+	debug("N%d.PCIe%d: Config read failed, can't communicate with device\n",
+	      node, pcie_port);
+
+	return -1;
+}
+
+/**
+ * @INTERNAL
+ * Initialize a host mode PCIe gen 2 link. This function takes a PCIe
+ * port from reset to a link up state. Software can then begin
+ * configuring the rest of the link.
+ *
+ * @param node	    node
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+static int __cvmx_pcie_rc_initialize_link_gen2(int node, int pcie_port)
+{
+	u64 start_cycle;
+
+	cvmx_pemx_ctl_status_t pem_ctl_status;
+	cvmx_pciercx_cfg032_t pciercx_cfg032;
+	cvmx_pciercx_cfg448_t pciercx_cfg448;
+
+	if (OCTEON_IS_OCTEON3()) {
+		if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_PEMX_ON(pcie_port), cvmx_pemx_on_t,
+					       pemoor, ==, 1, 100000)) {
+			printf("%d:PCIe: Port %d PEM not on, skipping\n", node, pcie_port);
+			return -1;
+		}
+	}
+
+	/* Bring up the link */
+	pem_ctl_status.u64 = CVMX_READ_CSR(CVMX_PEMX_CTL_STATUS(pcie_port));
+	pem_ctl_status.s.lnk_enb = 1;
+	CVMX_WRITE_CSR(CVMX_PEMX_CTL_STATUS(pcie_port), pem_ctl_status.u64);
+
+	/* Wait for the link to come up */
+	start_cycle = get_timer(0);
+	do {
+		if (get_timer(start_cycle) > 1000)
+			return -1;
+
+		udelay(1000);
+		pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port,
+							 CVMX_PCIERCX_CFG032(pcie_port));
+	} while ((pciercx_cfg032.s.dlla == 0) || (pciercx_cfg032.s.lt == 1));
+
+	/* Update the Replay Time Limit.  Empirically, some PCIe devices take a
+	 * little longer to respond than expected under load. As a workaround
+	 * for this we configure the Replay Time Limit to the value expected
+	 * for a 512 byte MPS instead of our actual 256 byte MPS. The numbers
+	 * below are directly from the PCIe spec table 3-4
+	 */
+	pciercx_cfg448.u32 = CVMX_PCIE_CFGX_READ(pcie_port,
+						 CVMX_PCIERCX_CFG448(pcie_port));
+	switch (pciercx_cfg032.s.nlw) {
+	case 1: /* 1 lane */
+		pciercx_cfg448.s.rtl = 1677;
+		break;
+	case 2: /* 2 lanes */
+		pciercx_cfg448.s.rtl = 867;
+		break;
+	case 4: /* 4 lanes */
+		pciercx_cfg448.s.rtl = 462;
+		break;
+	case 8: /* 8 lanes */
+		pciercx_cfg448.s.rtl = 258;
+		break;
+	}
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG448(pcie_port),
+			     pciercx_cfg448.u32);
+
+	return 0;
+}
+
+extern int octeon_pcie_get_qlm_from_fdt(int numa_node, int pcie_port);
+
+static int __cvmx_pcie_check_pcie_port(int node, int pcie_port, enum cvmx_qlm_mode mode)
+{
+	if (mode == CVMX_QLM_MODE_SRIO_1X4 || mode == CVMX_QLM_MODE_SRIO_2X2 ||
+	    mode == CVMX_QLM_MODE_SRIO_4X1) {
+		printf("%d:PCIe: Port %d is SRIO, skipping.\n", node, pcie_port);
+		return -1;
+	} else if (mode == CVMX_QLM_MODE_SGMII) {
+		printf("%d:PCIe: Port %d is SGMII, skipping.\n", node, pcie_port);
+		return -1;
+	} else if (mode == CVMX_QLM_MODE_XAUI || mode == CVMX_QLM_MODE_RXAUI) {
+		printf("%d:PCIe: Port %d is XAUI, skipping.\n", node, pcie_port);
+		return -1;
+	} else if (mode == CVMX_QLM_MODE_ILK) {
+		printf("%d:PCIe: Port %d is ILK, skipping.\n", node, pcie_port);
+		return -1;
+	} else if (mode != CVMX_QLM_MODE_PCIE &&
+		   mode != CVMX_QLM_MODE_PCIE_1X8 &&
+		   mode != CVMX_QLM_MODE_PCIE_1X2 &&
+		   mode != CVMX_QLM_MODE_PCIE_2X1 &&
+		   mode != CVMX_QLM_MODE_PCIE_1X1) {
+		printf("%d:PCIe: Port %d is unknown, skipping.\n",
+		       node, pcie_port);
+		return -1;
+	}
+	return 0;
+}
+
+static int __cvmx_pcie_check_qlm_mode(int node, int pcie_port, int qlm)
+{
+	enum cvmx_qlm_mode mode = CVMX_QLM_MODE_DISABLED;
+
+	if (qlm < 0)
+		return -1;
+
+	/* Make sure this interface is PCIe */
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		if (cvmx_qlm_get_dlm_mode(1, pcie_port) ==
+		    CVMX_QLM_MODE_DISABLED) {
+			printf("PCIe: Port %d not in PCIe mode, skipping\n",
+			       pcie_port);
+			return -1;
+		}
+	} else if (octeon_has_feature(OCTEON_FEATURE_PCIE)) {
+		/*
+		 * Requires reading the MIO_QLMX_CFG register to figure
+		 * out the port type.
+		 */
+		if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+			qlm = 3 - (pcie_port * 2);
+		} else if (OCTEON_IS_MODEL(OCTEON_CN61XX)) {
+			cvmx_mio_qlmx_cfg_t qlm_cfg;
+
+			qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(1));
+			if (qlm_cfg.s.qlm_cfg == 1)
+				qlm = 1;
+			else
+				qlm = pcie_port;
+		} else if (OCTEON_IS_MODEL(OCTEON_CN66XX) ||
+			   OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+			qlm = pcie_port;
+		}
+
+		/*
+		 * PCIe is allowed only in QLM1, 1 PCIe port in x2 or
+		 * 2 PCIe ports in x1
+		 */
+		else if (OCTEON_IS_MODEL(OCTEON_CNF71XX))
+			qlm = 1;
+
+		mode = cvmx_qlm_get_mode(qlm);
+
+		__cvmx_pcie_check_pcie_port(node, pcie_port, mode);
+	}
+	return 0;
+}
+
+static void __cvmx_pcie_sli_config(int node, int pcie_port)
+{
+	cvmx_pemx_bar_ctl_t pemx_bar_ctl;
+	cvmx_pemx_ctl_status_t pemx_ctl_status;
+	cvmx_sli_ctl_portx_t sli_ctl_portx;
+	cvmx_sli_mem_access_ctl_t sli_mem_access_ctl;
+	cvmx_sli_mem_access_subidx_t mem_access_subid;
+	cvmx_pemx_bar1_indexx_t bar1_index;
+	int i;
+
+	/* Store merge control (SLI_MEM_ACCESS_CTL[TIMER,MAX_WORD]) */
+	sli_mem_access_ctl.u64 = CVMX_READ_CSR(CVMX_PEXP_SLI_MEM_ACCESS_CTL);
+	sli_mem_access_ctl.s.max_word = 0; /* Allow 16 words to combine */
+	sli_mem_access_ctl.s.timer = 127;  /* Wait up to 127 cycles for more data */
+	CVMX_WRITE_CSR(CVMX_PEXP_SLI_MEM_ACCESS_CTL, sli_mem_access_ctl.u64);
+
+	/* Setup Mem access SubDIDs */
+	mem_access_subid.u64 = 0;
+	mem_access_subid.s.port = pcie_port;	/* Port the request is sent to. */
+	mem_access_subid.s.nmerge = 0;		/* Allow merging as it works on CN6XXX. */
+	mem_access_subid.s.esr = _CVMX_PCIE_ES; /* Endian-swap for Reads. */
+	mem_access_subid.s.esw = _CVMX_PCIE_ES; /* Endian-swap for Writes. */
+	mem_access_subid.s.wtype = 0;		/* "No snoop" and "Relaxed ordering" are not set */
+	mem_access_subid.s.rtype = 0;		/* "No snoop" and "Relaxed ordering" are not set */
+	/* PCIe Address Bits <63:34>. */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		mem_access_subid.cn68xx.ba = 0;
+	else
+		mem_access_subid.cn63xx.ba = 0;
+
+	/* Setup mem access 12-15 for port 0, 16-19 for port 1, supplying 36
+	 * bits of address space
+	 */
+	for (i = 12 + pcie_port * 4; i < 16 + pcie_port * 4; i++) {
+		CVMX_WRITE_CSR(CVMX_PEXP_SLI_MEM_ACCESS_SUBIDX(i), mem_access_subid.u64);
+		/* Set each SUBID to extend the addressable range */
+		__cvmx_increment_ba(&mem_access_subid);
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX) || OCTEON_IS_MODEL(OCTEON_CN66XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CN68XX) ||
+	    (OCTEON_IS_OCTEON3() && !OCTEON_IS_MODEL(OCTEON_CN70XX))) {
+		/* Disable the peer to peer forwarding register. This must be
+		 * setup by the OS after it enumerates the bus and assigns
+		 * addresses to the PCIe busses
+		 */
+		for (i = 0; i < 4; i++) {
+			CVMX_WRITE_CSR(CVMX_PEMX_P2P_BARX_START(i, pcie_port), -1);
+			CVMX_WRITE_CSR(CVMX_PEMX_P2P_BARX_END(i, pcie_port), -1);
+		}
+	}
+
+	/* Set Octeon's BAR0 to decode 0-16KB. It overlaps with Bar2 */
+	CVMX_WRITE_CSR(CVMX_PEMX_P2N_BAR0_START(pcie_port), 0);
+
+	/* Set Octeon's BAR2 to decode 0-2^41. Bar0 and Bar1 take precedence
+	 * where they overlap. It also overlaps with the device addresses, so
+	 * make sure the peer to peer forwarding is set right
+	 */
+	CVMX_WRITE_CSR(CVMX_PEMX_P2N_BAR2_START(pcie_port), 0);
+
+	/* Setup BAR2 attributes */
+	/* Relaxed Ordering (NPEI_CTL_PORTn[PTLP_RO,CTLP_RO, WAIT_COM]) */
+	/* - PTLP_RO,CTLP_RO should normally be set (except for debug). */
+	/* - WAIT_COM=0 will likely work for all applications. */
+	/* Load completion relaxed ordering (NPEI_CTL_PORTn[WAITL_COM]) */
+	pemx_bar_ctl.u64 = CVMX_READ_CSR(CVMX_PEMX_BAR_CTL(pcie_port));
+	pemx_bar_ctl.s.bar1_siz = 3; /* 256MB BAR1 */
+	pemx_bar_ctl.s.bar2_enb = 1;
+	pemx_bar_ctl.s.bar2_esx = _CVMX_PCIE_ES;
+	pemx_bar_ctl.s.bar2_cax = 0;
+	CVMX_WRITE_CSR(CVMX_PEMX_BAR_CTL(pcie_port), pemx_bar_ctl.u64);
+	sli_ctl_portx.u64 = CVMX_READ_CSR(CVMX_PEXP_SLI_CTL_PORTX(pcie_port));
+	sli_ctl_portx.s.ptlp_ro = 1;
+	sli_ctl_portx.s.ctlp_ro = 1;
+	sli_ctl_portx.s.wait_com = 0;
+	sli_ctl_portx.s.waitl_com = 0;
+	CVMX_WRITE_CSR(CVMX_PEXP_SLI_CTL_PORTX(pcie_port), sli_ctl_portx.u64);
+
+	/* BAR1 follows BAR2 */
+	CVMX_WRITE_CSR(CVMX_PEMX_P2N_BAR1_START(pcie_port),
+		       CVMX_PCIE_BAR1_RC_BASE);
+
+	bar1_index.u64 = 0;
+	bar1_index.s.addr_idx = (CVMX_PCIE_BAR1_PHYS_BASE >> 22);
+	bar1_index.s.ca = 1;		      /* Not Cached */
+	bar1_index.s.end_swp = _CVMX_PCIE_ES; /* Endian Swap mode */
+	bar1_index.s.addr_v = 1;	      /* Valid entry */
+
+	for (i = 0; i < 16; i++) {
+		CVMX_WRITE_CSR(CVMX_PEMX_BAR1_INDEXX(i, pcie_port),
+			       bar1_index.u64);
+		/* 256MB / 16 >> 22 == 4 */
+		bar1_index.s.addr_idx += (((1ull << 28) / 16ull) >> 22);
+	}
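+	/*
+	 * Worked example for the increment above (added for clarity): BAR1
+	 * is 256MB (bar1_siz = 3) split across 16 BAR1_INDEX entries, so
+	 * each entry maps 16MB. Since addr_idx counts in 4MB (1 << 22)
+	 * units, each entry advances addr_idx by 16MB / 4MB = 4.
+	 */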
+
+	/* First part of the config retry changes: have the hardware retry
+	 * config accesses for roughly 200ms (see cfg_retries())
+	 */
+	pemx_ctl_status.u64 = CVMX_READ_CSR(CVMX_PEMX_CTL_STATUS(pcie_port));
+	pemx_ctl_status.cn63xx.cfg_rtry = cfg_retries();
+	CVMX_WRITE_CSR(CVMX_PEMX_CTL_STATUS(pcie_port), pemx_ctl_status.u64);
+
+	/*
+	 * Here is the second part of the config retry changes. Wait for 700ms
+	 * after setting up the link before continuing. PCIe says the devices
+	 * may need up to 900ms to come up. 700ms plus 200ms from above gives
+	 * us a total of 900ms
+	 */
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX))
+		udelay(PCIE_DEVICE_READY_WAIT_DELAY_MICROSECONDS);
+}
+
+/**
+ * Initialize a PCIe gen 2 port for use in host (RC) mode. It does not
+ * enumerate the bus.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+static int __cvmx_pcie_rc_initialize_gen2(int pcie_port)
+{
+	cvmx_ciu_soft_prst_t ciu_soft_prst;
+	cvmx_mio_rst_ctlx_t mio_rst_ctl;
+	cvmx_pemx_bist_status_t pemx_bist_status;
+	cvmx_pemx_bist_status2_t pemx_bist_status2;
+	cvmx_pciercx_cfg032_t pciercx_cfg032;
+	cvmx_pciercx_cfg515_t pciercx_cfg515;
+	u64 ciu_soft_prst_reg, rst_ctl_reg;
+	int ep_mode;
+	int qlm = 0;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
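+	/*
+	 * Clarifying note (added): the caller's pcie_port encodes the node
+	 * number in bits <5:4> and the port within that node in bits <1:0>;
+	 * from here on pcie_port is node-relative.
+	 */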
+
+	if (pcie_port >= CVMX_PCIE_PORTS) {
+		//debug("Invalid PCIe%d port\n", pcie_port);
+		return -1;
+	}
+
+	if (__cvmx_pcie_check_qlm_mode(node, pcie_port, qlm))
+		return -1;
+
+	/* Make sure we aren't trying to setup a target mode interface in host
+	 * mode
+	 */
+	if (OCTEON_IS_OCTEON3()) {
+		ciu_soft_prst_reg = CVMX_RST_SOFT_PRSTX(pcie_port);
+		rst_ctl_reg = CVMX_RST_CTLX(pcie_port);
+	} else {
+		ciu_soft_prst_reg = (pcie_port) ? CVMX_CIU_SOFT_PRST1 : CVMX_CIU_SOFT_PRST;
+		rst_ctl_reg = CVMX_MIO_RST_CTLX(pcie_port);
+	}
+	mio_rst_ctl.u64 = CVMX_READ_CSR(rst_ctl_reg);
+
+	ep_mode = ((OCTEON_IS_MODEL(OCTEON_CN61XX) || OCTEON_IS_MODEL(OCTEON_CNF71XX)) ?
+				 (mio_rst_ctl.s.prtmode != 1) :
+				 (!mio_rst_ctl.s.host_mode));
+
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX) && pcie_port) {
+		cvmx_pemx_cfg_t pemx_cfg;
+
+		pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+		if ((pemx_cfg.s.md & 3) == 2) {
+			printf("PCIe: Port %d in 1x4 mode.\n", pcie_port);
+			return -1;
+		}
+	}
+
+	if (ep_mode) {
+		printf("%d:PCIe: Port %d in endpoint mode.\n", node, pcie_port);
+		return -1;
+	}
+
+	/* CN63XX Pass 1.0 errata G-14395 requires the QLM De-emphasis be
+	 * programmed
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX_PASS1_0)) {
+		if (pcie_port) {
+			cvmx_ciu_qlm1_t ciu_qlm;
+
+			ciu_qlm.u64 = csr_rd(CVMX_CIU_QLM1);
+			ciu_qlm.s.txbypass = 1;
+			ciu_qlm.s.txdeemph = 5;
+			ciu_qlm.s.txmargin = 0x17;
+			csr_wr(CVMX_CIU_QLM1, ciu_qlm.u64);
+		} else {
+			cvmx_ciu_qlm0_t ciu_qlm;
+
+			ciu_qlm.u64 = csr_rd(CVMX_CIU_QLM0);
+			ciu_qlm.s.txbypass = 1;
+			ciu_qlm.s.txdeemph = 5;
+			ciu_qlm.s.txmargin = 0x17;
+			csr_wr(CVMX_CIU_QLM0, ciu_qlm.u64);
+		}
+	}
+
+	/* Bring the PCIe out of reset */
+	ciu_soft_prst.u64 = CVMX_READ_CSR(ciu_soft_prst_reg);
+	/* After a chip reset the PCIe will also be in reset. If it
+	 * isn't, most likely someone is trying to init it again
+	 * without a proper PCIe reset.
+	 */
+	if (ciu_soft_prst.s.soft_prst == 0) {
+		/* Reset the port */
+		ciu_soft_prst.s.soft_prst = 1;
+		CVMX_WRITE_CSR(ciu_soft_prst_reg, ciu_soft_prst.u64);
+
+		/* Read to make sure write happens */
+		ciu_soft_prst.u64 = CVMX_READ_CSR(ciu_soft_prst_reg);
+
+		/* Keep PERST asserted for 2 ms */
+		udelay(2000);
+	}
+
+	/* Deassert PERST */
+	ciu_soft_prst.u64 = CVMX_READ_CSR(ciu_soft_prst_reg);
+	ciu_soft_prst.s.soft_prst = 0;
+	CVMX_WRITE_CSR(ciu_soft_prst_reg, ciu_soft_prst.u64);
+	ciu_soft_prst.u64 = CVMX_READ_CSR(ciu_soft_prst_reg);
+
+	/* Wait 1ms for PCIe reset to complete */
+	udelay(1000);
+
+	/* Set MPLL multiplier as per Errata 20669. */
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		int qlm = __cvmx_pcie_get_qlm(0, pcie_port);
+		enum cvmx_qlm_mode mode;
+		int old_mult;
+		u64 meas_refclock = cvmx_qlm_measure_clock(qlm);
+
+		if (meas_refclock > 99000000 && meas_refclock < 101000000) {
+			old_mult = 35;
+		} else if (meas_refclock > 124000000 &&
+			   meas_refclock < 126000000) {
+			old_mult = 56;
+		} else if (meas_refclock > 156000000 &&
+			   meas_refclock < 156500000) {
+			old_mult = 45;
+		} else {
+			printf("%s: Invalid reference clock for qlm %d\n",
+			       __func__, qlm);
+			return -1;
+		}
+		mode = cvmx_qlm_get_mode(qlm);
+		__cvmx_qlm_set_mult(qlm, 2500, old_mult);
+		/* Adjust mplls for both dlms when configured as pcie 1x4 */
+		if (mode == CVMX_QLM_MODE_PCIE && pcie_port == 0)
+			__cvmx_qlm_set_mult(qlm + 1, 2500, old_mult);
+	}
+
+	/*
+	 * Check and make sure PCIe came out of reset. If it doesn't the board
+	 * probably hasn't wired the clocks up and the interface should be
+	 * skipped
+	 */
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, rst_ctl_reg, cvmx_mio_rst_ctlx_t,
+				       rst_done, ==, 1, 10000)) {
+		printf("%d:PCIe: Port %d stuck in reset, skipping.\n", node, pcie_port);
+		return -1;
+	}
+
+	/* Check BIST status */
+	pemx_bist_status.u64 = CVMX_READ_CSR(CVMX_PEMX_BIST_STATUS(pcie_port));
+	if (pemx_bist_status.u64)
+		printf("%d:PCIe: BIST FAILED for port %d (0x%016llx)\n", node, pcie_port,
+		       CAST64(pemx_bist_status.u64));
+	pemx_bist_status2.u64 = CVMX_READ_CSR(CVMX_PEMX_BIST_STATUS2(pcie_port));
+
+	/*
+	 * Errata PCIE-14766 may cause the lower 6 bits to be randomly set on
+	 * CN63XXp1
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX_PASS1_X))
+		pemx_bist_status2.u64 &= ~0x3full;
+
+	if (pemx_bist_status2.u64) {
+		printf("%d:PCIe: BIST2 FAILED for port %d (0x%016llx)\n",
+		       node, pcie_port, CAST64(pemx_bist_status2.u64));
+	}
+
+	/* Initialize the config space CSRs */
+	__cvmx_pcie_rc_initialize_config_space(node, pcie_port);
+
+	/* Enable gen2 speed selection */
+	pciercx_cfg515.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG515(pcie_port));
+	pciercx_cfg515.s.dsc = 1;
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG515(pcie_port), pciercx_cfg515.u32);
+
+	/* Bring the link up */
+	if (__cvmx_pcie_rc_initialize_link_gen2(node, pcie_port)) {
+		/* Some gen1 devices don't handle the gen 2 training correctly.
+		 * Disable gen2 and try again with only gen1
+		 */
+		cvmx_pciercx_cfg031_t pciercx_cfg031;
+
+		pciercx_cfg031.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG031(pcie_port));
+		pciercx_cfg031.s.mls = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG031(pcie_port), pciercx_cfg031.u32);
+		if (__cvmx_pcie_rc_initialize_link_gen2(node, pcie_port)) {
+			printf("PCIe: Link timeout on port %d, probably the slot is empty\n",
+			       pcie_port);
+			return -1;
+		}
+	}
+
+	__cvmx_pcie_sli_config(node, pcie_port);
+
+	/* Display the link status */
+	pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
+	printf("PCIe: Port %d link active, %d lanes, speed gen%d\n", pcie_port,
+	       pciercx_cfg032.s.nlw, pciercx_cfg032.s.ls);
+
+	pcie_link_initialized[node][pcie_port] = true;
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Initialize a host mode PCIe gen 2 link. This function takes a PCIe
+ * port from reset to a link up state. Software can then begin
+ * configuring the rest of the link.
+ *
+ * @param node	    node
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+static int __cvmx_pcie_rc_initialize_link_gen2_v3(int node, int pcie_port)
+{
+	u8 ltssm_history[LTSSM_HISTORY_SIZE];
+	int ltssm_history_loc;
+	cvmx_pemx_ctl_status_t pem_ctl_status;
+	cvmx_pciercx_cfg006_t pciercx_cfg006;
+	cvmx_pciercx_cfg031_t pciercx_cfg031;
+	cvmx_pciercx_cfg032_t pciercx_cfg032;
+	cvmx_pciercx_cfg068_t pciercx_cfg068;
+	cvmx_pciercx_cfg448_t pciercx_cfg448;
+	cvmx_pciercx_cfg515_t pciercx_cfg515;
+	int max_gen, max_width;
+	/* Note (added): these two were never initialized in this import; the
+	 * 200ms/10ms values below are assumptions based on the original SDK
+	 * sources
+	 */
+	u64 hold_time = 200;	    /* Time the link must stay good (ms) */
+	u64 bounce_allow_time = 10; /* Time bounces are tolerated (ms) */
+	u64 timeout, good_time, current_time;
+	int neg_gen, neg_width, bus, dev_gen, dev_width;
+	unsigned int cap, cap_next;
+	int ltssm_state, desired_gen;
+	int desired_width;
+	int i, need_speed_change, need_lane_change;
+	int do_retry_speed = 0;
+	int link_up = 0, is_loop_done = 0;
+
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_PEMX_ON(pcie_port), cvmx_pemx_on_t, pemoor, ==, 1,
+				       100000)) {
+		printf("N%d:PCIe: Port %d PEM not on, skipping\n", node, pcie_port);
+		return -1;
+	}
+
+	/* Record starting LTSSM state for debug */
+	memset(ltssm_history, -1, sizeof(ltssm_history));
+	ltssm_history[0] = __cvmx_pcie_rc_get_ltssm_state(node, pcie_port);
+	ltssm_history_loc = 0;
+
+	pciercx_cfg031.u32 = CVMX_PCIE_CFGX_READ(pcie_port,
+						 CVMX_PCIERCX_CFG031(pcie_port));
+	/* Max speed of PEM from config (1-3) */
+	max_gen = pciercx_cfg031.s.mls;
+	/* Max lane width of PEM (1-3) */
+	max_width = pciercx_cfg031.s.mlw;
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: Link supports up to %d lanes, speed gen%d\n",
+	       node, pcie_port, max_width, max_gen);
+#endif
+
+	/* Bring up the link */
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: Enabling the link\n", node, pcie_port);
+#endif
+	pem_ctl_status.u64 = CVMX_READ_CSR(CVMX_PEMX_CTL_STATUS(pcie_port));
+	pem_ctl_status.s.lnk_enb = 1;
+	CVMX_WRITE_CSR(CVMX_PEMX_CTL_STATUS(pcie_port), pem_ctl_status.u64);
+
+	/*
+	 * Configure the SLI after enabling the PCIe link. This is required
+	 * for reading the PCIe card capabilities.
+	 */
+	__cvmx_pcie_sli_config(node, pcie_port);
+
+	/*
+	 * After the link is enabled, do not print anything until the link is
+	 * up or an error occurs; otherwise link state captures will be missed
+	 */
+
+retry_speed:
+	/* Clear RC Correctable Error Status Register */
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG068(pcie_port), -1);
+
+	/* Wait for the link to come up and link training to be complete */
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: Waiting for link\n", node, pcie_port);
+#endif
+
+	/* Timeout of 2 secs */
+	timeout = get_timer(0) + 2000;
+
+	/* Records when the link first went good */
+	good_time = 0;
+
+	do {
+		pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
+		/*
+		 * Errata PEM-31375: PEM RSL accesses to PCLK registers can
+		 * time out during a speed change. Check for this temporary
+		 * hardware timeout and retry if it happens
+		 */
+		if (pciercx_cfg032.u32 == 0xffffffff)
+			continue;
+
+		/* Record LTSSM state for debug */
+		ltssm_state = __cvmx_pcie_rc_get_ltssm_state(node, pcie_port);
+
+		if (ltssm_history[ltssm_history_loc] != ltssm_state) {
+			ltssm_history_loc = (ltssm_history_loc + 1) & (LTSSM_HISTORY_SIZE - 1);
+			ltssm_history[ltssm_history_loc] = ltssm_state;
+		}
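+		/*
+		 * Note (added): the history buffer is a ring indexed with
+		 * "& (LTSSM_HISTORY_SIZE - 1)", which assumes that
+		 * LTSSM_HISTORY_SIZE is a power of two.
+		 */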
+
+		/* Check if the link is up */
+		current_time = get_timer(0);
+		link_up = (pciercx_cfg032.s.dlla && !pciercx_cfg032.s.lt);
+
+		if (link_up) {
+			/* Is this the first link up? */
+			if (!good_time) {
+				/* Mark the time when the link transitioned to good */
+				good_time = current_time;
+			} else {
+				/* Check for a link error */
+				pciercx_cfg068.u32 = CVMX_PCIE_CFGX_READ(
+					pcie_port, CVMX_PCIERCX_CFG068(pcie_port));
+				if (pciercx_cfg068.s.res) {
+					/*
+					 * Ignore errors before we've been
+					 * stable for bounce_allow_time
+					 */
+					if (good_time + bounce_allow_time <=
+					    current_time) {
+#ifdef DEBUG_PCIE
+						printf("N%d.PCIe%d: Link errors after link up\n",
+						       node, pcie_port);
+#endif
+						/* Link error, signal a retry */
+						return 1;
+					}
+
+					/*
+					 * Clear RC Correctable Error
+					 * Status Register
+					 */
+					CVMX_PCIE_CFGX_WRITE(pcie_port,
+							     CVMX_PCIERCX_CFG068(pcie_port),
+							     -1);
+#ifdef DEBUG_PCIE
+					printf("N%d.PCIe%d: Ignored error during settling time\n",
+					       node, pcie_port);
+#endif
+				}
+			}
+		} else if (good_time) {
+			if (good_time + bounce_allow_time <= current_time) {
+				/*
+				 * We allow bounces for bounce_allow_time after
+				 * the link is good. Once this time passes any
+				 * bounce requires a retry
+				 */
+#ifdef DEBUG_PCIE
+				printf("N%d.PCIe%d: Link bounce detected\n",
+				       node, pcie_port);
+#endif
+				return 1; /* Link bounce, signal a retry */
+			}
+
+#ifdef DEBUG_PCIE
+			printf("N%d.PCIe%d: Ignored bounce during settling time\n",
+			       node, pcie_port);
+#endif
+		}
+
+		/* Determine if we've hit the timeout */
+		is_loop_done = (current_time >= timeout);
+
+		/*
+		 * Determine if we've had a good link for the required hold
+		 * time
+		 */
+		is_loop_done |= link_up && (good_time + hold_time <=
+					    current_time);
+	} while (!is_loop_done);
+
+	/* Trace the LTSSM state */
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: LTSSM History\n", node, pcie_port);
+#endif
+	for (i = 0; i < LTSSM_HISTORY_SIZE; i++) {
+		ltssm_history_loc = (ltssm_history_loc + 1) & (LTSSM_HISTORY_SIZE - 1);
+#ifdef DEBUG_PCIE
+		if (ltssm_history[ltssm_history_loc] != 0xff)
+			printf("N%d.PCIe%d: %s\n", node, pcie_port,
+			       cvmx_pcie_get_ltssm_string(ltssm_history[ltssm_history_loc]));
+#endif
+	}
+
+	if (!link_up) {
+		ltssm_state = __cvmx_pcie_rc_get_ltssm_state(node, pcie_port);
+#ifdef DEBUG_PCIE
+		printf("N%d.PCIe%d: Link down, Data link layer %s(DLLA=%d), Link training %s(LT=%d), LTSSM %s\n",
+		       node, pcie_port, pciercx_cfg032.s.dlla ? "active" : "down",
+		       pciercx_cfg032.s.dlla, pciercx_cfg032.s.lt ? "active" : "complete",
+		       pciercx_cfg032.s.lt, cvmx_pcie_get_ltssm_string(ltssm_state));
+#endif
+		return 1; /* Link down, signal a retry */
+	}
+
+	/* Report the negotiated link speed and width */
+	neg_gen = pciercx_cfg032.s.ls;	  /* Current speed of PEM (1-3) */
+	neg_width = pciercx_cfg032.s.nlw; /* Current lane width of PEM (1-8) */
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: Link negotiated %d lanes, speed gen%d\n", node, pcie_port, neg_width,
+	       neg_gen);
+#endif
+	/* Determine PCIe bus number the directly attached device uses */
+	pciercx_cfg006.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG006(pcie_port));
+	bus = pciercx_cfg006.s.sbnum;
+
+	/* The SLI has to be initialized so we can read the downstream devices */
+	dev_gen = 1;   /* Device max speed (1-3) */
+	dev_width = 1; /* Device max lane width (1-16) */
+#ifdef DEBUG_PCIE
+	printf("N%d.PCIe%d: Reading Bus %d device max speed and width\n", node, pcie_port, bus);
+#endif
+
+	/*
+	 * Here is the second part of the config retry changes. Wait for 700ms
+	 * after setting up the link before continuing. PCIe says the devices
+	 * may need up to 900ms to come up. 700ms plus 200ms from above gives
+	 * us a total of 900ms
+	 */
+	udelay(PCIE_DEVICE_READY_WAIT_DELAY_MICROSECONDS);
+
+	/* Read PCI capability pointer at offset 0x34 of target */
+	cap = cvmx_pcie_config_read32_retry(node, pcie_port, bus, 0, 0, 0x34);
+
+	/* Check if we were able to read capabilities pointer */
+	if (cap == 0xffffffff)
+		return 1; /* Signal retry needed */
+
+	/* Read device max speed and width */
+	cap_next = cap & 0xff;
+	while (cap_next) {
+		cap = cvmx_pcie_config_read32_retry(node, pcie_port, bus,
+						    0, 0, cap_next);
+		if (cap == 0xffffffff)
+			return 1; /* Signal retry needed */
+
+		/* Is this a PCIe capability (0x10)? */
+		if ((cap & 0xff) == 0x10) {
+#ifdef DEBUG_PCIE
+			printf("N%d.PCIe%d: Found PCIe capability@offset 0x%x\n",
+			       node, pcie_port, cap_next);
+#endif
+			/* Offset 0xc contains the max link info */
+			cap = cvmx_pcie_config_read32_retry(node, pcie_port, bus, 0, 0,
+							    cap_next + 0xc);
+			if (cap == 0xffffffff)
+				return 1;	       /* Signal retry needed */
+			dev_gen = cap & 0xf;	       /* Max speed of PEM from config (1-3) */
+			dev_width = (cap >> 4) & 0x3f; /* Max lane width of PEM (1-16) */
+#ifdef DEBUG_PCIE
+			printf("N%d.PCIe%d: Device supports %d lanes, speed gen%d\n", node,
+			       pcie_port, dev_width, dev_gen);
+#endif
+			break;
+		}
+		/* Move to next capability */
+		cap_next = (cap >> 8) & 0xff;
+	}
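+	/*
+	 * Layout assumed by the walk above (standard PCI capability list,
+	 * added for clarity): config offset 0x34 holds the first capability
+	 * pointer; each capability starts with an 8-bit ID (0x10 = PCI
+	 * Express) followed by an 8-bit next pointer, and a next pointer of
+	 * zero ends the list. The Link Capabilities register at offset 0xc
+	 * of the PCIe capability carries the max speed in bits <3:0> and the
+	 * max width in bits <9:4>.
+	 */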
+
+	/*
+	 * Desired link speed and width is either limited by the device or our
+	 * PEM configuration. Choose the most restrictive limit
+	 */
+	desired_gen = (dev_gen < max_gen) ? dev_gen : max_gen;
+	desired_width = (dev_width < max_width) ? dev_width : max_width;
+
+	/*
+	 * We need a change if we don't match the desired speed or width.
+	 * Note that we allow better than expected in case the device lied
+	 * about its capabilities
+	 */
+	need_speed_change = (neg_gen < desired_gen);
+	need_lane_change = (neg_width < desired_width);
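+	/*
+	 * Example (illustrative): a gen3 x4 device behind a gen2 x8 PEM gives
+	 * desired_gen = 2 and desired_width = 4; a link that trained at
+	 * gen1 x4 would then request a speed change but no lane change.
+	 */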
+
+	if (need_lane_change) {
+		/* We didn't get the maximum number of lanes */
+#ifdef DEBUG_PCIE
+		printf("N%d.PCIe%d: Link width (%d) less that supported (%d)\n",
+		       node, pcie_port, neg_width, desired_width);
+#endif
+		return 2; /* Link wrong width, signal a retry */
+	} else if (need_speed_change) {
+		if (do_retry_speed) {
+#ifdef DEBUG_PCIE
+			printf("N%d.PCIe%d: Link speed (gen%d) less that supported (gen%d)\n", node,
+			       pcie_port, neg_gen, desired_gen);
+#endif
+			return 1; /* Link at width, but speed low. Request a retry */
+		}
+
+		/* We didn't get the maximum speed. Request a speed change */
+#ifdef DEBUG_PCIE
+		printf("N%d.PCIe%d: Link speed (gen%d) less that supported (gen%d), requesting a speed change\n",
+		       node, pcie_port, neg_gen, desired_gen);
+#endif
+		pciercx_cfg515.u32 =
+			CVMX_PCIE_CFGX_READ(pcie_port,
+					    CVMX_PCIERCX_CFG515(pcie_port));
+		pciercx_cfg515.s.dsc = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port,
+				     CVMX_PCIERCX_CFG515(pcie_port),
+				     pciercx_cfg515.u32);
+		mdelay(100);
+		do_retry_speed = true;
+		goto retry_speed;
+	} else {
+#ifdef DEBUG_PCIE
+		printf("N%d.PCIe%d: Link at best speed and width\n",
+		       node, pcie_port);
+#endif
+		/* For gen3 links check if we are getting errors over the link */
+		if (neg_gen == 3) {
+			/* Read RC Correctable Error Status Register */
+			pciercx_cfg068.u32 =
+				CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG068(pcie_port));
+			if (pciercx_cfg068.s.res) {
+#ifdef DEBUG_PCIE
+				printf("N%d.PCIe%d: Link reporting error status\n", node,
+				       pcie_port);
+#endif
+				return 1; /* Getting receiver errors, request a retry */
+			}
+		}
+		return 0; /* Link at correct speed and width */
+	}
+
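+	/*
+	 * Note (added): every branch above returns, so the Replay Time Limit
+	 * update below appears to be unreachable in this import; it is kept
+	 * as-is from the original source.
+	 */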
+	/* Update the Replay Time Limit.  Empirically, some PCIe devices take a
+	 * little longer to respond than expected under load. As a workaround
+	 * for this we configure the Replay Time Limit to the value expected
+	 * for a 512 byte MPS instead of our actual 256 byte MPS. The numbers
+	 * below are directly from the PCIe spec table 3-4
+	 */
+	pciercx_cfg448.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG448(pcie_port));
+	switch (pciercx_cfg032.s.nlw) {
+	case 1: /* 1 lane */
+		pciercx_cfg448.s.rtl = 1677;
+		break;
+	case 2: /* 2 lanes */
+		pciercx_cfg448.s.rtl = 867;
+		break;
+	case 4: /* 4 lanes */
+		pciercx_cfg448.s.rtl = 462;
+		break;
+	case 8: /* 8 lanes */
+		pciercx_cfg448.s.rtl = 258;
+		break;
+	}
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG448(pcie_port), pciercx_cfg448.u32);
+
+	return 0;
+}
+
+static int __cvmx_pcie_rc_initialize_gen2_v3(int pcie_port)
+{
+	cvmx_rst_ctlx_t rst_ctl;
+	cvmx_rst_soft_prstx_t rst_soft_prst;
+	cvmx_pciercx_cfg031_t pciercx_cfg031;
+	cvmx_pciercx_cfg032_t pciercx_cfg032;
+	cvmx_pciercx_cfg038_t pciercx_cfg038;
+	cvmx_pciercx_cfg040_t pciercx_cfg040;
+	cvmx_pciercx_cfg515_t pciercx_cfg515;
+	cvmx_pciercx_cfg548_t pciercx_cfg548;
+	cvmx_pemx_bist_status_t pemx_bist_status;
+	u64 rst_soft_prst_reg;
+	int qlm;
+	int node = (pcie_port >> 4) & 0x3;
+	bool requires_pem_reset = false;
+	enum cvmx_qlm_mode mode = CVMX_QLM_MODE_DISABLED;
+	int retry_count = 0;
+	int result = 0;
+
+	pcie_port &= 0x3;
+
+	/* Assume link down until proven up */
+	pcie_link_initialized[node][pcie_port] = false;
+
+	/* Attempt link initialization up to MAX_RETRIES + 1 times */
+	while (retry_count <= MAX_RETRIES) {
+#ifdef DEBUG_PCIE
+		if (retry_count)
+			printf("N%d:PCIE%d: Starting link retry %d\n", node, pcie_port,
+			       retry_count);
+#endif
+		if (pcie_port >= CVMX_PCIE_PORTS) {
+#ifdef DEBUG_PCIE
+			printf("Invalid PCIe%d port\n", pcie_port);
+#endif
+			return -1;
+		}
+
+		qlm = __cvmx_pcie_get_qlm(node, pcie_port);
+
+		if (qlm < 0)
+			return -1;
+
+		mode = cvmx_qlm_get_mode(qlm);
+		if (__cvmx_pcie_check_pcie_port(node, pcie_port, mode))
+			return -1;
+
+		rst_soft_prst_reg = CVMX_RST_SOFT_PRSTX(pcie_port);
+		rst_ctl.u64 = CVMX_READ_CSR(CVMX_RST_CTLX(pcie_port));
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+			CVMX_WRITE_CSR(CVMX_DTX_PEMX_SELX(0, pcie_port), 0x17);
+			CVMX_WRITE_CSR(CVMX_DTX_PEMX_SELX(1, pcie_port), 0);
+		}
+
+		if (!rst_ctl.s.host_mode) {
+			printf("N%d:PCIE: Port %d in endpoint mode.\n",
+			       node, pcie_port);
+			return -1;
+		}
+
+		/* Bring the PCIe out of reset */
+		rst_soft_prst.u64 = CVMX_READ_CSR(rst_soft_prst_reg);
+
+		/*
+		 * After a chip reset the PCIe will also be in reset. If it
+		 * isn't, most likely someone is trying to init it again
+		 * without a proper PCIe reset.
+		 */
+		if (rst_soft_prst.s.soft_prst == 0) {
+			/* Disable the MAC controller before resetting */
+			__cvmx_pcie_config_pemon(node, pcie_port, 0);
+
+			/* Reset the port */
+			rst_soft_prst.s.soft_prst = 1;
+			CVMX_WRITE_CSR(rst_soft_prst_reg, rst_soft_prst.u64);
+
+			/* Read to make sure write happens */
+			rst_soft_prst.u64 = CVMX_READ_CSR(rst_soft_prst_reg);
+
+			/* Keep PERST asserted for 2 ms */
+			udelay(2000);
+
+			/* Reset GSER_PHY to put in a clean state */
+			__cvmx_pcie_gser_phy_config(node, pcie_port, qlm);
+			requires_pem_reset = true;
+
+			/* Enable MAC controller before taking pcie out of reset */
+			__cvmx_pcie_config_pemon(node, pcie_port, 1);
+		}
+
+		/* Deassert PERST */
+		rst_soft_prst.u64 = CVMX_READ_CSR(rst_soft_prst_reg);
+		rst_soft_prst.s.soft_prst = 0;
+		CVMX_WRITE_CSR(rst_soft_prst_reg, rst_soft_prst.u64);
+		rst_soft_prst.u64 = CVMX_READ_CSR(rst_soft_prst_reg);
+
+		/* Check if PLLs are locked after GSER_PHY reset. */
+		if (requires_pem_reset) {
+			cvmx_pemx_cfg_t pemx_cfg;
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(pcie_port));
+			if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm), cvmx_gserx_qlm_stat_t,
+						  rst_rdy, ==, 1, 10000)) {
+				printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", qlm);
+				return -1;
+			}
+			if (pemx_cfg.cn78xx.lanes8 &&
+			    (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm + 1),
+						   cvmx_gserx_qlm_stat_t, rst_rdy, ==, 1, 10000))) {
+				printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n",
+				       qlm + 1);
+				return -1;
+			}
+		}
+
+		/* Wait 1ms for PCIe reset to complete */
+		udelay(1000);
+
+		/*
+		 * Check and make sure PCIe came out of reset. If it doesn't
+		 * the board probably hasn't wired the clocks up and the
+		 * interface should be skipped
+		 */
+		if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_RST_CTLX(pcie_port),
+					       cvmx_rst_ctlx_t,
+					       rst_done, ==, 1, 10000)) {
+			printf("N%d:PCIE: Port %d stuck in reset, skipping.\n", node, pcie_port);
+			return -1;
+		}
+
+		/* Check BIST status */
+		pemx_bist_status.u64 = CVMX_READ_CSR(CVMX_PEMX_BIST_STATUS(pcie_port));
+		if (pemx_bist_status.u64)
+			printf("N%d:PCIE: BIST FAILED for port %d (0x%016llx)\n", node, pcie_port,
+			       CAST64(pemx_bist_status.u64));
+
+		/* Initialize the config space CSRs */
+#ifdef DEBUG_PCIE
+		printf("N%d:PCIE%d Initialize Config Space\n", node, pcie_port);
+#endif
+		__cvmx_pcie_rc_initialize_config_space(node, pcie_port);
+
+		/* Enable gen2 speed selection */
+		pciercx_cfg515.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG515(pcie_port));
+		pciercx_cfg515.s.dsc = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG515(pcie_port), pciercx_cfg515.u32);
+
+		/* Do the link retries on the PCIe interface */
+		if (retry_count == MAX_RETRIES) {
+			/*
+			 * This has to be done AFTER the QLM/PHY interface
+			 * initialized
+			 */
+			pciercx_cfg031.u32 =
+				CVMX_PCIE_CFGX_READ(pcie_port,
+						    CVMX_PCIERCX_CFG031(pcie_port));
+			/*
+			 * Drop speed to gen2 if link bouncing
+			 * Result = -1  PEM in reset
+			 * Result = 0:  link speed and width ok no retry needed
+			 * Result = 1:  link errors or speed change needed
+			 * Result = 2:  lane width error
+			 */
+			if (pciercx_cfg031.s.mls == 3 && result != 2) {
+#ifdef DEBUG_PCIE
+				printf("N%d:PCIE%d: Dropping speed to gen2\n", node, pcie_port);
+#endif
+				pciercx_cfg031.s.mls = 2;
+				CVMX_PCIE_CFGX_WRITE(pcie_port,
+						     CVMX_PCIERCX_CFG031(pcie_port),
+						     pciercx_cfg031.u32);
+
+				/* Set the target link speed */
+				pciercx_cfg040.u32 = CVMX_PCIE_CFGX_READ(
+					pcie_port, CVMX_PCIERCX_CFG040(pcie_port));
+				pciercx_cfg040.s.tls = 2;
+				CVMX_PCIE_CFGX_WRITE(pcie_port,
+						     CVMX_PCIERCX_CFG040(pcie_port),
+						     pciercx_cfg040.u32);
+			}
+		}
+
+		/* Bring the link up */
+		result = __cvmx_pcie_rc_initialize_link_gen2_v3(node, pcie_port);
+		if (result == 0) {
+#ifdef DEBUG_PCIE
+			printf("N%d:PCIE%d: Link does not need a retry\n", node, pcie_port);
+#endif
+			break;
+		} else if (result > 0) {
+			if (retry_count >= MAX_RETRIES) {
+				int link_up;
+#ifdef DEBUG_PCIE
+				printf("N%d:PCIE%d: Link requested a retry, but hit the max retries\n",
+				       node, pcie_port);
+#endif
+				/* If the link is down, report failure */
+				pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(
+					pcie_port,
+					CVMX_PCIERCX_CFG032(pcie_port));
+				link_up = (pciercx_cfg032.s.dlla && !pciercx_cfg032.s.lt);
+				if (!link_up)
+					result = -1;
+			}
+#ifdef DEBUG_PCIE
+			else
+				printf("N%d.PCIE%d: Link requested a retry\n", node, pcie_port);
+#endif
+		}
+		if (result < 0) {
+			int ltssm_state = __cvmx_pcie_rc_get_ltssm_state(node, pcie_port);
+
+			printf("N%d:PCIE%d: Link timeout, probably the slot is empty (LTSSM %s)\n",
+			       node, pcie_port, cvmx_pcie_get_ltssm_string(ltssm_state));
+			return -1;
+		}
+		retry_count++;
+	}
+
+	pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
+	/*
+	 * Errata PEM-28816: a link retrain initiated at GEN1 can cause the PCIe
+	 * link to hang. For Gen1 links we must disable equalization
+	 */
+	if (pciercx_cfg032.s.ls == 1) {
+#ifdef DEBUG_PCIE
+		printf("N%d:PCIE%d: Disabling equalization for GEN1 Link\n", node, pcie_port);
+#endif
+		pciercx_cfg548.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG548(pcie_port));
+		pciercx_cfg548.s.ed = 1;
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG548(pcie_port), pciercx_cfg548.u32);
+	}
+
+	/* Errata PCIE-29440: Atomic operations to work properly */
+	pciercx_cfg038.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG038(pcie_port));
+	pciercx_cfg038.s.atom_op_eb = 0;
+	pciercx_cfg038.s.atom_op = 1;
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG038(pcie_port), pciercx_cfg038.u32);
+
+	/* Errata PCIE-29566 PEM Link Hangs after going into L1 */
+	pciercx_cfg548.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG548(pcie_port));
+	pciercx_cfg548.s.grizdnc = 0;
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIERCX_CFG548(pcie_port), pciercx_cfg548.u32);
+
+	if (result < 0)
+		return result;
+
+	/* Display the link status */
+	pciercx_cfg032.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIERCX_CFG032(pcie_port));
+	printf("N%d:PCIe: Port %d link active, %d lanes, speed gen%d\n", node, pcie_port,
+	       pciercx_cfg032.s.nlw, pciercx_cfg032.s.ls);
+
+	pcie_link_initialized[node][pcie_port] = true;
+	return 0;
+}
+
+/**
+ * Initialize a PCIe port for use in host (RC) mode. It does not enumerate the bus.
+ *
+ * @param pcie_port PCIe port to initialize for a node
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_initialize(int pcie_port)
+{
+	int result;
+
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX))
+		result = __cvmx_pcie_rc_initialize_gen2(pcie_port);
+	else
+		result = __cvmx_pcie_rc_initialize_gen2_v3(pcie_port);
+
+	if (result == 0)
+		cvmx_error_enable_group(CVMX_ERROR_GROUP_PCI, pcie_port);
+	return result;
+}
+
+/**
+ * Shutdown a PCIe port and put it in reset
+ *
+ * @param pcie_port PCIe port to shutdown for a node
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_shutdown(int pcie_port)
+{
+	u64 ciu_soft_prst_reg;
+	cvmx_ciu_soft_prst_t ciu_soft_prst;
+	int node;
+
+	/* Shutdown only if PEM is in RC mode */
+	if (!cvmx_pcie_is_host_mode(pcie_port))
+		return -1;
+
+	node = (pcie_port >> 4) & 0x3;
+	pcie_port &= 0x3;
+	cvmx_error_disable_group(CVMX_ERROR_GROUP_PCI, pcie_port);
+	/* Wait for all pending operations to complete */
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_PEMX_CPL_LUT_VALID(pcie_port),
+				       cvmx_pemx_cpl_lut_valid_t, tag, ==,
+				       0, 2000))
+		debug("PCIe: Port %d shutdown timeout\n", pcie_port);
+
+	if (OCTEON_IS_OCTEON3()) {
+		ciu_soft_prst_reg = CVMX_RST_SOFT_PRSTX(pcie_port);
+	} else {
+		ciu_soft_prst_reg = (pcie_port) ? CVMX_CIU_SOFT_PRST1 :
+			CVMX_CIU_SOFT_PRST;
+	}
+
+	/* Force reset */
+	ciu_soft_prst.u64 = CVMX_READ_CSR(ciu_soft_prst_reg);
+	ciu_soft_prst.s.soft_prst = 1;
+	CVMX_WRITE_CSR(ciu_soft_prst_reg, ciu_soft_prst.u64);
+
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Build a PCIe config space request address for a device
+ *
+ * @param node	    node
+ * @param port	    PCIe port (relative to the node) to access
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return 64bit Octeon IO address
+ */
+static uint64_t __cvmx_pcie_build_config_addr(int node, int port, int bus, int dev, int fn, int reg)
+{
+	cvmx_pcie_address_t pcie_addr;
+	cvmx_pciercx_cfg006_t pciercx_cfg006;
+
+	pciercx_cfg006.u32 = cvmx_pcie_cfgx_read_node(node, port,
+						      CVMX_PCIERCX_CFG006(port));
+	if (bus <= pciercx_cfg006.s.pbnum && dev != 0)
+		return 0;
+
+	pcie_addr.u64 = 0;
+	pcie_addr.config.upper = 2;
+	pcie_addr.config.io = 1;
+	pcie_addr.config.did = 3;
+	pcie_addr.config.subdid = 1;
+	pcie_addr.config.node = node;
+	pcie_addr.config.es = _CVMX_PCIE_ES;
+	pcie_addr.config.port = port;
+	/* Use config type 0 for the bus directly behind the RC, type 1 beyond it */
+	if (pciercx_cfg006.s.pbnum == 0)
+		pcie_addr.config.ty = (bus > pciercx_cfg006.s.pbnum + 1);
+	else
+		pcie_addr.config.ty = (bus > pciercx_cfg006.s.pbnum);
+	pcie_addr.config.bus = bus;
+	pcie_addr.config.dev = dev;
+	pcie_addr.config.func = fn;
+	pcie_addr.config.reg = reg;
+	return pcie_addr.u64;
+}
+
+/**
+ * Read 8bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+uint8_t cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg)
+{
+	u64 address;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, fn, reg);
+	if (address)
+		return cvmx_read64_uint8(address);
+	else
+		return 0xff;
+}
+
+/**
+ * Read 16bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+uint16_t cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg)
+{
+	u64 address;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, fn, reg);
+	if (address)
+		return le16_to_cpu(cvmx_read64_uint16(address));
+	else
+		return 0xffff;
+}
+
+static uint32_t __cvmx_pcie_config_read32(int node, int pcie_port, int bus, int dev, int func,
+					  int reg, int lst)
+{
+	u64 address;
+
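+	/*
+	 * Clarifying comment (added): "lst" mirrors pcie_link_initialized;
+	 * when set, the read is only issued if the link actually came up,
+	 * otherwise all-ones is returned.
+	 */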
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, func, reg);
+	if (lst) {
+		if (address && pcie_link_initialized[node][pcie_port])
+			return le32_to_cpu(cvmx_read64_uint32(address));
+		else
+			return 0xffffffff;
+	} else if (address) {
+		return le32_to_cpu(cvmx_read64_uint32(address));
+	} else {
+		return 0xffffffff;
+	}
+}
+
+/**
+ * Read 32bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+uint32_t cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg)
+{
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	return __cvmx_pcie_config_read32(node, pcie_port, bus, dev, fn, reg,
+					 pcie_link_initialized[node][pcie_port]);
+}
+
+/**
+ * Write 8bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, uint8_t val)
+{
+	u64 address;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, fn, reg);
+	if (address)
+		cvmx_write64_uint8(address, val);
+}
+
+/**
+ * Write 16bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, uint16_t val)
+{
+	u64 address;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, fn, reg);
+	if (address)
+		cvmx_write64_uint16(address, cpu_to_le16(val));
+}
+
+/**
+ * Write 32bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, uint32_t val)
+{
+	u64 address;
+	int node = (pcie_port >> 4) & 0x3;
+
+	pcie_port &= 0x3;
+	address = __cvmx_pcie_build_config_addr(node, pcie_port, bus, dev, fn, reg);
+	if (address)
+		cvmx_write64_uint32(address, cpu_to_le32(val));
+}
+
+/**
+ * Read a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to read from
+ * @param cfg_offset Address to read
+ *
+ * @return Value read
+ */
+uint32_t cvmx_pcie_cfgx_read(int pcie_port, uint32_t cfg_offset)
+{
+	return cvmx_pcie_cfgx_read_node(0, pcie_port, cfg_offset);
+}
+
+uint32_t cvmx_pcie_cfgx_read_node(int node, int pcie_port, uint32_t cfg_offset)
+{
+	cvmx_pemx_cfg_rd_t pemx_cfg_rd;
+
+	pemx_cfg_rd.u64 = 0;
+	pemx_cfg_rd.s.addr = cfg_offset;
+	CVMX_WRITE_CSR(CVMX_PEMX_CFG_RD(pcie_port), pemx_cfg_rd.u64);
+	pemx_cfg_rd.u64 = CVMX_READ_CSR(CVMX_PEMX_CFG_RD(pcie_port));
+
+	return pemx_cfg_rd.s.data;
+}
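+
+/*
+ * Usage sketch (illustrative, not part of the original code): read the link
+ * status register of port 0 on the local node:
+ *
+ *	u32 cfg032 = cvmx_pcie_cfgx_read(0, CVMX_PCIERCX_CFG032(0));
+ */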
+
+/**
+ * Write a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to write to
+ * @param cfg_offset Address to write
+ * @param val        Value to write
+ */
+void cvmx_pcie_cfgx_write(int pcie_port, uint32_t cfg_offset, uint32_t val)
+{
+	cvmx_pcie_cfgx_write_node(0, pcie_port, cfg_offset, val);
+}
+
+void cvmx_pcie_cfgx_write_node(int node, int pcie_port, uint32_t cfg_offset, uint32_t val)
+{
+	cvmx_pemx_cfg_wr_t pemx_cfg_wr;
+
+	pemx_cfg_wr.u64 = 0;
+	pemx_cfg_wr.s.addr = cfg_offset;
+	pemx_cfg_wr.s.data = val;
+	CVMX_WRITE_CSR(CVMX_PEMX_CFG_WR(pcie_port), pemx_cfg_wr.u64);
+}
+
+extern int cvmx_pcie_is_host_mode(int pcie_port);
+
+/**
+ * Initialize a PCIe port for use in target (EP) mode.
+ *
+ * @param pcie_port PCIe port to initialize for a node
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_ep_initialize(int pcie_port)
+{
+	int node = (pcie_port >> 4) & 0x3;
+
+	if (cvmx_pcie_is_host_mode(pcie_port))
+		return -1;
+
+	pcie_port &= 0x3;
+
+	/* CN63XX Pass 1.0 errata G-14395 requires the QLM De-emphasis be
+	 * programmed
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX_PASS1_0)) {
+		if (pcie_port) {
+			cvmx_ciu_qlm1_t ciu_qlm;
+
+			ciu_qlm.u64 = csr_rd(CVMX_CIU_QLM1);
+			ciu_qlm.s.txbypass = 1;
+			ciu_qlm.s.txdeemph = 5;
+			ciu_qlm.s.txmargin = 0x17;
+			csr_wr(CVMX_CIU_QLM1, ciu_qlm.u64);
+		} else {
+			cvmx_ciu_qlm0_t ciu_qlm;
+
+			ciu_qlm.u64 = csr_rd(CVMX_CIU_QLM0);
+			ciu_qlm.s.txbypass = 1;
+			ciu_qlm.s.txdeemph = 5;
+			ciu_qlm.s.txmargin = 0x17;
+			csr_wr(CVMX_CIU_QLM0, ciu_qlm.u64);
+		}
+	}
+
+	/* Enable bus master and memory */
+	CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIEEPX_CFG001(pcie_port), 0x6);
+
+	/* Max Payload Size (PCIE*_CFG030[MPS]) */
+	/* Max Read Request Size (PCIE*_CFG030[MRRS]) */
+	/* Relaxed-order, no-snoop enables (PCIE*_CFG030[RO_EN,NS_EN] */
+	/* Error Message Enables (PCIE*_CFG030[CE_EN,NFE_EN,FE_EN,UR_EN]) */
+	{
+		cvmx_pcieepx_cfg030_t pcieepx_cfg030;
+
+		pcieepx_cfg030.u32 = CVMX_PCIE_CFGX_READ(pcie_port, CVMX_PCIEEPX_CFG030(pcie_port));
+		pcieepx_cfg030.s.mps = MPS_CN6XXX;
+		pcieepx_cfg030.s.mrrs = MRRS_CN6XXX;
+		pcieepx_cfg030.s.ro_en = 1;  /* Enable relaxed ordering. */
+		pcieepx_cfg030.s.ns_en = 1;  /* Enable no snoop. */
+		pcieepx_cfg030.s.ce_en = 1;  /* Correctable error reporting enable. */
+		pcieepx_cfg030.s.nfe_en = 1; /* Non-fatal error reporting enable. */
+		pcieepx_cfg030.s.fe_en = 1;  /* Fatal error reporting enable. */
+		pcieepx_cfg030.s.ur_en = 1;  /* Unsupported request reporting enable. */
+		CVMX_PCIE_CFGX_WRITE(pcie_port, CVMX_PCIEEPX_CFG030(pcie_port), pcieepx_cfg030.u32);
+	}
+
+	/* Max Payload Size (DPI_SLI_PRTX_CFG[MPS]) must match
+	 * PCIE*_CFG030[MPS]
+	 */
+	/* Max Read Request Size (DPI_SLI_PRTX_CFG[MRRS]) must not
+	 * exceed PCIE*_CFG030[MRRS]
+	 */
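+	/*
+	 * Note on the encoding (standard PCIe, added for clarity): MPS/MRRS
+	 * field values 0, 1, 2, ... select 128, 256, 512, ... bytes, so the
+	 * MPS_CN6XXX/MRRS_CN6XXX constants defined elsewhere in this series
+	 * pick one of these sizes.
+	 */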
+	cvmx_dpi_sli_prtx_cfg_t prt_cfg;
+	cvmx_sli_s2m_portx_ctl_t sli_s2m_portx_ctl;
+
+	prt_cfg.u64 = CVMX_READ_CSR(CVMX_DPI_SLI_PRTX_CFG(pcie_port));
+	prt_cfg.s.mps = MPS_CN6XXX;
+	prt_cfg.s.mrrs = MRRS_CN6XXX;
+	/* Max outstanding load request. */
+	prt_cfg.s.molr = 32;
+	CVMX_WRITE_CSR(CVMX_DPI_SLI_PRTX_CFG(pcie_port), prt_cfg.u64);
+
+	sli_s2m_portx_ctl.u64 = CVMX_READ_CSR(CVMX_PEXP_SLI_S2M_PORTX_CTL(pcie_port));
+	if (!(OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX) ||
+	      OCTEON_IS_MODEL(OCTEON_CNF75XX)))
+		sli_s2m_portx_ctl.cn61xx.mrrs = MRRS_CN6XXX;
+	CVMX_WRITE_CSR(CVMX_PEXP_SLI_S2M_PORTX_CTL(pcie_port), sli_s2m_portx_ctl.u64);
+
+	/* Setup Mem access SubDID 12 to access Host memory */
+	cvmx_sli_mem_access_subidx_t mem_access_subid;
+
+	mem_access_subid.u64 = 0;
+	mem_access_subid.s.port = pcie_port; /* Port the request is sent to. */
+	mem_access_subid.s.nmerge = 0;	     /* Merging is allowed in this window. */
+	mem_access_subid.s.esr = 0;	     /* Endian-swap for Reads. */
+	mem_access_subid.s.esw = 0;	     /* Endian-swap for Writes. */
+	mem_access_subid.s.wtype = 0;	     /* "No snoop" and "Relaxed ordering" are not set */
+	mem_access_subid.s.rtype = 0;	     /* "No snoop" and "Relaxed ordering" are not set */
+	/* PCIe Address Bits <63:34>. */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		mem_access_subid.cn68xx.ba = 0;
+	else
+		mem_access_subid.cn63xx.ba = 0;
+	CVMX_WRITE_CSR(CVMX_PEXP_SLI_MEM_ACCESS_SUBIDX(12 + pcie_port * 4), mem_access_subid.u64);
+
+	return 0;
+}
+
+/**
+ * Wait for posted PCIe read/writes to reach the other side of
+ * the internal PCIe switch. This will ensure that core
+ * read/writes are posted before anything after this function
+ * is called. This may be necessary when writing to memory that
+ * will later be read using the DMA/PKT engines.
+ *
+ * @param pcie_port PCIe port to wait for
+ */
+void cvmx_pcie_wait_for_pending(int pcie_port)
+{
+	cvmx_sli_data_out_cnt_t sli_data_out_cnt;
+	int a;
+	int b;
+	int c;
+
+	sli_data_out_cnt.u64 = csr_rd(CVMX_PEXP_SLI_DATA_OUT_CNT);
+	if (pcie_port) {
+		if (!sli_data_out_cnt.s.p1_fcnt)
+			return;
+		a = sli_data_out_cnt.s.p1_ucnt;
+		b = (a + sli_data_out_cnt.s.p1_fcnt - 1) & 0xffff;
+	} else {
+		if (!sli_data_out_cnt.s.p0_fcnt)
+			return;
+		a = sli_data_out_cnt.s.p0_ucnt;
+		b = (a + sli_data_out_cnt.s.p0_fcnt - 1) & 0xffff;
+	}
+
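+	/*
+	 * Clarifying comment (added, as I read the hardware counters): [a, b]
+	 * is the 16-bit wrap-around window of FIFO positions still in flight;
+	 * the loop below polls until the current position c has left that
+	 * window, i.e. all previously posted data has drained.
+	 */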
+	while (1) {
+		sli_data_out_cnt.u64 = csr_rd(CVMX_PEXP_SLI_DATA_OUT_CNT);
+		c = (pcie_port) ? sli_data_out_cnt.s.p1_ucnt :
+			sli_data_out_cnt.s.p0_ucnt;
+		if (a <= b) {
+			if (c < a || c > b)
+				return;
+		} else {
+			if (c > b && c < a)
+				return;
+		}
+	}
+}
+
+/**
+ * Return whether a PCIe port is in host or target mode.
+ *
+ * @param pcie_port PCIe port number (PEM number)
+ *
+ * @return 0 if PCIe port is in target mode, !0 if in host mode.
+ */
+int cvmx_pcie_is_host_mode(int pcie_port)
+{
+	int node = (pcie_port >> 4) & 0x3;
+	cvmx_mio_rst_ctlx_t mio_rst_ctl;
+
+	pcie_port &= 0x3;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		cvmx_pemx_strap_t strap;
+
+		strap.u64 = CVMX_READ_CSR(CVMX_PEMX_STRAP(pcie_port));
+		return (strap.cn78xx.pimode == 3);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		cvmx_rst_ctlx_t rst_ctl;
+
+		rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(pcie_port));
+		return !!rst_ctl.s.host_mode;
+	}
+
+	mio_rst_ctl.u64 = csr_rd(CVMX_MIO_RST_CTLX(pcie_port));
+	if (OCTEON_IS_MODEL(OCTEON_CN61XX) || OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		return mio_rst_ctl.s.prtmode != 0;
+	else
+		return !!mio_rst_ctl.s.host_mode;
+}
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 42/50] mips: octeon: Add cvmx-qlm.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (40 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 41/50] mips: octeon: Add cvmx-pcie.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 43/50] mips: octeon: Add octeon_fdt.c Stefan Roese
                   ` (10 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import cvmx-qlm.c from 2013 U-Boot. It will be used by the later
added drivers to support PCIe and networking on the MIPS Octeon II / III
platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/cvmx-qlm.c | 2350 ++++++++++++++++++++++++++++++
 1 file changed, 2350 insertions(+)
 create mode 100644 arch/mips/mach-octeon/cvmx-qlm.c

diff --git a/arch/mips/mach-octeon/cvmx-qlm.c b/arch/mips/mach-octeon/cvmx-qlm.c
new file mode 100644
index 0000000000..970e34aaff
--- /dev/null
+++ b/arch/mips/mach-octeon/cvmx-qlm.c
@@ -0,0 +1,2350 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Helper utilities for qlm.
+ */
+
+#include <log.h>
+#include <time.h>
+#include <asm/global_data.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-bgxx-defs.h>
+#include <mach/cvmx-ciu-defs.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-gserx-defs.h>
+#include <mach/cvmx-mio-defs.h>
+#include <mach/cvmx-pciercx-defs.h>
+#include <mach/cvmx-pemx-defs.h>
+#include <mach/cvmx-pexp-defs.h>
+#include <mach/cvmx-rst-defs.h>
+#include <mach/cvmx-sata-defs.h>
+#include <mach/cvmx-sli-defs.h>
+#include <mach/cvmx-sriomaintx-defs.h>
+#include <mach/cvmx-sriox-defs.h>
+
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-jtag.h>
+
+DECLARE_GLOBAL_DATA_PTR;
+
+/*
+ * There is a copy of this table in the bootloader QLM configuration; keep
+ * both places in sync until they are unified
+ */
+#define R_25G_REFCLK100		 0x0
+#define R_5G_REFCLK100		 0x1
+#define R_8G_REFCLK100		 0x2
+#define R_125G_REFCLK15625_KX	 0x3
+#define R_3125G_REFCLK15625_XAUI 0x4
+#define R_103125G_REFCLK15625_KR 0x5
+#define R_125G_REFCLK15625_SGMII 0x6
+#define R_5G_REFCLK15625_QSGMII	 0x7
+#define R_625G_REFCLK15625_RXAUI 0x8
+#define R_25G_REFCLK125		 0x9
+#define R_5G_REFCLK125		 0xa
+#define R_8G_REFCLK125		 0xb
+
+static const int REF_100MHZ = 100000000;
+static const int REF_125MHZ = 125000000;
+static const int REF_156MHZ = 156250000;
+
+static qlm_jtag_uint32_t *__cvmx_qlm_jtag_xor_ref;
+
+/**
+ * Return the number of QLMs supported by the chip
+ *
+ * @return  Number of QLMs
+ */
+int cvmx_qlm_get_num(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 5;
+	else if (OCTEON_IS_MODEL(OCTEON_CN66XX))
+		return 3;
+	else if (OCTEON_IS_MODEL(OCTEON_CN63XX))
+		return 3;
+	else if (OCTEON_IS_MODEL(OCTEON_CN61XX))
+		return 3;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		return 2;
+	else if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 8;
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 7;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 9;
+	return 0;
+}
+
+/**
+ * Return the qlm number based on the interface
+ *
+ * @param xiface  interface to look up
+ *
+ * @return the qlm number based on the xiface
+ */
+int cvmx_qlm_interface(int xiface)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (OCTEON_IS_MODEL(OCTEON_CN61XX)) {
+		return (xi.interface == 0) ? 2 : 0;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN63XX) || OCTEON_IS_MODEL(OCTEON_CN66XX)) {
+		return 2 - xi.interface;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF71XX)) {
+		if (xi.interface == 0)
+			return 0;
+
+		debug("Warning: %s: Invalid interface %d\n",
+		      __func__, xi.interface);
+	} else if (octeon_has_feature(OCTEON_FEATURE_BGX)) {
+		debug("Warning: not supported\n");
+		return -1;
+	}
+
+	/* Must be cn68XX */
+	switch (xi.interface) {
+	case 1:
+		return 0;
+	default:
+		return xi.interface;
+	}
+
+	return -1;
+}
+
+/**
+ * Return the qlm number for a port in the interface
+ *
+ * @param xiface  interface to look up
+ * @param index  index in an interface
+ *
+ * @return the qlm number based on the xiface
+ */
+int cvmx_qlm_lmac(int xiface, int index)
+{
+	struct cvmx_xiface xi = cvmx_helper_xiface_to_node_interface(xiface);
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		cvmx_bgxx_cmr_global_config_t gconfig;
+		cvmx_gserx_phy_ctl_t phy_ctl;
+		cvmx_gserx_cfg_t gserx_cfg;
+		int qlm;
+
+		if (xi.interface < 6) {
+			if (xi.interface < 2) {
+				gconfig.u64 =
+					csr_rd_node(xi.node,
+						    CVMX_BGXX_CMR_GLOBAL_CONFIG(xi.interface));
+				if (gconfig.s.pmux_sds_sel)
+					qlm = xi.interface + 2; /* QLM 2 or 3 */
+				else
+					qlm = xi.interface; /* QLM 0 or 1 */
+			} else {
+				qlm = xi.interface + 2; /* QLM 4-7 */
+			}
+
+			/* make sure the QLM is powered up and out of reset */
+			phy_ctl.u64 = csr_rd_node(xi.node, CVMX_GSERX_PHY_CTL(qlm));
+			if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+				return -1;
+			gserx_cfg.u64 = csr_rd_node(xi.node, CVMX_GSERX_CFG(qlm));
+			if (gserx_cfg.s.bgx)
+				return qlm;
+			else
+				return -1;
+		} else if (xi.interface <= 7) { /* ILK */
+			int qlm;
+
+			for (qlm = 4; qlm < 8; qlm++) {
+				/* Make sure the QLM is powered and out of reset */
+				phy_ctl.u64 = csr_rd_node(xi.node, CVMX_GSERX_PHY_CTL(qlm));
+				if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+					continue;
+				/* Make sure the QLM is in ILK mode */
+				gserx_cfg.u64 = csr_rd_node(xi.node, CVMX_GSERX_CFG(qlm));
+				if (gserx_cfg.s.ila)
+					return qlm;
+			}
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		cvmx_gserx_phy_ctl_t phy_ctl;
+		cvmx_gserx_cfg_t gserx_cfg;
+		int qlm;
+
+		/* (interface)0->QLM2, 1->QLM3, 2->DLM5/3->DLM6 */
+		if (xi.interface < 2) {
+			qlm = xi.interface + 2; /* (0,1)->ret(2,3) */
+
+			phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+			if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+				return -1;
+
+			gserx_cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+			if (gserx_cfg.s.bgx)
+				return qlm;
+			else
+				return -1;
+		} else if (xi.interface == 2) {
+			cvmx_gserx_cfg_t g1, g2;
+
+			g1.u64 = csr_rd(CVMX_GSERX_CFG(5));
+			g2.u64 = csr_rd(CVMX_GSERX_CFG(6));
+			/* Check if both QLM5 & QLM6 are BGX2 */
+			if (g2.s.bgx) {
+				if (g1.s.bgx) {
+					cvmx_gserx_phy_ctl_t phy_ctl1;
+
+					phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(5));
+					phy_ctl1.u64 = csr_rd(CVMX_GSERX_PHY_CTL(6));
+					if ((phy_ctl.s.phy_pd || phy_ctl.s.phy_reset) &&
+					    (phy_ctl1.s.phy_pd || phy_ctl1.s.phy_reset))
+						return -1;
+					if (index >= 2)
+						return 6;
+					return 5;
+				} else { /* QLM6 is BGX2 */
+					phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(6));
+					if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+						return -1;
+					return 6;
+				}
+			} else if (g1.s.bgx) {
+				phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(5));
+				if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+					return -1;
+				return 5;
+			}
+		}
+		return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_gserx_phy_ctl_t phy_ctl;
+		cvmx_gserx_cfg_t gserx_cfg;
+		int qlm;
+
+		if (xi.interface == 0) {
+			cvmx_gserx_cfg_t g1, g2;
+
+			g1.u64 = csr_rd(CVMX_GSERX_CFG(4));
+			g2.u64 = csr_rd(CVMX_GSERX_CFG(5));
+			/* Check if both QLM4 & QLM5 are BGX0 */
+			if (g2.s.bgx) {
+				if (g1.s.bgx) {
+					cvmx_gserx_phy_ctl_t phy_ctl1;
+
+					phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(4));
+					phy_ctl1.u64 = csr_rd(CVMX_GSERX_PHY_CTL(5));
+					if ((phy_ctl.s.phy_pd || phy_ctl.s.phy_reset) &&
+					    (phy_ctl1.s.phy_pd || phy_ctl1.s.phy_reset))
+						return -1;
+					if (index >= 2)
+						return 5;
+					return 4;
+				}
+
+				/* QLM5 is BGX0 */
+				phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(5));
+				if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+					return -1;
+				return 5;
+			} else if (g1.s.bgx) {
+				phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(4));
+				if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+					return -1;
+				return 4;
+			}
+		} else if (xi.interface < 2) {
+			qlm = (xi.interface == 1) ? 2 : 3;
+			gserx_cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+			if (gserx_cfg.s.srio)
+				return qlm;
+		}
+		return -1;
+	}
+	return -1;
+}
+
+/**
+ * Return which DLMs (DLM5, DLM6 or both) are used by a BGX
+ *
+ * @param bgx  BGX to search for.
+ *
+ * @return muxes used 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
+ */
+int cvmx_qlm_mux_interface(int bgx)
+{
+	int mux = 0;
+	cvmx_gserx_cfg_t gser1, gser2;
+	int qlm1, qlm2;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) && bgx != 2)
+		return -1;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX) && bgx != 0)
+		return -1;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		qlm1 = 5;
+		qlm2 = 6;
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		qlm1 = 4;
+		qlm2 = 5;
+	} else {
+		return -1;
+	}
+
+	gser1.u64 = csr_rd(CVMX_GSERX_CFG(qlm1));
+	gser2.u64 = csr_rd(CVMX_GSERX_CFG(qlm2));
+
+	if (gser1.s.bgx && gser2.s.bgx)
+		mux = 0;
+	else if (gser1.s.bgx)
+		mux = 1; /* The BGX is using the lower DLM (qlm1) only */
+	else if (gser2.s.bgx)
+		mux = 2; /* The BGX is using the upper DLM (qlm2) only */
+
+	return mux;
+}
+
+/**
+ * Return number of lanes for a given qlm
+ *
+ * @param qlm    QLM to examine
+ *
+ * @return  Number of lanes
+ */
+int cvmx_qlm_get_lanes(int qlm)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN61XX) && qlm == 1)
+		return 2;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		return 2;
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return (qlm < 4) ? 4 /*QLM0,1,2,3*/ : 2 /*DLM4,5,6*/;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return (qlm == 2 || qlm == 3) ? 4 /*QLM2,3*/ : 2 /*DLM0,1,4,5*/;
+	return 4;
+}
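+
+/*
+ * Example (illustrative): on CN73XX cvmx_qlm_get_lanes(2) returns 4 (QLM2 is
+ * a full QLM) while cvmx_qlm_get_lanes(5) returns 2 (DLM5 is a two-lane DLM).
+ */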
+
+/**
+ * Get the QLM JTAG field descriptions for the current Octeon model.
+ *
+ * @return  qlm_jtag_field_t structure
+ */
+const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void)
+{
+	/* Figure out which JTAG chain description we're using */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		return __cvmx_qlm_jtag_field_cn68xx;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN66XX) || OCTEON_IS_MODEL(OCTEON_CN61XX) ||
+		   OCTEON_IS_MODEL(OCTEON_CNF71XX)) {
+		return __cvmx_qlm_jtag_field_cn66xx;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+		return __cvmx_qlm_jtag_field_cn63xx;
+	}
+
+	return NULL;
+}
+
+/**
+ * Get the QLM JTAG chain length by walking the JTAG field table for the
+ * current Octeon model
+ *
+ * @return the length of the JTAG chain in bits.
+ */
+int cvmx_qlm_jtag_get_length(void)
+{
+	const __cvmx_qlm_jtag_field_t *qlm_ptr = cvmx_qlm_jtag_get_field();
+	int length = 0;
+
+	/* Figure out how many bits are in the JTAG chain */
+	while (qlm_ptr && qlm_ptr->name) {
+		if (qlm_ptr->stop_bit > length)
+			length = qlm_ptr->stop_bit + 1;
+		qlm_ptr++;
+	}
+	return length;
+}
+
+/**
+ * Initialize the QLM layer
+ */
+void cvmx_qlm_init(void)
+{
+	if (OCTEON_IS_OCTEON3())
+		return;
+
+	/* ToDo: No support for non-Octeon 3 yet */
+	printf("Please add support for unsupported Octeon SoC\n");
+}
+
+/**
+ * Lookup the bit information for a JTAG field name
+ *
+ * @param name   Name to lookup
+ *
+ * @return Field info, or NULL on failure
+ */
+static const __cvmx_qlm_jtag_field_t *__cvmx_qlm_lookup_field(const char *name)
+{
+	const __cvmx_qlm_jtag_field_t *ptr = cvmx_qlm_jtag_get_field();
+
+	while (ptr->name) {
+		if (strcmp(name, ptr->name) == 0)
+			return ptr;
+		ptr++;
+	}
+
+	debug("%s: Illegal field name %s\n", __func__, name);
+	return NULL;
+}
+
+/**
+ * Get a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to get
+ * @param lane   Lane in QLM to get
+ * @param name   String name of field
+ *
+ * @return JTAG field value
+ */
+uint64_t cvmx_qlm_jtag_get(int qlm, int lane, const char *name)
+{
+	const __cvmx_qlm_jtag_field_t *field = __cvmx_qlm_lookup_field(name);
+	int qlm_jtag_length = cvmx_qlm_jtag_get_length();
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+
+	if (!field)
+		return 0;
+
+	/* Capture the current settings */
+	cvmx_helper_qlm_jtag_capture(qlm);
+	/*
+	 * Shift past lanes we don't care about. CN6XXX/7XXX shifts lane 0
+	 * first, CN3XXX/5XXX shifts lane 3 first
+	 */
+	cvmx_helper_qlm_jtag_shift_zeros(qlm,
+					 qlm_jtag_length * (num_lanes - 1 - lane));
+	/* Shift to the start of the field */
+	cvmx_helper_qlm_jtag_shift_zeros(qlm, field->start_bit);
+	/* Shift out the value and return it */
+	return cvmx_helper_qlm_jtag_shift(qlm, field->stop_bit - field->start_bit + 1, 0);
+}
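+
+/*
+ * Usage sketch (illustrative only): read a field on QLM 0, lane 2, and write
+ * it back on all lanes. "ir50dac" is one of the field names used by the
+ * errata workarounds below.
+ *
+ *	u64 val = cvmx_qlm_jtag_get(0, 2, "ir50dac");
+ *	cvmx_qlm_jtag_set(0, -1, "ir50dac", val);
+ */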
+
+/**
+ * Set a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to set
+ * @param lane   Lane in QLM to set, or -1 for all lanes
+ * @param name   String name of field
+ * @param value  Value of the field
+ */
+void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, uint64_t value)
+{
+	int i, l;
+	u32 shift_values[CVMX_QLM_JTAG_UINT32];
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+	const __cvmx_qlm_jtag_field_t *field = __cvmx_qlm_lookup_field(name);
+	int qlm_jtag_length = cvmx_qlm_jtag_get_length();
+	int total_length = qlm_jtag_length * num_lanes;
+	int bits = 0;
+
+	if (!field)
+		return;
+
+	/* Get the current state */
+	cvmx_helper_qlm_jtag_capture(qlm);
+	for (i = 0; i < CVMX_QLM_JTAG_UINT32; i++)
+		shift_values[i] = cvmx_helper_qlm_jtag_shift(qlm, 32, 0);
+
+	/* Put new data in our local array */
+	for (l = 0; l < num_lanes; l++) {
+		u64 new_value = value;
+		int bits;
+		int adj_lanes;
+
+		if (l != lane && lane != -1)
+			continue;
+
+		adj_lanes = (num_lanes - 1 - l) * qlm_jtag_length;
+
+		for (bits = field->start_bit + adj_lanes; bits <= field->stop_bit + adj_lanes;
+		     bits++) {
+			if (new_value & 1)
+				shift_values[bits / 32] |= 1 << (bits & 31);
+			else
+				shift_values[bits / 32] &= ~(1 << (bits & 31));
+			new_value >>= 1;
+		}
+	}
+
+	/* Shift out data and xor with reference */
+	while (bits < total_length) {
+		u32 shift = shift_values[bits / 32] ^ __cvmx_qlm_jtag_xor_ref[qlm][bits / 32];
+		int width = total_length - bits;
+
+		if (width > 32)
+			width = 32;
+		cvmx_helper_qlm_jtag_shift(qlm, width, shift);
+		bits += 32;
+	}
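+	/*
+	 * Note (added, my reading of the chain semantics): the data is XORed
+	 * against the reference captured in __cvmx_qlm_jtag_xor_ref,
+	 * apparently so that shifted-in bits act as toggles relative to that
+	 * reference rather than as absolute values.
+	 */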
+
+	/* Update the new data */
+	cvmx_helper_qlm_jtag_update(qlm);
+
+	/*
+	 * Always give the QLM 1ms to settle after every update. This may not
+	 * always be needed, but some of the options make significant
+	 * electrical changes
+	 */
+	udelay(1000);
+}
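+
+/*
+ * Example usage (a sketch; the field names below are taken from the JTAG
+ * field table used elsewhere in this file, while the QLM and lane numbers
+ * are hypothetical): read the TX bypass enable on lane 0 of QLM 1, then
+ * program the idle DAC on all lanes of the same QLM:
+ *
+ *	u64 tx_byp = cvmx_qlm_jtag_get(1, 0, "serdes_tx_byp");
+ *
+ *	cvmx_qlm_jtag_set(1, -1, "idle_dac", 0x2);
+ *
+ * Unknown field names are rejected by __cvmx_qlm_lookup_field(), in which
+ * case cvmx_qlm_jtag_get() returns 0 and cvmx_qlm_jtag_set() is a no-op.
+ */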
+
+/**
+ * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
+ * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
+ * JTAG settings so that the QLMs run better at 5 and 6.25 Gbaud.
+ */
+void __cvmx_qlm_speed_tweak(void)
+{
+	cvmx_mio_qlmx_cfg_t qlm_cfg;
+	int num_qlms = cvmx_qlm_get_num();
+	int qlm;
+
+	/* Workaround for Errata (G-16467) */
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS2_X)) {
+		for (qlm = 0; qlm < num_qlms; qlm++) {
+			int ir50dac;
+
+			/*
+			 * This workaround only applies to QLMs running at
+			 * 6.25 Gbaud
+			 */
+			if (cvmx_qlm_get_gbaud_mhz(qlm) == 6250) {
+#ifdef CVMX_QLM_DUMP_STATE
+				debug("%s:%d: QLM%d: Applying workaround for Errata G-16467\n",
+				      __func__, __LINE__, qlm);
+				cvmx_qlm_display_registers(qlm);
+				debug("\n");
+#endif
+				cvmx_qlm_jtag_set(qlm, -1, "cfg_cdr_trunc", 0);
+				/* Hold the QLM in reset */
+				cvmx_qlm_jtag_set(qlm, -1, "cfg_rst_n_set", 0);
+				cvmx_qlm_jtag_set(qlm, -1, "cfg_rst_n_clr", 1);
+				/* Force TX to be idle */
+				cvmx_qlm_jtag_set(qlm, -1, "cfg_tx_idle_clr", 0);
+				cvmx_qlm_jtag_set(qlm, -1, "cfg_tx_idle_set", 1);
+				if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS2_0)) {
+					ir50dac = cvmx_qlm_jtag_get(qlm, 0, "ir50dac");
+					while (++ir50dac <= 31)
+						cvmx_qlm_jtag_set(qlm, -1, "ir50dac", ir50dac);
+				}
+				cvmx_qlm_jtag_set(qlm, -1, "div4_byp", 0);
+				cvmx_qlm_jtag_set(qlm, -1, "clkf_byp", 16);
+				cvmx_qlm_jtag_set(qlm, -1, "serdes_pll_byp", 1);
+				cvmx_qlm_jtag_set(qlm, -1, "spdsel_byp", 1);
+#ifdef CVMX_QLM_DUMP_STATE
+				debug("%s:%d: QLM%d: Done applying workaround for Errata G-16467\n",
+				      __func__, __LINE__, qlm);
+				cvmx_qlm_display_registers(qlm);
+				debug("\n\n");
+#endif
+				/*
+				 * The QLM will be taken out of reset later
+				 * when ILK/XAUI are initialized.
+				 */
+			}
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1_X) ||
+		   OCTEON_IS_MODEL(OCTEON_CN66XX_PASS1_X)) {
+		/* Loop through the QLMs */
+		for (qlm = 0; qlm < num_qlms; qlm++) {
+			/* Read the QLM speed */
+			qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+
+			/*
+			 * If the QLM is at 6.25 Gbaud (spd 5, 12) or 5 Gbaud
+			 * (spd 0, 6, 11) then program the JTAG tweaks. See
+			 * cvmx_qlm_get_gbaud_mhz() for the speed encoding.
+			 */
+			if (qlm_cfg.s.qlm_spd == 5 || qlm_cfg.s.qlm_spd == 12 ||
+			    qlm_cfg.s.qlm_spd == 0 || qlm_cfg.s.qlm_spd == 6 ||
+			    qlm_cfg.s.qlm_spd == 11) {
+				cvmx_qlm_jtag_set(qlm, -1, "rx_cap_gen2", 0x1);
+				cvmx_qlm_jtag_set(qlm, -1, "rx_eq_gen2", 0x8);
+			}
+		}
+	}
+}
+
+/**
+ * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
+ * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
+ * This function tweaks the JTAG settings so that PCIe runs better on the QLMs.
+ */
+void __cvmx_qlm_pcie_idle_dac_tweak(void)
+{
+	int num_qlms = 0;
+	int qlm;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1_X))
+		num_qlms = 5;
+	else if (OCTEON_IS_MODEL(OCTEON_CN66XX_PASS1_X))
+		num_qlms = 3;
+	else if (OCTEON_IS_MODEL(OCTEON_CN63XX))
+		num_qlms = 3;
+	else
+		return;
+
+	/* Loop through the QLMs */
+	for (qlm = 0; qlm < num_qlms; qlm++)
+		cvmx_qlm_jtag_set(qlm, -1, "idle_dac", 0x2);
+}
+
+void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN6XXX) || OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		cvmx_qlm_jtag_set(qlm, lane, "cfg_rxd_set", 0x1);
+}
+
+/**
+ * Get the speed of the QLM for a given node, in Mbaud (e.g. 2500 = 2.5 Gbaud).
+ *
+ * @param node   node of the QLM
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in Mbaud
+ */
+int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm)
+{
+	cvmx_gserx_lane_mode_t lane_mode;
+	cvmx_gserx_cfg_t cfg;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_MULTINODE))
+		return 0;
+
+	if (qlm >= 8)
+		return -1; /* FIXME for OCI */
+	/* Check if QLM is configured */
+	cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+	if (cfg.u64 == 0)
+		return -1;
+	if (cfg.s.pcie) {
+		int pem = 0;
+		cvmx_pemx_cfg_t pemx_cfg;
+
+		switch (qlm) {
+		case 0: /* Either PEM0 x4 or PEM0 x8 */
+			pem = 0;
+			break;
+		case 1: /* Either PEM0 x8 or PEM1 x4 */
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(0));
+			if (pemx_cfg.cn78xx.lanes8)
+				pem = 0;
+			else
+				pem = 1;
+			break;
+		case 2: /* Either PEM2 x4 or PEM2 x8 */
+			pem = 2;
+			break;
+		case 3: /* Either PEM2 x8 or PEM3 x4 or x8 */
+			/* Can be last 4 lanes of PEM2 */
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				pem = 2;
+			} else {
+				pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+				if (pemx_cfg.cn78xx.lanes8)
+					pem = 3;
+				else
+					pem = 2;
+			}
+			break;
+		case 4: /* Either PEM3 x8 or PEM3 x4 */
+			pem = 3;
+			break;
+		default:
+			debug("QLM%d: Should be in PCIe mode\n", qlm);
+			break;
+		}
+		pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(pem));
+		switch (pemx_cfg.s.md) {
+		case 0: /* Gen1 */
+			return 2500;
+		case 1: /* Gen2 */
+			return 5000;
+		case 2: /* Gen3 */
+			return 8000;
+		default:
+			return 0;
+		}
+	} else {
+		lane_mode.u64 = csr_rd_node(node, CVMX_GSERX_LANE_MODE(qlm));
+		switch (lane_mode.s.lmode) {
+		case R_25G_REFCLK100:
+			return 2500;
+		case R_5G_REFCLK100:
+			return 5000;
+		case R_8G_REFCLK100:
+			return 8000;
+		case R_125G_REFCLK15625_KX:
+			return 1250;
+		case R_3125G_REFCLK15625_XAUI:
+			return 3125;
+		case R_103125G_REFCLK15625_KR:
+			return 10312;
+		case R_125G_REFCLK15625_SGMII:
+			return 1250;
+		case R_5G_REFCLK15625_QSGMII:
+			return 5000;
+		case R_625G_REFCLK15625_RXAUI:
+			return 6250;
+		case R_25G_REFCLK125:
+			return 2500;
+		case R_5G_REFCLK125:
+			return 5000;
+		case R_8G_REFCLK125:
+			return 8000;
+		default:
+			return 0;
+		}
+	}
+}
+
+/**
+ * Get the speed of the QLM in Mbaud (e.g. 2500 = 2.5 Gbaud).
+ *
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in Mbaud
+ */
+int cvmx_qlm_get_gbaud_mhz(int qlm)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+		if (qlm == 2) {
+			cvmx_gmxx_inf_mode_t inf_mode;
+
+			inf_mode.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+			switch (inf_mode.s.speed) {
+			case 0:
+				return 5000; /* 5     Gbaud */
+			case 1:
+				return 2500; /* 2.5   Gbaud */
+			case 2:
+				return 2500; /* 2.5   Gbaud */
+			case 3:
+				return 1250; /* 1.25  Gbaud */
+			case 4:
+				return 1250; /* 1.25  Gbaud */
+			case 5:
+				return 6250; /* 6.25  Gbaud */
+			case 6:
+				return 5000; /* 5     Gbaud */
+			case 7:
+				return 2500; /* 2.5   Gbaud */
+			case 8:
+				return 3125; /* 3.125 Gbaud */
+			case 9:
+				return 2500; /* 2.5   Gbaud */
+			case 10:
+				return 1250; /* 1.25  Gbaud */
+			case 11:
+				return 5000; /* 5     Gbaud */
+			case 12:
+				return 6250; /* 6.25  Gbaud */
+			case 13:
+				return 3750; /* 3.75  Gbaud */
+			case 14:
+				return 3125; /* 3.125 Gbaud */
+			default:
+				return 0; /* Disabled */
+			}
+		} else {
+			cvmx_sriox_status_reg_t status_reg;
+
+			status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(qlm));
+			if (status_reg.s.srio) {
+				cvmx_sriomaintx_port_0_ctl2_t sriomaintx_port_0_ctl2;
+
+				sriomaintx_port_0_ctl2.u32 =
+					csr_rd(CVMX_SRIOMAINTX_PORT_0_CTL2(qlm));
+				switch (sriomaintx_port_0_ctl2.s.sel_baud) {
+				case 1:
+					return 1250; /* 1.25  Gbaud */
+				case 2:
+					return 2500; /* 2.5   Gbaud */
+				case 3:
+					return 3125; /* 3.125 Gbaud */
+				case 4:
+					return 5000; /* 5     Gbaud */
+				case 5:
+					return 6250; /* 6.250 Gbaud */
+				default:
+					return 0; /* Disabled */
+				}
+			} else {
+				cvmx_pciercx_cfg032_t pciercx_cfg032;
+
+				pciercx_cfg032.u32 = csr_rd(CVMX_PCIERCX_CFG032(qlm));
+				switch (pciercx_cfg032.s.ls) {
+				case 1:
+					return 2500;
+				case 2:
+					return 5000;
+				case 4:
+					return 8000;
+				default: {
+					cvmx_mio_rst_boot_t mio_rst_boot;
+
+					mio_rst_boot.u64 = csr_rd(CVMX_MIO_RST_BOOT);
+					if (qlm == 0 && mio_rst_boot.s.qlm0_spd == 0xf)
+						return 0;
+
+					if (qlm == 1 && mio_rst_boot.s.qlm1_spd == 0xf)
+						return 0;
+
+					/* Best guess I can make */
+					return 5000;
+				}
+				}
+			}
+		}
+	} else if (OCTEON_IS_OCTEON2()) {
+		cvmx_mio_qlmx_cfg_t qlm_cfg;
+
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+		switch (qlm_cfg.s.qlm_spd) {
+		case 0:
+			return 5000; /* 5     Gbaud */
+		case 1:
+			return 2500; /* 2.5   Gbaud */
+		case 2:
+			return 2500; /* 2.5   Gbaud */
+		case 3:
+			return 1250; /* 1.25  Gbaud */
+		case 4:
+			return 1250; /* 1.25  Gbaud */
+		case 5:
+			return 6250; /* 6.25  Gbaud */
+		case 6:
+			return 5000; /* 5     Gbaud */
+		case 7:
+			return 2500; /* 2.5   Gbaud */
+		case 8:
+			return 3125; /* 3.125 Gbaud */
+		case 9:
+			return 2500; /* 2.5   Gbaud */
+		case 10:
+			return 1250; /* 1.25  Gbaud */
+		case 11:
+			return 5000; /* 5     Gbaud */
+		case 12:
+			return 6250; /* 6.25  Gbaud */
+		case 13:
+			return 3750; /* 3.75  Gbaud */
+		case 14:
+			return 3125; /* 3.125 Gbaud */
+		default:
+			return 0; /* Disabled */
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		cvmx_gserx_dlmx_mpll_multiplier_t mpll_multiplier;
+		u64 meas_refclock;
+		u64 freq;
+
+		/* Measure the reference clock */
+		meas_refclock = cvmx_qlm_measure_clock(qlm);
+		/* Multiply to get the final frequency */
+		mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+		freq = meas_refclock * mpll_multiplier.s.mpll_multiplier;
+		freq = (freq + 500000) / 1000000;
+
+		return freq;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		return cvmx_qlm_get_gbaud_mhz_node(cvmx_get_node_num(), qlm);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_gserx_lane_mode_t lane_mode;
+
+		lane_mode.u64 = csr_rd(CVMX_GSERX_LANE_MODE(qlm));
+		switch (lane_mode.s.lmode) {
+		case R_25G_REFCLK100:
+			return 2500;
+		case R_5G_REFCLK100:
+			return 5000;
+		case R_8G_REFCLK100:
+			return 8000;
+		case R_125G_REFCLK15625_KX:
+			return 1250;
+		case R_3125G_REFCLK15625_XAUI:
+			return 3125;
+		case R_103125G_REFCLK15625_KR:
+			return 10312;
+		case R_125G_REFCLK15625_SGMII:
+			return 1250;
+		case R_5G_REFCLK15625_QSGMII:
+			return 5000;
+		case R_625G_REFCLK15625_RXAUI:
+			return 6250;
+		case R_25G_REFCLK125:
+			return 2500;
+		case R_5G_REFCLK125:
+			return 5000;
+		case R_8G_REFCLK125:
+			return 8000;
+		default:
+			return 0;
+		}
+	}
+	return 0;
+}
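+
+/*
+ * Worked example for the CN70XX branch of the function above (assumed
+ * numbers): with a measured reference clock of 100000000 Hz and
+ * MPLL_MULTIPLIER = 25, the raw rate is 100000000 * 25 = 2500000000,
+ * and rounding to Mbaud gives (2500000000 + 500000) / 1000000 = 2500,
+ * i.e. 2.5 Gbaud.
+ */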
+
+static enum cvmx_qlm_mode __cvmx_qlm_get_mode_cn70xx(int qlm)
+{
+	switch (qlm) {
+	case 0: /* DLM0/DLM1 - SGMII/QSGMII/RXAUI */
+	{
+		union cvmx_gmxx_inf_mode inf_mode0, inf_mode1;
+
+		inf_mode0.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+		inf_mode1.u64 = csr_rd(CVMX_GMXX_INF_MODE(1));
+
+		/* SGMII0 SGMII1 */
+		switch (inf_mode0.s.mode) {
+		case CVMX_GMX_INF_MODE_SGMII:
+			switch (inf_mode1.s.mode) {
+			case CVMX_GMX_INF_MODE_SGMII:
+				return CVMX_QLM_MODE_SGMII_SGMII;
+			case CVMX_GMX_INF_MODE_QSGMII:
+				return CVMX_QLM_MODE_SGMII_QSGMII;
+			default:
+				return CVMX_QLM_MODE_SGMII_DISABLED;
+			}
+		case CVMX_GMX_INF_MODE_QSGMII:
+			switch (inf_mode1.s.mode) {
+			case CVMX_GMX_INF_MODE_SGMII:
+				return CVMX_QLM_MODE_QSGMII_SGMII;
+			case CVMX_GMX_INF_MODE_QSGMII:
+				return CVMX_QLM_MODE_QSGMII_QSGMII;
+			default:
+				return CVMX_QLM_MODE_QSGMII_DISABLED;
+			}
+		case CVMX_GMX_INF_MODE_RXAUI:
+			return CVMX_QLM_MODE_RXAUI_1X2;
+		default:
+			switch (inf_mode1.s.mode) {
+			case CVMX_GMX_INF_MODE_SGMII:
+				return CVMX_QLM_MODE_DISABLED_SGMII;
+			case CVMX_GMX_INF_MODE_QSGMII:
+				return CVMX_QLM_MODE_DISABLED_QSGMII;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+		}
+	}
+	case 1: /* Sata / pem0 */
+	{
+		union cvmx_gserx_sata_cfg sata_cfg;
+		union cvmx_pemx_cfg pem0_cfg;
+
+		sata_cfg.u64 = csr_rd(CVMX_GSERX_SATA_CFG(0));
+		pem0_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+
+		switch (pem0_cfg.cn70xx.md) {
+		case CVMX_PEM_MD_GEN2_2LANE:
+		case CVMX_PEM_MD_GEN1_2LANE:
+			return CVMX_QLM_MODE_PCIE_1X2;
+		case CVMX_PEM_MD_GEN2_1LANE:
+		case CVMX_PEM_MD_GEN1_1LANE:
+			if (sata_cfg.s.sata_en)
+				/* Both PEM0 and PEM1 */
+				return CVMX_QLM_MODE_PCIE_2X1;
+
+			/* Only PEM0 */
+			return CVMX_QLM_MODE_PCIE_1X1;
+		case CVMX_PEM_MD_GEN2_4LANE:
+		case CVMX_PEM_MD_GEN1_4LANE:
+			return CVMX_QLM_MODE_PCIE;
+		default:
+			return CVMX_QLM_MODE_DISABLED;
+		}
+	}
+	case 2: {
+		union cvmx_gserx_sata_cfg sata_cfg;
+		union cvmx_pemx_cfg pem0_cfg, pem1_cfg, pem2_cfg;
+
+		sata_cfg.u64 = csr_rd(CVMX_GSERX_SATA_CFG(0));
+		pem0_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+		pem1_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+		pem2_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+
+		if (sata_cfg.s.sata_en)
+			return CVMX_QLM_MODE_SATA_2X1;
+		if (pem0_cfg.cn70xx.md == CVMX_PEM_MD_GEN2_4LANE ||
+		    pem0_cfg.cn70xx.md == CVMX_PEM_MD_GEN1_4LANE)
+			return CVMX_QLM_MODE_PCIE;
+		if (pem1_cfg.cn70xx.md == CVMX_PEM_MD_GEN2_2LANE ||
+		    pem1_cfg.cn70xx.md == CVMX_PEM_MD_GEN1_2LANE) {
+			return CVMX_QLM_MODE_PCIE_1X2;
+		}
+		if (pem1_cfg.cn70xx.md == CVMX_PEM_MD_GEN2_1LANE ||
+		    pem1_cfg.cn70xx.md == CVMX_PEM_MD_GEN1_1LANE) {
+			if (pem2_cfg.cn70xx.md == CVMX_PEM_MD_GEN2_1LANE ||
+			    pem2_cfg.cn70xx.md == CVMX_PEM_MD_GEN1_1LANE) {
+				return CVMX_QLM_MODE_PCIE_2X1;
+			} else {
+				return CVMX_QLM_MODE_PCIE_1X1;
+			}
+		}
+		if (pem2_cfg.cn70xx.md == CVMX_PEM_MD_GEN2_1LANE ||
+		    pem2_cfg.cn70xx.md == CVMX_PEM_MD_GEN1_1LANE)
+			return CVMX_QLM_MODE_PCIE_2X1;
+		return CVMX_QLM_MODE_DISABLED;
+	}
+	default:
+		return CVMX_QLM_MODE_DISABLED;
+	}
+
+	return CVMX_QLM_MODE_DISABLED;
+}
+
+/*
+ * Get the DLM mode for the interface based on the interface type.
+ *
+ * @param interface_type   0 - SGMII/QSGMII/RXAUI interface
+ *                         1 - PCIe
+ *                         2 - SATA
+ * @param interface        interface to use
+ * @return  the qlm mode the interface is
+ */
+enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int interface_type, int interface)
+{
+	switch (interface_type) {
+	case 0: /* SGMII/QSGMII/RXAUI */
+	{
+		enum cvmx_qlm_mode qlm_mode = __cvmx_qlm_get_mode_cn70xx(0);
+
+		switch (interface) {
+		case 0:
+			switch (qlm_mode) {
+			case CVMX_QLM_MODE_SGMII_SGMII:
+			case CVMX_QLM_MODE_SGMII_DISABLED:
+			case CVMX_QLM_MODE_SGMII_QSGMII:
+				return CVMX_QLM_MODE_SGMII;
+			case CVMX_QLM_MODE_QSGMII_QSGMII:
+			case CVMX_QLM_MODE_QSGMII_DISABLED:
+			case CVMX_QLM_MODE_QSGMII_SGMII:
+				return CVMX_QLM_MODE_QSGMII;
+			case CVMX_QLM_MODE_RXAUI_1X2:
+				return CVMX_QLM_MODE_RXAUI;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+		case 1:
+			switch (qlm_mode) {
+			case CVMX_QLM_MODE_SGMII_SGMII:
+			case CVMX_QLM_MODE_DISABLED_SGMII:
+			case CVMX_QLM_MODE_QSGMII_SGMII:
+				return CVMX_QLM_MODE_SGMII;
+			case CVMX_QLM_MODE_QSGMII_QSGMII:
+			case CVMX_QLM_MODE_DISABLED_QSGMII:
+			case CVMX_QLM_MODE_SGMII_QSGMII:
+				return CVMX_QLM_MODE_QSGMII;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+		default:
+			return qlm_mode;
+		}
+	}
+	case 1: /* PCIe */
+	{
+		enum cvmx_qlm_mode qlm_mode1 = __cvmx_qlm_get_mode_cn70xx(1);
+		enum cvmx_qlm_mode qlm_mode2 = __cvmx_qlm_get_mode_cn70xx(2);
+
+		switch (interface) {
+		case 0: /* PCIe0 can be DLM1 with 1, 2 or 4 lanes */
+			return qlm_mode1;
+		case 1:
+			/*
+			 * PCIe1 can be in DLM1 1 lane(1), DLM2 1 lane(0)
+			 * or 2 lanes(0-1)
+			 */
+			if (qlm_mode1 == CVMX_QLM_MODE_PCIE_2X1)
+				return CVMX_QLM_MODE_PCIE_2X1;
+			else if (qlm_mode2 == CVMX_QLM_MODE_PCIE_1X2 ||
+				 qlm_mode2 == CVMX_QLM_MODE_PCIE_2X1)
+				return qlm_mode2;
+			else
+				return CVMX_QLM_MODE_DISABLED;
+		case 2: /* PCIe2 can be DLM2 with 1 lane (lane 1) */
+			if (qlm_mode2 == CVMX_QLM_MODE_PCIE_2X1)
+				return qlm_mode2;
+			else
+				return CVMX_QLM_MODE_DISABLED;
+		default:
+			return CVMX_QLM_MODE_DISABLED;
+		}
+	}
+	case 2: /* SATA */
+	{
+		enum cvmx_qlm_mode qlm_mode = __cvmx_qlm_get_mode_cn70xx(2);
+
+		if (qlm_mode == CVMX_QLM_MODE_SATA_2X1)
+			return CVMX_QLM_MODE_SATA_2X1;
+		else
+			return CVMX_QLM_MODE_DISABLED;
+	}
+	default:
+		return CVMX_QLM_MODE_DISABLED;
+	}
+}
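+
+/*
+ * Example (a sketch with hypothetical strapping): query how the first
+ * networking interface and the first PCIe port come out of the CN70XX
+ * DLM configuration:
+ *
+ *	enum cvmx_qlm_mode net0 = cvmx_qlm_get_dlm_mode(0, 0);
+ *	enum cvmx_qlm_mode pcie0 = cvmx_qlm_get_dlm_mode(1, 0);
+ */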
+
+static enum cvmx_qlm_mode __cvmx_qlm_get_mode_cn6xxx(int qlm)
+{
+	cvmx_mio_qlmx_cfg_t qlmx_cfg;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		qlmx_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlmx_cfg.s.qlm_spd == 15)
+			return CVMX_QLM_MODE_DISABLED;
+
+		switch (qlmx_cfg.s.qlm_cfg) {
+		case 0: /* PCIE */
+			return CVMX_QLM_MODE_PCIE;
+		case 1: /* ILK */
+			return CVMX_QLM_MODE_ILK;
+		case 2: /* SGMII */
+			return CVMX_QLM_MODE_SGMII;
+		case 3: /* XAUI */
+			return CVMX_QLM_MODE_XAUI;
+		case 7: /* RXAUI */
+			return CVMX_QLM_MODE_RXAUI;
+		default:
+			return CVMX_QLM_MODE_DISABLED;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN66XX)) {
+		qlmx_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlmx_cfg.s.qlm_spd == 15)
+			return CVMX_QLM_MODE_DISABLED;
+
+		switch (qlmx_cfg.s.qlm_cfg) {
+		case 0x9: /* SGMII */
+			return CVMX_QLM_MODE_SGMII;
+		case 0xb: /* XAUI */
+			return CVMX_QLM_MODE_XAUI;
+		case 0x0: /* PCIE gen2 */
+		case 0x8: /* PCIE gen2 (alias) */
+		case 0x2: /* PCIE gen1 */
+		case 0xa: /* PCIE gen1 (alias) */
+			return CVMX_QLM_MODE_PCIE;
+		case 0x1: /* SRIO 1x4 short */
+		case 0x3: /* SRIO 1x4 long */
+			return CVMX_QLM_MODE_SRIO_1X4;
+		case 0x4: /* SRIO 2x2 short */
+		case 0x6: /* SRIO 2x2 long */
+			return CVMX_QLM_MODE_SRIO_2X2;
+		case 0x5: /* SRIO 4x1 short */
+		case 0x7: /* SRIO 4x1 long */
+			if (!OCTEON_IS_MODEL(OCTEON_CN66XX_PASS1_0))
+				return CVMX_QLM_MODE_SRIO_4X1;
+			fallthrough;
+		default:
+			return CVMX_QLM_MODE_DISABLED;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CN63XX)) {
+		cvmx_sriox_status_reg_t status_reg;
+		/* For now skip qlm2 */
+		if (qlm == 2) {
+			cvmx_gmxx_inf_mode_t inf_mode;
+
+			inf_mode.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+			if (inf_mode.s.speed == 15)
+				return CVMX_QLM_MODE_DISABLED;
+			else if (inf_mode.s.mode == 0)
+				return CVMX_QLM_MODE_SGMII;
+			else
+				return CVMX_QLM_MODE_XAUI;
+		}
+		status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(qlm));
+		if (status_reg.s.srio)
+			return CVMX_QLM_MODE_SRIO_1X4;
+		else
+			return CVMX_QLM_MODE_PCIE;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN61XX)) {
+		qlmx_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlmx_cfg.s.qlm_spd == 15)
+			return CVMX_QLM_MODE_DISABLED;
+
+		switch (qlm) {
+		case 0:
+			switch (qlmx_cfg.s.qlm_cfg) {
+			case 0: /* PCIe 1x4 gen2 / gen1 */
+				return CVMX_QLM_MODE_PCIE;
+			case 2: /* SGMII */
+				return CVMX_QLM_MODE_SGMII;
+			case 3: /* XAUI */
+				return CVMX_QLM_MODE_XAUI;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 1:
+			switch (qlmx_cfg.s.qlm_cfg) {
+			case 0: /* PCIe 1x2 gen2 / gen1 */
+				return CVMX_QLM_MODE_PCIE_1X2;
+			case 1: /* PCIe 2x1 gen2 / gen1 */
+				return CVMX_QLM_MODE_PCIE_2X1;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 2:
+			switch (qlmx_cfg.s.qlm_cfg) {
+			case 2: /* SGMII */
+				return CVMX_QLM_MODE_SGMII;
+			case 3: /* XAUI */
+				return CVMX_QLM_MODE_XAUI;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		}
+	} else if (OCTEON_IS_MODEL(OCTEON_CNF71XX)) {
+		qlmx_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+		/* QLM is disabled when QLM SPD is 15. */
+		if (qlmx_cfg.s.qlm_spd == 15)
+			return CVMX_QLM_MODE_DISABLED;
+
+		switch (qlm) {
+		case 0:
+			if (qlmx_cfg.s.qlm_cfg == 2) /* SGMII */
+				return CVMX_QLM_MODE_SGMII;
+			break;
+		case 1:
+			switch (qlmx_cfg.s.qlm_cfg) {
+			case 0: /* PCIe 1x2 gen2 / gen1 */
+				return CVMX_QLM_MODE_PCIE_1X2;
+			case 1: /* PCIe 2x1 gen2 / gen1 */
+				return CVMX_QLM_MODE_PCIE_2X1;
+			default:
+				return CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		}
+	}
+	return CVMX_QLM_MODE_DISABLED;
+}
+
+/**
+ * @INTERNAL
+ * Decrement the MPLL Multiplier for the DLM as per Errata G-20669
+ *
+ * @param qlm            DLM to configure
+ * @param baud_mhz       Speed of the DLM configured at
+ * @param old_multiplier MPLL_MULTIPLIER value to decrement
+ */
+void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier)
+{
+	cvmx_gserx_dlmx_mpll_multiplier_t mpll_multiplier;
+	cvmx_gserx_dlmx_ref_clkdiv2_t clkdiv;
+	u64 meas_refclock, mult;
+
+	if (!OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return;
+
+	if (qlm == -1)
+		return;
+
+	meas_refclock = cvmx_qlm_measure_clock(qlm);
+	if (meas_refclock == 0) {
+		printf("DLM%d: Reference clock not running\n", qlm);
+		return;
+	}
+
+	/*
+	 * The baud rate multiplier needs to be adjusted on the CN70XX if
+	 * the reference clock is > 100MHz.
+	 */
+	if (qlm == 0) {
+		clkdiv.u64 = csr_rd(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0));
+		if (clkdiv.s.ref_clkdiv2)
+			baud_mhz *= 2;
+	}
+	mult = (uint64_t)baud_mhz * 1000000 + (meas_refclock / 2);
+	mult /= meas_refclock;
+
+	/*
+	 * 6. Decrease MPLL_MULTIPLIER by one continually until it reaches
+	 * the desired long-term setting, ensuring that each MPLL_MULTIPLIER
+	 * value is constant for at least 1 msec before changing to the next
+	 * value. The desired long-term setting is as indicated in HRM tables
+	 * 21-1, 21-2, and 21-3. This is not required with the HRM
+	 * sequence.
+	 */
+	do {
+		mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+		mpll_multiplier.s.mpll_multiplier = --old_multiplier;
+		csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0), mpll_multiplier.u64);
+		/* Wait for 1 ms */
+		udelay(1000);
+	} while (old_multiplier > (int)mult);
+}
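+
+/*
+ * Worked example for the function above (assumed numbers): with
+ * baud_mhz = 2500 and a measured reference clock of 100000000 Hz, the
+ * target multiplier is (2500 * 1000000 + 50000000) / 100000000 = 25,
+ * so MPLL_MULTIPLIER is stepped down once per millisecond until it
+ * reaches 25.
+ */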
+
+enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm)
+{
+	cvmx_gserx_cfg_t gserx_cfg;
+	/* Cached result per node/QLM; static so the cache actually persists */
+	static int qlm_mode[2][9] = { { -1, -1, -1, -1, -1, -1, -1, -1, -1 },
+				      { -1, -1, -1, -1, -1, -1, -1, -1, -1 } };
+
+	if (qlm >= 8)
+		return CVMX_QLM_MODE_OCI;
+
+	if (qlm_mode[node][qlm] != -1)
+		return qlm_mode[node][qlm];
+
+	gserx_cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+	if (gserx_cfg.s.pcie) {
+		switch (qlm) {
+		case 0: /* Either PEM0 x4 or PEM0 x8 */
+		case 1: /* Either PEM0 x8 or PEM1 x4 */
+		{
+			cvmx_pemx_cfg_t pemx_cfg;
+
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(0));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM0 x8 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM0 x4 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		case 2: /* Either PEM2 x4 or PEM2 x8 */
+		{
+			cvmx_pemx_cfg_t pemx_cfg;
+
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM2 x8 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM2 x4 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		case 3: /* Either PEM2 x8 or PEM3 x4 or PEM3 x8 */
+		{
+			cvmx_pemx_cfg_t pemx_cfg;
+
+			/* Can be last 4 lanes of PEM2 x8 */
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM2 x8 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE_1X8;
+				break;
+			}
+
+			/* Can be first 4 lanes of PEM3 */
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM3 x8 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM3 x4 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		case 4: /* Either PEM3 x8 or PEM3 x4 */
+		{
+			cvmx_pemx_cfg_t pemx_cfg;
+
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM3 x8 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM3 x4 */
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		default:
+			qlm_mode[node][qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+	} else if (gserx_cfg.s.ila) {
+		qlm_mode[node][qlm] = CVMX_QLM_MODE_ILK;
+	} else if (gserx_cfg.s.bgx) {
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		cvmx_bgxx_spux_br_pmd_control_t pmd_control;
+		int bgx = (qlm < 2) ? qlm : qlm - 2;
+
+		cmr_config.u64 = csr_rd_node(node, CVMX_BGXX_CMRX_CONFIG(0, bgx));
+		pmd_control.u64 = csr_rd_node(node, CVMX_BGXX_SPUX_BR_PMD_CONTROL(0, bgx));
+
+		switch (cmr_config.s.lmac_type) {
+		case 0:
+			qlm_mode[node][qlm] = CVMX_QLM_MODE_SGMII;
+			break;
+		case 1:
+			qlm_mode[node][qlm] = CVMX_QLM_MODE_XAUI;
+			break;
+		case 2:
+			qlm_mode[node][qlm] = CVMX_QLM_MODE_RXAUI;
+			break;
+		case 3:
+			/*
+			 * Use training to determine if we're in 10GBASE-KR
+			 * or XFI
+			 */
+			if (pmd_control.s.train_en)
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_10G_KR;
+			else
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_XFI;
+			break;
+		case 4:
+			/*
+			 * Use training to determine if we're in 40GBASE-KR
+			 * or XLAUI
+			 */
+			if (pmd_control.s.train_en)
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_40G_KR4;
+			else
+				qlm_mode[node][qlm] = CVMX_QLM_MODE_XLAUI;
+			break;
+		default:
+			qlm_mode[node][qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+	} else {
+		qlm_mode[node][qlm] = CVMX_QLM_MODE_DISABLED;
+	}
+
+	return qlm_mode[node][qlm];
+}
+
+enum cvmx_qlm_mode __cvmx_qlm_get_mode_cn73xx(int qlm)
+{
+	cvmx_gserx_cfg_t gserx_cfg;
+	/* Cached result per QLM; static so the cache actually persists */
+	static int qlm_mode[7] = { -1, -1, -1, -1, -1, -1, -1 };
+
+	if (qlm > 6) {
+		debug("Invalid QLM(%d) passed\n", qlm);
+		return -1;
+	}
+
+	if (qlm_mode[qlm] != -1)
+		return qlm_mode[qlm];
+
+	gserx_cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+	if (gserx_cfg.s.pcie) {
+		cvmx_pemx_cfg_t pemx_cfg;
+
+		switch (qlm) {
+		case 0: /* Either PEM0 x4 or PEM0 x8 */
+		case 1: /* Either PEM0 x8 or PEM1 x4 */
+		{
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM0 x8 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM0/PEM1 x4 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		case 2: /* Either PEM2 x4 or PEM2 x8 */
+		{
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM2 x8 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM2 x4 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		case 5:
+		case 6:						/* PEM3 x2 */
+			qlm_mode[qlm] = CVMX_QLM_MODE_PCIE_1X2; /* PEM3 x2 */
+			break;
+		case 3: /* Either PEM2 x8 or PEM3 x4 */
+		{
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* PEM2 x8 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else {
+				/* PEM3 x4 */
+				qlm_mode[qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			break;
+		}
+		default:
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+	} else if (gserx_cfg.s.bgx) {
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		cvmx_bgxx_cmr_rx_lmacs_t bgx_cmr_rx_lmacs;
+		cvmx_bgxx_spux_br_pmd_control_t pmd_control;
+		int bgx = 0;
+		int start = 0, end = 4, index;
+		int lane_mask = 0, train_mask = 0;
+		int mux = 0; // 0:BGX2 (DLM5/DLM6), 1:BGX2(DLM5), 2:BGX2(DLM6)
+
+		if (qlm < 4) {
+			bgx = qlm - 2;
+		} else if (qlm == 5 || qlm == 6) {
+			bgx = 2;
+			mux = cvmx_qlm_mux_interface(bgx);
+			if (mux == 0) {
+				start = 0;
+				end = 4;
+			} else if (mux == 1) {
+				start = 0;
+				end = 2;
+			} else if (mux == 2) {
+				start = 2;
+				end = 4;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+				return qlm_mode[qlm];
+			}
+		}
+
+		for (index = start; index < end; index++) {
+			cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(index, bgx));
+			pmd_control.u64 = csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx));
+			lane_mask |= (cmr_config.s.lmac_type << (index * 4));
+			train_mask |= (pmd_control.s.train_en << (index * 4));
+		}
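+
+		/*
+		 * Each LMAC contributes its lmac_type in its own nibble of
+		 * lane_mask (and its train_en bit in train_mask), so e.g.
+		 * four XFI/10G-KR LMACs (type 3) yield 0x3333 and two RXAUI
+		 * LMACs (type 2) on indexes 0 and 1 yield 0x22.
+		 */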
+
+		/* Need to include DLM5 lmacs when only DLM6 is used */
+		if (mux == 2)
+			bgx_cmr_rx_lmacs.u64 = csr_rd(CVMX_BGXX_CMR_RX_LMACS(2));
+		switch (lane_mask) {
+		case 0:
+			if (mux == 1) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_SGMII_2X1;
+			} else if (mux == 2) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_SGMII_2X1;
+				bgx_cmr_rx_lmacs.s.lmacs = 4;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_SGMII;
+			}
+			break;
+		case 0x1:
+			qlm_mode[qlm] = CVMX_QLM_MODE_XAUI;
+			break;
+		case 0x2:
+			if (mux == 1) {
+				// NONE+RXAUI
+				qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI_1X2;
+			} else if (mux == 0) {
+				// RXAUI+SGMII
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 0x202:
+			if (mux == 2) {
+				// RXAUI+RXAUI
+				qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI_1X2;
+				bgx_cmr_rx_lmacs.s.lmacs = 4;
+			} else if (mux == 1) {
+				// RXAUI+RXAUI
+				qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI_1X2;
+			} else if (mux == 0) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 0x22:
+			qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI;
+			break;
+		case 0x3333:
+			/*
+			 * Use training to determine if we're in 10GBASE-KR
+			 * or XFI
+			 */
+			if (train_mask)
+				qlm_mode[qlm] = CVMX_QLM_MODE_10G_KR;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_XFI;
+			break;
+		case 0x4:
+			/*
+			 * Use training to determine if we're in 40GBASE-KR
+			 * or XLAUI
+			 */
+			if (train_mask)
+				qlm_mode[qlm] = CVMX_QLM_MODE_40G_KR4;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_XLAUI;
+			break;
+		case 0x0005:
+			qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_SGMII;
+			break;
+		case 0x3335:
+			if (train_mask)
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_10G_KR;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_XFI;
+			break;
+		case 0x45:
+			if (train_mask)
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_40G_KR4;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_XLAUI;
+			break;
+		case 0x225:
+			qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_RXAUI;
+			break;
+		case 0x15:
+			qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_XAUI;
+			break;
+
+		case 0x200:
+			if (mux == 2) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_RXAUI_1X2;
+				bgx_cmr_rx_lmacs.s.lmacs = 4;
+				break;
+			}
+			fallthrough;
+		case 0x205:
+		case 0x233:
+		case 0x3302:
+		case 0x3305:
+			if (mux == 0)
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		case 0x3300:
+			if (mux == 0) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			} else if (mux == 2) {
+				if (train_mask)
+					qlm_mode[qlm] = CVMX_QLM_MODE_10G_KR_1X2;
+				else
+					qlm_mode[qlm] = CVMX_QLM_MODE_XFI_1X2;
+				bgx_cmr_rx_lmacs.s.lmacs = 4;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 0x33:
+			if (mux == 1 || mux == 2) {
+				if (train_mask)
+					qlm_mode[qlm] = CVMX_QLM_MODE_10G_KR_1X2;
+				else
+					qlm_mode[qlm] = CVMX_QLM_MODE_XFI_1X2;
+				if (mux == 2)
+					bgx_cmr_rx_lmacs.s.lmacs = 4;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		case 0x0035:
+			if (mux == 0)
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			else if (train_mask)
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_10G_KR_1X1;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_RGMII_XFI_1X1;
+			break;
+		case 0x235:
+			if (mux == 0)
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		default:
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+		if (mux == 2) {
+			csr_wr(CVMX_BGXX_CMR_RX_LMACS(2), bgx_cmr_rx_lmacs.u64);
+			csr_wr(CVMX_BGXX_CMR_TX_LMACS(2), bgx_cmr_rx_lmacs.u64);
+		}
+	} else if (gserx_cfg.s.sata) {
+		qlm_mode[qlm] = CVMX_QLM_MODE_SATA_2X1;
+	} else {
+		qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+	}
+
+	return qlm_mode[qlm];
+}
+
+enum cvmx_qlm_mode __cvmx_qlm_get_mode_cnf75xx(int qlm)
+{
+	cvmx_gserx_cfg_t gserx_cfg;
+	/* Cached result per QLM; static so the cache actually persists */
+	static int qlm_mode[9] = { -1, -1, -1, -1, -1, -1, -1, -1, -1 };
+
+	if (qlm >= 9) {
+		debug("Invalid QLM(%d) passed\n", qlm);
+		return -1;
+	}
+
+	if (qlm_mode[qlm] != -1)
+		return qlm_mode[qlm];
+
+	if ((qlm == 2 || qlm == 3) && (OCTEON_IS_MODEL(OCTEON_CNF75XX))) {
+		cvmx_sriox_status_reg_t status_reg;
+		int port = (qlm == 2) ? 0 : 1;
+
+		status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(port));
+		/* FIXME add different width */
+		if (status_reg.s.srio)
+			qlm_mode[qlm] = CVMX_QLM_MODE_SRIO_1X4;
+		else
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+		return qlm_mode[qlm];
+	}
+
+	gserx_cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+	if (gserx_cfg.s.pcie) {
+		switch (qlm) {
+		case 0: /* Either PEM0 x2 or PEM0 x4 */
+		case 1: /* Either PEM1 x2 or PEM0 x4 */
+		{
+			/* FIXME later */
+			qlm_mode[qlm] = CVMX_QLM_MODE_PCIE;
+			break;
+		}
+		default:
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+	} else if (gserx_cfg.s.bgx) {
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		cvmx_bgxx_spux_br_pmd_control_t pmd_control;
+		int bgx = 0;
+		int start = 0, end = 4, index;
+		int lane_mask = 0, train_mask = 0;
+		int mux = 0; // 0:BGX0 (DLM4/DLM5), 1:BGX0(DLM4), 2:BGX0(DLM5)
+		cvmx_gserx_cfg_t gser1, gser2;
+
+		gser1.u64 = csr_rd(CVMX_GSERX_CFG(4));
+		gser2.u64 = csr_rd(CVMX_GSERX_CFG(5));
+		if (gser1.s.bgx && gser2.s.bgx) {
+			start = 0;
+			end = 4;
+		} else if (gser1.s.bgx) {
+			start = 0;
+			end = 2;
+			mux = 1;
+		} else if (gser2.s.bgx) {
+			start = 2;
+			end = 4;
+			mux = 2;
+		} else {
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			return qlm_mode[qlm];
+		}
+
+		for (index = start; index < end; index++) {
+			cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(index, bgx));
+			pmd_control.u64 = csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx));
+			lane_mask |= (cmr_config.s.lmac_type << (index * 4));
+			train_mask |= (pmd_control.s.train_en << (index * 4));
+		}
+
+		switch (lane_mask) {
+		case 0:
+			if (mux == 1 || mux == 2)
+				qlm_mode[qlm] = CVMX_QLM_MODE_SGMII_2X1;
+			else
+				qlm_mode[qlm] = CVMX_QLM_MODE_SGMII;
+			break;
+		case 0x3300:
+			if (mux == 0) {
+				qlm_mode[qlm] = CVMX_QLM_MODE_MIXED;
+			} else if (mux == 2) {
+				if (train_mask)
+					qlm_mode[qlm] = CVMX_QLM_MODE_10G_KR_1X2;
+				else
+					qlm_mode[qlm] = CVMX_QLM_MODE_XFI_1X2;
+			} else {
+				qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			}
+			break;
+		default:
+			qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+			break;
+		}
+	} else {
+		qlm_mode[qlm] = CVMX_QLM_MODE_DISABLED;
+	}
+
+	return qlm_mode[qlm];
+}
+
+/*
+ * Read QLM and return mode.
+ */
+enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm)
+{
+	if (OCTEON_IS_OCTEON2())
+		return __cvmx_qlm_get_mode_cn6xxx(qlm);
+	else if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return __cvmx_qlm_get_mode_cn70xx(qlm);
+	else if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return cvmx_qlm_get_mode_cn78xx(cvmx_get_node_num(), qlm);
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return __cvmx_qlm_get_mode_cn73xx(qlm);
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return __cvmx_qlm_get_mode_cnf75xx(qlm);
+
+	return CVMX_QLM_MODE_DISABLED;
+}
+
+int cvmx_qlm_measure_clock_cn7xxx(int node, int qlm)
+{
+	cvmx_gserx_cfg_t cfg;
+	cvmx_gserx_refclk_sel_t refclk_sel;
+	cvmx_gserx_lane_mode_t lane_mode;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX)) {
+		if (node != 0 || qlm >= 7)
+			return -1;
+	} else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		if (qlm >= 8 || node > 1)
+			return -1; /* FIXME for OCI */
+	} else {
+		debug("%s: Unsupported OCTEON model\n", __func__);
+		return -1;
+	}
+
+	cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+
+	if (cfg.s.pcie) {
+		refclk_sel.u64 = csr_rd_node(node, CVMX_GSERX_REFCLK_SEL(qlm));
+		if (refclk_sel.s.pcie_refclk125)
+			return REF_125MHZ; /* Ref 125 MHz */
+		else
+			return REF_100MHZ; /* Ref 100 MHz */
+	}
+
+	lane_mode.u64 = csr_rd_node(node, CVMX_GSERX_LANE_MODE(qlm));
+	switch (lane_mode.s.lmode) {
+	case R_25G_REFCLK100:
+		return REF_100MHZ;
+	case R_5G_REFCLK100:
+		return REF_100MHZ;
+	case R_8G_REFCLK100:
+		return REF_100MHZ;
+	case R_125G_REFCLK15625_KX:
+		return REF_156MHZ;
+	case R_3125G_REFCLK15625_XAUI:
+		return REF_156MHZ;
+	case R_103125G_REFCLK15625_KR:
+		return REF_156MHZ;
+	case R_125G_REFCLK15625_SGMII:
+		return REF_156MHZ;
+	case R_5G_REFCLK15625_QSGMII:
+		return REF_156MHZ;
+	case R_625G_REFCLK15625_RXAUI:
+		return REF_156MHZ;
+	case R_25G_REFCLK125:
+		return REF_125MHZ;
+	case R_5G_REFCLK125:
+		return REF_125MHZ;
+	case R_8G_REFCLK125:
+		return REF_125MHZ;
+	default:
+		return 0;
+	}
+}
+
+/**
+ * Measure the reference clock of a QLM on a multi-node setup
+ *
+ * @param node   node to measure
+ * @param qlm    QLM to measure
+ *
+ * @return Clock rate in Hz
+ */
+int cvmx_qlm_measure_clock_node(int node, int qlm)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_MULTINODE))
+		return cvmx_qlm_measure_clock_cn7xxx(node, qlm);
+	else
+		return cvmx_qlm_measure_clock(qlm);
+}
+
+/**
+ * Measure the reference clock of a QLM
+ *
+ * @param qlm    QLM to measure
+ *
+ * @return Clock rate in Hz
+ */
+int cvmx_qlm_measure_clock(int qlm)
+{
+	cvmx_mio_ptp_clock_cfg_t ptp_clock;
+	u64 count;
+	u64 start_cycle, stop_cycle;
+	int evcnt_offset = 0x10;
+	int incr_count = 1;
+	/* Cached per-QLM result; static so the cache actually persists */
+	static int ref_clock[16];
+
+	if (ref_clock[qlm])
+		return ref_clock[qlm];
+
+	if (OCTEON_IS_OCTEON3() && !OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return cvmx_qlm_measure_clock_cn7xxx(cvmx_get_node_num(), qlm);
+
+	if (OCTEON_IS_MODEL(OCTEON_CN70XX) && qlm == 0) {
+		cvmx_gserx_dlmx_ref_clkdiv2_t ref_clkdiv2;
+
+		ref_clkdiv2.u64 = csr_rd(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0));
+		if (ref_clkdiv2.s.ref_clkdiv2)
+			incr_count = 2;
+	}
+
+	/* Fix reference clock for OCI QLMs */
+
+	/* Disable the PTP event counter while we configure it */
+	ptp_clock.u64 = csr_rd(CVMX_MIO_PTP_CLOCK_CFG); /* For CN63XXp1 errata */
+	ptp_clock.s.evcnt_en = 0;
+	csr_wr(CVMX_MIO_PTP_CLOCK_CFG, ptp_clock.u64);
+
+	/* Count on rising edge, Choose which QLM to count */
+	ptp_clock.u64 = csr_rd(CVMX_MIO_PTP_CLOCK_CFG); /* For CN63XXp1 errata */
+	ptp_clock.s.evcnt_edge = 0;
+	ptp_clock.s.evcnt_in = evcnt_offset + qlm;
+	csr_wr(CVMX_MIO_PTP_CLOCK_CFG, ptp_clock.u64);
+
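+	/*
+	 * Writes to MIO_PTP_EVT_CNT add the written value to the running
+	 * count, which is why the counter is cleared below by writing back
+	 * the negated current value rather than zero, and then loaded by
+	 * writing one billion.
+	 */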
+	/* Clear MIO_PTP_EVT_CNT */
+	csr_rd(CVMX_MIO_PTP_EVT_CNT); /* For CN63XXp1 errata */
+	count = csr_rd(CVMX_MIO_PTP_EVT_CNT);
+	csr_wr(CVMX_MIO_PTP_EVT_CNT, -count);
+
+	/* Set MIO_PTP_EVT_CNT to 1 billion */
+	csr_wr(CVMX_MIO_PTP_EVT_CNT, 1000000000);
+
+	/* Enable the PTP event counter */
+	ptp_clock.u64 = csr_rd(CVMX_MIO_PTP_CLOCK_CFG); /* For CN63XXp1 errata */
+	ptp_clock.s.evcnt_en = 1;
+	csr_wr(CVMX_MIO_PTP_CLOCK_CFG, ptp_clock.u64);
+
+	start_cycle = get_ticks();
+	/* Wait for 50ms */
+	mdelay(50);
+
+	/* Read the counter */
+	csr_rd(CVMX_MIO_PTP_EVT_CNT); /* For CN63XXp1 errata */
+	count = csr_rd(CVMX_MIO_PTP_EVT_CNT);
+	stop_cycle = get_ticks();
+
+	/* Disable the PTP event counter */
+	ptp_clock.u64 = csr_rd(CVMX_MIO_PTP_CLOCK_CFG); /* For CN63XXp1 errata */
+	ptp_clock.s.evcnt_en = 0;
+	csr_wr(CVMX_MIO_PTP_CLOCK_CFG, ptp_clock.u64);
+
+	/* Clock counted down, so reverse it */
+	count = 1000000000 - count;
+	count *= incr_count;
+
+	/* Return the rate */
+	ref_clock[qlm] = count * gd->cpu_clk / (stop_cycle - start_cycle);
+
+	return ref_clock[qlm];
+}
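+
+/*
+ * Worked example for the measurement above (assumed numbers): if the
+ * event counter counts 5000000 edges while get_ticks() advances by
+ * 50000000 at gd->cpu_clk = 1000000000 Hz, the computed reference clock
+ * is 5000000 * 1000000000 / 50000000 = 100000000 Hz (100 MHz).
+ */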
+
+/*
+ * Perform RX equalization on a QLM
+ *
+ * @param node	Node the QLM is on
+ * @param qlm	QLM to perform RX equalization on
+ * @param lane	Lane to use, or -1 for all lanes
+ *
+ * @return Zero on success, negative if any lane failed RX equalization
+ */
+int __cvmx_qlm_rx_equalization(int node, int qlm, int lane)
+{
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_br_rxx_ctl_t rxx_ctl;
+	cvmx_gserx_br_rxx_eer_t rxx_eer;
+	cvmx_gserx_rx_eie_detsts_t eie_detsts;
+	int fail, gbaud, l, lane_mask;
+	enum cvmx_qlm_mode mode;
+	int max_lanes = cvmx_qlm_get_lanes(qlm);
+	cvmx_gserx_lane_mode_t lmode;
+	cvmx_gserx_lane_px_mode_1_t pmode_1;
+	int pending = 0;
+	u64 timeout;
+
+	/* Don't touch QLMs if it is reset or powered down */
+	phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+	if (phy_ctl.s.phy_pd || phy_ctl.s.phy_reset)
+		return -1;
+
+	/*
+	 * Check whether GSER PRBS pattern matcher is enabled on any of the
+	 * applicable lanes. Can't complete RX Equalization while pattern
+	 * matcher is enabled because it causes errors
+	 */
+	for (l = 0; l < max_lanes; l++) {
+		cvmx_gserx_lanex_lbert_cfg_t lbert_cfg;
+
+		if (lane != -1 && lane != l)
+			continue;
+
+		lbert_cfg.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_LBERT_CFG(l, qlm));
+		if (lbert_cfg.s.lbert_pm_en == 1)
+			return -1;
+	}
+
+	/* Get Lane Mode */
+	lmode.u64 = csr_rd_node(node, CVMX_GSERX_LANE_MODE(qlm));
+
+	/*
+	 * Check to see if in VMA manual mode is set. If in VMA manual mode
+	 * don't complete rx equalization
+	 */
+	pmode_1.u64 = csr_rd_node(node, CVMX_GSERX_LANE_PX_MODE_1(lmode.s.lmode, qlm));
+	if (pmode_1.s.vma_mm == 1) {
+#ifdef DEBUG_QLM
+		debug("N%d:QLM%d: VMA Manual (manual DFE) selected. Not completing Rx equalization\n",
+		      node, qlm);
+#endif
+		return 0;
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX)) {
+		gbaud = cvmx_qlm_get_gbaud_mhz_node(node, qlm);
+		mode = cvmx_qlm_get_mode_cn78xx(node, qlm);
+	} else {
+		gbaud = cvmx_qlm_get_gbaud_mhz(qlm);
+		mode = cvmx_qlm_get_mode(qlm);
+	}
+
+	/* Only apply RX equalization to lanes running at 6.25 Gbaud or faster */
+	if (qlm < 8) {
+		if (gbaud < 6250)
+			return 0;
+	}
+
+	/* Don't run on PCIe Links */
+	if (mode == CVMX_QLM_MODE_PCIE || mode == CVMX_QLM_MODE_PCIE_1X8 ||
+	    mode == CVMX_QLM_MODE_PCIE_1X2 || mode == CVMX_QLM_MODE_PCIE_2X1)
+		return -1;
+
+	fail = 0;
+
+	/*
+	 * Before completing Rx equalization wait for
+	 * GSERx_RX_EIE_DETSTS[CDRLOCK] to be set.
+	 * This ensures the rx data is valid
+	 */
+	if (lane == -1) {
+		/*
+		 * With lane == -1, check all lanes for CDR lock, e.g.
+		 * cdrlock == 0xf when the QLM has four lanes
+		 */
+		if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_RX_EIE_DETSTS(qlm),
+					       cvmx_gserx_rx_eie_detsts_t, cdrlock, ==,
+					       (1 << max_lanes) - 1, 500)) {
+#ifdef DEBUG_QLM
+			eie_detsts.u64 = csr_rd_node(node, CVMX_GSERX_RX_EIE_DETSTS(qlm));
+			debug("ERROR: %d:QLM%d: CDR Lock not detected for all 4 lanes. CDR_LOCK(0x%x)\n",
+			      node, qlm, eie_detsts.s.cdrlock);
+#endif
+			return -1;
+		}
+	} else {
+		if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_RX_EIE_DETSTS(qlm),
+					       cvmx_gserx_rx_eie_detsts_t, cdrlock, &, (1 << lane),
+					       500)) {
+#ifdef DEBUG_QLM
+			eie_detsts.u64 = csr_rd_node(node, CVMX_GSERX_RX_EIE_DETSTS(qlm));
+			debug("ERROR: %d:QLM%d: CDR Lock not detected for Lane%d CDR_LOCK(0x%x)\n",
+			      node, qlm, lane, eie_detsts.s.cdrlock);
+#endif
+			return -1;
+		}
+	}
+
+	/*
+	 * Errata (GSER-20075) GSER(0..13)_BR_RX3_EER[RXT_ERR] is
+	 * GSER(0..13)_BR_RX2_EER[RXT_ERR]. Since lanes 2-3 trigger at the
+	 * same time, we need to set up lane 3 before we loop through the lanes
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && (lane == -1 || lane == 3)) {
+		/* Enable software control */
+		rxx_ctl.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_CTL(3, qlm));
+		rxx_ctl.s.rxt_swm = 1;
+		csr_wr_node(node, CVMX_GSERX_BR_RXX_CTL(3, qlm), rxx_ctl.u64);
+
+		/* Clear the completion flag */
+		rxx_eer.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_EER(3, qlm));
+		rxx_eer.s.rxt_esv = 0;
+		csr_wr_node(node, CVMX_GSERX_BR_RXX_EER(3, qlm), rxx_eer.u64);
+		/* Initiate a new request on lane 2 */
+		if (lane == 3) {
+			rxx_eer.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_EER(2, qlm));
+			rxx_eer.s.rxt_eer = 1;
+			csr_wr_node(node, CVMX_GSERX_BR_RXX_EER(2, qlm), rxx_eer.u64);
+		}
+	}
+
+	for (l = 0; l < max_lanes; l++) {
+		if (lane != -1 && lane != l)
+			continue;
+
+		/*
+		 * Skip lane 3 on 78p1.x due to Errata (GSER-20075).
+		 * Handled above
+		 */
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && l == 3) {
+			/*
+			 * Need to add lane 3 to pending list for 78xx
+			 * pass 1.x
+			 */
+			pending |= 1 << 3;
+			continue;
+		}
+		/* Enable software control */
+		rxx_ctl.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_CTL(l, qlm));
+		rxx_ctl.s.rxt_swm = 1;
+		csr_wr_node(node, CVMX_GSERX_BR_RXX_CTL(l, qlm), rxx_ctl.u64);
+
+		/* Clear the completion flag and initiate a new request */
+		rxx_eer.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_EER(l, qlm));
+		rxx_eer.s.rxt_esv = 0;
+		rxx_eer.s.rxt_eer = 1;
+		csr_wr_node(node, CVMX_GSERX_BR_RXX_EER(l, qlm), rxx_eer.u64);
+		pending |= 1 << l;
+	}
+
+	/*
+	 * Wait up to 250ms, roughly 10x the measured worst case, as XFI/XLAUI
+	 * can take 21-23ms and other interfaces can take 2-3ms.
+	 */
+	timeout = get_timer(0);
+
+	lane_mask = 0;
+	while (pending) {
+		/* Wait for RX equalization to complete */
+		for (l = 0; l < max_lanes; l++) {
+			lane_mask = 1 << l;
+			/* Only check lanes that are pending */
+			if (!(pending & lane_mask))
+				continue;
+
+			/*
+			 * Read the registers for checking Electrical Idle/CDR
+			 * lock and the status of the RX equalization
+			 */
+			eie_detsts.u64 = csr_rd_node(node, CVMX_GSERX_RX_EIE_DETSTS(qlm));
+			rxx_eer.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_EER(l, qlm));
+
+			/*
+			 * Mark failure if lane entered Electrical Idle or lost
+			 * CDR Lock. The bit for the lane will have cleared in
+			 * either EIESTS or CDRLOCK
+			 */
+			if (!(eie_detsts.s.eiests & eie_detsts.s.cdrlock & lane_mask)) {
+				fail |= lane_mask;
+				pending &= ~lane_mask;
+			} else if (rxx_eer.s.rxt_esv) {
+				pending &= ~lane_mask;
+			}
+		}
+
+		/* Breakout of the loop on timeout */
+		if (get_timer(timeout) > 250)
+			break;
+	}
+
+	lane_mask = 0;
+	/* Cleanup and report status */
+	for (l = 0; l < max_lanes; l++) {
+		if (lane != -1 && lane != l)
+			continue;
+
+		lane_mask = 1 << l;
+		rxx_eer.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_EER(l, qlm));
+		/* Switch back to hardware control */
+		rxx_ctl.u64 = csr_rd_node(node, CVMX_GSERX_BR_RXX_CTL(l, qlm));
+		rxx_ctl.s.rxt_swm = 0;
+		csr_wr_node(node, CVMX_GSERX_BR_RXX_CTL(l, qlm), rxx_ctl.u64);
+
+		/* Report status */
+		if (fail & lane_mask) {
+#ifdef DEBUG_QLM
+			debug("%d:QLM%d: Lane%d RX equalization lost CDR Lock or entered Electrical Idle\n",
+			      node, qlm, l);
+#endif
+		} else if ((pending & lane_mask) || !rxx_eer.s.rxt_esv) {
+#ifdef DEBUG_QLM
+			debug("%d:QLM%d: Lane %d RX equalization timeout\n", node, qlm, l);
+#endif
+			fail |= 1 << l;
+		} else {
+#ifdef DEBUG_QLM
+			char *dir_label[4] = { "Hold", "Inc", "Dec", "Hold" };
+#ifdef DEBUG_QLM_RX
+			cvmx_gserx_lanex_rx_aeq_out_0_t rx_aeq_out_0;
+			cvmx_gserx_lanex_rx_aeq_out_1_t rx_aeq_out_1;
+			cvmx_gserx_lanex_rx_aeq_out_2_t rx_aeq_out_2;
+			cvmx_gserx_lanex_rx_vma_status_0_t rx_vma_status_0;
+#endif
+			debug("%d:QLM%d: Lane%d: RX equalization completed.\n", node, qlm, l);
+			debug("    Tx Direction Hints TXPRE: %s, TXMAIN: %s, TXPOST: %s, Figure of Merit: %d\n",
+			      dir_label[(rxx_eer.s.rxt_esm) & 0x3],
+			      dir_label[((rxx_eer.s.rxt_esm) >> 2) & 0x3],
+			      dir_label[((rxx_eer.s.rxt_esm) >> 4) & 0x3], rxx_eer.s.rxt_esm >> 6);
+
+#ifdef DEBUG_QLM_RX
+			rx_aeq_out_0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_AEQ_OUT_0(l, qlm));
+			rx_aeq_out_1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_AEQ_OUT_1(l, qlm));
+			rx_aeq_out_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_AEQ_OUT_2(l, qlm));
+			rx_vma_status_0.u64 =
+				csr_rd_node(node, CVMX_GSERX_LANEX_RX_VMA_STATUS_0(l, qlm));
+			debug("    DFE Tap1:%lu, Tap2:%ld, Tap3:%ld, Tap4:%ld, Tap5:%ld\n",
+			      (unsigned int long)cvmx_bit_extract(rx_aeq_out_1.u64, 0, 5),
+			      (unsigned int long)cvmx_bit_extract_smag(rx_aeq_out_1.u64, 5, 9),
+			      (unsigned int long)cvmx_bit_extract_smag(rx_aeq_out_1.u64, 10, 14),
+			      (unsigned int long)cvmx_bit_extract_smag(rx_aeq_out_0.u64, 0, 4),
+			      (unsigned int long)cvmx_bit_extract_smag(rx_aeq_out_0.u64, 5, 9));
+			debug("    Pre-CTLE Gain:%lu, Post-CTLE Gain:%lu, CTLE Peak:%lu, CTLE Pole:%lu\n",
+			      (unsigned int long)cvmx_bit_extract(rx_aeq_out_2.u64, 4, 4),
+			      (unsigned int long)cvmx_bit_extract(rx_aeq_out_2.u64, 0, 4),
+			      (unsigned int long)cvmx_bit_extract(rx_vma_status_0.u64, 2, 4),
+			      (unsigned int long)cvmx_bit_extract(rx_vma_status_0.u64, 0, 2));
+#endif
+#endif
+		}
+	}
+
+	return (fail) ? -1 : 0;
+}
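+
+/*
+ * Example usage (a sketch; the QLM number is hypothetical): run RX
+ * equalization on all lanes of QLM 4 on the local node and log any
+ * failure:
+ *
+ *	if (__cvmx_qlm_rx_equalization(0, 4, -1))
+ *		debug("QLM4: RX equalization failed on at least one lane\n");
+ */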
+
+/**
+ * Errata GSER-27882: GSER 10GBASE-KR transmit equalizer training
+ * may not update the PHY Tx taps. This function is not static so
+ * that it can be shared with the BGX KR code.
+ *
+ * @param node	Node to apply errata workaround
+ * @param qlm	QLM to apply errata workaround
+ * @param lane	Lane to apply the errata
+ */
+int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane)
+{
+	cvmx_gserx_lanex_pcs_ctlifc_0_t clifc0;
+	cvmx_gserx_lanex_pcs_ctlifc_2_t clifc2;
+
+	if (!(OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_0) || OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_1) ||
+	      OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_2) || OCTEON_IS_MODEL(OCTEON_CNF75XX_PASS1_0) ||
+	      OCTEON_IS_MODEL(OCTEON_CN78XX)))
+		return 0;
+
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_RX_EIE_DETSTS(qlm),
+				       cvmx_gserx_rx_eie_detsts_t, cdrlock, &,
+				       (1 << lane), 200))
+		return -1;
+
+	clifc0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm));
+	clifc0.s.cfg_tx_coeff_req_ovrrd_val = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm), clifc0.u64);
+	clifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	clifc2.s.cfg_tx_coeff_req_ovrrd_en = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), clifc2.u64);
+	clifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	clifc2.s.ctlifc_ovrrd_req = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), clifc2.u64);
+	clifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	clifc2.s.cfg_tx_coeff_req_ovrrd_en = 0;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), clifc2.u64);
+	clifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	clifc2.s.ctlifc_ovrrd_req = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), clifc2.u64);
+	return 0;
+}
+
+/**
+ * Applies the RX EQ default settings update (CTLE bias) to support longer
+ * SERDES channels
+ *
+ * @INTERNAL
+ *
+ * @param node	Node number to configure
+ * @param qlm	QLM number to configure
+ */
+void cvmx_qlm_gser_errata_25992(int node, int qlm)
+{
+	int lane;
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+
+	if (!(OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_0) || OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_1) ||
+	      OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_2) || OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X)))
+		return;
+
+	for (lane = 0; lane < num_lanes; lane++) {
+		cvmx_gserx_lanex_rx_ctle_ctrl_t rx_ctle_ctrl;
+		cvmx_gserx_lanex_rx_cfg_4_t rx_cfg_4;
+
+		rx_ctle_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CTLE_CTRL(lane, qlm));
+		rx_ctle_ctrl.s.pcs_sds_rx_ctle_bias_ctrl = 3;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_CTLE_CTRL(lane, qlm), rx_ctle_ctrl.u64);
+
+		rx_cfg_4.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CFG_4(lane, qlm));
+		rx_cfg_4.s.cfg_rx_errdet_ctrl = 0xcd6f;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_CFG_4(lane, qlm), rx_cfg_4.u64);
+	}
+}
+
+void cvmx_qlm_display_registers(int qlm)
+{
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+	int lane;
+	const __cvmx_qlm_jtag_field_t *ptr = cvmx_qlm_jtag_get_field();
+
+	debug("%29s", "Field[<stop bit>:<start bit>]");
+	for (lane = 0; lane < num_lanes; lane++)
+		debug("\t      Lane %d", lane);
+	debug("\n");
+
+	while (ptr && ptr->name) {
+		debug("%20s[%3d:%3d]", ptr->name, ptr->stop_bit, ptr->start_bit);
+		for (lane = 0; lane < num_lanes; lane++) {
+			u64 val;
+			int tx_byp = 0;
+
+			/*
+			 * Make sure serdes_tx_byp is set for displaying
+			 * TX amplitude and TX demphasis field values.
+			 */
+			if (strncmp(ptr->name, "biasdrv_", 8) == 0 ||
+			    strncmp(ptr->name, "tcoeff_", 7) == 0) {
+				tx_byp = cvmx_qlm_jtag_get(qlm, lane, "serdes_tx_byp");
+				if (tx_byp == 0) {
+					debug("\t \t");
+					continue;
+				}
+			}
+			val = cvmx_qlm_jtag_get(qlm, lane, ptr->name);
+			debug("\t%4llu (0x%04llx)", (unsigned long long)val,
+			      (unsigned long long)val);
+		}
+		debug("\n");
+		ptr++;
+	}
+}
+
+/* ToDo: CVMX_DUMP_GSER removed for now (unused!) */
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 43/50] mips: octeon: Add octeon_fdt.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (41 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 42/50] mips: octeon: Add cvmx-qlm.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 44/50] mips: octeon: Add octeon_qlm.c Stefan Roese
                   ` (9 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import octeon_fdt.c from 2013 U-Boot. It will be used by the drivers
added later to support PCIe and networking on the MIPS Octeon II / III
platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/octeon_fdt.c | 1040 ++++++++++++++++++++++++++++
 1 file changed, 1040 insertions(+)
 create mode 100644 arch/mips/mach-octeon/octeon_fdt.c

diff --git a/arch/mips/mach-octeon/octeon_fdt.c b/arch/mips/mach-octeon/octeon_fdt.c
new file mode 100644
index 0000000000..199f692516
--- /dev/null
+++ b/arch/mips/mach-octeon/octeon_fdt.c
@@ -0,0 +1,1040 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#include <env.h>
+#include <log.h>
+#include <i2c.h>
+#include <net.h>
+#include <dm/device.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-csr.h>
+#include <mach/cvmx-bootmem.h>
+#include <mach/octeon-model.h>
+#include <mach/octeon_eth.h>
+#include <mach/octeon_fdt.h>
+#include <mach/cvmx-helper-fdt.h>
+#include <mach/cvmx-helper-gpio.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/octeon-feature.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <asm/gpio.h>
+
+#ifdef CONFIG_PCA953X
+#include <pca953x.h>
+#endif
+#ifdef CONFIG_PCF857X
+#include <pcf857x.h>
+#endif
+#ifdef CONFIG_PCA9698
+#include <pca9698.h>
+#endif
+#ifdef CONFIG_PCA9554
+#include <pca9554.h>
+#endif
+#ifdef CONFIG_PCA9555
+#include <pca9555.h>
+#endif
+
+DECLARE_GLOBAL_DATA_PTR;
+
+#ifdef CONFIG_PCA9554
+static const char * const pca9554_gpio_list[] = {
+	"pca9554",
+	"nxp,pca9554",
+	"ti,pca9554",
+	NULL,
+};
+#endif
+
+#ifdef CONFIG_PCA9555
+static const char * const pca9555_gpio_list[] = {
+	"pca9535",    "nxp,pca9535", "pca9539", "nxp,pca9539", "pca9555",
+	"nxp,pca9555", "ti,pca9555", "max7312", "maxim,max7312", "max7313",
+	"maxim,max7313", "tca6416", "tca9539",    NULL,
+};
+#endif
+
+#ifdef CONFIG_PCA9698
+/** List of compatible strings supported by pca9698 driver */
+static const char * const pca9698_gpio_list[] = {
+	"nxp,pca9505", "pca9505", "nxp,pca9698", "pca9698", NULL,
+};
+#endif
+
+#ifdef CONFIG_PCA953X
+/** List of compatible strings supported by pca953x driver */
+static const char * const pca953x_gpio_list[] = {
+	"nxp,pca9534", "nxp,pca9535", "nxp,pca9536", "nxp,pca9537", "nxp,pca9538", "nxp,pca9539",
+	"nxp,pca953x", "nxp,pca9554", "nxp,pca9555", "nxp,pca9556", "nxp,pca9557", "nxp,pca6107",
+	"pca9534",     "pca9535",     "pca9536",     "pca9537",	    "pca9538",	   "pca9539",
+	"pca953x",     "pca9554",     "pca9555",     "pca9556",	    "pca9557",	   "max7310",
+	"max7312",     "max7313",     "max7315",     "pca6107",	    "tca6408",	   "tca6416",
+	"tca9555",     NULL
+};
+#endif
+
+#ifdef CONFIG_PHY_VITESSE
+static const char * const vitesse_vsc8488_gpio_list[] = {
+	"vitesse,vsc8486",   "microsemi,vsc8486", "vitesse,vsc8488",
+	"microsemi,vsc8488", "vitesse,vsc8489",	  "microsemi,vsc8489",
+	"vitesse,vsc8490",   "microsemi,vsc8490", NULL
+};
+#endif
+
+/** List of compatible strings supported by Octeon driver */
+static const char * const octeon_gpio_list[] = {
+	"cavium,octeon-7890-gpio",
+	"cavium,octeon-3860-gpio",
+	NULL
+};
+
+/**
+ * Trims nodes from the flat device tree.
+ *
+ * @param fdt - pointer to working FDT, usually in gd->fdt_blob
+ * @param fdt_key - key to preserve.  All non-matching keys are removed
+ * @param trim_name - name of property to look for.  If NULL use
+ *		      'cavium,qlm-trim'
+ *
+ * The key should look something like device #, type where device # is a
+ * number from 0-9 and type is a string describing the type.  For QLM
+ * operations this would typically contain the QLM number followed by
+ * the type in the device tree, like "0,xaui", "0,sgmii", etc.  This function
+ * will trim all items in the device tree which match the device number but
+ * have a type which does not match.  For example, if a QLM has a xaui module
+ * installed on QLM 0 and "0,xaui" is passed as a key, then all FDT nodes that
+ * have "0,xaui" will be preserved but all others, i.e. "0,sgmii" will be
+ * removed.
+ *
+ * Note that the trim_name must also match.  If trim_name is NULL then it
+ * looks for the property "cavium,qlm-trim".
+ *
+ * Also, when the trim_name is "cavium,qlm-trim" or NULL, the interfaces
+ * will also be renamed based on their register values.
+ *
+ * For example, if a PIP interface is named "interface@W" and has the property
+ * reg = <0> then the interface will be renamed after this function to
+ * interface@0.
+ *
+ * @return 0 for success.
+ */
+int __octeon_fdt_patch(void *fdt, const char *fdt_key, const char *trim_name)
+{
+	bool rename = !trim_name || !strcmp(trim_name, "cavium,qlm-trim");
+
+	return octeon_fdt_patch_rename(fdt, fdt_key, trim_name, rename, NULL, NULL);
+}
+
+int octeon_fdt_patch(void *fdt, const char *fdt_key, const char *trim_name)
+	__attribute__((weak, alias("__octeon_fdt_patch")));
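+
+/*
+ * Example (illustrative only; the key is hypothetical): a board that
+ * detects an SGMII module on QLM 0 could prune the working FDT with
+ *
+ *	ret = octeon_fdt_patch(working_fdt, "0,sgmii", NULL);
+ *
+ * All nodes whose "cavium,qlm-trim" stringlist contains "0,sgmii" are
+ * preserved, while QLM 0 nodes tagged with another mode, e.g. "0,xaui",
+ * are NOPed out along with their "cavium,qlm-trim-alias" aliases.
+ */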
+
+/**
+ * Trims nodes from the flat device tree.
+ *
+ * @param fdt - pointer to working FDT, usually in gd->fdt_blob
+ * @param fdt_key - key to preserve.  All non-matching keys are removed
+ * @param trim_name - name of property to look for.  If NULL use
+ *		      'cavium,qlm-trim'
+ * @param rename - set to TRUE to rename interfaces.
+ * @param callback - function to call on matched nodes.
+ * @param cbarg - passed to callback.
+ *
+ * The key should look something like "<device #>,<type>", where device #
+ * is a number from 0-9 and type is a string describing the type.  For QLM
+ * operations this would typically contain the QLM number followed by
+ * the type in the device tree, like "0,xaui", "0,sgmii", etc.  This function
+ * will trim all items in the device tree which match the device number but
+ * have a type which does not match.  For example, if an XAUI module is
+ * installed on QLM 0 and "0,xaui" is passed as a key, then all FDT nodes
+ * that have "0,xaui" will be preserved but all others, e.g. "0,sgmii", will
+ * be removed.
+ *
+ * Note that the trim_name must also match.  If trim_name is NULL then it
+ * looks for the property "cavium,qlm-trim".
+ *
+ * Also, when trim_name is "cavium,qlm-trim" or NULL, the interfaces will
+ * also be renamed based on their register values.
+ *
+ * For example, if a PIP interface is named "interface@W" and has the
+ * property reg = <0>, then this function will rename it to "interface@0".
+ *
+ * @return 0 for success.
+ */
+int octeon_fdt_patch_rename(void *fdt, const char *fdt_key,
+			    const char *trim_name, bool rename,
+			    void (*callback)(void *fdt, int offset, void *arg),
+			    void *cbarg)
+	__attribute__((weak, alias("__octeon_fdt_patch_rename")));
+
+int __octeon_fdt_patch_rename(void *fdt, const char *fdt_key,
+			      const char *trim_name, bool rename,
+			      void (*callback)(void *fdt, int offset, void *arg),
+			      void *cbarg)
+{
+	int fdt_key_len;
+	int offset, next_offset;
+	int aliases;
+	const void *aprop;
+	char qlm[32];
+	char *mode;
+	int qlm_key_len;
+	int rc;
+	int cpu_node;
+
+	if (!trim_name)
+		trim_name = "cavium,qlm-trim";
+
+	strncpy(qlm, fdt_key, sizeof(qlm) - 1);
+	qlm[sizeof(qlm) - 1] = '\0';
+	mode = qlm;
+	strsep(&mode, ",");
+	qlm_key_len = strlen(qlm);
+
+	debug("In %s: Patching FDT header at 0x%p with key \"%s\"\n", __func__, fdt, fdt_key);
+	if (!fdt || fdt_check_header(fdt) != 0) {
+		printf("%s: Invalid device tree\n", __func__);
+		return -1;
+	}
+
+	fdt_key_len = strlen(fdt_key) + 1;
+
+	/* Prune out the unwanted parts based on the QLM mode.  */
+	offset = 0;
+	for (offset = fdt_next_node(fdt, offset, NULL); offset >= 0; offset = next_offset) {
+		int len;
+		const char *val;
+		const char *val_comma;
+
+		next_offset = fdt_next_node(fdt, offset, NULL);
+
+		val = fdt_getprop(fdt, offset, trim_name, &len);
+		if (!val)
+			continue;
+
+		debug("fdt found trim name %s, comparing key \"%s\"(%d) with \"%s\"(%d)\n",
+		      trim_name, fdt_key, fdt_key_len, val, len);
+		val_comma = strchr(val, ',');
+		if (!val_comma || (val_comma - val) != qlm_key_len)
+			continue;
+		if (strncmp(val, qlm, qlm_key_len) != 0)
+			continue; /* Not this QLM. */
+
+		debug("fdt key number \"%s\" matches\n", val);
+		if (!fdt_stringlist_contains(val, len, fdt_key)) {
+			debug("Key \"%s\" does not match \"%s\"\n", val, fdt_key);
+			/* This QLM, but wrong mode.  Delete it. */
+			/* See if there's an alias that needs deleting */
+			val = fdt_getprop(fdt, offset, "cavium,qlm-trim-alias", NULL);
+			if (val) {
+				debug("Trimming alias \"%s\"\n", val);
+				aliases = fdt_path_offset(fdt, "/aliases");
+				if (aliases) {
+					aprop = fdt_getprop(fdt, aliases, val, NULL);
+					if (aprop) {
+						rc = fdt_nop_property(fdt, aliases, val);
+						if (rc) {
+							printf("Error: Could not NOP alias %s in fdt\n",
+							       val);
+						}
+					} else {
+						printf("Error: could not find /aliases/%s in device tree\n",
+						       val);
+					}
+				} else {
+					puts("Error: could not find /aliases in device tree\n");
+				}
+			}
+			debug("fdt trimming matching key %s\n", fdt_key);
+			next_offset = fdt_parent_offset(fdt, offset);
+			rc = fdt_nop_node(fdt, offset);
+			if (rc)
+				printf("Error %d noping node in device tree\n", rc);
+		}
+	}
+
+	debug("%s: Starting pass 2 for key %s\n", __func__, fdt_key);
+	/* Second pass: Rewrite names and remove key properties.  */
+	offset = -1;
+	for (offset = fdt_next_node(fdt, offset, NULL); offset >= 0; offset = next_offset) {
+		int len;
+		const char *val = fdt_getprop(fdt, offset, trim_name, &len);
+
+		next_offset = fdt_next_node(fdt, offset, NULL);
+
+		if (!val)
+			continue;
+		debug("Searching stringlist %s for %s\n", val, fdt_key);
+		if (fdt_stringlist_contains(val, len, fdt_key)) {
+			char new_name[64];
+			const char *name;
+			const char *at;
+			int reg;
+
+			debug("Found key %s at offset 0x%x\n", fdt_key, offset);
+			fdt_nop_property(fdt, offset, trim_name);
+
+			if (rename) {
+				name = fdt_get_name(fdt, offset, NULL);
+				debug("  name: %s\n", name);
+				if (!name)
+					continue;
+				at = strchr(name, '@');
+				if (!at)
+					continue;
+
+				reg = fdtdec_get_int(fdt, offset, "reg", -1);
+				if (reg == -1)
+					continue;
+
+				debug("  reg: %d\n", reg);
+				len = at - name + 1;
+				debug("  len: %d\n", len);
+				if (len + 9 >= sizeof(new_name))
+					continue;
+
+				memcpy(new_name, name, len);
+				cpu_node = cvmx_fdt_get_cpu_node(fdt, offset);
+				if (cpu_node > 0)
+					snprintf(new_name + len, sizeof(new_name) - len, "%x_%x",
+						 cpu_node, reg);
+				else
+					sprintf(new_name + len, "%x", reg);
+				debug("Renaming cpu node %d %s to %s\n", cpu_node, name, new_name);
+				fdt_set_name(fdt, offset, new_name);
+			}
+			if (callback)
+				callback(fdt, offset, cbarg);
+
+			/* Structure may have changed, start at the beginning. */
+			next_offset = 0;
+		}
+	}
+
+	return 0;
+}
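+
+/*
+ * Renaming example: with rename enabled, a preserved node named
+ * "interface@W" that carries reg = <0> is rewritten to "interface@0".
+ * On multi-node (OCX) systems the CPU node number is prefixed, so the
+ * same node on CPU node 1 would become "interface@1_0".
+ */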
+
+#ifdef CONFIG_CMD_NET
+static void octeon_set_one_fdt_mac(int node, uint64_t *mac)
+{
+	u8 mac_addr[6];
+	int r;
+
+	mac_addr[5] = *mac & 0xff;
+	mac_addr[4] = (*mac >> 8) & 0xff;
+	mac_addr[3] = (*mac >> 16) & 0xff;
+	mac_addr[2] = (*mac >> 24) & 0xff;
+	mac_addr[1] = (*mac >> 32) & 0xff;
+	mac_addr[0] = (*mac >> 40) & 0xff;
+
+	r = fdt_setprop_inplace(working_fdt, node, "local-mac-address", mac_addr, 6);
+	if (r == 0)
+		*mac = *mac + 1;
+}
+
+static uint64_t convert_mac(const u8 mac_addr[6])
+{
+	int i;
+	u64 mac = 0;
+
+	for (i = 0; i < 6; i++)
+		mac = (mac << 8) | mac_addr[i];
+	return mac;
+}
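+
+/*
+ * Example: for mac_addr = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05 },
+ * convert_mac() returns 0x000102030405ULL. octeon_set_one_fdt_mac()
+ * writes the bytes back out in the same network order and increments
+ * the value, so consecutive ports receive consecutive MAC addresses.
+ */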
+
+/**
+ * Fix up the MAC address in the flat device tree based on the MAC address
+ * stored in ethaddr or in the board descriptor.
+ *
+ * NOTE: This function is weak and an alias for __octeon_fixup_fdt_mac_addr.
+ */
+void octeon_fixup_fdt_mac_addr(void) __attribute__((weak, alias("__octeon_fixup_fdt_mac_addr")));
+
+void __octeon_fixup_fdt_mac_addr(void)
+{
+	int node, pip, interface, ethernet;
+	int i, e;
+	u64 mac = 0;
+	uchar mac_addr[6];
+	char name[20];
+	bool env_mac_addr_valid;
+	const char *p;
+
+	debug("%s: env ethaddr: %s\n", __func__, (p = env_get("ethaddr")) ? p : "not set");
+	if (eth_env_get_enetaddr("ethaddr", mac_addr)) {
+		mac = convert_mac(mac_addr);
+		env_mac_addr_valid = true;
+	} else {
+		memcpy(mac_addr, (void *)gd->arch.mac_desc.mac_addr_base, 6);
+		mac = convert_mac(mac_addr);
+		env_mac_addr_valid = false;
+	}
+
+	debug("%s: mac_addr: %pM, board mac: %pM, env valid: %s\n", __func__, mac_addr,
+	      gd->arch.mac_desc.mac_addr_base, env_mac_addr_valid ? "true" : "false");
+
+	if (env_mac_addr_valid && memcmp(mac_addr, (void *)gd->arch.mac_desc.mac_addr_base, 6))
+		printf("Warning: the environment variable ethaddr is set to %pM\n"
+		       "which does not match the board descriptor MAC address %pM.\n"
+		       "Please clear the ethaddr environment variable with the command\n"
+		       "\"setenv -f ethaddr; saveenv\" or change the board MAC address with the command\n"
+		       "\"tlv_eeprom set mac %pM\" to change the board MAC address so that it matches\n"
+		       "the environment address.\n"
+		       "Note: the correct MAC address is usually the one stored in the tlv EEPROM.\n",
+		       mac_addr, gd->arch.mac_desc.mac_addr_base, mac_addr);
+
+	for (i = 0; i < 2; i++) {
+		sprintf(name, "mix%x", i);
+		p = fdt_get_alias(working_fdt, name);
+		if (p) {
+			node = fdt_path_offset(working_fdt, p);
+			if (node > 0)
+				octeon_set_one_fdt_mac(node, &mac);
+		}
+	}
+
+	for (i = 0; i < 2; i++) {
+		sprintf(name, "rgmii%x", i);
+		p = fdt_get_alias(working_fdt, name);
+		if (p) {
+			node = fdt_path_offset(working_fdt, p);
+			if (node > 0)
+				octeon_set_one_fdt_mac(node, &mac);
+		}
+	}
+
+	pip = fdt_node_offset_by_compatible(working_fdt, -1, "cavium,octeon-3860-pip");
+
+	if (pip > 0)
+		for (i = 0; i < 8; i++) {
+			sprintf(name, "interface@%d", i);
+			interface = fdt_subnode_offset(working_fdt, pip, name);
+			if (interface <= 0)
+				continue;
+			for (e = 0; e < 16; e++) {
+				sprintf(name, "ethernet@%d", e);
+				ethernet = fdt_subnode_offset(working_fdt, interface, name);
+				if (ethernet <= 0)
+					continue;
+				octeon_set_one_fdt_mac(ethernet, &mac);
+			}
+		}
+
+	/* Assign 78XX addresses in the order they appear in the device tree. */
+	node = fdt_node_offset_by_compatible(working_fdt, -1, "cavium,octeon-7890-bgx-port");
+	while (node != -FDT_ERR_NOTFOUND) {
+		octeon_set_one_fdt_mac(node, &mac);
+		node = fdt_node_offset_by_compatible(working_fdt, node,
+						     "cavium,octeon-7890-bgx-port");
+	}
+}
+#endif
+
+/**
+ * This function fixes the clock-frequency in the flat device tree for the UART.
+ *
+ * NOTE: This function is weak and an alias for __octeon_fixup_fdt_uart.
+ */
+void octeon_fixup_fdt_uart(void) __attribute__((weak, alias("__octeon_fixup_fdt_uart")));
+
+void __octeon_fixup_fdt_uart(void)
+{
+	u32 clk;
+	int node;
+
+	clk = gd->bus_clk;
+
+	/* Device trees already have good values for fast simulator
+	 * output, real boards need the correct value.
+	 */
+	node = fdt_node_offset_by_compatible(working_fdt, -1, "cavium,octeon-3860-uart");
+	while (node != -FDT_ERR_NOTFOUND) {
+		fdt_setprop_inplace_cell(working_fdt, node, "clock-frequency", clk);
+		node = fdt_node_offset_by_compatible(working_fdt, node, "cavium,octeon-3860-uart");
+	}
+}
+
+/**
+ * This function fills in the /memory portion of the flat device tree.
+ *
+ * NOTE: This function is weak and aliased to __octeon_fixup_fdt_memory.
+ */
+void octeon_fixup_fdt_memory(void) __attribute__((weak, alias("__octeon_fixup_fdt_memory")));
+
+void __octeon_fixup_fdt_memory(void)
+{
+	u64 sizes[3], addresses[3];
+	u64 size_left = gd->ram_size;
+	int num_addresses = 0;
+	int rc;
+	int node;
+
+	sizes[num_addresses] = min_t(u64, size_left, 256 * 1024 * 1024);
+	size_left -= sizes[num_addresses];
+	addresses[num_addresses] = 0;
+	num_addresses++;
+
+	if (size_left > 0) {
+		sizes[num_addresses] = size_left;
+		addresses[num_addresses] = 0x20000000ULL;
+		num_addresses++;
+	}
+
+	node = fdt_path_offset(working_fdt, "/memory");
+	if (node < 0)
+		node = fdt_add_subnode(working_fdt, fdt_path_offset(working_fdt, "/"), "memory");
+	if (node < 0) {
+		printf("Could not add memory section to fdt: %s\n", fdt_strerror(node));
+		return;
+	}
+	rc = fdt_fixup_memory_banks(working_fdt, addresses, sizes, num_addresses);
+	if (rc != 0)
+		printf("%s: fdt_fixup_memory_banks returned %d when adding %d addresses\n",
+		       __func__, rc, num_addresses);
+}
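+
+/*
+ * Worked example: with gd->ram_size = 1 GiB, the first bank covers
+ * 256 MiB at address 0 and the remaining 768 MiB are placed at
+ * 0x20000000, skipping the hole below 512 MiB that the Octeon
+ * physical memory map reserves for I/O.
+ */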
+
+void octeon_fixup_fdt(void) __attribute__((weak, alias("__octeon_fixup_fdt")));
+
+void __octeon_fixup_fdt(void)
+{
+	if (!working_fdt)
+		return;
+
+#ifdef CONFIG_CMD_NET
+	octeon_fixup_fdt_mac_addr();
+#endif /* CONFIG_CMD_NET */
+
+#if !CONFIG_OCTEON_SIM_SPEED
+	octeon_fixup_fdt_uart();
+#endif
+
+	octeon_fixup_fdt_memory();
+}
+
+int __board_fixup_fdt(void)
+{
+	/*
+	 * Nothing to do in this dummy implementation
+	 */
+	return 0;
+}
+
+int board_fixup_fdt(void) __attribute__((weak, alias("__board_fixup_fdt")));
+
+/**
+ * This is a helper function to find the offset of a PHY device given
+ * an Ethernet device.
+ *
+ * @param[in] eth - Ethernet device to search for PHY offset
+ *
+ * @returns offset of phy info in device tree or -1 if not found
+ */
+int octeon_fdt_find_phy(const struct udevice *eth)
+{
+	int aliases;
+	const void *fdt = gd->fdt_blob;
+	const char *pip_path;
+	int pip;
+	char buffer[64];
+	struct octeon_eth_info *oct_eth_info = dev_get_priv(eth);
+	int interface, index;
+	int phandle;
+	int phy;
+	const u32 *phy_handle;
+
+	aliases = fdt_path_offset(fdt, "/aliases");
+	if (aliases < 0) {
+		puts("/aliases not found in device tree!\n");
+		return -1;
+	}
+	pip_path = fdt_getprop(fdt, aliases, "pip", NULL);
+	if (!pip_path) {
+		puts("pip not found in aliases in device tree\n");
+		return -1;
+	}
+	pip = fdt_path_offset(fdt, pip_path);
+	if (pip < 0) {
+		puts("pip not found in device tree\n");
+		return -1;
+	}
+	snprintf(buffer, sizeof(buffer), "interface@%d", oct_eth_info->interface);
+	interface = fdt_subnode_offset(fdt, pip, buffer);
+	if (interface < 0) {
+		printf("%s: interface@%d not found in device tree for %s\n", __func__,
+		       oct_eth_info->interface, eth->name);
+		return -1;
+	}
+	snprintf(buffer, sizeof(buffer), "ethernet@%x", oct_eth_info->index);
+	index = fdt_subnode_offset(fdt, interface, buffer);
+	if (index < 0) {
+		printf("%s: ethernet@%x not found in device tree for %s\n", __func__,
+		       oct_eth_info->index, eth->name);
+		return -1;
+	}
+	phy_handle = fdt_getprop(fdt, index, "phy-handle", NULL);
+	if (!phy_handle) {
+		printf("%s: phy-handle not found for %s\n", __func__, eth->name);
+		return -1;
+	}
+	phandle = fdt32_to_cpu(*phy_handle);
+	phy = fdt_node_offset_by_phandle(fdt, phandle);
+	if (phy < 0) {
+		printf("%s: phy not found for %s\n", __func__, eth->name);
+		return -1;
+	}
+
+	return phy;
+}
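+
+/*
+ * Lookup path (illustrative): for an Ethernet device on interface 0,
+ * index 1, this walks /aliases -> pip -> interface@0 -> ethernet@1,
+ * then follows that node's "phy-handle" phandle and returns the
+ * offset of the referenced PHY node.
+ */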
+
+/**
+ * This helper function returns if a node contains the specified vendor name.
+ *
+ * @param[in]	fdt		pointer to device tree blob
+ * @param	nodeoffset	offset of the tree node
+ * @param[in]	vendor		name of vendor to check
+ *
+ * returns:
+ *	0, if the node has a compatible vendor string property
+ *	1, if the node does not contain the vendor string property
+ *	-FDT_ERR_NOTFOUND, if the given node has no 'compatible' property
+ *	-FDT_ERR_BADOFFSET, if nodeoffset does not refer to a BEGIN_NODE tag
+ *	-FDT_ERR_BADMAGIC,
+ *	-FDT_ERR_BADVERSION,
+ *	-FDT_ERR_BADSTATE,
+ *	-FDT_ERR_BADSTRUCTURE, standard meanings
+ */
+int octeon_fdt_compat_vendor(const void *fdt, int nodeoffset, const char *vendor)
+{
+	const char *strlist;
+	const char *p;
+	int len;
+	int listlen;
+
+	strlist = fdt_getprop(fdt, nodeoffset, "compatible", &listlen);
+	if (!strlist)
+		return listlen;
+
+	len = strlen(vendor);
+
+	debug("%s(%p, %d, %s (%p)) strlist: %s (%p), len: %d\n", __func__, fdt, nodeoffset, vendor,
+	      vendor, strlist, strlist, len);
+	while (listlen >= len) {
+		debug("  Comparing %d bytes of %s and %s\n", len, vendor, strlist);
+		if ((memcmp(vendor, strlist, len) == 0) &&
+		    ((strlist[len] == ',') || (strlist[len] == '\0')))
+			return 0;
+		p = memchr(strlist, '\0', listlen);
+		if (!p)
+			return 1; /* malformed strlist.. */
+		listlen -= (p - strlist) + 1;
+		strlist = p + 1;
+	}
+	return 1;
+}
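+
+/*
+ * Example: a node with compatible = "nxp,pca9555" matches vendor "nxp"
+ * (terminated by the ','), and a bare entry such as "pca9555" matches
+ * vendor "pca9555" (terminated by the NUL); any other vendor string
+ * returns 1.
+ */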
+
+/**
+ * Given a node in the device tree get the OCTEON OCX node number
+ *
+ * @param fdt		pointer to flat device tree
+ * @param nodeoffset	node offset to get OCX node for
+ *
+ * @return the Octeon OCX node number
+ */
+int octeon_fdt_get_soc_node(const void *fdt, int nodeoffset)
+{
+	return 0;
+}
+
+/**
+ * Given a FDT node, check if it is compatible with a list of devices
+ *
+ * @param[in]	fdt		Flat device tree pointer
+ * @param	node_offset	Node offset in device tree
+ * @param[in]	strlist		Array of FDT devices to check, end must be NULL
+ *
+ * @return	0 if at least one device is compatible, 1 if not compatible.
+ */
+int octeon_fdt_node_check_compatible(const void *fdt, int node_offset,
+				     const char *const *strlist)
+{
+	while (*strlist && **strlist) {
+		debug("%s: Checking %s\n", __func__, *strlist);
+		if (!fdt_node_check_compatible(fdt, node_offset, *strlist)) {
+			debug("%s: match found\n", __func__);
+			return 0;
+		}
+		strlist++;
+	}
+	debug("%s: No match found\n", __func__);
+	return 1;
+}
+
+/**
+ * Given a node offset, find the i2c bus number for that node
+ *
+ * @param[in]	fdt	Pointer to flat device tree
+ * @param	node_offset	Node offset in device tree
+ *
+ * @return	i2c bus number or -1 if error
+ */
+int octeon_fdt_i2c_get_bus(const void *fdt, int node_offset)
+{
+	const char *compat;
+	const u64 addresses[] = { 0x1180000001000, 0x1180000001200 };
+	u64 reg;
+	int i;
+	int bus = -1;
+	bool found = false;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CIU3))
+		compat = "cavium,octeon-7890-twsi";
+	else
+		compat = "cavium,octeon-3860-twsi";
+
+	while (node_offset > 0 &&
+	       !(found = !fdt_node_check_compatible(fdt, node_offset, compat))) {
+		node_offset = fdt_parent_offset(fdt, node_offset);
+#ifdef CONFIG_OCTEON_I2C_FDT
+		bus = i2c_get_bus_num_fdt(node_offset);
+		if (bus >= 0) {
+			debug("%s: Found bus 0x%x\n", __func__, bus);
+			return bus;
+		}
+#endif
+	}
+	if (!found) {
+		printf("Error: node %d in device tree is not a child of the I2C bus\n",
+		       node_offset);
+		return -1;
+	}
+
+	reg = fdtdec_get_addr(fdt, node_offset, "reg");
+	if (reg == FDT_ADDR_T_NONE) {
+		printf("%s: Error: invalid reg address for TWSI bus\n", __func__);
+		return -1;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(addresses); i++)
+		if (reg == addresses[i]) {
+			bus = i;
+			break;
+		}
+
+	debug("%s: bus 0x%x\n", __func__, bus);
+	return bus;
+}
+
+/**
+ * Given an offset into the fdt, output the i2c bus and address of the device
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	node	offset in FDT of device
+ * @param[out]	bus	i2c bus number of device
+ * @param[out]	addr	address of device on i2c bus
+ *
+ * @return	0 for success, -1 on error
+ */
+int octeon_fdt_get_i2c_bus_addr(const void *fdt, int node, int *bus, int *addr)
+{
+	*bus = octeon_fdt_i2c_get_bus(fdt, fdt_parent_offset(fdt, node));
+	if (*bus < 0) {
+		printf("%s: Could not get parent i2c bus\n", __func__);
+		return -1;
+	}
+	*addr = fdtdec_get_int(fdt, node, "reg", -1);
+	if (*addr < 0)
+		return -1;
+	return 0;
+}
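+
+/*
+ * Usage sketch (illustrative, assuming CONFIG_PCA953X): given the FDT
+ * offset of an I2C GPIO expander node, recover its bus and address
+ * before accessing it:
+ *
+ *	int bus, addr, value;
+ *
+ *	if (!octeon_fdt_get_i2c_bus_addr(gd->fdt_blob, node, &bus, &addr))
+ *		value = pca953x_get_val(bus, addr);
+ */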
+
+/**
+ * Reads a GPIO pin given the node of the GPIO device in the device tree and
+ * the pin number.
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	phandle	phandle of GPIO node
+ * @param	pin	pin number to read
+ *
+ * @return	0 = pin is low, 1 = pin is high, -1 = error
+ */
+int octeon_fdt_read_gpio(const void *fdt, int phandle, int pin)
+{
+	enum cvmx_gpio_type type;
+	__maybe_unused int node;
+	__maybe_unused int addr;
+	__maybe_unused int bus;
+	__maybe_unused int old_bus;
+	int num_pins;
+	int value;
+
+	type = cvmx_fdt_get_gpio_type(fdt, phandle, &num_pins);
+	if ((pin & 0xff) >= num_pins) {
+		debug("%s: pin number %d out of range\n", __func__, pin);
+		return -1;
+	}
+	switch (type) {
+#ifdef CONFIG_PCA953X
+	case CVMX_GPIO_PIN_PCA953X:
+		node = fdt_node_offset_by_phandle(fdt, phandle);
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+		value = pca953x_get_val(bus, addr);
+		if (value < 0) {
+			printf("%s: Error reading PCA953X GPIO@0x%x:0x%x\n", __func__, bus,
+			       addr);
+			return -1;
+		}
+		value = (value >> pin) & 1;
+		break;
+#endif
+#ifdef CONFIG_PCF857X
+	case CVMX_GPIO_PIN_PCF857X:
+		node = fdt_node_offset_by_phandle(fdt, phandle);
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+		value = pcf857x_get_val(bus, addr);
+		if (value < 0) {
+			printf("%s: Error reading PCF857X GPIO@0x%x:0x%x\n", __func__, bus,
+			       addr);
+			return -1;
+		}
+		value = (value >> pin) & 1;
+		break;
+#endif
+#ifdef CONFIG_PCA9698
+	case CVMX_GPIO_PIN_PCA9698:
+		node = fdt_node_offset_by_phandle(fdt, phandle);
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+		old_bus = i2c_get_bus_num();
+		i2c_set_bus_num(bus);
+		value = pca9698_get_value(addr, pin);
+		i2c_set_bus_num(old_bus);
+		break;
+#endif
+	case CVMX_GPIO_PIN_OCTEON:
+		value = gpio_get_value(pin);
+		break;
+	default:
+		printf("%s: Unknown GPIO type %d\n", __func__, type);
+		return -1;
+	}
+	return value;
+}
+
+/**
+ * Reads a GPIO pin given the node of the GPIO device in the device tree and
+ * the pin number.
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	phandle	phandle of GPIO node
+ * @param	pin	pin number to read
+ * @param	val	value to write (1 = high, 0 = low)
+ *
+ * @return	0 = success, -1 = error
+ */
+int octeon_fdt_set_gpio(const void *fdt, int phandle, int pin, int val)
+{
+	enum cvmx_gpio_type type;
+	int node;
+	int num_pins;
+	__maybe_unused int addr;
+	__maybe_unused int bus;
+	__maybe_unused int old_bus;
+	__maybe_unused int rc;
+
+	node = fdt_node_offset_by_phandle(fdt, phandle);
+	if (node < 0) {
+		printf("%s: Invalid phandle\n", __func__);
+		return -1;
+	}
+
+	type = cvmx_fdt_get_gpio_type(fdt, phandle, &num_pins);
+	if ((pin & 0xff) >= num_pins) {
+		debug("%s: pin number %d out of range\n", __func__, pin);
+		return -1;
+	}
+	switch (type) {
+#ifdef CONFIG_PCA953X
+	case CVMX_GPIO_PIN_PCA953X:
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+
+		return pca953x_set_val(bus, addr, 1 << pin, val << pin);
+#endif
+#ifdef CONFIG_PCF857X
+	case CVMX_GPIO_PIN_PCF857X:
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+		return pcf857x_set_val(bus, addr, 1 << pin, val << pin);
+#endif
+#ifdef CONFIG_PCA9698
+	case CVMX_GPIO_PIN_PCA9698:
+		if (octeon_fdt_get_i2c_bus_addr(fdt, node, &bus, &addr)) {
+			printf("%s: Could not get gpio bus and/or address\n", __func__);
+			return -1;
+		}
+		old_bus = i2c_get_bus_num();
+		i2c_set_bus_num(bus);
+		rc = pca9698_set_value(addr, pin, val);
+		i2c_set_bus_num(old_bus);
+		return rc;
+#endif
+	case CVMX_GPIO_PIN_OCTEON:
+		return gpio_set_value(pin, val);
+	default:
+		printf("%s: Unknown GPIO type %d\n", __func__, type);
+		return -1;
+	}
+}
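+
+/*
+ * Usage sketch (illustrative; the pin assignment is hypothetical):
+ * pulsing an expander pin to reset a PHY could look like
+ *
+ *	octeon_fdt_set_gpio(gd->fdt_blob, phandle, 4, 1);
+ *	mdelay(10);
+ *	octeon_fdt_set_gpio(gd->fdt_blob, phandle, 4, 0);
+ *
+ * where phandle refers to one of the GPIO controllers listed above.
+ */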
+
+/**
+ * Given the node of a GPIO entry output the GPIO type, i2c bus and i2c
+ * address.
+ *
+ * @param	fdt_node	node of GPIO in device tree, generally
+ *				derived from a phandle.
+ * @param[out]	type		Type of GPIO detected
+ * @param[out]	i2c_bus		For i2c GPIO expanders, the i2c bus number
+ * @param[out]	i2c_addr	For i2c GPIO expanders, the i2c address
+ *
+ * @return	0 for success, -1 for errors
+ *
+ * NOTE: It is up to the caller to determine the pin number.
+ */
+int octeon_fdt_get_gpio_info(int fdt_node, enum octeon_gpio_type *type,
+			     int *i2c_bus, int *i2c_addr)
+{
+	const void *fdt = gd->fdt_blob;
+
+	__maybe_unused int i2c_bus_node;
+
+	*type = GPIO_TYPE_UNKNOWN;
+
+	if (!octeon_fdt_node_check_compatible(fdt, fdt_node, octeon_gpio_list)) {
+		debug("%s: Found Octeon compatible GPIO\n", __func__);
+		*type = GPIO_TYPE_OCTEON;
+		if (i2c_bus)
+			*i2c_bus = -1;
+		if (i2c_addr)
+			*i2c_addr = -1;
+		return 0;
+	}
+#ifdef CONFIG_PCA9555
+	if (!octeon_fdt_node_check_compatible(fdt, fdt_node, pca9555_gpio_list)) {
+		debug("%s: Found PCA9555 type compatible GPIO\n", __func__);
+		*type = GPIO_TYPE_PCA9555;
+	}
+#endif
+#ifdef CONFIG_PCA9554
+	if (!octeon_fdt_node_check_compatible(fdt, fdt_node, pca9554_gpio_list)) {
+		debug("%s: Found PCA9555 type compatible GPIO\n", __func__);
+		*type = GPIO_TYPE_PCA9554;
+	}
+#endif
+#ifdef CONFIG_PCA953X
+	if (!octeon_fdt_node_check_compatible(fdt, fdt_node, pca953x_gpio_list)) {
+		debug("%s: Found PCA953x compatible GPIO", __func__);
+		*type = GPIO_TYPE_PCA953X;
+	}
+#endif
+#ifdef CONFIG_PCA9698
+	if (!octeon_fdt_node_check_compatible(fdt, fdt_node, pca9698_gpio_list)) {
+		debug("%s: Found PCA9698 compatible GPIO", __func__);
+		*type = GPIO_TYPE_PCA9698;
+	}
+#endif
+#if defined(CONFIG_PCA953X) || defined(CONFIG_PCA9698) || \
+	defined(CONFIG_PCA9555) || defined(CONFIG_PCA9554)
+	if (!i2c_addr || !i2c_bus) {
+		printf("%s: Error: i2c_addr or i2c_bus is NULL\n", __func__);
+		return -1;
+	}
+
+	*i2c_addr = fdtdec_get_int(fdt, fdt_node, "reg", -1);
+	i2c_bus_node = fdt_parent_offset(fdt, fdt_node);
+	if (i2c_bus_node < 0) {
+		printf("%s: Invalid parent\n", __func__);
+		return -1;
+	}
+	*i2c_bus = i2c_get_bus_num_fdt(i2c_bus_node);
+#endif
+	return (*type != GPIO_TYPE_UNKNOWN) ? 0 : -1;
+}
+
+#ifdef CONFIG_PHY_VITESSE
+/**
+ * Given a node in the flat device tree, return the matching PHY device
+ *
+ * @param	fdt_node	FDT node in device tree
+ *
+ * @return	pointer to PHY device or NULL if none found.
+ */
+static struct phy_device *octeon_fdt_get_phy_device_from_node(int fdt_node)
+{
+	struct eth_device *dev;
+	int i = 0;
+	struct octeon_eth_info *ethinfo = NULL;
+
+	do {
+		dev = eth_get_dev_by_index(i++);
+		if (!dev)
+			return NULL;
+		ethinfo = dev->priv;
+		if (ethinfo->phy_offset == fdt_node)
+			return ethinfo->phydev;
+	} while (dev);
+	return NULL;
+}
+#endif
+
+/**
+ * Get the PHY data structure for the specified FDT node and output the type
+ *
+ * @param	fdt_node	FDT node of phy
+ * @param[out]	type		Type of GPIO
+ *
+ * @return	pointer to phy device or NULL if no match found.
+ */
+struct phy_device *octeon_fdt_get_phy_gpio_info(int fdt_node, enum octeon_gpio_type *type)
+{
+#ifdef CONFIG_PHY_VITESSE
+	struct phy_device *phydev;
+
+	if (!octeon_fdt_node_check_compatible(gd->fdt_blob, fdt_node,
+					      vitesse_vsc8488_gpio_list)) {
+		phydev = octeon_fdt_get_phy_device_from_node(fdt_node);
+		if (phydev) {
+			debug("%s: Found Vitesse VSC848X compatible GPIO\n", __func__);
+			*type = GPIO_TYPE_VSC8488;
+			return phydev;
+		}
+
+		debug("%s: Error: phy device not found!\n", __func__);
+		return NULL;
+	}
+
+	debug("%s: No compatible Vitesse PHY type found\n", __func__);
+#endif
+	return NULL;
+}
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 44/50] mips: octeon: Add octeon_qlm.c
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (42 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 43/50] mips: octeon: Add octeon_fdt.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 45/50] mips: octeon: Makefile: Enable building of the newly added C files Stefan Roese
                   ` (8 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import octeon_qlm.c from the 2013 U-Boot version. It will be used by
drivers added later to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/octeon_qlm.c | 5853 ++++++++++++++++++++++++++++
 1 file changed, 5853 insertions(+)
 create mode 100644 arch/mips/mach-octeon/octeon_qlm.c

diff --git a/arch/mips/mach-octeon/octeon_qlm.c b/arch/mips/mach-octeon/octeon_qlm.c
new file mode 100644
index 0000000000..763692781d
--- /dev/null
+++ b/arch/mips/mach-octeon/octeon_qlm.c
@@ -0,0 +1,5853 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#include <dm.h>
+#include <time.h>
+#include <linux/delay.h>
+
+#include <mach/cvmx-regs.h>
+#include <mach/octeon-model.h>
+#include <mach/cvmx-fuse.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/cvmx-pcie.h>
+
+#include <mach/cvmx-bgxx-defs.h>
+#include <mach/cvmx-ciu-defs.h>
+#include <mach/cvmx-gmxx-defs.h>
+#include <mach/cvmx-gserx-defs.h>
+#include <mach/cvmx-mio-defs.h>
+#include <mach/cvmx-pciercx-defs.h>
+#include <mach/cvmx-pemx-defs.h>
+#include <mach/cvmx-pexp-defs.h>
+#include <mach/cvmx-rst-defs.h>
+#include <mach/cvmx-sata-defs.h>
+#include <mach/cvmx-sli-defs.h>
+#include <mach/cvmx-sriomaintx-defs.h>
+#include <mach/cvmx-sriox-defs.h>
+
+DECLARE_GLOBAL_DATA_PTR;
+
+/** 2.5GHz with 100MHz reference clock */
+#define R_2_5G_REFCLK100 0x0
+/** 5.0GHz with 100MHz reference clock */
+#define R_5G_REFCLK100 0x1
+/** 8.0GHz with 100MHz reference clock */
+#define R_8G_REFCLK100 0x2
+/** 1.25GHz with 156.25MHz reference clock */
+#define R_125G_REFCLK15625_KX 0x3
+/** 3.125Ghz with 156.25MHz reference clock (XAUI) */
+#define R_3125G_REFCLK15625_XAUI 0x4
+/** 10.3125GHz with 156.25MHz reference clock (XFI/XLAUI) */
+#define R_103125G_REFCLK15625_KR 0x5
+/** 1.25GHz with 156.25MHz reference clock (SGMII) */
+#define R_125G_REFCLK15625_SGMII 0x6
+/** 5GHz with 156.25MHz reference clock (QSGMII) */
+#define R_5G_REFCLK15625_QSGMII 0x7
+/** 6.25GHz with 156.25MHz reference clock (RXAUI/25G) */
+#define R_625G_REFCLK15625_RXAUI 0x8
+/** 2.5GHz with 125MHz reference clock */
+#define R_2_5G_REFCLK125 0x9
+/** 5GHz with 125MHz reference clock */
+#define R_5G_REFCLK125 0xa
+/** 8GHz with 125MHz reference clock */
+#define R_8G_REFCLK125 0xb
+/** Must be last, number of modes */
+#define R_NUM_LANE_MODES 0xc
+
+int cvmx_qlm_is_ref_clock(int qlm, int reference_mhz)
+{
+	int ref_clock = cvmx_qlm_measure_clock(qlm);
+	int mhz = ref_clock / 1000000;
+	int range = reference_mhz / 10;
+
+	return ((mhz >= reference_mhz - range) && (mhz <= reference_mhz + range));
+}
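+
+/*
+ * Example: cvmx_qlm_is_ref_clock(qlm, 156) accepts any measured clock
+ * within +/-10% of 156 MHz, i.e. 141..171 MHz after integer division,
+ * so a 156.25 MHz reference passes the 156 MHz check.
+ */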
+
+static int __get_qlm_spd(int qlm, int speed)
+{
+	int qlm_spd = 0xf;
+
+	if (cvmx_qlm_is_ref_clock(qlm, 100)) {
+		if (speed == 1250)
+			qlm_spd = 0x3;
+		else if (speed == 2500)
+			qlm_spd = 0x2;
+		else if (speed == 5000)
+			qlm_spd = 0x0;
+		else
+			qlm_spd = 0xf;
+	} else if (cvmx_qlm_is_ref_clock(qlm, 125)) {
+		if (speed == 1250)
+			qlm_spd = 0xa;
+		else if (speed == 2500)
+			qlm_spd = 0x9;
+		else if (speed == 3125)
+			qlm_spd = 0x8;
+		else if (speed == 5000)
+			qlm_spd = 0x6;
+		else if (speed == 6250)
+			qlm_spd = 0x5;
+		else
+			qlm_spd = 0xf;
+	} else if (cvmx_qlm_is_ref_clock(qlm, 156)) {
+		if (speed == 1250)
+			qlm_spd = 0x4;
+		else if (speed == 2500)
+			qlm_spd = 0x7;
+		else if (speed == 3125)
+			qlm_spd = 0xe;
+		else if (speed == 3750)
+			qlm_spd = 0xd;
+		else if (speed == 5000)
+			qlm_spd = 0xb;
+		else if (speed == 6250)
+			qlm_spd = 0xc;
+		else
+			qlm_spd = 0xf;
+	} else if (cvmx_qlm_is_ref_clock(qlm, 161)) {
+		if (speed == 6316)
+			qlm_spd = 0xc;
+	}
+	return qlm_spd;
+}
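+
+/*
+ * Example: with a 125 MHz reference clock and a requested speed of
+ * 6250 (6.25 Gbaud), __get_qlm_spd() returns 0x5; any unsupported
+ * clock/speed pairing falls through to the "disabled" encoding 0xf.
+ */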
+
+static void __set_qlm_pcie_mode_61xx(int pcie_port, int root_complex)
+{
+	int rc = root_complex ? 1 : 0;
+	int ep = root_complex ? 0 : 1;
+	cvmx_ciu_soft_prst1_t soft_prst1;
+	cvmx_ciu_soft_prst_t soft_prst;
+	cvmx_mio_rst_ctlx_t rst_ctl;
+
+	if (pcie_port) {
+		soft_prst1.u64 = csr_rd(CVMX_CIU_SOFT_PRST1);
+		soft_prst1.s.soft_prst = 1;
+		csr_wr(CVMX_CIU_SOFT_PRST1, soft_prst1.u64);
+	} else {
+		soft_prst.u64 = csr_rd(CVMX_CIU_SOFT_PRST);
+		soft_prst.s.soft_prst = 1;
+		csr_wr(CVMX_CIU_SOFT_PRST, soft_prst.u64);
+	}
+
+	rst_ctl.u64 = csr_rd(CVMX_MIO_RST_CTLX(pcie_port));
+
+	rst_ctl.s.prst_link = rc;
+	rst_ctl.s.rst_link = ep;
+	rst_ctl.s.prtmode = rc;
+	rst_ctl.s.rst_drv = rc;
+	rst_ctl.s.rst_rcv = 0;
+	rst_ctl.s.rst_chip = ep;
+	csr_wr(CVMX_MIO_RST_CTLX(pcie_port), rst_ctl.u64);
+
+	if (root_complex == 0) {
+		if (pcie_port) {
+			soft_prst1.u64 = csr_rd(CVMX_CIU_SOFT_PRST1);
+			soft_prst1.s.soft_prst = 0;
+			csr_wr(CVMX_CIU_SOFT_PRST1, soft_prst1.u64);
+		} else {
+			soft_prst.u64 = csr_rd(CVMX_CIU_SOFT_PRST);
+			soft_prst.s.soft_prst = 0;
+			csr_wr(CVMX_CIU_SOFT_PRST, soft_prst.u64);
+		}
+	}
+}
+
+/**
+ * Configure qlm speed and mode. MIO_QLMX_CFG[speed,mode] are not set
+ * for CN61XX.
+ *
+ * @param qlm     The QLM to configure
+ * @param speed   The speed the QLM needs to be configured in Mhz.
+ * @param mode    The QLM to be configured as SGMII/XAUI/PCIe.
+ *                  QLM 0: 0 = PCIe0 1X4, 1 = Reserved, 2 = SGMII1, 3 = XAUI1
+ *                  QLM 1: 0 = PCIe1 1x2, 1 = PCIe(0/1) 2x1, 2 - 3 = Reserved
+ *                  QLM 2: 0 - 1 = Reserved, 2 = SGMII0, 3 = XAUI0
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP
+ *		  mode.
+ * @param pcie2x1 Only used when QLM1 is in PCIE2x1 mode.  The QLM_SPD has a
+ *		  different value on how PEMx needs to be configured:
+ *                   0x0 - both PEM0 & PEM1 are in gen1 mode.
+ *                   0x1 - PEM0 in gen2 and PEM1 in gen1 mode.
+ *                   0x2 - PEM0 in gen1 and PEM1 in gen2 mode.
+ *                   0x3 - both PEM0 & PEM1 are in gen2 mode.
+ *               SPEED value is ignored in this mode. QLM_SPD is set based on
+ *               pcie2x1 value in this mode.
+ *
+ * @return       0 on success, -1 on error.
+ */
+static int octeon_configure_qlm_cn61xx(int qlm, int speed, int mode, int rc, int pcie2x1)
+{
+	cvmx_mio_qlmx_cfg_t qlm_cfg;
+
+	/* The QLM speed varies for SGMII/XAUI and PCIe mode. And depends on
+	 * reference clock.
+	 */
+	if (!OCTEON_IS_MODEL(OCTEON_CN61XX))
+		return -1;
+
+	if (qlm < 3) {
+		qlm_cfg.u64 = csr_rd(CVMX_MIO_QLMX_CFG(qlm));
+	} else {
+		debug("WARNING: Invalid QLM(%d) passed\n", qlm);
+		return -1;
+	}
+
+	switch (qlm) {
+	/* SGMII/XAUI mode */
+	case 2: {
+		if (mode < 2) {
+			qlm_cfg.s.qlm_spd = 0xf;
+			break;
+		}
+		qlm_cfg.s.qlm_spd = __get_qlm_spd(qlm, speed);
+		qlm_cfg.s.qlm_cfg = mode;
+		break;
+	}
+	case 1: {
+		if (mode == 1) { /* 2x1 mode */
+			cvmx_mio_qlmx_cfg_t qlm0;
+
+			/* When QLM0 is configured as PCIe(QLM_CFG=0x0)
+			 * and enabled (QLM_SPD != 0xf), QLM1 cannot be
+			 * configured as PCIe 2x1 mode (QLM_CFG=0x1)
+			 * and enabled (QLM_SPD != 0xf).
+			 */
+			qlm0.u64 = csr_rd(CVMX_MIO_QLMX_CFG(0));
+			if (qlm0.s.qlm_spd != 0xf && qlm0.s.qlm_cfg == 0) {
+				debug("Invalid mode(%d) for QLM(%d) as QLM1 is PCIe mode\n",
+				      mode, qlm);
+				qlm_cfg.s.qlm_spd = 0xf;
+				break;
+			}
+
+			/* Set QLM_SPD based on reference clock and mode */
+			if (cvmx_qlm_is_ref_clock(qlm, 100)) {
+				if (pcie2x1 == 0x3)
+					qlm_cfg.s.qlm_spd = 0x0;
+				else if (pcie2x1 == 0x1)
+					qlm_cfg.s.qlm_spd = 0x2;
+				else if (pcie2x1 == 0x2)
+					qlm_cfg.s.qlm_spd = 0x1;
+				else if (pcie2x1 == 0x0)
+					qlm_cfg.s.qlm_spd = 0x3;
+				else
+					qlm_cfg.s.qlm_spd = 0xf;
+			} else if (cvmx_qlm_is_ref_clock(qlm, 125)) {
+				if (pcie2x1 == 0x3)
+					qlm_cfg.s.qlm_spd = 0x4;
+				else if (pcie2x1 == 0x1)
+					qlm_cfg.s.qlm_spd = 0x6;
+				else if (pcie2x1 == 0x2)
+					qlm_cfg.s.qlm_spd = 0x9;
+				else if (pcie2x1 == 0x0)
+					qlm_cfg.s.qlm_spd = 0x7;
+				else
+					qlm_cfg.s.qlm_spd = 0xf;
+			}
+			qlm_cfg.s.qlm_cfg = mode;
+			csr_wr(CVMX_MIO_QLMX_CFG(qlm), qlm_cfg.u64);
+
+			/* Set PCIe mode bits */
+			__set_qlm_pcie_mode_61xx(0, rc);
+			__set_qlm_pcie_mode_61xx(1, rc);
+			return 0;
+		} else if (mode > 1) {
+			debug("Invalid mode(%d) for QLM(%d).\n", mode, qlm);
+			qlm_cfg.s.qlm_spd = 0xf;
+			break;
+		}
+
+		/* Set speed and mode for PCIe 1x2 mode. */
+		if (cvmx_qlm_is_ref_clock(qlm, 100)) {
+			if (speed == 5000)
+				qlm_cfg.s.qlm_spd = 0x1;
+			else if (speed == 2500)
+				qlm_cfg.s.qlm_spd = 0x2;
+			else
+				qlm_cfg.s.qlm_spd = 0xf;
+		} else if (cvmx_qlm_is_ref_clock(qlm, 125)) {
+			if (speed == 5000)
+				qlm_cfg.s.qlm_spd = 0x4;
+			else if (speed == 2500)
+				qlm_cfg.s.qlm_spd = 0x6;
+			else
+				qlm_cfg.s.qlm_spd = 0xf;
+		} else {
+			qlm_cfg.s.qlm_spd = 0xf;
+		}
+
+		qlm_cfg.s.qlm_cfg = mode;
+		csr_wr(CVMX_MIO_QLMX_CFG(qlm), qlm_cfg.u64);
+
+		/* Set PCIe mode bits */
+		__set_qlm_pcie_mode_61xx(1, rc);
+		return 0;
+	}
+	case 0: {
+		/* QLM_CFG = 0x1 - Reserved */
+		if (mode == 1) {
+			qlm_cfg.s.qlm_spd = 0xf;
+			break;
+		}
+		/* QLM_CFG = 0x0 - PCIe 1x4(PEM0) */
+		if (mode == 0 && speed != 5000 && speed != 2500) {
+			qlm_cfg.s.qlm_spd = 0xf;
+			break;
+		}
+
+		/* Set speed and mode */
+		qlm_cfg.s.qlm_spd = __get_qlm_spd(qlm, speed);
+		qlm_cfg.s.qlm_cfg = mode;
+		csr_wr(CVMX_MIO_QLMX_CFG(qlm), qlm_cfg.u64);
+
+		/* Set PCIe mode bits */
+		if (mode == 0)
+			__set_qlm_pcie_mode_61xx(0, rc);
+
+		return 0;
+	}
+	default:
+		debug("WARNING: Invalid QLM(%d) passed\n", qlm);
+		qlm_cfg.s.qlm_spd = 0xf;
+	}
+	csr_wr(CVMX_MIO_QLMX_CFG(qlm), qlm_cfg.u64);
+	return 0;
+}
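+
+/*
+ * Usage sketch (illustrative; the board wiring is hypothetical): a
+ * CN61XX board with a PCIe root complex on QLM 0 and SGMII on QLM 2
+ * could configure
+ *
+ *	octeon_configure_qlm_cn61xx(0, 5000, 0, 1, 0);	(PCIe0 1x4, gen2, RC)
+ *	octeon_configure_qlm_cn61xx(2, 1250, 2, 0, 0);	(SGMII0 at 1.25 Gbaud)
+ *
+ * where the rc and pcie2x1 arguments are ignored for the SGMII case.
+ */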
+
+/* qlm      : DLM to configure
+ * baud_mhz : speed of the DLM
+ * ref_clk_sel  :  reference clock speed selection where:
+ *			0:	100MHz
+ *			1:	125MHz
+ *			2:	156.25MHz
+ *
+ * ref_clk_input:  reference clock input where:
+ *			0:	DLMC_REF_CLK0_[P,N]
+ *			1:	DLMC_REF_CLK1_[P,N]
+ *			2:	DLM0_REF_CLK_[P,N] (only valid for QLM 0)
+ * is_sff7000_rxaui : boolean to indicate whether qlm is RXAUI on SFF7000
+ */
+static int __dlm_setup_pll_cn70xx(int qlm, int baud_mhz, int ref_clk_sel, int ref_clk_input,
+				  int is_sff7000_rxaui)
+{
+	cvmx_gserx_dlmx_test_powerdown_t dlmx_test_powerdown;
+	cvmx_gserx_dlmx_ref_ssp_en_t dlmx_ref_ssp_en;
+	cvmx_gserx_dlmx_mpll_en_t dlmx_mpll_en;
+	cvmx_gserx_dlmx_phy_reset_t dlmx_phy_reset;
+	cvmx_gserx_dlmx_tx_amplitude_t tx_amplitude;
+	cvmx_gserx_dlmx_tx_preemph_t tx_preemph;
+	cvmx_gserx_dlmx_rx_eq_t rx_eq;
+	cvmx_gserx_dlmx_ref_clkdiv2_t ref_clkdiv2;
+	cvmx_gserx_dlmx_mpll_multiplier_t mpll_multiplier;
+	int gmx_ref_clk = 100;
+
+	debug("%s(%d, %d, %d, %d, %d)\n", __func__, qlm, baud_mhz, ref_clk_sel, ref_clk_input,
+	      is_sff7000_rxaui);
+	if (ref_clk_sel == 1)
+		gmx_ref_clk = 125;
+	else if (ref_clk_sel == 2)
+		gmx_ref_clk = 156;
+
+	if (qlm != 0 && ref_clk_input == 2) {
+		printf("%s: Error: can only use reference clock inputs 0 or 1 for DLM %d\n",
+		       __func__, qlm);
+		return -1;
+	}
+
+	/* Hardware defaults are invalid */
+	tx_amplitude.u64 = csr_rd(CVMX_GSERX_DLMX_TX_AMPLITUDE(qlm, 0));
+	if (is_sff7000_rxaui) {
+		tx_amplitude.s.tx0_amplitude = 100;
+		tx_amplitude.s.tx1_amplitude = 100;
+	} else {
+		tx_amplitude.s.tx0_amplitude = 65;
+		tx_amplitude.s.tx1_amplitude = 65;
+	}
+
+	csr_wr(CVMX_GSERX_DLMX_TX_AMPLITUDE(qlm, 0), tx_amplitude.u64);
+
+	tx_preemph.u64 = csr_rd(CVMX_GSERX_DLMX_TX_PREEMPH(qlm, 0));
+
+	if (is_sff7000_rxaui) {
+		tx_preemph.s.tx0_preemph = 0;
+		tx_preemph.s.tx1_preemph = 0;
+	} else {
+		tx_preemph.s.tx0_preemph = 22;
+		tx_preemph.s.tx1_preemph = 22;
+	}
+	csr_wr(CVMX_GSERX_DLMX_TX_PREEMPH(qlm, 0), tx_preemph.u64);
+
+	rx_eq.u64 = csr_rd(CVMX_GSERX_DLMX_RX_EQ(qlm, 0));
+	rx_eq.s.rx0_eq = 0;
+	rx_eq.s.rx1_eq = 0;
+	csr_wr(CVMX_GSERX_DLMX_RX_EQ(qlm, 0), rx_eq.u64);
+
+	/* 1. Write GSER0_DLM0_REF_USE_PAD[REF_USE_PAD] = 1 (to select
+	 *    reference-clock input)
+	 *    The documentation for this register in the HRM is useless since
+	 *    it says it selects between two different clocks that are not
+	 *    documented anywhere.  What it really does is select between
+	 *    DLM0_REF_CLK_[P,N] if 1 and DLMC_REF_CLK[0,1]_[P,N] if 0.
+	 *
+	 *    This register must be 0 for DLMs 1 and 2 and can only be 1 for
+	 *    DLM 0.
+	 */
+	csr_wr(CVMX_GSERX_DLMX_REF_USE_PAD(0, 0), ((ref_clk_input == 2) && (qlm == 0)) ? 1 : 0);
+
+	/* Reference clock was already chosen before we got here */
+
+	/* 2. Write GSER0_DLM0_REFCLK_SEL[REFCLK_SEL] if required for
+	 *    reference-clock selection.
+	 *
+	 *    If GSERX_DLMX_REF_USE_PAD is 1 then this register is ignored.
+	 */
+	csr_wr(CVMX_GSERX_DLMX_REFCLK_SEL(0, 0), ref_clk_input & 1);
+
+	/* Reference clock was already chosen before we got here */
+
+	/* 3. If required, write GSER0_DLM0_REF_CLKDIV2[REF_CLKDIV2] (must be
+	 *    set if reference clock > 100 MHz)
+	 */
+	/* Apply workaround for Errata (G-20669) MPLL may not come up. */
+	ref_clkdiv2.u64 = csr_rd(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0));
+	if (gmx_ref_clk == 100)
+		ref_clkdiv2.s.ref_clkdiv2 = 0;
+	else
+		ref_clkdiv2.s.ref_clkdiv2 = 1;
+	csr_wr(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0), ref_clkdiv2.u64);
+
+	/* 1. Ensure GSER(0)_DLM(0..2)_PHY_RESET[PHY_RESET] is set. */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 2. If SGMII or QSGMII or RXAUI (i.e. if DLM0) set
+	 *    GSER(0)_DLM(0)_MPLL_EN[MPLL_EN] to one.
+	 */
+	/* 7. Set GSER0_DLM0_MPLL_EN[MPLL_EN] = 1 */
+	dlmx_mpll_en.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_EN(0, 0));
+	dlmx_mpll_en.s.mpll_en = 1;
+	csr_wr(CVMX_GSERX_DLMX_MPLL_EN(0, 0), dlmx_mpll_en.u64);
+
+	/* 3. Set GSER(0)_DLM(0..2)_MPLL_MULTIPLIER[MPLL_MULTIPLIER]
+	 *    to the value in the preceding table, which is different
+	 *    than the desired setting prescribed by the HRM.
+	 */
+	mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+	if (gmx_ref_clk == 100)
+		mpll_multiplier.s.mpll_multiplier = 35;
+	else if (gmx_ref_clk == 125)
+		mpll_multiplier.s.mpll_multiplier = 56;
+	else
+		mpll_multiplier.s.mpll_multiplier = 45;
+	debug("%s: Setting mpll multiplier to %u for DLM%d, baud %d, clock rate %uMHz\n",
+	      __func__, mpll_multiplier.s.mpll_multiplier, qlm, baud_mhz, gmx_ref_clk);
+
+	csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0), mpll_multiplier.u64);
+
+	/* 5. Clear GSER0_DLM0_TEST_POWERDOWN[TEST_POWERDOWN] */
+	dlmx_test_powerdown.u64 = csr_rd(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0));
+	dlmx_test_powerdown.s.test_powerdown = 0;
+	csr_wr(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0), dlmx_test_powerdown.u64);
+
+	/* 6. Set GSER0_DLM0_REF_SSP_EN[REF_SSP_EN] = 1 */
+	dlmx_ref_ssp_en.u64 = csr_rd(CVMX_GSERX_DLMX_REF_SSP_EN(qlm, 0));
+	dlmx_ref_ssp_en.s.ref_ssp_en = 1;
+	csr_wr(CVMX_GSERX_DLMX_REF_SSP_EN(0, 0), dlmx_ref_ssp_en.u64);
+
+	/* 8. Clear GSER0_DLM0_PHY_RESET[PHY_RESET] = 0 */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 5. If PCIe or SATA (i.e. if DLM1 or DLM2), set both MPLL_EN
+	 * and MPLL_EN_OVRD to one in GSER(0)_PHY(1..2)_OVRD_IN_LO.
+	 */
+
+	/* 6. Decrease MPLL_MULTIPLIER by one continually until it
+	 * reaches the desired long-term setting, ensuring that each
+	 * MPLL_MULTIPLIER value is constant for at least 1 msec before
+	 * changing to the next value.  The desired long-term setting is
+	 * as indicated in HRM tables 21-1, 21-2, and 21-3.  This is not
+	 * required with the HRM sequence.
+	 */
+	mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+	__cvmx_qlm_set_mult(qlm, baud_mhz, mpll_multiplier.s.mpll_multiplier);
+
+	/* 9. Poll until the MPLL locks. Wait for
+	 *    GSER0_DLM0_MPLL_STATUS[MPLL_STATUS] = 1
+	 */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_MPLL_STATUS(qlm, 0),
+				  cvmx_gserx_dlmx_mpll_status_t, mpll_status, ==, 1, 10000)) {
+		printf("PLL for DLM%d failed to lock\n", qlm);
+		return -1;
+	}
+	return 0;
+}
+
+static int __dlm0_setup_tx_cn70xx(int speed, int ref_clk_sel)
+{
+	int need0, need1;
+	cvmx_gmxx_inf_mode_t mode0, mode1;
+	cvmx_gserx_dlmx_tx_rate_t rate;
+	cvmx_gserx_dlmx_tx_en_t en;
+	cvmx_gserx_dlmx_tx_cm_en_t cm_en;
+	cvmx_gserx_dlmx_tx_data_en_t data_en;
+	cvmx_gserx_dlmx_tx_reset_t tx_reset;
+
+	debug("%s(%d, %d)\n", __func__, speed, ref_clk_sel);
+	mode0.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+	mode1.u64 = csr_rd(CVMX_GMXX_INF_MODE(1));
+
+	/* Which lanes do we need? */
+	need0 = (mode0.s.mode != CVMX_GMX_INF_MODE_DISABLED);
+	need1 = (mode1.s.mode != CVMX_GMX_INF_MODE_DISABLED) ||
+		(mode0.s.mode == CVMX_GMX_INF_MODE_RXAUI);
+
+	/* 1. Write GSER0_DLM0_TX_RATE[TXn_RATE] (Set according to required
+	 *    data rate (see Table 21-1).
+	 */
+	rate.u64 = csr_rd(CVMX_GSERX_DLMX_TX_RATE(0, 0));
+	debug("%s: speed: %d\n", __func__, speed);
+	switch (speed) {
+	case 1250:
+	case 2500:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_100MHZ: /* 100MHz */
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.tx0_rate = (mode0.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 2 : 0;
+			rate.s.tx1_rate = (mode1.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 2 : 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 3125:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.tx0_rate = (mode0.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 1 : 0;
+			rate.s.tx1_rate = (mode1.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 1 : 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 5000: /* QSGMII only */
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_100MHZ: /* 100MHz */
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.tx0_rate = 0;
+			rate.s.tx1_rate = 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 6250:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.tx0_rate = 0;
+			rate.s.tx1_rate = 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	default:
+		printf("%s: Invalid rate %d\n", __func__, speed);
+		return -1;
+	}
+	debug("%s: tx 0 rate: %d, tx 1 rate: %d\n", __func__, rate.s.tx0_rate, rate.s.tx1_rate);
+	csr_wr(CVMX_GSERX_DLMX_TX_RATE(0, 0), rate.u64);
+
+	/* 2. Set GSER0_DLM0_TX_EN[TXn_EN] = 1 */
+	en.u64 = csr_rd(CVMX_GSERX_DLMX_TX_EN(0, 0));
+	en.s.tx0_en = need0;
+	en.s.tx1_en = need1;
+	csr_wr(CVMX_GSERX_DLMX_TX_EN(0, 0), en.u64);
+
+	/* 3 set GSER0_DLM0_TX_CM_EN[TXn_CM_EN] = 1 */
+	cm_en.u64 = csr_rd(CVMX_GSERX_DLMX_TX_CM_EN(0, 0));
+	cm_en.s.tx0_cm_en = need0;
+	cm_en.s.tx1_cm_en = need1;
+	csr_wr(CVMX_GSERX_DLMX_TX_CM_EN(0, 0), cm_en.u64);
+
+	/* 4. Set GSER0_DLM0_TX_DATA_EN[TXn_DATA_EN] = 1 */
+	data_en.u64 = csr_rd(CVMX_GSERX_DLMX_TX_DATA_EN(0, 0));
+	data_en.s.tx0_data_en = need0;
+	data_en.s.tx1_data_en = need1;
+	csr_wr(CVMX_GSERX_DLMX_TX_DATA_EN(0, 0), data_en.u64);
+
+	/* 5. Clear GSER0_DLM0_TX_RESET[TXn_RESET] = 0 */
+	tx_reset.u64 = csr_rd(CVMX_GSERX_DLMX_TX_RESET(0, 0));
+	tx_reset.s.tx0_reset = !need0;
+	tx_reset.s.tx1_reset = !need1;
+	csr_wr(CVMX_GSERX_DLMX_TX_RESET(0, 0), tx_reset.u64);
+
+	/* 6. Poll GSER0_DLM0_TX_STATUS[TXn_STATUS, TXn_CM_STATUS] until both
+	 *    are set to 1. This prevents GMX from transmitting until the DLM
+	 *    is ready.
+	 */
+	if (need0) {
+		if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_TX_STATUS(0, 0),
+					  cvmx_gserx_dlmx_tx_status_t, tx0_status, ==, 1, 10000)) {
+			printf("DLM0 TX0 status fail\n");
+			return -1;
+		}
+		if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_TX_STATUS(0, 0),
+					  cvmx_gserx_dlmx_tx_status_t, tx0_cm_status, ==, 1,
+					  10000)) {
+			printf("DLM0 TX0 CM status fail\n");
+			return -1;
+		}
+	}
+	if (need1) {
+		if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_TX_STATUS(0, 0),
+					  cvmx_gserx_dlmx_tx_status_t, tx1_status, ==, 1, 10000)) {
+			printf("DLM0 TX1 status fail\n");
+			return -1;
+		}
+		if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_TX_STATUS(0, 0),
+					  cvmx_gserx_dlmx_tx_status_t, tx1_cm_status, ==, 1,
+					  10000)) {
+			printf("DLM0 TX1 CM status fail\n");
+			return -1;
+		}
+	}
+	return 0;
+}
+
+static int __dlm0_setup_rx_cn70xx(int speed, int ref_clk_sel)
+{
+	int need0, need1;
+	cvmx_gmxx_inf_mode_t mode0, mode1;
+	cvmx_gserx_dlmx_rx_rate_t rate;
+	cvmx_gserx_dlmx_rx_pll_en_t pll_en;
+	cvmx_gserx_dlmx_rx_data_en_t data_en;
+	cvmx_gserx_dlmx_rx_reset_t rx_reset;
+
+	debug("%s(%d, %d)\n", __func__, speed, ref_clk_sel);
+	mode0.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+	mode1.u64 = csr_rd(CVMX_GMXX_INF_MODE(1));
+
+	/* Which lanes do we need? */
+	need0 = (mode0.s.mode != CVMX_GMX_INF_MODE_DISABLED);
+	need1 = (mode1.s.mode != CVMX_GMX_INF_MODE_DISABLED) ||
+		(mode0.s.mode == CVMX_GMX_INF_MODE_RXAUI);
+
+	/* 1. Write GSER0_DLM0_RX_RATE[RXn_RATE] (must match the
+	 * GER0_DLM0_TX_RATE[TXn_RATE] setting).
+	 */
+	rate.u64 = csr_rd(CVMX_GSERX_DLMX_RX_RATE(0, 0));
+	switch (speed) {
+	case 1250:
+	case 2500:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_100MHZ: /* 100MHz */
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.rx0_rate = (mode0.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 2 : 0;
+			rate.s.rx1_rate = (mode1.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 2 : 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 3125:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.rx0_rate = (mode0.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 1 : 0;
+			rate.s.rx1_rate = (mode1.s.mode == CVMX_GMX_INF_MODE_SGMII) ? 1 : 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 5000: /* QSGMII only */
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_100MHZ: /* 100MHz */
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.rx0_rate = 0;
+			rate.s.rx1_rate = 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	case 6250:
+		switch (ref_clk_sel) {
+		case OCTEON_QLM_REF_CLK_125MHZ: /* 125MHz */
+		case OCTEON_QLM_REF_CLK_156MHZ: /* 156.25MHz */
+			rate.s.rx0_rate = 0;
+			rate.s.rx1_rate = 0;
+			break;
+		default:
+			printf("Invalid reference clock select %d\n", ref_clk_sel);
+			return -1;
+		}
+		break;
+	default:
+		printf("%s: Invalid rate %d\n", __func__, speed);
+		return -1;
+	}
+	debug("%s: rx 0 rate: %d, rx 1 rate: %d\n", __func__, rate.s.rx0_rate, rate.s.rx1_rate);
+	csr_wr(CVMX_GSERX_DLMX_RX_RATE(0, 0), rate.u64);
+
+	/* 2. Set GSER0_DLM0_RX_PLL_EN[RXn_PLL_EN] = 1 */
+	pll_en.u64 = csr_rd(CVMX_GSERX_DLMX_RX_PLL_EN(0, 0));
+	pll_en.s.rx0_pll_en = need0;
+	pll_en.s.rx1_pll_en = need1;
+	csr_wr(CVMX_GSERX_DLMX_RX_PLL_EN(0, 0), pll_en.u64);
+
+	/* 3. Set GSER0_DLM0_RX_DATA_EN[RXn_DATA_EN] = 1 */
+	data_en.u64 = csr_rd(CVMX_GSERX_DLMX_RX_DATA_EN(0, 0));
+	data_en.s.rx0_data_en = need0;
+	data_en.s.rx1_data_en = need1;
+	csr_wr(CVMX_GSERX_DLMX_RX_DATA_EN(0, 0), data_en.u64);
+
+	/* 4. Clear GSER0_DLM0_RX_RESET[RXn_RESET] = 0. Now the GMX can be
+	 * enabled: set GMX(0..1)_INF_MODE[EN] = 1
+	 */
+	rx_reset.u64 = csr_rd(CVMX_GSERX_DLMX_RX_RESET(0, 0));
+	rx_reset.s.rx0_reset = !need0;
+	rx_reset.s.rx1_reset = !need1;
+	csr_wr(CVMX_GSERX_DLMX_RX_RESET(0, 0), rx_reset.u64);
+
+	return 0;
+}
+
+static int a_clk;
+
+static int __dlm2_sata_uctl_init_cn70xx(void)
+{
+	cvmx_sata_uctl_ctl_t uctl_ctl;
+	const int MAX_A_CLK = 333000000; /* Max of 333 MHz */
+	int divisor, a_clkdiv;
+
+	/* 1. Wait for all voltages to reach a stable state. Ensure the
+	 *    reference clock is up and stable.
+	 */
+
+	/* 2. Wait for IOI reset to deassert. */
+
+	/* 3. Optionally program the GPIO CSRs for SATA features.
+	 *    a. For cold-presence detect:
+	 *	 i. Select a GPIO for the input and program GPIO_SATA_CTL[sel]
+	 *	    for port0 and port1.
+	 *	 ii. Select a GPIO for the output and program
+	 *	     GPIO_BIT_CFG*[OUTPUT_SEL] for port0 and port1.
+	 *    b. For mechanical-presence detect, select a GPIO for the input
+	 *	 and program GPIO_SATA_CTL[SEL] for port0/port1.
+	 *    c. For LED activity, select a GPIO for the output and program
+	 *	 GPIO_BIT_CFG*[OUTPUT_SEL] for port0/port1.
+	 */
+
+	/* 4. Assert all resets:
+	 *    a. UAHC reset: SATA_UCTL_CTL[UAHC_RST] = 1
+	 *    b. UCTL reset: SATA_UCTL_CTL[UCTL_RST] = 1
+	 */
+
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.sata_uahc_rst = 1;
+	uctl_ctl.s.sata_uctl_rst = 1;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	/* 5. Configure the ACLK:
+	 *    a. Reset the clock dividers: SATA_UCTL_CTL[A_CLKDIV_RST] = 1.
+	 *    b. Select the ACLK frequency (400 MHz maximum)
+	 *	 i. SATA_UCTL_CTL[A_CLKDIV] = desired value,
+	 *	 ii. SATA_UCTL_CTL[A_CLKDIV_EN] = 1 to enable the ACLK,
+	 *    c. Deassert the ACLK clock divider reset:
+	 *	 SATA_UCTL_CTL[A_CLKDIV_RST] = 0
+	 */
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.a_clkdiv_rst = 1;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+
+	divisor = (gd->bus_clk + MAX_A_CLK - 1) / MAX_A_CLK;
+	if (divisor <= 4) {
+		a_clkdiv = divisor - 1;
+	} else if (divisor <= 6) {
+		a_clkdiv = 4;
+		divisor = 6;
+	} else if (divisor <= 8) {
+		a_clkdiv = 5;
+		divisor = 8;
+	} else if (divisor <= 16) {
+		a_clkdiv = 6;
+		divisor = 16;
+	} else if (divisor <= 24) {
+		a_clkdiv = 7;
+		divisor = 24;
+	} else {
+		printf("Unable to determine SATA clock divisor\n");
+		return -1;
+	}
+
+	/* Calculate the final clock rate */
+	a_clk = gd->bus_clk / divisor;
+
+	uctl_ctl.s.a_clkdiv_sel = a_clkdiv;
+	uctl_ctl.s.a_clk_en = 1;
+	uctl_ctl.s.a_clk_byp_sel = 0;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.a_clkdiv_rst = 0;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	udelay(1);
+
+	return 0;
+}
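+
+/*
+ * Worked example: with gd->bus_clk = 800 MHz, divisor =
+ * ceil(800 / 333) = 3, so a_clkdiv = 2 and the resulting ACLK is
+ * 800 MHz / 3 = ~266 MHz, safely below the 333 MHz limit.
+ */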
+
+static int __sata_dlm_init_cn70xx(int qlm, int baud_mhz, int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_gserx_sata_cfg_t sata_cfg;
+	cvmx_gserx_sata_lane_rst_t sata_lane_rst;
+	cvmx_gserx_dlmx_phy_reset_t dlmx_phy_reset;
+	cvmx_gserx_dlmx_test_powerdown_t dlmx_test_powerdown;
+	cvmx_gserx_sata_ref_ssp_en_t ref_ssp_en;
+	cvmx_gserx_dlmx_mpll_multiplier_t mpll_multiplier;
+	cvmx_gserx_dlmx_ref_clkdiv2_t ref_clkdiv2;
+	cvmx_sata_uctl_shim_cfg_t shim_cfg;
+	cvmx_gserx_phyx_ovrd_in_lo_t ovrd_in;
+	cvmx_sata_uctl_ctl_t uctl_ctl;
+	int sata_ref_clk;
+
+	debug("%s(%d, %d, %d, %d)\n", __func__, qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+
+	switch (ref_clk_sel) {
+	case 0:
+		sata_ref_clk = 100;
+		break;
+	case 1:
+		sata_ref_clk = 125;
+		break;
+	case 2:
+		sata_ref_clk = 156;
+		break;
+	default:
+		printf("%s: Invalid reference clock select %d for qlm %d\n", __func__,
+		       ref_clk_sel, qlm);
+		return -1;
+	}
+
+	/* 5. Set GSERX0_SATA_CFG[SATA_EN] = 1 to configure DLM2 multiplexing.
+	 */
+	sata_cfg.u64 = csr_rd(CVMX_GSERX_SATA_CFG(0));
+	sata_cfg.s.sata_en = 1;
+	csr_wr(CVMX_GSERX_SATA_CFG(0), sata_cfg.u64);
+
+	/* 1. Write GSER(0)_DLM2_REFCLK_SEL[REFCLK_SEL] if required for
+	 *    reference-clock selection.
+	 */
+	if (ref_clk_input < 2) {
+		csr_wr(CVMX_GSERX_DLMX_REFCLK_SEL(qlm, 0), ref_clk_input);
+		csr_wr(CVMX_GSERX_DLMX_REF_USE_PAD(qlm, 0), 0);
+	} else {
+		csr_wr(CVMX_GSERX_DLMX_REF_USE_PAD(qlm, 0), 1);
+	}
+
+	ref_ssp_en.u64 = csr_rd(CVMX_GSERX_SATA_REF_SSP_EN(0));
+	ref_ssp_en.s.ref_ssp_en = 1;
+	csr_wr(CVMX_GSERX_SATA_REF_SSP_EN(0), ref_ssp_en.u64);
+
+	/* Apply workaround for Errata (G-20669) MPLL may not come up. */
+
+	/* Set REF_CLKDIV2 based on the Ref Clock */
+	ref_clkdiv2.u64 = csr_rd(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0));
+	if (sata_ref_clk == 100)
+		ref_clkdiv2.s.ref_clkdiv2 = 0;
+	else
+		ref_clkdiv2.s.ref_clkdiv2 = 1;
+	csr_wr(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0), ref_clkdiv2.u64);
+
+	/* 1. Ensure GSER(0)_DLM(0..2)_PHY_RESET[PHY_RESET] is set. */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 2. If SGMII or QSGMII or RXAUI (i.e. if DLM0) set
+	 *    GSER(0)_DLM(0)_MPLL_EN[MPLL_EN] to one.
+	 */
+
+	/* 3. Set GSER(0)_DLM(0..2)_MPLL_MULTIPLIER[MPLL_MULTIPLIER]
+	 *    to the value in the preceding table, which is different
+	 *    than the desired setting prescribed by the HRM.
+	 */
+
+	mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+	if (sata_ref_clk == 100)
+		mpll_multiplier.s.mpll_multiplier = 35;
+	else
+		mpll_multiplier.s.mpll_multiplier = 56;
+	csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0), mpll_multiplier.u64);
+
+	/* 3. Clear GSER0_DLM2_TEST_POWERDOWN[TEST_POWERDOWN] = 0 */
+	dlmx_test_powerdown.u64 = csr_rd(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0));
+	dlmx_test_powerdown.s.test_powerdown = 0;
+	csr_wr(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0), dlmx_test_powerdown.u64);
+
+	/* 4. Clear either/both lane0 and lane1 resets:
+	 *    GSER0_SATA_LANE_RST[L0_RST, L1_RST] = 0.
+	 */
+	sata_lane_rst.u64 = csr_rd(CVMX_GSERX_SATA_LANE_RST(0));
+	sata_lane_rst.s.l0_rst = 0;
+	sata_lane_rst.s.l1_rst = 0;
+	csr_wr(CVMX_GSERX_SATA_LANE_RST(0), sata_lane_rst.u64);
+
+	udelay(1);
+
+	/* 5. Clear GSER0_DLM2_PHY_RESET */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 6. If PCIe or SATA (i.e. if DLM1 or DLM2), set both MPLL_EN
+	 * and MPLL_EN_OVRD to one in GSER(0)_PHY(1..2)_OVRD_IN_LO.
+	 */
+	ovrd_in.u64 = csr_rd(CVMX_GSERX_PHYX_OVRD_IN_LO(qlm, 0));
+	ovrd_in.s.mpll_en = 1;
+	ovrd_in.s.mpll_en_ovrd = 1;
+	csr_wr(CVMX_GSERX_PHYX_OVRD_IN_LO(qlm, 0), ovrd_in.u64);
+
+	/* 7. Decrease MPLL_MULTIPLIER by one continually until it reaches
+	 *   the desired long-term setting, ensuring that each MPLL_MULTIPLIER
+	 *   value is constant for at least 1 msec before changing to the next
+	 *   value. The desired long-term setting is as indicated in HRM tables
+	 *   21-1, 21-2, and 21-3. This is not required with the HRM
+	 *   sequence.
+	 */
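+	/* Illustrative only (not executed here): a literal ramp-down per
+	 * the note above would look roughly like the sketch below, where
+	 * "target" stands in for the long-term value programmed next:
+	 *
+	 *	while (mpll_multiplier.s.mpll_multiplier > target) {
+	 *		mpll_multiplier.s.mpll_multiplier--;
+	 *		csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0),
+	 *		       mpll_multiplier.u64);
+	 *		mdelay(1);
+	 *	}
+	 *
+	 * As noted, this is not required here, so the long-term value is
+	 * simply written directly below.
+	 */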
+	mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+	if (sata_ref_clk == 100)
+		mpll_multiplier.s.mpll_multiplier = 0x1e;
+	else
+		mpll_multiplier.s.mpll_multiplier = 0x30;
+	csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0), mpll_multiplier.u64);
+
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_MPLL_STATUS(qlm, 0),
+				  cvmx_gserx_dlmx_mpll_status_t, mpll_status, ==, 1, 10000)) {
+		printf("ERROR: SATA MPLL failed to set\n");
+		return -1;
+	}
+
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_RX_STATUS(qlm, 0), cvmx_gserx_dlmx_rx_status_t,
+				  rx0_status, ==, 1, 10000)) {
+		printf("ERROR: SATA RX0_STATUS failed to set\n");
+		return -1;
+	}
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_DLMX_RX_STATUS(qlm, 0), cvmx_gserx_dlmx_rx_status_t,
+				  rx1_status, ==, 1, 10000)) {
+		printf("ERROR: SATA RX1_STATUS failed to set\n");
+		return -1;
+	}
+
+	/* 8. Deassert UCTL and UAHC resets:
+	 *    a. SATA_UCTL_CTL[UCTL_RST] = 0
+	 *    b. SATA_UCTL_CTL[UAHC_RST] = 0
+	 *    c. Wait 10 ACLK cycles before accessing any ACLK-only registers.
+	 */
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.sata_uctl_rst = 0;
+	uctl_ctl.s.sata_uahc_rst = 0;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	udelay(1);
+
+	/* 9. Enable conditional SCLK of UCTL by writing
+	 *    SATA_UCTL_CTL[CSCLK_EN] = 1
+	 */
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.csclk_en = 1;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	/* 10. Initialize UAHC as described in the AHCI Specification (UAHC_*
+	 *     registers)
+	 */
+
+	/* set-up endian mode */
+	shim_cfg.u64 = csr_rd(CVMX_SATA_UCTL_SHIM_CFG);
+	shim_cfg.s.dma_endian_mode = 1;
+	shim_cfg.s.csr_endian_mode = 3;
+	csr_wr(CVMX_SATA_UCTL_SHIM_CFG, shim_cfg.u64);
+
+	return 0;
+}
+
+/**
+ * Initializes DLM 4 for SATA
+ *
+ * @param qlm		Must be 4.
+ * @param baud_mhz	Baud rate for SATA
+ * @param ref_clk_sel	Selects the speed of the reference clock where:
+ *			0 = 100MHz, 1 = 125MHz and 2 = 156.25MHz
+ * @param ref_clk_input	Reference clock input where 0 = external QLM clock,
+ *			1 = qlmc_ref_clk0 and 2 = qlmc_ref_clk1
+ */
+static int __sata_dlm_init_cn73xx(int qlm, int baud_mhz, int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_sata_uctl_shim_cfg_t shim_cfg;
+	cvmx_gserx_refclk_sel_t refclk_sel;
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_rx_pwr_ctrl_p2_t pwr_ctrl_p2;
+	cvmx_gserx_lanex_misc_cfg_0_t misc_cfg_0;
+	cvmx_gserx_sata_lane_rst_t lane_rst;
+	cvmx_gserx_pll_px_mode_0_t pmode_0;
+	cvmx_gserx_pll_px_mode_1_t pmode_1;
+	cvmx_gserx_lane_px_mode_0_t lane_pmode_0;
+	cvmx_gserx_lane_px_mode_1_t lane_pmode_1;
+	cvmx_gserx_cfg_t gserx_cfg;
+	cvmx_sata_uctl_ctl_t uctl_ctl;
+	int l;
+	int i;
+
+	/*
+	 * 1. Configure the SATA
+	 */
+
+	/*
+	 * 2. Configure the QLM Reference clock
+	 *    Set GSERX_REFCLK_SEL.COM_CLK_SEL to source reference clock
+	 *    from the external clock mux.
+	 *      GSERX_REFCLK_SEL.USE_COM1 to select qlmc_refclkn/p_1 or
+	 *      leave clear to select qlmc_refclkn/p_0
+	 */
+	refclk_sel.u64 = 0;
+	if (ref_clk_input == 0) { /* External ref clock */
+		refclk_sel.s.com_clk_sel = 0;
+		refclk_sel.s.use_com1 = 0;
+	} else if (ref_clk_input == 1) { /* Common reference clock 0 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 0;
+	} else { /* Common reference clock 1 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 1;
+	}
+
+	if (ref_clk_sel != 0) {
+		printf("Wrong reference clock selected for QLM4\n");
+		return -1;
+	}
+
+	csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+
+	/* Reset the QLM after changing the reference clock */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	udelay(1);
+
+	/*
+	 * 3. Configure the QLM for SATA mode set GSERX_CFG.SATA
+	 */
+	gserx_cfg.u64 = 0;
+	gserx_cfg.s.sata = 1;
+	csr_wr(CVMX_GSERX_CFG(qlm), gserx_cfg.u64);
+
+	/*
+	 * 12. Clear the appropriate lane resets
+	 *     clear GSERX_SATA_LANE_RST.LX_RST  where X is the lane number 0-1.
+	 */
+	lane_rst.u64 = csr_rd(CVMX_GSERX_SATA_LANE_RST(qlm));
+	lane_rst.s.l0_rst = 0;
+	lane_rst.s.l1_rst = 0;
+	csr_wr(CVMX_GSERX_SATA_LANE_RST(qlm), lane_rst.u64);
+	csr_rd(CVMX_GSERX_SATA_LANE_RST(qlm));
+
+	udelay(1);
+
+	/*
+	 * 4. Take the PHY out of reset
+	 *    Write GSERX_PHY_CTL.PHY_RESET to a zero
+	 */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/* Wait for reset to complete and the PLL to lock */
+	/* PCIe mode doesn't become ready until the PEM block attempts to bring
+	 * the interface up. Skip this check for PCIe
+	 */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm), cvmx_gserx_qlm_stat_t,
+				  rst_rdy, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", qlm);
+		return -1;
+	}
+
+	/* Workaround for errata GSER-30310: SATA HDD Not Ready due to
+	 * PHY SDLL/LDLL lockup @ 3 GHz
+	 */
+	for (i = 0; i < 2; i++) {
+		cvmx_gserx_slicex_pcie1_mode_t pcie1;
+		cvmx_gserx_slicex_pcie2_mode_t pcie2;
+		cvmx_gserx_slicex_pcie3_mode_t pcie3;
+
+		pcie1.u64 = csr_rd(CVMX_GSERX_SLICEX_PCIE1_MODE(i, qlm));
+		pcie1.s.rx_pi_bwsel = 1;
+		pcie1.s.rx_ldll_bwsel = 1;
+		pcie1.s.rx_sdll_bwsel = 1;
+		csr_wr(CVMX_GSERX_SLICEX_PCIE1_MODE(i, qlm), pcie1.u64);
+
+		pcie2.u64 = csr_rd(CVMX_GSERX_SLICEX_PCIE2_MODE(i, qlm));
+		pcie2.s.rx_pi_bwsel = 1;
+		pcie2.s.rx_ldll_bwsel = 1;
+		pcie2.s.rx_sdll_bwsel = 1;
+		csr_wr(CVMX_GSERX_SLICEX_PCIE2_MODE(i, qlm), pcie2.u64);
+
+		pcie3.u64 = csr_rd(CVMX_GSERX_SLICEX_PCIE3_MODE(i, qlm));
+		pcie3.s.rx_pi_bwsel = 1;
+		pcie3.s.rx_ldll_bwsel = 1;
+		pcie3.s.rx_sdll_bwsel = 1;
+		csr_wr(CVMX_GSERX_SLICEX_PCIE3_MODE(i, qlm), pcie3.u64);
+	}
+
+	/*
+	 * 7. Change P2 termination
+	 *    Clear GSERX_RX_PWR_CTRL_P2.P2_RX_SUBBLK_PD[0] (Termination)
+	 */
+	pwr_ctrl_p2.u64 = csr_rd(CVMX_GSERX_RX_PWR_CTRL_P2(qlm));
+	pwr_ctrl_p2.s.p2_rx_subblk_pd &= 0x1e;
+	csr_wr(CVMX_GSERX_RX_PWR_CTRL_P2(qlm), pwr_ctrl_p2.u64);
+
+	/*
+	 * 8. Modify the Electrical IDLE Detect on delay
+	 *    Change GSERX_LANE(0..3)_MISC_CFG_0.EIE_DET_STL_ON_TIME to a 0x4
+	 */
+	for (i = 0; i < 2; i++) {
+		misc_cfg_0.u64 = csr_rd(CVMX_GSERX_LANEX_MISC_CFG_0(i, qlm));
+		misc_cfg_0.s.eie_det_stl_on_time = 4;
+		csr_wr(CVMX_GSERX_LANEX_MISC_CFG_0(i, qlm), misc_cfg_0.u64);
+	}
+
+	/*
+	 * 9. Modify the PLL and Lane Protocol Mode registers to configure
+	 *    the PHY for SATA.
+	 *    (Configure all 3 PLLs; it doesn't matter which speed is used)
+	 */
+
+	/* Errata (GSER-26724) SATA never indicates GSER QLM_STAT[RST_RDY]
+	 * We program PLL_PX_MODE_0 last due to this errata
+	 */
+	for (l = 0; l < 3; l++) {
+		pmode_1.u64 = csr_rd(CVMX_GSERX_PLL_PX_MODE_1(l, qlm));
+		lane_pmode_0.u64 = csr_rd(CVMX_GSERX_LANE_PX_MODE_0(l, qlm));
+		lane_pmode_1.u64 = csr_rd(CVMX_GSERX_LANE_PX_MODE_1(l, qlm));
+
+		pmode_1.s.pll_cpadj = 0x2;
+		pmode_1.s.pll_opr = 0x0;
+		pmode_1.s.pll_div = 0x1e;
+		pmode_1.s.pll_pcie3en = 0x0;
+		pmode_1.s.pll_16p5en = 0x0;
+
+		lane_pmode_0.s.ctle = 0x0;
+		lane_pmode_0.s.pcie = 0x0;
+		lane_pmode_0.s.tx_ldiv = 0x0;
+		lane_pmode_0.s.srate = 0;
+		lane_pmode_0.s.tx_mode = 0x3;
+		lane_pmode_0.s.rx_mode = 0x3;
+
+		lane_pmode_1.s.vma_mm = 1;
+		lane_pmode_1.s.vma_fine_cfg_sel = 0;
+		lane_pmode_1.s.cdr_fgain = 0xa;
+		lane_pmode_1.s.ph_acc_adj = 0x15;
+
+		if (l == R_2_5G_REFCLK100)
+			lane_pmode_0.s.rx_ldiv = 0x2;
+		else if (l == R_5G_REFCLK100)
+			lane_pmode_0.s.rx_ldiv = 0x1;
+		else
+			lane_pmode_0.s.rx_ldiv = 0x0;
+
+		csr_wr(CVMX_GSERX_PLL_PX_MODE_1(l, qlm), pmode_1.u64);
+		csr_wr(CVMX_GSERX_LANE_PX_MODE_0(l, qlm), lane_pmode_0.u64);
+		csr_wr(CVMX_GSERX_LANE_PX_MODE_1(l, qlm), lane_pmode_1.u64);
+	}
+
+	for (l = 0; l < 3; l++) {
+		pmode_0.u64 = csr_rd(CVMX_GSERX_PLL_PX_MODE_0(l, qlm));
+		pmode_0.s.pll_icp = 0x1;
+		pmode_0.s.pll_rloop = 0x3;
+		pmode_0.s.pll_pcs_div = 0x5;
+		csr_wr(CVMX_GSERX_PLL_PX_MODE_0(l, qlm), pmode_0.u64);
+	}
+
+	for (i = 0; i < 2; i++) {
+		cvmx_gserx_slicex_rx_sdll_ctrl_t rx_sdll;
+
+		rx_sdll.u64 = csr_rd(CVMX_GSERX_SLICEX_RX_SDLL_CTRL(i, qlm));
+		rx_sdll.s.pcs_sds_oob_clk_ctrl = 2;
+		rx_sdll.s.pcs_sds_rx_sdll_tune = 0;
+		rx_sdll.s.pcs_sds_rx_sdll_swsel = 0;
+		csr_wr(CVMX_GSERX_SLICEX_RX_SDLL_CTRL(i, qlm), rx_sdll.u64);
+	}
+
+	for (i = 0; i < 2; i++) {
+		cvmx_gserx_lanex_misc_cfg_0_t misc_cfg;
+
+		misc_cfg.u64 = csr_rd(CVMX_GSERX_LANEX_MISC_CFG_0(i, qlm));
+		misc_cfg.s.use_pma_polarity = 0;
+		misc_cfg.s.cfg_pcs_loopback = 0;
+		misc_cfg.s.pcs_tx_mode_ovrrd_en = 0;
+		misc_cfg.s.pcs_rx_mode_ovrrd_en = 0;
+		misc_cfg.s.cfg_eie_det_cnt = 0;
+		misc_cfg.s.eie_det_stl_on_time = 4;
+		misc_cfg.s.eie_det_stl_off_time = 0;
+		misc_cfg.s.tx_bit_order = 1;
+		misc_cfg.s.rx_bit_order = 1;
+		csr_wr(CVMX_GSERX_LANEX_MISC_CFG_0(i, qlm), misc_cfg.u64);
+	}
+
+	/* Wait for reset to complete and the PLL to lock */
+	/* PCIe mode doesn't become ready until the PEM block attempts to bring
+	 * the interface up. Skip this check for PCIe
+	 */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm), cvmx_gserx_qlm_stat_t,
+				  rst_rdy, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", qlm);
+		return -1;
+	}
+
+	/* Poll GSERX_SATA_STATUS for P0_RDY = 1 */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_SATA_STATUS(qlm), cvmx_gserx_sata_status_t,
+				  p0_rdy, ==, 1, 10000)) {
+		printf("QLM4: Timeout waiting for GSERX_SATA_STATUS[p0_rdy]\n");
+		return -1;
+	}
+
+	/* Poll GSERX_SATA_STATUS for P1_RDY = 1 */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_SATA_STATUS(qlm), cvmx_gserx_sata_status_t,
+				  p1_rdy, ==, 1, 10000)) {
+		printf("QLM4: Timeout waiting for GSERX_SATA_STATUS[p1_rdy]\n");
+		return -1;
+	}
+
+	udelay(2000);
+
+	/* 6. Deassert UCTL and UAHC resets:
+	 *    a. SATA_UCTL_CTL[UCTL_RST] = 0
+	 *    b. SATA_UCTL_CTL[UAHC_RST] = 0
+	 *    c. Wait 10 ACLK cycles before accessing any ACLK-only registers.
+	 */
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.sata_uctl_rst = 0;
+	uctl_ctl.s.sata_uahc_rst = 0;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	udelay(1);
+
+	/* 7. Enable conditional SCLK of UCTL by writing
+	 *    SATA_UCTL_CTL[CSCLK_EN] = 1
+	 */
+	uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+	uctl_ctl.s.csclk_en = 1;
+	csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+	/* set-up endian mode */
+	shim_cfg.u64 = csr_rd(CVMX_SATA_UCTL_SHIM_CFG);
+	shim_cfg.s.dma_endian_mode = 1;
+	shim_cfg.s.csr_endian_mode = 3;
+	csr_wr(CVMX_SATA_UCTL_SHIM_CFG, shim_cfg.u64);
+
+	return 0;
+}
+
+static int __dlm2_sata_uahc_init_cn70xx(int baud_mhz)
+{
+	cvmx_sata_uahc_gbl_cap_t gbl_cap;
+	cvmx_sata_uahc_px_sctl_t sctl;
+	cvmx_sata_uahc_gbl_pi_t pi;
+	cvmx_sata_uahc_px_cmd_t cmd;
+	cvmx_sata_uahc_px_sctl_t sctl0, sctl1;
+	cvmx_sata_uahc_px_ssts_t ssts;
+	cvmx_sata_uahc_px_tfd_t tfd;
+	cvmx_sata_uahc_gbl_timer1ms_t gbl_timer1ms;
+	u64 done;
+	int result = -1;
+	int retry_count = 0;
+	int spd;
+
+	/* From the Synopsys data book, SATA_UAHC_GBL_TIMER1MS is the
+	 * AMBA clock in MHz * 1000, which is a_clk(Hz) / 1000
+	 */
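+	/* Worked example (illustrative numbers only): with a 300 MHz ACLK,
+	 * timv = 300000000 / 1000 = 300000, i.e. 300 (MHz) * 1000.
+	 */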
+	gbl_timer1ms.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_TIMER1MS);
+	gbl_timer1ms.s.timv = a_clk / 1000;
+	csr_wr32(CVMX_SATA_UAHC_GBL_TIMER1MS, gbl_timer1ms.u32);
+	gbl_timer1ms.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_TIMER1MS);
+
+	/* Set-up global capabilities reg (GBL_CAP) */
+	gbl_cap.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_CAP);
+	debug("%s: SATA_UAHC_GBL_CAP before: 0x%x\n", __func__, gbl_cap.u32);
+	gbl_cap.s.sss = 1;
+	gbl_cap.s.smps = 1;
+	csr_wr32(CVMX_SATA_UAHC_GBL_CAP, gbl_cap.u32);
+	gbl_cap.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_CAP);
+	debug("%s: SATA_UAHC_GBL_CAP after: 0x%x\n", __func__, gbl_cap.u32);
+
+	/* Set-up global hba control reg (interrupt enables) */
+	/* Set-up port SATA control registers (speed limitation) */
+	if (baud_mhz == 1500)
+		spd = 1;
+	else if (baud_mhz == 3000)
+		spd = 2;
+	else
+		spd = 3;
+
+	sctl.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(0));
+	debug("%s: SATA_UAHC_P0_SCTL before: 0x%x\n", __func__, sctl.u32);
+	sctl.s.spd = spd;
+	csr_wr32(CVMX_SATA_UAHC_PX_SCTL(0), sctl.u32);
+	sctl.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(0));
+	debug("%s: SATA_UAHC_P0_SCTL after: 0x%x\n", __func__, sctl.u32);
+	sctl.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(1));
+	debug("%s: SATA_UAHC_P1_SCTL before: 0x%x\n", __func__, sctl.u32);
+	sctl.s.spd = spd;
+	csr_wr32(CVMX_SATA_UAHC_PX_SCTL(1), sctl.u32);
+	sctl.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(1));
+	debug("%s: SATA_UAHC_P1_SCTL after: 0x%x\n", __func__, sctl.u32);
+
+	/* Set-up ports implemented reg. */
+	pi.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_PI);
+	debug("%s: SATA_UAHC_GBL_PI before: 0x%x\n", __func__, pi.u32);
+	pi.s.pi = 3;
+	csr_wr32(CVMX_SATA_UAHC_GBL_PI, pi.u32);
+	pi.u32 = csr_rd32(CVMX_SATA_UAHC_GBL_PI);
+	debug("%s: SATA_UAHC_GBL_PI after: 0x%x\n", __func__, pi.u32);
+
+retry0:
+	/* Clear port SERR and IS registers */
+	csr_wr32(CVMX_SATA_UAHC_PX_SERR(0), csr_rd32(CVMX_SATA_UAHC_PX_SERR(0)));
+	csr_wr32(CVMX_SATA_UAHC_PX_IS(0), csr_rd32(CVMX_SATA_UAHC_PX_IS(0)));
+
+	/* Set spin-up, power on, FIS RX enable, start, active */
+	cmd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_CMD(0));
+	debug("%s: SATA_UAHC_P0_CMD before: 0x%x\n", __func__, cmd.u32);
+	cmd.s.fre = 1;
+	cmd.s.sud = 1;
+	cmd.s.pod = 1;
+	cmd.s.st = 1;
+	cmd.s.icc = 1;
+	cmd.s.fbscp = 1; /* Enable FIS-based switching */
+	csr_wr32(CVMX_SATA_UAHC_PX_CMD(0), cmd.u32);
+	cmd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_CMD(0));
+	debug("%s: SATA_UAHC_P0_CMD after: 0x%x\n", __func__, cmd.u32);
+
+	sctl0.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(0));
+	sctl0.s.det = 1;
+	csr_wr32(CVMX_SATA_UAHC_PX_SCTL(0), sctl0.u32);
+
+	/* check status */
+	done = get_timer(0);
+	while (1) {
+		ssts.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SSTS(0));
+
+		if (ssts.s.ipm == 1 && ssts.s.det == 3) {
+			result = 0;
+			break;
+		} else if (get_timer(done) > 100) {
+			result = -1;
+			break;
+		}
+
+		udelay(100);
+	}
+
+	if (result != -1) {
+		/* Clear the PxSERR Register, by writing '1s' to each
+		 * implemented bit location
+		 */
+		csr_wr32(CVMX_SATA_UAHC_PX_SERR(0), -1);
+
+		/*
+		 * Wait for indication that SATA drive is ready. This is
+		 * determined via an examination of PxTFD.STS. If PxTFD.STS.BSY,
+		 * PxTFD.STS.DRQ, and PxTFD.STS.ERR are all '0' prior to the
+		 * maximum allowed time as specified in the ATA/ATAPI-7
+		 * specification, the device is ready.
+		 */
+		/*
+		 * Wait for the device to be ready. BSY(7), DRQ(3), and ERR(0)
+		 * must be clear
+		 */
+		done = get_timer(0);
+		while (1) {
+			tfd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_TFD(0));
+			if ((tfd.s.sts & 0x89) == 0) {
+				result = 0;
+				break;
+			} else if (get_timer(done) > 500) {
+				if (retry_count < 3) {
+					sctl0.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(0));
+					sctl0.s.det = 1; /* Perform interface reset */
+					csr_wr32(CVMX_SATA_UAHC_PX_SCTL(0), sctl0.u32);
+					udelay(1000); /* 1ms dictated by AHCI 1.3 spec */
+					sctl0.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(0));
+					sctl0.s.det = 0; /* Complete interface reset */
+					csr_wr32(CVMX_SATA_UAHC_PX_SCTL(0), sctl0.u32);
+					retry_count++;
+					goto retry0;
+				}
+				result = -1;
+				break;
+			}
+
+			udelay(100);
+		}
+	}
+
+	if (result == -1)
+		printf("SATA0: not available\n");
+	else
+		printf("SATA0: available\n");
+
+	sctl1.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(1));
+	sctl1.s.det = 1;
+	csr_wr32(CVMX_SATA_UAHC_PX_SCTL(1), sctl1.u32);
+
+	result = -1;
+	retry_count = 0;
+
+retry1:
+	/* Clear port SERR and IS registers */
+	csr_wr32(CVMX_SATA_UAHC_PX_SERR(1), csr_rd32(CVMX_SATA_UAHC_PX_SERR(1)));
+	csr_wr32(CVMX_SATA_UAHC_PX_IS(1), csr_rd32(CVMX_SATA_UAHC_PX_IS(1)));
+
+	/* Set spin-up, power on, FIS RX enable, start, active */
+	cmd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_CMD(1));
+	debug("%s: SATA_UAHC_P1_CMD before: 0x%x\n", __func__, cmd.u32);
+	cmd.s.fre = 1;
+	cmd.s.sud = 1;
+	cmd.s.pod = 1;
+	cmd.s.st = 1;
+	cmd.s.icc = 1;
+	cmd.s.fbscp = 1; /* Enable FIS-based switching */
+	csr_wr32(CVMX_SATA_UAHC_PX_CMD(1), cmd.u32);
+	cmd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_CMD(1));
+	debug("%s: SATA_UAHC_P1_CMD after: 0x%x\n", __func__, cmd.u32);
+
+	/* check status */
+	done = get_timer(0);
+	while (1) {
+		ssts.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SSTS(1));
+
+		if (ssts.s.ipm == 1 && ssts.s.det == 3) {
+			result = 0;
+			break;
+		} else if (get_timer(done) > 1000) {
+			result = -1;
+			break;
+		}
+
+		udelay(100);
+	}
+
+	if (result != -1) {
+		/* Clear the PxSERR Register, by writing '1s' to each
+		 * implemented bit location
+		 */
+		csr_wr32(CVMX_SATA_UAHC_PX_SERR(1), csr_rd32(CVMX_SATA_UAHC_PX_SERR(1)));
+
+		/*
+		 * Wait for indication that SATA drive is ready. This is
+		 * determined via an examination of PxTFD.STS. If PxTFD.STS.BSY,
+		 * PxTFD.STS.DRQ, and PxTFD.STS.ERR are all '0' prior to the
+		 * maximum allowed time as specified in the ATA/ATAPI-7
+		 * specification, the device is ready.
+		 */
+		/*
+		 * Wait for the device to be ready. BSY(7), DRQ(3), and ERR(0)
+		 * must be clear
+		 */
+		done = get_timer(0);
+		while (1) {
+			tfd.u32 = csr_rd32(CVMX_SATA_UAHC_PX_TFD(1));
+			if ((tfd.s.sts & 0x89) == 0) {
+				result = 0;
+				break;
+			} else if (get_timer(done) > 500) {
+				if (retry_count < 3) {
+					sctl0.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(1));
+					sctl0.s.det = 1; /* Perform interface reset */
+					csr_wr32(CVMX_SATA_UAHC_PX_SCTL(1), sctl0.u32);
+					udelay(1000); /* 1ms dictated by AHCI 1.3 spec */
+					sctl0.u32 = csr_rd32(CVMX_SATA_UAHC_PX_SCTL(1));
+					sctl0.s.det = 0; /* Complete interface reset */
+					csr_wr32(CVMX_SATA_UAHC_PX_SCTL(1), sctl0.u32);
+					retry_count++;
+					goto retry1;
+				}
+				result = -1;
+				break;
+			}
+
+			udelay(100);
+		}
+	}
+
+	if (result == -1)
+		printf("SATA1: not available\n");
+	else
+		printf("SATA1: available\n");
+
+	return 0;
+}
+
+static int __sata_bist_cn70xx(int qlm, int baud_mhz, int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_sata_uctl_bist_status_t bist_status;
+	cvmx_sata_uctl_ctl_t uctl_ctl;
+	cvmx_sata_uctl_shim_cfg_t shim_cfg;
+	u64 done;
+	int result = -1;
+
+	debug("%s(%d, %d, %d, %d)\n", __func__, qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+	bist_status.u64 = csr_rd(CVMX_SATA_UCTL_BIST_STATUS);
+
+	{
+		if (__dlm2_sata_uctl_init_cn70xx()) {
+			printf("ERROR: Failed to initialize SATA UCTL CSRs\n");
+			return -1;
+		}
+		if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+			result = __sata_dlm_init_cn73xx(qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+		else
+			result = __sata_dlm_init_cn70xx(qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+		if (result) {
+			printf("ERROR: Failed to initialize SATA GSER CSRs\n");
+			return -1;
+		}
+
+		uctl_ctl.u64 = csr_rd(CVMX_SATA_UCTL_CTL);
+		uctl_ctl.s.start_bist = 1;
+		csr_wr(CVMX_SATA_UCTL_CTL, uctl_ctl.u64);
+
+		/* Set-up for a 1 sec timer. */
+		done = get_timer(0);
+		while (1) {
+			bist_status.u64 = csr_rd(CVMX_SATA_UCTL_BIST_STATUS);
+			if ((bist_status.s.uctl_xm_r_bist_ndone |
+			     bist_status.s.uctl_xm_w_bist_ndone |
+			     bist_status.s.uahc_p0_rxram_bist_ndone |
+			     bist_status.s.uahc_p1_rxram_bist_ndone |
+			     bist_status.s.uahc_p0_txram_bist_ndone |
+			     bist_status.s.uahc_p1_txram_bist_ndone) == 0) {
+				result = 0;
+				break;
+			} else if (get_timer(done) > 1000) {
+				result = -1;
+				break;
+			}
+
+			udelay(100);
+		}
+		if (result == -1) {
+			printf("ERROR: SATA_UCTL_BIST_STATUS = 0x%llx\n",
+			       (unsigned long long)bist_status.u64);
+			return -1;
+		}
+
+		debug("%s: Initializing UAHC\n", __func__);
+		if (__dlm2_sata_uahc_init_cn70xx(baud_mhz)) {
+			printf("ERROR: Failed to initialize SATA UAHC CSRs\n");
+			return -1;
+		}
+	}
+
+	/* Change CSR_ENDIAN_MODE to big endian to use Open Source AHCI SATA
+	 * driver
+	 */
+	shim_cfg.u64 = csr_rd(CVMX_SATA_UCTL_SHIM_CFG);
+	shim_cfg.s.csr_endian_mode = 1;
+	csr_wr(CVMX_SATA_UCTL_SHIM_CFG, shim_cfg.u64);
+
+	return 0;
+}
+
+static int __setup_sata(int qlm, int baud_mhz, int ref_clk_sel, int ref_clk_input)
+{
+	debug("%s(%d, %d, %d, %d)\n", __func__, qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+	return __sata_bist_cn70xx(qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+}
+
+static int __dlmx_setup_pcie_cn70xx(int qlm, enum cvmx_qlm_mode mode, int gen2, int rc,
+				    int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_gserx_dlmx_phy_reset_t dlmx_phy_reset;
+	cvmx_gserx_dlmx_test_powerdown_t dlmx_test_powerdown;
+	cvmx_gserx_dlmx_mpll_multiplier_t mpll_multiplier;
+	cvmx_gserx_dlmx_ref_clkdiv2_t ref_clkdiv2;
+	static const u8 ref_clk_mult[2] = { 35, 56 }; /* 100 & 125 MHz ref clock supported. */
+
+	debug("%s(%d, %d, %d, %d, %d, %d)\n", __func__, qlm, mode, gen2, rc, ref_clk_sel,
+	      ref_clk_input);
+	if (rc == 0) {
+		debug("Skipping initializing PCIe dlm %d in endpoint mode\n", qlm);
+		return 0;
+	}
+
+	if (qlm > 0 && ref_clk_input > 1) {
+		printf("%s: Error: ref_clk_input can only be 0 or 1 for QLM %d\n",
+		       __func__, qlm);
+		return -1;
+	}
+
+	if (ref_clk_sel > OCTEON_QLM_REF_CLK_125MHZ) {
+		printf("%s: Error: ref_clk_sel can only be 100 or 125 MHz.\n", __func__);
+		return -1;
+	}
+
+	/* 1. Write GSER0_DLM(1..2)_REFCLK_SEL[REFCLK_SEL] if required for
+	 *    reference-clock selection
+	 */
+
+	csr_wr(CVMX_GSERX_DLMX_REFCLK_SEL(qlm, 0), ref_clk_input);
+
+	/* 2. If required, write GSER0_DLM(1..2)_REF_CLKDIV2[REF_CLKDIV2] = 1
+	 *    (must be set if reference clock >= 100 MHz)
+	 */
+
+	/* 4. Configure the PCIE PIPE:
+	 *  a. Write GSER0_PCIE_PIPE_PORT_SEL[PIPE_PORT_SEL] to configure the
+	 *     PCIE PIPE.
+	 *	0x0 = disables all pipes
+	 *	0x1 = enables pipe0 only (PEM0 4-lane)
+	 *	0x2 = enables pipes 0 and 1 (PEM0 and PEM1 2-lanes each)
+	 *	0x3 = enables pipes 0, 1, 2, and 3 (PEM0, PEM1, and PEM2 are
+	 *	      one-lane each)
+	 *  b. Configure GSER0_PCIE_PIPE_PORT_SEL[CFG_PEM1_DLM2]. If PEM1 is
+	 *     to be configured, this bit must reflect which DLM it is logically
+	 *     tied to. This bit sets multiplexing logic in GSER, and it is used
+	 *     by the RST logic to determine when the MAC can come out of reset.
+	 *	0 = PEM1 is tied to DLM1 (for 3 x 1 PCIe mode).
+	 *	1 = PEM1 is tied to DLM2 (for all other PCIe modes).
+	 */
+	if (qlm == 1) {
+		cvmx_gserx_pcie_pipe_port_sel_t pipe_port;
+
+		pipe_port.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_PORT_SEL(0));
+		pipe_port.s.cfg_pem1_dlm2 = (mode == CVMX_QLM_MODE_PCIE_1X1) ? 1 : 0;
+		pipe_port.s.pipe_port_sel =
+				(mode == CVMX_QLM_MODE_PCIE) ? 1 : /* PEM0 only */
+				(mode == CVMX_QLM_MODE_PCIE_1X2) ? 2 : /* PEM0-1 */
+				(mode == CVMX_QLM_MODE_PCIE_1X1) ? 3 : /* PEM0-2 */
+				(mode == CVMX_QLM_MODE_PCIE_2X1) ? 3 : /* PEM0-1 */
+				0; /* PCIe disabled */
+		csr_wr(CVMX_GSERX_PCIE_PIPE_PORT_SEL(0), pipe_port.u64);
+	}
+
+	/* Apply workaround for Errata (G-20669) MPLL may not come up. */
+
+	/* Set REF_CLKDIV2 based on the Ref Clock */
+	ref_clkdiv2.u64 = csr_rd(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0));
+	ref_clkdiv2.s.ref_clkdiv2 = ref_clk_sel > 0;
+	csr_wr(CVMX_GSERX_DLMX_REF_CLKDIV2(qlm, 0), ref_clkdiv2.u64);
+
+	/* 1. Ensure GSER(0)_DLM(0..2)_PHY_RESET[PHY_RESET] is set. */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 2. If SGMII or QSGMII or RXAUI (i.e. if DLM0) set
+	 *    GSER(0)_DLM(0)_MPLL_EN[MPLL_EN] to one.
+	 */
+
+	/* 3. Set GSER(0)_DLM(0..2)_MPLL_MULTIPLIER[MPLL_MULTIPLIER]
+	 *    to the value in the preceding table, which is different
+	 *    than the desired setting prescribed by the HRM.
+	 */
+	mpll_multiplier.u64 = csr_rd(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0));
+	mpll_multiplier.s.mpll_multiplier = ref_clk_mult[ref_clk_sel];
+	debug("%s: Setting MPLL multiplier to %d\n", __func__,
+	      (int)mpll_multiplier.s.mpll_multiplier);
+	csr_wr(CVMX_GSERX_DLMX_MPLL_MULTIPLIER(qlm, 0), mpll_multiplier.u64);
+	/* 5. Clear GSER0_DLM(1..2)_TEST_POWERDOWN. Configurations that only
+	 *    use DLM1 need not clear GSER0_DLM2_TEST_POWERDOWN
+	 */
+	dlmx_test_powerdown.u64 = csr_rd(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0));
+	dlmx_test_powerdown.s.test_powerdown = 0;
+	csr_wr(CVMX_GSERX_DLMX_TEST_POWERDOWN(qlm, 0), dlmx_test_powerdown.u64);
+
+	/* 6. Clear GSER0_DLM(1..2)_PHY_RESET. Configurations that only use
+	 *    DLM1 need not clear GSER0_DLM2_PHY_RESET
+	 */
+	dlmx_phy_reset.u64 = csr_rd(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0));
+	dlmx_phy_reset.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_DLMX_PHY_RESET(qlm, 0), dlmx_phy_reset.u64);
+
+	/* 6. Decrease MPLL_MULTIPLIER by one continually until it reaches
+	 *    the desired long-term setting, ensuring that each MPLL_MULTIPLIER
+	 *   value is constant for at least 1 msec before changing to the next
+	 *   value. The desired long-term setting is as indicated in HRM tables
+	 *   21-1, 21-2, and 21-3. This is not required with the HRM
+	 *   sequence.
+	 */
+	/* This is set when initializing PCIe after soft reset is asserted. */
+
+	/* 7. Write the GSER0_PCIE_PIPE_RST register to take the appropriate
+	 *    PIPE out of reset. There is a PIPEn_RST bit for each PIPE. Clear
+	 *    the appropriate bits based on the configuration (reset is
+	 *     active high).
+	 */
+	if (qlm == 1) {
+		cvmx_pemx_cfg_t pemx_cfg;
+		cvmx_pemx_on_t pemx_on;
+		cvmx_gserx_pcie_pipe_rst_t pipe_rst;
+		cvmx_rst_ctlx_t rst_ctl;
+
+		switch (mode) {
+		case CVMX_QLM_MODE_PCIE:     /* PEM0 on DLM1 & DLM2 */
+		case CVMX_QLM_MODE_PCIE_1X2: /* PEM0 on DLM1 */
+		case CVMX_QLM_MODE_PCIE_1X1: /* PEM0 on DLM1 using lane 0 */
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+			pemx_cfg.cn70xx.hostmd = rc;
+			if (mode == CVMX_QLM_MODE_PCIE_1X1) {
+				pemx_cfg.cn70xx.md =
+					gen2 ? CVMX_PEM_MD_GEN2_1LANE : CVMX_PEM_MD_GEN1_1LANE;
+			} else if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_cfg.cn70xx.md =
+					gen2 ? CVMX_PEM_MD_GEN2_4LANE : CVMX_PEM_MD_GEN1_4LANE;
+			} else {
+				pemx_cfg.cn70xx.md =
+					gen2 ? CVMX_PEM_MD_GEN2_2LANE : CVMX_PEM_MD_GEN1_2LANE;
+			}
+			csr_wr(CVMX_PEMX_CFG(0), pemx_cfg.u64);
+
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(0));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(0), rst_ctl.u64);
+
+			/* PEM0 is on DLM1&2 which is pipe0 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe0_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+			break;
+		case CVMX_QLM_MODE_PCIE_2X1: /* PEM0 and PEM1 on DLM1 */
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+			pemx_cfg.cn70xx.hostmd = rc;
+			pemx_cfg.cn70xx.md = gen2 ? CVMX_PEM_MD_GEN2_1LANE : CVMX_PEM_MD_GEN1_1LANE;
+			csr_wr(CVMX_PEMX_CFG(0), pemx_cfg.u64);
+
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(0));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(0), rst_ctl.u64);
+
+			/* PEM0 is on DLM1 which is pipe0 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe0_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+			pemx_cfg.cn70xx.hostmd = 1;
+			pemx_cfg.cn70xx.md = gen2 ? CVMX_PEM_MD_GEN2_1LANE : CVMX_PEM_MD_GEN1_1LANE;
+			csr_wr(CVMX_PEMX_CFG(1), pemx_cfg.u64);
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(1));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(1), rst_ctl.u64);
+			/* PEM1 is on DLM2 which is pipe1 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe1_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(1));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(1), pemx_on.u64);
+			break;
+		default:
+			break;
+		}
+	} else {
+		cvmx_pemx_cfg_t pemx_cfg;
+		cvmx_pemx_on_t pemx_on;
+		cvmx_gserx_pcie_pipe_rst_t pipe_rst;
+		cvmx_rst_ctlx_t rst_ctl;
+
+		switch (mode) {
+		case CVMX_QLM_MODE_PCIE_1X2: /* PEM1 on DLM2 */
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+			pemx_cfg.cn70xx.hostmd = 1;
+			pemx_cfg.cn70xx.md = gen2 ? CVMX_PEM_MD_GEN2_2LANE : CVMX_PEM_MD_GEN1_2LANE;
+			csr_wr(CVMX_PEMX_CFG(1), pemx_cfg.u64);
+
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(1));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(1), rst_ctl.u64);
+
+			/* PEM1 is on DLM2, which is pipe1 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe1_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(1));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(1), pemx_on.u64);
+			break;
+		case CVMX_QLM_MODE_PCIE_2X1: /* PEM1 and PEM2 on DLM2 */
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+			pemx_cfg.cn70xx.hostmd = 1;
+			pemx_cfg.cn70xx.md = gen2 ? CVMX_PEM_MD_GEN2_1LANE : CVMX_PEM_MD_GEN1_1LANE;
+			csr_wr(CVMX_PEMX_CFG(1), pemx_cfg.u64);
+
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(1));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(1), rst_ctl.u64);
+
+			/* PEM1 is on DLM2 lane 0, which is pipe2 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe2_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(1));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(1), pemx_on.u64);
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+			pemx_cfg.cn70xx.hostmd = 1;
+			pemx_cfg.cn70xx.md = gen2 ? CVMX_PEM_MD_GEN2_1LANE : CVMX_PEM_MD_GEN1_1LANE;
+			csr_wr(CVMX_PEMX_CFG(2), pemx_cfg.u64);
+
+			rst_ctl.u64 = csr_rd(CVMX_RST_CTLX(2));
+			rst_ctl.s.rst_drv = 1;
+			csr_wr(CVMX_RST_CTLX(2), rst_ctl.u64);
+
+			/* PEM2 is on DLM2 lane 1, which is pipe3 */
+			pipe_rst.u64 = csr_rd(CVMX_GSERX_PCIE_PIPE_RST(0));
+			pipe_rst.s.pipe3_rst = 0;
+			csr_wr(CVMX_GSERX_PCIE_PIPE_RST(0), pipe_rst.u64);
+
+			pemx_on.u64 = csr_rd(CVMX_PEMX_ON(2));
+			pemx_on.s.pemon = 1;
+			csr_wr(CVMX_PEMX_ON(2), pemx_on.u64);
+			break;
+		default:
+			break;
+		}
+	}
+	return 0;
+}
+
+/**
+ * Configure dlm speed and mode for cn70xx.
+ *
+ * @param qlm     The DLM to configure
+ * @param speed   The speed the DLM needs to be configured at, in MHz.
+ * @param mode    The DLM to be configured as SGMII/XAUI/PCIe.
+ *                  DLM 0: has 2 interfaces which can be configured as
+ *                         SGMII/QSGMII/RXAUI. Need to configure both at the
+ *                         same time. These are the valid options:
+ *				CVMX_QLM_MODE_QSGMII,
+ *				CVMX_QLM_MODE_SGMII_SGMII,
+ *				CVMX_QLM_MODE_SGMII_DISABLED,
+ *				CVMX_QLM_MODE_DISABLED_SGMII,
+ *				CVMX_QLM_MODE_SGMII_QSGMII,
+ *				CVMX_QLM_MODE_QSGMII_QSGMII,
+ *				CVMX_QLM_MODE_QSGMII_DISABLED,
+ *				CVMX_QLM_MODE_DISABLED_QSGMII,
+ *				CVMX_QLM_MODE_QSGMII_SGMII,
+ *				CVMX_QLM_MODE_RXAUI_1X2
+ *
+ *                  DLM 1: PEM0/1 in PCIE_1x4/PCIE_2x1/PCIE_1x1
+ *                  DLM 2: PEM0/1/2 in PCIE_1x4/PCIE_1x2/PCIE_2x1/PCIE_1x1
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP mode.
+ * @param gen2    Only used for PCIe, gen2 = 1, in GEN2 mode else in GEN1 mode.
+ *
+ * @param ref_clk_input  The reference-clock input to use to configure QLM
+ * @param ref_clk_sel    The reference-clock selection to use to configure QLM
+ *
+ * @return       Return 0 on success or -1 on error.
+ */
+static int octeon_configure_qlm_cn70xx(int qlm, int speed, int mode, int rc, int gen2,
+				       int ref_clk_sel, int ref_clk_input)
+{
+	debug("%s(%d, %d, %d, %d, %d, %d, %d)\n", __func__, qlm, speed, mode, rc, gen2, ref_clk_sel,
+	      ref_clk_input);
+	switch (qlm) {
+	case 0: {
+		int is_sff7000_rxaui = 0;
+		cvmx_gmxx_inf_mode_t inf_mode0, inf_mode1;
+
+		inf_mode0.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+		inf_mode1.u64 = csr_rd(CVMX_GMXX_INF_MODE(1));
+		if (inf_mode0.s.en || inf_mode1.s.en) {
+			debug("DLM0 already configured\n");
+			return -1;
+		}
+
+		switch (mode) {
+		case CVMX_QLM_MODE_SGMII_SGMII:
+			debug("  Mode SGMII SGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			break;
+		case CVMX_QLM_MODE_SGMII_QSGMII:
+			debug("  Mode SGMII QSGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			break;
+		case CVMX_QLM_MODE_SGMII_DISABLED:
+			debug("  Mode SGMII Disabled\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			break;
+		case CVMX_QLM_MODE_DISABLED_SGMII:
+			debug("Mode Disabled SGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			break;
+		case CVMX_QLM_MODE_QSGMII_SGMII:
+			debug("  Mode QSGMII SGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_SGMII;
+			break;
+		case CVMX_QLM_MODE_QSGMII_QSGMII:
+			debug("  Mode QSGMII QSGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			break;
+		case CVMX_QLM_MODE_QSGMII_DISABLED:
+			debug("  Mode QSGMII Disabled\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			break;
+		case CVMX_QLM_MODE_DISABLED_QSGMII:
+			debug("Mode Disabled QSGMII\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_QSGMII;
+			break;
+		case CVMX_QLM_MODE_RXAUI:
+			debug("  Mode RXAUI\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_RXAUI;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+
+			break;
+		default:
+			debug("  Mode Disabled Disabled\n");
+			inf_mode0.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			inf_mode1.s.mode = CVMX_GMX_INF_MODE_DISABLED;
+			break;
+		}
+		csr_wr(CVMX_GMXX_INF_MODE(0), inf_mode0.u64);
+		csr_wr(CVMX_GMXX_INF_MODE(1), inf_mode1.u64);
+
+		/* Bringup the PLL */
+		if (__dlm_setup_pll_cn70xx(qlm, speed, ref_clk_sel, ref_clk_input,
+					   is_sff7000_rxaui))
+			return -1;
+
+		/* TX Lanes */
+		if (__dlm0_setup_tx_cn70xx(speed, ref_clk_sel))
+			return -1;
+
+		/* RX Lanes */
+		if (__dlm0_setup_rx_cn70xx(speed, ref_clk_sel))
+			return -1;
+
+		/* Enable the interface */
+		inf_mode0.u64 = csr_rd(CVMX_GMXX_INF_MODE(0));
+		if (inf_mode0.s.mode != CVMX_GMX_INF_MODE_DISABLED)
+			inf_mode0.s.en = 1;
+		csr_wr(CVMX_GMXX_INF_MODE(0), inf_mode0.u64);
+		inf_mode1.u64 = csr_rd(CVMX_GMXX_INF_MODE(1));
+		if (inf_mode1.s.mode != CVMX_GMX_INF_MODE_DISABLED)
+			inf_mode1.s.en = 1;
+		csr_wr(CVMX_GMXX_INF_MODE(1), inf_mode1.u64);
+		break;
+	}
+	case 1:
+		switch (mode) {
+		case CVMX_QLM_MODE_PCIE: /* PEM0 on DLM1 & DLM2 */
+			debug("  Mode PCIe\n");
+			if (__dlmx_setup_pcie_cn70xx(1, mode, gen2, rc, ref_clk_sel, ref_clk_input))
+				return -1;
+			if (__dlmx_setup_pcie_cn70xx(2, mode, gen2, rc, ref_clk_sel, ref_clk_input))
+				return -1;
+			break;
+		case CVMX_QLM_MODE_PCIE_1X2: /* PEM0 on DLM1 */
+		case CVMX_QLM_MODE_PCIE_2X1: /* PEM0 & PEM1 on DLM1 */
+		case CVMX_QLM_MODE_PCIE_1X1: /* PEM0 on DLM1, only 1 lane */
+			debug("  Mode PCIe 1x2, 2x1 or 1x1\n");
+			if (__dlmx_setup_pcie_cn70xx(qlm, mode, gen2, rc, ref_clk_sel,
+						     ref_clk_input))
+				return -1;
+			break;
+		case CVMX_QLM_MODE_DISABLED:
+			debug("  Mode disabled\n");
+			break;
+		default:
+			debug("DLM1 illegal mode specified\n");
+			return -1;
+		}
+		break;
+	case 2:
+		switch (mode) {
+		case CVMX_QLM_MODE_SATA_2X1:
+			debug("%s: qlm 2, mode is SATA 2x1\n", __func__);
+			/* DLM2 is SATA, PCIE2 is disabled */
+			if (__setup_sata(qlm, speed, ref_clk_sel, ref_clk_input))
+				return -1;
+			break;
+		case CVMX_QLM_MODE_PCIE:
+			debug("  Mode PCIe\n");
+			/* DLM2 is PCIE0, PCIE1-2 are disabled. */
+			/* Do nothing, it's initialized in DLM1 */
+			break;
+		case CVMX_QLM_MODE_PCIE_1X2: /* PEM1 on DLM2 */
+		case CVMX_QLM_MODE_PCIE_2X1: /* PEM1 & PEM2 on DLM2 */
+			debug("  Mode PCIe 1x2 or 2x1\n");
+			if (__dlmx_setup_pcie_cn70xx(qlm, mode, gen2, rc, ref_clk_sel,
+						     ref_clk_input))
+				return -1;
+			break;
+		case CVMX_QLM_MODE_DISABLED:
+			debug("  Mode Disabled\n");
+			break;
+		default:
+			debug("DLM2 illegal mode specified\n");
+			return -1;
+		}
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
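+
+/*
+ * Usage sketch (hypothetical board values, not taken from a real board
+ * file): configure DLM2 for SATA at 6 Gbaud from the external reference
+ * clock, and DLM1 as a single gen2 PCIe root complex:
+ *
+ *	octeon_configure_qlm_cn70xx(2, 6000, CVMX_QLM_MODE_SATA_2X1,
+ *				    0, 0, 0, 0);
+ *	octeon_configure_qlm_cn70xx(1, 5000, CVMX_QLM_MODE_PCIE,
+ *				    1, 1, 0, 0);
+ */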
+
+/**
+ * Disables DFE for the specified QLM lane(s).
+ * This function should only be called for low-loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param lane     Lane to configure, or -1 for all lanes
+ * @param baud_mhz The speed the QLM needs to be configured at, in MHz.
+ * @param mode     The QLM to be configured as SGMII/XAUI/PCIe.
+ */
+void octeon_qlm_dfe_disable(int node, int qlm, int lane, int baud_mhz, int mode)
+{
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+	int l;
+	cvmx_gserx_lanex_rx_loop_ctrl_t loop_ctrl;
+	cvmx_gserx_lanex_rx_valbbd_ctrl_0_t ctrl_0;
+	cvmx_gserx_lanex_rx_valbbd_ctrl_1_t ctrl_1;
+	cvmx_gserx_lanex_rx_valbbd_ctrl_2_t ctrl_2;
+	cvmx_gserx_lane_vma_fine_ctrl_2_t lane_vma_fine_ctrl_2;
+
+	/* Interfaces below 5Gbaud are already manually tuned. */
+	if (baud_mhz < 5000)
+		return;
+
+	/* Don't run on PCIe links, SATA or KR.  These interfaces use training */
+	switch (mode) {
+	case CVMX_QLM_MODE_10G_KR_1X2:
+	case CVMX_QLM_MODE_10G_KR:
+	case CVMX_QLM_MODE_40G_KR4:
+		return;
+	case CVMX_QLM_MODE_PCIE_1X1:
+	case CVMX_QLM_MODE_PCIE_2X1:
+	case CVMX_QLM_MODE_PCIE_1X2:
+	case CVMX_QLM_MODE_PCIE:
+	case CVMX_QLM_MODE_PCIE_1X8:
+		return;
+	case CVMX_QLM_MODE_SATA_2X1:
+		return;
+	default:
+		break;
+	}
+
+	/* Updating pre_ctle minimum to 0. This works best for short channels */
+	lane_vma_fine_ctrl_2.u64 = csr_rd_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm));
+	lane_vma_fine_ctrl_2.s.rx_prectle_gain_min_fine = 0;
+	csr_wr_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm), lane_vma_fine_ctrl_2.u64);
+
+	for (l = 0; l < num_lanes; l++) {
+		if (lane != -1 && lane != l)
+			continue;
+
+		/* 1. Write GSERX_LANEx_RX_LOOP_CTRL = 0x0270
+		 * (var "loop_ctrl" with bits 8 & 1 cleared).
+		 * bit<1> dfe_en_byp = 1'b0
+		 */
+		loop_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_LOOP_CTRL(l, qlm));
+		loop_ctrl.s.cfg_rx_lctrl = loop_ctrl.s.cfg_rx_lctrl & 0x3fd;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_LOOP_CTRL(l, qlm), loop_ctrl.u64);
+
+		/* 2. Write GSERX_LANEx_RX_VALBBD_CTRL_1 = 0x0000
+		 * (var "ctrl1" with all bits cleared)
+		 * bits<14:11> CFG_RX_DFE_C3_MVAL = 4'b0000
+		 * bit<10> CFG_RX_DFE_C3_MSGN = 1'b0
+		 * bits<9:6> CFG_RX_DFE_C2_MVAL = 4'b0000
+		 * bit<5> CFG_RX_DFE_C2_MSGN = 1'b0
+		 * bits<4:0> CFG_RX_DFE_C1_MVAL = 5'b00000
+		 */
+		ctrl_1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_1(l, qlm));
+		ctrl_1.s.dfe_c3_mval = 0;
+		ctrl_1.s.dfe_c3_msgn = 0;
+		ctrl_1.s.dfe_c2_mval = 0;
+		ctrl_1.s.dfe_c2_msgn = 0;
+		ctrl_1.s.dfe_c1_mval = 0;
+		ctrl_1.s.dfe_c1_msgn = 0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_1(l, qlm), ctrl_1.u64);
+
+		/* 3. Write GSERX_LANEx_RX_VALBBD_CTRL_0 = 0x2400
+		 * (var "ctrl0" with following bits set/cleared)
+		 * bits<11:10> CFG_RX_DFE_GAIN = 0x1
+		 * bits<9:6> CFG_RX_DFE_C5_MVAL = 4'b0000
+		 * bit<5> CFG_RX_DFE_C5_MSGN = 1'b0
+		 * bits<4:1> CFG_RX_DFE_C4_MVAL = 4'b0000
+		 * bit<0> CFG_RX_DFE_C4_MSGN = 1'b0
+		 */
+		ctrl_0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(l, qlm));
+		ctrl_0.s.dfe_gain = 0x1;
+		ctrl_0.s.dfe_c5_mval = 0;
+		ctrl_0.s.dfe_c5_msgn = 0;
+		ctrl_0.s.dfe_c4_mval = 0;
+		ctrl_0.s.dfe_c4_msgn = 0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(l, qlm), ctrl_0.u64);
+
+		/* 4. Write GSER(0..13)_LANE(0..3)_RX_VALBBD_CTRL_2 = 0x003F
+		 * //enable DFE tap overrides
+		 * bit<5> dfe_ovrd_en = 1
+		 * bit<4> dfe_c5_ovrd_val = 1
+		 * bit<3> dfe_c4_ovrd_val = 1
+		 * bit<2> dfe_c3_ovrd_val = 1
+		 * bit<1> dfe_c2_ovrd_val = 1
+		 * bit<0> dfe_c1_ovrd_val = 1
+		 */
+		ctrl_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_2(l, qlm));
+		ctrl_2.s.dfe_ovrd_en = 0x1;
+		ctrl_2.s.dfe_c5_ovrd_val = 0x1;
+		ctrl_2.s.dfe_c4_ovrd_val = 0x1;
+		ctrl_2.s.dfe_c3_ovrd_val = 0x1;
+		ctrl_2.s.dfe_c2_ovrd_val = 0x1;
+		ctrl_2.s.dfe_c1_ovrd_val = 0x1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_2(l, qlm), ctrl_2.u64);
+	}
+}
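+
+/*
+ * Usage sketch (hypothetical values): disable DFE on all lanes of QLM 2
+ * on node 0 for a 10.3125 Gbaud link over a short, low-loss channel,
+ * assuming an XFI lane mode from the accompanying cvmx-helper headers:
+ *
+ *	octeon_qlm_dfe_disable(0, 2, -1, 10312, CVMX_QLM_MODE_XFI);
+ */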
+
+/**
+ * Disables DFE, uses fixed CTLE Peak value and AGC settings
+ * for the specified QLM lane(s).
+ * This function should only be called for low-loss channels.
+ * This function prevents Rx equalization from happening on all lanes in a
+ * QLM, so it should be called for all lanes being used in the QLM.
+ *
+ * @param  node           Node to configure
+ * @param  qlm            QLM to configure
+ * @param  lane           Lane to configure, or -1 for all lanes
+ * @param  baud_mhz       The speed the QLM needs to be configured at, in MHz.
+ * @param  mode           The QLM to be configured as SGMII/XAUI/PCIe.
+ * @param  ctle_zero      Equalizer Peaking control
+ * @param  agc_pre_ctle   Pre-CTLE gain
+ * @param  agc_post_ctle  Post-CTLE gain
+ * @return Zero on success, negative on failure
+ */
+
+int octeon_qlm_dfe_disable_ctle_agc(int node, int qlm, int lane, int baud_mhz, int mode,
+				    int ctle_zero, int agc_pre_ctle, int agc_post_ctle)
+{
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+	int l;
+	cvmx_gserx_lanex_rx_loop_ctrl_t loop_ctrl;
+	cvmx_gserx_lanex_rx_valbbd_ctrl_0_t ctrl_0;
+	cvmx_gserx_lanex_pwr_ctrl_t lanex_pwr_ctrl;
+	cvmx_gserx_lane_mode_t lmode;
+	cvmx_gserx_lane_px_mode_1_t px_mode_1;
+	cvmx_gserx_lanex_rx_cfg_5_t rx_cfg_5;
+	cvmx_gserx_lanex_rx_cfg_2_t rx_cfg_2;
+	cvmx_gserx_lanex_rx_ctle_ctrl_t ctle_ctrl;
+
+	/* Check tuning constraints */
+	if (ctle_zero < 0 || ctle_zero > 15) {
+		printf("Error: N%d.QLM%d: Invalid CTLE_ZERO(%d).  Must be between 0 and 15.\n",
+		       node, qlm, ctle_zero);
+		return -1;
+	}
+	if (agc_pre_ctle < 0 || agc_pre_ctle > 15) {
+		printf("Error: N%d.QLM%d: Invalid AGC_Pre_CTLE(%d)\n",
+		       node, qlm, agc_pre_ctle);
+		return -1;
+	}
+
+	if (agc_post_ctle < 0 || agc_post_ctle > 15) {
+		printf("Error: N%d.QLM%d: Invalid AGC_Post_CTLE(%d)\n",
+		       node, qlm, agc_post_ctle);
+		return -1;
+	}
+
+	/* Interfaces below 5Gbaud are already manually tuned. */
+	if (baud_mhz < 5000)
+		return 0;
+
+	/* Don't run on PCIe links, SATA or KR.  These interfaces use training */
+	switch (mode) {
+	case CVMX_QLM_MODE_10G_KR_1X2:
+	case CVMX_QLM_MODE_10G_KR:
+	case CVMX_QLM_MODE_40G_KR4:
+		return 0;
+	case CVMX_QLM_MODE_PCIE_1X1:
+	case CVMX_QLM_MODE_PCIE_2X1:
+	case CVMX_QLM_MODE_PCIE_1X2:
+	case CVMX_QLM_MODE_PCIE:
+	case CVMX_QLM_MODE_PCIE_1X8:
+		return 0;
+	case CVMX_QLM_MODE_SATA_2X1:
+		return 0;
+	default:
+		break;
+	}
+
+	lmode.u64 = csr_rd_node(node, CVMX_GSERX_LANE_MODE(qlm));
+
+	/* 1. Enable VMA manual mode for the QLM's lane mode */
+	px_mode_1.u64 = csr_rd_node(node, CVMX_GSERX_LANE_PX_MODE_1(lmode.s.lmode, qlm));
+	px_mode_1.s.vma_mm = 1;
+	csr_wr_node(node, CVMX_GSERX_LANE_PX_MODE_1(lmode.s.lmode, qlm), px_mode_1.u64);
+
+	/* 2. Disable DFE */
+	octeon_qlm_dfe_disable(node, qlm, lane, baud_mhz, mode);
+
+	for (l = 0; l < num_lanes; l++) {
+		if (lane != -1 && lane != l)
+			continue;
+
+		/* 3. Write GSERX_LANEx_RX_VALBBD_CTRL_0.CFG_RX_AGC_GAIN = 0x2 */
+		ctrl_0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(l, qlm));
+		ctrl_0.s.agc_gain = 0x2;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(l, qlm), ctrl_0.u64);
+
+		/* 4. Write GSERX_LANEx_RX_LOOP_CTRL
+		 * bit<8> lctrl_men = 1'b1
+		 * bit<0> cdr_en_byp = 1'b1
+		 */
+		loop_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_LOOP_CTRL(l, qlm));
+		loop_ctrl.s.cfg_rx_lctrl = loop_ctrl.s.cfg_rx_lctrl | 0x101;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_LOOP_CTRL(l, qlm), loop_ctrl.u64);
+
+		/* 5. Write GSERX_LANEx_PWR_CTRL = 0x0040 (var "lanex_pwr_ctrl" with
+		 * following bits set)
+		 * bit<6> RX_LCTRL_OVRRD_EN = 1'b1
+		 * all other bits cleared.
+		 */
+		lanex_pwr_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PWR_CTRL(l, qlm));
+		lanex_pwr_ctrl.s.rx_lctrl_ovrrd_en = 1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PWR_CTRL(l, qlm), lanex_pwr_ctrl.u64);
+
+		/* --Setting AGC in manual mode and configuring CTLE-- */
+		rx_cfg_5.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CFG_5(l, qlm));
+		rx_cfg_5.s.rx_agc_men_ovrrd_val = 1;
+		rx_cfg_5.s.rx_agc_men_ovrrd_en = 1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_CFG_5(l, qlm), rx_cfg_5.u64);
+
+		ctle_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CTLE_CTRL(l, qlm));
+		ctle_ctrl.s.pcs_sds_rx_ctle_zero = ctle_zero;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_CTLE_CTRL(l, qlm), ctle_ctrl.u64);
+
+		rx_cfg_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CFG_2(l, qlm));
+		rx_cfg_2.s.rx_sds_rx_agc_mval = (agc_pre_ctle << 4) | agc_post_ctle;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_CFG_2(l, qlm), rx_cfg_2.u64);
+	}
+	return 0;
+}
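+
+/*
+ * Usage sketch (hypothetical values, same assumptions as above): also
+ * force the CTLE peak and the AGC gains on that channel.  The two AGC
+ * nibbles are packed as (agc_pre_ctle << 4) | agc_post_ctle, so pre = 0x4
+ * and post = 0xc yield an rx_sds_rx_agc_mval of 0x4c:
+ *
+ *	octeon_qlm_dfe_disable_ctle_agc(0, 2, -1, 10312, CVMX_QLM_MODE_XFI,
+ *					0xa, 0x4, 0xc);
+ */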
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param lane     Lane to apply the tuning parameters to
+ * @param tx_swing Voltage swing.  The higher the value the lower the voltage;
+ *		   the default value is 7.
+ * @param tx_pre   pre-cursor pre-emphasis
+ * @param tx_post  post-cursor pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ */
+void octeon_qlm_tune_per_lane_v3(int node, int qlm, int baud_mhz, int lane, int tx_swing,
+				 int tx_pre, int tx_post, int tx_gain, int tx_vboost)
+{
+	cvmx_gserx_cfg_t gserx_cfg;
+	cvmx_gserx_lanex_tx_cfg_0_t tx_cfg0;
+	cvmx_gserx_lanex_tx_pre_emphasis_t pre_emphasis;
+	cvmx_gserx_lanex_tx_cfg_1_t tx_cfg1;
+	cvmx_gserx_lanex_tx_cfg_3_t tx_cfg3;
+	cvmx_bgxx_spux_br_pmd_control_t pmd_control;
+	cvmx_gserx_lanex_pcs_ctlifc_0_t pcs_ctlifc_0;
+	cvmx_gserx_lanex_pcs_ctlifc_2_t pcs_ctlifc_2;
+	int bgx, lmac;
+
+	/* Do not apply QLM tuning to PCIe and KR interfaces. */
+	gserx_cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+	if (gserx_cfg.s.pcie)
+		return;
+
+	/* Apply the QLM tuning only to cn73xx, cn78xx and cnf75xx models */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		bgx = (qlm < 2) ? qlm : (qlm - 2);
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		bgx = (qlm < 4) ? (qlm - 2) : 2;
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		bgx = 0;
+	else
+		return;
+
+	if ((OCTEON_IS_MODEL(OCTEON_CN73XX) && qlm == 6) ||
+	    (OCTEON_IS_MODEL(OCTEON_CNF75XX) && qlm == 5))
+		lmac = 2;
+	else
+		lmac = lane;
+
+	/* No need to tune 10G-KR and 40G-KR interfaces */
+	pmd_control.u64 = csr_rd_node(node, CVMX_BGXX_SPUX_BR_PMD_CONTROL(lmac, bgx));
+	if (pmd_control.s.train_en)
+		return;
+
+	if (tx_pre != -1 && tx_post == -1)
+		tx_post = 0;
+
+	if (tx_post != -1 && tx_pre == -1)
+		tx_pre = 0;
+
+	/* Check tuning constraints */
+	if (tx_swing < -1 || tx_swing > 25) {
+		printf("ERROR: N%d:QLM%d: Lane %d: Invalid TX_SWING(%d). TX_SWING must be <= 25.\n",
+		       node, qlm, lane, tx_swing);
+		return;
+	}
+
+	if (tx_pre < -1 || tx_pre > 10) {
+		printf("ERROR: N%d:QLM%d: Lane %d: Invalid TX_PRE(%d). TX_PRE must be <= 10.\n",
+		       node, qlm, lane, tx_pre);
+		return;
+	}
+
+	if (tx_post < -1 || tx_post > 31) {
+		printf("ERROR: N%d:QLM%d: Lane %d: Invalid TX_POST(%d). TX_POST must be <= 31.\n",
+		       node, qlm, lane, tx_post);
+		return;
+	}
+
+	if (tx_pre >= 0 && tx_post >= 0 && tx_swing >= 0 &&
+	    tx_pre + tx_post - tx_swing > 2) {
+		printf("ERROR: N%d.QLM%d: Lane %d: TX_PRE(%d) + TX_POST(%d) - TX_SWING(%d) must be <= 2\n",
+		       node, qlm, lane, tx_pre, tx_post, tx_swing);
+		return;
+	}
+
+	if (tx_pre >= 0 && tx_post >= 0 && tx_swing >= 0 &&
+	    tx_pre + tx_post + tx_swing > 35) {
+		printf("ERROR: N%d.QLM%d: Lane %d: TX_PRE(%d) + TX_POST(%d) + TX_SWING(%d) must be <= 35\n",
+		       node, qlm, lane, tx_pre, tx_post, tx_swing);
+		return;
+	}
+
+	if (tx_gain < -1 || tx_gain > 7) {
+		printf("ERROR: N%d.QLM%d: Lane %d: Invalid TX_GAIN(%d). TX_GAIN must be between 0 and 7\n",
+		       node, qlm, lane, tx_gain);
+		return;
+	}
+
+	if (tx_vboost < -1 || tx_vboost > 1) {
+		printf("ERROR: N%d.QLM%d: Lane %d: Invalid TX_VBOOST(%d).  TX_VBOOST must be 0 or 1.\n",
+		       node, qlm, lane, tx_vboost);
+		return;
+	}
+
+	debug("N%d.QLM%d: Lane %d: TX_SWING=%d, TX_PRE=%d, TX_POST=%d, TX_GAIN=%d, TX_VBOOST=%d\n",
+	      node, qlm, lane, tx_swing, tx_pre, tx_post, tx_gain, tx_vboost);
+
+	/* Complete the Tx swing and Tx equalization programming */
+	/* 1) Enable Tx swing and Tx emphasis overrides */
+	tx_cfg1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_1(lane, qlm));
+	tx_cfg1.s.tx_swing_ovrrd_en = (tx_swing != -1);
+	tx_cfg1.s.tx_premptap_ovrrd_val = (tx_pre != -1) && (tx_post != -1);
+	tx_cfg1.s.tx_vboost_en_ovrrd_en = (tx_vboost != -1); /* Vboost override */
+	csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_1(lane, qlm), tx_cfg1.u64);
+	/* 2) Program the Tx swing and Tx emphasis Pre-cursor and Post-cursor values */
+	/* CFG_TX_PREMPTAP[8:4] = Lane X's TX post-cursor value (C+1) */
+	/* CFG_TX_PREMPTAP[3:0] = Lane X's TX pre-cursor value (C-1) */
+	if (tx_swing != -1) {
+		tx_cfg0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_0(lane, qlm));
+		tx_cfg0.s.cfg_tx_swing = tx_swing;
+		csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_0(lane, qlm), tx_cfg0.u64);
+	}
+
+	if ((tx_pre != -1) && (tx_post != -1)) {
+		pre_emphasis.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_PRE_EMPHASIS(lane, qlm));
+		pre_emphasis.s.cfg_tx_premptap = (tx_post << 4) | tx_pre;
+		csr_wr_node(node, CVMX_GSERX_LANEX_TX_PRE_EMPHASIS(lane, qlm), pre_emphasis.u64);
+	}
+
+	/* Apply TX gain settings */
+	if (tx_gain != -1) {
+		tx_cfg3.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_3(lane, qlm));
+		tx_cfg3.s.pcs_sds_tx_gain = tx_gain;
+		csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_3(lane, qlm), tx_cfg3.u64);
+	}
+
+	/* Apply TX vboost settings */
+	if (tx_vboost != -1) {
+		tx_cfg3.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_3(lane, qlm));
+		tx_cfg3.s.cfg_tx_vboost_en = tx_vboost;
+		csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_3(lane, qlm), tx_cfg3.u64);
+	}
+
+	/* 3) Program override for the Tx coefficient request */
+	pcs_ctlifc_0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm));
+	if (((tx_pre != -1) && (tx_post != -1)) || (tx_swing != -1))
+		pcs_ctlifc_0.s.cfg_tx_coeff_req_ovrrd_val = 0x1;
+	if (tx_vboost != -1)
+		pcs_ctlifc_0.s.cfg_tx_vboost_en_ovrrd_val = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm), pcs_ctlifc_0.u64);
+
+	/* 4) Enable the Tx coefficient request override enable */
+	pcs_ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	if (((tx_pre != -1) && (tx_post != -1)) || (tx_swing != -1))
+		pcs_ctlifc_2.s.cfg_tx_coeff_req_ovrrd_en = 0x1;
+	if (tx_vboost != -1)
+		pcs_ctlifc_2.s.cfg_tx_vboost_en_ovrrd_en = 1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), pcs_ctlifc_2.u64);
+
+	/* 5) Issue a Control Interface Configuration Override request to start the Tx equalizer */
+	pcs_ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	pcs_ctlifc_2.s.ctlifc_ovrrd_req = 0x1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), pcs_ctlifc_2.u64);
+
+	/* 6) Wait 1 ms for the request to complete */
+	udelay(1000);
+
+	/* Steps 7 & 8 required for subsequent Tx swing and Tx equalization adjustment */
+	/* 7) Disable the Tx coefficient request override enable */
+	pcs_ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	pcs_ctlifc_2.s.cfg_tx_coeff_req_ovrrd_en = 0;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), pcs_ctlifc_2.u64);
+	/* 8) Issue a Control Interface Configuration Override request */
+	pcs_ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+	pcs_ctlifc_2.s.ctlifc_ovrrd_req = 0x1;
+	csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), pcs_ctlifc_2.u64);
+}
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param tx_swing Voltage swing.  The higher the value the lower the voltage;
+ *		   the default value is 7.
+ * @param tx_premptap bits [3:0] pre-cursor pre-emphasis, bits [8:4] post-cursor
+ *		      pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ *
+ */
+void octeon_qlm_tune_v3(int node, int qlm, int baud_mhz, int tx_swing, int tx_premptap, int tx_gain,
+			int tx_vboost)
+{
+	int lane;
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+
+	for (lane = 0; lane < num_lanes; lane++) {
+		int tx_pre = (tx_premptap == -1) ? -1 : tx_premptap & 0xf;
+		int tx_post = (tx_premptap == -1) ? -1 : (tx_premptap >> 4) & 0x1f;
+
+		octeon_qlm_tune_per_lane_v3(node, qlm, baud_mhz, lane, tx_swing, tx_pre, tx_post,
+					    tx_gain, tx_vboost);
+	}
+}
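+
+/*
+ * Usage sketch (hypothetical values): apply a swing of 0x8 with a
+ * pre-cursor of 0x2 and a post-cursor of 0x8 to all lanes of QLM 3 on
+ * node 0.  tx_premptap packs the post-cursor into bits [8:4] and the
+ * pre-cursor into bits [3:0], i.e. (0x8 << 4) | 0x2 = 0x82, which also
+ * satisfies tx_pre + tx_post - tx_swing <= 2:
+ *
+ *	octeon_qlm_tune_v3(0, 3, 10312, 0x8, 0x82, -1, -1);
+ */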
+
+/**
+ * Some QLMs need to override the default pre-ctle for low loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param pre_ctle pre-ctle settings for low loss channels
+ */
+void octeon_qlm_set_channel_v3(int node, int qlm, int pre_ctle)
+{
+	cvmx_gserx_lane_vma_fine_ctrl_2_t lane_vma_fine_ctrl_2;
+
+	lane_vma_fine_ctrl_2.u64 = csr_rd_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm));
+	lane_vma_fine_ctrl_2.s.rx_prectle_gain_min_fine = pre_ctle;
+	csr_wr_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm), lane_vma_fine_ctrl_2.u64);
+}
+
+static void __qlm_init_errata_20844(int node, int qlm)
+{
+	int lane;
+
+	/* Only applies to CN78XX pass 1.x */
+	if (!OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+		return;
+
+	/* Errata GSER-20844: Electrical Idle logic can coast
+	 * 1) After the link first comes up write the following
+	 * register on each lane to prevent the application logic
+	 * from stomping on the Coast inputs. This is a one time write,
+	 * or if you prefer you could put it in the link up loop and
+	 * write it every time the link comes up.
+	 * 1a) Then write GSER(0..13)_LANE(0..3)_PCS_CTLIFC_2
+	 * Set CTLIFC_OVRRD_REQ (later)
+	 * Set CFG_RX_CDR_COAST_REQ_OVRRD_EN
+	 * It's not clear if #1 and #1a can be combined, let's try it
+	 * this way first.
+	 */
+	for (lane = 0; lane < 4; lane++) {
+		cvmx_gserx_lanex_rx_misc_ovrrd_t misc_ovrrd;
+		cvmx_gserx_lanex_pcs_ctlifc_2_t ctlifc_2;
+
+		ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+		ctlifc_2.s.cfg_rx_cdr_coast_req_ovrrd_en = 1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), ctlifc_2.u64);
+
+		misc_ovrrd.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm));
+		misc_ovrrd.s.cfg_rx_eie_det_ovrrd_en = 1;
+		misc_ovrrd.s.cfg_rx_eie_det_ovrrd_val = 0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm), misc_ovrrd.u64);
+
+		udelay(1);
+
+		misc_ovrrd.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm));
+		misc_ovrrd.s.cfg_rx_eie_det_ovrrd_en = 1;
+		misc_ovrrd.s.cfg_rx_eie_det_ovrrd_val = 1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm), misc_ovrrd.u64);
+		ctlifc_2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+		ctlifc_2.s.ctlifc_ovrrd_req = 1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), ctlifc_2.u64);
+	}
+}
+
+/** CN78xx reference clock register settings */
+struct refclk_settings_cn78xx {
+	bool valid; /** Reference clock speed supported */
+	union cvmx_gserx_pll_px_mode_0 mode_0;
+	union cvmx_gserx_pll_px_mode_1 mode_1;
+	union cvmx_gserx_lane_px_mode_0 pmode_0;
+	union cvmx_gserx_lane_px_mode_1 pmode_1;
+};
+
+/** Default reference clock for various modes */
+static const u8 def_ref_clk_cn78xx[R_NUM_LANE_MODES] = { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 };
+
+/**
+ * This data structure stores the reference clock for each mode for each QLM.
+ *
+ * It is indexed first by the node number, then the QLM number and then the
+ * lane mode.  It is initialized to the default values.
+ */
+static u8 ref_clk_cn78xx[CVMX_MAX_NODES][8][R_NUM_LANE_MODES] = {
+	{ { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 } },
+	{ { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 } },
+	{ { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 } },
+	{ { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 },
+	  { 0, 0, 0, 2, 2, 2, 2, 2, 2, 1, 1, 1 } }
+};
+
+/**
+ * This data structure contains the register values for the cn78xx PLLs.
+ * It is indexed first by the lane mode and second by the reference clock.
+ * Note that not all combinations are supported.
+ */
+static const struct refclk_settings_cn78xx refclk_settings_cn78xx[R_NUM_LANE_MODES][4] = {
+	{   /* 0	R_2_5G_REFCLK100 */
+	{ /* 100MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x4, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 125MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x1,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 156.25MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x10 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } },
+	{
+		/* 1	R_5G_REFCLK100 */
+		{ /* 100MHz reference clock */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x4, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+		  .mode_1.s = { .pll_16p5en = 0x0,
+				.pll_cpadj = 0x2,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x19 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x1,
+				 .tx_ldiv = 0x0,
+				 .rx_ldiv = 0x0,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x0,
+				 .cdr_fgain = 0xa,
+				 .ph_acc_adj = 0x14 } },
+		{ /* 125MHz reference clock */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+		  .mode_1.s = { .pll_16p5en = 0x0,
+				.pll_cpadj = 0x1,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x14 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x1,
+				 .tx_ldiv = 0x0,
+				 .rx_ldiv = 0x0,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x0,
+				 .cdr_fgain = 0xa,
+				 .ph_acc_adj = 0x14 } },
+		{ /* 156.25MHz reference clock */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+		  .mode_1.s = { .pll_16p5en = 0x0,
+				.pll_cpadj = 0x2,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x10 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x1,
+				 .tx_ldiv = 0x0,
+				 .rx_ldiv = 0x0,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x0,
+				 .cdr_fgain = 0xa,
+				 .ph_acc_adj = 0x14 } },
+		{
+			/* 161.1328125MHz reference clock */
+			.valid = false,
+		},
+	},
+	{   /* 2	R_8G_REFCLK100 */
+	{ /* 100MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x1,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x28 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xb,
+			   .ph_acc_adj = 0x23 } },
+	{ /* 125MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x2, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x1,
+			  .pll_pcie3en = 0x1,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x20 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xb,
+			   .ph_acc_adj = 0x23 } },
+	{ /* 156.25MHz reference clock not supported */
+	    .valid = false } },
+	{
+		/* 3	R_125G_REFCLK15625_KX */
+		{ /* 100MHz reference */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+		  .mode_1.s = { .pll_16p5en = 0x1,
+				.pll_cpadj = 0x2,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x19 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x0,
+				 .tx_ldiv = 0x2,
+				 .rx_ldiv = 0x2,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x1,
+				 .cdr_fgain = 0xc,
+				 .ph_acc_adj = 0x1e } },
+		{ /* 125MHz reference */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+		  .mode_1.s = { .pll_16p5en = 0x1,
+				.pll_cpadj = 0x2,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x14 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x0,
+				 .tx_ldiv = 0x2,
+				 .rx_ldiv = 0x2,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x1,
+				 .cdr_fgain = 0xc,
+				 .ph_acc_adj = 0x1e } },
+		{ /* 156.25MHz reference */
+		  .valid = true,
+		  .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+		  .mode_1.s = { .pll_16p5en = 0x1,
+				.pll_cpadj = 0x3,
+				.pll_pcie3en = 0x0,
+				.pll_opr = 0x0,
+				.pll_div = 0x10 },
+		  .pmode_0.s = { .ctle = 0x0,
+				 .pcie = 0x0,
+				 .tx_ldiv = 0x2,
+				 .rx_ldiv = 0x2,
+				 .srate = 0x0,
+				 .tx_mode = 0x3,
+				 .rx_mode = 0x3 },
+		  .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+				 .vma_mm = 0x1,
+				 .cdr_fgain = 0xc,
+				 .ph_acc_adj = 0x1e } },
+		{
+			/* 161.1328125MHz reference clock */
+			.valid = false,
+		},
+	},
+	{   /* 4	R_3125G_REFCLK15625_XAUI */
+	{ /* 100MHz reference */
+	    .valid = false },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x14 },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{ /* 156.25MHz reference, default */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x14 },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } },
+	{   /* 5	R_103125G_REFCLK15625_KR */
+	{ /* 100MHz reference */
+	    .valid = false },
+	{ /* 125MHz reference */
+	    .valid = false },
+	{ /* 156.25MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x21 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x1,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0xf } },
+	{ /* 161.1328125 reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x20 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x1,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0xf } } },
+	{   /* 6	R_125G_REFCLK15625_SGMII */
+	{ /* 100MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x2,
+			   .rx_ldiv = 0x2,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{ /* 125MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x2,
+			   .rx_ldiv = 0x2,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{ /* 156.25MHz reference clock */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0x28 },
+	    .mode_1.s = { .pll_16p5en = 0x1,
+			  .pll_cpadj = 0x3,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x10 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x2,
+			   .rx_ldiv = 0x2,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } } },
+	{   /* 7	R_5G_REFCLK15625_QSGMII */
+	{ /* 100MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x4, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0, .pll_cpadj = 0x2, .pll_pcie3en = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0, .pll_cpadj = 0x1, .pll_pcie3en = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{ /* 156.25MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0, .pll_cpadj = 0x2, .pll_pcie3en = 0x0,
+			  .pll_div = 0x10 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xc,
+			   .ph_acc_adj = 0x1e } },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } },
+	{   /* 8	R_625G_REFCLK15625_RXAUI */
+	{ /* 100MHz reference */
+	    .valid = false },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 156.25MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 161.1328125 reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x1, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } } },
+	{   /* 9	R_2_5G_REFCLK125 */
+	{ /* 100MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x4, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x1,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 156.25MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0x5 },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x10 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x1,
+			   .rx_ldiv = 0x1,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x1,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } },
+	{   /* 0xa	R_5G_REFCLK125 */
+	{ /* 100MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x4, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x19 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x1,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x14 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{ /* 156.25MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x3, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x0,
+			  .pll_opr = 0x0,
+			  .pll_div = 0x10 },
+	    .pmode_0.s = { .ctle = 0x0,
+			   .pcie = 0x1,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xa,
+			   .ph_acc_adj = 0x14 } },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } },
+	{   /* 0xb	R_8G_REFCLK125 */
+	{ /* 100MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x3, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x2,
+			  .pll_pcie3en = 0x1,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x28 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xb,
+			   .ph_acc_adj = 0x23 } },
+	{ /* 125MHz reference */
+	    .valid = true,
+	    .mode_0.s = { .pll_icp = 0x2, .pll_rloop = 0x5, .pll_pcs_div = 0xa },
+	    .mode_1.s = { .pll_16p5en = 0x0,
+			  .pll_cpadj = 0x1,
+			  .pll_pcie3en = 0x1,
+			  .pll_opr = 0x1,
+			  .pll_div = 0x20 },
+	    .pmode_0.s = { .ctle = 0x3,
+			   .pcie = 0x0,
+			   .tx_ldiv = 0x0,
+			   .rx_ldiv = 0x0,
+			   .srate = 0x0,
+			   .tx_mode = 0x3,
+			   .rx_mode = 0x3 },
+	    .pmode_1.s = { .vma_fine_cfg_sel = 0x0,
+			   .vma_mm = 0x0,
+			   .cdr_fgain = 0xb,
+			   .ph_acc_adj = 0x23 } },
+	{ /* 156.25MHz reference */
+	    .valid = false },
+	{
+		  /* 161.1328125MHz reference clock */
+		  .valid = false,
+	  } }
+};
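+
+/*
+ * A minimal sketch of how the table above is consumed (it mirrors the
+ * lookup in __qlm_setup_pll_cn78xx below): index by lane mode first,
+ * reference clock second, and check .valid before using the register
+ * images:
+ *
+ *	const struct refclk_settings_cn78xx *s =
+ *		&refclk_settings_cn78xx[lane_mode][ref_clk_sel];
+ *	if (!s->valid)
+ *		return -1;	(combination not supported)
+ *	... program GSERX_PLL_P(lane_mode)_MODE_* from s->mode_0/s->mode_1 ...
+ */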
+
+/**
+ * Set a non-standard reference clock for a node, qlm and lane mode.
+ *
+ * @INTERNAL
+ *
+ * @param node		node number the reference clock is used with
+ * @param qlm		qlm number the reference clock is hooked up to
+ * @param lane_mode	current lane mode selected for the QLM
+ * @param ref_clk_sel	0 = 100MHz, 1 = 125MHz, 2 = 156.25MHz,
+ *			3 = 161.1328125MHz
+ *
+ * @return 0 for success or -1 if the reference clock selector is not supported
+ *
+ * NOTE: This must be called before __qlm_setup_pll_cn78xx.
+ */
+static int __set_qlm_ref_clk_cn78xx(int node, int qlm, int lane_mode, int ref_clk_sel)
+{
+	if (ref_clk_sel > 3 || ref_clk_sel < 0 ||
+	    !refclk_settings_cn78xx[lane_mode][ref_clk_sel].valid) {
+		debug("%s: Invalid reference clock %d for lane mode %d for node %d, QLM %d\n",
+		      __func__, ref_clk_sel, lane_mode, node, qlm);
+		return -1;
+	}
+	debug("%s(%d, %d, 0x%x, %d)\n", __func__, node, qlm, lane_mode, ref_clk_sel);
+	ref_clk_cn78xx[node][qlm][lane_mode] = ref_clk_sel;
+	return 0;
+}
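+
+/*
+ * Example (hypothetical): select the 156.25 MHz reference (selector 2) for
+ * the R_103125G_REFCLK15625_KR lane mode on node 0, QLM 5, before the PLLs
+ * are programmed:
+ *
+ *	__set_qlm_ref_clk_cn78xx(0, 5, R_103125G_REFCLK15625_KR, 2);
+ */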
+
+/**
+ * KR - Inverted Tx Coefficient Direction Change: change the Pre & Post Tap
+ * inc/dec direction.
+ *
+ * @INTERNAL
+ *
+ * @param node	Node number to configure
+ * @param qlm	QLM number to configure
+ */
+static void __qlm_kr_inc_dec_gser26636(int node, int qlm)
+{
+	cvmx_gserx_rx_txdir_ctrl_1_t rx_txdir_ctrl;
+
+	/* Apply workaround for Errata GSER-26636,
+	 * KR training coefficient update inverted
+	 */
+	rx_txdir_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_RX_TXDIR_CTRL_1(qlm));
+	rx_txdir_ctrl.s.rx_precorr_chg_dir = 1;
+	rx_txdir_ctrl.s.rx_tap1_chg_dir = 1;
+	csr_wr_node(node, CVMX_GSERX_RX_TXDIR_CTRL_1(qlm), rx_txdir_ctrl.u64);
+}
+
+/**
+ * Update the RX EQ settings to support a wider temperature range.
+ * @INTERNAL
+ *
+ * @param node	Node number to configure
+ * @param qlm	QLM number to configure
+ */
+static void __qlm_rx_eq_temp_gser27140(int node, int qlm)
+{
+	int lane;
+	int num_lanes = cvmx_qlm_get_lanes(qlm);
+	cvmx_gserx_lanex_rx_valbbd_ctrl_0_t rx_valbbd_ctrl_0;
+	cvmx_gserx_lane_vma_fine_ctrl_2_t lane_vma_fine_ctrl_2;
+	cvmx_gserx_lane_vma_fine_ctrl_0_t lane_vma_fine_ctrl_0;
+	cvmx_gserx_rx_txdir_ctrl_1_t rx_txdir_ctrl_1;
+	cvmx_gserx_eq_wait_time_t eq_wait_time;
+	cvmx_gserx_rx_txdir_ctrl_2_t rx_txdir_ctrl_2;
+	cvmx_gserx_rx_txdir_ctrl_0_t rx_txdir_ctrl_0;
+
+	for (lane = 0; lane < num_lanes; lane++) {
+		rx_valbbd_ctrl_0.u64 =
+			csr_rd_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(lane, qlm));
+		rx_valbbd_ctrl_0.s.agc_gain = 3;
+		rx_valbbd_ctrl_0.s.dfe_gain = 2;
+		csr_wr_node(node, CVMX_GSERX_LANEX_RX_VALBBD_CTRL_0(lane, qlm),
+			    rx_valbbd_ctrl_0.u64);
+	}
+
+	/* do_pre_ctle_limits_work_around: */
+	lane_vma_fine_ctrl_2.u64 = csr_rd_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm));
+	//lane_vma_fine_ctrl_2.s.rx_prectle_peak_max_fine = 11;
+	lane_vma_fine_ctrl_2.s.rx_prectle_gain_max_fine = 11;
+	//lane_vma_fine_ctrl_2.s.rx_prectle_peak_min_fine = 6;
+	lane_vma_fine_ctrl_2.s.rx_prectle_gain_min_fine = 6;
+	csr_wr_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_2(qlm), lane_vma_fine_ctrl_2.u64);
+
+	/* do_inc_dec_thres_work_around: */
+	rx_txdir_ctrl_0.u64 = csr_rd_node(node, CVMX_GSERX_RX_TXDIR_CTRL_0(qlm));
+	rx_txdir_ctrl_0.s.rx_boost_hi_thrs = 11;
+	rx_txdir_ctrl_0.s.rx_boost_lo_thrs = 4;
+	rx_txdir_ctrl_0.s.rx_boost_hi_val = 15;
+	csr_wr_node(node, CVMX_GSERX_RX_TXDIR_CTRL_0(qlm), rx_txdir_ctrl_0.u64);
+
+	/* do_sdll_iq_work_around: */
+	lane_vma_fine_ctrl_0.u64 = csr_rd_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_0(qlm));
+	lane_vma_fine_ctrl_0.s.rx_sdll_iq_max_fine = 14;
+	lane_vma_fine_ctrl_0.s.rx_sdll_iq_min_fine = 8;
+	lane_vma_fine_ctrl_0.s.rx_sdll_iq_step_fine = 2;
+
+	/* do_vma_window_work_around_2: */
+	lane_vma_fine_ctrl_0.s.vma_window_wait_fine = 5;
+	lane_vma_fine_ctrl_0.s.lms_wait_time_fine = 5;
+
+	csr_wr_node(node, CVMX_GSERX_LANE_VMA_FINE_CTRL_0(qlm), lane_vma_fine_ctrl_0.u64);
+
+	/* Set dfe_tap_1_lo_thres_val: */
+	rx_txdir_ctrl_1.u64 = csr_rd_node(node, CVMX_GSERX_RX_TXDIR_CTRL_1(qlm));
+	rx_txdir_ctrl_1.s.rx_tap1_lo_thrs = 8;
+	rx_txdir_ctrl_1.s.rx_tap1_hi_thrs = 0x17;
+	csr_wr_node(node, CVMX_GSERX_RX_TXDIR_CTRL_1(qlm), rx_txdir_ctrl_1.u64);
+
+	/* do_rxeq_wait_cnt_work_around: */
+	eq_wait_time.u64 = csr_rd_node(node, CVMX_GSERX_EQ_WAIT_TIME(qlm));
+	eq_wait_time.s.rxeq_wait_cnt = 6;
+	csr_wr_node(node, CVMX_GSERX_EQ_WAIT_TIME(qlm), eq_wait_time.u64);
+
+	/* do_write_rx_txdir_precorr_thresholds: */
+	rx_txdir_ctrl_2.u64 = csr_rd_node(node, CVMX_GSERX_RX_TXDIR_CTRL_2(qlm));
+	rx_txdir_ctrl_2.s.rx_precorr_hi_thrs = 0xc0;
+	rx_txdir_ctrl_2.s.rx_precorr_lo_thrs = 0x40;
+	csr_wr_node(node, CVMX_GSERX_RX_TXDIR_CTRL_2(qlm), rx_txdir_ctrl_2.u64);
+}
+
+/* Errata GSER-26150: 10G PHY PLL Temperature Failure
+ * This workaround must be completed after the final deassertion of
+ * GSERx_PHY_CTL[PHY_RESET]
+ */
+static int __qlm_errata_gser_26150(int node, int qlm, int is_pcie)
+{
+	int num_lanes = 4;
+	int i;
+	cvmx_gserx_glbl_pll_cfg_3_t pll_cfg_3;
+	cvmx_gserx_glbl_misc_config_1_t misc_config_1;
+
+	/* PCIe only requires the LC-VCO parameters to be updated */
+	if (is_pcie) {
+		/* Update PLL parameters */
+		/* Step 1: Set GSER()_GLBL_PLL_CFG_3[PLL_VCTRL_SEL_LCVCO_VAL] = 0x2, and
+		 * GSER()_GLBL_PLL_CFG_3[PCS_SDS_PLL_VCO_AMP] = 0
+		 */
+		pll_cfg_3.u64 = csr_rd_node(node, CVMX_GSERX_GLBL_PLL_CFG_3(qlm));
+		pll_cfg_3.s.pcs_sds_pll_vco_amp = 0;
+		pll_cfg_3.s.pll_vctrl_sel_lcvco_val = 2;
+		csr_wr_node(node, CVMX_GSERX_GLBL_PLL_CFG_3(qlm), pll_cfg_3.u64);
+
+		/* Step 2: Set GSER()_GLBL_MISC_CONFIG_1[PCS_SDS_TRIM_CHP_REG] = 0x2. */
+		misc_config_1.u64 = csr_rd_node(node, CVMX_GSERX_GLBL_MISC_CONFIG_1(qlm));
+		misc_config_1.s.pcs_sds_trim_chp_reg = 2;
+		csr_wr_node(node, CVMX_GSERX_GLBL_MISC_CONFIG_1(qlm), misc_config_1.u64);
+		return 0;
+	}
+
+	/* Applying this workaround twice causes problems */
+	pll_cfg_3.u64 = csr_rd_node(node, CVMX_GSERX_GLBL_PLL_CFG_3(qlm));
+	if (pll_cfg_3.s.pll_vctrl_sel_lcvco_val == 0x2)
+		return 0;
+
+	/* (GSER-26150) 10 Gb temperature excursions can cause lock failure */
+	/* Change the calibration point of the VCO at start-up to shift some
+	 * available range of the VCO from -deltaT direction to the +deltaT
+	 * ramp direction allowing a greater range of VCO temperatures before
+	 * experiencing the failure.
+	 */
+
+	/* Check for DLMs on CN73XX and CNF75XX */
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) && (qlm == 5 || qlm == 6))
+		num_lanes = 2;
+
+	/* Put the PHY in the P2 power-down state. All lanes in a QLM/DLM
+	 * must be powered down to force the PHY into the P2 state.
+	 */
+	for (i = 0; i < num_lanes; i++) {
+		cvmx_gserx_lanex_pcs_ctlifc_0_t ctlifc0;
+		cvmx_gserx_lanex_pcs_ctlifc_1_t ctlifc1;
+		cvmx_gserx_lanex_pcs_ctlifc_2_t ctlifc2;
+
+		/* Step 1: Set GSER()_LANE(lane_n)_PCS_CTLIFC_0[CFG_TX_PSTATE_REQ_OVRRD_VAL]
+		 * = 0x3
+		 * Select P2 power state for Tx lane
+		 */
+		ctlifc0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(i, qlm));
+		ctlifc0.s.cfg_tx_pstate_req_ovrrd_val = 0x3;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(i, qlm), ctlifc0.u64);
+		/* Step 2: Set GSER()_LANE(lane_n)_PCS_CTLIFC_1[CFG_RX_PSTATE_REQ_OVRRD_VAL]
+		 * = 0x3
+		 * Select P2 power state for Rx lane
+		 */
+		ctlifc1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(i, qlm));
+		ctlifc1.s.cfg_rx_pstate_req_ovrrd_val = 0x3;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(i, qlm), ctlifc1.u64);
+		/* Step 3: Set GSER()_LANE(lane_n)_PCS_CTLIFC_2[CFG_TX_PSTATE_REQ_OVRRD_EN] = 1
+		 * Enable Tx power state override and Set
+		 * GSER()_LANE(lane_n)_PCS_CTLIFC_2[CFG_RX_PSTATE_REQ_OVRRD_EN] = 1
+		 * Enable Rx power state override
+		 */
+		ctlifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm));
+		ctlifc2.s.cfg_tx_pstate_req_ovrrd_en = 0x1;
+		ctlifc2.s.cfg_rx_pstate_req_ovrrd_en = 0x1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm), ctlifc2.u64);
+		/* Step 4: Set GSER()_LANE(lane_n)_PCS_CTLIFC_2[CTLIFC_OVRRD_REQ] = 1
+		 * Start the CTLIFC override state machine
+		 */
+		ctlifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm));
+		ctlifc2.s.ctlifc_ovrrd_req = 0x1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm), ctlifc2.u64);
+	}
+
+	/* Update PLL parameters */
+	/* Step 5: Set GSER()_GLBL_PLL_CFG_3[PLL_VCTRL_SEL_LCVCO_VAL] = 0x2, and
+	 * GSER()_GLBL_PLL_CFG_3[PCS_SDS_PLL_VCO_AMP] = 0
+	 */
+	pll_cfg_3.u64 = csr_rd_node(node, CVMX_GSERX_GLBL_PLL_CFG_3(qlm));
+	pll_cfg_3.s.pcs_sds_pll_vco_amp = 0;
+	pll_cfg_3.s.pll_vctrl_sel_lcvco_val = 2;
+	csr_wr_node(node, CVMX_GSERX_GLBL_PLL_CFG_3(qlm), pll_cfg_3.u64);
+
+	/* Step 6: Set GSER()_GLBL_MISC_CONFIG_1[PCS_SDS_TRIM_CHP_REG] = 0x2. */
+	misc_config_1.u64 = csr_rd_node(node, CVMX_GSERX_GLBL_MISC_CONFIG_1(qlm));
+	misc_config_1.s.pcs_sds_trim_chp_reg = 2;
+	csr_wr_node(node, CVMX_GSERX_GLBL_MISC_CONFIG_1(qlm), misc_config_1.u64);
+
+	/* Wake up the PHY and transition to the P0 power-up state to bring up
+	 * the lanes; all PHY lanes need to be woken up.
+	 */
+	for (i = 0; i < num_lanes; i++) {
+		cvmx_gserx_lanex_pcs_ctlifc_0_t ctlifc0;
+		cvmx_gserx_lanex_pcs_ctlifc_1_t ctlifc1;
+		cvmx_gserx_lanex_pcs_ctlifc_2_t ctlifc2;
+		/* Step 7: Set GSER()_LANE(lane_n)_PCS_CTLIFC_0[CFG_TX_PSTATE_REQ_OVRRD_VAL] = 0x0
+		 * Select P0 power state for Tx lane
+		 */
+		ctlifc0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(i, qlm));
+		ctlifc0.s.cfg_tx_pstate_req_ovrrd_val = 0x0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(i, qlm), ctlifc0.u64);
+		/* Step 8: Set GSER()_LANE(lane_n)_PCS_CTLIFC_1[CFG_RX_PSTATE_REQ_OVRRD_VAL] = 0x0
+		 * Select P0 power state for Rx lane
+		 */
+		ctlifc1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(i, qlm));
+		ctlifc1.s.cfg_rx_pstate_req_ovrrd_val = 0x0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(i, qlm), ctlifc1.u64);
+		/* Step 9: Set GSER()_LANE(lane_n)_PCS_CTLIFC_2[CFG_TX_PSTATE_REQ_OVRRD_EN] = 1
+		 * Enable Tx power state override and Set
+		 * GSER()_LANE(lane_n)_PCS_CTLIFC_2[CFG_RX_PSTATE_REQ_OVRRD_EN] = 1
+		 * Enable Rx power state override
+		 */
+		ctlifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm));
+		ctlifc2.s.cfg_tx_pstate_req_ovrrd_en = 0x1;
+		ctlifc2.s.cfg_rx_pstate_req_ovrrd_en = 0x1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm), ctlifc2.u64);
+		/* Step 10: Set GSER()_LANE(lane_n)_PCS_CTLIFC_2[CTLIFC_OVRRD_REQ] = 1
+		 * Start the CTLIFC override state machine
+		 */
+		ctlifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm));
+		ctlifc2.s.ctlifc_ovrrd_req = 0x1;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm), ctlifc2.u64);
+	}
+
+	/* Step 11: Wait 10 msec */
+	mdelay(10);
+
+	/* Release Lane Tx/Rx Power state override enables. */
+	for (i = 0; i < num_lanes; i++) {
+		cvmx_gserx_lanex_pcs_ctlifc_2_t ctlifc2;
+
+		ctlifc2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm));
+		ctlifc2.s.cfg_tx_pstate_req_ovrrd_en = 0x0;
+		ctlifc2.s.cfg_rx_pstate_req_ovrrd_en = 0x0;
+		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(i, qlm), ctlifc2.u64);
+	}
+
+	/* Step 12: Poll GSER()_PLL_STAT[PLL_LOCK] = 1
+	 * Poll and check that PLL is locked
+	 */
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_PLL_STAT(qlm), cvmx_gserx_pll_stat_t,
+				       pll_lock, ==, 1, 10000)) {
+		printf("%d:QLM%d: Timeout waiting for GSERX_PLL_STAT[pll_lock]\n", node, qlm);
+		return -1;
+	}
+
+	/* Step 13: Poll GSER()_QLM_STAT[RST_RDY] = 1
+	 * Poll and check that QLM/DLM is Ready
+	 */
+	if (is_pcie == 0 &&
+	    CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_QLM_STAT(qlm), cvmx_gserx_qlm_stat_t,
+				       rst_rdy, ==, 1, 10000)) {
+		printf("%d:QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", node, qlm);
+		return -1;
+	}
+
+	return 0;
+}
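+
+/*
+ * The P2/P0 transitions above repeat one per-lane CTLIFC override sequence.
+ * A minimal sketch of that sequence as a hypothetical helper (illustration
+ * only, not part of this file; pstate is 0x3 for P2 and 0x0 for P0):
+ *
+ *	static void __gser_lane_pstate_req(int node, int qlm, int lane, int pstate)
+ *	{
+ *		cvmx_gserx_lanex_pcs_ctlifc_0_t c0;
+ *		cvmx_gserx_lanex_pcs_ctlifc_1_t c1;
+ *		cvmx_gserx_lanex_pcs_ctlifc_2_t c2;
+ *
+ *		c0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm));
+ *		c0.s.cfg_tx_pstate_req_ovrrd_val = pstate;
+ *		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_0(lane, qlm), c0.u64);
+ *		c1.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(lane, qlm));
+ *		c1.s.cfg_rx_pstate_req_ovrrd_val = pstate;
+ *		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_1(lane, qlm), c1.u64);
+ *		c2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+ *		c2.s.cfg_tx_pstate_req_ovrrd_en = 1;
+ *		c2.s.cfg_rx_pstate_req_ovrrd_en = 1;
+ *		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), c2.u64);
+ *		c2.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm));
+ *		c2.s.ctlifc_ovrrd_req = 1;
+ *		csr_wr_node(node, CVMX_GSERX_LANEX_PCS_CTLIFC_2(lane, qlm), c2.u64);
+ *	}
+ */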
+
+/**
+ * Configure all of the PLLs for a particular node and qlm
+ * @INTERNAL
+ *
+ * @param node	Node number to configure
+ * @param qlm	QLM number to configure
+ */
+static void __qlm_setup_pll_cn78xx(int node, int qlm)
+{
+	cvmx_gserx_pll_px_mode_0_t mode_0;
+	cvmx_gserx_pll_px_mode_1_t mode_1;
+	cvmx_gserx_lane_px_mode_0_t pmode_0;
+	cvmx_gserx_lane_px_mode_1_t pmode_1;
+	int lane_mode;
+	int ref_clk;
+	const struct refclk_settings_cn78xx *clk_settings;
+
+	for (lane_mode = 0; lane_mode < R_NUM_LANE_MODES; lane_mode++) {
+		mode_0.u64 = csr_rd_node(node, CVMX_GSERX_PLL_PX_MODE_0(lane_mode, qlm));
+		mode_1.u64 = csr_rd_node(node, CVMX_GSERX_PLL_PX_MODE_1(lane_mode, qlm));
+		pmode_0.u64 = 0;
+		pmode_1.u64 = 0;
+		ref_clk = ref_clk_cn78xx[node][qlm][lane_mode];
+		clk_settings = &refclk_settings_cn78xx[lane_mode][ref_clk];
+		debug("%s(%d, %d): lane_mode: 0x%x, ref_clk: %d\n", __func__, node, qlm, lane_mode,
+		      ref_clk);
+
+		if (!clk_settings->valid) {
+			printf("%s: Error: reference clock %d is not supported for lane mode %d on qlm %d\n",
+			       __func__, ref_clk, lane_mode, qlm);
+			continue;
+		}
+
+		mode_0.s.pll_icp = clk_settings->mode_0.s.pll_icp;
+		mode_0.s.pll_rloop = clk_settings->mode_0.s.pll_rloop;
+		mode_0.s.pll_pcs_div = clk_settings->mode_0.s.pll_pcs_div;
+
+		mode_1.s.pll_16p5en = clk_settings->mode_1.s.pll_16p5en;
+		mode_1.s.pll_cpadj = clk_settings->mode_1.s.pll_cpadj;
+		mode_1.s.pll_pcie3en = clk_settings->mode_1.s.pll_pcie3en;
+		mode_1.s.pll_opr = clk_settings->mode_1.s.pll_opr;
+		mode_1.s.pll_div = clk_settings->mode_1.s.pll_div;
+
+		pmode_0.u64 = clk_settings->pmode_0.u64;
+
+		pmode_1.u64 = clk_settings->pmode_1.u64;
+
+		csr_wr_node(node, CVMX_GSERX_PLL_PX_MODE_1(lane_mode, qlm), mode_1.u64);
+		csr_wr_node(node, CVMX_GSERX_LANE_PX_MODE_0(lane_mode, qlm), pmode_0.u64);
+		csr_wr_node(node, CVMX_GSERX_LANE_PX_MODE_1(lane_mode, qlm), pmode_1.u64);
+		csr_wr_node(node, CVMX_GSERX_PLL_PX_MODE_0(lane_mode, qlm), mode_0.u64);
+	}
+}
+
+/**
+ * Get the lane mode for the specified node and QLM.
+ *
+ * @param ref_clk_sel	The reference-clock selection to use to configure QLM
+ *			 0 = REF_100MHZ
+ *			 1 = REF_125MHZ
+ *			 2 = REF_156MHZ
+ *			 3 = REF_161MHZ (161.1328125 MHz)
+ * @param baud_mhz   The speed, in MHz, at which the QLM is to be configured.
+ * @param[out] alt_pll_settings	If non-NULL this will be set if non-default PLL
+ *				settings are required for the mode.
+ *
+ * @return lane mode to use or -1 on error
+ *
+ * NOTE: In some modes the requested speed is only reachable with non-default
+ * PLL settings; in those cases *alt_pll_settings is set to true.
+ */
+static int __get_lane_mode_for_speed_and_ref_clk(int ref_clk_sel, int baud_mhz,
+						 bool *alt_pll_settings)
+{
+	if (alt_pll_settings)
+		*alt_pll_settings = false;
+	switch (baud_mhz) {
+	case 98304:
+	case 49152:
+	case 24576:
+	case 12288:
+		if (ref_clk_sel != 3) {
+			printf("Error: Invalid ref clock\n");
+			return -1;
+		}
+		return 0x5;
+	case 6144:
+	case 3072:
+		if (ref_clk_sel != 3) {
+			printf("Error: Invalid ref clock\n");
+			return -1;
+		}
+		return 0x8;
+	case 1250:
+		if (alt_pll_settings)
+			*alt_pll_settings = (ref_clk_sel != 2);
+		return R_125G_REFCLK15625_SGMII;
+	case 2500:
+		if (ref_clk_sel == 0)
+			return R_2_5G_REFCLK100;
+
+		if (alt_pll_settings)
+			*alt_pll_settings = (ref_clk_sel != 1);
+		return R_2_5G_REFCLK125;
+	case 3125:
+		if (ref_clk_sel == 2) {
+			return R_3125G_REFCLK15625_XAUI;
+		} else if (ref_clk_sel == 1) {
+			if (alt_pll_settings)
+				*alt_pll_settings = true;
+			return R_3125G_REFCLK15625_XAUI;
+		}
+
+		printf("Error: Invalid speed\n");
+		return -1;
+	case 5000:
+		if (ref_clk_sel == 0)
+			return R_5G_REFCLK100;
+		if (ref_clk_sel == 1)
+			return R_5G_REFCLK125;
+		return R_5G_REFCLK15625_QSGMII;
+	case 6250:
+		if (ref_clk_sel != 0) {
+			if (alt_pll_settings)
+				*alt_pll_settings = (ref_clk_sel != 2);
+			return R_625G_REFCLK15625_RXAUI;
+		}
+
+		printf("Error: Invalid speed\n");
+		return -1;
+	case 6316:
+		if (ref_clk_sel != 3) {
+			printf("Error: Invalid ref clock\n");
+			return -1;
+		}
+		if (alt_pll_settings)
+			*alt_pll_settings = true;
+		return R_625G_REFCLK15625_RXAUI;
+	case 8000:
+		if (ref_clk_sel == 0)
+			return R_8G_REFCLK100;
+		else if (ref_clk_sel == 1)
+			return R_8G_REFCLK125;
+
+		printf("Error: Invalid speed\n");
+		return -1;
+	case 103125:
+		if (ref_clk_sel == 3 && alt_pll_settings)
+			*alt_pll_settings = true;
+
+		if (ref_clk_sel == 2 || ref_clk_sel == 3)
+			return R_103125G_REFCLK15625_KR;
+		/* fall through to the error path for other reference clocks */
+	default:
+		printf("Error: Invalid speed\n");
+		return -1;
+	}
+
+	return -1;
+}
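+
+/*
+ * Example (values taken from the switch above): 1.25 Gbaud SGMII from a
+ * 125 MHz reference resolves to R_125G_REFCLK15625_SGMII with alternate
+ * PLL settings, since 156.25 MHz is the default reference for that mode:
+ *
+ *	bool alt;
+ *	int lm = __get_lane_mode_for_speed_and_ref_clk(1, 1250, &alt);
+ *	(lm == R_125G_REFCLK15625_SGMII, alt == true)
+ */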
+
+/*
+ * Errata PEM-31375 PEM RSL accesses to PCLK registers can timeout
+ * during speed change. Change SLI_WINDOW_CTL[time] to 525us
+ */
+static void __set_sli_window_ctl_errata_31375(int node)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CNF75XX)) {
+		cvmx_sli_window_ctl_t window_ctl;
+
+		window_ctl.u64 = csr_rd_node(node, CVMX_PEXP_SLI_WINDOW_CTL);
+		/* Configure SLI_WINDOW_CTL only once */
+		if (window_ctl.s.time != 8191)
+			return;
+
+		window_ctl.s.time = gd->bus_clk * 525ull / 1000000;
+		csr_wr_node(node, CVMX_PEXP_SLI_WINDOW_CTL, window_ctl.u64);
+	}
+}
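+
+/*
+ * Worked example for the computation above, assuming gd->bus_clk is the
+ * SCLK frequency in Hz: with an 800 MHz SCLK,
+ *
+ *	time = 800000000 * 525 / 1000000 = 420000 cycles,
+ *
+ * i.e. 525 us worth of SCLK cycles instead of the 8191-cycle reset value.
+ */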
+
+static void __cvmx_qlm_pcie_errata_ep_cn78xx(int node, int pem)
+{
+	cvmx_pciercx_cfg031_t cfg031;
+	cvmx_pciercx_cfg032_t cfg032;
+	cvmx_pciercx_cfg040_t cfg040;
+	cvmx_pemx_cfg_t pemx_cfg;
+	cvmx_pemx_on_t pemx_on;
+	int low_qlm, high_qlm;
+	int qlm, lane;
+	u64 start_cycle;
+
+	pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(pem));
+
+	/* Errata (GSER-21178) PCIe gen3 doesn't work, continued */
+
+	/* Wait for the link to come up as Gen1 */
+	printf("PCIe%d: Waiting for EP out of reset\n", pem);
+	while (pemx_on.s.pemoor == 0) {
+		udelay(1000);
+		pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(pem));
+	}
+
+	/* Enable gen3 speed selection */
+	printf("PCIe%d: Enabling Gen3 for EP\n", pem);
+	/* Select Gen3 in the PEM mode and PCIe max/target link speed fields */
+	pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(pem));
+	pemx_cfg.s.md = 2;
+	csr_wr_node(node, CVMX_PEMX_CFG(pem), pemx_cfg.u64);
+	cfg031.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG031(pem));
+	cfg031.s.mls = 2;
+	cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG031(pem), cfg031.u32);
+	cfg040.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG040(pem));
+	cfg040.s.tls = 3;
+	cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG040(pem), cfg040.u32);
+
+	/* Wait up to 10ms for the link speed change to complete */
+	start_cycle = get_timer(0);
+	do {
+		if (get_timer(start_cycle) > 10)
+			return;
+
+		mdelay(1);
+		cfg032.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG032(pem));
+	} while (cfg032.s.ls != 3);
+
+	pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(pem));
+	low_qlm = pem; /* FIXME */
+	high_qlm = (pemx_cfg.cn78xx.lanes8) ? low_qlm + 1 : low_qlm;
+
+	/* Toggle cfg_rx_dll_locken_ovrrd_en and rx_resetn_ovrrd_en across
+	 * all QLM lanes in use
+	 */
+	for (qlm = low_qlm; qlm <= high_qlm; qlm++) {
+		for (lane = 0; lane < 4; lane++) {
+			cvmx_gserx_lanex_rx_misc_ovrrd_t misc_ovrrd;
+			cvmx_gserx_lanex_pwr_ctrl_t pwr_ctrl;
+
+			misc_ovrrd.u64 =
+				csr_rd_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm));
+			misc_ovrrd.s.cfg_rx_dll_locken_ovrrd_en = 1;
+			csr_wr_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm),
+				    misc_ovrrd.u64);
+			pwr_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PWR_CTRL(lane, qlm));
+			pwr_ctrl.s.rx_resetn_ovrrd_en = 1;
+			csr_wr_node(node, CVMX_GSERX_LANEX_PWR_CTRL(lane, qlm), pwr_ctrl.u64);
+		}
+	}
+	for (qlm = low_qlm; qlm <= high_qlm; qlm++) {
+		for (lane = 0; lane < 4; lane++) {
+			cvmx_gserx_lanex_rx_misc_ovrrd_t misc_ovrrd;
+			cvmx_gserx_lanex_pwr_ctrl_t pwr_ctrl;
+
+			misc_ovrrd.u64 =
+				csr_rd_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm));
+			misc_ovrrd.s.cfg_rx_dll_locken_ovrrd_en = 0;
+			csr_wr_node(node, CVMX_GSERX_LANEX_RX_MISC_OVRRD(lane, qlm),
+				    misc_ovrrd.u64);
+			pwr_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PWR_CTRL(lane, qlm));
+			pwr_ctrl.s.rx_resetn_ovrrd_en = 0;
+			csr_wr_node(node, CVMX_GSERX_LANEX_PWR_CTRL(lane, qlm), pwr_ctrl.u64);
+		}
+	}
+
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_PEMX_ON(pem), cvmx_pemx_on_t, pemoor, ==, 1,
+				       1000000)) {
+		printf("PCIe%d: Timeout waiting for EP link up at Gen3\n", pem);
+		return;
+	}
+}
+
+static void __cvmx_qlm_pcie_errata_cn78xx(int node, int qlm)
+{
+	int pem, i, q;
+	int is_8lanes;
+	int is_high_lanes;
+	int low_qlm, high_qlm, is_host;
+	int need_ep_monitor;
+	cvmx_pemx_cfg_t pem_cfg, pem3_cfg;
+	cvmx_gserx_slice_cfg_t slice_cfg;
+	cvmx_gserx_rx_pwr_ctrl_p1_t pwr_ctrl_p1;
+	cvmx_rst_soft_prstx_t soft_prst;
+
+	/* Only applies to CN78XX pass 1.x */
+	if (!OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+		return;
+
+	/* Determine the PEM for this QLM, whether we're in 8 lane mode,
+	 * and whether these are the top lanes of the 8
+	 */
+	switch (qlm) {
+	case 0: /* First 4 lanes of PEM0 */
+		pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(0));
+		pem = 0;
+		is_8lanes = pem_cfg.cn78xx.lanes8;
+		is_high_lanes = 0;
+		break;
+	case 1: /* Either last 4 lanes of PEM0, or PEM1 */
+		pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(0));
+		pem = (pem_cfg.cn78xx.lanes8) ? 0 : 1;
+		is_8lanes = pem_cfg.cn78xx.lanes8;
+		is_high_lanes = is_8lanes;
+		break;
+	case 2: /* First 4 lanes of PEM2 */
+		pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+		pem = 2;
+		is_8lanes = pem_cfg.cn78xx.lanes8;
+		is_high_lanes = 0;
+		break;
+	case 3: /* Either last 4 lanes of PEM2, or PEM3 */
+		pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+		pem3_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+		pem = (pem_cfg.cn78xx.lanes8) ? 2 : 3;
+		is_8lanes = (pem == 2) ? pem_cfg.cn78xx.lanes8 : pem3_cfg.cn78xx.lanes8;
+		is_high_lanes = (pem == 2) && is_8lanes;
+		break;
+	case 4: /* Last 4 lanes of PEM3 */
+		pem = 3;
+		is_8lanes = 1;
+		is_high_lanes = 1;
+		break;
+	default:
+		return;
+	}
+
+	/* These workarounds must be applied once per PEM. Since we're called
+	 * per QLM, wait for the second half of 8-lane setups before doing the
+	 * workaround.
+	 */
+	if (is_8lanes && !is_high_lanes)
+		return;
+
+	pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(pem));
+	is_host = pem_cfg.cn78xx.hostmd;
+	low_qlm = (is_8lanes) ? qlm - 1 : qlm;
+	high_qlm = qlm;
+	qlm = -1;
+
+	if (!is_host) {
+		/* Read the current slice config value. If it's already at the
+		 * value we will program, skip the workaround; we're probably
+		 * doing a hot reset and the workaround is already applied.
+		 */
+		slice_cfg.u64 = csr_rd_node(node, CVMX_GSERX_SLICE_CFG(low_qlm));
+		if (slice_cfg.s.tx_rx_detect_lvl_enc == 7 && OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+			return;
+	}
+
+	if (is_host && OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0)) {
+		/* (GSER-XXXX) GSER PHY needs to be reset at initialization */
+		cvmx_gserx_phy_ctl_t phy_ctl;
+
+		for (q = low_qlm; q <= high_qlm; q++) {
+			phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(q));
+			phy_ctl.s.phy_reset = 1;
+			csr_wr_node(node, CVMX_GSERX_PHY_CTL(q), phy_ctl.u64);
+		}
+		udelay(5);
+
+		for (q = low_qlm; q <= high_qlm; q++) {
+			phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(q));
+			phy_ctl.s.phy_reset = 0;
+			csr_wr_node(node, CVMX_GSERX_PHY_CTL(q), phy_ctl.u64);
+		}
+		udelay(5);
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0)) {
+		/* (GSER-20936) GSER has wrong PCIe RX detect reset value */
+		for (q = low_qlm; q <= high_qlm; q++) {
+			slice_cfg.u64 = csr_rd_node(node, CVMX_GSERX_SLICE_CFG(q));
+			slice_cfg.s.tx_rx_detect_lvl_enc = 7;
+			csr_wr_node(node, CVMX_GSERX_SLICE_CFG(q), slice_cfg.u64);
+		}
+
+		/* Clear the bit in GSERX_RX_PWR_CTRL_P1[p1_rx_subblk_pd]
+		 * that corresponds to "Lane DLL"
+		 */
+		for (q = low_qlm; q <= high_qlm; q++) {
+			pwr_ctrl_p1.u64 = csr_rd_node(node, CVMX_GSERX_RX_PWR_CTRL_P1(q));
+			pwr_ctrl_p1.s.p1_rx_subblk_pd &= ~4;
+			csr_wr_node(node, CVMX_GSERX_RX_PWR_CTRL_P1(q), pwr_ctrl_p1.u64);
+		}
+
+		/* Errata (GSER-20888) GSER incorrect synchronizers hurts PCIe
+		 * Override TX Power State machine TX reset control signal
+		 */
+		for (q = low_qlm; q <= high_qlm; q++) {
+			for (i = 0; i < 4; i++) {
+				cvmx_gserx_lanex_tx_cfg_0_t tx_cfg;
+				cvmx_gserx_lanex_pwr_ctrl_t pwr_ctrl;
+
+				tx_cfg.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_0(i, q));
+				tx_cfg.s.tx_resetn_ovrrd_val = 1;
+				csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_0(i, q), tx_cfg.u64);
+				pwr_ctrl.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_PWR_CTRL(i, q));
+				pwr_ctrl.s.tx_p2s_resetn_ovrrd_en = 1;
+				csr_wr_node(node, CVMX_GSERX_LANEX_PWR_CTRL(i, q), pwr_ctrl.u64);
+			}
+		}
+	}
+
+	if (!is_host) {
+		cvmx_pciercx_cfg089_t cfg089;
+		cvmx_pciercx_cfg090_t cfg090;
+		cvmx_pciercx_cfg091_t cfg091;
+		cvmx_pciercx_cfg092_t cfg092;
+		cvmx_pciercx_cfg548_t cfg548;
+		cvmx_pciercx_cfg554_t cfg554;
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0)) {
+			/* Errata (GSER-21178) PCIe gen3 doesn't work */
+			/* The starting equalization hints are incorrect on CN78XX pass 1.x. Fix
+			 * them for the 8 possible lanes. It doesn't hurt to program them even
+			 * for lanes not in use
+			 */
+			cfg089.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG089(pem));
+			cfg089.s.l1urph = 2;
+			cfg089.s.l1utp = 7;
+			cfg089.s.l0urph = 2;
+			cfg089.s.l0utp = 7;
+			cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG089(pem), cfg089.u32);
+			cfg090.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG090(pem));
+			cfg090.s.l3urph = 2;
+			cfg090.s.l3utp = 7;
+			cfg090.s.l2urph = 2;
+			cfg090.s.l2utp = 7;
+			cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG090(pem), cfg090.u32);
+			cfg091.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG091(pem));
+			cfg091.s.l5urph = 2;
+			cfg091.s.l5utp = 7;
+			cfg091.s.l4urph = 2;
+			cfg091.s.l4utp = 7;
+			cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG091(pem), cfg091.u32);
+			cfg092.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG092(pem));
+			cfg092.s.l7urph = 2;
+			cfg092.s.l7utp = 7;
+			cfg092.s.l6urph = 2;
+			cfg092.s.l6utp = 7;
+			cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG092(pem), cfg092.u32);
+			/* FIXME: Disable phase 2 and phase 3 equalization */
+			cfg548.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG548(pem));
+			cfg548.s.ep2p3d = 1;
+			cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG548(pem), cfg548.u32);
+		}
+		/* Errata (GSER-21331) GEN3 Equalization may fail */
+		/* Disable preset #10 and disable the 2ms timeout */
+		cfg554.u32 = cvmx_pcie_cfgx_read_node(node, pem, CVMX_PCIERCX_CFG554(pem));
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0))
+			cfg554.s.p23td = 1;
+		cfg554.s.prv = 0x3ff;
+		cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG554(pem), cfg554.u32);
+
+		if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0)) {
+			need_ep_monitor = (pem_cfg.s.md == 2);
+			if (need_ep_monitor) {
+				cvmx_pciercx_cfg031_t cfg031;
+				cvmx_pciercx_cfg040_t cfg040;
+
+				/* Force Gen1 for initial link bringup. We'll
+				 * fix it later
+				 */
+				pem_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(pem));
+				pem_cfg.s.md = 0;
+				csr_wr_node(node, CVMX_PEMX_CFG(pem), pem_cfg.u64);
+				cfg031.u32 = cvmx_pcie_cfgx_read_node(node, pem,
+								      CVMX_PCIERCX_CFG031(pem));
+				cfg031.s.mls = 0;
+				cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG031(pem),
+							  cfg031.u32);
+				cfg040.u32 = cvmx_pcie_cfgx_read_node(node, pem,
+								      CVMX_PCIERCX_CFG040(pem));
+				cfg040.s.tls = 1;
+				cvmx_pcie_cfgx_write_node(node, pem, CVMX_PCIERCX_CFG040(pem),
+							  cfg040.u32);
+				__cvmx_qlm_pcie_errata_ep_cn78xx(node, pem);
+			}
+			return;
+		}
+	}
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_0)) {
+		/* De-assert the SOFT_RST bit for this QLM (PEM), causing the PCIe
+		 * workarounds code above to take effect.
+		 */
+		soft_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(pem));
+		soft_prst.s.soft_prst = 0;
+		csr_wr_node(node, CVMX_RST_SOFT_PRSTX(pem), soft_prst.u64);
+		udelay(1);
+
+		/* Assert the SOFT_RST bit for this QLM (PEM), putting the PCIe
+		 * back into the reset state without disturbing the workarounds.
+		 */
+		soft_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(pem));
+		soft_prst.s.soft_prst = 1;
+		csr_wr_node(node, CVMX_RST_SOFT_PRSTX(pem), soft_prst.u64);
+	}
+	udelay(1);
+}
+
+/**
+ * Set up the PEM to either drive or receive reset from PRST based on RC or EP
+ *
+ * @param node   Node to use in a NUMA setup
+ * @param pem    Which PEM to set up
+ * @param is_endpoint
+ *               Non-zero if the PEM is an EP
+ */
+static void __setup_pem_reset(int node, int pem, int is_endpoint)
+{
+	cvmx_rst_ctlx_t rst_ctl;
+
+	/* Make sure is_endpoint is either 0 or 1 */
+	is_endpoint = (is_endpoint != 0);
+	rst_ctl.u64 = csr_rd_node(node, CVMX_RST_CTLX(pem));
+	rst_ctl.s.prst_link = 0;	  /* Link down causes soft reset */
+	rst_ctl.s.rst_link = is_endpoint; /* EP PERST causes a soft reset */
+	rst_ctl.s.rst_drv = !is_endpoint; /* Drive if RC */
+	rst_ctl.s.rst_rcv = is_endpoint;  /* Only read PERST in EP mode */
+	rst_ctl.s.rst_chip = 0;		  /* PERST doesn't pull CHIP_RESET */
+	csr_wr_node(node, CVMX_RST_CTLX(pem), rst_ctl.u64);
+}
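+
+/*
+ * Example (mirrors the callers below, which pass !rc): an RC port drives
+ * PERST and ignores it as an input; an EP port does the opposite:
+ *
+ *	__setup_pem_reset(node, 0, 0);	(PEM0 as root complex)
+ *	__setup_pem_reset(node, 1, 1);	(PEM1 as endpoint)
+ */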
+
+/**
+ * Configure QLM speed and mode for cn78xx.
+ *
+ * @param node    Node to configure the QLM
+ * @param qlm     The QLM to configure
+ * @param baud_mhz   The speed, in MHz, at which the QLM is to be configured.
+ * @param mode    The mode the QLM is to be configured for (SGMII/XAUI/PCIe/...).
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP mode.
+ * @param gen3    Only used for PCIe
+ *			gen3 = 2 GEN3 mode
+ *			gen3 = 1 GEN2 mode
+ *			gen3 = 0 GEN1 mode
+ *
+ * @param ref_clk_sel    The reference-clock selection to use to configure QLM
+ *			 0 = REF_100MHZ
+ *			 1 = REF_125MHZ
+ *			 2 = REF_156MHZ
+ *			 3 = REF_161MHZ
+ * @param ref_clk_input  The reference-clock input to use to configure QLM
+ *
+ * @return       Return 0 on success or -1 on error.
+ */
+int octeon_configure_qlm_cn78xx(int node, int qlm, int baud_mhz, int mode, int rc, int gen3,
+				int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_lane_mode_t lmode;
+	cvmx_gserx_cfg_t cfg;
+	cvmx_gserx_refclk_sel_t refclk_sel;
+
+	int is_pcie = 0;
+	int is_ilk = 0;
+	int is_bgx = 0;
+	int lane_mode = 0;
+	int lmac_type = 0;
+	bool alt_pll = false;
+	int num_ports = 0;
+	int lane_to_sds = 0;
+
+	debug("%s(node: %d, qlm: %d, baud_mhz: %d, mode: %d, rc: %d, gen3: %d, ref_clk_sel: %d, ref_clk_input: %d\n",
+	      __func__, node, qlm, baud_mhz, mode, rc, gen3, ref_clk_sel, ref_clk_input);
+	if (OCTEON_IS_MODEL(OCTEON_CN76XX) && qlm > 4) {
+		debug("%s: qlm %d not present on CN76XX\n", __func__, qlm);
+		return -1;
+	}
+
+	/* Errata PEM-31375 PEM RSL accesses to PCLK registers can timeout
+	 * during speed change. Change SLI_WINDOW_CTL[time] to 525us
+	 */
+	__set_sli_window_ctl_errata_31375(node);
+
+	cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+
+	/* If the PEM is in EP mode, there is no need to do anything */
+	if (cfg.s.pcie && rc == 0) {
+		debug("%s: node %d, qlm %d is in PCIe endpoint mode, returning\n",
+		      __func__, node, qlm);
+		return 0;
+	}
+
+	/* Set the reference clock to use */
+	refclk_sel.u64 = 0;
+	if (ref_clk_input == 0) { /* External ref clock */
+		refclk_sel.s.com_clk_sel = 0;
+		refclk_sel.s.use_com1 = 0;
+	} else if (ref_clk_input == 1) {
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 0;
+	} else {
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 1;
+	}
+
+	csr_wr_node(node, CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+
+	/* Reset the QLM after changing the reference clock */
+	phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 1;
+	phy_ctl.s.phy_pd = 1;
+	csr_wr_node(node, CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	udelay(1000);
+
+	/* Always restore the reference clocks for a QLM */
+	memcpy(ref_clk_cn78xx[node][qlm], def_ref_clk_cn78xx, sizeof(def_ref_clk_cn78xx));
+	switch (mode) {
+	case CVMX_QLM_MODE_PCIE:
+	case CVMX_QLM_MODE_PCIE_1X8: {
+		cvmx_pemx_cfg_t pemx_cfg;
+		cvmx_pemx_on_t pemx_on;
+
+		is_pcie = 1;
+
+		if (ref_clk_sel == 0) {
+			refclk_sel.u64 = csr_rd_node(node, CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 0;
+			csr_wr_node(node, CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK100;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK100;
+			else
+				lane_mode = R_8G_REFCLK100;
+		} else if (ref_clk_sel == 1) {
+			refclk_sel.u64 = csr_rd_node(node, CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 1;
+			csr_wr_node(node, CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK125;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK125;
+			else
+				lane_mode = R_8G_REFCLK125;
+		} else {
+			printf("Invalid reference clock for PCIe on QLM%d\n", qlm);
+			return -1;
+		}
+
+		switch (qlm) {
+		case 0: /* Either x4 or x8 based on PEM0 */
+		{
+			cvmx_rst_soft_prstx_t rst_prst;
+
+			rst_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(0));
+			rst_prst.s.soft_prst = rc;
+			csr_wr_node(node, CVMX_RST_SOFT_PRSTX(0), rst_prst.u64);
+			__setup_pem_reset(node, 0, !rc);
+
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(0));
+			pemx_cfg.cn78xx.lanes8 = (mode == CVMX_QLM_MODE_PCIE_1X8);
+			pemx_cfg.cn78xx.hostmd = rc;
+			pemx_cfg.cn78xx.md = gen3;
+			csr_wr_node(node, CVMX_PEMX_CFG(0), pemx_cfg.u64);
+			/* x8 mode waits for QLM1 setup before turning on the PEM */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		}
+		case 1: /* Either PEM0 x8 or PEM1 x4 */
+		{
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				cvmx_rst_soft_prstx_t rst_prst;
+				cvmx_pemx_cfg_t pemx_cfg;
+
+				rst_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(1));
+				rst_prst.s.soft_prst = rc;
+				csr_wr_node(node, CVMX_RST_SOFT_PRSTX(1), rst_prst.u64);
+				__setup_pem_reset(node, 1, !rc);
+
+				pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(1));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr_node(node, CVMX_PEMX_CFG(1), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(1));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(1), pemx_on.u64);
+			} else {
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		}
+		case 2: /* Either PEM2 x4 or PEM2 x8 */
+		{
+			cvmx_rst_soft_prstx_t rst_prst;
+
+			rst_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(2));
+			rst_prst.s.soft_prst = rc;
+			csr_wr_node(node, CVMX_RST_SOFT_PRSTX(2), rst_prst.u64);
+			__setup_pem_reset(node, 2, !rc);
+
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+			pemx_cfg.cn78xx.lanes8 = (mode == CVMX_QLM_MODE_PCIE_1X8);
+			pemx_cfg.cn78xx.hostmd = rc;
+			pemx_cfg.cn78xx.md = gen3;
+			csr_wr_node(node, CVMX_PEMX_CFG(2), pemx_cfg.u64);
+			/* x8 mode waits for QLM3 setup before turning on the PEM */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(2));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(2), pemx_on.u64);
+			}
+			break;
+		}
+		case 3: /* Either PEM2 x8 or PEM3 x4 */
+		{
+			pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* Last 4 lanes of PEM2 */
+				/* PEMX_CFG already setup */
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(2));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(2), pemx_on.u64);
+			}
+			/* Check if PEM3 uses QLM3 and in x4 lane mode */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				cvmx_rst_soft_prstx_t rst_prst;
+
+				rst_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(3));
+				rst_prst.s.soft_prst = rc;
+				csr_wr_node(node, CVMX_RST_SOFT_PRSTX(3), rst_prst.u64);
+				__setup_pem_reset(node, 3, !rc);
+
+				pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr_node(node, CVMX_PEMX_CFG(3), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(3));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(3), pemx_on.u64);
+			}
+			break;
+		}
+		case 4: /* Either PEM3 x4 or PEM3 x8 */
+		{
+			if (mode == CVMX_QLM_MODE_PCIE_1X8) {
+				/* Last 4 lanes of PEM3 */
+				/* PEMX_CFG already setup */
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(3));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(3), pemx_on.u64);
+			} else {
+				/* 4 lanes of PEM3 */
+				cvmx_pemx_qlm_t pemx_qlm;
+				cvmx_rst_soft_prstx_t rst_prst;
+
+				rst_prst.u64 = csr_rd_node(node, CVMX_RST_SOFT_PRSTX(3));
+				rst_prst.s.soft_prst = rc;
+				csr_wr_node(node, CVMX_RST_SOFT_PRSTX(3), rst_prst.u64);
+				__setup_pem_reset(node, 3, !rc);
+
+				pemx_cfg.u64 = csr_rd_node(node, CVMX_PEMX_CFG(3));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr_node(node, CVMX_PEMX_CFG(3), pemx_cfg.u64);
+				/* PEM3 is on QLM4 */
+				pemx_qlm.u64 = csr_rd_node(node, CVMX_PEMX_QLM(3));
+				pemx_qlm.cn78xx.pem3qlm = 1;
+				csr_wr_node(node, CVMX_PEMX_QLM(3), pemx_qlm.u64);
+				pemx_on.u64 = csr_rd_node(node, CVMX_PEMX_ON(3));
+				pemx_on.s.pemon = 1;
+				csr_wr_node(node, CVMX_PEMX_ON(3), pemx_on.u64);
+			}
+			break;
+		}
+		default:
+			break;
+		}
+		break;
+	}
+	case CVMX_QLM_MODE_ILK:
+		is_ilk = 1;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		if (lane_mode == -1)
+			return -1;
+		/* FIXME: Set lane_mode for other speeds */
+		break;
+	case CVMX_QLM_MODE_SGMII:
+		is_bgx = 1;
+		lmac_type = 0;
+		lane_to_sds = 1;
+		num_ports = 4;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		debug("%s: SGMII lane mode: %d, alternate PLL: %s\n", __func__, lane_mode,
+		      alt_pll ? "true" : "false");
+		if (lane_mode == -1)
+			return -1;
+		break;
+	case CVMX_QLM_MODE_XAUI:
+		is_bgx = 5;
+		lmac_type = 1;
+		lane_to_sds = 0xe4;
+		num_ports = 1;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		debug("%s: XAUI lane mode: %d\n", __func__, lane_mode);
+		if (lane_mode == -1)
+			return -1;
+		break;
+	case CVMX_QLM_MODE_RXAUI:
+		is_bgx = 3;
+		lmac_type = 2;
+		lane_to_sds = 0;
+		num_ports = 2;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		debug("%s: RXAUI lane mode: %d\n", __func__, lane_mode);
+		if (lane_mode == -1)
+			return -1;
+		break;
+	case CVMX_QLM_MODE_XFI: /* 10GR_4X1 */
+	case CVMX_QLM_MODE_10G_KR:
+		is_bgx = 1;
+		lmac_type = 3;
+		lane_to_sds = 1;
+		num_ports = 4;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		debug("%s: XFI/10G_KR lane mode: %d\n", __func__, lane_mode);
+		if (lane_mode == -1)
+			return -1;
+		break;
+	case CVMX_QLM_MODE_XLAUI: /* 40GR4_1X4 */
+	case CVMX_QLM_MODE_40G_KR4:
+		is_bgx = 5;
+		lmac_type = 4;
+		lane_to_sds = 0xe4;
+		num_ports = 1;
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+		debug("%s: XLAUI/40G_KR4 lane mode: %d\n", __func__, lane_mode);
+		if (lane_mode == -1)
+			return -1;
+		break;
+	case CVMX_QLM_MODE_DISABLED:
+		/* Power down the QLM */
+		phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+		phy_ctl.s.phy_pd = 1;
+		phy_ctl.s.phy_reset = 1;
+		csr_wr_node(node, CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+		/* Disable all modes */
+		csr_wr_node(node, CVMX_GSERX_CFG(qlm), 0);
+		/* Nothing more to do for a disabled QLM */
+		return 0;
+	default:
+		break;
+	}
+
+	if (alt_pll) {
+		debug("%s: alternate PLL settings used for node %d, qlm %d, lane mode %d, reference clock %d\n",
+		      __func__, node, qlm, lane_mode, ref_clk_sel);
+		if (__set_qlm_ref_clk_cn78xx(node, qlm, lane_mode, ref_clk_sel)) {
+			printf("%s: Error: reference clock %d is not supported for node %d, qlm %d\n",
+			       __func__, ref_clk_sel, node, qlm);
+			return -1;
+		}
+	}
+
+	/* Power up PHY, but keep it in reset */
+	phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_pd = 0;
+	phy_ctl.s.phy_reset = 1;
+	csr_wr_node(node, CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/* Errata GSER-20788: GSER(0..13)_CFG[BGX_QUAD]=1 is broken. Force the
+	 * BGX_QUAD bit to be clear for CN78XX pass 1.x
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+		is_bgx &= 3;
+
+	/* Set GSER for the interface mode */
+	cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+	cfg.s.ila = is_ilk;
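+	/*
+	 * is_bgx is bit-packed: bit 0 selects BGX, bit 1 dual-lane (RXAUI)
+	 * and bit 2 quad-lane (XAUI/XLAUI) aggregation, hence the values
+	 * 1/3/5 in the mode switch above.
+	 */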
+	cfg.s.bgx = is_bgx & 1;
+	cfg.s.bgx_quad = (is_bgx >> 2) & 1;
+	cfg.s.bgx_dual = (is_bgx >> 1) & 1;
+	cfg.s.pcie = is_pcie;
+	csr_wr_node(node, CVMX_GSERX_CFG(qlm), cfg.u64);
+
+	/* Lane mode */
+	lmode.u64 = csr_rd_node(node, CVMX_GSERX_LANE_MODE(qlm));
+	lmode.s.lmode = lane_mode;
+	csr_wr_node(node, CVMX_GSERX_LANE_MODE(qlm), lmode.u64);
+
+	/* BGX0-1 can connect to QLM0-1 or QLM 2-3. Program the select bit if we're
+	 * one of these QLMs and we're using BGX
+	 */
+	if (qlm < 4 && is_bgx) {
+		int bgx = qlm & 1;
+		int use_upper = (qlm >> 1) & 1;
+		cvmx_bgxx_cmr_global_config_t global_cfg;
+
+		global_cfg.u64 = csr_rd_node(node, CVMX_BGXX_CMR_GLOBAL_CONFIG(bgx));
+		global_cfg.s.pmux_sds_sel = use_upper;
+		csr_wr_node(node, CVMX_BGXX_CMR_GLOBAL_CONFIG(bgx), global_cfg.u64);
+	}
+
+	/* Bring phy out of reset */
+	phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 0;
+	csr_wr_node(node, CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+	csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+
+	/*
+	 * Wait at least 250 ns until the management interface is ready to
+	 * accept read/write commands; udelay(1) waits a full microsecond.
+	 */
+	udelay(1);
+
+	if (is_bgx) {
+		int bgx = (qlm < 2) ? qlm : qlm - 2;
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		int index;
+
+		for (index = 0; index < num_ports; index++) {
+			cmr_config.u64 = csr_rd_node(node, CVMX_BGXX_CMRX_CONFIG(index, bgx));
+			cmr_config.s.enable = 0;
+			cmr_config.s.data_pkt_tx_en = 0;
+			cmr_config.s.data_pkt_rx_en = 0;
+			cmr_config.s.lmac_type = lmac_type;
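+			/*
+			 * lane_to_sds acts as a selector here: 1 means one
+			 * lane per LMAC (sds = index), 0 means RXAUI lane
+			 * pairs (0x4 and 0xe), any other value (0xe4 for
+			 * XAUI/XLAUI) is used verbatim.
+			 */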
+			cmr_config.s.lane_to_sds = ((lane_to_sds == 1) ?
+						    index : ((lane_to_sds == 0) ?
+							     (index ? 0xe : 4) :
+							     lane_to_sds));
+			csr_wr_node(node, CVMX_BGXX_CMRX_CONFIG(index, bgx), cmr_config.u64);
+		}
+		csr_wr_node(node, CVMX_BGXX_CMR_TX_LMACS(bgx), num_ports);
+		csr_wr_node(node, CVMX_BGXX_CMR_RX_LMACS(bgx), num_ports);
+
+		/* Enable/disable training for 10G_KR/40G_KR4/XFI/XLAUI modes */
+		for (index = 0; index < num_ports; index++) {
+			cvmx_bgxx_spux_br_pmd_control_t spu_pmd_control;
+
+			spu_pmd_control.u64 =
+				csr_rd_node(node, CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx));
+
+			if (mode == CVMX_QLM_MODE_10G_KR || mode == CVMX_QLM_MODE_40G_KR4)
+				spu_pmd_control.s.train_en = 1;
+			else if (mode == CVMX_QLM_MODE_XFI || mode == CVMX_QLM_MODE_XLAUI)
+				spu_pmd_control.s.train_en = 0;
+
+			csr_wr_node(node, CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx),
+				    spu_pmd_control.u64);
+		}
+	}
+
+	/* Configure the gser pll */
+	if (!is_pcie)
+		__qlm_setup_pll_cn78xx(node, qlm);
+
+	/* Wait for reset to complete and the PLL to lock */
+	if (CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_PLL_STAT(qlm),
+				       cvmx_gserx_pll_stat_t,
+				       pll_lock, ==, 1, 10000)) {
+		printf("%d:QLM%d: Timeout waiting for GSERX_PLL_STAT[pll_lock]\n",
+		       node, qlm);
+		return -1;
+	}
+
+	/* Perform PCIe errata workaround */
+	if (is_pcie)
+		__cvmx_qlm_pcie_errata_cn78xx(node, qlm);
+	else
+		__qlm_init_errata_20844(node, qlm);
+
+	/* Wait for reset to complete and the PLL to lock */
+	/* PCIe mode doesn't become ready until the PEM block attempts to bring
+	 * the interface up. Skip this check for PCIe
+	 */
+	if (!is_pcie && CVMX_WAIT_FOR_FIELD64_NODE(node, CVMX_GSERX_QLM_STAT(qlm),
+						   cvmx_gserx_qlm_stat_t, rst_rdy,
+						   ==, 1, 10000)) {
+		printf("%d:QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n",
+		       node, qlm);
+		return -1;
+	}
+
+	/* Errata GSER-26150: 10G PHY PLL Temperature Failure */
+	/* This workaround must be completed after the final deassertion of
+	 * GSERx_PHY_CTL[PHY_RESET].
+	 * Apply the workaround to 10.3125Gbps and 8Gbps only.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+	    (baud_mhz == 103125 || (is_pcie && gen3 == 2)))
+		__qlm_errata_gser_26150(0, qlm, is_pcie);
+
+	/* Errata GSER-26636: 10G-KR/40G-KR - Inverted Tx Coefficient Direction
+	 * Change. Applied to all 10G standards (required for KR) but also
+	 * applied to other standards in case software training is used
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && baud_mhz == 103125)
+		__qlm_kr_inc_dec_gser26636(node, qlm);
+
+	/* Errata GSER-25992: RX EQ Default Settings Update (CTLE Bias) */
+	/* This workaround will only be applied to Pass 1.x */
+	/* It will also only be applied if the SERDES data-rate is 10G */
+	/* or if PCIe Gen3 (gen3=2 is PCIe Gen3) */
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
+	    (baud_mhz == 103125 || (is_pcie && gen3 == 2)))
+		cvmx_qlm_gser_errata_25992(node, qlm);
+
+	/* Errata GSER-27140: Updating the RX EQ settings due to temperature
+	 * drift sensitivities
+	 */
+	/* This workaround will also only be applied if the SERDES data-rate is 10G */
+	if (baud_mhz == 103125)
+		__qlm_rx_eq_temp_gser27140(node, qlm);
+
+	/* Reduce the voltage amplitude coming from Marvell PHY and also change
+	 * DFE threshold settings for RXAUI interface
+	 */
+	if (is_bgx && mode == CVMX_QLM_MODE_RXAUI) {
+		int l;
+
+		for (l = 0; l < 4; l++) {
+			cvmx_gserx_lanex_rx_cfg_4_t cfg4;
+			cvmx_gserx_lanex_tx_cfg_0_t cfg0;
+			/* Change the Q/QB error sampler 0 threshold from 0xD to 0xF */
+			cfg4.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_RX_CFG_4(l, qlm));
+			cfg4.s.cfg_rx_errdet_ctrl = 0xcf6f;
+			csr_wr_node(node, CVMX_GSERX_LANEX_RX_CFG_4(l, qlm), cfg4.u64);
+			/* Reduce the voltage swing to roughly 460mV */
+			cfg0.u64 = csr_rd_node(node, CVMX_GSERX_LANEX_TX_CFG_0(l, qlm));
+			cfg0.s.cfg_tx_swing = 0x12;
+			csr_wr_node(node, CVMX_GSERX_LANEX_TX_CFG_0(l, qlm), cfg0.u64);
+		}
+	}
+
+	return 0;
+}
+
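+/*
+ * Check whether a QLM can be used as a BGX interface on cn73xx. Only
+ * QLM2/3 and DLM5/6 qualify. Note the inverted return value: 0 means the
+ * QLM is a valid BGX interface, 1 means it is not.
+ */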
+static int __is_qlm_valid_bgx_cn73xx(int qlm)
+{
+	if (qlm == 2 || qlm == 3 || qlm == 5 || qlm == 6)
+		return 0;
+	return 1;
+}
+
+/**
+ * Configure QLM/DLM speed and mode for cn73xx.
+ *
+ * @param qlm     The QLM to configure
+ * @param baud_mhz   The speed the QLM needs to be configured at, in MHz.
+ * @param mode    The mode the QLM is to be configured in (SGMII/XAUI/PCIe/...).
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP mode.
+ * @param gen3    Only used for PCIe
+ *			gen3 = 2 GEN3 mode
+ *			gen3 = 1 GEN2 mode
+ *			gen3 = 0 GEN1 mode
+ *
+ * @param ref_clk_sel   The reference-clock selection to use to configure QLM
+ *			0 = REF_100MHZ
+ *			1 = REF_125MHZ
+ *			2 = REF_156MHZ
+ *			3 = REF_161MHZ
+ *
+ * @param ref_clk_input  The reference-clock input to use to configure QLM
+ *			 0 = QLM/DLM reference clock input
+ *			 1 = common reference clock input 0
+ *			 2 = common reference clock input 1
+ *
+ * @return       Return 0 on success or -1.
+ */
+static int octeon_configure_qlm_cn73xx(int qlm, int baud_mhz, int mode, int rc, int gen3,
+				       int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_lane_mode_t lmode;
+	cvmx_gserx_cfg_t cfg;
+	cvmx_gserx_refclk_sel_t refclk_sel;
+	int is_pcie = 0;
+	int is_bgx = 0;
+	int lane_mode = 0;
+	short lmac_type[4] = { 0 };
+	short sds_lane[4] = { 0 };
+	bool alt_pll = false;
+	int enable_training = 0;
+	int additional_lmacs = 0;
+
+	debug("%s(qlm: %d, baud_mhz: %d, mode: %d, rc: %d, gen3: %d, ref_clk_sel: %d, ref_clk_input: %d)\n",
+	      __func__, qlm, baud_mhz, mode, rc, gen3, ref_clk_sel, ref_clk_input);
+
+	/* Don't configure QLM4 if it is not in SATA mode */
+	if (qlm == 4) {
+		if (mode == CVMX_QLM_MODE_SATA_2X1)
+			return __setup_sata(qlm, baud_mhz, ref_clk_sel, ref_clk_input);
+
+		printf("Invalid mode for QLM4\n");
+		return 0;
+	}
+
+	cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+
+	/* Errata PEM-31375 PEM RSL accesses to PCLK registers can timeout
+	 * during speed change. Change SLI_WINDOW_CTL[time] to 525us
+	 */
+	__set_sli_window_ctl_errata_31375(0);
+	/* If PEM is in EP, no need to do anything */
+	if (cfg.s.pcie && rc == 0 &&
+	    (mode == CVMX_QLM_MODE_PCIE || mode == CVMX_QLM_MODE_PCIE_1X8 ||
+	     mode == CVMX_QLM_MODE_PCIE_1X2)) {
+		debug("%s: qlm %d is in PCIe endpoint mode, returning\n", __func__, qlm);
+		return 0;
+	}
+
+	/* Set the reference clock to use */
+	refclk_sel.u64 = 0;
+	if (ref_clk_input == 0) { /* External ref clock */
+		refclk_sel.s.com_clk_sel = 0;
+		refclk_sel.s.use_com1 = 0;
+	} else if (ref_clk_input == 1) { /* Common clock 0 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 0;
+	} else { /* Common clock 1 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 1;
+	}
+
+	csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+
+	/* Reset the QLM after changing the reference clock */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 1;
+	phy_ctl.s.phy_pd = 1;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	udelay(1000);
+
+	/* Check if QLM is a valid BGX interface */
+	if (mode != CVMX_QLM_MODE_PCIE && mode != CVMX_QLM_MODE_PCIE_1X2 &&
+	    mode != CVMX_QLM_MODE_PCIE_1X8) {
+		if (__is_qlm_valid_bgx_cn73xx(qlm))
+			return -1;
+	}
+
+	switch (mode) {
+	case CVMX_QLM_MODE_PCIE:
+	case CVMX_QLM_MODE_PCIE_1X2:
+	case CVMX_QLM_MODE_PCIE_1X8: {
+		cvmx_pemx_cfg_t pemx_cfg;
+		cvmx_pemx_on_t pemx_on;
+		cvmx_pemx_qlm_t pemx_qlm;
+		cvmx_rst_soft_prstx_t rst_prst;
+		int port = 0;
+
+		is_pcie = 1;
+
+		if (qlm < 5 && mode == CVMX_QLM_MODE_PCIE_1X2) {
+			printf("Invalid PCIe mode(%d) for QLM%d\n", mode, qlm);
+			return -1;
+		}
+
+		if (ref_clk_sel == 0) {
+			refclk_sel.u64 = csr_rd(CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 0;
+			csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK100;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK100;
+			else
+				lane_mode = R_8G_REFCLK100;
+		} else if (ref_clk_sel == 1) {
+			refclk_sel.u64 = csr_rd(CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 1;
+			csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK125;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK125;
+			else
+				lane_mode = R_8G_REFCLK125;
+		} else {
+			printf("Invalid reference clock for PCIe on QLM%d\n", qlm);
+			return -1;
+		}
+
+		switch (qlm) {
+		case 0: /* Either x4 or x8 based on PEM0 */
+			rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(0));
+			rst_prst.s.soft_prst = rc;
+			csr_wr(CVMX_RST_SOFT_PRSTX(0), rst_prst.u64);
+			__setup_pem_reset(0, 0, !rc);
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+			pemx_cfg.cn78xx.lanes8 = (mode == CVMX_QLM_MODE_PCIE_1X8);
+			pemx_cfg.cn78xx.hostmd = rc;
+			pemx_cfg.cn78xx.md = gen3;
+			csr_wr(CVMX_PEMX_CFG(0), pemx_cfg.u64);
+			/* x8 mode waits for QLM1 setup before turning on the PEM */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		case 1: /* Either PEM0 x8 or PEM1 x4 */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(1));
+				rst_prst.s.soft_prst = rc;
+				csr_wr(CVMX_RST_SOFT_PRSTX(1), rst_prst.u64);
+				__setup_pem_reset(0, 1, !rc);
+
+				pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr(CVMX_PEMX_CFG(1), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(1));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(1), pemx_on.u64);
+			} else { /* x8 mode */
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		case 2: /* Either PEM2 x4 or PEM2 x8 or BGX0 */
+		{
+			pemx_qlm.u64 = csr_rd(CVMX_PEMX_QLM(2));
+			pemx_qlm.cn73xx.pemdlmsel = 0;
+			csr_wr(CVMX_PEMX_QLM(2), pemx_qlm.u64);
+
+			rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(2));
+			rst_prst.s.soft_prst = rc;
+			csr_wr(CVMX_RST_SOFT_PRSTX(2), rst_prst.u64);
+			__setup_pem_reset(0, 2, !rc);
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+			pemx_cfg.cn78xx.lanes8 = (mode == CVMX_QLM_MODE_PCIE_1X8);
+			pemx_cfg.cn78xx.hostmd = rc;
+			pemx_cfg.cn78xx.md = gen3;
+			csr_wr(CVMX_PEMX_CFG(2), pemx_cfg.u64);
+			/* x8 mode waits for QLM3 setup before turning on the PEM */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(2));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(2), pemx_on.u64);
+			}
+			break;
+		}
+		case 3: /* Either PEM2 x8 or PEM3 x4 or BGX1 */
+			/* PEM2/PEM3 are configured to use QLM2/3 */
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(2));
+			if (pemx_cfg.cn78xx.lanes8) {
+				/* Last 4 lanes of PEM2 */
+				/* PEMX_CFG already setup */
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(2));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(2), pemx_on.u64);
+			}
+			/* Check if PEM3 uses QLM3 and in x4 lane mode */
+			if (mode == CVMX_QLM_MODE_PCIE) {
+				pemx_qlm.u64 = csr_rd(CVMX_PEMX_QLM(3));
+				pemx_qlm.cn73xx.pemdlmsel = 0;
+				csr_wr(CVMX_PEMX_QLM(3), pemx_qlm.u64);
+
+				rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(3));
+				rst_prst.s.soft_prst = rc;
+				csr_wr(CVMX_RST_SOFT_PRSTX(3), rst_prst.u64);
+				__setup_pem_reset(0, 3, !rc);
+
+				pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(3));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr(CVMX_PEMX_CFG(3), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(3));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(3), pemx_on.u64);
+			}
+			break;
+		case 5: /* PEM2/PEM3 x2 or BGX2 */
+		case 6:
+			port = (qlm == 5) ? 2 : 3;
+			if (mode == CVMX_QLM_MODE_PCIE_1X2) {
+				/* PEM2/PEM3 are configured to use DLM5/6 */
+				pemx_qlm.u64 = csr_rd(CVMX_PEMX_QLM(port));
+				pemx_qlm.cn73xx.pemdlmsel = 1;
+				csr_wr(CVMX_PEMX_QLM(port), pemx_qlm.u64);
+				/* 2 lanes of PEM3 */
+				rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(port));
+				rst_prst.s.soft_prst = rc;
+				csr_wr(CVMX_RST_SOFT_PRSTX(port), rst_prst.u64);
+				__setup_pem_reset(0, port, !rc);
+
+				pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(port));
+				pemx_cfg.cn78xx.lanes8 = 0;
+				pemx_cfg.cn78xx.hostmd = rc;
+				pemx_cfg.cn78xx.md = gen3;
+				csr_wr(CVMX_PEMX_CFG(port), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(port));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(port), pemx_on.u64);
+			}
+			break;
+		default:
+			break;
+		}
+		break;
+	}
+	case CVMX_QLM_MODE_SGMII:
+		is_bgx = 1;
+		lmac_type[0] = 0;
+		lmac_type[1] = 0;
+		lmac_type[2] = 0;
+		lmac_type[3] = 0;
+		sds_lane[0] = 0;
+		sds_lane[1] = 1;
+		sds_lane[2] = 2;
+		sds_lane[3] = 3;
+		break;
+	case CVMX_QLM_MODE_SGMII_2X1:
+		if (qlm == 5) {
+			is_bgx = 1;
+			lmac_type[0] = 0;
+			lmac_type[1] = 0;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0;
+			sds_lane[1] = 1;
+		} else if (qlm == 6) {
+			is_bgx = 1;
+			lmac_type[0] = -1;
+			lmac_type[1] = -1;
+			lmac_type[2] = 0;
+			lmac_type[3] = 0;
+			sds_lane[2] = 2;
+			sds_lane[3] = 3;
+			additional_lmacs = 2;
+		}
+		break;
+	case CVMX_QLM_MODE_XAUI:
+		is_bgx = 5;
+		lmac_type[0] = 1;
+		lmac_type[1] = -1;
+		lmac_type[2] = -1;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0xe4;
+		break;
+	case CVMX_QLM_MODE_RXAUI:
+		is_bgx = 3;
+		lmac_type[0] = 2;
+		lmac_type[1] = 2;
+		lmac_type[2] = -1;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0x4;
+		sds_lane[1] = 0xe;
+		break;
+	case CVMX_QLM_MODE_RXAUI_1X2:
+		if (qlm == 5) {
+			is_bgx = 3;
+			lmac_type[0] = 2;
+			lmac_type[1] = -1;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0x4;
+		}
+		if (qlm == 6) {
+			is_bgx = 3;
+			lmac_type[0] = -1;
+			lmac_type[1] = -1;
+			lmac_type[2] = 2;
+			lmac_type[3] = -1;
+			sds_lane[2] = 0xe;
+			additional_lmacs = 2;
+		}
+		break;
+	case CVMX_QLM_MODE_10G_KR:
+		enable_training = 1;
+	case CVMX_QLM_MODE_XFI: /* 10GR_4X1 */
+		is_bgx = 1;
+		lmac_type[0] = 3;
+		lmac_type[1] = 3;
+		lmac_type[2] = 3;
+		lmac_type[3] = 3;
+		sds_lane[0] = 0;
+		sds_lane[1] = 1;
+		sds_lane[2] = 2;
+		sds_lane[3] = 3;
+		break;
+	case CVMX_QLM_MODE_10G_KR_1X2:
+		enable_training = 1;
+	case CVMX_QLM_MODE_XFI_1X2:
+		if (qlm == 5) {
+			is_bgx = 1;
+			lmac_type[0] = 3;
+			lmac_type[1] = 3;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0;
+			sds_lane[1] = 1;
+		} else if (qlm == 6) {
+			is_bgx = 1;
+			lmac_type[0] = -1;
+			lmac_type[1] = -1;
+			lmac_type[2] = 3;
+			lmac_type[3] = 3;
+			sds_lane[2] = 2;
+			sds_lane[3] = 3;
+			additional_lmacs = 2;
+		}
+		break;
+	case CVMX_QLM_MODE_40G_KR4:
+		enable_training = 1;
+	case CVMX_QLM_MODE_XLAUI: /* 40GR4_1X4 */
+		is_bgx = 5;
+		lmac_type[0] = 4;
+		lmac_type[1] = -1;
+		lmac_type[2] = -1;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0xe4;
+		break;
+	case CVMX_QLM_MODE_RGMII_SGMII:
+		is_bgx = 1;
+		lmac_type[0] = 5;
+		lmac_type[1] = 0;
+		lmac_type[2] = 0;
+		lmac_type[3] = 0;
+		sds_lane[0] = 0;
+		sds_lane[1] = 1;
+		sds_lane[2] = 2;
+		sds_lane[3] = 3;
+		break;
+	case CVMX_QLM_MODE_RGMII_SGMII_1X1:
+		if (qlm == 5) {
+			is_bgx = 1;
+			lmac_type[0] = 5;
+			lmac_type[1] = 0;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0;
+			sds_lane[1] = 1;
+		}
+		break;
+	case CVMX_QLM_MODE_RGMII_SGMII_2X1:
+		if (qlm == 6) {
+			is_bgx = 1;
+			lmac_type[0] = 5;
+			lmac_type[1] = -1;
+			lmac_type[2] = 0;
+			lmac_type[3] = 0;
+			sds_lane[0] = 0;
+			sds_lane[2] = 0;
+			sds_lane[3] = 1;
+		}
+		break;
+	case CVMX_QLM_MODE_RGMII_10G_KR:
+		enable_training = 1;
+	case CVMX_QLM_MODE_RGMII_XFI:
+		is_bgx = 1;
+		lmac_type[0] = 5;
+		lmac_type[1] = 3;
+		lmac_type[2] = 3;
+		lmac_type[3] = 3;
+		sds_lane[0] = 0;
+		sds_lane[1] = 1;
+		sds_lane[2] = 2;
+		sds_lane[3] = 3;
+		break;
+	case CVMX_QLM_MODE_RGMII_10G_KR_1X1:
+		enable_training = 1;
+	case CVMX_QLM_MODE_RGMII_XFI_1X1:
+		if (qlm == 5) {
+			is_bgx = 3;
+			lmac_type[0] = 5;
+			lmac_type[1] = 3;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0;
+			sds_lane[1] = 1;
+		}
+		break;
+	case CVMX_QLM_MODE_RGMII_40G_KR4:
+		enable_training = 1;
+	case CVMX_QLM_MODE_RGMII_XLAUI:
+		is_bgx = 5;
+		lmac_type[0] = 5;
+		lmac_type[1] = 4;
+		lmac_type[2] = -1;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0x0;
+		sds_lane[1] = 0xe4;
+		break;
+	case CVMX_QLM_MODE_RGMII_RXAUI:
+		is_bgx = 3;
+		lmac_type[0] = 5;
+		lmac_type[1] = 2;
+		lmac_type[2] = 2;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0x0;
+		sds_lane[1] = 0x4;
+		sds_lane[2] = 0xe;
+		break;
+	case CVMX_QLM_MODE_RGMII_XAUI:
+		is_bgx = 5;
+		lmac_type[0] = 5;
+		lmac_type[1] = 1;
+		lmac_type[2] = -1;
+		lmac_type[3] = -1;
+		sds_lane[0] = 0;
+		sds_lane[1] = 0xe4;
+		break;
+	default:
+		break;
+	}
+
+	if (is_pcie == 0)
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz, &alt_pll);
+	debug("%s: mode: %d, lane mode: %d, alternate PLL: %s\n", __func__, mode, lane_mode,
+	      alt_pll ? "true" : "false");
+	if (lane_mode == -1)
+		return -1;
+
+	if (alt_pll) {
+		debug("%s: alternate PLL settings used for qlm %d, lane mode %d, reference clock %d\n",
+		      __func__, qlm, lane_mode, ref_clk_sel);
+		if (__set_qlm_ref_clk_cn78xx(0, qlm, lane_mode, ref_clk_sel)) {
+			printf("%s: Error: reference clock %d is not supported for qlm %d, lane mode: 0x%x\n",
+			       __func__, ref_clk_sel, qlm, lane_mode);
+			return -1;
+		}
+	}
+
+	/* Power up PHY, but keep it in reset */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_pd = 0;
+	phy_ctl.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/* Set GSER for the interface mode */
+	cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+	cfg.s.bgx = is_bgx & 1;
+	cfg.s.bgx_quad = (is_bgx >> 2) & 1;
+	cfg.s.bgx_dual = (is_bgx >> 1) & 1;
+	cfg.s.pcie = is_pcie;
+	csr_wr(CVMX_GSERX_CFG(qlm), cfg.u64);
+
+	/* Lane mode */
+	lmode.u64 = csr_rd(CVMX_GSERX_LANE_MODE(qlm));
+	lmode.s.lmode = lane_mode;
+	csr_wr(CVMX_GSERX_LANE_MODE(qlm), lmode.u64);
+
+	/* Program lmac_type to figure out the type of BGX interface configured */
+	if (is_bgx) {
+		int bgx = (qlm < 4) ? qlm - 2 : 2;
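+		/* QLM2 -> BGX0, QLM3 -> BGX1, DLM5/DLM6 -> BGX2 */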
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		cvmx_bgxx_cmr_rx_lmacs_t rx_lmacs;
+		cvmx_bgxx_spux_br_pmd_control_t spu_pmd_control;
+		int index, total_lmacs = 0;
+
+		for (index = 0; index < 4; index++) {
+			cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(index, bgx));
+			cmr_config.s.enable = 0;
+			cmr_config.s.data_pkt_rx_en = 0;
+			cmr_config.s.data_pkt_tx_en = 0;
+			if (lmac_type[index] != -1) {
+				cmr_config.s.lmac_type = lmac_type[index];
+				cmr_config.s.lane_to_sds = sds_lane[index];
+				total_lmacs++;
+				/* RXAUI takes up 2 lmacs */
+				if (lmac_type[index] == 2)
+					total_lmacs += 1;
+			}
+			csr_wr(CVMX_BGXX_CMRX_CONFIG(index, bgx), cmr_config.u64);
+
+			/* Errata (TBD): RGMII doesn't turn on the clock if
+			 * it's by itself. Force it on.
+			 */
+			if (lmac_type[index] == 5) {
+				cvmx_bgxx_cmr_global_config_t global_config;
+
+				global_config.u64 = csr_rd(CVMX_BGXX_CMR_GLOBAL_CONFIG(bgx));
+				global_config.s.bgx_clk_enable = 1;
+				csr_wr(CVMX_BGXX_CMR_GLOBAL_CONFIG(bgx), global_config.u64);
+			}
+
+			/* Enable training for 10G_KR/40G_KR4 modes */
+			if (enable_training == 1 &&
+			    (lmac_type[index] == 3 || lmac_type[index] == 4)) {
+				spu_pmd_control.u64 =
+					csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx));
+				spu_pmd_control.s.train_en = 1;
+				csr_wr(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx),
+				       spu_pmd_control.u64);
+			}
+		}
+
+		/* Update the total number of lmacs */
+		rx_lmacs.u64 = csr_rd(CVMX_BGXX_CMR_RX_LMACS(bgx));
+		rx_lmacs.s.lmacs = total_lmacs + additional_lmacs;
+		csr_wr(CVMX_BGXX_CMR_RX_LMACS(bgx), rx_lmacs.u64);
+		csr_wr(CVMX_BGXX_CMR_TX_LMACS(bgx), rx_lmacs.u64);
+	}
+
+	/* Bring phy out of reset */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/*
+	 * Wait 1us until the management interface is ready to accept
+	 * read/write commands.
+	 */
+	udelay(1);
+
+	/* Wait for reset to complete and the PLL to lock */
+	/* PCIe mode doesn't become ready until the PEM block attempts to bring
+	 * the interface up. Skip this check for PCIe
+	 */
+	if (!is_pcie && CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm),
+					      cvmx_gserx_qlm_stat_t,
+					      rst_rdy, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", qlm);
+		return -1;
+	}
+
+	/* Configure the gser pll */
+	if (!is_pcie)
+		__qlm_setup_pll_cn78xx(0, qlm);
+
+	/* Wait for reset to complete and the PLL to lock */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_PLL_STAT(qlm), cvmx_gserx_pll_stat_t,
+				  pll_lock, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_PLL_STAT[pll_lock]\n", qlm);
+		return -1;
+	}
+
+	/* Errata GSER-26150: 10G PHY PLL Temperature Failure */
+	/* This workaround must be completed after the final deassertion of
+	 * GSERx_PHY_CTL[PHY_RESET].
+	 * Apply the workaround to 10.3125Gbps and 8Gbps only.
+	 */
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX_PASS1_0) &&
+	    (baud_mhz == 103125 || (is_pcie && gen3 == 2)))
+		__qlm_errata_gser_26150(0, qlm, is_pcie);
+
+	/* Errata GSER-26636: 10G-KR/40G-KR - Inverted Tx Coefficient Direction
+	 * Change. Applied to all 10G standards (required for KR) but also
+	 * applied to other standards in case software training is used
+	 */
+	if (baud_mhz == 103125)
+		__qlm_kr_inc_dec_gser26636(0, qlm);
+
+	/* Errata GSER-25992: RX EQ Default Settings Update (CTLE Bias) */
+	/* This workaround will only be applied to Pass 1.x */
+	/* It will also only be applied if the SERDES data-rate is 10G */
+	/* or if PCIe Gen3 (gen3=2 is PCIe Gen3) */
+	if (baud_mhz == 103125 || (is_pcie && gen3 == 2))
+		cvmx_qlm_gser_errata_25992(0, qlm);
+
+	/* Errata GSER-27140: Updating the RX EQ settings due to temperature
+	 * drift sensitivities
+	 */
+	/* This workaround will also only be applied if the SERDES data-rate is 10G */
+	if (baud_mhz == 103125)
+		__qlm_rx_eq_temp_gser27140(0, qlm);
+
+	/* Reduce the voltage amplitude coming from Marvell PHY and also change
+	 * DFE threshold settings for RXAUI interface
+	 */
+	if (is_bgx) {
+		int l;
+
+		for (l = 0; l < 4; l++) {
+			cvmx_gserx_lanex_rx_cfg_4_t cfg4;
+			cvmx_gserx_lanex_tx_cfg_0_t cfg0;
+
+			if (lmac_type[l] == 2) {
+				/* Change the Q/QB error sampler 0 threshold from 0xD to 0xF */
+				cfg4.u64 = csr_rd(CVMX_GSERX_LANEX_RX_CFG_4(l, qlm));
+				cfg4.s.cfg_rx_errdet_ctrl = 0xcf6f;
+				csr_wr(CVMX_GSERX_LANEX_RX_CFG_4(l, qlm), cfg4.u64);
+				/* Reduce the voltage swing to roughly 460mV */
+				cfg0.u64 = csr_rd(CVMX_GSERX_LANEX_TX_CFG_0(l, qlm));
+				cfg0.s.cfg_tx_swing = 0x12;
+				csr_wr(CVMX_GSERX_LANEX_TX_CFG_0(l, qlm), cfg0.u64);
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int __rmac_pll_config(int baud_mhz, int qlm, int mode)
+{
+	cvmx_gserx_pll_px_mode_0_t pmode0;
+	cvmx_gserx_pll_px_mode_1_t pmode1;
+	cvmx_gserx_lane_px_mode_0_t lmode0;
+	cvmx_gserx_lane_px_mode_1_t lmode1;
+	cvmx_gserx_lane_mode_t lmode;
+
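+	/*
+	 * Raw GSERX_PLL_Px_MODE_0/1 and GSERX_LANE_Px_MODE_0/1 register
+	 * values for each supported CPRI/SDL baud rate.
+	 */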
+	switch (baud_mhz) {
+	case 98304:
+		pmode0.u64 = 0x1a0a;
+		pmode1.u64 = 0x3228;
+		lmode0.u64 = 0x600f;
+		lmode1.u64 = 0xa80f;
+		break;
+	case 49152:
+		if (mode == CVMX_QLM_MODE_SDL) {
+			pmode0.u64 = 0x3605;
+			pmode1.u64 = 0x0814;
+			lmode0.u64 = 0x000f;
+			lmode1.u64 = 0x6814;
+		} else {
+			pmode0.u64 = 0x1a0a;
+			pmode1.u64 = 0x3228;
+			lmode0.u64 = 0x650f;
+			lmode1.u64 = 0xe80f;
+		}
+		break;
+	case 24576:
+		pmode0.u64 = 0x1a0a;
+		pmode1.u64 = 0x3228;
+		lmode0.u64 = 0x6a0f;
+		lmode1.u64 = 0xe80f;
+		break;
+	case 12288:
+		pmode0.u64 = 0x1a0a;
+		pmode1.u64 = 0x3228;
+		lmode0.u64 = 0x6f0f;
+		lmode1.u64 = 0xe80f;
+		break;
+	case 6144:
+		pmode0.u64 = 0x160a;
+		pmode1.u64 = 0x1019;
+		lmode0.u64 = 0x000f;
+		lmode1.u64 = 0x2814;
+		break;
+	case 3072:
+		pmode0.u64 = 0x160a;
+		pmode1.u64 = 0x1019;
+		lmode0.u64 = 0x050f;
+		lmode1.u64 = 0x6814;
+		break;
+	default:
+		printf("Invalid speed for CPRI/SDL configuration\n");
+		return -1;
+	}
+
+	lmode.u64 = csr_rd(CVMX_GSERX_LANE_MODE(qlm));
+	csr_wr(CVMX_GSERX_PLL_PX_MODE_0(lmode.s.lmode, qlm), pmode0.u64);
+	csr_wr(CVMX_GSERX_PLL_PX_MODE_1(lmode.s.lmode, qlm), pmode1.u64);
+	csr_wr(CVMX_GSERX_LANE_PX_MODE_0(lmode.s.lmode, qlm), lmode0.u64);
+	csr_wr(CVMX_GSERX_LANE_PX_MODE_1(lmode.s.lmode, qlm), lmode1.u64);
+	return 0;
+}
+
+/**
+ * Configure QLM/DLM speed and mode for cnf75xx.
+ *
+ * @param qlm     The QLM to configure
+ * @param baud_mhz   The speed the QLM needs to be configured at, in MHz.
+ * @param mode    The mode the QLM is to be configured in (SGMII/XAUI/PCIe/...).
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP mode.
+ * @param gen3    Only used for PCIe
+ *			gen3 = 2 GEN3 mode
+ *			gen3 = 1 GEN2 mode
+ *			gen3 = 0 GEN1 mode
+ *
+ * @param ref_clk_sel    The reference-clock selection to use to configure QLM
+ *			 0 = REF_100MHZ
+ *			 1 = REF_125MHZ
+ *			 2 = REF_156MHZ
+ *			 3 = REF_122MHZ
+ * @param ref_clk_input  The reference-clock input to use to configure QLM
+ *
+ * @return       Return 0 on success or -1.
+ */
+static int octeon_configure_qlm_cnf75xx(int qlm, int baud_mhz, int mode, int rc, int gen3,
+					int ref_clk_sel, int ref_clk_input)
+{
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_lane_mode_t lmode;
+	cvmx_gserx_cfg_t cfg;
+	cvmx_gserx_refclk_sel_t refclk_sel;
+	int is_pcie = 0;
+	int is_bgx = 0;
+	int is_srio = 0;
+	int is_rmac = 0;
+	int is_rmac_pipe = 0;
+	int lane_mode = 0;
+	short lmac_type[4] = { 0 };
+	short sds_lane[4] = { 0 };
+	bool alt_pll = false;
+	int enable_training = 0;
+	int additional_lmacs = 0;
+	int port = (qlm == 3) ? 1 : 0;
+	cvmx_sriox_status_reg_t status_reg;
+
+	debug("%s(qlm: %d, baud_mhz: %d, mode: %d, rc: %d, gen3: %d, ref_clk_sel: %d, ref_clk_input: %d)\n",
+	      __func__, qlm, baud_mhz, mode, rc, gen3, ref_clk_sel, ref_clk_input);
+	if (qlm > 8) {
+		printf("Invalid qlm%d passed\n", qlm);
+		return -1;
+	}
+
+	/* Errata PEM-31375 PEM RSL accesses to PCLK registers can timeout
+	 *  during speed change. Change SLI_WINDOW_CTL[time] to 525us
+	 */
+	__set_sli_window_ctl_errata_31375(0);
+
+	cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+
+	/* If PEM is in EP, no need to do anything */
+	if (cfg.s.pcie && rc == 0) {
+		debug("%s: qlm %d is in PCIe endpoint mode, returning\n", __func__, qlm);
+		return 0;
+	}
+
+	if (cfg.s.srio && rc == 0) {
+		debug("%s: qlm %d is in SRIO endpoint mode, returning\n", __func__, qlm);
+		return 0;
+	}
+
+	/* Set the reference clock to use */
+	refclk_sel.u64 = 0;
+	if (ref_clk_input == 0) { /* External ref clock */
+		refclk_sel.s.com_clk_sel = 0;
+		refclk_sel.s.use_com1 = 0;
+	} else if (ref_clk_input == 1) { /* Common clock 0 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 0;
+	} else { /* Common clock 1 */
+		refclk_sel.s.com_clk_sel = 1;
+		refclk_sel.s.use_com1 = 1;
+	}
+
+	csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+
+	/* Reset the QLM after changing the reference clock */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 1;
+	phy_ctl.s.phy_pd = 1;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	udelay(1000);
+
+	switch (mode) {
+	case CVMX_QLM_MODE_PCIE:
+	case CVMX_QLM_MODE_PCIE_1X2:
+	case CVMX_QLM_MODE_PCIE_2X1: {
+		cvmx_pemx_cfg_t pemx_cfg;
+		cvmx_pemx_on_t pemx_on;
+		cvmx_rst_soft_prstx_t rst_prst;
+
+		is_pcie = 1;
+
+		if (qlm > 1) {
+			printf("Invalid PCIe mode for QLM%d\n", qlm);
+			return -1;
+		}
+
+		if (ref_clk_sel == 0) {
+			refclk_sel.u64 = csr_rd(CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 0;
+			csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK100;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK100;
+			else
+				lane_mode = R_8G_REFCLK100;
+		} else if (ref_clk_sel == 1) {
+			refclk_sel.u64 = csr_rd(CVMX_GSERX_REFCLK_SEL(qlm));
+			refclk_sel.s.pcie_refclk125 = 1;
+			csr_wr(CVMX_GSERX_REFCLK_SEL(qlm), refclk_sel.u64);
+			if (gen3 == 0) /* Gen1 mode */
+				lane_mode = R_2_5G_REFCLK125;
+			else if (gen3 == 1) /* Gen2 mode */
+				lane_mode = R_5G_REFCLK125;
+			else
+				lane_mode = R_8G_REFCLK125;
+		} else {
+			printf("Invalid reference clock for PCIe on QLM%d\n", qlm);
+			return -1;
+		}
+
+		switch (qlm) {
+		case 0: /* Either x4 or x2 based on PEM0 */
+			rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(0));
+			rst_prst.s.soft_prst = rc;
+			csr_wr(CVMX_RST_SOFT_PRSTX(0), rst_prst.u64);
+			__setup_pem_reset(0, 0, !rc);
+
+			pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(0));
+			pemx_cfg.cnf75xx.hostmd = rc;
+			pemx_cfg.cnf75xx.lanes8 = (mode == CVMX_QLM_MODE_PCIE);
+			pemx_cfg.cnf75xx.md = gen3;
+			csr_wr(CVMX_PEMX_CFG(0), pemx_cfg.u64);
+			/* x4 mode waits for QLM1 setup before turning on the PEM */
+			if (mode == CVMX_QLM_MODE_PCIE_1X2 || mode == CVMX_QLM_MODE_PCIE_2X1) {
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		case 1: /* Either PEM0 x4 or PEM1 x2 */
+			if (mode == CVMX_QLM_MODE_PCIE_1X2 || mode == CVMX_QLM_MODE_PCIE_2X1) {
+				rst_prst.u64 = csr_rd(CVMX_RST_SOFT_PRSTX(1));
+				rst_prst.s.soft_prst = rc;
+				csr_wr(CVMX_RST_SOFT_PRSTX(1), rst_prst.u64);
+				__setup_pem_reset(0, 1, !rc);
+
+				pemx_cfg.u64 = csr_rd(CVMX_PEMX_CFG(1));
+				pemx_cfg.cnf75xx.hostmd = rc;
+				pemx_cfg.cnf75xx.md = gen3;
+				csr_wr(CVMX_PEMX_CFG(1), pemx_cfg.u64);
+
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(1));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(1), pemx_on.u64);
+			} else {
+				pemx_on.u64 = csr_rd(CVMX_PEMX_ON(0));
+				pemx_on.s.pemon = 1;
+				csr_wr(CVMX_PEMX_ON(0), pemx_on.u64);
+			}
+			break;
+		default:
+			break;
+		}
+		break;
+	}
+	case CVMX_QLM_MODE_SRIO_1X4:
+	case CVMX_QLM_MODE_SRIO_2X2:
+	case CVMX_QLM_MODE_SRIO_4X1: {
+		int spd = 0xf;
+
+		if (cvmx_fuse_read(1601)) {
+			debug("SRIO is not supported on cnf73xx model\n");
+			return -1;
+		}
+
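+		/*
+		 * SRIOX_STATUS_REG[SPD] selector, indexed by baud rate and
+		 * reference clock (0xf = disabled):
+		 *
+		 *               100 MHz   125 MHz   156.25 MHz
+		 *  1250 MBd       0x3       0xa        0x4
+		 *  2500 MBd       0x2       0x9        0x7
+		 *  3125 MBd        -        0x8        0xe
+		 *  5000 MBd       0x0       0x6        0xb
+		 */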
+		switch (baud_mhz) {
+		case 1250:
+			switch (ref_clk_sel) {
+			case 0: /* 100 MHz ref clock */
+				spd = 0x3;
+				break;
+			case 1: /* 125 MHz ref clock */
+				spd = 0xa;
+				break;
+			case 2: /* 156.25 MHz ref clock */
+				spd = 0x4;
+				break;
+			default:
+				spd = 0xf; /* Disabled */
+				break;
+			}
+			break;
+		case 2500:
+			switch (ref_clk_sel) {
+			case 0: /* 100 MHz ref clock */
+				spd = 0x2;
+				break;
+			case 1: /* 125 MHz ref clock */
+				spd = 0x9;
+				break;
+			case 2: /* 156.25 MHz ref clock */
+				spd = 0x7;
+				break;
+			default:
+				spd = 0xf; /* Disabled */
+				break;
+			}
+			break;
+		case 3125:
+			switch (ref_clk_sel) {
+			case 1: /* 125 MHz ref clock */
+				spd = 0x8;
+				break;
+			case 2: /* 156.25 MHz ref clock */
+				spd = 0xe;
+				break;
+			default:
+				spd = 0xf; /* Disabled */
+				break;
+			}
+			break;
+		case 5000:
+			switch (ref_clk_sel) {
+			case 0: /* 100 MHz ref clock */
+				spd = 0x0;
+				break;
+			case 1: /* 125 MHz ref clock */
+				spd = 0x6;
+				break;
+			case 2: /* 156.25 MHz ref clock */
+				spd = 0xb;
+				break;
+			default:
+				spd = 0xf; /* Disabled */
+				break;
+			}
+			break;
+		default:
+			spd = 0xf;
+			break;
+		}
+
+		if (spd == 0xf) {
+			printf("ERROR: Invalid SRIO speed (%d) configured for QLM%d\n", baud_mhz,
+			       qlm);
+			return -1;
+		}
+
+		status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(port));
+		status_reg.s.spd = spd;
+		csr_wr(CVMX_SRIOX_STATUS_REG(port), status_reg.u64);
+		is_srio = 1;
+		break;
+	}
+
+	case CVMX_QLM_MODE_SGMII_2X1:
+		if (qlm == 4) {
+			is_bgx = 1;
+			lmac_type[0] = 0;
+			lmac_type[1] = 0;
+			lmac_type[2] = -1;
+			lmac_type[3] = -1;
+			sds_lane[0] = 0;
+			sds_lane[1] = 1;
+		} else if (qlm == 5) {
+			is_bgx = 1;
+			lmac_type[0] = -1;
+			lmac_type[1] = -1;
+			lmac_type[2] = 0;
+			lmac_type[3] = 0;
+			sds_lane[2] = 2;
+			sds_lane[3] = 3;
+			additional_lmacs = 2;
+		}
+		break;
+	case CVMX_QLM_MODE_10G_KR_1X2:
+		enable_training = 1;
+	case CVMX_QLM_MODE_XFI_1X2:
+		if (qlm == 5) {
+			is_bgx = 1;
+			lmac_type[0] = -1;
+			lmac_type[1] = -1;
+			lmac_type[2] = 3;
+			lmac_type[3] = 3;
+			sds_lane[2] = 2;
+			sds_lane[3] = 3;
+			additional_lmacs = 2;
+		}
+		break;
+	case CVMX_QLM_MODE_CPRI: /* CPRI / JESD204B */
+		is_rmac = 1;
+		break;
+	case CVMX_QLM_MODE_SDL: /* Serdes Lite (SDL) */
+		is_rmac = 1;
+		is_rmac_pipe = 1;
+		lane_mode = 1;
+		break;
+	default:
+		break;
+	}
+
+	if (is_rmac_pipe == 0 && is_pcie == 0) {
+		lane_mode = __get_lane_mode_for_speed_and_ref_clk(ref_clk_sel, baud_mhz,
+								  &alt_pll);
+	}
+
+	debug("%s: mode: %d, lane mode: %d, alternate PLL: %s\n", __func__, mode, lane_mode,
+	      alt_pll ? "true" : "false");
+	if (lane_mode == -1)
+		return -1;
+
+	if (alt_pll) {
+		debug("%s: alternate PLL settings used for qlm %d, lane mode %d, reference clock %d\n",
+		      __func__, qlm, lane_mode, ref_clk_sel);
+		if (__set_qlm_ref_clk_cn78xx(0, qlm, lane_mode, ref_clk_sel)) {
+			printf("%s: Error: reference clock %d is not supported for qlm %d\n",
+			       __func__, ref_clk_sel, qlm);
+			return -1;
+		}
+	}
+
+	/* Power up PHY, but keep it in reset */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_pd = 0;
+	phy_ctl.s.phy_reset = 1;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/* Set GSER for the interface mode */
+	cfg.u64 = csr_rd(CVMX_GSERX_CFG(qlm));
+	cfg.s.bgx = is_bgx & 1;
+	cfg.s.bgx_quad = (is_bgx >> 2) & 1;
+	cfg.s.bgx_dual = (is_bgx >> 1) & 1;
+	cfg.s.pcie = is_pcie;
+	cfg.s.srio = is_srio;
+	cfg.s.rmac = is_rmac;
+	cfg.s.rmac_pipe = is_rmac_pipe;
+	csr_wr(CVMX_GSERX_CFG(qlm), cfg.u64);
+
+	/* Lane mode */
+	lmode.u64 = csr_rd(CVMX_GSERX_LANE_MODE(qlm));
+	lmode.s.lmode = lane_mode;
+	csr_wr(CVMX_GSERX_LANE_MODE(qlm), lmode.u64);
+
+	/* Because of the Errata where quad mode does not work, program
+	 * lmac_type to figure out the type of BGX interface configured
+	 */
+	if (is_bgx) {
+		int bgx = 0;
+		cvmx_bgxx_cmrx_config_t cmr_config;
+		cvmx_bgxx_cmr_rx_lmacs_t rx_lmacs;
+		cvmx_bgxx_spux_br_pmd_control_t spu_pmd_control;
+		int index, total_lmacs = 0;
+
+		for (index = 0; index < 4; index++) {
+			cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(index, bgx));
+			cmr_config.s.enable = 0;
+			cmr_config.s.data_pkt_rx_en = 0;
+			cmr_config.s.data_pkt_tx_en = 0;
+			if (lmac_type[index] != -1) {
+				cmr_config.s.lmac_type = lmac_type[index];
+				cmr_config.s.lane_to_sds = sds_lane[index];
+				total_lmacs++;
+			}
+			csr_wr(CVMX_BGXX_CMRX_CONFIG(index, bgx), cmr_config.u64);
+
+			/* Enable training for 10G_KR/40G_KR4 modes */
+			if (enable_training == 1 &&
+			    (lmac_type[index] == 3 || lmac_type[index] == 4)) {
+				spu_pmd_control.u64 =
+					csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx));
+				spu_pmd_control.s.train_en = 1;
+				csr_wr(CVMX_BGXX_SPUX_BR_PMD_CONTROL(index, bgx),
+				       spu_pmd_control.u64);
+			}
+		}
+
+		/* Update the total number of lmacs */
+		rx_lmacs.u64 = csr_rd(CVMX_BGXX_CMR_RX_LMACS(bgx));
+		rx_lmacs.s.lmacs = total_lmacs + additional_lmacs;
+		csr_wr(CVMX_BGXX_CMR_RX_LMACS(bgx), rx_lmacs.u64);
+		csr_wr(CVMX_BGXX_CMR_TX_LMACS(bgx), rx_lmacs.u64);
+	}
+
+	/* Bring phy out of reset */
+	phy_ctl.u64 = csr_rd(CVMX_GSERX_PHY_CTL(qlm));
+	phy_ctl.s.phy_reset = 0;
+	csr_wr(CVMX_GSERX_PHY_CTL(qlm), phy_ctl.u64);
+
+	/*
+	 * Wait 1us until the management interface is ready to accept
+	 * read/write commands.
+	 */
+	udelay(1);
+
+	if (is_srio) {
+		status_reg.u64 = csr_rd(CVMX_SRIOX_STATUS_REG(port));
+		status_reg.s.srio = 1;
+		csr_wr(CVMX_SRIOX_STATUS_REG(port), status_reg.u64);
+		return 0;
+	}
+
+	/* Wait for reset to complete and the PLL to lock */
+	/* PCIe mode doesn't become ready until the PEM block attempts to bring
+	 * the interface up. Skip this check for PCIe
+	 */
+	if (!is_pcie && CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_QLM_STAT(qlm), cvmx_gserx_qlm_stat_t,
+					      rst_rdy, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_QLM_STAT[rst_rdy]\n", qlm);
+		return -1;
+	}
+
+	/* Configure the gser pll */
+	if (is_rmac)
+		__rmac_pll_config(baud_mhz, qlm, mode);
+	else if (!(is_pcie || is_srio))
+		__qlm_setup_pll_cn78xx(0, qlm);
+
+	/* Wait for reset to complete and the PLL to lock */
+	if (CVMX_WAIT_FOR_FIELD64(CVMX_GSERX_PLL_STAT(qlm), cvmx_gserx_pll_stat_t,
+				  pll_lock, ==, 1, 10000)) {
+		printf("QLM%d: Timeout waiting for GSERX_PLL_STAT[pll_lock]\n", qlm);
+		return -1;
+	}
+
+	/* Errata GSER-27140: Updating the RX EQ settings due to temperature
+	 * drift sensitivities
+	 */
+	/* This workaround will also only be applied if the SERDES data-rate is 10G */
+	if (baud_mhz == 103125)
+		__qlm_rx_eq_temp_gser27140(0, qlm);
+
+	return 0;
+}
+
+/**
+ * Configure qlm/dlm speed and mode.
+ * @param qlm     The QLM or DLM to configure
+ * @param speed   The speed the QLM needs to be configured at, in MHz.
+ * @param mode    The mode the QLM is to be configured in (SGMII/XAUI/PCIe/...).
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP
+ *		  mode.
+ * @param pcie_mode Only used when qlm/dlm are in pcie mode.
+ * @param ref_clk_sel Reference clock to use for 70XX where:
+ *			0: 100MHz
+ *			1: 125MHz
+ *			2: 156.25MHz
+ *			3: 122MHz (Used by RMAC)
+ * @param ref_clk_input	This selects which reference clock input to use.  For
+ *			cn70xx:
+ *				0: DLMC_REF_CLK0
+ *				1: DLMC_REF_CLK1
+ *				2: DLM0_REF_CLK
+ *			cn61xx: (not used)
+ *			cn78xx/cn76xx/cn73xx:
+ *				0: Internal clock (QLM[0-7]_REF_CLK)
+ *				1: QLMC_REF_CLK0
+ *				2: QLMC_REF_CLK1
+ *
+ * @return       Return 0 on success or -1.
+ */
+int octeon_configure_qlm(int qlm, int speed, int mode, int rc, int pcie_mode, int ref_clk_sel,
+			 int ref_clk_input)
+{
+	int node = 0;	/* TODO: currently only node 0 is supported */
+
+	debug("%s(%d, %d, %d, %d, %d, %d, %d)\n", __func__, qlm, speed, mode, rc, pcie_mode,
+	      ref_clk_sel, ref_clk_input);
+	if (OCTEON_IS_MODEL(OCTEON_CN61XX) || OCTEON_IS_MODEL(OCTEON_CNF71XX))
+		return octeon_configure_qlm_cn61xx(qlm, speed, mode, rc, pcie_mode);
+	else if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+		return octeon_configure_qlm_cn70xx(qlm, speed, mode, rc, pcie_mode, ref_clk_sel,
+						   ref_clk_input);
+	else if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return octeon_configure_qlm_cn78xx(node, qlm, speed, mode, rc, pcie_mode,
+						   ref_clk_sel, ref_clk_input);
+	else if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return octeon_configure_qlm_cn73xx(qlm, speed, mode, rc, pcie_mode, ref_clk_sel,
+						   ref_clk_input);
+	else if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return octeon_configure_qlm_cnf75xx(qlm, speed, mode, rc, pcie_mode, ref_clk_sel,
+						    ref_clk_input);
+	else
+		return -1;
+}
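+
+/*
+ * Usage example (a minimal sketch; the QLM number, speed, mode and clock
+ * selections below are illustrative and must match the actual board
+ * wiring):
+ *
+ *	// QLM2 as a 4-lane PCIe Gen3 root complex, 100 MHz common clock 0
+ *	if (octeon_configure_qlm(2, 8000, CVMX_QLM_MODE_PCIE, 1, 2, 0, 1))
+ *		printf("QLM2 PCIe setup failed\n");
+ */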
+
+void octeon_init_qlm(int node)
+{
+	int qlm;
+	cvmx_gserx_phy_ctl_t phy_ctl;
+	cvmx_gserx_cfg_t cfg;
+	int baud_mhz;
+	int pem;
+
+	if (!OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return;
+
+	for (qlm = 0; qlm < 8; qlm++) {
+		phy_ctl.u64 = csr_rd_node(node, CVMX_GSERX_PHY_CTL(qlm));
+		if (phy_ctl.s.phy_reset == 0) {
+			cfg.u64 = csr_rd_node(node, CVMX_GSERX_CFG(qlm));
+			if (cfg.s.pcie)
+				__cvmx_qlm_pcie_errata_cn78xx(node, qlm);
+			else
+				__qlm_init_errata_20844(node, qlm);
+
+			baud_mhz = cvmx_qlm_get_gbaud_mhz_node(node, qlm);
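+			/*
+			 * Apply default TX tuning values for 6.25/6.316 GBd
+			 * and 10.3125 GBd links.
+			 */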
+			if (baud_mhz == 6250 || baud_mhz == 6316)
+				octeon_qlm_tune_v3(node, qlm, baud_mhz, 0xa, 0xa0, -1, -1);
+			else if (baud_mhz == 103125)
+				octeon_qlm_tune_v3(node, qlm, baud_mhz, 0xd, 0xd0, -1, -1);
+		}
+	}
+
+	/* Setup how each PEM drives the PERST lines */
+	for (pem = 0; pem < 4; pem++) {
+		cvmx_rst_ctlx_t rst_ctl;
+
+		rst_ctl.u64 = csr_rd_node(node, CVMX_RST_CTLX(pem));
+		__setup_pem_reset(node, pem, !rst_ctl.s.host_mode);
+	}
+}
-- 
2.29.2

* [PATCH v1 45/50] mips: octeon: Makefile: Enable building of the newly added C files
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (43 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 44/50] mips: octeon: Add octeon_qlm.c Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 46/50] mips: octeon: Kconfig: Enable CONFIG_SYS_PCI_64BIT Stefan Roese
                   ` (7 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

This patch adds the newly added C files to the Makefile to enable
compilation. This is done in a separate step, to not introduce build
breakage while adding the single files with potentially missing
externals.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/Makefile | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/mips/mach-octeon/Makefile b/arch/mips/mach-octeon/Makefile
index 3486aa9d8b..40ddab27ea 100644
--- a/arch/mips/mach-octeon/Makefile
+++ b/arch/mips/mach-octeon/Makefile
@@ -11,3 +11,14 @@ obj-y += dram.o
 obj-y += cvmx-coremask.o
 obj-y += cvmx-bootmem.o
 obj-y += bootoctlinux.o
+
+# QLM related code
+obj-y += cvmx-helper-cfg.o
+obj-y += cvmx-helper-fdt.o
+obj-y += cvmx-helper-jtag.o
+obj-y += cvmx-helper-util.o
+obj-y += cvmx-helper.o
+obj-y += cvmx-pcie.o
+obj-y += cvmx-qlm.o
+obj-y += octeon_fdt.o
+obj-y += octeon_qlm.o
-- 
2.29.2

* [PATCH v1 46/50] mips: octeon: Kconfig: Enable CONFIG_SYS_PCI_64BIT
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (44 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 45/50] mips: octeon: Makefile: Enable building of the newly added C files Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 47/50] mips: octeon: mrvl,cn73xx.dtsi: Add PCIe controller DT node Stefan Roese
                   ` (6 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

Setting CONFIG_SYS_PCI_64BIT is needed for correct PCIe functionality on
MIPS Octeon.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/mach-octeon/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/mips/mach-octeon/Kconfig b/arch/mips/mach-octeon/Kconfig
index e8596ed99a..d69408cc27 100644
--- a/arch/mips/mach-octeon/Kconfig
+++ b/arch/mips/mach-octeon/Kconfig
@@ -55,6 +55,10 @@ config SYS_ICACHE_SIZE
 config SYS_ICACHE_LINE_SIZE
 	default 128
 
+config SYS_PCI_64BIT
+	bool
+	default y
+
 source "board/Marvell/octeon_ebb7304/Kconfig"
 
 endmenu
-- 
2.29.2

* [PATCH v1 47/50] mips: octeon: mrvl,cn73xx.dtsi: Add PCIe controller DT node
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (45 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 46/50] mips: octeon: Kconfig: Enable CONFIG_SYS_PCI_64BIT Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 48/50] mips: octeon: octeon_ebb7304: Add board specific QLM init code Stefan Roese
                   ` (5 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

This patch adds the PCIe controller node to the MIPS Octeon 73xx dtsi
file.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/mips/dts/mrvl,cn73xx.dtsi | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/mips/dts/mrvl,cn73xx.dtsi b/arch/mips/dts/mrvl,cn73xx.dtsi
index 40eb85ee0c..461ed07969 100644
--- a/arch/mips/dts/mrvl,cn73xx.dtsi
+++ b/arch/mips/dts/mrvl,cn73xx.dtsi
@@ -203,5 +203,21 @@
 				dr_mode = "host";
 			};
 		};
+
+		/* PCIe 0 */
+		pcie0: pcie@1180069000000 {
+			compatible = "marvell,pcie-host-octeon";
+			reg = <0 0xf2600000 0 0x10000>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			device_type = "pci";
+			dma-coherent;
+
+			bus-range = <0 0xff>;
+			marvell,pcie-port = <0>;
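+			/*
+			 * Each ranges entry is <PCI-flags PCI-addr(2 cells)
+			 * CPU-addr(2 cells) size(2 cells)>; the flags cell
+			 * 0x81000000/0x02000000/0x43000000 marks I/O, 32-bit
+			 * non-prefetchable and 64-bit prefetchable memory
+			 * respectively.
+			 */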
+			ranges = <0x81000000 0x00000000 0xd0000000 0x00011a00 0xd0000000 0x00000000 0x01000000 /* IO */
+				  0x02000000 0x00000000 0xe0000000 0x00011b00 0xe0000000 0x00000000 0x10000000	/* non-prefetchable memory */
+				  0x43000000 0x00011c00 0x00000000 0x00011c00 0x00000000 0x00000010 0x00000000>;/* prefetchable memory */
+		};
 	};
 };
-- 
2.29.2

* [PATCH v1 48/50] mips: octeon: octeon_ebb7304: Add board specific QLM init code
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (46 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 47/50] mips: octeon: mrvl,cn73xx.dtsi: Add PCIe controller DT node Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 49/50] mips: octeon: Add Octeon PCIe host controller driver Stefan Roese
                   ` (4 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

This patch adds the board-specific QLM/DLM init code to the Octeon 3
EBB7304 board. The configuration of each port is read from the
environment exactly as done in the 2013 U-Boot version, to keep the
board and its configuration compatible.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
---

 board/Marvell/octeon_ebb7304/board.c | 732 ++++++++++++++++++++++++++-
 1 file changed, 730 insertions(+), 2 deletions(-)

diff --git a/board/Marvell/octeon_ebb7304/board.c b/board/Marvell/octeon_ebb7304/board.c
index 611b18fa6a..9aac5f0b09 100644
--- a/board/Marvell/octeon_ebb7304/board.c
+++ b/board/Marvell/octeon_ebb7304/board.c
@@ -3,20 +3,32 @@
  * Copyright (C) 2020 Stefan Roese <sr@denx.de>
  */
 
-#include <common.h>
 #include <dm.h>
+#include <fdt_support.h>
 #include <ram.h>
+#include <asm/gpio.h>
 
 #include <mach/octeon_ddr.h>
+#include <mach/cvmx-qlm.h>
+#include <mach/octeon_qlm.h>
+#include <mach/octeon_fdt.h>
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-cfg.h>
+#include <mach/cvmx-helper-util.h>
+#include <mach/cvmx-bgxx-defs.h>
 
 #include "board_ddr.h"
 
+#define MAX_MIX_ENV_VARS	4
+
 #define EBB7304_DEF_DRAM_FREQ	800
 
 static struct ddr_conf board_ddr_conf[] = {
-	 OCTEON_EBB7304_DDR_CONFIGURATION
+	OCTEON_EBB7304_DDR_CONFIGURATION
 };
 
+static int no_phy[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+
 struct ddr_conf *octeon_ddr_conf_table_get(int *count, int *def_ddr_freq)
 {
 	*count = ARRAY_SIZE(board_ddr_conf);
@@ -24,3 +36,719 @@ struct ddr_conf *octeon_ddr_conf_table_get(int *count, int *def_ddr_freq)
 
 	return board_ddr_conf;
 }
+
+/*
+ * parse_env_var:	Parse the environment variable ("bgx_for_mix%d") to
+ *			extract the lmac it is set to.
+ *
+ *  index:		Index of the environment variable to parse.
+ *  env_bgx:		Updated with the bgx of the lmac in the environment
+ *			variable.
+ *  env_lmac:		Updated with the index of lmac in the environment
+ *			variable.
+ *
+ *  returns:		Zero on success, error otherwise.
+ */
+static int parse_env_var(int index, int *env_bgx, int *env_lmac)
+{
+	char env_var[20];
+	ulong xipd_port;
+
+	sprintf(env_var, "bgx_for_mix%d", index);
+	xipd_port = env_get_ulong(env_var, 0, 0xffff);
+	if (xipd_port != 0xffff) {
+		int xiface;
+		struct cvmx_xiface xi;
+		struct cvmx_xport xp;
+
+		/*
+		 * The environment variable is set to the xipd port. Convert the
+		 * xipd port to numa node, bgx, and lmac.
+		 */
+		xiface = cvmx_helper_get_interface_num(xipd_port);
+		xi = cvmx_helper_xiface_to_node_interface(xiface);
+		xp = cvmx_helper_ipd_port_to_xport(xipd_port);
+		*env_bgx = xi.interface;
+		*env_lmac = cvmx_helper_get_interface_index_num(xp.port);
+		return 0;
+	}
+
+	return -1;
+}
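+
+/*
+ * For example (value purely illustrative), with
+ *
+ *	=> setenv bgx_for_mix0 0x800
+ *
+ * parse_env_var(0, &bgx, &lmac) resolves xipd port 0x800 back to its BGX
+ * and LMAC index via the helper conversion routines; valid port numbers
+ * depend on the SoC's IPD port map.
+ */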
+
+/*
+ * get_lmac_fdt_node:	Search the device tree for the node corresponding to
+ *			a given bgx lmac.
+ *
+ *  fdt:		Pointer to flat device tree
+ *  search_node:	Numa node of the lmac to search for.
+ *  search_bgx:		Bgx of the lmac to search for.
+ *  search_lmac:	Lmac index to search for.
+ *  compat:		Compatible string to search for.
+ *
+ *  returns:		The device tree node of the lmac if found,
+ *			or -1 otherwise.
+ */
+static int get_lmac_fdt_node(const void *fdt, int search_node, int search_bgx, int search_lmac,
+			     const char *compat)
+{
+	int node;
+	const fdt32_t *reg;
+	u64 addr;
+	int fdt_node = -1;
+	int fdt_bgx = -1;
+	int fdt_lmac = -1;
+	int len;
+	int parent;
+
+	/* Iterate through all bgx ports */
+	node = -1;
+	while ((node = fdt_node_offset_by_compatible((void *)fdt, node,
+						     compat)) >= 0) {
+		/* Get the node and bgx from the physical address */
+		parent = fdt_parent_offset(fdt, node);
+		reg = fdt_getprop(fdt, parent, "reg", &len);
+		if (parent < 0 || !reg)
+			continue;
+
+		addr = fdt_translate_address((void *)fdt, parent, reg);
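+		/*
+		 * The BGX CSR base address encodes the numa node in bits
+		 * 38:36 and the BGX index in bits 27:24.
+		 */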
+		fdt_node = (addr >> 36) & 0x7;
+		fdt_bgx = (addr >> 24) & 0xf;
+
+		/* Get the lmac index from the reg property */
+		reg = fdt_getprop(fdt, node, "reg", &len);
+		if (reg)
+			fdt_lmac = *reg;
+
+		/* Check for a match */
+		if (search_node == fdt_node && search_bgx == fdt_bgx &&
+		    search_lmac == fdt_lmac)
+			return node;
+	}
+
+	return -1;
+}
+
+/*
+ * get_mix_fdt_node:	Search the device tree for the node corresponding to
+ *			a given mix.
+ *
+ *  fdt:		Pointer to flat device tree
+ *  search_node:	Mix numa node to search for.
+ *  search_index:	Mix index to search for.
+ *
+ *  returns:		The device tree node of the lmac if found,
+ *			or -1 otherwise.
+ */
+static int get_mix_fdt_node(const void *fdt, int search_node, int search_index)
+{
+	int node;
+
+	/* Iterate through all the mix fdt nodes */
+	node = -1;
+	while ((node = fdt_node_offset_by_compatible((void *)fdt, node,
+						     "cavium,octeon-7890-mix")) >= 0) {
+		int parent;
+		int len;
+		const char *name;
+		int mix_numa_node;
+		const fdt32_t *reg;
+		int mix_index = -1;
+		u64 addr;
+
+		/* Get the numa node of the mix from the parent node name */
+		parent = fdt_parent_offset(fdt, node);
+		if (parent < 0 ||
+		    ((name = fdt_get_name(fdt, parent, &len)) == NULL) ||
+		    ((name = strchr(name, '@')) == NULL))
+			continue;
+
+		name++;
+		mix_numa_node = simple_strtol(name, NULL, 0) ? 1 : 0;
+
+		/* Get the mix index from the reg property */
+		reg = fdt_getprop(fdt, node, "reg", &len);
+		if (reg) {
+			addr = fdt_translate_address((void *)fdt, parent, reg);
+			mix_index = (addr >> 11) & 1;
+		}
+
+		/* Check for a match */
+		if (mix_numa_node == search_node && mix_index == search_index)
+			return node;
+	}
+
+	return -1;
+}
+
+/*
+ * fdt_fix_mix:		Fix the mix nodes in the device tree. Only the mix nodes
+ *			configured by the user will be preserved. All other mix
+ *			nodes will be trimmed.
+ *
+ *  fdt:		Pointer to flat device tree
+ *
+ *  returns:		Zero on success, error otherwise.
+ */
+static int fdt_fix_mix(const void *fdt)
+{
+	int node;
+	int next_node;
+	int len;
+	int i;
+
+	/* Parse all the mix port environment variables */
+	for (i = 0; i < MAX_MIX_ENV_VARS; i++) {
+		int env_node = 0;
+		int env_bgx = -1;
+		int env_lmac = -1;
+		int lmac_fdt_node = -1;
+		int mix_fdt_node = -1;
+		int lmac_phandle;
+		char *compat;
+
+		/* Get the lmac for this environment variable */
+		if (parse_env_var(i, &env_bgx, &env_lmac))
+			continue;
+
+		/* Get the fdt node for this lmac and add a phandle to it */
+		compat = "cavium,octeon-7890-bgx-port";
+		lmac_fdt_node = get_lmac_fdt_node(fdt, env_node, env_bgx,
+						  env_lmac, compat);
+		if (lmac_fdt_node < 0) {
+			/* Must check for the xcv compatible string too */
+			compat = "cavium,octeon-7360-xcv";
+			lmac_fdt_node = get_lmac_fdt_node(fdt, env_node,
+							  env_bgx, env_lmac,
+							  compat);
+			if (lmac_fdt_node < 0) {
+				printf("WARNING: Failed to get lmac fdt node for %d%d%d\n",
+				       env_node, env_bgx, env_lmac);
+				continue;
+			}
+		}
+
+		lmac_phandle = fdt_alloc_phandle((void *)fdt);
+		fdt_set_phandle((void *)fdt, lmac_fdt_node, lmac_phandle);
+
+		/* Get the fdt mix node corresponding to this lmac */
+		mix_fdt_node = get_mix_fdt_node(fdt, env_node, env_lmac);
+		if (mix_fdt_node < 0)
+			continue;
+
+		/* Point the mix to the lmac */
+		fdt_getprop(fdt, mix_fdt_node, "cavium,mac-handle", &len);
+		fdt_setprop_inplace((void *)fdt, mix_fdt_node,
+				    "cavium,mac-handle", &lmac_phandle, len);
+	}
+
+	/* Trim unused mix'es from the device tree */
+	for (node = fdt_next_node(fdt, -1, NULL); node >= 0; node = next_node) {
+		const char *compat;
+		const fdt32_t *reg;
+
+		next_node = fdt_next_node(fdt, node, NULL);
+
+		compat = fdt_getprop(fdt, node, "compatible", &len);
+		if (compat) {
+			if (strcmp(compat, "cavium,octeon-7890-mix"))
+				continue;
+
+			reg = fdt_getprop(fdt, node, "cavium,mac-handle", &len);
+			if (reg) {
+				if (*reg == 0xffff)
+					fdt_nop_node((void *)fdt, node);
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void kill_fdt_phy(void *fdt, int offset, void *arg)
+{
+	int len, phy_offset;
+	const fdt32_t *php;
+	u32 phandle;
+
+	php = fdt_getprop(fdt, offset, "phy-handle", &len);
+	if (php && len == sizeof(*php)) {
+		phandle = fdt32_to_cpu(*php);
+		fdt_nop_property(fdt, offset, "phy-handle");
+		phy_offset = fdt_node_offset_by_phandle(fdt, phandle);
+		if (phy_offset > 0)
+			fdt_nop_node(fdt, phy_offset);
+	}
+}
+
+void __fixup_xcv(void)
+{
+	unsigned long bgx = env_get_ulong("bgx_for_rgmii", 10,
+					  (unsigned long)-1);
+	char fdt_key[16];
+	int i;
+
+	debug("%s: BGX %d\n", __func__, (int)bgx);
+
+	for (i = 0; i < 3; i++) {
+		snprintf(fdt_key, sizeof(fdt_key),
+			 bgx == i ? "%d,xcv" : "%d,not-xcv", i);
+		debug("%s: trimming bgx %lu with key %s\n",
+		      __func__, bgx, fdt_key);
+
+		octeon_fdt_patch_rename((void *)gd->fdt_blob, fdt_key,
+					"cavium,xcv-trim", true, NULL, NULL);
+	}
+}
+
+/* QLM0 - QLM6 */
+void __fixup_fdt(void)
+{
+	int qlm;
+	int speed = 0;
+
+	for (qlm = 0; qlm < 7; qlm++) {
+		enum cvmx_qlm_mode mode;
+		char fdt_key[16];
+		const char *type_str = "none";
+
+		mode = cvmx_qlm_get_mode(qlm);
+		switch (mode) {
+		case CVMX_QLM_MODE_SGMII:
+		case CVMX_QLM_MODE_RGMII_SGMII:
+		case CVMX_QLM_MODE_RGMII_SGMII_1X1:
+			type_str = "sgmii";
+			break;
+		case CVMX_QLM_MODE_XAUI:
+		case CVMX_QLM_MODE_RGMII_XAUI:
+			speed = (cvmx_qlm_get_gbaud_mhz(qlm) * 8 / 10) * 4;
+			if (speed == 10000)
+				type_str = "xaui";
+			else
+				type_str = "dxaui";
+			break;
+		case CVMX_QLM_MODE_RXAUI:
+		case CVMX_QLM_MODE_RGMII_RXAUI:
+			type_str = "rxaui";
+			break;
+		case CVMX_QLM_MODE_XLAUI:
+		case CVMX_QLM_MODE_RGMII_XLAUI:
+			type_str = "xlaui";
+			break;
+		case CVMX_QLM_MODE_XFI:
+		case CVMX_QLM_MODE_RGMII_XFI:
+		case CVMX_QLM_MODE_RGMII_XFI_1X1:
+			type_str = "xfi";
+			break;
+		case CVMX_QLM_MODE_10G_KR:
+		case CVMX_QLM_MODE_RGMII_10G_KR:
+			type_str = "10G_KR";
+			break;
+		case CVMX_QLM_MODE_40G_KR4:
+		case CVMX_QLM_MODE_RGMII_40G_KR4:
+			type_str = "40G_KR4";
+			break;
+		case CVMX_QLM_MODE_SATA_2X1:
+			type_str = "sata";
+			break;
+		case CVMX_QLM_MODE_SGMII_2X1:
+		case CVMX_QLM_MODE_XFI_1X2:
+		case CVMX_QLM_MODE_10G_KR_1X2:
+		case CVMX_QLM_MODE_RXAUI_1X2:
+		case CVMX_QLM_MODE_MIXED: // special for DLM5 & DLM6
+		{
+			cvmx_bgxx_cmrx_config_t cmr_config;
+			cvmx_bgxx_spux_br_pmd_control_t pmd_control;
+			int mux = cvmx_qlm_mux_interface(2);
+
+			if (mux == 2) { // only dlm6
+				cmr_config.u64 = csr_rd(CVMX_BGXX_CMRX_CONFIG(2, 2));
+				pmd_control.u64 =
+					csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(2, 2));
+			} else {
+				if (qlm == 5) {
+					cmr_config.u64 =
+						csr_rd(CVMX_BGXX_CMRX_CONFIG(0, 2));
+					pmd_control.u64 =
+						csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(0, 2));
+				} else {
+					cmr_config.u64 =
+						csr_rd(CVMX_BGXX_CMRX_CONFIG(2, 2));
+					pmd_control.u64 =
+						csr_rd(CVMX_BGXX_SPUX_BR_PMD_CONTROL(2, 2));
+				}
+			}
+			switch (cmr_config.s.lmac_type) {
+			case 0:
+				type_str = "sgmii";
+				break;
+			case 1:
+				type_str = "xaui";
+				break;
+			case 2:
+				type_str = "rxaui";
+				break;
+			case 3:
+				if (pmd_control.s.train_en)
+					type_str = "10G_KR";
+				else
+					type_str = "xfi";
+				break;
+			case 4:
+				if (pmd_control.s.train_en)
+					type_str = "40G_KR4";
+				else
+					type_str = "xlaui";
+				break;
+			default:
+				type_str = "none";
+				break;
+			}
+			break;
+		}
+		default:
+			type_str = "none";
+			break;
+		}
+		sprintf(fdt_key, "%d,%s", qlm, type_str);
+		debug("Patching qlm %d for %s for mode %d%s\n", qlm, fdt_key, mode,
+		      no_phy[qlm] ? ", removing PHY" : "");
+		octeon_fdt_patch_rename((void *)gd->fdt_blob, fdt_key, NULL, true,
+					no_phy[qlm] ? kill_fdt_phy : NULL, NULL);
+	}
+}
+
+int board_fix_fdt(void)
+{
+	__fixup_fdt();
+	__fixup_xcv();
+
+	/* Fix the mix ports */
+	fdt_fix_mix(gd->fdt_blob);
+
+	return 0;
+}
+
+/*
+ * Here is the description of the parameters that are passed to QLM
+ * configuration:
+ *
+ *	param0 : The QLM to configure
+ *	param1 : Speed to configure the QLM at
+ *	param2 : Mode to configure the QLM in
+ *	param3 : 1 = RC, 0 = EP
+ *	param4 : 0 = GEN1, 1 = GEN2, 2 = GEN3
+ *	param5 : ref clock select, 0 = 100MHz, 1 = 125MHz, 2 = 156MHz
+ *	param6 : ref clock input to use:
+ *		 0 = external reference (QLMx_REF_CLK)
+ *		 1 = common clock 0 (QLMC_REF_CLK0)
+ *		 2 = common clock 1 (QLMC_REF_CLK1)
+ */
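+
+/*
+ * Example (illustrative): for "qlm5_mode=sgmii" (with DLM5 not used for
+ * RGMII), the code below ends up calling
+ * octeon_configure_qlm(5, 1250, CVMX_QLM_MODE_SGMII_2X1, 0, 0, 2, 2),
+ * i.e. 1.25 Gbaud SGMII from the 156MHz QLMC_REF_CLK1 input.
+ */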
+static void board_configure_qlms(void)
+{
+	int speed[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+	int mode[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };
+	int pcie_rc[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+	int pcie_gen[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+	int ref_clock_sel[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+	int ref_clock_input[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+	struct gpio_desc desc;
+	int rbgx, rqlm;
+	char env_var[16];
+	int qlm;
+	int ret;
+
+	/* RGMII PHY reset GPIO */
+	ret = dm_gpio_lookup_name("gpio-controllerA27", &desc);
+	if (ret)
+		debug("gpio ret=%d\n", ret);
+	ret = dm_gpio_request(&desc, "rgmii_phy_reset");
+	if (ret)
+		debug("gpio_request ret=%d\n", ret);
+	ret = dm_gpio_set_dir_flags(&desc, GPIOD_IS_OUT);
+	if (ret)
+		debug("gpio dir ret=%d\n", ret);
+
+	/* Put RGMII PHY in reset */
+	dm_gpio_set_value(&desc, 0);
+
+	octeon_init_qlm(0);
+
+	rbgx = env_get_ulong("bgx_for_rgmii", 10, (unsigned long)-1);
+	switch (rbgx) {
+	case 0:
+		rqlm = 2;
+		break;
+	case 1:
+		rqlm = 3;
+		break;
+	case 2:
+		rqlm = 5;
+		break;
+	default:
+		rqlm = -1;
+		break;
+	}
+
+	for (qlm = 0; qlm < 7; qlm++) {
+		const char *mode_str;
+		char spd_env[16];
+
+		mode[qlm] = CVMX_QLM_MODE_DISABLED;
+		sprintf(env_var, "qlm%d_mode", qlm);
+		mode_str = env_get(env_var);
+		if (!mode_str)
+			continue;
+
+		if (qlm == 4 && mode[4] != -1 &&
+		    mode[4] != CVMX_QLM_MODE_SATA_2X1) {
+			printf("Error: DLM 4 can only be configured for SATA\n");
+			continue;
+		}
+
+		if (strstr(mode_str, ",no_phy"))
+			no_phy[qlm] = 1;
+
+		if (!strncmp(mode_str, "sgmii", 5)) {
+			bool rgmii = false;
+
+			speed[qlm] = 1250;
+			if (rqlm == qlm && qlm < 5) {
+				mode[qlm] = CVMX_QLM_MODE_RGMII_SGMII;
+				rgmii = true;
+			} else if (qlm == 6 || qlm == 5) {
+				if (rqlm == qlm && qlm == 5) {
+					mode[qlm] = CVMX_QLM_MODE_RGMII_SGMII_1X1;
+					rgmii = true;
+				} else if (rqlm == 5 && qlm == 6 &&
+					   mode[5] != CVMX_QLM_MODE_RGMII_SGMII_1X1) {
+					mode[qlm] = CVMX_QLM_MODE_RGMII_SGMII_2X1;
+					rgmii = true;
+				} else {
+					mode[qlm] = CVMX_QLM_MODE_SGMII_2X1;
+				}
+			} else {
+				mode[qlm] = CVMX_QLM_MODE_SGMII;
+			}
+			ref_clock_sel[qlm] = 2;
+
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+
+			if (no_phy[qlm]) {
+				int i;
+				int start = 0, stop = 2;
+
+				rbgx = 0;
+				switch (qlm) {
+				case 3:
+					rbgx = 1;
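+					/* fall through */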
+				case 2:
+					for (i = 0; i < 4; i++) {
+						printf("Ignoring PHY for interface: %d, port: %d\n",
+						       rbgx, i);
+						cvmx_helper_set_port_force_link_up(rbgx, i, true);
+					}
+					break;
+				case 6:
+					start = 2;
+					stop = 4;
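+					/* fall through */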
+				case 5:
+					for (i = start; i < stop; i++) {
+						printf("Ignoring PHY for interface: %d, port: %d\n",
+						       2, i);
+						cvmx_helper_set_port_force_link_up(2, i, true);
+					}
+					break;
+				default:
+					printf("SGMII not supported for QLM/DLM %d\n",
+					       qlm);
+					break;
+				}
+			}
+			printf("QLM %d: SGMII%s\n",
+			       qlm, rgmii ? ", RGMII" : "");
+		} else if (!strncmp(mode_str, "xaui", 4)) {
+			speed[qlm] = 3125;
+			mode[qlm] = CVMX_QLM_MODE_XAUI;
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: XAUI\n", qlm);
+		} else if (!strncmp(mode_str, "dxaui", 5)) {
+			speed[qlm] = 6250;
+			mode[qlm] = CVMX_QLM_MODE_XAUI;
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: DXAUI\n", qlm);
+		} else if (!strncmp(mode_str, "rxaui", 5)) {
+			bool rgmii = false;
+
+			speed[qlm] = 6250;
+			if (qlm == 5 || qlm == 6) {
+				if (rqlm == qlm && qlm == 5) {
+					mode[qlm] = CVMX_QLM_MODE_RGMII_RXAUI;
+					rgmii = true;
+				} else {
+					mode[qlm] = CVMX_QLM_MODE_RXAUI_1X2;
+				}
+			} else {
+				mode[qlm] = CVMX_QLM_MODE_RXAUI;
+			}
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: RXAUI%s\n",
+			       qlm, rgmii ? ", rgmii" : "");
+		} else if (!strncmp(mode_str, "xlaui", 5)) {
+			speed[qlm] = 103125;
+			mode[qlm] = CVMX_QLM_MODE_XLAUI;
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			sprintf(spd_env, "qlm%d_speed", qlm);
+			if (env_get(spd_env)) {
+				int spd = env_get_ulong(spd_env, 0, 8);
+
+				if (spd)
+					speed[qlm] = spd;
+				else
+					speed[qlm] = 103125;
+			}
+			printf("QLM %d: XLAUI\n", qlm);
+		} else if (!strncmp(mode_str, "xfi", 3)) {
+			bool rgmii = false;
+
+			speed[qlm] = 103125;
+			if (rqlm == qlm) {
+				mode[qlm] = CVMX_QLM_MODE_RGMII_XFI;
+				rgmii = true;
+			} else if (qlm == 5 || qlm == 6) {
+				mode[qlm] = CVMX_QLM_MODE_XFI_1X2;
+			} else {
+				mode[qlm] = CVMX_QLM_MODE_XFI;
+			}
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: XFI%s\n", qlm, rgmii ? ", RGMII" : "");
+		} else if (!strncmp(mode_str, "10G_KR", 6)) {
+			speed[qlm] = 103125;
+			if (rqlm == qlm && qlm == 5)
+				mode[qlm] = CVMX_QLM_MODE_RGMII_10G_KR;
+			else if (qlm == 5 || qlm == 6)
+				mode[qlm] = CVMX_QLM_MODE_10G_KR_1X2;
+			else
+				mode[qlm] = CVMX_QLM_MODE_10G_KR;
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: 10G_KR\n", qlm);
+		} else if (!strncmp(mode_str, "40G_KR4", 7)) {
+			speed[qlm] = 103125;
+			mode[qlm] = CVMX_QLM_MODE_40G_KR4;
+			ref_clock_sel[qlm] = 2;
+			if (qlm == 5 || qlm == 6)
+				ref_clock_input[qlm] = 2; // use QLMC_REF_CLK1
+			printf("QLM %d: 40G_KR4\n", qlm);
+		} else if (!strcmp(mode_str, "pcie")) {
+			char *pmode;
+			int lanes = 0;
+
+			sprintf(env_var, "pcie%d_mode", qlm);
+			pmode = env_get(env_var);
+			if (pmode && !strcmp(pmode, "ep"))
+				pcie_rc[qlm] = 0;
+			else
+				pcie_rc[qlm] = 1;
+			sprintf(env_var, "pcie%d_gen", qlm);
+			pcie_gen[qlm] = env_get_ulong(env_var, 0, 3);
+			sprintf(env_var, "pcie%d_lanes", qlm);
+			lanes = env_get_ulong(env_var, 0, 8);
+			if (lanes == 8) {
+				mode[qlm] = CVMX_QLM_MODE_PCIE_1X8;
+			} else if (qlm == 5 || qlm == 6) {
+				if (lanes != 2) {
+					printf("QLM%d: Invalid lanes selected, defaulting to 2 lanes\n",
+					       qlm);
+				}
+				mode[qlm] = CVMX_QLM_MODE_PCIE_1X2;
+				ref_clock_input[qlm] = 1; // use QLMC_REF_CLK0
+			} else {
+				mode[qlm] = CVMX_QLM_MODE_PCIE;
+			}
+			ref_clock_sel[qlm] = 0;
+			printf("QLM %d: PCIe gen%d %s, x%d lanes\n",
+			       qlm, pcie_gen[qlm] + 1,
+			       pcie_rc[qlm] ? "root complex" : "endpoint",
+			       lanes);
+		} else if (!strcmp(mode_str, "sata")) {
+			mode[qlm] = CVMX_QLM_MODE_SATA_2X1;
+			ref_clock_sel[qlm] = 0;
+			ref_clock_input[qlm] = 1;
+			sprintf(spd_env, "qlm%d_speed", qlm);
+			if (env_get(spd_env)) {
+				int spd = env_get_ulong(spd_env, 0, 8);
+
+				if (spd == 1500 || spd == 3000 || spd == 6000)
+					speed[qlm] = spd;
+				else
+					speed[qlm] = 6000;
+			} else {
+				speed[qlm] = 6000;
+			}
+		} else {
+			printf("QLM %d: disabled\n", qlm);
+		}
+	}
+
+	for (qlm = 0; qlm < 7; qlm++) {
+		int rc;
+
+		if (mode[qlm] == -1)
+			continue;
+
+		debug("Configuring qlm%d with speed(%d), mode(%d), RC(%d), Gen(%d), REF_CLK(%d), CLK_SOURCE(%d)\n",
+		      qlm, speed[qlm], mode[qlm], pcie_rc[qlm],
+		      pcie_gen[qlm] + 1,
+		      ref_clock_sel[qlm], ref_clock_input[qlm]);
+		rc = octeon_configure_qlm(qlm, speed[qlm], mode[qlm],
+					  pcie_rc[qlm], pcie_gen[qlm],
+					  ref_clock_sel[qlm],
+					  ref_clock_input[qlm]);
+
+		if (speed[qlm] == 6250) {
+			if (mode[qlm] == CVMX_QLM_MODE_RXAUI) {
+				octeon_qlm_tune_v3(0, qlm, speed[qlm], 0x12,
+						   0xa0, -1, -1);
+			} else {
+				octeon_qlm_tune_v3(0, qlm, speed[qlm], 0xa,
+						   0xa0, -1, -1);
+			}
+		} else if (speed[qlm] == 103125) {
+			octeon_qlm_tune_v3(0, qlm, speed[qlm], 0xd, 0xd0,
+					   -1, -1);
+		}
+
+		if (qlm == 4 && rc != 0) {
+			/*
+			 * There is a bug with SATA with 73xx. Until it's
+			 * fixed we need to strip it from the device tree.
+			 */
+			octeon_fdt_patch_rename((void *)gd->fdt_blob, "4,none",
+						NULL, true, NULL, NULL);
+		}
+	}
+
+	dm_gpio_set_value(&desc, 0); /* Put RGMII PHY in reset */
+	mdelay(10);
+	dm_gpio_set_value(&desc, 1); /* Take RGMII PHY out of reset */
+}
+
+int board_late_init(void)
+{
+	board_configure_qlms();
+
+	return 0;
+}
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 49/50] mips: octeon: Add Octeon PCIe host controller driver
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (47 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 48/50] mips: octeon: octeon_ebb7304: Add board specific QLM init code Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2021-04-07  6:43   ` [PATCH v2 " Stefan Roese
  2020-12-11 16:06 ` [PATCH v1 50/50] mips: octeon: octeon_ebb7304_defconfig: Enable Octeon PCIe and E1000 Stefan Roese
                   ` (3 subsequent siblings)
  52 siblings, 1 reply; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

This patch adds the PCIe host controller driver for MIPS Octeon II/III.
The driver mainly consists of the PCI config functions, as all of the
complex serdes-related port / lane setup is done in the serdes / PCIe
code available in the "arch/mips/mach-octeon" directory.
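
As an illustration (a minimal sketch, not part of this patch): once the
driver has probed, config space is reached through the generic DM PCI
API, which dispatches to the read / write ops added below. Assuming
"dev" is a PCI device found during enumeration:

	u32 id;

	/* Reads the vendor / device ID pair via pcie_octeon_read_config() */
	dm_pci_read_config32(dev, PCI_VENDOR_ID, &id);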

Signed-off-by: Stefan Roese <sr@denx.de>
---

 drivers/pci/Kconfig       |   6 ++
 drivers/pci/Makefile      |   1 +
 drivers/pci/pcie_octeon.c | 159 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 166 insertions(+)
 create mode 100644 drivers/pci/pcie_octeon.c

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index af92784950..bea36144e1 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -158,6 +158,12 @@ config PCI_OCTEONTX
 	  These controllers provide PCI configuration access to all on-board
 	  peripherals so it should only be disabled for testing purposes
 
+config PCIE_OCTEON
+	bool "MIPS Octeon PCIe support"
+	depends on ARCH_OCTEON
+	help
+	  Enable support for the MIPS Octeon SoC family PCIe controllers.
+
 config PCI_XILINX
 	bool "Xilinx AXI Bridge for PCI Express"
 	depends on DM_PCI
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 8b4d49a590..c8cc8272e1 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -50,3 +50,4 @@ obj-$(CONFIG_PCIE_MEDIATEK) += pcie_mediatek.o
 obj-$(CONFIG_PCIE_ROCKCHIP) += pcie_rockchip.o
 obj-$(CONFIG_PCI_BRCMSTB) += pcie_brcmstb.o
 obj-$(CONFIG_PCI_OCTEONTX) += pci_octeontx.o
+obj-$(CONFIG_PCIE_OCTEON) += pcie_octeon.o
diff --git a/drivers/pci/pcie_octeon.c b/drivers/pci/pcie_octeon.c
new file mode 100644
index 0000000000..1a76d0c429
--- /dev/null
+++ b/drivers/pci/pcie_octeon.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2020 Stefan Roese <sr@denx.de>
+ */
+
+#include <dm.h>
+#include <errno.h>
+#include <fdtdec.h>
+#include <log.h>
+#include <pci.h>
+#include <linux/delay.h>
+
+#include <mach/octeon-model.h>
+#include <mach/octeon_pci.h>
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-pemx-defs.h>
+
+struct octeon_pcie {
+	void *base;
+	int first_busno;
+	u32 port;
+	struct udevice *dev;
+	int pcie_port;
+};
+
+static bool octeon_bdf_invalid(pci_dev_t bdf, int first_busno)
+{
+	/*
+	 * In PCIe only a single device (0) can exist on the local bus.
+	 * Beyond the local bus, there might be a switch and everything
+	 * is possible.
+	 */
+	if ((PCI_BUS(bdf) == first_busno) && (PCI_DEV(bdf) > 0))
+		return true;
+
+	return false;
+}
+
+static int pcie_octeon_write_config(struct udevice *bus, pci_dev_t bdf,
+				    uint offset, ulong value,
+				    enum pci_size_t size)
+{
+	struct octeon_pcie *pcie = dev_get_priv(bus);
+	struct pci_controller *hose = dev_get_uclass_priv(bus);
+	int busno;
+	int port;
+
+	debug("PCIE CFG write: (b,d,f)=(%2d,%2d,%2d) ",
+	      PCI_BUS(bdf), PCI_DEV(bdf), PCI_FUNC(bdf));
+	debug("(addr,size,val)=(0x%04x, %d, 0x%08lx)\n", offset, size, value);
+
+	port = pcie->pcie_port;
+	busno = PCI_BUS(bdf) - hose->first_busno + 1;
+
+	switch (size) {
+	case PCI_SIZE_8:
+		cvmx_pcie_config_write8(port, busno, PCI_DEV(bdf),
+					PCI_FUNC(bdf), offset, value);
+		break;
+	case PCI_SIZE_16:
+		cvmx_pcie_config_write16(port, busno, PCI_DEV(bdf),
+					 PCI_FUNC(bdf), offset, value);
+		break;
+	case PCI_SIZE_32:
+		cvmx_pcie_config_write32(port, busno, PCI_DEV(bdf),
+					 PCI_FUNC(bdf), offset, value);
+		break;
+	default:
+		printf("Invalid size\n");
+	};
+
+	return 0;
+}
+
+static int pcie_octeon_read_config(const struct udevice *bus, pci_dev_t bdf,
+				   uint offset, ulong *valuep,
+				   enum pci_size_t size)
+{
+	struct octeon_pcie *pcie = dev_get_priv(bus);
+	struct pci_controller *hose = dev_get_uclass_priv(bus);
+	int busno;
+	int port;
+
+	port = pcie->pcie_port;
+	busno = PCI_BUS(bdf) - hose->first_busno + 1;
+	if (octeon_bdf_invalid(bdf, pcie->first_busno)) {
+		*valuep = pci_get_ff(size);
+		return 0;
+	}
+
+	switch (size) {
+	case PCI_SIZE_8:
+		*valuep = cvmx_pcie_config_read8(port, busno, PCI_DEV(bdf),
+						 PCI_FUNC(bdf), offset);
+		break;
+	case PCI_SIZE_16:
+		*valuep = cvmx_pcie_config_read16(port, busno, PCI_DEV(bdf),
+						  PCI_FUNC(bdf), offset);
+		break;
+	case PCI_SIZE_32:
+		*valuep = cvmx_pcie_config_read32(port, busno, PCI_DEV(bdf),
+						  PCI_FUNC(bdf), offset);
+		break;
+	default:
+		printf("Invalid size\n");
+	};
+
+	debug("%02x.%02x.%02x: u%d %x -> %lx\n",
+	      PCI_BUS(bdf), PCI_DEV(bdf), PCI_FUNC(bdf), size, offset, *valuep);
+
+	return 0;
+}
+
+static int pcie_octeon_probe(struct udevice *dev)
+{
+	struct octeon_pcie *pcie = dev_get_priv(dev);
+	int node = cvmx_get_node_num();
+	int pcie_port;
+	int ret = 0;
+
+	/* Get the PCIe port number from the device tree */
+	if (ofnode_read_u32(dev_ofnode(dev), "marvell,pcie-port",
+			    &pcie->port)) {
+		ret = -ENODEV;
+		goto err;
+	}
+
+	pcie->first_busno = dev->seq;
+	pcie_port = ((node << 4) | pcie->port);
+	ret = cvmx_pcie_rc_initialize(pcie_port);
+	if (ret != 0)
+		return ret;
+
+	return 0;
+
+err:
+	return ret;
+}
+
+static const struct dm_pci_ops pcie_octeon_ops = {
+	.read_config = pcie_octeon_read_config,
+	.write_config = pcie_octeon_write_config,
+};
+
+static const struct udevice_id pcie_octeon_ids[] = {
+	{ .compatible = "marvell,pcie-host-octeon" },
+	{ }
+};
+
+U_BOOT_DRIVER(pcie_octeon) = {
+	.name			= "pcie_octeon",
+	.id			= UCLASS_PCI,
+	.of_match		= pcie_octeon_ids,
+	.ops			= &pcie_octeon_ops,
+	.probe			= pcie_octeon_probe,
+	.priv_auto_alloc_size	= sizeof(struct octeon_pcie),
+	.flags			= DM_FLAG_PRE_RELOC,
+};
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v1 50/50] mips: octeon: octeon_ebb7304_defconfig: Enable Octeon PCIe and E1000
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (48 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 49/50] mips: octeon: Add Octeon PCIe host controller driver Stefan Roese
@ 2020-12-11 16:06 ` Stefan Roese
  2021-04-23  3:56 ` [PATCH v2 33/50] mips: octeon: Add misc remaining header files Stefan Roese
                   ` (2 subsequent siblings)
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2020-12-11 16:06 UTC (permalink / raw)
  To: u-boot

This patch changes the MIPS Octeon defconfig to enable the options
needed for PCIe support. This includes CONFIG_BOARD_LATE_INIT, which is
required to call the board-specific serdes init code.

With these features enabled, the serdes and PCIe driver including the
Intel E1000 driver can be tested on the Octeon EBB7304.

Signed-off-by: Stefan Roese <sr@denx.de>

---

 configs/octeon_ebb7304_defconfig | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/configs/octeon_ebb7304_defconfig b/configs/octeon_ebb7304_defconfig
index a98d73a268..b2ddb4f684 100644
--- a/configs/octeon_ebb7304_defconfig
+++ b/configs/octeon_ebb7304_defconfig
@@ -10,7 +10,9 @@ CONFIG_ARCH_OCTEON=y
 # CONFIG_MIPS_CACHE_SETUP is not set
 # CONFIG_MIPS_CACHE_DISABLE is not set
 CONFIG_DEBUG_UART=y
+CONFIG_OF_BOARD_FIXUP=y
 CONFIG_SYS_CONSOLE_INFO_QUIET=y
+CONFIG_BOARD_LATE_INIT=y
 CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
@@ -42,9 +44,10 @@ CONFIG_DM_SPI_FLASH=y
 CONFIG_SPI_FLASH_ATMEL=y
 CONFIG_SPI_FLASH_SPANSION=y
 CONFIG_SPI_FLASH_STMICRO=y
-# CONFIG_NETDEVICES is not set
+CONFIG_E1000=y
 CONFIG_PCI=y
 CONFIG_DM_PCI=y
+CONFIG_PCIE_OCTEON=y
 CONFIG_RAM=y
 CONFIG_RAM_OCTEON=y
 CONFIG_RAM_OCTEON_DDR4=y
-- 
2.29.2

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 49/50] mips: octeon: Add Octeon PCIe host controller driver
  2020-12-11 16:06 ` [PATCH v1 49/50] mips: octeon: Add Octeon PCIe host controller driver Stefan Roese
@ 2021-04-07  6:43   ` Stefan Roese
  0 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2021-04-07  6:43 UTC (permalink / raw)
  To: u-boot

This patch adds the PCIe host controller driver for MIPS Octeon II/III.
The driver mainly consists of the PCI config functions, as all of the
complex serdes-related port / lane setup is done in the serdes / PCIe
code available in the "arch/mips/mach-octeon" directory.

Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Aaron Williams <awilliams@marvell.com>
Cc: Chandrakala Chavva <cchavva@marvell.com>
Cc: Daniel Schwierzeck <daniel.schwierzeck@gmail.com>
---
I'm sending only this patch from the serdes / QLM series with the PCIe
support, as it's the only one that has some changes; this avoids
"polluting" the list with this huge series without need. Please let me
know if I should post the complete series again to the list or provide
a gitlab branch.

v2:
- Rebased on top of latest master
- Changed priv_auto_alloc_size to priv_auto

 drivers/pci/Kconfig       |   6 ++
 drivers/pci/Makefile      |   1 +
 drivers/pci/pcie_octeon.c | 159 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 166 insertions(+)
 create mode 100644 drivers/pci/pcie_octeon.c

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index ba41787f64dc..0b2daeac23b6 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -158,6 +158,12 @@ config PCI_OCTEONTX
 	  These controllers provide PCI configuration access to all on-board
 	  peripherals so it should only be disabled for testing purposes
 
+config PCIE_OCTEON
+	bool "MIPS Octeon PCIe support"
+	depends on ARCH_OCTEON
+	help
+	  Enable support for the MIPS Octeon SoC family PCIe controllers.
+
 config PCI_XILINX
 	bool "Xilinx AXI Bridge for PCI Express"
 	depends on DM_PCI
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 5ed94bc95c27..dc04d1c7d675 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -51,3 +51,4 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie_rockchip.o
 obj-$(CONFIG_PCIE_DW_ROCKCHIP) += pcie_dw_rockchip.o
 obj-$(CONFIG_PCI_BRCMSTB) += pcie_brcmstb.o
 obj-$(CONFIG_PCI_OCTEONTX) += pci_octeontx.o
+obj-$(CONFIG_PCIE_OCTEON) += pcie_octeon.o
diff --git a/drivers/pci/pcie_octeon.c b/drivers/pci/pcie_octeon.c
new file mode 100644
index 000000000000..3b28bd81439f
--- /dev/null
+++ b/drivers/pci/pcie_octeon.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2020 Stefan Roese <sr@denx.de>
+ */
+
+#include <dm.h>
+#include <errno.h>
+#include <fdtdec.h>
+#include <log.h>
+#include <pci.h>
+#include <linux/delay.h>
+
+#include <mach/octeon-model.h>
+#include <mach/octeon_pci.h>
+#include <mach/cvmx-regs.h>
+#include <mach/cvmx-pcie.h>
+#include <mach/cvmx-pemx-defs.h>
+
+struct octeon_pcie {
+	void *base;
+	int first_busno;
+	u32 port;
+	struct udevice *dev;
+	int pcie_port;
+};
+
+static bool octeon_bdf_invalid(pci_dev_t bdf, int first_busno)
+{
+	/*
+	 * In PCIe only a single device (0) can exist on the local bus.
+	 * Beyond the local bus, there might be a switch and everything
+	 * is possible.
+	 */
+	if ((PCI_BUS(bdf) == first_busno) && (PCI_DEV(bdf) > 0))
+		return true;
+
+	return false;
+}
+
+static int pcie_octeon_write_config(struct udevice *bus, pci_dev_t bdf,
+				    uint offset, ulong value,
+				    enum pci_size_t size)
+{
+	struct octeon_pcie *pcie = dev_get_priv(bus);
+	struct pci_controller *hose = dev_get_uclass_priv(bus);
+	int busno;
+	int port;
+
+	debug("PCIE CFG write: (b,d,f)=(%2d,%2d,%2d) ",
+	      PCI_BUS(bdf), PCI_DEV(bdf), PCI_FUNC(bdf));
+	debug("(addr,size,val)=(0x%04x, %d, 0x%08lx)\n", offset, size, value);
+
+	port = pcie->pcie_port;
+	busno = PCI_BUS(bdf) - hose->first_busno + 1;
+
+	switch (size) {
+	case PCI_SIZE_8:
+		cvmx_pcie_config_write8(port, busno, PCI_DEV(bdf),
+					PCI_FUNC(bdf), offset, value);
+		break;
+	case PCI_SIZE_16:
+		cvmx_pcie_config_write16(port, busno, PCI_DEV(bdf),
+					 PCI_FUNC(bdf), offset, value);
+		break;
+	case PCI_SIZE_32:
+		cvmx_pcie_config_write32(port, busno, PCI_DEV(bdf),
+					 PCI_FUNC(bdf), offset, value);
+		break;
+	default:
+		printf("Invalid size\n");
+	};
+
+	return 0;
+}
+
+static int pcie_octeon_read_config(const struct udevice *bus, pci_dev_t bdf,
+				   uint offset, ulong *valuep,
+				   enum pci_size_t size)
+{
+	struct octeon_pcie *pcie = dev_get_priv(bus);
+	struct pci_controller *hose = dev_get_uclass_priv(bus);
+	int busno;
+	int port;
+
+	port = pcie->pcie_port;
+	busno = PCI_BUS(bdf) - hose->first_busno + 1;
+	if (octeon_bdf_invalid(bdf, pcie->first_busno)) {
+		*valuep = pci_get_ff(size);
+		return 0;
+	}
+
+	switch (size) {
+	case PCI_SIZE_8:
+		*valuep = cvmx_pcie_config_read8(port, busno, PCI_DEV(bdf),
+						 PCI_FUNC(bdf), offset);
+		break;
+	case PCI_SIZE_16:
+		*valuep = cvmx_pcie_config_read16(port, busno, PCI_DEV(bdf),
+						  PCI_FUNC(bdf), offset);
+		break;
+	case PCI_SIZE_32:
+		*valuep = cvmx_pcie_config_read32(port, busno, PCI_DEV(bdf),
+						  PCI_FUNC(bdf), offset);
+		break;
+	default:
+		printf("Invalid size\n");
+	};
+
+	debug("%02x.%02x.%02x: u%d %x -> %lx\n",
+	      PCI_BUS(bdf), PCI_DEV(bdf), PCI_FUNC(bdf), size, offset, *valuep);
+
+	return 0;
+}
+
+static int pcie_octeon_probe(struct udevice *dev)
+{
+	struct octeon_pcie *pcie = dev_get_priv(dev);
+	int node = cvmx_get_node_num();
+	int pcie_port;
+	int ret = 0;
+
+	/* Get the PCIe port number from the device tree */
+	if (ofnode_read_u32(dev_ofnode(dev), "marvell,pcie-port",
+			    &pcie->port)) {
+		ret = -ENODEV;
+		goto err;
+	}
+
+	pcie->first_busno = dev_seq(dev);
+	pcie_port = ((node << 4) | pcie->port);
+	ret = cvmx_pcie_rc_initialize(pcie_port);
+	if (ret != 0)
+		return ret;
+
+	return 0;
+
+err:
+	return ret;
+}
+
+static const struct dm_pci_ops pcie_octeon_ops = {
+	.read_config = pcie_octeon_read_config,
+	.write_config = pcie_octeon_write_config,
+};
+
+static const struct udevice_id pcie_octeon_ids[] = {
+	{ .compatible = "marvell,pcie-host-octeon" },
+	{ }
+};
+
+U_BOOT_DRIVER(pcie_octeon) = {
+	.name		= "pcie_octeon",
+	.id		= UCLASS_PCI,
+	.of_match	= pcie_octeon_ids,
+	.ops		= &pcie_octeon_ops,
+	.probe		= pcie_octeon_probe,
+	.priv_auto	= sizeof(struct octeon_pcie),
+	.flags		= DM_FLAG_PRE_RELOC,
+};
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 33/50] mips: octeon: Add misc remaining header files
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (49 preceding siblings ...)
  2020-12-11 16:06 ` [PATCH v1 50/50] mips: octeon: octeon_ebb7304_defconfig: Enable Octeon PCIe and E1000 Stefan Roese
@ 2021-04-23  3:56 ` Stefan Roese
  2021-04-23 16:38   ` Daniel Schwierzeck
  2021-04-23 17:56 ` [PATCH v3 " Stefan Roese
  2021-04-24 22:49 ` [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Daniel Schwierzeck
  52 siblings, 1 reply; 57+ messages in thread
From: Stefan Roese @ 2021-04-23  3:56 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import the misc remaining header files from the 2013 U-Boot version.
These will be used by the drivers added later to support PCIe and
networking on the MIPS Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Aaron Williams <awilliams@marvell.com>
Cc: Chandrakala Chavva <cchavva@marvell.com>
Cc: Daniel Schwierzeck <daniel.schwierzeck@gmail.com>
---
v2:
- Add missing mach/octeon_qlm.h file (forgot to commit it in v1)

 .../mach-octeon/include/mach/cvmx-address.h   |  209 ++
 .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +++
 .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
 arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
 .../mach-octeon/include/mach/cvmx-error.h     |  456 +++
 arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 ++++
 .../include/mach/cvmx-global-resources.h      |  213 ++
 arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
 .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++++
 .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 ++++
 arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
 arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 ++
 .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
 .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 ++
 arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++++++
 .../include/mach/cvmx-pki-resources.h         |  157 +
 arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++++++
 .../mach/cvmx-pko-internal-ports-range.h      |   43 +
 .../include/mach/cvmx-pko3-queue.h            |  175 +
 arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 +++++++++++++++++
 arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 ++
 .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
 arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 ++++++++
 .../mach-octeon/include/mach/octeon_qlm.h     |  109 +
 26 files changed, 11765 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h
 create mode 100644 arch/mips/mach-octeon/include/mach/octeon_qlm.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-address.h b/arch/mips/mach-octeon/include/mach/cvmx-address.h
new file mode 100644
index 000000000000..984f574a75bb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-address.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Typedefs and defines for working with Octeon physical addresses.
+ */
+
+#ifndef __CVMX_ADDRESS_H__
+#define __CVMX_ADDRESS_H__
+
+typedef enum {
+	CVMX_MIPS_SPACE_XKSEG = 3LL,
+	CVMX_MIPS_SPACE_XKPHYS = 2LL,
+	CVMX_MIPS_SPACE_XSSEG = 1LL,
+	CVMX_MIPS_SPACE_XUSEG = 0LL
+} cvmx_mips_space_t;
+
+typedef enum {
+	CVMX_MIPS_XKSEG_SPACE_KSEG0 = 0LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG1 = 1LL,
+	CVMX_MIPS_XKSEG_SPACE_SSEG = 2LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG3 = 3LL
+} cvmx_mips_xkseg_space_t;
+
+/* decodes <14:13> of a kseg3 window address */
+typedef enum {
+	CVMX_ADD_WIN_SCR = 0L,
+	CVMX_ADD_WIN_DMA = 1L,
+	CVMX_ADD_WIN_UNUSED = 2L,
+	CVMX_ADD_WIN_UNUSED2 = 3L
+} cvmx_add_win_dec_t;
+
+/* decode within DMA space */
+typedef enum {
+	CVMX_ADD_WIN_DMA_ADD = 0L,
+	CVMX_ADD_WIN_DMA_SENDMEM = 1L,
+	/* store data must be normal DRAM memory space address in this case */
+	CVMX_ADD_WIN_DMA_SENDDMA = 2L,
+	/* see CVMX_ADD_WIN_DMA_SEND_DEC for data contents */
+	CVMX_ADD_WIN_DMA_SENDIO = 3L,
+	/* store data must be normal IO space address in this case */
+	CVMX_ADD_WIN_DMA_SENDSINGLE = 4L,
+	/* no write buffer data needed/used */
+} cvmx_add_win_dma_dec_t;
+
+/**
+ *   Physical Address Decode
+ *
+ * Octeon-I HW never interprets this X (<39:36> reserved
+ * for future expansion), software should set to 0.
+ *
+ *  - 0x0 XXX0 0000 0000 to      DRAM         Cached
+ *  - 0x0 XXX0 0FFF FFFF
+ *
+ *  - 0x0 XXX0 1000 0000 to      Boot Bus     Uncached  (Converted to 0x1 00X0 1000 0000
+ *  - 0x0 XXX0 1FFF FFFF         + EJTAG                           to 0x1 00X0 1FFF FFFF)
+ *
+ *  - 0x0 XXX0 2000 0000 to      DRAM         Cached
+ *  - 0x0 XXXF FFFF FFFF
+ *
+ *  - 0x1 00X0 0000 0000 to      Boot Bus     Uncached
+ *  - 0x1 00XF FFFF FFFF
+ *
+ *  - 0x1 01X0 0000 0000 to      Other NCB    Uncached
+ *  - 0x1 FFXF FFFF FFFF         devices
+ *
+ * Decode of all Octeon addresses
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 offset : 62;
+	} sva;
+
+	struct {
+		u64 zeroes : 33;
+		u64 offset : 31;
+	} suseg;
+
+	struct {
+		u64 ones : 33;
+		cvmx_mips_xkseg_space_t sp : 2;
+		u64 offset : 29;
+	} sxkseg;
+
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 cca : 3;
+		u64 mbz : 10;
+		u64 pa : 49;
+	} sxkphys;
+
+	struct {
+		u64 mbz : 15;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sphys;
+
+	struct {
+		u64 zeroes : 24;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} smem;
+
+	struct {
+		u64 mem_region : 2;
+		u64 mbz : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sio;
+
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 addr : 13;
+	} sscr;
+
+	/* there should only be stores to IOBDMA space, no loads */
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 unused2 : 3;
+		cvmx_add_win_dma_dec_t type : 3;
+		u64 addr : 7;
+	} sdma;
+
+	struct {
+		u64 didspace : 24;
+		u64 unused : 40;
+	} sfilldidspace;
+} cvmx_addr_t;
+
+/* These macros for used by 32 bit applications */
+
+#define CVMX_MIPS32_SPACE_KSEG0	     1l
+#define CVMX_ADD_SEG32(segment, add) (((s32)segment << 31) | (s32)(add))
+
+/*
+ * Currently all IOs are performed using XKPHYS addressing. Linux uses the
+ * CvmMemCtl register to enable XKPHYS addressing to IO space from user mode.
+ * Future OSes may need to change the upper bits of IO addresses. The
+ * following define controls the upper two bits for all IO addresses generated
+ * by the simple executive library
+ */
+#define CVMX_IO_SEG CVMX_MIPS_SPACE_XKPHYS
+
+/* These macros simplify the process of creating common IO addresses */
+#define CVMX_ADD_SEG(segment, add) ((((u64)segment) << 62) | (add))
+
+#define CVMX_ADD_IO_SEG(add) (add)
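+
+/*
+ * Example (illustrative): CVMX_ADD_SEG(CVMX_MIPS_SPACE_XKPHYS,
+ * 0x0001070000000800ull) sets bits <63:62> to 2 and yields the XKPHYS
+ * address 0x8001070000000800 (the offset used here is hypothetical).
+ */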
+
+#define CVMX_ADDR_DIDSPACE(did)	   (((CVMX_IO_SEG) << 22) | ((1ULL) << 8) | (did))
+#define CVMX_ADDR_DID(did)	   (CVMX_ADDR_DIDSPACE(did) << 40)
+#define CVMX_FULL_DID(did, subdid) (((did) << 3) | (subdid))
+
+/* from include/ncb_rsl_id.v */
+#define CVMX_OCT_DID_MIS  0ULL /* misc stuff */
+#define CVMX_OCT_DID_GMX0 1ULL
+#define CVMX_OCT_DID_GMX1 2ULL
+#define CVMX_OCT_DID_PCI  3ULL
+#define CVMX_OCT_DID_KEY  4ULL
+#define CVMX_OCT_DID_FPA  5ULL
+#define CVMX_OCT_DID_DFA  6ULL
+#define CVMX_OCT_DID_ZIP  7ULL
+#define CVMX_OCT_DID_RNG  8ULL
+#define CVMX_OCT_DID_IPD  9ULL
+#define CVMX_OCT_DID_PKT  10ULL
+#define CVMX_OCT_DID_TIM  11ULL
+#define CVMX_OCT_DID_TAG  12ULL
+/* the rest are not on the IO bus */
+#define CVMX_OCT_DID_L2C  16ULL
+#define CVMX_OCT_DID_LMC  17ULL
+#define CVMX_OCT_DID_SPX0 18ULL
+#define CVMX_OCT_DID_SPX1 19ULL
+#define CVMX_OCT_DID_PIP  20ULL
+#define CVMX_OCT_DID_ASX0 22ULL
+#define CVMX_OCT_DID_ASX1 23ULL
+#define CVMX_OCT_DID_IOB  30ULL
+
+#define CVMX_OCT_DID_PKT_SEND	 CVMX_FULL_DID(CVMX_OCT_DID_PKT, 2ULL)
+#define CVMX_OCT_DID_TAG_SWTAG	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 0ULL)
+#define CVMX_OCT_DID_TAG_TAG1	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 1ULL)
+#define CVMX_OCT_DID_TAG_TAG2	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 2ULL)
+#define CVMX_OCT_DID_TAG_TAG3	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 3ULL)
+#define CVMX_OCT_DID_TAG_NULL_RD CVMX_FULL_DID(CVMX_OCT_DID_TAG, 4ULL)
+#define CVMX_OCT_DID_TAG_TAG5	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 5ULL)
+#define CVMX_OCT_DID_TAG_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL)
+#define CVMX_OCT_DID_FAU_FAI	 CVMX_FULL_DID(CVMX_OCT_DID_IOB, 0ULL)
+#define CVMX_OCT_DID_TIM_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TIM, 0ULL)
+#define CVMX_OCT_DID_KEY_RW	 CVMX_FULL_DID(CVMX_OCT_DID_KEY, 0ULL)
+#define CVMX_OCT_DID_PCI_6	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 6ULL)
+#define CVMX_OCT_DID_MIS_BOO	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 0ULL)
+#define CVMX_OCT_DID_PCI_RML	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 0ULL)
+#define CVMX_OCT_DID_IPD_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_IPD, 7ULL)
+#define CVMX_OCT_DID_DFA_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_DFA, 7ULL)
+#define CVMX_OCT_DID_MIS_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 7ULL)
+#define CVMX_OCT_DID_ZIP_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_ZIP, 0ULL)
+
+/* Cast to unsigned long long, mainly for use in printfs. */
+#define CAST_ULL(v) ((unsigned long long)(v))
+
+#define UNMAPPED_PTR(x) ((1ULL << 63) | (x))
+
+#endif /* __CVMX_ADDRESS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
new file mode 100644
index 000000000000..ddc294348cb4
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
@@ -0,0 +1,441 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Support functions for managing command queues used for
+ * various hardware blocks.
+ *
+ * The common command queue infrastructure abstracts out the
+ * software necessary for adding to Octeon's chained queue
+ * structures. These structures are used for commands to the
+ * PKO, ZIP, DFA, RAID, HNA, and DMA engine blocks. Although each
+ * hardware unit takes commands and CSRs of different types,
+ * they all use basic linked command buffers to store the
+ * pending request. In general, users of the CVMX API don't
+ * call cvmx-cmd-queue functions directly. Instead the hardware
+ * unit specific wrapper should be used. The wrappers perform
+ * unit specific validation and CSR writes to submit the
+ * commands.
+ *
+ * Even though most software will never directly interact with
+ * cvmx-cmd-queue, knowledge of its internal workings can help
+ * in diagnosing performance problems and help with debugging.
+ *
+ * Command queue pointers are stored in a global named block
+ * called "cvmx_cmd_queues". Except for the PKO queues, each
+ * hardware queue is stored in its own cache line to reduce SMP
+ * contention on spin locks. The PKO queues are stored such that
+ * every 16th queue is next to each other in memory. This scheme
+ * allows for queues being in separate cache lines when there
+ * is a low number of queues per port. With 16 queues per port,
+ * the first queue for each port is in the same cache area. The
+ * second queues for each port are in another area, etc. This
+ * allows software to implement very efficient lockless PKO with
+ * 16 queues per port using a minimum of cache lines per core.
+ * All queues for a given core will be isolated in the same
+ * cache area.
+ *
+ * In addition to the memory pointer layout, cvmx-cmd-queue
+ * provides an optimized fair ll/sc locking mechanism for the
+ * queues. The lock uses a "ticket / now serving" model to
+ * maintain fair order on contended locks. In addition, it uses
+ * predicted locking time to limit cache contention. When a core
+ * knows it must wait in line for a lock, it spins on the
+ * internal cycle counter to completely eliminate any causes of
+ * bus traffic.
+ */
+
+#ifndef __CVMX_CMD_QUEUE_H__
+#define __CVMX_CMD_QUEUE_H__
+
+/**
+ * By default we disable the max depth support. Most programs
+ * don't use it and it slows down the command queue processing
+ * significantly.
+ */
+#ifndef CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH
+#define CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH 0
+#endif
+
+/**
+ * Enumeration representing all hardware blocks that use command
+ * queues. Each hardware block has up to 65536 sub identifiers for
+ * multiple command queues. Not all chips support all hardware
+ * units.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_PKO_BASE = 0x00000,
+#define CVMX_CMD_QUEUE_PKO(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_PKO_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_ZIP = 0x10000,
+#define CVMX_CMD_QUEUE_ZIP_QUE(queue)                                                              \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_ZIP + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_DFA = 0x20000,
+	CVMX_CMD_QUEUE_RAID = 0x30000,
+	CVMX_CMD_QUEUE_DMA_BASE = 0x40000,
+#define CVMX_CMD_QUEUE_DMA(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_DMA_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_BCH = 0x50000,
+#define CVMX_CMD_QUEUE_BCH(queue) ((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_BCH + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_HNA = 0x60000,
+	CVMX_CMD_QUEUE_END = 0x70000,
+} cvmx_cmd_queue_id_t;
+
+#define CVMX_CMD_QUEUE_ZIP3_QUE(node, queue)                                                       \
+	((cvmx_cmd_queue_id_t)((node) << 24 | CVMX_CMD_QUEUE_ZIP | (0xffff & (queue))))
+
+/**
+ * Command write operations can fail if the command queue needs
+ * a new buffer and the associated FPA pool is empty. It can also
+ * fail if the number of queued command words reaches the maximum
+ * set at initialization.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_SUCCESS = 0,
+	CVMX_CMD_QUEUE_NO_MEMORY = -1,
+	CVMX_CMD_QUEUE_FULL = -2,
+	CVMX_CMD_QUEUE_INVALID_PARAM = -3,
+	CVMX_CMD_QUEUE_ALREADY_SETUP = -4,
+} cvmx_cmd_queue_result_t;
+
+typedef struct {
+	/* First 64-bit word: */
+	u64 fpa_pool : 16;
+	u64 base_paddr : 48;
+	s32 index;
+	u16 max_depth;
+	u16 pool_size_m1;
+} __cvmx_cmd_queue_state_t;
+
+/**
+ * command-queue locking uses a fair ticket spinlock algo,
+ * with 64-bit tickets for endianness-neutrality and
+ * counter overflow protection.
+ * Lock is free when both counters are of equal value.
+ */
+typedef struct {
+	u64 ticket;
+	u64 now_serving;
+} __cvmx_cmd_queue_lock_t;
+
+/**
+ * @INTERNAL
+ * This structure contains the global state of all command queues.
+ * It is stored in a bootmem named block and shared by all
+ * applications running on Octeon. Tickets are stored in a different
+ * cache line than the queue information to reduce the contention on the
+ * ll/sc used to get a ticket. If this is not the case, the update
+ * of queue state causes the ll/sc to fail quite often.
+ */
+typedef struct {
+	__cvmx_cmd_queue_lock_t lock[(CVMX_CMD_QUEUE_END >> 16) * 256];
+	__cvmx_cmd_queue_state_t state[(CVMX_CMD_QUEUE_END >> 16) * 256];
+} __cvmx_cmd_queue_all_state_t;
+
+extern __cvmx_cmd_queue_all_state_t *__cvmx_cmd_queue_state_ptrs[CVMX_MAX_NODES];
+
+/**
+ * @INTERNAL
+ * Internal function to handle the corner cases
+ * of adding command words to a queue when the current
+ * block is getting full.
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_write_raw(cvmx_cmd_queue_id_t queue_id,
+						   __cvmx_cmd_queue_state_t *qptr, int cmd_count,
+						   const u64 *cmds);
+
+/**
+ * Initialize a command queue for use. The initial FPA buffer is
+ * allocated and the hardware unit is configured to point to the
+ * new command queue.
+ *
+ * @param queue_id  Hardware command queue to initialize.
+ * @param max_depth Maximum outstanding commands that can be queued.
+ * @param fpa_pool  FPA pool the command queues should come from.
+ * @param pool_size Size of each buffer in the FPA pool (bytes)
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_initialize(cvmx_cmd_queue_id_t queue_id, int max_depth,
+						  int fpa_pool, int pool_size);
+
+/**
+ * Shut down a queue and free its command buffers to the FPA. The
+ * hardware connected to the queue must be stopped before this
+ * function is called.
+ *
+ * @param queue_id Queue to shutdown
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_shutdown(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the number of command words pending in the queue. This
+ * function may be relatively slow for some hardware units.
+ *
+ * @param queue_id Hardware command queue to query
+ *
+ * @return Number of outstanding commands
+ */
+int cvmx_cmd_queue_length(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the command buffer to be written to. The purpose of this
+ * function is to allow CVMX routines access to the low-level buffer
+ * for initial hardware setup. User applications should not call this
+ * function directly.
+ *
+ * @param queue_id Command queue to query
+ *
+ * @return Command buffer or NULL on failure
+ */
+void *cvmx_cmd_queue_buffer(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * @INTERNAL
+ * Retrieve or allocate command queue state named block
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(unsigned int node);
+
+/**
+ * @INTERNAL
+ * Get the index into the state arrays for the supplied queue id.
+ *
+ * @param queue_id Queue ID to get an index for
+ *
+ * @return Index into the state arrays
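+ *
+ * Note (illustrative): because the core and queue nibbles are swapped,
+ * CVMX_CMD_QUEUE_PKO(1) maps to index 16 while CVMX_CMD_QUEUE_PKO(16)
+ * maps to index 1, which spreads consecutive queues of a port across
+ * separate cache areas.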
+ */
+static inline unsigned int __cvmx_cmd_queue_get_index(cvmx_cmd_queue_id_t queue_id)
+{
+	/* Warning: This code currently only works with devices that have 256
+	 * queues or less.  Devices with more than 16 queues are laid out in
+	 * memory to allow cores quick access to every 16th queue. This reduces
+	 * cache thrashing when you are running 16 queues per port to support
+	 * lockless operation
+	 */
+	unsigned int unit = (queue_id >> 16) & 0xff;
+	unsigned int q = (queue_id >> 4) & 0xf;
+	unsigned int core = queue_id & 0xf;
+
+	return (unit << 8) | (core << 4) | q;
+}
+
+static inline int __cvmx_cmd_queue_get_node(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int node = queue_id >> 24;
+	return node;
+}
+
+/**
+ * @INTERNAL
+ * Lock the supplied queue so nobody else is updating it at the same
+ * time as us.
+ *
+ * @param queue_id Queue ID to lock
+ *
+ */
+static inline void __cvmx_cmd_queue_lock(cvmx_cmd_queue_id_t queue_id)
+{
+}
+
+/**
+ * @INTERNAL
+ * Unlock the queue, flushing all writes.
+ *
+ * @param queue_id Queue ID to unlock
+ *
+ */
+static inline void __cvmx_cmd_queue_unlock(cvmx_cmd_queue_id_t queue_id)
+{
+	CVMX_SYNCWS; /* nudge out the unlock. */
+}
+
+/**
+ * @INTERNAL
+ * Initialize a command-queue lock to "unlocked" state.
+ */
+static inline void __cvmx_cmd_queue_lock_init(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index = __cvmx_cmd_queue_get_index(queue_id);
+	unsigned int node = __cvmx_cmd_queue_get_node(queue_id);
+
+	__cvmx_cmd_queue_state_ptrs[node]->lock[index] = (__cvmx_cmd_queue_lock_t){ 0, 0 };
+	CVMX_SYNCWS;
+}
+
+/**
+ * @INTERNAL
+ * Get the queue state structure for the given queue id
+ *
+ * @param queue_id Queue id to get
+ *
+ * @return Queue structure or NULL on failure
+ */
+static inline __cvmx_cmd_queue_state_t *__cvmx_cmd_queue_get_state(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index;
+	unsigned int node;
+	__cvmx_cmd_queue_state_t *qptr;
+
+	node = __cvmx_cmd_queue_get_node(queue_id);
+	index = __cvmx_cmd_queue_get_index(queue_id);
+
+	if (cvmx_unlikely(!__cvmx_cmd_queue_state_ptrs[node]))
+		__cvmx_cmd_queue_init_state_ptr(node);
+
+	qptr = &__cvmx_cmd_queue_state_ptrs[node]->state[index];
+	return qptr;
+}
+
+/**
+ * Write an arbitrary number of command words to a command queue.
+ * This is a generic function; the fixed number of command word
+ * functions yield higher performance.
+ *
+ * @param queue_id  Hardware command queue to write to
+ * @param use_locking
+ *                  Use internal locking to ensure exclusive access for queue
+ *                  updates. If you don't use this locking you must ensure
+ *                  exclusivity some other way. Locking is strongly recommended.
+ * @param cmd_count Number of command words to write
+ * @param cmds      Array of commands to write
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
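+ *
+ * Illustrative usage (hypothetical queue id and command words):
+ *
+ *	u64 cmds[2] = { word0, word1 };
+ *	cvmx_cmd_queue_write(CVMX_CMD_QUEUE_PKO(5), true, 2, cmds);
+ *
+ * A return value other than CVMX_CMD_QUEUE_SUCCESS means the queue was
+ * full or its FPA pool was empty.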
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write(cvmx_cmd_queue_id_t queue_id, bool use_locking, int cmd_count, const u64 *cmds)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	/* Most of the time there are lots of free words in the current block */
+	if (cvmx_unlikely((qptr->index + cmd_count) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, cmd_count, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		/* Loop easy for compiler to unroll for the likely case */
+		while (cmd_count > 0) {
+			cmd_ptr[qptr->index++] = *cmds++;
+			cmd_count--;
+		}
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
+/**
+ * Simple function to write two command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t cvmx_cmd_queue_write2(cvmx_cmd_queue_id_t queue_id,
+							    bool use_locking, u64 cmd1, u64 cmd2)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 2) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		u64 cmds[2];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 2, cmds);
+	} else {
+		/* Likely case to work fast */
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 2;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
+/**
+ * Simple function to write three command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ * @param cmd3     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write3(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2, u64 cmd3)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+	u64 *cmd_ptr;
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 3) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing the end of the block */
+		u64 cmds[3];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		cmds[2] = cmd3;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 3, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 3;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+		cmd_ptr[2] = cmd3;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
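+
+/*
+ * Usage sketch (illustrative only; the queue id and command words are
+ * placeholders): callers build the command words for their hardware block
+ * and hand them to one of the fixed-size helpers above.
+ *
+ *   cvmx_cmd_queue_result_t res;
+ *
+ *   res = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(0), true, cmd0, cmd1);
+ *   if (res != CVMX_CMD_QUEUE_SUCCESS)
+ *           debug("%s: queue write failed (%d)\n", __func__, res);
+ */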
+
+#endif /* __CVMX_CMD_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
new file mode 100644
index 000000000000..a8625b4228ac
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Definitions for enumerations used with Octeon CSRs.
+ */
+
+#ifndef __CVMX_CSR_ENUMS_H__
+#define __CVMX_CSR_ENUMS_H__
+
+typedef enum {
+	CVMX_IPD_OPC_MODE_STT = 0LL,
+	CVMX_IPD_OPC_MODE_STF = 1LL,
+	CVMX_IPD_OPC_MODE_STF1_STT = 2LL,
+	CVMX_IPD_OPC_MODE_STF2_STT = 3LL
+} cvmx_ipd_mode_t;
+
+/**
+ * Enumeration representing the amount of packet processing
+ * and validation performed by the input hardware.
+ */
+typedef enum {
+	CVMX_PIP_PORT_CFG_MODE_NONE = 0ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPL2 = 1ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPIP = 2ull
+} cvmx_pip_port_parse_mode_t;
+
+/**
+ * This enumeration controls how a QoS watcher matches a packet.
+ *
+ * @deprecated  This enumeration was used with cvmx_pip_config_watcher which has
+ *              been deprecated.
+ */
+typedef enum {
+	CVMX_PIP_QOS_WATCH_DISABLE = 0ull,
+	CVMX_PIP_QOS_WATCH_PROTNH = 1ull,
+	CVMX_PIP_QOS_WATCH_TCP = 2ull,
+	CVMX_PIP_QOS_WATCH_UDP = 3ull
+} cvmx_pip_qos_watch_types;
+
+/**
+ * This enumeration is used in PIP tag config to control how
+ * POW tags are generated by the hardware.
+ */
+typedef enum {
+	CVMX_PIP_TAG_MODE_TUPLE = 0ull,
+	CVMX_PIP_TAG_MODE_MASK = 1ull,
+	CVMX_PIP_TAG_MODE_IP_OR_MASK = 2ull,
+	CVMX_PIP_TAG_MODE_TUPLE_XOR_MASK = 3ull
+} cvmx_pip_tag_mode_t;
+
+/**
+ * Tag type definitions
+ */
+typedef enum {
+	CVMX_POW_TAG_TYPE_ORDERED = 0L,
+	CVMX_POW_TAG_TYPE_ATOMIC = 1L,
+	CVMX_POW_TAG_TYPE_NULL = 2L,
+	CVMX_POW_TAG_TYPE_NULL_NULL = 3L
+} cvmx_pow_tag_type_t;
+
+/**
+ * LCR bits 0 and 1 control the number of bits per character. See the following table for encodings:
+ *
+ * - 00 = 5 bits (bits 0-4 sent)
+ * - 01 = 6 bits (bits 0-5 sent)
+ * - 10 = 7 bits (bits 0-6 sent)
+ * - 11 = 8 bits (all bits sent)
+ */
+typedef enum {
+	CVMX_UART_BITS5 = 0,
+	CVMX_UART_BITS6 = 1,
+	CVMX_UART_BITS7 = 2,
+	CVMX_UART_BITS8 = 3
+} cvmx_uart_bits_t;
+
+typedef enum {
+	CVMX_UART_IID_NONE = 1,
+	CVMX_UART_IID_RX_ERROR = 6,
+	CVMX_UART_IID_RX_DATA = 4,
+	CVMX_UART_IID_RX_TIMEOUT = 12,
+	CVMX_UART_IID_TX_EMPTY = 2,
+	CVMX_UART_IID_MODEM = 0,
+	CVMX_UART_IID_BUSY = 7
+} cvmx_uart_iid_t;
+
+#endif /* __CVMX_CSR_ENUMS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr.h b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
new file mode 100644
index 000000000000..730d54bb9278
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) address and type definitions for
+ * Octeon.
+ */
+
+#ifndef __CVMX_CSR_H__
+#define __CVMX_CSR_H__
+
+#include "cvmx-csr-enums.h"
+#include "cvmx-pip-defs.h"
+
+typedef cvmx_pip_prt_cfgx_t cvmx_pip_port_cfg_t;
+
+/* The CSRs for bootbus region zero used to be independent of the
+    other regions 1-7. As of SDK 1.7.0 these were combined. These macros
+    are for backwards compatibility */
+#define CVMX_MIO_BOOT_REG_CFG0 CVMX_MIO_BOOT_REG_CFGX(0)
+#define CVMX_MIO_BOOT_REG_TIM0 CVMX_MIO_BOOT_REG_TIMX(0)
+
+/* The CN3XXX and CN58XX chips did not have an LMC number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_LMC_BIST_CTL	  CVMX_LMCX_BIST_CTL(0)
+#define CVMX_LMC_BIST_RESULT	  CVMX_LMCX_BIST_RESULT(0)
+#define CVMX_LMC_COMP_CTL	  CVMX_LMCX_COMP_CTL(0)
+#define CVMX_LMC_CTL		  CVMX_LMCX_CTL(0)
+#define CVMX_LMC_CTL1		  CVMX_LMCX_CTL1(0)
+#define CVMX_LMC_DCLK_CNT_HI	  CVMX_LMCX_DCLK_CNT_HI(0)
+#define CVMX_LMC_DCLK_CNT_LO	  CVMX_LMCX_DCLK_CNT_LO(0)
+#define CVMX_LMC_DCLK_CTL	  CVMX_LMCX_DCLK_CTL(0)
+#define CVMX_LMC_DDR2_CTL	  CVMX_LMCX_DDR2_CTL(0)
+#define CVMX_LMC_DELAY_CFG	  CVMX_LMCX_DELAY_CFG(0)
+#define CVMX_LMC_DLL_CTL	  CVMX_LMCX_DLL_CTL(0)
+#define CVMX_LMC_DUAL_MEMCFG	  CVMX_LMCX_DUAL_MEMCFG(0)
+#define CVMX_LMC_ECC_SYND	  CVMX_LMCX_ECC_SYND(0)
+#define CVMX_LMC_FADR		  CVMX_LMCX_FADR(0)
+#define CVMX_LMC_IFB_CNT_HI	  CVMX_LMCX_IFB_CNT_HI(0)
+#define CVMX_LMC_IFB_CNT_LO	  CVMX_LMCX_IFB_CNT_LO(0)
+#define CVMX_LMC_MEM_CFG0	  CVMX_LMCX_MEM_CFG0(0)
+#define CVMX_LMC_MEM_CFG1	  CVMX_LMCX_MEM_CFG1(0)
+#define CVMX_LMC_OPS_CNT_HI	  CVMX_LMCX_OPS_CNT_HI(0)
+#define CVMX_LMC_OPS_CNT_LO	  CVMX_LMCX_OPS_CNT_LO(0)
+#define CVMX_LMC_PLL_BWCTL	  CVMX_LMCX_PLL_BWCTL(0)
+#define CVMX_LMC_PLL_CTL	  CVMX_LMCX_PLL_CTL(0)
+#define CVMX_LMC_PLL_STATUS	  CVMX_LMCX_PLL_STATUS(0)
+#define CVMX_LMC_READ_LEVEL_CTL	  CVMX_LMCX_READ_LEVEL_CTL(0)
+#define CVMX_LMC_READ_LEVEL_DBG	  CVMX_LMCX_READ_LEVEL_DBG(0)
+#define CVMX_LMC_READ_LEVEL_RANKX CVMX_LMCX_READ_LEVEL_RANKX(0)
+#define CVMX_LMC_RODT_COMP_CTL	  CVMX_LMCX_RODT_COMP_CTL(0)
+#define CVMX_LMC_RODT_CTL	  CVMX_LMCX_RODT_CTL(0)
+#define CVMX_LMC_WODT_CTL	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL0	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL1	  CVMX_LMCX_WODT_CTL1(0)
+
+/* The CN3XXX and CN58XX chips did not have a TWSI bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_MIO_TWS_INT	 CVMX_MIO_TWSX_INT(0)
+#define CVMX_MIO_TWS_SW_TWSI	 CVMX_MIO_TWSX_SW_TWSI(0)
+#define CVMX_MIO_TWS_SW_TWSI_EXT CVMX_MIO_TWSX_SW_TWSI_EXT(0)
+#define CVMX_MIO_TWS_TWSI_SW	 CVMX_MIO_TWSX_TWSI_SW(0)
+
+/* The CN3XXX and CN58XX chips did not have an SMI/MDIO bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_SMI_CLK	CVMX_SMIX_CLK(0)
+#define CVMX_SMI_CMD	CVMX_SMIX_CMD(0)
+#define CVMX_SMI_EN	CVMX_SMIX_EN(0)
+#define CVMX_SMI_RD_DAT CVMX_SMIX_RD_DAT(0)
+#define CVMX_SMI_WR_DAT CVMX_SMIX_WR_DAT(0)
+
+#endif /* __CVMX_CSR_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-error.h b/arch/mips/mach-octeon/include/mach/cvmx-error.h
new file mode 100644
index 000000000000..9a13ed422484
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-error.h
@@ -0,0 +1,456 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the Octeon extended error status.
+ */
+
+#ifndef __CVMX_ERROR_H__
+#define __CVMX_ERROR_H__
+
+/**
+ * There are generally many error status bits associated with a
+ * single logical group. The enumeration below is used to
+ * communicate high-level groups to the error infrastructure so
+ * error status bits can be enabled or disabled in large groups.
+ */
+typedef enum {
+	CVMX_ERROR_GROUP_INTERNAL,
+	CVMX_ERROR_GROUP_L2C,
+	CVMX_ERROR_GROUP_ETHERNET,
+	CVMX_ERROR_GROUP_MGMT_PORT,
+	CVMX_ERROR_GROUP_PCI,
+	CVMX_ERROR_GROUP_SRIO,
+	CVMX_ERROR_GROUP_USB,
+	CVMX_ERROR_GROUP_LMC,
+	CVMX_ERROR_GROUP_ILK,
+	CVMX_ERROR_GROUP_DFM,
+	CVMX_ERROR_GROUP_ILA,
+} cvmx_error_group_t;
+
+/**
+ * Flags representing special handling for some error registers.
+ * These flags are passed to cvmx_error_initialize() to control
+ * the handling of bits where the same flags were passed to the
+ * added cvmx_error_info_t.
+ */
+typedef enum {
+	CVMX_ERROR_TYPE_NONE = 0,
+	CVMX_ERROR_TYPE_SBE = 1 << 0,
+	CVMX_ERROR_TYPE_DBE = 1 << 1,
+} cvmx_error_type_t;
+
+/**
+ * When registering for interest in an error status register, the
+ * type of the register needs to be known by cvmx-error. Most
+ * registers are either IO64 or IO32, but some blocks contain
+ * registers that can't be directly accessed. A good example
+ * would be the PCIe extended error state stored in config space.
+ */
+typedef enum {
+	__CVMX_ERROR_REGISTER_NONE,
+	CVMX_ERROR_REGISTER_IO64,
+	CVMX_ERROR_REGISTER_IO32,
+	CVMX_ERROR_REGISTER_PCICONFIG,
+	CVMX_ERROR_REGISTER_SRIOMAINT,
+} cvmx_error_register_t;
+
+struct cvmx_error_info;
+/**
+ * Error handling functions must have the following prototype.
+ */
+typedef int (*cvmx_error_func_t)(const struct cvmx_error_info *info);
+
+/**
+ * This structure is passed to all error handling functions.
+ */
+typedef struct cvmx_error_info {
+	cvmx_error_register_t reg_type;
+	u64 status_addr;
+	u64 status_mask;
+	u64 enable_addr;
+	u64 enable_mask;
+	cvmx_error_type_t flags;
+	cvmx_error_group_t group;
+	int group_index;
+	cvmx_error_func_t func;
+	u64 user_info;
+	struct {
+		cvmx_error_register_t reg_type;
+		u64 status_addr;
+		u64 status_mask;
+	} parent;
+} cvmx_error_info_t;
+
+/**
+ * Initialize the error status system. This should be called once
+ * before any other functions are called. This function adds default
+ * handlers for almost all error events but does not enable them. Later
+ * calls to cvmx_error_enable() are needed.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize(void);
+
+/**
+ * Poll the error status registers and call the appropriate error
+ * handlers. This should be called in the RSL interrupt handler
+ * for your application or operating system.
+ *
+ * @return Number of error handlers called. Zero means this call
+ *         found no errors and was spurious.
+ */
+int cvmx_error_poll(void);
+
+/**
+ * Register to be called when an error status bit is set. Most users
+ * will not need to call this function as cvmx_error_initialize()
+ * registers default handlers for most error conditions. This function
+ * is normally used to add more handlers without changing the existing
+ * handlers.
+ *
+ * @param new_info Information about the handler for an error register. The
+ *                 structure passed is copied and can be destroyed after the
+ *                 call. All members of the structure must be populated, even the
+ *                 parent information.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_add(const cvmx_error_info_t *new_info);
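+
+/*
+ * Usage sketch (the status address, mask and handler below are hypothetical
+ * and only illustrate how a handler is registered): populate every member
+ * of the cvmx_error_info_t, including the parent fields, then add it.
+ *
+ *   static int my_handler(const struct cvmx_error_info *info)
+ *   {
+ *           debug("error bit set at 0x%llx\n",
+ *                 (unsigned long long)info->status_addr);
+ *           return 1;
+ *   }
+ *
+ *   cvmx_error_info_t info = { 0 };
+ *
+ *   info.reg_type = CVMX_ERROR_REGISTER_IO64;
+ *   info.status_addr = MY_STATUS_CSR;	/* hypothetical CSR address */
+ *   info.status_mask = 1ull << 0;
+ *   info.func = my_handler;
+ *   cvmx_error_add(&info);
+ */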
+
+/**
+ * Remove all handlers for a status register and mask. Normally
+ * this function should not be called. Instead a new handler should be
+ * installed to replace the existing handler. In the event that all
+ * reporting of an error bit should be removed, then use this
+ * function.
+ *
+ * @param reg_type Type of the status register to remove
+ * @param status_addr
+ *                 Status register to remove.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 removed.
+ * @param old_info If not NULL, this is filled with information about the handler
+ *                 that was removed.
+ *
+ * @return Zero on success, negative on failure (not found).
+ */
+int cvmx_error_remove(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+		      cvmx_error_info_t *old_info);
+
+/**
+ * Change the function and user_info for an existing error status
+ * register. This function should be used to replace the default
+ * handler with an application specific version as needed.
+ *
+ * @param reg_type Type of the status register to change
+ * @param status_addr
+ *                 Status register to change.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 changed.
+ * @param new_func New function to use to handle the error status
+ * @param new_user_info
+ *                 New user info parameter for the function
+ * @param old_func If not NULL, the old function is returned. Useful for restoring
+ *                 the old handler.
+ * @param old_user_info
+ *                 If not NULL, the old user info parameter.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_error_change_handler(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+			      cvmx_error_func_t new_func, u64 new_user_info,
+			      cvmx_error_func_t *old_func, u64 *old_user_info);
+
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalize the calls throughout the executive to not enable
+ * interrupts in U-Boot, simply make the enable function do nothing.
+ */
+static inline int cvmx_error_enable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Disable all error registers for a logical group. This should be
+ * called whenever a logical group is brought offline. Many blocks
+ * will report spurious errors when offline unless this function
+ * is called.
+ *
+ * @param group  Logical group to disable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalize the calls throughout the executive to not disable
+ * interrupts in U-Boot, simply make the disable function do nothing.
+ */
+static inline int cvmx_error_disable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Enable all handlers for a specific status register mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 enabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_enable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * Disable all handlers for a specific status register and mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 disabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_disable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * @INTERNAL
+ * Function for processing non-leaf error status registers. This function
+ * calls all handlers for this passed register and all children linked
+ * to it.
+ *
+ * @param info   Error register to check
+ *
+ * @return Number of error status bits found or zero if no bits were set.
+ */
+int __cvmx_error_decode(const cvmx_error_info_t *info);
+
+/**
+ * @INTERNAL
+ * This error bit handler simply prints a message and clears the status bit
+ *
+ * @param info   Error register to check
+ */
+int __cvmx_error_display(const cvmx_error_info_t *info);
+
+/**
+ * Find the handler for a specific status register and mask
+ *
+ * @param status_addr
+ *                Status register address
+ *
+ * @return  Returns the handler on success or NULL on failure.
+ */
+cvmx_error_info_t *cvmx_error_get_index(u64 status_addr);
+
+void __cvmx_install_gmx_error_handler_for_xaui(void);
+
+/**
+ * 78xx related
+ */
+/**
+ * Compare two INTSN values.
+ *
+ * @param key INTSN value to search for
+ * @param data current entry from the searched array
+ *
+ * @return Negative, 0 or positive when respectively key is less than,
+ *		equal or greater than data.
+ */
+int cvmx_error_intsn_cmp(const void *key, const void *data);
+
+/**
+ * @INTERNAL
+ *
+ * @param node Node number
+ *
+ * @param intsn   Interrupt source number to display
+ *
+ * @return Zero on success, -1 on error
+ */
+int cvmx_error_intsn_display_v3(int node, u32 intsn);
+
+/**
+ * Initialize the error status system for cn78xx. This should be called once
+ * before any other functions are called. This function enables the interrupts
+ * described in the array.
+ *
+ * @param node Node number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize_cn78xx(int node);
+
+/**
+ * Enable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_enable_v3(int node, u32 intsn);
+
+/**
+ * Disable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_disable_v3(int node, u32 intsn);
+
+/**
+ * Clear interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_clear_v3(int node, u32 intsn);
+
+/**
+ * Enable interrupts for a specific CSR(all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_csr_enable_v3(int node, u64 csr_address);
+
+/**
+ * Disable interrupts for a specific CSR (all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero
+ */
+int cvmx_error_csr_disable_v3(int node, u64 csr_address);
+
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Disable all error registers for a logical group.
+ *
+ * @param group  Logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param type   Category in a logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				    int xipd_port);
+
+/**
+ * Disable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to disable
+ * @param type   Category in a logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				     int xipd_port);
+
+/**
+ * Clear all error registers for a logical group.
+ *
+ * @param group  Logical group to clear
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_clear_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to enable
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_type_v3(int node, cvmx_error_type_t type);
+
+/**
+ * Disable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to disable
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_type_v3(int node, cvmx_error_type_t type);
+
+void cvmx_octeon_hang(void) __attribute__((__noreturn__));
+
+/**
+ * @INTERNAL
+ *
+ * Process L2C single and multi-bit ECC errors
+ *
+ */
+int __cvmx_cn7xxx_l2c_l2d_ecc_error_display(int node, int intsn);
+
+/**
+ * Handle L2 cache TAG ECC errors and noway errors
+ *
+ * @param	node	CCPI node
+ * @param	intsn	intsn from error array.
+ * @param	remote	true for remote node (cn78xx only)
+ *
+ * @return	1 if handled, 0 if not handled
+ */
+int __cvmx_cn7xxx_l2c_tag_error_display(int node, int intsn, bool remote);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
new file mode 100644
index 000000000000..297fb3f4a28c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator.
+ */
+
+#ifndef __CVMX_FPA_H__
+#define __CVMX_FPA_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa1.h"
+#include "cvmx-fpa3.h"
+
+#define CVMX_FPA_MIN_BLOCK_SIZE 128
+#define CVMX_FPA_ALIGNMENT	128
+#define CVMX_FPA_POOL_NAME_LEN	16
+
+/* On CN78XX in backward-compatible mode, pool is mapped to AURA */
+#define CVMX_FPA_NUM_POOLS                                                                         \
+	(octeon_has_feature(OCTEON_FEATURE_FPA3) ? cvmx_fpa3_num_auras() : CVMX_FPA1_NUM_POOLS)
+
+/**
+ * Structure to store FPA pool configuration parameters.
+ */
+struct cvmx_fpa_pool_config {
+	s64 pool_num;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+typedef struct cvmx_fpa_pool_config cvmx_fpa_pool_config_t;
+
+/**
+ * Return the name of the pool
+ *
+ * @param pool_num   Pool to get the name of
+ * @return The name
+ */
+const char *cvmx_fpa_get_name(int pool_num);
+
+/**
+ * Initialize FPA per node
+ */
+int cvmx_fpa_global_init_node(int node);
+
+/**
+ * Enable the FPA
+ */
+static inline void cvmx_fpa_enable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_enable();
+	else
+		cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * Disable the FPA
+ */
+static inline void cvmx_fpa_disable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_disable();
+	/* FPA3 does not have a disable function */
+}
+
+/**
+ * @INTERNAL
+ * @deprecated OBSOLETE
+ *
+ * Kept for transition assistance only
+ */
+static inline void cvmx_fpa_global_initialize(void)
+{
+	cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * @INTERNAL
+ *
+ * Convert FPA1 style POOL into FPA3 AURA in
+ * backward compatibility mode.
+ */
+static inline cvmx_fpa3_gaura_t cvmx_fpa1_pool_to_fpa3_aura(cvmx_fpa1_pool_t pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
+		unsigned int node = cvmx_get_node_num();
+		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(node, pool);
+		return aura;
+	}
+	return CVMX_FPA3_INVALID_GAURA;
+}
+
+/**
+ * Get a new block from the FPA
+ *
+ * @param pool   Pool to get the block from
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_alloc(u64 pool)
+{
+	/* FPA3 is handled differently */
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		return cvmx_fpa3_alloc(cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_alloc(pool);
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa_async_alloc(u64 scr_addr, u64 pool)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		return cvmx_fpa3_async_alloc(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_async_alloc(scr_addr, pool);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_async_alloc_finish(u64 scr_addr, u64 pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		return cvmx_fpa3_async_alloc_finish(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_async_alloc_finish(scr_addr, pool);
+}
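+
+/*
+ * Typical async allocation pattern (sketch; CVMX_SCR_SCRATCH is assumed to
+ * be a free, 8-byte-aligned scratchpad offset in this configuration):
+ *
+ *   void *buf;
+ *
+ *   cvmx_fpa_async_alloc(CVMX_SCR_SCRATCH, pool);
+ *   ... overlap other work while the IOBDMA completes ...
+ *   buf = cvmx_fpa_async_alloc_finish(CVMX_SCR_SCRATCH, pool);
+ *   if (!buf)
+ *           return NULL;	/* pool is empty */
+ */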
+
+/**
+ * Free a block allocated with an FPA pool.
+ * Does NOT provide memory ordering in cases where the memory block was
+ * modified by the core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free_nosync(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	/* FPA3 is handled differently */
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free_nosync(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free_nosync(ptr, pool, num_cache_lines);
+}
+
+/**
+ * Free a block allocated with an FPA pool. Provides required memory
+ * ordering in cases where memory block was modified by core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free(ptr, pool, num_cache_lines);
+}
+
+/**
+ * Set up an FPA pool to control a new block of memory.
+ * This can only be called once per pool. Make sure proper
+ * locking enforces this.
+ *
+ * @param pool       Pool to initialize
+ * @param name       Constant character string to name this pool.
+ *                   String is not copied.
+ * @param buffer     Pointer to the block of memory to use. This must be
+ *                   accessible by all processors and external hardware.
+ * @param block_size Size for each block controlled by the FPA
+ * @param num_blocks Number of blocks
+ *
+ * @return the pool number on Success,
+ *         -1 on failure
+ */
+int cvmx_fpa_setup_pool(int pool, const char *name, void *buffer, u64 block_size, u64 num_blocks);
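+
+/*
+ * Example bring-up (sketch; the pool number, name and sizes are placeholders
+ * and cvmx_bootmem_alloc() is assumed to be available to carve out the
+ * backing memory):
+ *
+ *   void *base = cvmx_bootmem_alloc(1024 * 2048, CVMX_FPA_ALIGNMENT);
+ *   int pool = cvmx_fpa_setup_pool(0, "packet-pool", base, 2048, 1024);
+ *
+ *   if (pool >= 0) {
+ *           void *buf = cvmx_fpa_alloc(pool);
+ *
+ *           if (buf)
+ *                   cvmx_fpa_free(buf, pool, 0);
+ *   }
+ */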
+
+int cvmx_fpa_shutdown_pool(int pool);
+
+/**
+ * Gets the block size of buffer in specified pool
+ * @param pool	 Pool to get the block size from
+ * @return       Size of buffer in specified pool
+ */
+unsigned int cvmx_fpa_get_block_size(int pool);
+
+int cvmx_fpa_is_pool_available(int pool_num);
+u64 cvmx_fpa_get_pool_owner(int pool_num);
+int cvmx_fpa_get_max_pools(void);
+int cvmx_fpa_get_current_count(int pool_num);
+int cvmx_fpa_validate_pool(int pool);
+
+#endif /* __CVMX_FPA_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
new file mode 100644
index 000000000000..6985083a5d66
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator on Octeon chips.
+ * These are the legacy models, i.e. prior to CN78XX/CN76XX.
+ */
+
+#ifndef __CVMX_FPA1_HW_H__
+#define __CVMX_FPA1_HW_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa3.h"
+
+/* Legacy pool range is 0..7 and 8 on CN68XX */
+typedef int cvmx_fpa1_pool_t;
+
+#define CVMX_FPA1_NUM_POOLS    8
+#define CVMX_FPA1_INVALID_POOL ((cvmx_fpa1_pool_t)-1)
+#define CVMX_FPA1_NAME_SIZE    16
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+} cvmx_fpa1_iobdma_data_t;
+
+/*
+ * Allocate or reserve the specified fpa pool.
+ *
+ * @param pool	  FPA pool to allocate/reserve. If -1 it
+ *                finds an empty pool to allocate.
+ * @return        Allocated pool number or CVMX_FPA1_INVALID_POOL
+ *                if it fails to allocate the pool
+ */
+cvmx_fpa1_pool_t cvmx_fpa1_reserve_pool(cvmx_fpa1_pool_t pool);
+
+/**
+ * Free the specified fpa pool.
+ * @param pool	   Pool to free
+ * @return         0 for success, -1 on failure
+ */
+int cvmx_fpa1_release_pool(cvmx_fpa1_pool_t pool);
+
+static inline void cvmx_fpa1_free(void *ptr, cvmx_fpa1_pool_t pool, u64 num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer.  This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+static inline void cvmx_fpa1_free_nosync(void *ptr, cvmx_fpa1_pool_t pool,
+					 unsigned int num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Prevent GCC from reordering around free */
+	asm volatile("" : : : "memory");
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+/**
+ * Enable the FPA for use. Must be performed after any CSR
+ * configuration but before any other FPA functions.
+ */
+static inline void cvmx_fpa1_enable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	if (status.s.enb) {
+		/*
+		 * CN68XXP1 should not reset the FPA (doing so may break
+		 * the SSO), so we may end up enabling it more than once.
+		 * Just return and don't spew messages.
+		 */
+		return;
+	}
+
+	status.u64 = 0;
+	status.s.enb = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+/**
+ * Reset FPA to disable. Make sure buffers from all FPA pools are freed
+ * before disabling FPA.
+ */
+static inline void cvmx_fpa1_disable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
+		return;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	status.s.reset = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+static inline void *cvmx_fpa1_alloc(cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	for (;;) {
+		address = csr_rd(CVMX_ADDR_DID(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool)));
+		if (cvmx_likely(address)) {
+			return cvmx_phys_to_ptr(address);
+		} else {
+			if (csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool)) > 0)
+				udelay(50);
+			else
+				return NULL;
+		}
+	}
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ * @INTERNAL
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa1_async_alloc(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	cvmx_fpa1_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.s.scraddr = scr_addr >> 3;
+	data.s.len = 1;
+	data.s.did = CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool);
+	data.s.addr = 0;
+
+	cvmx_scratch_write64(scr_addr, 0ull);
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ * @INTERNAL
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa1_async_alloc_finish(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		return cvmx_fpa1_alloc(pool);
+}
+
+static inline u64 cvmx_fpa1_get_available(cvmx_fpa1_pool_t pool)
+{
+	return csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool));
+}
+
+#endif /* __CVMX_FPA1_HW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
new file mode 100644
index 000000000000..229982b83163
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
@@ -0,0 +1,566 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the CN78XX Free Pool Allocator, a.k.a. FPA3
+ */
+
+#include "cvmx-address.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-scratch.h"
+
+#ifndef __CVMX_FPA3_H__
+#define __CVMX_FPA3_H__
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 2;
+	unsigned lpool : 6;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_pool_t;
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 6;
+	unsigned laura : 10;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_gaura_t;
+
+#define CVMX_FPA3_VALID_MAGIC	0xf9a3
+#define CVMX_FPA3_INVALID_GAURA ((cvmx_fpa3_gaura_t){ 0, 0, 0, 0, 0 })
+#define CVMX_FPA3_INVALID_POOL	((cvmx_fpa3_pool_t){ 0, 0, 0, 0, 0 })
+
+static inline bool __cvmx_fpa3_aura_valid(cvmx_fpa3_gaura_t aura)
+{
+	if (aura.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline bool __cvmx_fpa3_pool_valid(cvmx_fpa3_pool_t pool)
+{
+	if (pool.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline cvmx_fpa3_gaura_t __cvmx_fpa3_gaura(int node, int laura)
+{
+	cvmx_fpa3_gaura_t aura;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (laura < 0)
+		return CVMX_FPA3_INVALID_GAURA;
+
+	aura.node = node;
+	aura.laura = laura;
+	aura.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return aura;
+}
+
+static inline cvmx_fpa3_pool_t __cvmx_fpa3_pool(int node, int lpool)
+{
+	cvmx_fpa3_pool_t pool;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (lpool < 0)
+		return CVMX_FPA3_INVALID_POOL;
+
+	pool.node = node;
+	pool.lpool = lpool;
+	pool.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return pool;
+}
+
+#undef CVMX_FPA3_VALID_MAGIC
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	} cn78xx;
+} cvmx_fpa3_iobdma_data_t;
+
+/**
+ * Struct describing load allocate operation addresses for FPA pool.
+ */
+union cvmx_fpa3_load_data {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	};
+};
+
+typedef union cvmx_fpa3_load_data cvmx_fpa3_load_data_t;
+
+/**
+ * Struct describing store free operation addresses from FPA pool.
+ */
+union cvmx_fpa3_store_addr {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved2 : 10;
+		u64 aura : 10;
+		u64 fabs : 1;
+		u64 reserved3 : 3;
+		u64 dwb_count : 9;
+		u64 reserved4 : 3;
+	};
+};
+
+typedef union cvmx_fpa3_store_addr cvmx_fpa3_store_addr_t;
+
+enum cvmx_fpa3_pool_alignment_e {
+	FPA_NATURAL_ALIGNMENT,
+	FPA_OFFSET_ALIGNMENT,
+	FPA_OPAQUE_ALIGNMENT
+};
+
+#define CVMX_FPA3_AURAX_LIMIT_MAX ((1ull << 40) - 1)
+
+/**
+ * @INTERNAL
+ * Accessor functions to return number of POOLS in an FPA3
+ * depending on SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_pools(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 32;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 32;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * @INTERNAL
+ * Accessor functions to return number of AURAS in an FPA3
+ * depending on SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_auras(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1024;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 512;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 512;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * Get the FPA3 POOL underneath an FPA3 AURA, containing all its buffers
+ *
+ */
+static inline cvmx_fpa3_pool_t cvmx_fpa3_aura_to_pool(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_aurax_pool_t aurax_pool;
+
+	aurax_pool.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_POOL(aura.laura));
+
+	pool = __cvmx_fpa3_pool(aura.node, aurax_pool.s.pool);
+	return pool;
+}
+
+/**
+ * Get a new block from the FPA pool
+ *
+ * @param aura  - aura number
+ * @return pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_alloc(cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+	cvmx_fpa3_load_data_t load_addr;
+
+	load_addr.u64 = 0;
+	load_addr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	load_addr.io = 1;
+	load_addr.did = 0x29; /* Device ID. Indicates FPA. */
+	load_addr.node = aura.node;
+	/* Perform RED on allocation. FIXME to use config option */
+	load_addr.red = 0;
+	load_addr.aura = aura.laura;
+
+	address = cvmx_read64_uint64(load_addr.u64);
+	if (!address)
+		return NULL;
+	return cvmx_phys_to_ptr(address);
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param aura     Global aura to get the block from
+ */
+static inline void cvmx_fpa3_async_alloc(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.cn78xx.scraddr = scr_addr >> 3;
+	data.cn78xx.len = 1;
+	data.cn78xx.did = 0x29;
+	data.cn78xx.node = aura.node;
+	data.cn78xx.aura = aura.laura;
+	cvmx_scratch_write64(scr_addr, 0ull);
+
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa3_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param aura Global aura the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_async_alloc_finish(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		/* Try regular alloc if async failed */
+		return cvmx_fpa3_alloc(aura);
+}
+
+/**
+ * Free a pointer back to the pool.
+ *
+ * @param aura   global aura number
+ * @param ptr    physical address of block to free.
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free(void *ptr, cvmx_fpa3_gaura_t aura, unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer. This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+/**
+ * Free a pointer back to the pool without flushing the write buffer.
+ *
+ * @param aura   global aura number
+ * @param ptr    physical address of block to free.
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free_nosync(void *ptr, cvmx_fpa3_gaura_t aura,
+					 unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Prevent GCC from reordering writes to (*ptr) */
+	asm volatile("" : : : "memory");
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+static inline int cvmx_fpa3_pool_is_enabled(cvmx_fpa3_pool_t pool)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	if (!__cvmx_fpa3_pool_valid(pool))
+		return -1;
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	return pool_cfg.cn78xx.ena;
+}
+
+static inline int cvmx_fpa3_config_red_params(unsigned int node, int qos_avg_en, int red_lvl_dly,
+					      int avg_dly)
+{
+	cvmx_fpa_gen_cfg_t fpa_cfg;
+	cvmx_fpa_red_delay_t red_delay;
+
+	fpa_cfg.u64 = cvmx_read_csr_node(node, CVMX_FPA_GEN_CFG);
+	fpa_cfg.s.avg_en = qos_avg_en;
+	fpa_cfg.s.lvl_dly = red_lvl_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_GEN_CFG, fpa_cfg.u64);
+
+	red_delay.u64 = cvmx_read_csr_node(node, CVMX_FPA_RED_DELAY);
+	red_delay.s.avg_dly = avg_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_RED_DELAY, red_delay.u64);
+	return 0;
+}
+
+/**
+ * Gets the buffer size of the specified pool.
+ *
+ * @param aura Global aura number
+ * @return Returns size of the buffers in the specified pool.
+ */
+static inline int cvmx_fpa3_get_aura_buf_size(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+	int block_size;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	block_size = pool_cfg.cn78xx.buf_size << 7;
+	return block_size;
+}
+
+/**
+ * Return the number of available buffers in an AURA
+ *
+ * @param aura   AURA to receive the count for
+ * @return available buffer count
+ */
+static inline long long cvmx_fpa3_get_available(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_available_t avail_reg;
+	cvmx_fpa_aurax_cnt_t cnt_reg;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	long long ret;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	/* Get POOL available buffer count */
+	avail_reg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_AVAILABLE(pool.lpool));
+
+	/* Get AURA current available count */
+	cnt_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT(aura.laura));
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (limit_reg.cn78xx.limit < cnt_reg.cn78xx.cnt)
+		return 0;
+
+	/* Calculate AURA-based buffer allowance */
+	ret = limit_reg.cn78xx.limit - cnt_reg.cn78xx.cnt;
+
+	/* Use POOL real buffer availability when less than allowance */
+	if (ret > (long long)avail_reg.cn78xx.count)
+		ret = avail_reg.cn78xx.count;
+
+	return ret;
+}
+
+/**
+ * Configure the QoS parameters of an FPA3 AURA
+ *
+ * @param aura is the FPA3 AURA handle
+ * @param ena_red enables random early discard when outstanding count exceeds 'pass_thresh'
+ * @param pass_thresh is the maximum count to invoke flow control
+ * @param drop_thresh is the count threshold to begin dropping packets
+ * @param ena_bp enables backpressure when outstanding count exceeds 'bp_thresh'
+ * @param bp_thresh is the back-pressure threshold
+ *
+ */
+static inline void cvmx_fpa3_setup_aura_qos(cvmx_fpa3_gaura_t aura, bool ena_red, u64 pass_thresh,
+					    u64 drop_thresh, bool ena_bp, u64 bp_thresh)
+{
+	unsigned int shift = 0;
+	u64 shift_thresh;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	cvmx_fpa_aurax_cnt_levels_t aura_level;
+
+	if (!__cvmx_fpa3_aura_valid(aura))
+		return;
+
+	/* Get AURAX count limit for validation */
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (pass_thresh < 256)
+		pass_thresh = 255;
+
+	if (drop_thresh <= pass_thresh || drop_thresh > limit_reg.cn78xx.limit)
+		drop_thresh = limit_reg.cn78xx.limit;
+
+	if (bp_thresh < 256 || bp_thresh > limit_reg.cn78xx.limit)
+		bp_thresh = limit_reg.cn78xx.limit >> 1;
+
+	shift_thresh = (bp_thresh > drop_thresh) ? bp_thresh : drop_thresh;
+
+	/* Calculate shift so that the largest threshold fits in 8 bits */
+	for (shift = 0; shift < (1 << 6); shift++) {
+		if (((shift_thresh >> shift) & ~0xffull) == 0)
+			break;
+	}
+
+	aura_level.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura));
+	aura_level.s.pass = pass_thresh >> shift;
+	aura_level.s.drop = drop_thresh >> shift;
+	aura_level.s.bp = bp_thresh >> shift;
+	aura_level.s.shift = shift;
+	aura_level.s.red_ena = ena_red;
+	aura_level.s.bp_ena = ena_bp;
+	cvmx_write_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura), aura_level.u64);
+}
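+
+/*
+ * Example QoS setup (sketch; node, aura number and thresholds are
+ * placeholders): start RED once 1024 buffers are outstanding, drop at
+ * 2048 and assert backpressure at 1536.
+ *
+ *   cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(0, 10);
+ *
+ *   cvmx_fpa3_setup_aura_qos(aura, true, 1024, 2048, true, 1536);
+ */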
+
+cvmx_fpa3_gaura_t cvmx_fpa3_reserve_aura(int node, int desired_aura_num);
+int cvmx_fpa3_release_aura(cvmx_fpa3_gaura_t aura);
+cvmx_fpa3_pool_t cvmx_fpa3_reserve_pool(int node, int desired_pool_num);
+int cvmx_fpa3_release_pool(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_is_aura_available(int node, int aura_num);
+int cvmx_fpa3_is_pool_available(int node, int pool_num);
+
+cvmx_fpa3_pool_t cvmx_fpa3_setup_fill_pool(int node, int desired_pool, const char *name,
+					   unsigned int block_size, unsigned int num_blocks,
+					   void *buffer);
+
+/**
+ * Function to attach an aura to an existing pool
+ *
+ * @param node - configure fpa on this node
+ * @param pool - configured pool to attach aura to
+ * @param desired_aura - pointer to aura to use, set to -1 to allocate
+ * @param name - name to register
+ * @param block_size - size of buffers to use
+ * @param num_blocks - number of blocks to allocate
+ *
+ * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on failure
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_set_aura_for_pool(cvmx_fpa3_pool_t pool, int desired_aura,
+					      const char *name, unsigned int block_size,
+					      unsigned int num_blocks);
+
+/**
+ * Function to setup and initialize a pool.
+ *
+ * @param node - configure fpa on this node
+ * @param desired_aura - aura to use, -1 for dynamic allocation
+ * @param name - name to register
+ * @param block_size - size of buffers in pool
+ * @param num_blocks - max number of buffers allowed
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_setup_aura_and_pool(int node, int desired_aura, const char *name,
+						void *buffer, unsigned int block_size,
+						unsigned int num_blocks);
+
+int cvmx_fpa3_shutdown_aura_and_pool(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_aura(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_pool(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_pool_name(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_get_pool_buf_size(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_aura_name(cvmx_fpa3_gaura_t aura);
+
+/* FIXME: Need a different macro for stage2 of u-boot */
+
+static inline void cvmx_fpa3_stage2_init(int aura, int pool, u64 stack_paddr, int stacklen,
+					 int buffer_sz, int buf_cnt)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	/* Configure pool stack */
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), stack_paddr + stacklen);
+
+	/* Configure pool with buffer size */
+	pool_cfg.u64 = 0;
+	pool_cfg.cn78xx.nat_align = 1;
+	pool_cfg.cn78xx.buf_size = buffer_sz >> 7;
+	pool_cfg.cn78xx.l_type = 0x2;
+	pool_cfg.cn78xx.ena = 0;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+	/* Reset pool before starting */
+	pool_cfg.cn78xx.ena = 1;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CFG(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CNT_ADD(aura), buf_cnt);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), (u64)pool);
+}
+
+static inline void cvmx_fpa3_stage2_disable(int aura, int pool)
+{
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), 0);
+}
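+
+/*
+ * Stage-2 usage sketch (aura/pool numbers, stack address and sizes are
+ * placeholders): the pool stack must be large enough for buf_cnt pointers
+ * and buffer_sz must be a multiple of 128 bytes.
+ *
+ *   cvmx_fpa3_stage2_init(0, 0, stack_paddr, 4096, 2048, 128);
+ *   ... allocate and free via aura 0 ...
+ *   cvmx_fpa3_stage2_disable(0, 0);
+ */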
+
+#endif /* __CVMX_FPA3_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
new file mode 100644
index 000000000000..28c32ddbe17a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CVMX_GLOBAL_RESOURCES_T_
+#define _CVMX_GLOBAL_RESOURCES_T_
+
+#define CVMX_GLOBAL_RESOURCES_DATA_NAME "cvmx-global-resources"
+
+/* In the macros below, the abbreviation GR stands for global resources. */
+#define CVMX_GR_TAG_INVALID                                                                        \
+	cvmx_get_gr_tag('i', 'n', 'v', 'a', 'l', 'i', 'd', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+/* Tag for the PKO queue table range. */
+#define CVMX_GR_TAG_PKO_QUEUES                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'q', 'u', 'e', 'u', 's', '.', '.', \
+			'.')
+/* Tag for a PKO internal ports range. */
+#define CVMX_GR_TAG_PKO_IPORTS                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'i', 'p', 'o', 'r', 't', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FPA                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FAU                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'a', 'u', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_SSO_GRP(n)                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 's', 'o', '_', '0', (n) + '0', '.', '.', '.',     \
+			'.', '.', '.');
+#define CVMX_GR_TAG_TIM(n)                                                                         \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 't', 'i', 'm', '_', (n) + '0', '.', '.', '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTERS(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'u', 's', 't', 'e', 'r', '_', (x + '0'),     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTER_GRP(x)                                                                 \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'g', 'r', 'p', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_STYLE(x)                                                                       \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 't', 'y', 'l', 'e', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_QPG_ENTRY(x)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'q', 'p', 'g', 'e', 't', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_BPID(x)                                                                        \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'b', 'p', 'i', 'd', 's', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_MTAG_IDX(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'm', 't', 'a', 'g', 'x', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_PCAM(x, y, z)                                                                  \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'c', 'a', 'm', '_', (x + '0'), (y + '0'),         \
+			(z + '0'), '.', '.', '.', '.')
+
+#define CVMX_GR_TAG_CIU3_IDT(_n)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 'i', 'd',  \
+			't', '.', '.')
+
+/* Allocation of the 512 SW INTSNs (in the 12 bit SW INTSN space) */
+#define CVMX_GR_TAG_CIU3_SWINTSN(_n)                                                               \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 's', 'w',  \
+			'i', 's', 'n')
+
+#define TAG_INIT_PART(A, B, C, D, E, F, G, H)                                                      \
+	((((u64)(A) & 0xff) << 56) | (((u64)(B) & 0xff) << 48) | (((u64)(C) & 0xff) << 40) |             \
+	 (((u64)(D) & 0xff) << 32) | (((u64)(E) & 0xff) << 24) | (((u64)(F) & 0xff) << 16) |             \
+	 (((u64)(G) & 0xff) << 8) | (((u64)(H) & 0xff)))
+
+struct global_resource_tag {
+	u64 lo;
+	u64 hi;
+};
+
+enum cvmx_resource_err { CVMX_RESOURCE_ALLOC_FAILED = -1, CVMX_RESOURCE_ALREADY_RESERVED = -2 };
+
+/*
+ * @INTERNAL
+ * Creates a tag from the specified characters.
+ */
+static inline struct global_resource_tag cvmx_get_gr_tag(char a, char b, char c, char d, char e,
+							 char f, char g, char h, char i, char j,
+							 char k, char l, char m, char n, char o,
+							 char p)
+{
+	struct global_resource_tag tag;
+
+	tag.lo = TAG_INIT_PART(a, b, c, d, e, f, g, h);
+	tag.hi = TAG_INIT_PART(i, j, k, l, m, n, o, p);
+	return tag;
+}
+
+static inline int cvmx_gr_same_tag(struct global_resource_tag gr1, struct global_resource_tag gr2)
+{
+	return (gr1.hi == gr2.hi) && (gr1.lo == gr2.lo);
+}
+
+/*
+ * @INTERNAL
+ * Creates a global resource range that can hold the specified number of
+ * elements.
+ * @param tag is the tag of the range. The tag is created using
+ * cvmx_get_gr_tag().
+ * @param nelements is the number of elements to be held in the resource range.
+ */
+int cvmx_create_global_resource_range(struct global_resource_tag tag, int nelements);
+
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag.
+ * It is assumed that prior to calling this the global resource range has
+ * already been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param alignment specifies the required alignment of the returned base
+ * number.
+ * @return returns the base of the allocated range. -1 return value indicates
+ * failure.
+ */
+int cvmx_allocate_global_resource_range(struct global_resource_tag tag, u64 owner, int nelements,
+					int alignment);
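+
+/*
+ * Usage sketch (element count and alignment are arbitrary): create a
+ * 64-entry range for PKO queues, then allocate 8 elements from it with
+ * natural (1) alignment.
+ *
+ *   int base;
+ *
+ *   cvmx_create_global_resource_range(CVMX_GR_TAG_PKO_QUEUES, 64);
+ *   base = cvmx_allocate_global_resource_range(CVMX_GR_TAG_PKO_QUEUES,
+ *                                              cvmx_get_app_id(), 8, 1);
+ *   if (base < 0)
+ *           debug("PKO queue allocation failed\n");
+ */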
+
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag.
+ * The elements allocated need not be contiguous. It is assumed that prior to
+ * calling this the global resource range has already been created using
+ * cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of the allocated
+ * elements.
+ * @param allocated_elements returns the indices of the allocated entries.
+ * @return returns 0 on success and -1 on failure.
+ */
+int cvmx_resource_alloc_many(struct global_resource_tag tag, u64 owner, int nelements,
+			     int allocated_elements[]);
+int cvmx_resource_alloc_reverse(struct global_resource_tag, u64 owner);
+/*
+ * @INTERNAL
+ * Reserve nelements starting from base in the global resource range with the
+ * specified tag.
+ * It is assumed that prior to calling this the global resource range has
+ * already been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param base specifies the base start of nelements.
+ * @return returns the base of the allocated range. -1 return value indicates
+ * failure.
+ */
+int cvmx_reserve_global_resource_range(struct global_resource_tag tag, u64 owner, int base,
+				       int nelements);
+/*
+ * @INTERNAL
+ * Free nelements starting at base in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param base is the base number
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_base(struct global_resource_tag tag, int base,
+					      int nelements);
+
+/*
+ * @INTERNAL
+ * Free nelements with the bases specified in bases[] with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param bases is an array containing the bases to be freed.
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_multiple(struct global_resource_tag tag, int bases[],
+					     int nelements);
+/*
+ * @INTERNAL
+ * Free elements from the specified owner in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param owner is the owner of resources that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_owner(struct global_resource_tag tag, int owner);
+
+/*
+ * @INTERNAL
+ * Frees all the global resources that have been created.
+ * For use only by the bootloader, when it shuts down and boots up the
+ * application or kernel.
+ */
+int free_global_resources(void);
+
+u64 cvmx_get_global_resource_owner(struct global_resource_tag tag, int base);
+/*
+ * @INTERNAL
+ * Shows the global resource range with the specified tag. Used mainly for debug.
+ */
+void cvmx_show_global_resource_range(struct global_resource_tag tag);
+
+/*
+ * @INTERNAL
+ * Shows all the global resources. Used mainly for debug.
+ */
+void cvmx_global_resources_show(void);
+
+u64 cvmx_allocate_app_id(void);
+u64 cvmx_get_app_id(void);
+
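+/*
+ * Usage sketch (illustrative only; the tag characters, owner value and
+ * element counts below are made up for the example):
+ *
+ *	struct global_resource_tag tag =
+ *		cvmx_get_gr_tag('c', 'v', 'm', 'x', '_', 'd', 'e', 'm',
+ *				'o', '.', '.', '.', '.', '.', '.', '.');
+ *	int base;
+ *
+ *	if (cvmx_create_global_resource_range(tag, 64))
+ *		return -1;
+ *	base = cvmx_allocate_global_resource_range(tag, 0xbeef, 8, 1);
+ *	if (base < 0)
+ *		return -1;
+ *	cvmx_free_global_resource_range_with_base(tag, base, 8);
+ */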
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmx.h b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
new file mode 100644
index 000000000000..2df7da102a0f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the GMX hardware.
+ */
+
+#ifndef __CVMX_GMX_H__
+#define __CVMX_GMX_H__
+
+/* CSR typedefs have been moved to cvmx-gmx-defs.h */
+
+int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask);
+int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
new file mode 100644
index 000000000000..59772190aa3b
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ */
+
+/**
+ * @file
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ *
+ */
+
+#ifndef __CVMX_HWFAU_H__
+#define __CVMX_HWFAU_H__
+
+typedef int cvmx_fau_reg64_t;
+typedef int cvmx_fau_reg32_t;
+typedef int cvmx_fau_reg16_t;
+typedef int cvmx_fau_reg8_t;
+
+#define CVMX_FAU_REG_ANY -1
+
+/*
+ * Octeon Fetch and Add Unit (FAU)
+ */
+
+#define CVMX_FAU_LOAD_IO_ADDRESS cvmx_build_io_address(0x1e, 0)
+#define CVMX_FAU_BITS_SCRADDR	 63, 56
+#define CVMX_FAU_BITS_LEN	 55, 48
+#define CVMX_FAU_BITS_INEVAL	 35, 14
+#define CVMX_FAU_BITS_TAGWAIT	 13, 13
+#define CVMX_FAU_BITS_NOADD	 13, 13
+#define CVMX_FAU_BITS_SIZE	 12, 11
+#define CVMX_FAU_BITS_REGISTER	 10, 0
+
+#define CVMX_FAU_MAX_REGISTERS_8 (2048)
+
+typedef enum {
+	CVMX_FAU_OP_SIZE_8 = 0,
+	CVMX_FAU_OP_SIZE_16 = 1,
+	CVMX_FAU_OP_SIZE_32 = 2,
+	CVMX_FAU_OP_SIZE_64 = 3
+} cvmx_fau_op_size_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s64 value : 63;
+} cvmx_fau_tagwait64_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s32 value : 31;
+} cvmx_fau_tagwait32_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s16 value : 15;
+} cvmx_fau_tagwait16_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	int8_t value : 7;
+} cvmx_fau_tagwait8_t;
+
+/**
+ * Asynchronous tagwait return definition. If a timeout occurs,
+ * the error bit will be set. Otherwise the value of the
+ * register before the update will be returned.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 invalid : 1;
+		u64 data : 63; /* unpredictable if invalid is set */
+	} s;
+} cvmx_fau_async_tagwait_result_t;
+
+#define SWIZZLE_8  0
+#define SWIZZLE_16 0
+#define SWIZZLE_32 0
+
+/**
+ * @INTERNAL
+ * Builds a store I/O address for writing to the FAU
+ *
+ * @param noadd  0 = Store value is atomically added to the current value
+ *               1 = Store value is atomically written over the current value
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ *               - Step by 4 for 32 bit access.
+ *               - Step by 8 for 64 bit access.
+ * @return Address to store for atomic update
+ */
+static inline u64 __cvmx_hwfau_store_address(u64 noadd, u64 reg)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_NOADD, noadd) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * @INTERNAL
+ * Builds a I/O address for accessing the FAU
+ *
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @return Address to read from for atomic update
+ */
+static inline u64 __cvmx_hwfau_atomic_address(u64 tagwait, u64 reg, s64 value)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s64 cvmx_hwfau_fetch_and_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	return cvmx_read64_int64(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s32 cvmx_hwfau_fetch_and_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	return cvmx_read64_int32(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline s16 cvmx_hwfau_fetch_and_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	return cvmx_read64_int16(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline int8_t cvmx_hwfau_fetch_and_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	return cvmx_read64_int8(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 64 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 8 for 64 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait64_t cvmx_hwfau_tagwait_fetch_and_add64(cvmx_fau_reg64_t reg,
+								      s64 value)
+{
+	union {
+		u64 i64;
+		cvmx_fau_tagwait64_t t;
+	} result;
+	result.i64 = cvmx_read64_int64(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 32 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 4 for 32 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait32_t cvmx_hwfau_tagwait_fetch_and_add32(cvmx_fau_reg32_t reg,
+								      s32 value)
+{
+	union {
+		u64 i32;
+		cvmx_fau_tagwait32_t t;
+	} result;
+	reg ^= SWIZZLE_32;
+	result.i32 = cvmx_read64_int32(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 16 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait16_t cvmx_hwfau_tagwait_fetch_and_add16(cvmx_fau_reg16_t reg,
+								      s16 value)
+{
+	union {
+		u64 i16;
+		cvmx_fau_tagwait16_t t;
+	} result;
+	reg ^= SWIZZLE_16;
+	result.i16 = cvmx_read64_int16(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 8 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait8_t cvmx_hwfau_tagwait_fetch_and_add8(cvmx_fau_reg8_t reg,
+								    int8_t value)
+{
+	union {
+		u64 i8;
+		cvmx_fau_tagwait8_t t;
+	} result;
+	reg ^= SWIZZLE_8;
+	result.i8 = cvmx_read64_int8(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
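+
+/*
+ * Usage sketch (illustrative only): checking the timeout bit of a tagwait
+ * variant. "reg" stands for a previously allocated 64 bit FAU register.
+ *
+ *	cvmx_fau_tagwait64_t r = cvmx_hwfau_tagwait_fetch_and_add64(reg, 1);
+ *
+ *	if (r.error)
+ *		return -1;	// tag switch timed out, r.value is invalid
+ *	// r.value holds the pre-update register contents
+ */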
+
+/**
+ * @INTERNAL
+ * Builds I/O data for async operations
+ *
+ * @param scraddr Scratch pad byte address to write to.  Must be 8 byte aligned
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param size    The size of the operation:
+ *                - CVMX_FAU_OP_SIZE_8  (0) = 8 bits
+ *                - CVMX_FAU_OP_SIZE_16 (1) = 16 bits
+ *                - CVMX_FAU_OP_SIZE_32 (2) = 32 bits
+ *                - CVMX_FAU_OP_SIZE_64 (3) = 64 bits
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @return Data to write using cvmx_send_single
+ */
+static inline u64 __cvmx_fau_iobdma_data(u64 scraddr, s64 value, u64 tagwait,
+					 cvmx_fau_op_size_t size, u64 reg)
+{
+	return (CVMX_FAU_LOAD_IO_ADDRESS | cvmx_build_bits(CVMX_FAU_BITS_SCRADDR, scraddr >> 3) |
+		cvmx_build_bits(CVMX_FAU_BITS_LEN, 1) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_SIZE, size) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an async atomic 64 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_64, reg));
+}
+
+/**
+ * Perform an async atomic 32 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg, s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg, s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg, int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_8, reg));
+}
+
+/**
+ * Perform an async atomic 64 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg,
+							    s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_64, reg));
+}
+
+/**
+ * Perform an async atomic 32 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg,
+							    s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg,
+							    s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg,
+							   int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_8, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 64 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/** Allocates a 64bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau64_alloc(int reserve);
+
+/** Allocates a 32bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau32_alloc(int reserve);
+
+/** Allocates a 16bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau16_alloc(int reserve);
+
+/** Allocates an 8bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau8_alloc(int reserve);
+
+/** Frees the specified FAU register.
+ *  @param address Base address of register to release.
+ *  @return 0 on success; -1 on failure
+ */
+int cvmx_fau_free(int address);
+
+/** Display the fau registers array
+ */
+void cvmx_fau_show(void);
+
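+/*
+ * Usage sketch (illustrative only): allocate a 64 bit FAU register, use it
+ * as a shared counter, then release it. Error handling is elided.
+ *
+ *	int reg = cvmx_fau64_alloc(0);
+ *
+ *	cvmx_hwfau_atomic_write64(reg, 0);
+ *	cvmx_hwfau_atomic_add64(reg, 5);
+ *	s64 old = cvmx_hwfau_fetch_and_add64(reg, 1);	// old == 5
+ *	cvmx_fau_free(reg);
+ */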
+#endif /* __CVMX_HWFAU_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
new file mode 100644
index 000000000000..459c19bbc0f1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Output unit.
+ *
+ * Starting with SDK 1.7.0, the PKO output functions now support
+ * two types of locking. CVMX_PKO_LOCK_ATOMIC_TAG continues to
+ * function similarly to previous SDKs by using POW atomic tags
+ * to preserve ordering and exclusivity. As a new option, you
+ * can now pass CVMX_PKO_LOCK_CMD_QUEUE which uses a ll/sc
+ * memory based locking instead. This locking has the advantage
+ * of not affecting the tag state but doesn't preserve packet
+ * ordering. CVMX_PKO_LOCK_CMD_QUEUE is appropriate in most
+ * generic code, while CVMX_PKO_LOCK_NONE should be used
+ * with hand tuned fast path code.
+ *
+ * Some other SDK differences visible to the command
+ * queuing:
+ * - PKO indexes are no longer stored in the FAU. A large
+ *   percentage of the FAU register block used to be tied up
+ *   maintaining PKO queue pointers. These are now stored in a
+ *   global named block.
+ * - The PKO <b>use_locking</b> parameter can now have a global
+ *   effect. Since all applications use the same named block,
+ *   queue locking correctly applies across all operating
+ *   systems when using CVMX_PKO_LOCK_CMD_QUEUE.
+ * - PKO 3 word commands are now supported. Use
+ *   cvmx_pko_send_packet_finish3().
+ */
+
+#ifndef __CVMX_HWPKO_H__
+#define __CVMX_HWPKO_H__
+
+#include "cvmx-hwfau.h"
+#include "cvmx-fpa.h"
+#include "cvmx-pow.h"
+#include "cvmx-cmd-queue.h"
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+
+/* Adjust the command buffer size by 1 word so that in the case of using only
+ * two word PKO commands no command words straddle buffers. The useful values
+ * for this are 0 and 1. */
+#define CVMX_PKO_COMMAND_BUFFER_SIZE_ADJUST (1)
+
+#define CVMX_PKO_MAX_OUTPUT_QUEUES_STATIC 256
+#define CVMX_PKO_MAX_OUTPUT_QUEUES                                                                 \
+	((OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) ? 256 : 128)
+#define CVMX_PKO_NUM_OUTPUT_PORTS                                                                  \
+	((OCTEON_IS_MODEL(OCTEON_CN63XX)) ? 44 : (OCTEON_IS_MODEL(OCTEON_CN66XX) ? 48 : 40))
+#define CVMX_PKO_MEM_QUEUE_PTRS_ILLEGAL_PID 63
+#define CVMX_PKO_QUEUE_STATIC_PRIORITY	    9
+#define CVMX_PKO_ILLEGAL_QUEUE		    0xFFFF
+#define CVMX_PKO_MAX_QUEUE_DEPTH	    0
+
+typedef enum {
+	CVMX_PKO_SUCCESS,
+	CVMX_PKO_INVALID_PORT,
+	CVMX_PKO_INVALID_QUEUE,
+	CVMX_PKO_INVALID_PRIORITY,
+	CVMX_PKO_NO_MEMORY,
+	CVMX_PKO_PORT_ALREADY_SETUP,
+	CVMX_PKO_CMD_QUEUE_INIT_ERROR
+} cvmx_pko_return_value_t;
+
+/**
+ * This enumeration represents the different locking modes supported by PKO.
+ */
+typedef enum {
+	CVMX_PKO_LOCK_NONE = 0,
+	CVMX_PKO_LOCK_ATOMIC_TAG = 1,
+	CVMX_PKO_LOCK_CMD_QUEUE = 2,
+} cvmx_pko_lock_t;
+
+typedef struct cvmx_pko_port_status {
+	u32 packets;
+	u64 octets;
+	u64 doorbell;
+} cvmx_pko_port_status_t;
+
+/**
+ * This structure defines the address to use on a packet enqueue
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t mem_space : 2;
+		u64 reserved : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved2 : 4;
+		u64 reserved3 : 15;
+		u64 port : 9;
+		u64 queue : 9;
+		u64 reserved4 : 3;
+	} s;
+} cvmx_pko_doorbell_address_t;
+
+/**
+ * Structure of the first packet output command word.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_fau_op_size_t size1 : 2;
+		cvmx_fau_op_size_t size0 : 2;
+		u64 subone1 : 1;
+		u64 reg1 : 11;
+		u64 subone0 : 1;
+		u64 reg0 : 11;
+		u64 le : 1;
+		u64 n2 : 1;
+		u64 wqp : 1;
+		u64 rsp : 1;
+		u64 gather : 1;
+		u64 ipoffp1 : 7;
+		u64 ignore_i : 1;
+		u64 dontfree : 1;
+		u64 segs : 6;
+		u64 total_bytes : 16;
+	} s;
+} cvmx_pko_command_word0_t;
+
+/**
+ * Call before any other calls to initialize the packet
+ * output system.
+ */
+
+void cvmx_pko_hw_init(u8 pool, unsigned int bufsize);
+
+/**
+ * Enables the packet output hardware. It must already be
+ * configured.
+ */
+void cvmx_pko_enable(void);
+
+/**
+ * Disables the packet output. Does not affect any configuration.
+ */
+void cvmx_pko_disable(void);
+
+/**
+ * Shutdown and free resources required by packet output.
+ */
+
+void cvmx_pko_shutdown(void);
+
+/**
+ * Configure a output port and the associated queues for use.
+ *
+ * @param port       Port to configure.
+ * @param base_queue First queue number to associate with this port.
+ * @param num_queues Number of queues to associate with this port
+ * @param priority   Array of priority levels for each queue. Values are
+ *                   allowed to be 1-8. A value of 8 gets 8 times the traffic
+ *                   of a value of 1. There must be num_queues elements in the
+ *                   array.
+ */
+cvmx_pko_return_value_t cvmx_pko_config_port(int port, int base_queue, int num_queues,
+					     const u8 priority[]);
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_doorbell_pkoid() directly if the PKO port identifier is
+ * known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell(u64 ipd_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+	u64 pko_port;
+
+	pko_port = ipd_port;
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Prepare to send a packet.  This may initiate a tag switch to
+ * get exclusive access to the output queue structure, and
+ * performs other prep work for the packet send operation.
+ *
+ * cvmx_pko_send_packet_finish() MUST be called after this function is called,
+ * and must be called with the same port/queue/use_locking arguments.
+ *
+ * The use_locking parameter allows the caller to use three
+ * possible locking modes.
+ * - CVMX_PKO_LOCK_NONE
+ *      - PKO doesn't do any locking. It is the responsibility
+ *          of the application to make sure that no other core
+ *          is accessing the same queue at the same time.
+ * - CVMX_PKO_LOCK_ATOMIC_TAG
+ *      - PKO performs an atomic tagswitch to ensure exclusive
+ *          access to the output queue. This will maintain
+ *          packet ordering on output.
+ * - CVMX_PKO_LOCK_CMD_QUEUE
+ *      - PKO uses the common command queue locks to ensure
+ *          exclusive access to the output queue. This is a
+ *          memory based ll/sc. This is the most portable
+ *          locking mechanism.
+ *
+ * NOTE: If atomic locking is used, the POW entry CANNOT be
+ * descheduled, as it does not contain a valid WQE pointer.
+ *
+ * @param port   Port to send it on, this can be either IPD port or PKO
+ *		 port.
+ * @param queue  Queue to use
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ */
+static inline void cvmx_pko_send_packet_prepare(u64 port __attribute__((unused)), u64 queue,
+						cvmx_pko_lock_t use_locking)
+{
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG) {
+		/*
+		 * Must do a full switch here to handle all cases.  We use a
+		 * fake WQE pointer, as the POW does not access this memory.
+		 * The WQE pointer and group are only used if this work is
+		 * descheduled, which is not supported by the
+		 * cvmx_pko_send_packet_prepare/cvmx_pko_send_packet_finish
+		 * combination. Note that this is a special case in which these
+		 * fake values can be used - this is not a general technique.
+		 */
+		u32 tag = CVMX_TAG_SW_BITS_INTERNAL << CVMX_TAG_SW_SHIFT |
+			  CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT |
+			  (CVMX_TAG_SUBGROUP_MASK & queue);
+		cvmx_pow_tag_sw_full((cvmx_wqe_t *)cvmx_phys_to_ptr(0x80), tag,
+				     CVMX_POW_TAG_TYPE_ATOMIC, 0);
+	}
+}
+
+#define cvmx_pko_send_packet_prepare_pkoid cvmx_pko_send_packet_prepare
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			      cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish3_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry, or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			       cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Get the first pko_port for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_base_pko_port(int interface, int index);
+
+/**
+ * Get the number of pko_ports for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_num_pko_ports(int interface, int index);
+
+/**
+ * For a given port number, return the base pko output queue
+ * for the port.
+ *
+ * @param port   IPD port number
+ * @return Base output queue
+ */
+int cvmx_pko_get_base_queue(int port);
+
+/**
+ * For a given port number, return the number of pko output queues.
+ *
+ * @param port   IPD port number
+ * @return Number of output queues
+ */
+int cvmx_pko_get_num_queues(int port);
+
+/**
+ * Sets the internal FPA pool data structure for the PKO command queue.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ *
+ * @note the caller is responsible for setting up the pool with
+ * an appropriate buffer size and sufficient buffer count.
+ */
+void cvmx_pko_set_cmd_que_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param ipd_port Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ *
+ * Note:
+ *     - Only the doorbell for the base queue of the ipd_port is
+ *       collected.
+ *     - Retrieving the stats involves writing the index through
+ *       CVMX_PKO_REG_READ_IDX and reading the stat CSRs, in that
+ *       order. It is not MP-safe and caller should guarantee
+ *       atomicity.
+ */
+void cvmx_pko_get_port_status(u64 ipd_port, u64 clear, cvmx_pko_port_status_t *status);
+
+/**
+ * Rate limit a PKO port to a max packets/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port      Port to rate limit
+ * @param packets_s Maximum packets/sec
+ * @param burst     Maximum number of packets to burst in a row before rate
+ *                  limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_packets(int port, int packets_s, int burst);
+
+/**
+ * Rate limit a PKO port to a max bits/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port   Port to rate limit
+ * @param bits_s PKO rate limit in bits/sec
+ * @param burst  Maximum number of bits to burst before rate
+ *               limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_bits(int port, u64 bits_s, int burst);
+
+/**
+ * @INTERNAL
+ *
+ * Retrieve the PKO pipe number for a port
+ *
+ * @param interface
+ * @param index
+ *
+ * @return negative on error.
+ *
+ * This applies only to the non-loopback interfaces.
+ *
+ */
+int __cvmx_pko_get_pipe(int interface, int index);
+
+/**
+ * For a given PKO port number, return the base output queue
+ * for the port.
+ *
+ * @param pko_port   PKO port number
+ * @return           Base output queue
+ */
+int cvmx_pko_get_base_queue_pkoid(int pko_port);
+
+/**
+ * For a given PKO port number, return the number of output queues
+ * for the port.
+ *
+ * @param pko_port	PKO port number
+ * @return		the number of output queues
+ */
+int cvmx_pko_get_num_queues_pkoid(int pko_port);
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * @param pko_port   Port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell_pkoid(u64 pko_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   Port to send it on
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish_pkoid(int pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				    cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   The PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry, or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3_pkoid(u64 pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				     cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/*
+ * Obtain the number of PKO commands pending in a queue
+ *
+ * @param queue is the queue identifier to be queried
+ * @return the number of commands pending transmission or -1 on error
+ */
+int cvmx_pko_queue_pend_count(cvmx_cmd_queue_id_t queue);
+
+void cvmx_pko_set_cmd_queue_pool_buffer_count(u64 buffer_count);
+
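+/*
+ * Transmit sketch (illustrative only): the two word command path with
+ * command queue locking. port, queue, pko_command and packet are assumed
+ * to have been set up by the caller.
+ *
+ *	cvmx_pko_send_packet_prepare(port, queue, CVMX_PKO_LOCK_CMD_QUEUE);
+ *	if (cvmx_hwpko_send_packet_finish(port, queue, pko_command, packet,
+ *					  CVMX_PKO_LOCK_CMD_QUEUE) !=
+ *	    CVMX_PKO_SUCCESS)
+ *		return -1;	// CVMX_PKO_NO_MEMORY or CVMX_PKO_INVALID_QUEUE
+ */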
+#endif /* __CVMX_HWPKO_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
new file mode 100644
index 000000000000..727298352c28
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file contains defines for the ILK interface
+ */
+
+#ifndef __CVMX_ILK_H__
+#define __CVMX_ILK_H__
+
+/* CSR typedefs have been moved to cvmx-ilk-defs.h */
+
+/*
+ * Note: this value must match the first ILK port in the ipd_port_map_68xx[]
+ * and ipd_port_map_78xx[] arrays.
+ */
+static inline int CVMX_ILK_GBL_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 5;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 6;
+	return -1;
+}
+
+static inline int CVMX_ILK_QLM_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 1;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 4;
+	return -1;
+}
+
+typedef struct {
+	int intf_en : 1;
+	int la_mode : 1;
+	int reserved : 14; /* unused */
+	int lane_speed : 16;
+	/* add more here */
+} cvmx_ilk_intf_t;
+
+#define CVMX_NUM_ILK_INTF 2
+static inline int CVMX_ILK_MAX_LANES(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 8;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 16;
+	return -1;
+}
+
+extern unsigned short cvmx_ilk_lane_mask[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int pipe;
+	unsigned int chan;
+} cvmx_ilk_pipe_chan_t;
+
+#define CVMX_ILK_MAX_PIPES 45
+/* Max number of channels allowed */
+#define CVMX_ILK_MAX_CHANS 256
+
+extern int cvmx_ilk_chans[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int chan;
+	unsigned int pknd;
+} cvmx_ilk_chan_pknd_t;
+
+#define CVMX_ILK_MAX_PKNDS 16 /* must be <45 */
+
+typedef struct {
+	int *chan_list; /* for discrete channels. or, must be null */
+	unsigned int num_chans;
+
+	unsigned int chan_start; /* for continuous channels */
+	unsigned int chan_end;
+	unsigned int chan_step;
+
+	unsigned int clr_on_rd;
+} cvmx_ilk_stats_ctrl_t;
+
+#define CVMX_ILK_MAX_CAL      288
+#define CVMX_ILK_MAX_CAL_IDX  (CVMX_ILK_MAX_CAL / 8)
+#define CVMX_ILK_TX_MIN_CAL   1
+#define CVMX_ILK_RX_MIN_CAL   1
+#define CVMX_ILK_CAL_GRP_SZ   8
+#define CVMX_ILK_PIPE_BPID_SZ 7
+#define CVMX_ILK_ENT_CTRL_SZ  2
+#define CVMX_ILK_RX_FIFO_WM   0x200
+
+typedef enum { PIPE_BPID = 0, LINK, XOFF, XON } cvmx_ilk_cal_ent_ctrl_t;
+
+typedef struct {
+	unsigned char pipe_bpid;
+	cvmx_ilk_cal_ent_ctrl_t ent_ctrl;
+} cvmx_ilk_cal_entry_t;
+
+typedef enum { CVMX_ILK_LPBK_DISA = 0, CVMX_ILK_LPBK_ENA } cvmx_ilk_lpbk_ena_t;
+
+typedef enum { CVMX_ILK_LPBK_INT = 0, CVMX_ILK_LPBK_EXT } cvmx_ilk_lpbk_mode_t;
+
+/**
+ * This header is placed in front of all received ILK look-aside mode packets
+ */
+typedef union {
+	u64 u64;
+
+	struct {
+		u32 reserved_63_57 : 7;	  /* bits 63...57 */
+		u32 nsp_cmd : 5;	  /* bits 56...52 */
+		u32 nsp_flags : 4;	  /* bits 51...48 */
+		u32 nsp_grp_id_upper : 6; /* bits 47...42 */
+		u32 reserved_41_40 : 2;	  /* bits 41...40 */
+		/* Protocol type, 1 for LA mode packet */
+		u32 la_mode : 1;	  /* bit  39      */
+		u32 nsp_grp_id_lower : 2; /* bits 38...37 */
+		u32 nsp_xid_upper : 4;	  /* bits 36...33 */
+		/* ILK channel number, 0 or 1 */
+		u32 ilk_channel : 1;   /* bit  32      */
+		u32 nsp_xid_lower : 8; /* bits 31...24 */
+		/* Unpredictable, may be any value */
+		u32 reserved_23_0 : 24; /* bits 23...0  */
+	} s;
+} cvmx_ilk_la_nsp_compact_hdr_t;
+
+typedef struct cvmx_ilk_LA_mode_struct {
+	int ilk_LA_mode;
+	int ilk_LA_mode_cal_ena;
+} cvmx_ilk_LA_mode_t;
+
+extern cvmx_ilk_LA_mode_t cvmx_ilk_LA_mode[CVMX_NUM_ILK_INTF];
+
+int cvmx_ilk_use_la_mode(int interface, int channel);
+int cvmx_ilk_start_interface(int interface, unsigned short num_lanes);
+int cvmx_ilk_start_interface_la(int interface, unsigned char num_lanes);
+int cvmx_ilk_set_pipe(int interface, int pipe_base, unsigned int pipe_len);
+int cvmx_ilk_tx_set_channel(int interface, cvmx_ilk_pipe_chan_t *pch, unsigned int num_chs);
+int cvmx_ilk_rx_set_pknd(int interface, cvmx_ilk_chan_pknd_t *chpknd, unsigned int num_pknd);
+int cvmx_ilk_enable(int interface);
+int cvmx_ilk_disable(int interface);
+int cvmx_ilk_get_intf_ena(int interface);
+int cvmx_ilk_get_chan_info(int interface, unsigned char **chans, unsigned char *num_chan);
+cvmx_ilk_la_nsp_compact_hdr_t cvmx_ilk_enable_la_header(int ipd_port, int mode);
+void cvmx_ilk_show_stats(int interface, cvmx_ilk_stats_ctrl_t *pstats);
+int cvmx_ilk_cal_setup_rx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent, int hi_wm,
+			  unsigned char cal_ena);
+int cvmx_ilk_cal_setup_tx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent,
+			  unsigned char cal_ena);
+int cvmx_ilk_lpbk(int interface, cvmx_ilk_lpbk_ena_t enable, cvmx_ilk_lpbk_mode_t mode);
+int cvmx_ilk_la_mode_enable_rx_calendar(int interface);
+
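+/*
+ * Bring-up sketch (illustrative only; the interface number and lane count
+ * are placeholders):
+ *
+ *	if (cvmx_ilk_start_interface(0, 8) == 0)
+ *		cvmx_ilk_enable(0);
+ */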
+#endif /* __CVMX_ILK_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
new file mode 100644
index 000000000000..cdff36fffb56
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Input Packet Data unit.
+ */
+
+#ifndef __CVMX_IPD_H__
+#define __CVMX_IPD_H__
+
+#include "cvmx-pki.h"
+
+/* CSR typedefs have been moved to cvmx-ipd-defs.h */
+
+typedef cvmx_ipd_1st_mbuff_skip_t cvmx_ipd_mbuff_not_first_skip_t;
+typedef cvmx_ipd_1st_next_ptr_back_t cvmx_ipd_second_next_ptr_back_t;
+
+typedef struct cvmx_ipd_tag_fields {
+	u64 ipv6_src_ip : 1;
+	u64 ipv6_dst_ip : 1;
+	u64 ipv6_src_port : 1;
+	u64 ipv6_dst_port : 1;
+	u64 ipv6_next_header : 1;
+	u64 ipv4_src_ip : 1;
+	u64 ipv4_dst_ip : 1;
+	u64 ipv4_src_port : 1;
+	u64 ipv4_dst_port : 1;
+	u64 ipv4_protocol : 1;
+	u64 input_port : 1;
+} cvmx_ipd_tag_fields_t;
+
+typedef struct cvmx_pip_port_config {
+	u64 parse_mode;
+	u64 tag_type;
+	u64 tag_mode;
+	cvmx_ipd_tag_fields_t tag_fields;
+} cvmx_pip_port_config_t;
+
+typedef struct cvmx_ipd_config_struct {
+	u64 first_mbuf_skip;
+	u64 not_first_mbuf_skip;
+	u64 ipd_enable;
+	u64 enable_len_M8_fix;
+	u64 cache_mode;
+	cvmx_fpa_pool_config_t packet_pool;
+	cvmx_fpa_pool_config_t wqe_pool;
+	cvmx_pip_port_config_t port_config;
+} cvmx_ipd_config_t;
+
+extern cvmx_ipd_config_t cvmx_ipd_cfg;
+
+/**
+ * Gets the fpa pool number of packet pool
+ */
+static inline s64 cvmx_fpa_get_packet_pool(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of packet pool buffer
+ */
+static inline u64 cvmx_fpa_get_packet_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of packet pool
+ */
+static inline u64 cvmx_fpa_get_packet_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_count);
+}
+
+/**
+ * Gets the fpa pool number of wqe pool
+ */
+static inline s64 cvmx_fpa_get_wqe_pool(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of wqe pool buffer
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of wqe pool
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_count);
+}
+
+/**
+ * Sets the IPD-related configuration in an internal structure, which is then
+ * used for setting up the IPD hardware block.
+ */
+int cvmx_ipd_set_config(cvmx_ipd_config_t ipd_config);
+
+/**
+ * Gets the IPD-related configuration from the internal structure.
+ */
+void cvmx_ipd_get_config(cvmx_ipd_config_t *ipd_config);
+
+/**
+ * Sets the internal FPA pool data structure for packet buffer pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Sets the internal FPA pool data structure for wqe pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Gets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_get_packet_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.packet_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.packet_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.packet_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_packet_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Gets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_get_wqe_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.wqe_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.wqe_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.wqe_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_wqe_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Configure IPD
+ *
+ * @param mbuff_size Packet buffer size in 8 byte words
+ * @param first_mbuff_skip
+ *                   Number of 8 byte words to skip in the first buffer
+ * @param not_first_mbuff_skip
+ *                   Number of 8 byte words to skip in each following buffer
+ * @param first_back Must be same as first_mbuff_skip / 128
+ * @param second_back
+ *                   Must be same as not_first_mbuff_skip / 128
+ * @param wqe_fpa_pool
+ *                   FPA pool to get work entries from
+ * @param cache_mode
+ * @param back_pres_enable_flag
+ *                   Enable or disable port back pressure at a global level.
+ *                   This should always be 1 as more accurate control can be
+ *                   found in IPD_PORTX_BP_PAGE_CNT[BP_ENB].
+ */
+void cvmx_ipd_config(u64 mbuff_size, u64 first_mbuff_skip, u64 not_first_mbuff_skip, u64 first_back,
+		     u64 second_back, u64 wqe_fpa_pool, cvmx_ipd_mode_t cache_mode,
+		     u64 back_pres_enable_flag);
+/**
+ * Enable IPD
+ */
+void cvmx_ipd_enable(void);
+
+/**
+ * Disable IPD
+ */
+void cvmx_ipd_disable(void);
+
+void __cvmx_ipd_free_ptr(void);
+
+void cvmx_ipd_set_packet_pool_buffer_count(u64 buffer_count);
+void cvmx_ipd_set_wqe_pool_buffer_count(u64 buffer_count);
+
+/**
+ * Setup Random Early Drop on a specific input queue
+ *
+ * @param queue  Input queue to setup RED on (0-7)
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red_queue(int queue, int pass_thresh, int drop_thresh);
+
+/**
+ * Setup Random Early Drop to automatically begin dropping packets.
+ *
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red(int pass_thresh, int drop_thresh);
+
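+/*
+ * Configuration sketch (illustrative only; the pool numbers, buffer sizes
+ * and counts below are placeholders):
+ *
+ *	cvmx_ipd_config_t cfg;
+ *
+ *	cvmx_ipd_get_config(&cfg);
+ *	cvmx_ipd_set_packet_pool_config(0, 2048, 1024);
+ *	cvmx_ipd_set_wqe_pool_config(1, 128, 1024);
+ *	cfg.ipd_enable = 1;
+ *	cvmx_ipd_set_config(cfg);
+ *	cvmx_ipd_enable();
+ */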
+#endif /*  __CVMX_IPD_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-packet.h b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
new file mode 100644
index 000000000000..f3cfe9c64f43
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Packet buffer defines.
+ */
+
+#ifndef __CVMX_PACKET_H__
+#define __CVMX_PACKET_H__
+
+union cvmx_buf_ptr_pki {
+	u64 u64;
+	struct {
+		u64 size : 16;
+		u64 packet_outside_wqe : 1;
+		u64 rsvd0 : 5;
+		u64 addr : 42;
+	};
+};
+
+typedef union cvmx_buf_ptr_pki cvmx_buf_ptr_pki_t;
+
+/**
+ * This structure defines a buffer pointer on Octeon
+ */
+union cvmx_buf_ptr {
+	void *ptr;
+	u64 u64;
+	struct {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 addr : 40;
+	} s;
+};
+
+typedef union cvmx_buf_ptr cvmx_buf_ptr_t;
+
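+/*
+ * Example (illustrative only): extracting the data pointer from a hardware
+ * buffer pointer. "work" stands for a received work queue entry and
+ * cvmx_phys_to_ptr() for the physical-to-virtual address helper.
+ *
+ *	cvmx_buf_ptr_t buf;
+ *	void *data;
+ *
+ *	buf.u64 = work->packet_ptr.u64;
+ *	data = cvmx_phys_to_ptr(buf.s.addr);
+ */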
+#endif /*  __CVMX_PACKET_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcie.h b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
new file mode 100644
index 000000000000..a819196c021c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PCIE_H__
+#define __CVMX_PCIE_H__
+
+#define CVMX_PCIE_MAX_PORTS 4
+#define CVMX_PCIE_PORTS                                                                            \
+	((OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) ?                      \
+		       CVMX_PCIE_MAX_PORTS :                                                             \
+		       (OCTEON_IS_MODEL(OCTEON_CN70XX) ? 3 : 2))
+
+/*
+ * The physical memory base mapped by BAR1.  256MB at the end of the
+ * first 4GB.
+ */
+#define CVMX_PCIE_BAR1_PHYS_BASE ((1ull << 32) - (1ull << 28))
+#define CVMX_PCIE_BAR1_PHYS_SIZE BIT_ULL(28)
+
+/*
+ * The RC base of BAR1.  gen1 has a 39-bit BAR2, gen2 has 41-bit BAR2,
+ * place BAR1 so it is the same for both.
+ */
+#define CVMX_PCIE_BAR1_RC_BASE BIT_ULL(41)
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 1 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 reserved_29_31 : 3;	 /* Must be zero */
+		u64 ty : 1;
+		u64 bus : 8;
+		u64 dev : 5;
+		u64 func : 3;
+		u64 reg : 12;
+	} config;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 2 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 address : 32;	 /* PCIe IO address */
+	} io;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 3-6 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 address : 36;	 /* PCIe Mem address */
+	} mem;
+} cvmx_pcie_address_t;
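+
+/*
+ * Illustrative sketch (not part of the original patch; the helper name is
+ * hypothetical): composing an XKPHYS memory-space address from the fields
+ * above, following the encoding given in the field comments (upper = 2,
+ * did = 3, subdid = 3-6 selecting the port).
+ */
+static inline u64 cvmx_pcie_example_mem_address(int pcie_port, u64 offset)
+{
+	cvmx_pcie_address_t pcie_addr;
+
+	pcie_addr.u64 = 0;
+	pcie_addr.mem.upper = 2;	      /* XKPHYS */
+	pcie_addr.mem.io = 1;		      /* IO space access */
+	pcie_addr.mem.did = 3;		      /* PCIe DID */
+	pcie_addr.mem.subdid = 3 + pcie_port; /* SubDID 3-6 */
+	pcie_addr.mem.address = offset;	      /* 36-bit PCIe mem offset */
+	return pcie_addr.u64;
+}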
+
+/**
+ * Return the Core virtual base address for PCIe IO access. IOs are
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_io_base_address(int pcie_port);
+
+/**
+ * Size of the IO address region returned at address
+ * cvmx_pcie_get_io_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the IO window
+ */
+u64 cvmx_pcie_get_io_size(int pcie_port);
+
+/**
+ * Return the Core virtual base address for PCIe MEM access. Memory is
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_mem_base_address(int pcie_port);
+
+/**
+ * Size of the Mem address region returned at address
+ * cvmx_pcie_get_mem_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the Mem window
+ */
+u64 cvmx_pcie_get_mem_size(int pcie_port);
+
+/**
+ * Initialize a PCIe port for use in host (RC) mode. It doesn't enumerate the bus.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_initialize(int pcie_port);
+
+/**
+ * Shutdown a PCIe port and put it in reset
+ *
+ * @param pcie_port PCIe port to shutdown
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_shutdown(int pcie_port);
+
+/**
+ * Read 8bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u8 cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 16bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u16 cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 32bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u32 cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Write 8bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, u8 val);
+
+/**
+ * Write 16bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, u16 val);
+
+/**
+ * Write 32bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, u32 val);
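+
+/*
+ * Usage sketch (not part of the original patch; helper name hypothetical):
+ * reading the vendor/device ID dword of bus 1, dev 0, func 0 behind a port
+ * with the accessors above. A value of 0xffffffff means no device responded.
+ */
+static inline u32 cvmx_pcie_example_read_devid(int pcie_port)
+{
+	/* Config space offset 0: vendor ID in the low 16 bits,
+	 * device ID in the high 16 bits.
+	 */
+	return cvmx_pcie_config_read32(pcie_port, 1, 0, 0, 0);
+}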
+
+/**
+ * Read a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to read from
+ * @param cfg_offset Address to read
+ *
+ * @return Value read
+ */
+u32 cvmx_pcie_cfgx_read(int pcie_port, u32 cfg_offset);
+u32 cvmx_pcie_cfgx_read_node(int node, int pcie_port, u32 cfg_offset);
+
+/**
+ * Write a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to write to
+ * @param cfg_offset Address to write
+ * @param val        Value to write
+ */
+void cvmx_pcie_cfgx_write(int pcie_port, u32 cfg_offset, u32 val);
+void cvmx_pcie_cfgx_write_node(int node, int pcie_port, u32 cfg_offset, u32 val);
+
+/**
+ * Write a 32bit value to the Octeon NPEI register space
+ *
+ * @param address Address to write to
+ * @param val     Value to write
+ */
+static inline void cvmx_pcie_npei_write32(u64 address, u32 val)
+{
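+	/* The address is XORed with 4 so that a 32-bit access hits the
+	 * correct half of the 64-bit NPEI register on this big-endian
+	 * core; the read back below flushes the posted write.
+	 */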
+	cvmx_write64_uint32(address ^ 4, val);
+	cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Read a 32bit value from the Octeon NPEI register space
+ *
+ * @param address Address to read
+ * @return The result
+ */
+static inline u32 cvmx_pcie_npei_read32(u64 address)
+{
+	return cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Initialize a PCIe port for use in target (EP) mode.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_ep_initialize(int pcie_port);
+
+/**
+ * Wait for posted PCIe read/writes to reach the other side of
+ * the internal PCIe switch. This will ensure that core
+ * read/writes are posted before anything after this function
+ * is called. This may be necessary when writing to memory that
+ * will later be read using the DMA/PKT engines.
+ *
+ * @param pcie_port PCIe port to wait for
+ */
+void cvmx_pcie_wait_for_pending(int pcie_port);
+
+/**
+ * Returns whether a PCIe port is in host or target mode.
+ *
+ * @param pcie_port PCIe port number (PEM number)
+ *
+ * @return 0 if PCIe port is in target mode, !0 if in host mode.
+ */
+int cvmx_pcie_is_host_mode(int pcie_port);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip.h b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
new file mode 100644
index 000000000000..013f533fb7bb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
@@ -0,0 +1,1080 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Processing unit.
+ */
+
+#ifndef __CVMX_PIP_H__
+#define __CVMX_PIP_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pki.h"
+#include "cvmx-helper-pki.h"
+
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-pki-resources.h"
+
+#define CVMX_PIP_NUM_INPUT_PORTS 46
+#define CVMX_PIP_NUM_WATCHERS	 8
+
+/*
+ * Encodes the different error and exception codes
+ */
+typedef enum {
+	CVMX_PIP_L4_NO_ERR = 0ull,
+	/*        1  = TCP (UDP) packet not long enough to cover TCP (UDP) header */
+	CVMX_PIP_L4_MAL_ERR = 1ull,
+	/*        2  = TCP/UDP checksum failure */
+	CVMX_PIP_CHK_ERR = 2ull,
+	/*        3  = TCP/UDP length check (TCP/UDP length does not match IP length) */
+	CVMX_PIP_L4_LENGTH_ERR = 3ull,
+	/*        4  = illegal TCP/UDP port (either source or dest port is zero) */
+	CVMX_PIP_BAD_PRT_ERR = 4ull,
+	/*        8  = TCP flags = FIN only */
+	CVMX_PIP_TCP_FLG8_ERR = 8ull,
+	/*        9  = TCP flags = 0 */
+	CVMX_PIP_TCP_FLG9_ERR = 9ull,
+	/*        10 = TCP flags = FIN+RST+* */
+	CVMX_PIP_TCP_FLG10_ERR = 10ull,
+	/*        11 = TCP flags = SYN+URG+* */
+	CVMX_PIP_TCP_FLG11_ERR = 11ull,
+	/*        12 = TCP flags = SYN+RST+* */
+	CVMX_PIP_TCP_FLG12_ERR = 12ull,
+	/*        13 = TCP flags = SYN+FIN+* */
+	CVMX_PIP_TCP_FLG13_ERR = 13ull
+} cvmx_pip_l4_err_t;
+
+typedef enum {
+	CVMX_PIP_IP_NO_ERR = 0ull,
+	/*        1 = not IPv4 or IPv6 */
+	CVMX_PIP_NOT_IP = 1ull,
+	/*        2 = IPv4 header checksum violation */
+	CVMX_PIP_IPV4_HDR_CHK = 2ull,
+	/*        3 = malformed (packet not long enough to cover IP hdr) */
+	CVMX_PIP_IP_MAL_HDR = 3ull,
+	/*        4 = malformed (packet not long enough to cover len in IP hdr) */
+	CVMX_PIP_IP_MAL_PKT = 4ull,
+	/*        5 = TTL / hop count equal zero */
+	CVMX_PIP_TTL_HOP = 5ull,
+	/*        6 = IPv4 options / IPv6 early extension headers */
+	CVMX_PIP_OPTS = 6ull
+} cvmx_pip_ip_exc_t;
+
+/**
+ * NOTES
+ *       late collision (data received before collision)
+ *            late collisions cannot be detected by the receiver
+ *            they would appear as JAM bits which would appear as bad FCS
+ *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
+ */
+typedef enum {
+	/**
+	 * No error
+	 */
+	CVMX_PIP_RX_NO_ERR = 0ull,
+
+	CVMX_PIP_PARTIAL_ERR =
+		1ull, /* RGM+SPI            1 = partially received packet (buffering/bandwidth not adequate) */
+	CVMX_PIP_JABBER_ERR =
+		2ull, /* RGM+SPI            2 = receive packet too large and truncated */
+	CVMX_PIP_OVER_FCS_ERR =
+		3ull, /* RGM                3 = max frame error (pkt len > max frame len) (with FCS error) */
+	CVMX_PIP_OVER_ERR =
+		4ull, /* RGM+SPI            4 = max frame error (pkt len > max frame len) */
+	CVMX_PIP_ALIGN_ERR =
+		5ull, /* RGM                5 = nibble error (data not byte multiple - 100M and 10M only) */
+	CVMX_PIP_UNDER_FCS_ERR =
+		6ull, /* RGM                6 = min frame error (pkt len < min frame len) (with FCS error) */
+	CVMX_PIP_GMX_FCS_ERR = 7ull, /* RGM                7 = FCS error */
+	CVMX_PIP_UNDER_ERR =
+		8ull, /* RGM+SPI            8 = min frame error (pkt len < min frame len) */
+	CVMX_PIP_EXTEND_ERR = 9ull, /* RGM                9 = Frame carrier extend error */
+	CVMX_PIP_TERMINATE_ERR =
+		9ull, /* XAUI               9 = Packet was terminated with an idle cycle */
+	CVMX_PIP_LENGTH_ERR =
+		10ull, /* RGM               10 = length mismatch (len did not match len in L2 length/type) */
+	CVMX_PIP_DAT_ERR =
+		11ull, /* RGM               11 = Frame error (some or all data bits marked err) */
+	CVMX_PIP_DIP_ERR = 11ull, /*     SPI           11 = DIP4 error */
+	CVMX_PIP_SKIP_ERR =
+		12ull, /* RGM               12 = packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_NIBBLE_ERR =
+		13ull, /* RGM               13 = studder error (data not repeated - 100M and 10M only) */
+	CVMX_PIP_PIP_FCS = 16L, /* RGM+SPI           16 = FCS error */
+	CVMX_PIP_PIP_SKIP_ERR =
+		17L, /* RGM+SPI+PCI       17 = packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_PIP_L2_MAL_HDR =
+		18L, /* RGM+SPI+PCI       18 = malformed l2 (packet not long enough to cover L2 hdr) */
+	CVMX_PIP_PUNY_ERR =
+		47L /* SGMII             47 = PUNY error (packet was 4B or less when FCS stripping is enabled) */
+	/* NOTES
+	 *       xx = late collision (data received before collision)
+	 *            late collisions cannot be detected by the receiver
+	 *            they would appear as JAM bits which would appear as bad FCS
+	 *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
+	 */
+} cvmx_pip_rcv_err_t;
+
+/**
+ * This defines the err_code field errors in the work Q entry
+ */
+typedef union {
+	cvmx_pip_l4_err_t l4_err;
+	cvmx_pip_ip_exc_t ip_exc;
+	cvmx_pip_rcv_err_t rcv_err;
+} cvmx_pip_err_t;
+
+/**
+ * Status statistics for a port
+ */
+typedef struct {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+} cvmx_pip_port_status_t;
+
+/**
+ * Definition of the PIP custom header that can be prepended
+ * to a packet by external hardware.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 rawfull : 1;
+		u64 reserved0 : 5;
+		cvmx_pip_port_parse_mode_t parse_mode : 2;
+		u64 reserved1 : 1;
+		u64 skip_len : 7;
+		u64 grpext : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 rs : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pip_pkt_inst_hdr_t;
+
+enum cvmx_pki_pcam_match {
+	CVMX_PKI_PCAM_MATCH_IP,
+	CVMX_PKI_PCAM_MATCH_IPV4,
+	CVMX_PKI_PCAM_MATCH_IPV6,
+	CVMX_PKI_PCAM_MATCH_TCP
+};
+
+/* CSR typedefs have been moved to cvmx-pip-defs.h */
+static inline int cvmx_pip_config_watcher(int index, int type, u16 match, u16 mask, int grp,
+					  int qos)
+{
+	if (index >= CVMX_PIP_NUM_WATCHERS) {
+		debug("ERROR: pip watcher %d is > than supported\n", index);
+		return -1;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		/* Store in software for now; program the entry only when the watcher is enabled */
+		if (type == CVMX_PIP_QOS_WATCH_PROTNH) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L3_FLAGS;
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_TCP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x060000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_UDP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x110000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == 0x4 /*CVMX_PIP_QOS_WATCH_ETHERTYPE*/) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+			if (match == 0x8100) {
+				debug("ERROR: default vlan entry already exist, cant set watcher\n");
+				return -1;
+			}
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 4;
+		} else {
+			debug("ERROR: Unsupported watcher type %d\n", type);
+			return -1;
+		}
+		if (grp >= 32) {
+			debug("ERROR: grp %d out of range for backward compat 78xx\n", grp);
+			return -1;
+		}
+		qos_watcher[index].sso_grp = (u8)(grp << 3 | qos);
+		qos_watcher[index].configured = 1;
+	} else {
+		/* Implement it later */
+	}
+	return 0;
+}
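+
+/*
+ * Usage sketch (hypothetical values, not part of the original patch):
+ * classify TCP packets with destination port 80 into SSO group 2 at QoS 0
+ * using watcher 0:
+ *
+ *	cvmx_pip_config_watcher(0, CVMX_PIP_QOS_WATCH_TCP, 80, 0xffff, 2, 0);
+ */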
+
+static inline int __cvmx_pip_set_tag_type(int node, int style, int tag_type, int field)
+{
+	struct cvmx_pki_style_config style_cfg;
+	int style_num;
+	int pcam_offset;
+	int bank;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+
+	/* All other style parameters remain same except tag type */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tag_type;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to set tag type\n");
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	if (field == CVMX_PKI_PCAM_MATCH_IP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+		/* legacy will write to all clusters*/
+		bank = 0;
+		pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+							CVMX_PKI_CLUSTER_ALL);
+		if (pcam_offset < 0) {
+			debug("ERROR: pcam entry not available to enable qos watcher\n");
+			cvmx_pki_style_free(node, style_num);
+			return -1;
+		}
+		pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+		pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+		pcam_action.style_add = (u8)(style_num - style);
+		cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input,
+					  pcam_action);
+		field = CVMX_PKI_PCAM_MATCH_IPV6;
+	}
+	if (field == CVMX_PKI_PCAM_MATCH_IPV4) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_IPV6) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x86dd0000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_TCP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_L4_PORT;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x60000;
+		pcam_input.data_mask = 0xff0000;
+		pcam_action.pointer_advance = 0;
+	}
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	bank = pcam_input.field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		return -1;
+	}
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return style_num;
+}
+
+/* Only for legacy internal use */
+static inline int __cvmx_pip_enable_watcher_78xx(int node, int index, int style)
+{
+	struct cvmx_pki_style_config style_cfg;
+	struct cvmx_pki_qpg_config qpg_cfg;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+	int style_num;
+	int qpg_offset;
+	int pcam_offset;
+	int bank;
+
+	if (!qos_watcher[index].configured) {
+		debug("ERROR: qos watcher %d should be configured before enable\n", index);
+		return -1;
+	}
+	/* All other style parameters remain same except grp and qos and qps base */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	cvmx_pki_read_qpg_entry(node, style_cfg.parm_cfg.qpg_base, &qpg_cfg);
+	qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+	qpg_cfg.grp_ok = qos_watcher[index].sso_grp;
+	qpg_cfg.grp_bad = qos_watcher[index].sso_grp;
+	qpg_offset = cvmx_helper_pki_set_qpg_entry(node, &qpg_cfg);
+	if (qpg_offset == -1) {
+		debug("Warning: no new qpg entry available to enable watcher\n");
+		return -1;
+	}
+	/* Try to reserve the style; if it is not configured already,
+	 * reserve and configure it.
+	 */
+	style_cfg.parm_cfg.qpg_base = qpg_offset;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to enable qos watcher\n");
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	/* legacy will write to all clusters*/
+	bank = qos_watcher[index].field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	pcam_input.field = qos_watcher[index].field;
+	pcam_input.field_mask = 0xff;
+	pcam_input.data = qos_watcher[index].data;
+	pcam_input.data_mask = qos_watcher[index].data_mask;
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	pcam_action.pointer_advance = qos_watcher[index].advance;
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return 0;
+}
+
+/**
+ * Configure an ethernet input port
+ *
+ * @param ipd_port Port number to configure
+ * @param port_cfg Port hardware configuration
+ * @param port_tag_cfg Port POW tagging configuration
+ */
+static inline void cvmx_pip_config_port(u64 ipd_port, cvmx_pip_prt_cfgx_t port_cfg,
+					cvmx_pip_prt_tagx_t port_tag_cfg)
+{
+	struct cvmx_pki_qpg_config qpg_cfg;
+	int qpg_offset;
+	u8 tcp_tag = 0xff;
+	u8 ip_tag = 0xaa;
+	int style, nstyle, n4style, n6style;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		struct cvmx_pki_port_config pki_prt_cfg;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		cvmx_pki_get_port_config(ipd_port, &pki_prt_cfg);
+		style = pki_prt_cfg.pkind_cfg.initial_style;
+		if (port_cfg.s.ih_pri || port_cfg.s.vlan_len || port_cfg.s.pad_len)
+			debug("Warning: 78xx: use different config for this option\n");
+		pki_prt_cfg.style_cfg.parm_cfg.minmax_sel = port_cfg.s.len_chk_sel;
+		pki_prt_cfg.style_cfg.parm_cfg.lenerr_en = port_cfg.s.lenerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.maxerr_en = port_cfg.s.maxerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.minerr_en = port_cfg.s.minerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.fcs_chk = port_cfg.s.crc_en;
+		if (port_cfg.s.grp_wat || port_cfg.s.qos_wat || port_cfg.s.grp_wat_47 ||
+		    port_cfg.s.qos_wat_47) {
+			u8 group_mask = (u8)(port_cfg.s.grp_wat | (u8)(port_cfg.s.grp_wat_47 << 4));
+			u8 qos_mask = (u8)(port_cfg.s.qos_wat | (u8)(port_cfg.s.qos_wat_47 << 4));
+			int i;
+
+			for (i = 0; i < CVMX_PIP_NUM_WATCHERS; i++) {
+				if ((group_mask & (1 << i)) || (qos_mask & (1 << i)))
+					__cvmx_pip_enable_watcher_78xx(xp.node, i, style);
+			}
+		}
+		if (port_tag_cfg.s.tag_mode) {
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+				cvmx_printf("Warning: mask tag is not supported in 78xx pass1\n");
+			/* else: still needs to be implemented for 78xx */
+		}
+		if (port_cfg.s.tag_inc)
+			debug("Warning: 78xx uses differnet method for tag generation\n");
+		pki_prt_cfg.style_cfg.parm_cfg.rawdrp = port_cfg.s.rawdrp;
+		pki_prt_cfg.pkind_cfg.parse_en.inst_hdr = port_cfg.s.inst_hdr;
+		if (port_cfg.s.hg_qos)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_HIGIG;
+		else if (port_cfg.s.qos_vlan)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
+		else if (port_cfg.s.qos_diff)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_DIFFSERV;
+		if (port_cfg.s.qos_vod)
+			debug("Warning: 78xx needs pcam entries installed to achieve qos_vod\n");
+		if (port_cfg.s.qos) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= port_cfg.s.qos;
+			qpg_cfg.grp_bad |= port_cfg.s.qos;
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify qos\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		if (port_tag_cfg.s.grp != pki_dflt_sso_grp[xp.node].group) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_cfg.grp_bad |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify group\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		pki_prt_cfg.pkind_cfg.parse_en.dsa_en = port_cfg.s.dsa_en;
+		pki_prt_cfg.pkind_cfg.parse_en.hg_en = port_cfg.s.higig_en;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_src =
+			port_tag_cfg.s.ip6_src_flag | port_tag_cfg.s.ip4_src_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_dst =
+			port_tag_cfg.s.ip6_dst_flag | port_tag_cfg.s.ip4_dst_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.ip_prot_nexthdr =
+			port_tag_cfg.s.ip6_nxth_flag | port_tag_cfg.s.ip4_pctl_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_src =
+			port_tag_cfg.s.ip6_sprt_flag | port_tag_cfg.s.ip4_sprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_dst =
+			port_tag_cfg.s.ip6_dprt_flag | port_tag_cfg.s.ip4_dprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.input_port = port_tag_cfg.s.inc_prt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.first_vlan = port_tag_cfg.s.inc_vlan;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.second_vlan = port_tag_cfg.s.inc_vs;
+
+		if (port_tag_cfg.s.tcp6_tag_type == port_tag_cfg.s.tcp4_tag_type)
+			tcp_tag = port_tag_cfg.s.tcp6_tag_type;
+		if (port_tag_cfg.s.ip6_tag_type == port_tag_cfg.s.ip4_tag_type)
+			ip_tag = port_tag_cfg.s.ip6_tag_type;
+		pki_prt_cfg.style_cfg.parm_cfg.tag_type =
+			(enum cvmx_sso_tag_type)port_tag_cfg.s.non_tag_type;
+		if (tcp_tag == ip_tag && tcp_tag == port_tag_cfg.s.non_tag_type)
+			pki_prt_cfg.style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tcp_tag;
+		else if (tcp_tag == ip_tag) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for ip6 && ip4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, ip_tag, CVMX_PKI_PCAM_MATCH_IP);
+		} else if (ip_tag == port_tag_cfg.s.non_tag_type) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for tcp6 & tcp4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, tcp_tag, CVMX_PKI_PCAM_MATCH_TCP);
+		} else {
+			if (ip_tag != 0xaa) {
+				nstyle = __cvmx_pip_set_tag_type(xp.node, style, ip_tag,
+								 CVMX_PKI_PCAM_MATCH_IP);
+				if (tcp_tag != 0xff)
+					__cvmx_pip_set_tag_type(xp.node, nstyle, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				else {
+					n4style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV4);
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					n6style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV6);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			} else {
+				n4style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip4_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV4);
+				n6style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip6_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV6);
+				if (tcp_tag != 0xff) {
+					__cvmx_pip_set_tag_type(xp.node, n4style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				} else {
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			}
+		}
+		pki_prt_cfg.style_cfg.parm_cfg.qpg_dis_padd = !port_tag_cfg.s.portadd_en;
+
+		if (port_cfg.s.mode == 0x1)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
+		else if (port_cfg.s.mode == 0x2)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LC_TO_LG;
+		else
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_NOTHING;
+		/* This is only for backward compatibility, not all the parameters are supported in 78xx */
+		cvmx_pki_set_port_config(ipd_port, &pki_prt_cfg);
+	} else {
+		if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+			int interface, index, pknd;
+
+			interface = cvmx_helper_get_interface_num(ipd_port);
+			index = cvmx_helper_get_interface_index_num(ipd_port);
+			pknd = cvmx_helper_get_pknd(interface, index);
+
+			ipd_port = pknd; /* overload port_num with pknd */
+		}
+		csr_wr(CVMX_PIP_PRT_CFGX(ipd_port), port_cfg.u64);
+		csr_wr(CVMX_PIP_PRT_TAGX(ipd_port), port_tag_cfg.u64);
+	}
+}
+
+/**
+ * Configure the VLAN priority to QoS queue mapping.
+ *
+ * @param vlan_priority
+ *               VLAN priority (0-7)
+ * @param qos    QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_vlan_qos(u64 vlan_priority, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_vlanx_t pip_qos_vlanx;
+
+		pip_qos_vlanx.u64 = 0;
+		pip_qos_vlanx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_VLANX(vlan_priority), pip_qos_vlanx.u64);
+	}
+}
+
+/**
+ * Configure the Diffserv to QoS queue mapping.
+ *
+ * @param diffserv Diffserv field value (0-63)
+ * @param qos      QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_diffserv_qos(u64 diffserv, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_diffx_t pip_qos_diffx;
+
+		pip_qos_diffx.u64 = 0;
+		pip_qos_diffx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_DIFFX(diffserv), pip_qos_diffx.u64);
+	}
+}
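+
+/*
+ * Usage sketch (not part of the original patch; helper name hypothetical):
+ * a common setup is a 1:1 map of the eight VLAN priorities and a coarse
+ * DSCP mapping onto QoS queues 0-7.
+ */
+static inline void cvmx_pip_example_qos_maps(void)
+{
+	u64 i;
+
+	for (i = 0; i < 8; i++)
+		cvmx_pip_config_vlan_qos(i, i); /* VLAN PCP i -> queue i */
+	for (i = 0; i < 64; i++)
+		cvmx_pip_config_diffserv_qos(i, i >> 3); /* 8 DSCP values per queue */
+}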
+
+/**
+ * Get the status counters for a port for older non-PKI chips.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_stats(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	cvmx_pip_stat_ctl_t pip_stat_ctl;
+	cvmx_pip_stat0_prtx_t stat0;
+	cvmx_pip_stat1_prtx_t stat1;
+	cvmx_pip_stat2_prtx_t stat2;
+	cvmx_pip_stat3_prtx_t stat3;
+	cvmx_pip_stat4_prtx_t stat4;
+	cvmx_pip_stat5_prtx_t stat5;
+	cvmx_pip_stat6_prtx_t stat6;
+	cvmx_pip_stat7_prtx_t stat7;
+	cvmx_pip_stat8_prtx_t stat8;
+	cvmx_pip_stat9_prtx_t stat9;
+	cvmx_pip_stat10_x_t stat10;
+	cvmx_pip_stat11_x_t stat11;
+	cvmx_pip_stat_inb_pktsx_t pip_stat_inb_pktsx;
+	cvmx_pip_stat_inb_octsx_t pip_stat_inb_octsx;
+	cvmx_pip_stat_inb_errsx_t pip_stat_inb_errsx;
+	int interface = cvmx_helper_get_interface_num(port_num);
+	int index = cvmx_helper_get_interface_index_num(port_num);
+
+	pip_stat_ctl.u64 = 0;
+	pip_stat_ctl.s.rdclr = clear;
+	csr_wr(CVMX_PIP_STAT_CTL, pip_stat_ctl.u64);
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+		/*
+		 * PIP_STAT_CTL[MODE] 0 means pkind.
+		 */
+		stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
+		stat1.u64 = csr_rd(CVMX_PIP_STAT1_X(pknd));
+		stat2.u64 = csr_rd(CVMX_PIP_STAT2_X(pknd));
+		stat3.u64 = csr_rd(CVMX_PIP_STAT3_X(pknd));
+		stat4.u64 = csr_rd(CVMX_PIP_STAT4_X(pknd));
+		stat5.u64 = csr_rd(CVMX_PIP_STAT5_X(pknd));
+		stat6.u64 = csr_rd(CVMX_PIP_STAT6_X(pknd));
+		stat7.u64 = csr_rd(CVMX_PIP_STAT7_X(pknd));
+		stat8.u64 = csr_rd(CVMX_PIP_STAT8_X(pknd));
+		stat9.u64 = csr_rd(CVMX_PIP_STAT9_X(pknd));
+		stat10.u64 = csr_rd(CVMX_PIP_STAT10_X(pknd));
+		stat11.u64 = csr_rd(CVMX_PIP_STAT11_X(pknd));
+	} else {
+		if (port_num >= 40) {
+			stat0.u64 = csr_rd(CVMX_PIP_XSTAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_XSTAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_XSTAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_XSTAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_XSTAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_XSTAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_XSTAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_XSTAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_XSTAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_XSTAT9_PRTX(port_num));
+			if (OCTEON_IS_MODEL(OCTEON_CN6XXX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_XSTAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_XSTAT11_PRTX(port_num));
+			}
+		} else {
+			stat0.u64 = csr_rd(CVMX_PIP_STAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_STAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_STAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_STAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_STAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_STAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_STAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_STAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_STAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_STAT9_PRTX(port_num));
+			if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_STAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_STAT11_PRTX(port_num));
+			}
+		}
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTS_PKNDX(pknd));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTS_PKNDX(pknd));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRS_PKNDX(pknd));
+	} else {
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTSX(port_num));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTSX(port_num));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port_num));
+	}
+
+	status->dropped_octets = stat0.s.drp_octs;
+	status->dropped_packets = stat0.s.drp_pkts;
+	status->octets = stat1.s.octs;
+	status->pci_raw_packets = stat2.s.raw;
+	status->packets = stat2.s.pkts;
+	status->multicast_packets = stat3.s.mcst;
+	status->broadcast_packets = stat3.s.bcst;
+	status->len_64_packets = stat4.s.h64;
+	status->len_65_127_packets = stat4.s.h65to127;
+	status->len_128_255_packets = stat5.s.h128to255;
+	status->len_256_511_packets = stat5.s.h256to511;
+	status->len_512_1023_packets = stat6.s.h512to1023;
+	status->len_1024_1518_packets = stat6.s.h1024to1518;
+	status->len_1519_max_packets = stat7.s.h1519;
+	status->fcs_align_err_packets = stat7.s.fcs;
+	status->runt_packets = stat8.s.undersz;
+	status->runt_crc_packets = stat8.s.frag;
+	status->oversize_packets = stat9.s.oversz;
+	status->oversize_crc_packets = stat9.s.jabber;
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		status->mcast_l2_red_packets = stat10.s.mcast;
+		status->bcast_l2_red_packets = stat10.s.bcast;
+		status->mcast_l3_red_packets = stat11.s.mcast;
+		status->bcast_l3_red_packets = stat11.s.bcast;
+	}
+	status->inb_packets = pip_stat_inb_pktsx.s.pkts;
+	status->inb_octets = pip_stat_inb_octsx.s.octs;
+	status->inb_errors = pip_stat_inb_errsx.s.errs;
+}
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_status(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		unsigned int node = cvmx_get_node_num();
+
+		cvmx_pki_get_port_stats(node, port_num, (struct cvmx_pki_port_stats *)status);
+	} else {
+		cvmx_pip_get_port_stats(port_num, clear, status);
+	}
+}
+
+/**
+ * Configure the hardware CRC engine
+ *
+ * @param interface Interface to configure (0 or 1)
+ * @param invert_result
+ *                 Invert the result of the CRC
+ * @param reflect  Reflect
+ * @param initialization_vector
+ *                 CRC initialization vector
+ */
+static inline void cvmx_pip_config_crc(u64 interface, u64 invert_result, u64 reflect,
+				       u32 initialization_vector)
+{
+	/* Only CN38XX & CN58XX */
+}
+
+/**
+ * Clear all bits in a tag mask. This should be called on
+ * startup before any calls to cvmx_pip_tag_mask_set. Each bit
+ * set in the final mask represents a byte used in the packet for
+ * tag generation.
+ *
+ * @param mask_index Which tag mask to clear (0..3)
+ */
+static inline void cvmx_pip_tag_mask_clear(u64 mask_index)
+{
+	u64 index;
+	cvmx_pip_tag_incx_t pip_tag_incx;
+
+	pip_tag_incx.u64 = 0;
+	pip_tag_incx.s.en = 0;
+	for (index = mask_index * 16; index < (mask_index + 1) * 16; index++)
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+}
+
+/**
+ * Sets a range of bits in the tag mask. The tag mask is used
+ * when the cvmx_pip_port_tag_cfg_t tag_mode is non-zero.
+ * There are four separate masks that can be configured.
+ *
+ * @param mask_index Which tag mask to modify (0..3)
+ * @param offset     Offset into the bitmask to set bits at. Use the GCC macro
+ *                   offsetof() to determine the offsets into packet headers.
+ *                   For example, offsetof(ethhdr, protocol) returns the offset
+ *                   of the ethernet protocol field. The bitmask selects which bytes
+ *                   to include in the tag, with bit offset X selecting the byte at offset X
+ *                   from the beginning of the packet data.
+ * @param len        Number of bytes to include. Usually this is the sizeof()
+ *                   the field.
+ */
+static inline void cvmx_pip_tag_mask_set(u64 mask_index, u64 offset, u64 len)
+{
+	while (len--) {
+		cvmx_pip_tag_incx_t pip_tag_incx;
+		u64 index = mask_index * 16 + offset / 8;
+
+		pip_tag_incx.u64 = csr_rd(CVMX_PIP_TAG_INCX(index));
+		pip_tag_incx.s.en |= 0x80 >> (offset & 0x7);
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+		offset++;
+	}
+}
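+
+/*
+ * Usage sketch (not part of the original patch): include the EtherType
+ * bytes of the L2 header in tag mask 0. struct ethhdr and offsetof() are
+ * assumed to be available; any header field with a known offset works.
+ *
+ *	cvmx_pip_tag_mask_clear(0);
+ *	cvmx_pip_tag_mask_set(0, offsetof(struct ethhdr, h_proto), sizeof(u16));
+ */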
+
+/**
+ * Set byte count for Max-Sized and Min-Sized frame check.
+ *
+ * @param interface   Which interface to set the limit
+ * @param max_size    Byte count for Max-Size frame check
+ */
+static inline void cvmx_pip_set_frame_check(int interface, u32 max_size)
+{
+	cvmx_pip_frm_len_chkx_t frm_len;
+
+	/* If max_size is passed as 0, reset it to the default value. */
+	if (max_size < 1536)
+		max_size = 1536;
+
+	/* On CN68XX frame check is enabled for a pkind n and
+	 * PIP_PRT_CFG[len_chk_sel] selects which set of
+	 * MAXLEN/MINLEN to use.
+	 */
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int port;
+		int num_ports = cvmx_helper_ports_on_interface(interface);
+
+		for (port = 0; port < num_ports; port++) {
+			if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+				int ipd_port;
+
+				ipd_port = cvmx_helper_get_ipd_port(interface, port);
+				cvmx_pki_set_max_frm_len(ipd_port, max_size);
+			} else {
+				int pknd;
+				int sel;
+				cvmx_pip_prt_cfgx_t config;
+
+				pknd = cvmx_helper_get_pknd(interface, port);
+				config.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
+				sel = config.s.len_chk_sel;
+				frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(sel));
+				frm_len.s.maxlen = max_size;
+				csr_wr(CVMX_PIP_FRM_LEN_CHKX(sel), frm_len.u64);
+			}
+		}
+	} else if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		/* On cn6xxx and cn7xxx models, PIP_FRM_LEN_CHK0 applies
+		 * to all incoming traffic.
+		 */
+		frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(0));
+		frm_len.s.maxlen = max_size;
+		csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
+	}
+}
+
+/**
+ * Initialize Bit Select Extractor config. There are 8 bit positions and
+ * valids to be used when using the corresponding extractor.
+ *
+ * @param bit     Bit Select Extractor to use
+ * @param pos     Which position to update
+ * @param val     The value to update the position with
+ */
+static inline void cvmx_pip_set_bsel_pos(int bit, int pos, int val)
+{
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	if (bit < 0 || bit > 3) {
+		debug("ERROR: cvmx_pip_set_bsel_pos: Invalid Bit-Select Extractor (%d) passed\n",
+		      bit);
+		return;
+	}
+
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+	switch (pos) {
+	case 0:
+		bsel_pos.s.pos0_val = 1;
+		bsel_pos.s.pos0 = val & 0x7f;
+		break;
+	case 1:
+		bsel_pos.s.pos1_val = 1;
+		bsel_pos.s.pos1 = val & 0x7f;
+		break;
+	case 2:
+		bsel_pos.s.pos2_val = 1;
+		bsel_pos.s.pos2 = val & 0x7f;
+		break;
+	case 3:
+		bsel_pos.s.pos3_val = 1;
+		bsel_pos.s.pos3 = val & 0x7f;
+		break;
+	case 4:
+		bsel_pos.s.pos4_val = 1;
+		bsel_pos.s.pos4 = val & 0x7f;
+		break;
+	case 5:
+		bsel_pos.s.pos5_val = 1;
+		bsel_pos.s.pos5 = val & 0x7f;
+		break;
+	case 6:
+		bsel_pos.s.pos6_val = 1;
+		bsel_pos.s.pos6 = val & 0x7f;
+		break;
+	case 7:
+		bsel_pos.s.pos7_val = 1;
+		bsel_pos.s.pos7 = val & 0x7f;
+		break;
+	default:
+		debug("Warning: cvmx_pip_set_bsel_pos: Invalid pos(%d)\n", pos);
+		break;
+	}
+	csr_wr(CVMX_PIP_BSEL_EXT_POSX(bit), bsel_pos.u64);
+}
+
+/**
+ * Initialize offset and skip values to be used by the bit select extractor.
+ *
+ * @param bit	Bit Select Extractor to use
+ * @param offset	Offset to add to extractor mem addr to get final address
+ *			to lookup table.
+ * @param skip		Number of bytes to skip from start of packet 0-64
+ */
+static inline void cvmx_pip_bsel_config(int bit, int offset, int skip)
+{
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_cfg.s.offset = offset;
+	bsel_cfg.s.skip = skip;
+	csr_wr(CVMX_PIP_BSEL_EXT_CFGX(bit), bsel_cfg.u64);
+}
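+
+/*
+ * Usage sketch (hypothetical values, not part of the original patch):
+ * program bit select extractor 0 to skip a 14-byte L2 header and build
+ * its table index from bit 0 of each of the next eight bytes.
+ */
+static inline void cvmx_pip_example_bsel_setup(void)
+{
+	int pos;
+
+	cvmx_pip_bsel_config(0, 0, 14); /* no table offset, skip L2 header */
+	for (pos = 0; pos < 8; pos++)
+		cvmx_pip_set_bsel_pos(0, pos, pos * 8); /* bit 0 of byte pos */
+}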
+
+/**
+ * Get the entry for the Bit Select Extractor Table.
+ * @param work   pointer to work queue entry
+ * @return       Index of the Bit Select Extractor Table
+ */
+static inline int cvmx_pip_get_bsel_table_index(cvmx_wqe_t *work)
+{
+	int bit = cvmx_wqe_get_port(work) & 0x3;
+	/* Accumulates the Bit Select table index below */
+	int index = 0;
+	int y;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+
+	for (y = 0; y < 8; y++) {
+		char *ptr = (char *)cvmx_phys_to_ptr(work->packet_ptr.s.addr);
+		int bit_loc = 0;
+		int bit_val;
+
+		ptr += bsel_cfg.s.skip;
+		switch (y) {
+		case 0:
+			ptr += (bsel_pos.s.pos0 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos0 & 0x3);
+			break;
+		case 1:
+			ptr += (bsel_pos.s.pos1 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos1 & 0x3);
+			break;
+		case 2:
+			ptr += (bsel_pos.s.pos2 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos2 & 0x3);
+			break;
+		case 3:
+			ptr += (bsel_pos.s.pos3 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos3 & 0x3);
+			break;
+		case 4:
+			ptr += (bsel_pos.s.pos4 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos4 & 0x3);
+			break;
+		case 5:
+			ptr += (bsel_pos.s.pos5 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos5 & 0x3);
+			break;
+		case 6:
+			ptr += (bsel_pos.s.pos6 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos6 & 0x3);
+			break;
+		case 7:
+			ptr += (bsel_pos.s.pos7 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos7 & 0x3);
+			break;
+		}
+		bit_val = (*ptr >> bit_loc) & 1;
+		index |= bit_val << y;
+	}
+	index += bsel_cfg.s.offset;
+	index &= 0x1ff;
+	return index;
+}
+
+static inline int cvmx_pip_get_bsel_qos(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.qos;
+}
+
+static inline int cvmx_pip_get_bsel_grp(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.grp;
+}
+
+static inline int cvmx_pip_get_bsel_tt(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.tt;
+}
+
+static inline int cvmx_pip_get_bsel_tag(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	int port = cvmx_wqe_get_port(work);
+	int bit = port & 0x3;
+	int upper_tag = 0;
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_prt_tagx_t prt_tag;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+
+	prt_tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(port));
+	if (prt_tag.s.inc_prt_flag == 0)
+		upper_tag = bsel_cfg.s.upper_tag;
+	return bsel_tbl.s.tag | ((bsel_cfg.s.tag << 8) & 0xff00) | ((upper_tag << 16) & 0xffff0000);
+}
+
+#endif /*  __CVMX_PIP_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
new file mode 100644
index 000000000000..79b99b0bd7c2
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Resource management for PKI resources.
+ */
+
+#ifndef __CVMX_PKI_RESOURCES_H__
+#define __CVMX_PKI_RESOURCES_H__
+
+/**
+ * This function allocates/reserves a style from the pool of global styles per node.
+ * @param node	 node to allocate the style from.
+ * @param style	 style to allocate; if -1, the first available style from the
+ *		 style resource will be allocated. If the index is a positive
+ *		 number and in range, it will try to allocate the specified style.
+ * @return	 style number on success, -1 on failure.
+ */
+int cvmx_pki_style_alloc(int node, int style);
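+
+/*
+ * Usage sketch (not part of the original patch): grab any free style on
+ * node 0 and release it again:
+ *
+ *	int style = cvmx_pki_style_alloc(0, -1);
+ *
+ *	if (style >= 0)
+ *		cvmx_pki_style_free(0, style);
+ */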
+
+/**
+ * This function allocates/reserves a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to allocate the cluster group from.
+ * @param cl_grp	cluster group to allocate/reserve; if -1,
+ *			allocate any available cluster group.
+ * @return		cluster group number or -1 on failure
+ */
+int cvmx_pki_cluster_grp_alloc(int node, int cl_grp);
+
+/**
+ * This function allocates/reserves clusters from the per node
+ * cluster resources.
+ * @param node		node to allocate the clusters from.
+ * @param num_clusters	number of clusters to allocate
+ * @param cluster_mask	mask of clusters to allocate/reserve; if -1,
+ *			allocate any available clusters.
+ */
+int cvmx_pki_cluster_alloc(int node, int num_clusters, u64 *cluster_mask);
+
+/**
+ * This function allocates/reserves a pcam entry from a node.
+ * @param node		node to allocate the pcam entry from.
+ * @param index		index of the pcam entry (0-191); if -1,
+ *			allocate any available pcam entry.
+ * @param bank		pcam bank to allocate/reserve the pcam entry from
+ * @param cluster_mask	mask of clusters from which the pcam entry is needed.
+ * @return		pcam entry or -1 on failure
+ */
+int cvmx_pki_pcam_entry_alloc(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base offset in the qpg table. If -1, the first available
+ *			qpg base offset will be allocated. If base_offset is a positive
+ *			number and in range, it will try to allocate the specified base offset.
+ * @param count		number of consecutive qpg entries to allocate, starting
+ *			from the base offset.
+ * @return		qpg table base offset number on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_alloc(int node, int base_offset, int count);
+
+/**
+ * This function frees a style from the pool of global styles per node.
+ * @param node	 node to free the style from.
+ * @param style	 style to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_style_free(int node, int style);
+
+/**
+ * This function frees a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to free the cluster group from.
+ * @param cl_grp	cluster group to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_grp_free(int node, int cl_grp);
+
+/**
+ * This function frees QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base offset in the qpg table of the entries to free.
+ * @param count		number of consecutive qpg entries to free, starting
+ *			from the base offset.
+ * @return		0 on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_free(int node, int base_offset, int count);
+
+/**
+ * This function frees clusters from the per node
+ * cluster resources.
+ * @param node		node to free the clusters from.
+ * @param cluster_mask	mask of clusters to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_free(int node, u64 cluster_mask);
+
+/**
+ * This function frees a pcam entry from a node.
+ * @param node		node to free the pcam entry from.
+ * @param index		index of the pcam entry (0-191) to be freed.
+ * @param bank		pcam bank to free the pcam entry from
+ * @param cluster_mask	mask of clusters from which the pcam entry is freed.
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_pcam_entry_free(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves a bpid from the pool of global bpids per node.
+ * @param node	node to allocate the bpid from.
+ * @param bpid	bpid to allocate; if -1, the first available bpid from the
+ *		bpid resource will be allocated. If the index is a positive
+ *		number and in range, it will try to allocate the specified bpid.
+ * @return	bpid number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_bpid_alloc(int node, int bpid);
+
+/**
+ * This function frees a bpid from the pool of global bpids per node.
+ * @param node	 node to free the bpid from.
+ * @param bpid	 bpid to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_bpid_free(int node, int bpid);
+
+/**
+ * This function allocates/reserves an index from pool of global MTAG-IDX per node.
+ * @param node	node to allocate index from.
+ * @param idx	index  to allocate, if -1 it will be allocated
+ * @return	MTAG index number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_mtag_idx_alloc(int node, int idx);
+
+/**
+ * This function frees an index from the pool of global MTAG-IDX per node.
+ * @param node	 node to free the index from.
+ * @param idx	 MTAG index to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_mtag_idx_free(int node, int idx);
+
+/**
+ * This function frees all the PKI software resources
+ * (clusters, styles, qpg_entry, pcam_entry etc) for the specified node.
+ */
+void __cvmx_pki_global_rsrc_free(int node);
+
+#endif /*  __CVMX_PKI_RESOURCES_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
new file mode 100644
index 000000000000..c1feb55a1f01
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
@@ -0,0 +1,970 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Data unit.
+ */
+
+#ifndef __CVMX_PKI_H__
+#define __CVMX_PKI_H__
+
+#include "cvmx-fpa3.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+#include "cvmx-error.h"
+
+/* PKI AURA and BPID count are equal to FPA AURA count */
+#define CVMX_PKI_NUM_AURA	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_BPID	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_SSO_GROUP	       (cvmx_sso_num_xgrp())
+#define CVMX_PKI_NUM_CLUSTER_GROUP_MAX 1
+#define CVMX_PKI_NUM_CLUSTER_GROUP     (cvmx_pki_num_cl_grp())
+#define CVMX_PKI_NUM_CLUSTER	       (cvmx_pki_num_clusters())
+
+/* FIXME: Reduce some of these values, convert to routines XXX */
+#define CVMX_PKI_NUM_CHANNEL	    4096
+#define CVMX_PKI_NUM_PKIND	    64
+#define CVMX_PKI_NUM_INTERNAL_STYLE 256
+#define CVMX_PKI_NUM_FINAL_STYLE    64
+#define CVMX_PKI_NUM_QPG_ENTRY	    2048
+#define CVMX_PKI_NUM_MTAG_IDX	    (32 / 4) /* 32 registers grouped by 4*/
+#define CVMX_PKI_NUM_LTYPE	    32
+#define CVMX_PKI_NUM_PCAM_BANK	    2
+#define CVMX_PKI_NUM_PCAM_ENTRY	    192
+#define CVMX_PKI_NUM_FRAME_CHECK    2
+#define CVMX_PKI_NUM_BELTYPE	    32
+#define CVMX_PKI_MAX_FRAME_SIZE	    65535
+#define CVMX_PKI_FIND_AVAL_ENTRY    (-1)
+#define CVMX_PKI_CLUSTER_ALL	    0xf
+
+#ifdef CVMX_SUPPORT_SEPARATE_CLUSTER_CONFIG
+#define CVMX_PKI_TOTAL_PCAM_ENTRY                                                                  \
+	((CVMX_PKI_NUM_CLUSTER) * (CVMX_PKI_NUM_PCAM_BANK) * (CVMX_PKI_NUM_PCAM_ENTRY))
+#else
+#define CVMX_PKI_TOTAL_PCAM_ENTRY (CVMX_PKI_NUM_PCAM_BANK * CVMX_PKI_NUM_PCAM_ENTRY)
+#endif
+
+static inline unsigned int cvmx_pki_num_clusters(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 2;
+	return 4;
+}
+
+static inline unsigned int cvmx_pki_num_cl_grp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1;
+	return 0;
+}
+
+enum cvmx_pki_pkind_parse_mode {
+	CVMX_PKI_PARSE_LA_TO_LG = 0,  /* Parse LA(L2) to LG */
+	CVMX_PKI_PARSE_LB_TO_LG = 1,  /* Parse LB(custom) to LG */
+	CVMX_PKI_PARSE_LC_TO_LG = 3,  /* Parse LC(L3) to LG */
+	CVMX_PKI_PARSE_LG = 0x3f,     /* Parse LG */
+	CVMX_PKI_PARSE_NOTHING = 0x7f /* Parse nothing */
+};
+
+enum cvmx_pki_parse_mode_chg {
+	CVMX_PKI_PARSE_NO_CHG = 0x0,
+	CVMX_PKI_PARSE_SKIP_TO_LB = 0x1,
+	CVMX_PKI_PARSE_SKIP_TO_LC = 0x3,
+	CVMX_PKI_PARSE_SKIP_TO_LD = 0x7,
+	CVMX_PKI_PARSE_SKIP_TO_LG = 0x3f,
+	CVMX_PKI_PARSE_SKIP_ALL = 0x7f,
+};
+
+enum cvmx_pki_l2_len_mode { PKI_L2_LENCHK_EQUAL_GREATER = 0, PKI_L2_LENCHK_EQUAL_ONLY };
+
+enum cvmx_pki_cache_mode {
+	CVMX_PKI_OPC_MODE_STT = 0LL,	  /* All blocks write through DRAM,*/
+	CVMX_PKI_OPC_MODE_STF = 1LL,	  /* All blocks into L2 */
+	CVMX_PKI_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
+	CVMX_PKI_OPC_MODE_STF2_STT = 3LL  /* 1st, 2nd blocks L2, rest DRAM */
+};
+
+/**
+ * Tag type definitions
+ */
+enum cvmx_sso_tag_type {
+	CVMX_SSO_TAG_TYPE_ORDERED = 0L,
+	CVMX_SSO_TAG_TYPE_ATOMIC = 1L,
+	CVMX_SSO_TAG_TYPE_UNTAGGED = 2L,
+	CVMX_SSO_TAG_TYPE_EMPTY = 3L
+};
+
+enum cvmx_pki_qpg_qos {
+	CVMX_PKI_QPG_QOS_NONE = 0,
+	CVMX_PKI_QPG_QOS_VLAN,
+	CVMX_PKI_QPG_QOS_MPLS,
+	CVMX_PKI_QPG_QOS_DSA_SRC,
+	CVMX_PKI_QPG_QOS_DIFFSERV,
+	CVMX_PKI_QPG_QOS_HIGIG,
+};
+
+enum cvmx_pki_wqe_vlan { CVMX_PKI_USE_FIRST_VLAN = 0, CVMX_PKI_USE_SECOND_VLAN };
+
+/**
+ * Controls how the PKI statistics counters are handled
+ * The PKI_STAT*_X registers can be indexed either by port kind (pkind), or
+ * final style. (Does not apply to the PKI_STAT_INB* registers.)
+ *    0 = X represents the packet's pkind
+ *    1 = X represents the low 6 bits of the packet's final style
+ */
+enum cvmx_pki_stats_mode { CVMX_PKI_STAT_MODE_PKIND, CVMX_PKI_STAT_MODE_STYLE };
+
+enum cvmx_pki_fpa_wait { CVMX_PKI_DROP_PKT, CVMX_PKI_WAIT_PKT };
+
+#define PKI_BELTYPE_E__NONE_M 0x0
+#define PKI_BELTYPE_E__MISC_M 0x1
+#define PKI_BELTYPE_E__IP4_M  0x2
+#define PKI_BELTYPE_E__IP6_M  0x3
+#define PKI_BELTYPE_E__TCP_M  0x4
+#define PKI_BELTYPE_E__UDP_M  0x5
+#define PKI_BELTYPE_E__SCTP_M 0x6
+#define PKI_BELTYPE_E__SNAP_M 0x7
+
+/* PKI_BELTYPE_E_t */
+enum cvmx_pki_beltype {
+	CVMX_PKI_BELTYPE_NONE = PKI_BELTYPE_E__NONE_M,
+	CVMX_PKI_BELTYPE_MISC = PKI_BELTYPE_E__MISC_M,
+	CVMX_PKI_BELTYPE_IP4 = PKI_BELTYPE_E__IP4_M,
+	CVMX_PKI_BELTYPE_IP6 = PKI_BELTYPE_E__IP6_M,
+	CVMX_PKI_BELTYPE_TCP = PKI_BELTYPE_E__TCP_M,
+	CVMX_PKI_BELTYPE_UDP = PKI_BELTYPE_E__UDP_M,
+	CVMX_PKI_BELTYPE_SCTP = PKI_BELTYPE_E__SCTP_M,
+	CVMX_PKI_BELTYPE_SNAP = PKI_BELTYPE_E__SNAP_M,
+	CVMX_PKI_BELTYPE_MAX = CVMX_PKI_BELTYPE_SNAP
+};
+
+struct cvmx_pki_frame_len {
+	u16 maxlen;
+	u16 minlen;
+};
+
+struct cvmx_pki_tag_fields {
+	u64 layer_g_src : 1;
+	u64 layer_f_src : 1;
+	u64 layer_e_src : 1;
+	u64 layer_d_src : 1;
+	u64 layer_c_src : 1;
+	u64 layer_b_src : 1;
+	u64 layer_g_dst : 1;
+	u64 layer_f_dst : 1;
+	u64 layer_e_dst : 1;
+	u64 layer_d_dst : 1;
+	u64 layer_c_dst : 1;
+	u64 layer_b_dst : 1;
+	u64 input_port : 1;
+	u64 mpls_label : 1;
+	u64 first_vlan : 1;
+	u64 second_vlan : 1;
+	u64 ip_prot_nexthdr : 1;
+	u64 tag_sync : 1;
+	u64 tag_spi : 1;
+	u64 tag_gtp : 1;
+	u64 tag_vni : 1;
+};
+
+struct cvmx_pki_pkind_parse {
+	u64 mpls_en : 1;
+	u64 inst_hdr : 1;
+	u64 lg_custom : 1;
+	u64 fulc_en : 1;
+	u64 dsa_en : 1;
+	u64 hg2_en : 1;
+	u64 hg_en : 1;
+};
+
+struct cvmx_pki_pool_config {
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+struct cvmx_pki_qpg_config {
+	int qpg_base;
+	int port_add;
+	int aura_num;
+	int grp_ok;
+	int grp_bad;
+	int grptag_ok;
+	int grptag_bad;
+};
+
+struct cvmx_pki_aura_config {
+	int aura_num;
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa3_gaura_t aura;
+	int buffer_count;
+};
+
+struct cvmx_pki_cluster_grp_config {
+	int grp_num;
+	u64 cluster_mask; /* Bit mask of cluster assigned to this cluster group */
+};
+
+struct cvmx_pki_sso_grp_config {
+	int group;
+	int priority;
+	int weight;
+	int affinity;
+	u64 core_mask;
+	u8 core_mask_set;
+};
+
+/* This is a per-style structure for configuring port parameters.
+ * It is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that
+ * modifying that style will modify the respective parameters for
+ * all the ports which are using this style.
+ */
+struct cvmx_pki_style_parm {
+	bool ip6_udp_opt;
+	bool lenerr_en;
+	bool maxerr_en;
+	bool minerr_en;
+	u8 lenerr_eqpad;
+	u8 minmax_sel;
+	bool qpg_dis_grptag;
+	bool fcs_strip;
+	bool fcs_chk;
+	bool rawdrp;
+	bool force_drop;
+	bool nodrop;
+	bool qpg_dis_padd;
+	bool qpg_dis_grp;
+	bool qpg_dis_aura;
+	u16 qpg_base;
+	enum cvmx_pki_qpg_qos qpg_qos;
+	u8 qpg_port_sh;
+	u8 qpg_port_msb;
+	u8 apad_nip;
+	u8 wqe_vs;
+	enum cvmx_sso_tag_type tag_type;
+	bool pkt_lend;
+	u8 wqe_hsz;
+	u16 wqe_skip;
+	u16 first_skip;
+	u16 later_skip;
+	enum cvmx_pki_cache_mode cache_mode;
+	u8 dis_wq_dat;
+	u64 mbuff_size;
+	bool len_lg;
+	bool len_lf;
+	bool len_le;
+	bool len_ld;
+	bool len_lc;
+	bool len_lb;
+	bool csum_lg;
+	bool csum_lf;
+	bool csum_le;
+	bool csum_ld;
+	bool csum_lc;
+	bool csum_lb;
+};
+
+/* This is a per-style structure for configuring a port's tag configuration.
+ * It is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that modifying
+ * that style will modify the respective parameters for all the ports
+ * which are using this style.
+ */
+enum cvmx_pki_mtag_ptrsel {
+	CVMX_PKI_MTAG_PTRSEL_SOP = 0,
+	CVMX_PKI_MTAG_PTRSEL_LA = 8,
+	CVMX_PKI_MTAG_PTRSEL_LB = 9,
+	CVMX_PKI_MTAG_PTRSEL_LC = 10,
+	CVMX_PKI_MTAG_PTRSEL_LD = 11,
+	CVMX_PKI_MTAG_PTRSEL_LE = 12,
+	CVMX_PKI_MTAG_PTRSEL_LF = 13,
+	CVMX_PKI_MTAG_PTRSEL_LG = 14,
+	CVMX_PKI_MTAG_PTRSEL_VL = 15,
+};
+
+struct cvmx_pki_mask_tag {
+	bool enable;
+	int base;   /* CVMX_PKI_MTAG_PTRSEL_XXX */
+	int offset; /* Offset from base. */
+	u64 val;    /* Bitmask: 1 = enabled, 0 = disabled,
+		     * for each byte in the 64-byte array.
+		     */
+};
+
+struct cvmx_pki_style_tag_cfg {
+	struct cvmx_pki_tag_fields tag_fields;
+	struct cvmx_pki_mask_tag mask_tag[4];
+};
+
+struct cvmx_pki_style_config {
+	struct cvmx_pki_style_parm parm_cfg;
+	struct cvmx_pki_style_tag_cfg tag_cfg;
+};
+
+struct cvmx_pki_pkind_config {
+	u8 cluster_grp;
+	bool fcs_pres;
+	struct cvmx_pki_pkind_parse parse_en;
+	enum cvmx_pki_pkind_parse_mode initial_parse_mode;
+	u8 fcs_skip;
+	u8 inst_skip;
+	int initial_style;
+	bool custom_l2_hdr;
+	u8 l2_scan_offset;
+	u64 lg_scan_offset;
+};
+
+struct cvmx_pki_port_config {
+	struct cvmx_pki_pkind_config pkind_cfg;
+	struct cvmx_pki_style_config style_cfg;
+};
+
+struct cvmx_pki_global_parse {
+	u64 virt_pen : 1;
+	u64 clg_pen : 1;
+	u64 cl2_pen : 1;
+	u64 l4_pen : 1;
+	u64 il3_pen : 1;
+	u64 l3_pen : 1;
+	u64 mpls_pen : 1;
+	u64 fulc_pen : 1;
+	u64 dsa_pen : 1;
+	u64 hg_pen : 1;
+};
+
+struct cvmx_pki_tag_sec {
+	u16 dst6;
+	u16 src6;
+	u16 dst;
+	u16 src;
+};
+
+struct cvmx_pki_global_config {
+	u64 cluster_mask[CVMX_PKI_NUM_CLUSTER_GROUP_MAX];
+	enum cvmx_pki_stats_mode stat_mode;
+	enum cvmx_pki_fpa_wait fpa_wait;
+	struct cvmx_pki_global_parse gbl_pen;
+	struct cvmx_pki_tag_sec tag_secret;
+	struct cvmx_pki_frame_len frm_len[CVMX_PKI_NUM_FRAME_CHECK];
+	enum cvmx_pki_beltype ltype_map[CVMX_PKI_NUM_BELTYPE];
+	int pki_enable;
+};
+
+#define CVMX_PKI_PCAM_TERM_E_NONE_M	 0x0
+#define CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M 0x2
+#define CVMX_PKI_PCAM_TERM_E_HIGIGD_M	 0x4
+#define CVMX_PKI_PCAM_TERM_E_HIGIG_M	 0x5
+#define CVMX_PKI_PCAM_TERM_E_SMACH_M	 0x8
+#define CVMX_PKI_PCAM_TERM_E_SMACL_M	 0x9
+#define CVMX_PKI_PCAM_TERM_E_DMACH_M	 0xA
+#define CVMX_PKI_PCAM_TERM_E_DMACL_M	 0xB
+#define CVMX_PKI_PCAM_TERM_E_GLORT_M	 0x12
+#define CVMX_PKI_PCAM_TERM_E_DSA_M	 0x13
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M	 0x18
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M	 0x19
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M	 0x1A
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M	 0x1B
+#define CVMX_PKI_PCAM_TERM_E_MPLS0_M	 0x1E
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M	 0x1F
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M	 0x20
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPML_M	 0x21
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M	 0x22
+#define CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M	 0x23
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M	 0x24
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M	 0x25
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPML_M	 0x26
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M	 0x27
+#define CVMX_PKI_PCAM_TERM_E_LD_VNI_M	 0x28
+#define CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M 0x2B
+#define CVMX_PKI_PCAM_TERM_E_LF_SPI_M	 0x2E
+#define CVMX_PKI_PCAM_TERM_E_L4_SPORT_M	 0x2f
+#define CVMX_PKI_PCAM_TERM_E_L4_PORT_M	 0x30
+#define CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M 0x39
+
+enum cvmx_pki_term {
+	CVMX_PKI_PCAM_TERM_NONE = CVMX_PKI_PCAM_TERM_E_NONE_M,
+	CVMX_PKI_PCAM_TERM_L2_CUSTOM = CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M,
+	CVMX_PKI_PCAM_TERM_HIGIGD = CVMX_PKI_PCAM_TERM_E_HIGIGD_M,
+	CVMX_PKI_PCAM_TERM_HIGIG = CVMX_PKI_PCAM_TERM_E_HIGIG_M,
+	CVMX_PKI_PCAM_TERM_SMACH = CVMX_PKI_PCAM_TERM_E_SMACH_M,
+	CVMX_PKI_PCAM_TERM_SMACL = CVMX_PKI_PCAM_TERM_E_SMACL_M,
+	CVMX_PKI_PCAM_TERM_DMACH = CVMX_PKI_PCAM_TERM_E_DMACH_M,
+	CVMX_PKI_PCAM_TERM_DMACL = CVMX_PKI_PCAM_TERM_E_DMACL_M,
+	CVMX_PKI_PCAM_TERM_GLORT = CVMX_PKI_PCAM_TERM_E_GLORT_M,
+	CVMX_PKI_PCAM_TERM_DSA = CVMX_PKI_PCAM_TERM_E_DSA_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE0 = CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE1 = CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE2 = CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE3 = CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M,
+	CVMX_PKI_PCAM_TERM_MPLS0 = CVMX_PKI_PCAM_TERM_E_MPLS0_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPHH = CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPMH = CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPML = CVMX_PKI_PCAM_TERM_E_L3_SIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPLL = CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M,
+	CVMX_PKI_PCAM_TERM_L3_FLAGS = CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPHH = CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPMH = CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPML = CVMX_PKI_PCAM_TERM_E_L3_DIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPLL = CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M,
+	CVMX_PKI_PCAM_TERM_LD_VNI = CVMX_PKI_PCAM_TERM_E_LD_VNI_M,
+	CVMX_PKI_PCAM_TERM_IL3_FLAGS = CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_LF_SPI = CVMX_PKI_PCAM_TERM_E_LF_SPI_M,
+	CVMX_PKI_PCAM_TERM_L4_PORT = CVMX_PKI_PCAM_TERM_E_L4_PORT_M,
+	CVMX_PKI_PCAM_TERM_L4_SPORT = CVMX_PKI_PCAM_TERM_E_L4_SPORT_M,
+	CVMX_PKI_PCAM_TERM_LG_CUSTOM = CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M
+};
+
+#define CVMX_PKI_DMACH_SHIFT	  32
+#define CVMX_PKI_DMACH_MASK	  cvmx_build_mask(16)
+#define CVMX_PKI_DMACL_MASK	  CVMX_PKI_DATA_MASK_32
+#define CVMX_PKI_DATA_MASK_32	  cvmx_build_mask(32)
+#define CVMX_PKI_DATA_MASK_16	  cvmx_build_mask(16)
+#define CVMX_PKI_DMAC_MATCH_EXACT cvmx_build_mask(48)
+
+struct cvmx_pki_pcam_input {
+	u64 style;
+	u64 style_mask; /* bits: 1 - match, 0 - don't care */
+	enum cvmx_pki_term field;
+	u32 field_mask; /* bits: 1 - match, 0 - don't care */
+	u64 data;
+	u64 data_mask; /* bits: 1 - match, 0 - don't care */
+};
+
+struct cvmx_pki_pcam_action {
+	enum cvmx_pki_parse_mode_chg parse_mode_chg;
+	enum cvmx_pki_layer_type layer_type_set;
+	int style_add;
+	int parse_flag_set;
+	int pointer_advance;
+};
+
+struct cvmx_pki_pcam_config {
+	int in_use;
+	int entry_num;
+	u64 cluster_mask;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+};
+
+/**
+ * Status statistics for a port
+ */
+struct cvmx_pki_port_stats {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+};
+
+/**
+ * PKI Packet Instruction Header Structure (PKI_INST_HDR_S)
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 w : 1;    /* INST_HDR size: 0 = 2 bytes, 1 = 4 or 8 bytes */
+		u64 raw : 1;  /* RAW packet indicator in WQE[RAW]: 1 = enable */
+		u64 utag : 1; /* Use INST_HDR[TAG] to compute WQE[TAG]: 1 = enable */
+		u64 uqpg : 1; /* Use INST_HDR[QPG] to compute QPG: 1 = enable */
+		u64 rsvd1 : 1;
+		u64 pm : 3; /* Packet parsing mode. Legal values = 0x0..0x7 */
+		u64 sl : 8; /* Number of bytes in INST_HDR. */
+		/* The following fields are not present, if INST_HDR[W] = 0: */
+		u64 utt : 1; /* Use INST_HDR[TT] to compute WQE[TT]: 1 = enable */
+		u64 tt : 2;  /* INST_HDR[TT] => WQE[TT], if INST_HDR[UTT] = 1 */
+		u64 rsvd2 : 2;
+		u64 qpg : 11; /* INST_HDR[QPG] => QPG, if INST_HDR[UQPG] = 1 */
+		u64 tag : 32; /* INST_HDR[TAG] => WQE[TAG], if INST_HDR[UTAG] = 1 */
+	} s;
+} cvmx_pki_inst_hdr_t;
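+
+/*
+ * Usage sketch (illustrative, not part of the original header): build a
+ * PKI instruction header that supplies the WQE tag directly. The field
+ * values below are hypothetical.
+ *
+ *	cvmx_pki_inst_hdr_t hdr;
+ *
+ *	hdr.u64 = 0;
+ *	hdr.s.w = 1;				// 4- or 8-byte INST_HDR
+ *	hdr.s.utag = 1;				// WQE[TAG] from INST_HDR[TAG]
+ *	hdr.s.utt = 1;				// WQE[TT] from INST_HDR[TT]
+ *	hdr.s.tt = CVMX_SSO_TAG_TYPE_ORDERED;
+ *	hdr.s.tag = 0x1234;
+ */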
+
+/**
+ * This function assigns the clusters to a group; a pkind can later be
+ * configured to use that group, depending on the number of clusters the
+ * pkind would use. A given cluster can only be enabled in a single cluster
+ * group. The number of clusters assigned to a group determines how many
+ * engines can work in parallel to process the packet. Each cluster can
+ * process x MPPS.
+ *
+ * @param node	Node
+ * @param cluster_group Group to attach clusters to.
+ * @param cluster_mask The mask of clusters which need to be assigned to the group.
+ */
+static inline int cvmx_pki_attach_cluster_to_group(int node, u64 cluster_group, u64 cluster_mask)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cluster_group >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: config cluster group %d", (int)cluster_group);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group));
+	pki_cl_grp.s.clusters = cluster_mask;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group), pki_cl_grp.u64);
+	return 0;
+}
+
+static inline void cvmx_pki_write_global_parse(int node, struct cvmx_pki_global_parse gbl_pen)
+{
+	cvmx_pki_gbl_pen_t gbl_pen_reg;
+
+	gbl_pen_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_GBL_PEN);
+	gbl_pen_reg.s.virt_pen = gbl_pen.virt_pen;
+	gbl_pen_reg.s.clg_pen = gbl_pen.clg_pen;
+	gbl_pen_reg.s.cl2_pen = gbl_pen.cl2_pen;
+	gbl_pen_reg.s.l4_pen = gbl_pen.l4_pen;
+	gbl_pen_reg.s.il3_pen = gbl_pen.il3_pen;
+	gbl_pen_reg.s.l3_pen = gbl_pen.l3_pen;
+	gbl_pen_reg.s.mpls_pen = gbl_pen.mpls_pen;
+	gbl_pen_reg.s.fulc_pen = gbl_pen.fulc_pen;
+	gbl_pen_reg.s.dsa_pen = gbl_pen.dsa_pen;
+	gbl_pen_reg.s.hg_pen = gbl_pen.hg_pen;
+	cvmx_write_csr_node(node, CVMX_PKI_GBL_PEN, gbl_pen_reg.u64);
+}
+
+static inline void cvmx_pki_write_tag_secret(int node, struct cvmx_pki_tag_sec tag_secret)
+{
+	cvmx_pki_tag_secret_t tag_secret_reg;
+
+	tag_secret_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_TAG_SECRET);
+	tag_secret_reg.s.dst6 = tag_secret.dst6;
+	tag_secret_reg.s.src6 = tag_secret.src6;
+	tag_secret_reg.s.dst = tag_secret.dst;
+	tag_secret_reg.s.src = tag_secret.src;
+	cvmx_write_csr_node(node, CVMX_PKI_TAG_SECRET, tag_secret_reg.u64);
+}
+
+static inline void cvmx_pki_write_ltype_map(int node, enum cvmx_pki_layer_type layer,
+					    enum cvmx_pki_beltype backend)
+{
+	cvmx_pki_ltypex_map_t ltype_map;
+
+	if (layer > CVMX_PKI_LTYPE_E_MAX || backend > CVMX_PKI_BELTYPE_MAX) {
+		debug("ERROR: invalid ltype beltype mapping\n");
+		return;
+	}
+	ltype_map.u64 = cvmx_read_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer));
+	ltype_map.s.beltype = backend;
+	cvmx_write_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer), ltype_map.u64);
+}
+
+/**
+ * This function enables the cluster group to start parsing.
+ *
+ * @param node    Node number.
+ * @param cl_grp  Cluster group to enable parsing.
+ */
+static inline int cvmx_pki_parse_enable(int node, unsigned int cl_grp)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cl_grp >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: pki parse en group %d", (int)cl_grp);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp));
+	pki_cl_grp.s.pena = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp), pki_cl_grp.u64);
+	return 0;
+}
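+
+/*
+ * Usage sketch (illustrative only): assign clusters 0 and 1 to cluster
+ * group 0 and start parsing on it. The node value is hypothetical.
+ *
+ *	if (cvmx_pki_attach_cluster_to_group(node, 0, 0x3) == 0)
+ *		cvmx_pki_parse_enable(node, 0);
+ */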
+
+/**
+ * This function enables the PKI to send bpid level backpressure to CN78XX inputs.
+ *
+ * @param node Node number.
+ */
+static inline void cvmx_pki_enable_backpressure(int node)
+{
+	cvmx_pki_buf_ctl_t pki_buf_ctl;
+
+	pki_buf_ctl.u64 = cvmx_read_csr_node(node, CVMX_PKI_BUF_CTL);
+	pki_buf_ctl.s.pbp_en = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_BUF_CTL, pki_buf_ctl.u64);
+}
+
+/**
+ * Clear the statistics counters for a port.
+ *
+ * @param node Node number.
+ * @param port Port number (ipd_port) to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ */
+void cvmx_pki_clear_port_stats(int node, u64 port);
+
+/**
+ * Get the status counters for index from PKI.
+ *
+ * @param node	  Node number.
+ * @param index   PKIND number, if PKI_STATS_CTL:mode = 0 or
+ *     style(flow) number, if PKI_STATS_CTL:mode = 1
+ * @param status  Where to put the results.
+ */
+void cvmx_pki_get_stats(int node, int index, struct cvmx_pki_port_stats *status);
+
+/**
+ * Get the statistics counters for a port.
+ *
+ * @param node	 Node number
+ * @param port   Port number (ipd_port) to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_port_stats(int node, u64 port, struct cvmx_pki_port_stats *status)
+{
+	int xipd = cvmx_helper_node_to_ipd_port(node, port);
+	int xiface = cvmx_helper_get_interface_num(xipd);
+	int index = cvmx_helper_get_interface_index_num(port);
+	int pknd = cvmx_helper_get_pknd(xiface, index);
+
+	cvmx_pki_get_stats(node, pknd, status);
+}
+
+/**
+ * Get the statistics counters for a flow represented by style in PKI.
+ *
+ * @param node Node number.
+ * @param style_num Style number to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 1 for collecting per style/flow stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_flow_stats(int node, u64 style_num,
+					   struct cvmx_pki_port_stats *status)
+{
+	cvmx_pki_get_stats(node, style_num, status);
+}
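+
+/*
+ * Usage sketch (illustrative only): read the per-pkind counters for one
+ * port, assuming PKI_STATS_CTL:mode is 0. "node" and "port" are
+ * hypothetical values.
+ *
+ *	struct cvmx_pki_port_stats stats;
+ *
+ *	cvmx_pki_get_port_stats(node, port, &stats);
+ *	printf("rx %llu packets, %llu dropped\n",
+ *	       (unsigned long long)stats.packets,
+ *	       (unsigned long long)stats.dropped_packets);
+ */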
+
+/**
+ * Show integrated PKI configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_config_dump(unsigned int node);
+
+/**
+ * Show integrated PKI statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_stats_dump(unsigned int node);
+
+/**
+ * Clear PKI statistics.
+ *
+ * @param node	   node number
+ */
+void cvmx_pki_stats_clear(unsigned int node);
+
+/**
+ * This function enables PKI.
+ *
+ * @param node	 node to enable pki in.
+ */
+void cvmx_pki_enable(int node);
+
+/**
+ * This function disables PKI.
+ *
+ * @param node	node to disable pki in.
+ */
+void cvmx_pki_disable(int node);
+
+/**
+ * This function soft resets PKI.
+ *
+ * @param node	node to enable pki in.
+ */
+void cvmx_pki_reset(int node);
+
+/**
+ * This function sets the clusters in PKI.
+ *
+ * @param node	node to set clusters in.
+ */
+int cvmx_pki_setup_clusters(int node);
+
+/**
+ * This function reads global configuration of PKI block.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to read global configuration
+ */
+void cvmx_pki_read_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
+
+/**
+ * This function writes global configuration of PKI into hw.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to global configuration
+ */
+void cvmx_pki_write_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
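+
+/*
+ * Usage sketch (illustrative only): read-modify-write the global
+ * configuration to switch the statistics counters to per-style mode:
+ *
+ *	struct cvmx_pki_global_config gbl;
+ *
+ *	cvmx_pki_read_global_config(node, &gbl);
+ *	gbl.stat_mode = CVMX_PKI_STAT_MODE_STYLE;
+ *	cvmx_pki_write_global_config(node, &gbl);
+ */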
+
+/**
+ * This function reads per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing pkind configuration read
+ *     from hardware.
+ */
+int cvmx_pki_read_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
+
+/**
+ * This function writes per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing pkind configuration that
+ *     needs to be written to hardware.
+ */
+int cvmx_pki_write_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
+
+/**
+ * This function reads parameters associated with tag configuration in hardware.
+ *
+ * @param node	 Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_read_tag_config(int node, int style, u64 cluster_mask,
+			      struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function writes/configures parameters associated with tag
+ * configuration in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_write_tag_config(int node, int style, u64 cluster_mask,
+			       struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function reads parameters associated with style in hardware.
+ *
+ * @param node	Node number.
+ * @param style  Style to read from.
+ * @param cluster_mask  Mask of clusters style belongs to.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_read_style_config(int node, int style, u64 cluster_mask,
+				struct cvmx_pki_style_config *style_cfg);
+
+/**
+ * This function writes/configures parameters associated with style in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_write_style_config(int node, u64 style, u64 cluster_mask,
+				 struct cvmx_pki_style_config *style_cfg);
+
+/**
+ * This function reads the qpg entry at the specified offset from the qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to read from.
+ * @param qpg_cfg  Pointer to structure containing qpg values
+ */
+int cvmx_pki_read_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function writes the qpg entry at the specified offset in the qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to write to.
+ * @param qpg_cfg  Pointer to structure containing qpg values.
+ */
+void cvmx_pki_write_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
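+
+/*
+ * Usage sketch (illustrative only, assuming a 0-on-success return from
+ * the read): retarget the aura of one qpg table entry. "offset" and
+ * "new_aura" are hypothetical.
+ *
+ *	struct cvmx_pki_qpg_config qpg;
+ *
+ *	if (cvmx_pki_read_qpg_entry(node, offset, &qpg) == 0) {
+ *		qpg.aura_num = new_aura;
+ *		cvmx_pki_write_qpg_entry(node, offset, &qpg);
+ *	}
+ */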
+
+/**
+ * This function writes a pcam entry at the given offset in the pcam table in hardware
+ *
+ * @param node  Node number.
+ * @param index	 Offset in pcam table.
+ * @param cluster_mask  Mask of clusters in which to write pcam entry.
+ * @param input  Input keys to pcam match passed as struct.
+ * @param action  PCAM match action passed as struct
+ */
+int cvmx_pki_pcam_write_entry(int node, int index, u64 cluster_mask,
+			      struct cvmx_pki_pcam_input input, struct cvmx_pki_pcam_action action);
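+
+/*
+ * Usage sketch (illustrative only): match the upper 16 bits of a
+ * destination MAC with the DMAC-high term and bump the style. All
+ * values below are hypothetical.
+ *
+ *	struct cvmx_pki_pcam_input input = {};
+ *	struct cvmx_pki_pcam_action action = {};
+ *
+ *	input.field = CVMX_PKI_PCAM_TERM_DMACH;
+ *	input.field_mask = 0xff;
+ *	input.data = 0x0102ull << CVMX_PKI_DMACH_SHIFT;
+ *	input.data_mask = CVMX_PKI_DMACH_MASK << CVMX_PKI_DMACH_SHIFT;
+ *	action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+ *	action.style_add = 1;
+ *	cvmx_pki_pcam_write_entry(node, index, cluster_mask, input, action);
+ */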
+/**
+ * Configures the channel which will receive backpressure from the specified bpid.
+ * Each channel listens for backpressure on a specific bpid.
+ * Each bpid can backpressure multiple channels.
+ * @param node  Node number.
+ * @param bpid  BPID from which channel will receive backpressure.
+ * @param channel  Channel number to receive backpressure.
+ */
+int cvmx_pki_write_channel_bpid(int node, int channel, int bpid);
+
+/**
+ * Configures the bpid on which the specified aura will
+ * assert backpressure.
+ * Each bpid receives backpressure from auras.
+ * Multiple auras can backpressure a single bpid.
+ * @param node  Node number.
+ * @param aura  Aura number which will assert backpressure on that bpid.
+ * @param bpid  BPID to assert backpressure on.
+ */
+int cvmx_pki_write_aura_bpid(int node, int aura, int bpid);
+
+/**
+ * Enables/disables QoS (RED drop, tail drop & backpressure) for the PKI aura.
+ *
+ * @param node  Node number
+ * @param aura  Aura to enable/disable QoS on.
+ * @param ena_red  Enable/disable RED drop between pass and drop level
+ *    1-enable 0-disable
+ * @param ena_drop  Enable/disable tail drop when the max drop level is exceeded
+ *    1-enable 0-disable
+ * @param ena_bp  Enable/disable asserting backpressure on bpid when
+ *    the max drop level is exceeded.
+ *    1-enable 0-disable
+ */
+int cvmx_pki_enable_aura_qos(int node, int aura, bool ena_red, bool ena_drop, bool ena_bp);
+
+/**
+ * This function returns the initial style used by the given pkind.
+ *
+ * @param node  Node number.
+ * @param pkind  PKIND number.
+ */
+int cvmx_pki_get_pkind_style(int node, int pkind);
+
+/**
+ * This function sets the wqe buffer mode. The first packet data buffer can
+ * reside either in the same buffer as the wqe OR it can go in a separate
+ * buffer. If the latter mode is used, make sure software allocates enough
+ * buffers to have the wqe separate from the packet data.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param pkt_outside_wqe
+ *    0 = The packet link pointer will be at word [FIRST_SKIP] immediately
+ *    followed by packet data, in the same buffer as the work queue entry.
+ *    1 = The packet link pointer will be at word [FIRST_SKIP] in a new
+ *    buffer separate from the work queue entry. Words following the
+ *    WQE in the same cache line will be zeroed, other lines in the
+ *    buffer will not be modified and will retain stale data (from the
+ *    buffer's previous use). This setting may decrease the peak PKI
+ *    performance by up to half on small packets.
+ */
+void cvmx_pki_set_wqe_mode(int node, u64 style, bool pkt_outside_wqe);
+
+/**
+ * This function sets the packet mode of all ports and styles to little-endian.
+ * It changes write operations of packet data to L2C to
+ * be in little-endian. It does not change the WQE header format, which is
+ * properly endian-neutral.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ */
+void cvmx_pki_set_little_endian(int node, u64 style);
+
+/**
+ * Enables/Disables L2 length error check and max & min frame length checks.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to enable/disable error checks for.
+ * @param l2len_err	 L2 length error check enable.
+ * @param maxframe_err	Max frame error check enable.
+ * @param minframe_err	Min frame error check enable.
+ *    1 -- Enable error checks
+ *    0 -- Disable error checks
+ */
+void cvmx_pki_endis_l2_errs(int node, int pknd, bool l2len_err, bool maxframe_err,
+			    bool minframe_err);
+
+/**
+ * Enables/Disables fcs check and fcs stripping on the pkind.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to apply settings on.
+ * @param fcs_chk  Enable/disable fcs check.
+ *    1 -- enable fcs error check.
+ *    0 -- disable fcs error check.
+ * @param fcs_strip	 Strip L2 FCS bytes from packet, decrease WQE[LEN] by 4 bytes
+ *    1 -- strip L2 FCS.
+ *    0 -- Do not strip L2 FCS.
+ */
+void cvmx_pki_endis_fcs_check(int node, int pknd, bool fcs_chk, bool fcs_strip);
+
+/**
+ * This function shows the qpg table entries, read directly from hardware.
+ *
+ * @param node  Node number.
+ * @param num_entry  Number of entries to print.
+ */
+void cvmx_pki_show_qpg_entries(int node, u16 num_entry);
+
+/**
+ * This function shows the pcam table in raw format read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_pcam_entries(int node);
+
+/**
+ * This function shows the valid PCAM entries in readable format,
+ * read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_valid_pcam_entries(int node);
+
+/**
+ * This function shows the pkind attributes in readable format,
+ * read directly from hardware.
+ * @param node  Node number.
+ * @param pkind  PKIND number to print.
+ */
+void cvmx_pki_show_pkind_attributes(int node, int pkind);
+
+/**
+ * @INTERNAL
+ * This function is called by cvmx_helper_shutdown() to extract all FPA buffers
+ * out of the PKI. After this function completes, all FPA buffers that were
+ * prefetched by PKI will be in the appropriate FPA pool.
+ * This function does not reset the PKI.
+ * WARNING: It is very important that PKI be reset soon after a call to this function.
+ *
+ * @param node  Node number.
+ */
+void __cvmx_pki_free_ptr(int node);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
new file mode 100644
index 000000000000..1fb49b3fb6de
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_INTERNAL_PORTS_RANGE__
+#define __CVMX_INTERNAL_PORTS_RANGE__
+
+/*
+ * Allocates a block of internal ports for the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ * @param  count      the number of internal ports requested
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_alloc(int interface, int port, u64 count);
+
+/*
+ * Free the internal ports associated with the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_free(int interface, int port);
+
+/*
+ * Frees up all the allocated internal ports.
+ */
+void cvmx_pko_internal_ports_range_free_all(void);
+
+void cvmx_pko_internal_ports_range_show(void);
+
+int __cvmx_pko_internal_ports_range_init(void);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
new file mode 100644
index 000000000000..5f8398904953
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PKO3_QUEUE_H__
+#define __CVMX_PKO3_QUEUE_H__
+
+/**
+ * @INTERNAL
+ *
+ * Find or allocate the global port/dq map table,
+ * which is a named table that contains entries for
+ * all possible OCI nodes.
+ *
+ * The table's global pointer is stored in a core-local variable
+ * so that every core will call this function once, on first use.
+ */
+int __cvmx_pko3_dq_table_setup(void);
+
+/*
+ * Get the base Descriptor Queue number for an IPD port on the local node
+ */
+int cvmx_pko3_get_queue_base(int ipd_port);
+
+/*
+ * Get the number of Descriptor Queues assigned for an IPD port
+ */
+int cvmx_pko3_get_queue_num(int ipd_port);
+
+/**
+ * Get L1/Port Queue number assigned to interface port.
+ *
+ * @param xiface is interface number.
+ * @param index is port index.
+ */
+int cvmx_pko3_get_port_queue(int xiface, int index);
+
+/*
+ * Configure L3 through L5 Scheduler Queues and Descriptor Queues
+ *
+ * The Scheduler Queues in Levels 3 to 5 and Descriptor Queues are
+ * configured one-to-one or many-to-one to a single parent Scheduler
+ * Queue. The level of the parent SQ is specified in an argument,
+ * as well as the number of children to attach to the specific parent.
+ * The children can have fair round-robin or priority-based scheduling
+ * when multiple children are assigned a single parent.
+ *
+ * @param node is the OCI node location for the queues to be configured
+ * @param parent_level is the level of the parent queue, 2 to 5.
+ * @param parent_queue is the number of the parent Scheduler Queue
+ * @param child_base is the number of the first child SQ or DQ to assign to
+ *        the parent
+ * @param child_count is the number of consecutive children to assign
+ * @param stat_prio_count is the priority setting for the child SQs
+ *
+ * If <stat_prio_count> is -1, the Ln children will have equal Round-Robin
+ * relationship with each other. If <stat_prio_count> is 0, all Ln children
+ * will be arranged in Weighted-Round-Robin, with the first having the most
+ * precedence. If <stat_prio_count> is between 1 and 8, it indicates how
+ * many children will have static priority settings (with the first having
+ * the most precedence), with the remaining Ln children having WRR scheduling.
+ *
+ * @returns 0 on success, -1 on failure.
+ *
+ * Note: this function supports the configuration of the node-local unit.
+ */
+int cvmx_pko3_sq_config_children(unsigned int node, unsigned int parent_level,
+				 unsigned int parent_queue, unsigned int child_base,
+				 unsigned int child_count, int stat_prio_count);
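+
+/*
+ * Usage sketch (illustrative, values hypothetical): attach eight L3 SQs
+ * to one L2 parent, giving the first two children static priority and
+ * the remaining six WRR scheduling:
+ *
+ *	cvmx_pko3_sq_config_children(node, 2, l2_queue, l3_base, 8, 2);
+ */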
+
+/*
+ * @INTERNAL
+ * Register a range of Descriptor Queues with an interface port
+ *
+ * This function populates the DQ-to-IPD translation table
+ * used by the application to retrieve the DQ range (typically ordered
+ * by priority) for a given IPD-port, which is either a physical port,
+ * or a channel on a channelized interface (i.e. ILK).
+ *
+ * @param xiface is the physical interface number
+ * @param index is either a physical port on an interface
+ *        or a channel of an ILK interface
+ * @param dq_base is the first Descriptor Queue number in a consecutive range
+ * @param dq_count is the number of consecutive Descriptor Queues leading
+ *        to the same channel or port.
+ *
+ * Only a consecutive range of Descriptor Queues can be associated with any
+ * given channel/port, and usually they are ordered from most to least
+ * in terms of scheduling priority.
+ *
+ * Note: this function only populates the node-local translation table.
+ *
+ * @returns 0 on success, -1 on failure.
+ */
+int __cvmx_pko3_ipd_dq_register(int xiface, int index, unsigned int dq_base, unsigned int dq_count);
+
+/**
+ * @INTERNAL
+ *
+ * Unregister DQs associated with CHAN_E (IPD port)
+ */
+int __cvmx_pko3_ipd_dq_unregister(int xiface, int index);
+
+/*
+ * Map channel number in PKO
+ *
+ * @param node is to specify the node to which this configuration is applied.
+ * @param pq_num specifies the Port Queue (i.e. L1) queue number.
+ * @param l2_l3_q_num  specifies L2/L3 queue number.
+ * @param channel specifies the channel number to map to the queue.
+ *
+ * The channel assignment applies to L2 or L3 Shaper Queues depending
+ * on the setting of channel credit level.
+ *
+ * @return none.
+ */
+void cvmx_pko3_map_channel(unsigned int node, unsigned int pq_num, unsigned int l2_l3_q_num,
+			   u16 channel);
+
+int cvmx_pko3_pq_config(unsigned int node, unsigned int mac_num, unsigned int pq_num);
+
+int cvmx_pko3_port_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			   unsigned int burst_bytes, int adj_bytes);
+int cvmx_pko3_dq_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+int cvmx_pko3_dq_pir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+typedef enum {
+	CVMX_PKO3_SHAPE_RED_STALL,
+	CVMX_PKO3_SHAPE_RED_DISCARD,
+	CVMX_PKO3_SHAPE_RED_PASS
+} red_action_t;
+
+void cvmx_pko3_dq_red(unsigned int node, unsigned int dq_num, red_action_t red_act,
+		      int8_t len_adjust);
+
+/**
+ * Macros to deal with short floating point numbers,
+ * where an unsigned exponent and an unsigned normalized
+ * mantissa are each represented with a defined field width.
+ *
+ */
+#define CVMX_SHOFT_MANT_BITS 8
+#define CVMX_SHOFT_EXP_BITS  4
+
+/**
+ * Convert short-float to an unsigned integer
+ * Note that it will lose precision.
+ */
+#define CVMX_SHOFT_TO_U64(m, e)                                                                    \
+	((((1ull << CVMX_SHOFT_MANT_BITS) | (m)) << (e)) >> CVMX_SHOFT_MANT_BITS)
+
+/**
+ * Convert to short-float from an unsigned integer
+ */
+#define CVMX_SHOFT_FROM_U64(ui, m, e)                                                              \
+	do {                                                                                       \
+		unsigned long long u;                                                              \
+		unsigned int k;                                                                    \
+		k = (1ull << (CVMX_SHOFT_MANT_BITS + 1)) - 1;                                      \
+		(e) = 0;                                                                           \
+		u = (ui) << CVMX_SHOFT_MANT_BITS;                                                  \
+		while ((u) > k) {                                                                  \
+			u >>= 1;                                                                   \
+			(e)++;                                                                     \
+		}                                                                                  \
+		(m) = u & (k >> 1);                                                                \
+	} while (0)
+
+#define CVMX_SHOFT_MAX()                                                                           \
+	CVMX_SHOFT_TO_U64((1 << CVMX_SHOFT_MANT_BITS) - 1, (1 << CVMX_SHOFT_EXP_BITS) - 1)
+#define CVMX_SHOFT_MIN() CVMX_SHOFT_TO_U64(0, 0)
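+
+/*
+ * Worked example (illustrative): with an 8-bit mantissa, 1000 encodes
+ * as m = 244, e = 9 and decodes back to exactly 1000, while 1001 also
+ * encodes to m = 244, e = 9 -- the precision loss mentioned above.
+ *
+ *	unsigned int m, e;
+ *
+ *	CVMX_SHOFT_FROM_U64(1000, m, e);	// m = 244, e = 9
+ *	CVMX_SHOFT_TO_U64(m, e);		// yields 1000
+ */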
+
+#endif /* __CVMX_PKO3_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow.h b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
new file mode 100644
index 000000000000..0680ca258f12
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
@@ -0,0 +1,2991 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Scheduling unit.
+ *
+ * New, starting with SDK 1.7.0, cvmx-pow supports a number of
+ * extended consistency checks. The define
+ * CVMX_ENABLE_POW_CHECKS controls the runtime insertion of POW
+ * internal state checks to find common programming errors. If
+ * CVMX_ENABLE_POW_CHECKS is not defined, checks are by default
+ * enabled. For example, cvmx-pow will check for the following
+ * programming errors or POW state inconsistencies.
+ * - Requesting a POW operation with an active tag switch in
+ *   progress.
+ * - Waiting for a tag switch to complete for an excessively
+ *   long period. This is normally a sign of an error in locking
+ *   causing deadlock.
+ * - Illegal tag switches from NULL_NULL.
+ * - Illegal tag switches from NULL.
+ * - Illegal deschedule request.
+ * - WQE pointer not matching the one attached to the core by
+ *   the POW.
+ */
+
+#ifndef __CVMX_POW_H__
+#define __CVMX_POW_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pow-defs.h"
+#include "cvmx-sso-defs.h"
+#include "cvmx-address.h"
+#include "cvmx-coremask.h"
+
+/* Default to having all POW consistency checks turned on */
+#ifndef CVMX_ENABLE_POW_CHECKS
+#define CVMX_ENABLE_POW_CHECKS 1
+#endif
+
+/*
+ * Special type for CN78XX style SSO groups (0..255),
+ * for distinction from legacy-style groups (0..15)
+ */
+typedef union {
+	u8 xgrp;
+	/* Fields that map XGRP for backwards compatibility */
+	struct __attribute__((__packed__)) {
+		u8 group : 5;
+		u8 qus : 3;
+	};
+} cvmx_xgrp_t;
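+
+/*
+ * Illustrative note (assuming the MSB-first bitfield layout of
+ * big-endian MIPS): the legacy group is the upper 5 bits of the xgrp
+ * value, e.g. xgrp = 0x2b gives group = 5 and qus = 3.
+ */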
+
+/*
+ * Software-only structure to convey a return value
+ * containing multiple information fields about a work queue entry
+ */
+typedef struct {
+	u32 tag;
+	u16 index;
+	u8 grp; /* Legacy group # (0..15) */
+	u8 tag_type;
+} cvmx_pow_tag_info_t;
+
+/**
+ * Wait flag values for pow functions.
+ */
+typedef enum {
+	CVMX_POW_WAIT = 1,
+	CVMX_POW_NO_WAIT = 0,
+} cvmx_pow_wait_t;
+
+/**
+ *  POW tag operations.  These are used in the data stored to the POW.
+ */
+typedef enum {
+	CVMX_POW_TAG_OP_SWTAG = 0L,
+	CVMX_POW_TAG_OP_SWTAG_FULL = 1L,
+	CVMX_POW_TAG_OP_SWTAG_DESCH = 2L,
+	CVMX_POW_TAG_OP_DESCH = 3L,
+	CVMX_POW_TAG_OP_ADDWQ = 4L,
+	CVMX_POW_TAG_OP_UPDATE_WQP_GRP = 5L,
+	CVMX_POW_TAG_OP_SET_NSCHED = 6L,
+	CVMX_POW_TAG_OP_CLR_NSCHED = 7L,
+	CVMX_POW_TAG_OP_NOP = 15L
+} cvmx_pow_tag_op_t;
+
+/**
+ * This structure defines the store data on a store to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 no_sched : 1;
+		u64 unused : 2;
+		u64 index : 13;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused2 : 2;
+		u64 qos : 3;
+		u64 grp : 4;
+		cvmx_pow_tag_type_t type : 3;
+		u64 tag : 32;
+	} s_cn38xx;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 4;
+		u64 index : 11;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_clr;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 12;
+		u64 qos : 3;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_add;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 16;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_other;
+	struct {
+		u64 rsvd_62_63 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t type : 2;
+		u64 no_sched : 1;
+		u64 rsvd_48 : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 rsvd_42_43 : 2;
+		u64 wqp : 42;
+	} s_cn78xx_other;
+
+} cvmx_pow_tag_req_t;
+
+union cvmx_pow_tag_req_addr {
+	u64 u64;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 tag : 32;
+		u64 reserved_0_3 : 4;
+	} s_cn78xx;
+};
+
+/**
+ * This structure describes the address to load stuff from POW
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Address for new work request loads (did<2:0> == 0)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_4_39 : 36;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved_32_35 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 reserved_16_28 : 13;
+		u64 index : 12;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork_78xx;
+	/**
+	 * Address for loads to get POW internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_10_39 : 30;
+		u64 coreid : 4;
+		u64 get_rev : 1;
+		u64 get_cur : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} sstatus;
+	/**
+	 * Address for loads to get 68XX SS0 internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_14_39 : 26;
+		u64 coreid : 5;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sstatus_cn68xx;
+	/**
+	 * Address for memory loads to get POW internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_16_39 : 24;
+		u64 index : 11;
+		u64 get_des : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} smemload;
+	/**
+	 * Address for memory loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_20_39 : 20;
+		u64 index : 11;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} smemload_cn68xx;
+	/**
+	 * Address for index/pointer loads
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_9_39 : 31;
+		u64 qosgrp : 4;
+		u64 get_des_get_tail : 1;
+		u64 get_rmt : 1;
+		u64 reserved_0_2 : 3;
+	} sindexload;
+	/**
+	 * Address for a Index/Pointer loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_15_39 : 25;
+		u64 qos_grp : 6;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sindexload_cn68xx;
+	/**
+	 * Address for NULL_RD request (did<2:0> == 4).
+	 * When this is read, HW attempts to change the state to NULL if it is NULL_NULL
+	 * (the hardware cannot switch from NULL_NULL to NULL if a POW entry is not available -
+	 * software may need to recover by finishing another piece of work before a POW
+	 * entry can ever become available.)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_0_39 : 40;
+	} snull_rd;
+} cvmx_pow_load_addr_t;
+
+/**
+ * This structure defines the response to a load/SENDSINGLE to POW (except CSR reads)
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Response to new work request loads
+	 */
+	struct {
+		u64 no_work : 1;
+		u64 pend_switch : 1;
+		u64 tt : 2;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 addr : 42;
+	} s_work;
+
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 reserved_34_35 : 2;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDTAG)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_48_56 : 9;
+		u64 pend_index : 11;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0_cn68xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 pend_wqp : 36;
+	} s_sstatus1;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDWQP)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_51_56 : 6;
+		u64 pend_index : 11;
+		u64 reserved_38_39 : 2;
+		u64 pend_wqp : 38;
+	} s_sstatus1_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_56 : 1;
+		u64 prep_index : 12;
+		u64 reserved_42_43 : 2;
+		u64 pend_tag : 42;
+	} s_sso_ppx_pendwqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_TAG)
+	 */
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_46_47 : 2;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ppx_tag_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus3;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_WQP)
+	 */
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 index : 11;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_sstatus3_cn68xx;
+
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 tag : 42;
+	} s_sso_ppx_wqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus4;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_LINKS)
+	 */
+	struct {
+		u64 reserved_46_63 : 18;
+		u64 index : 11;
+		u64 reserved_34 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_24_25 : 2;
+		u64 revlink_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 link_index : 11;
+	} s_sstatus4_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_38_47 : 10;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_25 : 1;
+		u64 revlink_index : 12;
+		u64 link_index_vld : 1;
+		u64 link_index : 12;
+	} s_sso_ppx_links_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus5;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 0)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 reserved_35 : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_TAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0_cn68xx;
+
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sso_iaq_ppx_tag_cn78xx;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_smemload1;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_WQPGRP)
+	 */
+	struct {
+		u64 reserved_48_63 : 16;
+		u64 nosched : 1;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_smemload1_cn68xx;
+
+	/**
+	 * Entry structures for the CN7XXX chips.
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tailc : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ientx_tag_cn78xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_56_59 : 4;
+		u64 grp : 8;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn73xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn78xx;
+
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ientx_pendtag_cn78xx;
+
+	struct {
+		u64 reserved_26_63 : 38;
+		u64 prev_index : 10;
+		u64 reserved_11_15 : 5;
+		u64 next_index_vld : 1;
+		u64 next_index : 10;
+	} s_sso_ientx_links_cn73xx;
+
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 prev_index : 12;
+		u64 reserved_13_15 : 3;
+		u64 next_index_vld : 1;
+		u64 next_index : 12;
+	} s_sso_ientx_links_cn78xx;
+
+	/**
+	 * Result For POW Memory Load (get_des == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 fwd_index : 11;
+		u64 grp : 4;
+		u64 nosched : 1;
+		u64 pend_switch : 1;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_PENTAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_34_56 : 23;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ppx_pendtag_cn78xx;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_LINKS)
+	 */
+	struct {
+		u64 reserved_24_63 : 40;
+		u64 fwd_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 next_index : 11;
+	} s_smemload3_cn68xx;
+
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 free_val : 1;
+		u64 free_one : 1;
+		u64 reserved_49 : 1;
+		u64 free_head : 11;
+		u64 reserved_37 : 1;
+		u64 free_tail : 11;
+		u64 loc_val : 1;
+		u64 loc_one : 1;
+		u64 reserved_23 : 1;
+		u64 loc_head : 11;
+		u64 reserved_11 : 1;
+		u64 loc_tail : 11;
+	} sindexload0;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode ==
+	 * IPL_IQ/IPL_DESCHED/IPL_NOSCHED)
+	 */
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload0_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 nosched_val : 1;
+		u64 nosched_one : 1;
+		u64 reserved_49 : 1;
+		u64 nosched_head : 11;
+		u64 reserved_37 : 1;
+		u64 nosched_tail : 11;
+		u64 des_val : 1;
+		u64 des_one : 1;
+		u64 reserved_23 : 1;
+		u64 des_head : 11;
+		u64 reserved_11 : 1;
+		u64 des_tail : 11;
+	} sindexload1;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode == IPL_FREE0/IPL_FREE1/IPL_FREE2)
+	 */
+	struct {
+		u64 reserved_60_63 : 4;
+		u64 qnum_head : 2;
+		u64 qnum_tail : 2;
+		u64 reserved_28_55 : 28;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload1_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_head : 36;
+	} sindexload2;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_tail : 36;
+	} sindexload3;
+	/**
+	 * Response to NULL_RD request loads
+	 */
+	struct {
+		u64 unused : 62;
+		u64 state : 2;
+	} s_null_rd;
+
+} cvmx_pow_tag_load_resp_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pow_sl_tag_resp_t;
+
+/**
+ * This structure describes the address used for stores to the POW.
+ *  The store address is meaningful on stores to the POW.  The hardware assumes that an aligned
+ *  64-bit store was used for all these stores.
+ *  Note the assumption that the work queue entry is aligned on an 8-byte
+ *  boundary (since the low-order 3 address bits must be zero).
+ *  Note that not all fields are used by all operations.
+ *
+ *  NOTE: The following is the behavior of the pending switch bit at the PP
+ *       for POW stores (i.e. when did<7:3> == 0xc)
+ *     - did<2:0> == 0      => pending switch bit is set
+ *     - did<2:0> == 1      => no effect on the pending switch bit
+ *     - did<2:0> == 3      => pending switch bit is cleared
+ *     - did<2:0> == 7      => no effect on the pending switch bit
+ *     - did<2:0> == others => must not be used
+ *     - No other loads/stores have an effect on the pending switch bit
+ *     - The switch bus from POW can clear the pending switch bit
+ *
+ *  NOTE: did<2:0> == 2 is used by the HW for a special single-cycle ADDWQ
+ *  command (that only contains the pointer). SW must never use did<2:0> == 2.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 mem_reg : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} stag;
+} cvmx_pow_tag_store_addr_t; /* FIXME- this type is unused */
+
+/**
+ * Decode of the store data when an IOBDMA SENDSINGLE is sent to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 unused : 36;
+		u64 wait : 1;
+		u64 unused2 : 3;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 unused1 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 unused2 : 13;
+		u64 index_grp_mask : 12;
+		u64 wait : 1;
+		u64 unused3 : 3;
+	} s_cn78xx;
+} cvmx_pow_iobdma_store_t;
+
+/* CSR typedefs have been moved to cvmx-pow-defs.h */
+
+/* Enum for group priority parameters which need modification */
+enum cvmx_sso_group_modify_mask {
+	CVMX_SSO_MODIFY_GROUP_PRIORITY = 0x01,
+	CVMX_SSO_MODIFY_GROUP_WEIGHT = 0x02,
+	CVMX_SSO_MODIFY_GROUP_AFFINITY = 0x04
+};
+
+/**
+ * @INTERNAL
+ * Return the number of SSO groups for a given SoC model
+ */
+static inline unsigned int cvmx_sso_num_xgrp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 256;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 64;
+	printf("ERROR: %s: Unknown model\n", __func__);
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of POW groups on current model.
+ * In case of CN78XX/CN73XX this is the number of equivalent
+ * "legacy groups" on the chip when it is used in backward
+ * compatible mode.
+ */
+static inline unsigned int cvmx_pow_num_groups(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return cvmx_sso_num_xgrp() >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		return 64;
+	else
+		return 16;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of mask-set registers.
+ */
+static inline unsigned int cvmx_sso_num_maskset(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return 2;
+	else
+		return 1;
+}
+
+/**
+ * Get the POW tag for this core. This returns the current
+ * tag type, tag, group, and POW entry index associated with
+ * this core. Index is only valid if the tag type isn't NULL_NULL.
+ * If a tag switch is pending this routine returns the tag before
+ * the tag switch, not after.
+ *
+ * @return Current tag
+ */
+static inline cvmx_pow_tag_info_t cvmx_pow_get_current_tag(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_info_t result;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_tag_t sl_ppx_tag;
+		cvmx_xgrp_t xgrp;
+		int node, core;
+
+		CVMX_SYNCS;
+		node = cvmx_get_node_num();
+		core = cvmx_get_local_core_num();
+		sl_ppx_tag.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_TAG(core));
+		result.index = sl_ppx_tag.s.index;
+		result.tag_type = sl_ppx_tag.s.tt;
+		result.tag = sl_ppx_tag.s.tag;
+
+		/* Get native XGRP value */
+		xgrp.xgrp = sl_ppx_tag.s.grp;
+
+		/* Return legacy style group 0..15 */
+		result.grp = xgrp.group;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_pow_sl_tag_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 3;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s.grp;
+		result.index = load_resp.s.index;
+		result.tag_type = load_resp.s.tag_type;
+		result.tag = load_resp.s.tag;
+	} else {
+		cvmx_pow_tag_load_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s_sstatus2.grp;
+		result.index = load_resp.s_sstatus2.index;
+		result.tag_type = load_resp.s_sstatus2.tag_type;
+		result.tag = load_resp.s_sstatus2.tag;
+	}
+	return result;
+}
+
+/**
+ * Get the POW WQE for this core. This returns the work queue
+ * entry currently associated with this core.
+ *
+ * @return WQE pointer
+ */
+static inline cvmx_wqe_t *cvmx_pow_get_current_wqp(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_load_resp_t load_resp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_wqp_t sso_wqp;
+		int node = cvmx_get_node_num();
+		int core = cvmx_get_local_core_num();
+
+		sso_wqp.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_WQP(core));
+		if (sso_wqp.s.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(sso_wqp.s.wqp);
+		return (cvmx_wqe_t *)0;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 4;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		if (load_resp.s_sstatus3_cn68xx.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus3_cn68xx.wqp);
+		else
+			return (cvmx_wqe_t *)0;
+	} else {
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_addr.sstatus.get_wqp = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus4.wqp);
+	}
+}
+
+/**
+ * @INTERNAL
+ * Print a warning if a tag switch is pending for this core
+ *
+ * @param function Function name checking for a pending tag switch
+ */
+static inline void __cvmx_pow_warn_if_pending_switch(const char *function)
+{
+	u64 switch_complete;
+
+	CVMX_MF_CHORD(switch_complete);
+	cvmx_warn_if(!switch_complete, "%s called with tag switch in progress\n", function);
+}
+
+/**
+ * Waits for a tag switch to complete by polling the completion bit.
+ * Note that switches to NULL complete immediately and do not need
+ * to be waited for.
+ */
+static inline void cvmx_pow_tag_sw_wait(void)
+{
+	const u64 TIMEOUT_MS = 10; /* 10ms timeout */
+	u64 switch_complete;
+	u64 start_cycle;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		start_cycle = get_timer(0);
+
+	while (1) {
+		CVMX_MF_CHORD(switch_complete);
+		if (cvmx_likely(switch_complete))
+			break;
+
+		if (CVMX_ENABLE_POW_CHECKS) {
+			if (cvmx_unlikely(get_timer(start_cycle) > TIMEOUT_MS)) {
+				debug("WARNING: %s: Tag switch is taking a long time, possible deadlock\n",
+				      __func__);
+			}
+		}
+	}
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync_nocheck(cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.node = cvmx_get_node_num();
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork_78xx.wait = wait;
+	} else {
+		ptr.swork.mem_region = CVMX_IO_SEG;
+		ptr.swork.is_io = 1;
+		ptr.swork.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork.wait = wait;
+	}
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync(cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	return (cvmx_pow_work_request_sync_nocheck(wait));
+}
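+
+/*
+ * Example (illustrative sketch, not from the original sources): a minimal
+ * synchronous dispatch loop built on the function above.  CVMX_POW_WAIT is
+ * assumed to be the 'wait' value of the cvmx_pow_wait_t enum, and
+ * process_packet() is a hypothetical handler.
+ *
+ *	for (;;) {
+ *		cvmx_wqe_t *work = cvmx_pow_work_request_sync(CVMX_POW_WAIT);
+ *
+ *		if (!work)
+ *			continue;	// timed out, poll again
+ *		process_packet(work);	// hypothetical handler
+ *	}
+ */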
+
+/**
+ * Synchronous null_rd request.  Requests a switch out of NULL_NULL POW state.
+ * This function waits for any previous tag switch to complete before
+ * requesting the null_rd.
+ *
+ * @return Returns the POW state of type cvmx_pow_tag_type_t.
+ */
+static inline cvmx_pow_tag_type_t cvmx_pow_work_request_null_rd(void)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_NULL_RD;
+		ptr.swork_78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.snull_rd.mem_region = CVMX_IO_SEG;
+		ptr.snull_rd.is_io = 1;
+		ptr.snull_rd.did = CVMX_OCT_DID_TAG_NULL_RD;
+	}
+	result.u64 = csr_rd(ptr.u64);
+	return (cvmx_pow_tag_type_t)result.s_null_rd.state;
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async_nocheck(int scr_addr, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		data.s_cn78xx.node = cvmx_get_node_num();
+		data.s_cn78xx.scraddr = scr_addr >> 3;
+		data.s_cn78xx.len = 1;
+		data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s_cn78xx.wait = wait;
+	} else {
+		data.s.scraddr = scr_addr >> 3;
+		data.s.len = 1;
+		data.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s.wait = wait;
+	}
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async(int scr_addr, cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_work_request_async_nocheck(scr_addr, wait);
+}
+
+/**
+ * Gets result of asynchronous work request.  Performs an IOBDMA sync
+ * to wait for the response.
+ *
+ * @param scr_addr Scratch memory address to get result from
+ *                  Byte address, must be 8 byte aligned.
+ * @return Returns the WQE from the scratch register, or NULL if no work was
+ *         available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_response_async(int scr_addr)
+{
+	cvmx_pow_tag_load_resp_t result;
+
+	CVMX_SYNCIOBDMA;
+	result.u64 = cvmx_scratch_read64(scr_addr);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
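+
+/*
+ * Example (illustrative sketch, not from the original sources): the async
+ * request/response pair is typically used to prefetch the next WQE into
+ * scratchpad memory while the current one is processed.  CVMX_SCR_SCRATCH
+ * is assumed to name a free 8-byte scratchpad slot; process_packet() is a
+ * hypothetical handler.
+ *
+ *	cvmx_pow_work_request_async(CVMX_SCR_SCRATCH, CVMX_POW_WAIT);
+ *	for (;;) {
+ *		cvmx_wqe_t *work = cvmx_pow_work_response_async(CVMX_SCR_SCRATCH);
+ *
+ *		// start fetching the next WQE before handling this one
+ *		cvmx_pow_work_request_async(CVMX_SCR_SCRATCH, CVMX_POW_WAIT);
+ *		if (work)
+ *			process_packet(work);
+ *	}
+ */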
+
+/**
+ * Checks if a work queue entry pointer returned by a work
+ * request is valid.  It may be invalid due to no work
+ * being available or due to a timeout.
+ *
+ * @param wqe_ptr pointer to a work queue entry returned by the POW
+ *
+ * @return 0 if pointer is valid
+ *         1 if invalid (no work was returned)
+ */
+static inline u64 cvmx_pow_work_invalid(cvmx_wqe_t *wqe_ptr)
+{
+	return (!wqe_ptr); /* FIXME: improve */
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details.
+	 * It is the application's responsibility to keep track of the
+	 * current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = tag_type;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+	}
+	/* Once this store arrives at POW, it will attempt the switch;
+	 * software must wait for the switch to complete separately.
+	 */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function waits for any previous tag switch to complete, and also
+ * displays an error on tag switches to NULL.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details. It is the application's responsibility to keep
+	 * track of the current tag value if that is important.
+	 */
+
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_nocheck(tag, tag_type);
+}
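+
+/*
+ * Example (illustrative sketch, not from the original sources): switching
+ * to an ATOMIC tag serializes all cores that hold work with the same tag
+ * value, so it can guard per-flow shared state.  flow_tag and
+ * update_flow_state() are hypothetical.
+ *
+ *	cvmx_pow_tag_sw(flow_tag, CVMX_POW_TAG_TYPE_ATOMIC);
+ *	cvmx_pow_tag_sw_wait();	// exclusive for this tag from here on
+ *	update_flow_state();	// hypothetical critical section
+ */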
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param wqp      pointer to work queue entry to submit.  This entry is
+ *                 updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param group    group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full_nocheck(cvmx_wqe_t *wqp, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u64 group)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	unsigned int node = cvmx_get_node_num();
+	u64 wqp_phys = cvmx_ptr_to_phys(wqp);
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not
+	 * read from DRAM once the WQE is in flight.  See hardware manual
+	 * for complete details. It is the application's responsibility to
+	 * keep track of the current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int xgrp;
+
+		if (wqp_phys != 0x80) {
+			/* If WQE is valid, use its XGRP:
+			 * WQE GRP is 10 bits, and is mapped
+			 * to legacy GRP + QoS, includes node number.
+			 */
+			xgrp = wqp->word1.cn78xx.grp;
+			/* Use XGRP[node] too */
+			node = xgrp >> 8;
+			/* Modify XGRP with legacy group # from arg */
+			xgrp &= ~0xf8;
+			xgrp |= 0xf8 & (group << 3);
+
+		} else {
+			/* If no WQE, build XGRP with QoS=0 and current node */
+			xgrp = group << 3;
+			xgrp |= node << 8;
+		}
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = xgrp;
+		tag_req.s_cn78xx_other.wqp = wqp_phys;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s.addr = wqp_phys;
+	}
+	/* Once this store arrives at POW, it will attempt the switch;
+	 * software must wait for the switch to complete separately.
+	 */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.
+ * Completion for the tag switch must be checked for separately.
+ * This function does NOT update the work queue entry in dram to match tag value
+ * and type, so the application must keep track of these if they are important
+ * to the application. This tag switch command must not be used for switches
+ * to NULL, as the tag switch pending bit will be set by the switch request,
+ * but never cleared by the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function waits for any pending tag switches to complete
+ * before requesting the tag switch.
+ *
+ * @param wqp      Pointer to work queue entry to submit.
+ *     This entry is updated to match the other parameters
+ * @param tag      Tag value to be assigned to work queue entry
+ * @param tag_type Type of tag
+ * @param group    Group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 group)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_full_nocheck(wqp, tag, tag_type, group);
+}
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that any previous tag switches have completed.
+ */
+static inline void cvmx_pow_tag_sw_null_nocheck(void)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called when we already have a NULL tag\n", __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.type = CVMX_POW_TAG_TYPE_NULL;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function waits for any pending tag switches to complete
+ * before requesting the switch to NULL.
+ */
+static inline void cvmx_pow_tag_sw_null(void)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_null_nocheck();
+}
+
+/**
+ * Submits work to an input queue.
+ * This function updates the work queue entry in DRAM to match the arguments given.
+ * Note that the tag provided is for the work queue entry submitted, and
+ * is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp      pointer to work queue entry to submit.
+ *                 This entry is updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param qos      Input queue to add to.
+ * @param grp      group value for the work queue entry.
+ */
+static inline void cvmx_pow_work_submit(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 qos, u64 grp)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	tag_req.u64 = 0;
+	ptr.u64 = 0;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		unsigned int xgrp;
+
+		xgrp = (grp & 0x1f) << 3;
+		xgrp |= (qos & 7);
+		xgrp |= 0x300 & (node << 8);
+
+		wqp->word1.cn78xx.rsvd_0 = 0;
+		wqp->word1.cn78xx.rsvd_1 = 0;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = xgrp;
+		CVMX_SYNCWS;
+
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+		tag_req.s_cn78xx_other.grp = xgrp;
+
+		ptr.s_cn78xx.did = 0x66; /* CVMX_OCT_DID_TAG_TAG6 */
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		/* Reset all reserved bits */
+		wqp->word1.cn68xx.zero_0 = 0;
+		wqp->word1.cn68xx.zero_1 = 0;
+		wqp->word1.cn68xx.zero_2 = 0;
+		wqp->word1.cn68xx.qos = qos;
+		wqp->word1.cn68xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn68xx_add.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn68xx_add.type = tag_type;
+		tag_req.s_cn68xx_add.tag = tag;
+		tag_req.s_cn68xx_add.qos = qos;
+		tag_req.s_cn68xx_add.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	} else {
+		/* Reset all reserved bits */
+		wqp->word1.cn38xx.zero_2 = 0;
+		wqp->word1.cn38xx.qos = qos;
+		wqp->word1.cn38xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.qos = qos;
+		tag_req.s_cn38xx.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	}
+	/* SYNC write to memory before the work submit.
+	 * This is necessary, as POW may read values from DRAM at this time.
+	 */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
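+
+/*
+ * Example (illustrative sketch, not from the original sources): submitting
+ * a software-built WQE into input queue 0 for group 5.  alloc_wqe() and
+ * fill_wqe() stand in for whatever (e.g. FPA-backed) allocation and setup
+ * the application uses.
+ *
+ *	cvmx_wqe_t *wqp = alloc_wqe();	// hypothetical allocator
+ *
+ *	fill_wqe(wqp);			// hypothetical setup
+ *	cvmx_pow_work_submit(wqp, 0x1234, CVMX_POW_TAG_TYPE_ORDERED,
+ *			     0, 5);	// qos=0, grp=5
+ */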
+
+/**
+ * This function sets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from. There are
+ * 16 groups.
+ *
+ * @param core_num   core to apply mask to
+ * @param mask   Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards-compatibility mode allows up to 32 groups,
+ * so the 'mask' argument has one bit for each of the legacy
+ * groups, and a '1' in the mask causes a total of 8 groups,
+ * which share the legacy group number and 8 qos levels,
+ * to be enabled for the calling processor core.
+ * A '0' in the mask will disable the current core
+ * from receiving work from the associated group.
+ */
+static inline void cvmx_pow_set_group_mask(u64 core_num, u64 mask)
+{
+	u64 valid_mask;
+	int num_groups = cvmx_pow_num_groups();
+
+	if (num_groups >= 64)
+		valid_mask = ~0ull;
+	else
+		valid_mask = (1ull << num_groups) - 1;
+
+	if ((mask & valid_mask) == 0) {
+		printf("ERROR: %s: empty group mask disables work on core# %llu, ignored.\n",
+		       __func__, (unsigned long long)core_num);
+		return;
+	}
+	cvmx_warn_if(mask & (~valid_mask), "%s: group number range exceeded: %#llx\n", __func__,
+		     (unsigned long long)mask);
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int mask_set;
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node;
+		unsigned int rix;  /* Register index */
+		unsigned int grp;  /* Legacy group # */
+		unsigned int bit;  /* bit index */
+		unsigned int xgrp; /* native group # */
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = 0; rix < (cvmx_sso_num_xgrp() >> 6); rix++) {
+			grp_msk.u64 = 0;
+			for (bit = 0; bit < 64; bit++) {
+				/* 8-bit native XGRP number */
+				xgrp = (rix << 6) | bit;
+				/* Legacy 5-bit group number */
+				grp = (xgrp >> 3) & 0x1f;
+				/* Inspect legacy mask by legacy group */
+				if (mask & (1ull << grp))
+					grp_msk.s.grp_msk |= 1ull << bit;
+				/* Pre-set to all 0's */
+			}
+			for (mask_set = 0; mask_set < cvmx_sso_num_maskset(); mask_set++) {
+				csr_wr_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, mask_set, rix),
+					    grp_msk.u64);
+			}
+		}
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.s.grp_msk = mask;
+		csr_wr(CVMX_SSO_PPX_GRP_MSK(core_num), grp_msk.u64);
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.grp_msk = mask & 0xffff;
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
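+
+/*
+ * Example (illustrative sketch, not from the original sources): restrict
+ * core 0 to accept work only from legacy groups 0 and 1.
+ *
+ *	cvmx_pow_set_group_mask(0, (1ull << 0) | (1ull << 1));
+ */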
+
+/**
+ * This function gets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from.
+ *
+ * @param core_num   core to get the mask for
+ * @return	Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards-compatibility mode allows up to 32 groups,
+ * so the returned mask has one bit for each of the legacy
+ * groups, and a '1' in the mask means a total of 8 groups,
+ * which share the legacy group number and 8 qos levels,
+ * are enabled for the calling processor core.
+ * A '0' in the mask means the current core is disabled
+ * from receiving work from the associated group.
+ */
+static inline u64 cvmx_pow_get_group_mask(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read only mask_set=0 (both sets were written the same) */
+			grp_msk.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			/* ASSUME: (this is how mask bits got written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* DO: mask[7:0] = grp_mask.u64[56,48,40,32,24,16,8,0] */
+			for (i = 0; i < 8; i++)
+				mask |= (grp_msk.u64 & ((u64)1 << (i * 8))) >> (7 * i);
+			/* we collected 8 MSBs in mask[7:0], <<=8 and continue */
+			if (cvmx_likely(rix != 0))
+				mask <<= 8;
+		}
+		return mask & 0xFFFFFFFF;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_SSO_PPX_GRP_MSK(core_num));
+		return grp_msk.u64;
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		return grp_msk.u64 & 0xffff;
+	}
+}
+
+/*
+ * Returns 0 if 78xx(73xx,75xx) is not programmed in legacy compatible mode
+ * Returns 1 if 78xx(73xx,75xx) is programmed in legacy compatible mode
+ * Returns 1 if octeon model is not 78xx(73xx,75xx)
+ */
+static inline u64 cvmx_pow_is_legacy78mode(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk0, grp_msk1;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		/* 1) in order for the 78_SSO to be in legacy compatible mode,
+		 * both mask_sets must be programmed the same */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read both mask sets; they must match in legacy mode */
+			grp_msk0.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			grp_msk1.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 1, rix));
+			if (grp_msk0.u64 != grp_msk1.u64)
+				return 0;
+			/* (this is how mask bits should be written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* 2) in order for the 78_SSO to be in legacy compatible
+			 * mode, the above must be true (test only mask_set=0) */
+			for (i = 0; i < 8; i++) {
+				mask = (grp_msk0.u64 >> (i << 3)) & 0xFF;
+				if (!(mask == 0 || mask == 0xFF))
+					return 0;
+			}
+		}
+		/* if we come here, the 78_SSO is in legacy compatible mode */
+	}
+	return 1; /* the SSO/POW is in legacy (or compatible) mode */
+}
+
+/**
+ * This function sets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param core_num   core to apply priorities to
+ * @param priority   Vector of 8 priorities, one per POW Input Queue (0-7).
+ *                   Highest priority is 0 and lowest is 7. A priority value
+ *                   of 0xF instructs POW to skip the Input Queue when
+ *                   scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_set_priority(u64 core_num, const u8 priority[])
+{
+	/* Detect gaps between priorities and flag error */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR: POW static priorities should be contiguous (0x%llx)\n",
+			      (unsigned long long)prio_mask);
+			return;
+		}
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/* grp_pri.s.weight = 0x3f;  these would be overwritten */
+		/* grp_pri.s.affinity = 0xf; anyway by the next csr_rd_node() */
+
+		for (group = 0; group < cvmx_sso_num_xgrp(); group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			grp_pri.s.pri = priority[group & 0x7];
+			csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grp_pri.u64);
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		qos_pri.s.qos0_pri = priority[0];
+		qos_pri.s.qos1_pri = priority[1];
+		qos_pri.s.qos2_pri = priority[2];
+		qos_pri.s.qos3_pri = priority[3];
+		qos_pri.s.qos4_pri = priority[4];
+		qos_pri.s.qos5_pri = priority[5];
+		qos_pri.s.qos6_pri = priority[6];
+		qos_pri.s.qos7_pri = priority[7];
+		csr_wr(CVMX_SSO_PPX_QOS_PRI(core_num), qos_pri.u64);
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.qos0_pri = priority[0];
+		grp_msk.s.qos1_pri = priority[1];
+		grp_msk.s.qos2_pri = priority[2];
+		grp_msk.s.qos3_pri = priority[3];
+		grp_msk.s.qos4_pri = priority[4];
+		grp_msk.s.qos5_pri = priority[5];
+		grp_msk.s.qos6_pri = priority[6];
+		grp_msk.s.qos7_pri = priority[7];
+
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
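+
+/*
+ * Example (illustrative sketch, not from the original sources): a valid,
+ * gap-free priority vector that favors input queue 0 over all others.
+ *
+ *	const u8 prio[8] = { 0, 1, 1, 1, 1, 1, 1, 1 };
+ *
+ *	cvmx_pow_set_priority(cvmx_get_core_num(), prio);
+ */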
+
+/**
+ * This function gets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param[in]  core_num core to get priorities for
+ * @param[out] priority Pointer to u8[] where to return priorities
+ *			Vector of 8 priorities, one per POW Input Queue (0-7).
+ *			Highest priority is 0 and lowest is 7. A priority value
+ *			of 0xF instructs POW to skip the Input Queue when
+ *			scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_get_priority(u64 core_num, u8 priority[])
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/* read priority only from the first 8 groups */
+		/* the next groups are programmed the same (periodically) */
+		for (group = 0; group < 8 /*cvmx_sso_num_xgrp() */; group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			priority[group /* & 0x7 */] = grp_pri.s.pri;
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		priority[0] = qos_pri.s.qos0_pri;
+		priority[1] = qos_pri.s.qos1_pri;
+		priority[2] = qos_pri.s.qos2_pri;
+		priority[3] = qos_pri.s.qos3_pri;
+		priority[4] = qos_pri.s.qos4_pri;
+		priority[5] = qos_pri.s.qos5_pri;
+		priority[6] = qos_pri.s.qos6_pri;
+		priority[7] = qos_pri.s.qos7_pri;
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		priority[0] = grp_msk.s.qos0_pri;
+		priority[1] = grp_msk.s.qos1_pri;
+		priority[2] = grp_msk.s.qos2_pri;
+		priority[3] = grp_msk.s.qos3_pri;
+		priority[4] = grp_msk.s.qos4_pri;
+		priority[5] = grp_msk.s.qos5_pri;
+		priority[6] = grp_msk.s.qos6_pri;
+		priority[7] = grp_msk.s.qos7_pri;
+	}
+
+	/* Detect gaps between priorities and flag error - (optional) */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR:%s: POW static priorities should be contiguous (0x%llx)\n",
+			      __func__, (unsigned long long)prio_mask);
+			return;
+		}
+	}
+}
+
+static inline void cvmx_sso_get_group_priority(int node, cvmx_xgrp_t xgrp, int *priority,
+					       int *weight, int *affinity)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	*affinity = grp_pri.s.affinity;
+	*priority = grp_pri.s.pri;
+	*weight = grp_pri.s.weight;
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function does NOT wait for any prior tag switches to complete, so the
+ * calling code must do this.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+						   u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_t *wqp = cvmx_pow_get_current_wqp();
+
+		if (!wqp) {
+			debug("ERROR: Failed to get WQE, %s\n", __func__);
+			return;
+		}
+		group &= 0x1f;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = group << 3;
+		CVMX_SYNCWS;
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = group << 3;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		group &= 0x3f;
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		group &= 0x0f;
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function waits for any prior tag switches to complete, so the
+ * calling code may call this function with a pending tag switch.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+					   u64 no_sched)
+{
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/* Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_desched_nocheck(tag, tag_type, group, no_sched);
+}
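+
+/*
+ * Example (illustrative sketch, not from the original sources) of the
+ * RECOMMENDATION in the caveat above: deschedule with an ATOMIC (next)
+ * state, then switch to the desired ORDERED tag once the work has been
+ * re-scheduled to a core.  flow_tag and grp are hypothetical.
+ *
+ *	// when giving up the work:
+ *	cvmx_pow_tag_sw_desched(flow_tag, CVMX_POW_TAG_TYPE_ATOMIC, grp, 0);
+ *
+ *	// later, after this WQE is re-scheduled to a core:
+ *	cvmx_pow_tag_sw(flow_tag, CVMX_POW_TAG_TYPE_ORDERED);
+ */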
+
+/**
+ * Deschedules the current work queue entry.
+ *
+ * @param no_sched no schedule flag value to be set on the work queue entry.
+ *     If this is set the entry will not be rescheduled.
+ */
+static inline void cvmx_pow_desched(u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not expected from NULL state\n",
+			     __func__);
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/******************************************************************************/
+/* OCTEON3-specific functions.                                                */
+/******************************************************************************/
+/**
+ * This function sets the affinity of a group to the cores in the 78xx.
+ * It sets up all the cores in core_mask to accept work from the specified group.
+ *
+ * @param xgrp	Group to accept work from, 0 - 255.
+ * @param core_mask	Mask of all the cores which will accept work from this group
+ * @param mask_set	Every core has a set of 2 masks which can be set to accept work
+ *     from 256 groups. At the time of get_work, cores can choose which mask_set
+ *     to get work from. 'mask_set' values range from 0 to 3, where each of the
+ *     two bits represents a mask set. Cores will be added to the mask set with
+ *     corresponding bit set, and removed from the mask set with corresponding
+ *     bit clear.
+ * Note: cores can only accept work from SSO groups on the same node,
+ * so the node number for the group is derived from the core number.
+ */
+static inline void cvmx_sso_set_group_core_affinity(cvmx_xgrp_t xgrp,
+						    const struct cvmx_coremask *core_mask,
+						    u8 mask_set)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	int core;
+	int grp_index = xgrp.xgrp >> 6;
+	int bit_pos = xgrp.xgrp % 64;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	cvmx_coremask_for_each_core(core, core_mask)
+	{
+		unsigned int node, ncore;
+		u64 reg_addr;
+
+		node = cvmx_coremask_core_to_node(core);
+		ncore = cvmx_coremask_core_on_node(core);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 0, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 1)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 1, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 2)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+	}
+}
+
+/**
+ * This function sets the priority and group affinity arbitration for each group.
+ *
+ * @param node		Node number
+ * @param xgrp	Group 0 - 255 to apply mask parameters to
+ * @param priority	Priority of the group relative to other groups
+ *     0x0 - highest priority
+ *     0x7 - lowest priority
+ * @param weight	Cross-group arbitration weight to apply to this group.
+ *     valid values are 1-63
+ *     h/w default is 0x3f
+ * @param affinity	Processor affinity arbitration weight to apply to this group.
+ *     If zero, affinity is disabled.
+ *     valid values are 0-15
+ *     h/w default is 0xf.
+ * @param modify_mask   mask of the parameters which needs to be modified.
+ *     enum cvmx_sso_group_modify_mask
+ *     to modify only priority -- set bit0
+ *     to modify only weight   -- set bit1
+ *     to modify only affinity -- set bit2
+ */
+static inline void cvmx_sso_set_group_priority(int node, cvmx_xgrp_t xgrp, int priority, int weight,
+					       int affinity,
+					       enum cvmx_sso_group_modify_mask modify_mask)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	if (weight <= 0)
+		weight = 0x3f; /* Force HW default when out of range */
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	if (grp_pri.s.weight == 0)
+		grp_pri.s.weight = 0x3f;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_PRIORITY)
+		grp_pri.s.pri = priority;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_WEIGHT)
+		grp_pri.s.weight = weight;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_AFFINITY)
+		grp_pri.s.affinity = affinity;
+	csr_wr_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp), grp_pri.u64);
+}
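+
+/*
+ * Example (illustrative sketch, not from the original sources): raise only
+ * the priority of group 40 to the highest level, leaving weight and
+ * affinity untouched via the modify mask.
+ *
+ *	cvmx_xgrp_t g;
+ *
+ *	g.xgrp = 40;
+ *	cvmx_sso_set_group_priority(node, g, 0, 0, 0,
+ *				    CVMX_SSO_MODIFY_GROUP_PRIORITY);
+ */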
+
+/**
+ * Asynchronous work request.
+ * Only works on CN78XX style SSO.
+ *
+ * Work is requested from the SSO unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param xgrp  Group to receive work for (0-255).
+ * @param wait
+ *     1 to cause response to wait for work to become available (or timeout)
+ *     0 to cause response to return immediately
+ */
+static inline void cvmx_sso_work_request_grp_async_nocheck(int scr_addr, cvmx_xgrp_t xgrp,
+							   cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+	unsigned int node = cvmx_get_node_num();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	data.s_cn78xx.scraddr = scr_addr >> 3;
+	data.s_cn78xx.len = 1;
+	data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	data.s_cn78xx.grouped = 1;
+	data.s_cn78xx.index_grp_mask = (node << 8) | xgrp.xgrp;
+	data.s_cn78xx.wait = wait;
+	data.s_cn78xx.node = node;
+
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Synchronous work request from the node-local SSO without verifying
+ * pending tag switch. It requests work from a specific SSO group.
+ *
+ * @param lgrp The local group number (within the SSO of the node of the caller)
+ *     from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from SSO.
+ *     Returns NULL if no work was available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync_nocheck(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+	unsigned int node = cvmx_get_node_num() & 3;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	ptr.u64 = 0;
+	ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+	ptr.swork_78xx.is_io = 1;
+	ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.swork_78xx.node = node;
+	ptr.swork_78xx.grouped = 1;
+	ptr.swork_78xx.index = (lgrp & 0xff) | node << 8;
+	ptr.swork_78xx.wait = wait;
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request from the node-local SSO.
+ * It requests work from a specific SSO group.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param lgrp The node-local group number from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return The WQE pointer or NULL, if work is not available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_tag_sw_wait();
+	return cvmx_sso_work_request_grp_sync_nocheck(lgrp, wait);
+}
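+
+/*
+ * Example (illustrative sketch, not from the original sources): poll one
+ * specific node-local SSO group instead of the per-core group mask.
+ * process_packet() is a hypothetical handler.
+ *
+ *	cvmx_wqe_t *work = cvmx_sso_work_request_grp_sync(5, CVMX_POW_WAIT);
+ *
+ *	if (work)
+ *		process_packet(work);
+ */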
+
+/**
+ * This function sets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to apply mask to.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Group mask array.
+ *     Total number of groups is divided into a number of
+ *     64-bit mask sets. Each bit in the mask, if set, enables
+ *     the core to accept work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ *
+ * If the 'mask_set' argument is 3, both mask sets are configured
+ * with the same value (which is not typically the intention),
+ * so keep in mind the function needs to be called twice
+ * to set a different value into each of the mask sets,
+ * once with 'mask_set=1' and a second time with 'mask_set=2'.
+ */
+static inline void cvmx_pow_set_xgrp_mask(u64 core_num, u8 mask_set, const u64 xgrp_mask[])
+{
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		cvmx_warn_if(((mask_set < 1) || (mask_set > 3)), "Invalid mask set");
+
+	if ((mask_set < 1) || (mask_set > 3))
+		mask_set = 3;
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < (cvmx_sso_num_xgrp() >> 6); grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+	}
+}
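+
+/*
+ * Example (illustrative sketch, not from the original sources): program the
+ * two mask sets differently by calling once per set, as the note above
+ * describes.  Four 64-bit words cover the 256 CN78XX groups.
+ *
+ *	u64 only_grp0[4] = { 1ull, 0, 0, 0 };		// native group 0 only
+ *	u64 all_grps[4] = { ~0ull, ~0ull, ~0ull, ~0ull };
+ *
+ *	cvmx_pow_set_xgrp_mask(core, 1, only_grp0);	// mask set 0
+ *	cvmx_pow_set_xgrp_mask(core, 2, all_grps);	// mask set 1
+ */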
+
+/**
+ * This function gets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to read the mask from.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Provide pointer to u64 mask[8] output array.
+ *     Total number of groups is divided into a number of
+ *     64-bit mask sets. Each bit set in the mask indicates
+ *     that the core accepts work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ */
+static inline void cvmx_pow_get_xgrp_mask(u64 core_num, u8 mask_set, u64 *xgrp_mask)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		cvmx_warn_if(mask_set != 1 && mask_set != 2, "Invalid mask set");
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp() >> 6; grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+	}
+}
+
+/**
+ * Executes SSO SWTAG command.
+ * This is similar to the cvmx_pow_tag_sw() function, but uses a linear
+ * (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+	tag_req.s_cn78xx_other.type = tag_type;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes SSO SWTAG_FULL command.
+ * This is similar to the cvmx_pow_tag_sw_full() function, but
+ * uses a linear (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_full_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 gxgrp;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+	gxgrp = node;
+	gxgrp = gxgrp << 8 | xgrp;
+	wqp->word1.cn78xx.grp = gxgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = gxgrp;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Submits work to an SSO group on any OCI node.
+ * This function updates the work queue entry in DRAM to match
+ * the arguments given.
+ * Note that the tag provided is for the work queue entry submitted,
+ * and is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp pointer to work queue entry to submit.
+ * This entry is updated to match the other parameters
+ * @param tag tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param xgrp native CN78XX group in the range 0..255
+ * @param node The OCI node number for the target group
+ *
+ * When this function is called on a model prior to CN78XX, which does
+ * not support OCI nodes, the 'node' argument is ignored, and the 'xgrp'
+ * parameter is converted into 'qos' (the lower 3 bits) and 'grp' (the higher
+ * 5 bits), following the backward-compatibility scheme of translating
+ * between new and old style group numbers.
+ */
+static inline void cvmx_pow_work_submit_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+	tag_req.s_cn78xx_other.grp = group;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.did = 0x66; // CVMX_OCT_DID_TAG_TAG6;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+
+	/*
+	 * SYNC write to memory before the work submit. This is necessary,
+	 * as the POW may read values from DRAM at this time.
+	 */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes the SSO SWTAG_DESCHED operation.
+ * This is similar to the cvmx_pow_tag_sw_desched() function, but
+ * uses a linear (vs. unified group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_desched_node(cvmx_wqe_t *wqe, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u8 xgrp, u64 no_sched,
+						u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqe->word1.cn78xx.tag = tag;
+	wqe->word1.cn78xx.tag_type = tag_type;
+	wqe->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = group;
+	tag_req.s_cn78xx_other.no_sched = no_sched;
+
+	ptr.u64 = 0;
+	ptr.s.mem_region = CVMX_IO_SEG;
+	ptr.s.is_io = 1;
+	ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/* Executes the UPD_WQP_GRP SSO operation.
+ *
+ * @param wqp  Pointer to the new work queue entry to switch to.
+ * @param xgrp SSO group in the range 0..255
+ *
+ * NOTE: The operation can be performed only on the local node.
+ */
+static inline void cvmx_sso_update_wqp_group(cvmx_wqe_t *wqp, u8 xgrp)
+{
+	union cvmx_pow_tag_req_addr addr;
+	cvmx_pow_tag_req_t data;
+	int node = cvmx_get_node_num();
+	int group = node << 8 | xgrp;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	data.u64 = 0;
+	data.s_cn78xx_other.op = CVMX_POW_TAG_OP_UPDATE_WQP_GRP;
+	data.s_cn78xx_other.grp = group;
+	data.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	addr.u64 = 0;
+	addr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	addr.s_cn78xx.is_io = 1;
+	addr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+	addr.s_cn78xx.node = node;
+	cvmx_write_io(addr.u64, data.u64);
+}
+
+/******************************************************************************/
+/* Define usage of bits within the 32 bit tag values.                         */
+/******************************************************************************/
+/*
+ * Number of bits of the tag used by software.  The SW bits
+ * are always a contiguous block of the high bits, starting at bit 31.
+ * The hardware bits are always the low bits.  By default, the top 8 bits
+ * of the tag are reserved for software, and the low 24 are set by the IPD unit.
+ */
+#define CVMX_TAG_SW_BITS  (8)
+#define CVMX_TAG_SW_SHIFT (32 - CVMX_TAG_SW_BITS)
+
+/* Below is the list of values for the top 8 bits of the tag. */
+/*
+ * Tag values with top byte of this value are reserved for internal executive
+ * uses
+ */
+#define CVMX_TAG_SW_BITS_INTERNAL 0x1
+
+/*
+ * The executive divides the remaining 24 bits as follows:
+ * the upper 8 bits (bits 23 - 16 of the tag) define a subgroup,
+ * the lower 16 bits (bits 15 - 0 of the tag) are the value within
+ * the subgroup. Note that this section describes the format of tags generated
+ * by software - refer to the hardware documentation for a description of the
+ * tag values generated by the packet input hardware.
+ * Subgroups are defined here.
+ */
+
+/* Mask for the value portion of the tag */
+#define CVMX_TAG_SUBGROUP_MASK	0xFFFF
+#define CVMX_TAG_SUBGROUP_SHIFT 16
+#define CVMX_TAG_SUBGROUP_PKO	0x1
+
+/* End of executive tag subgroup definitions */
+
+/*
+ * The remaining software bit values 0x2 - 0xff are available
+ * for application use.
+ */
+
+/**
+ * This function creates a 32 bit tag value from the two values provided.
+ *
+ * @param sw_bits The upper bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the hw_bits parameter.
+ * @param hw_bits The lower bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the sw_bits parameter.
+ *
+ * @return 32 bit value of the combined hw and sw bits.
+ */
+static inline u32 cvmx_pow_tag_compose(u64 sw_bits, u64 hw_bits)
+{
+	return (((sw_bits & cvmx_build_mask(CVMX_TAG_SW_BITS)) << CVMX_TAG_SW_SHIFT) |
+		(hw_bits & cvmx_build_mask(32 - CVMX_TAG_SW_BITS)));
+}
+
+/**
+ * Extracts the bits allocated for software use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return N bit software tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_sw_bits(u64 tag)
+{
+	return ((tag >> (32 - CVMX_TAG_SW_BITS)) & cvmx_build_mask(CVMX_TAG_SW_BITS));
+}
+
+/**
+ * Extracts the bits allocated for hardware use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return (32 - N) bit hardware tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_hw_bits(u64 tag)
+{
+	return (tag & cvmx_build_mask(32 - CVMX_TAG_SW_BITS));
+}
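+
+/*
+ * Illustrative sketch (not from the original sources): with the default
+ * CVMX_TAG_SW_BITS of 8, composing a tag and extracting the parts
+ * round-trips:
+ *
+ *	u32 tag = cvmx_pow_tag_compose(CVMX_TAG_SW_BITS_INTERNAL, 0x123456);
+ *	// tag == 0x01123456
+ *	// cvmx_pow_tag_get_sw_bits(tag) == 0x1
+ *	// cvmx_pow_tag_get_hw_bits(tag) == 0x123456
+ */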
+
+static inline u64 cvmx_sso3_get_wqe_count(int node)
+{
+	cvmx_sso_grpx_aq_cnt_t aq_cnt;
+	unsigned int grp = 0;
+	u64 cnt = 0;
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp(); grp++) {
+		aq_cnt.u64 = csr_rd_node(node, CVMX_SSO_GRPX_AQ_CNT(grp));
+		cnt += aq_cnt.s.aq_cnt;
+	}
+	return cnt;
+}
+
+static inline u64 cvmx_sso_get_total_wqe_count(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int node = cvmx_get_node_num();
+
+		return cvmx_sso3_get_wqe_count(node);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		cvmx_sso_iq_com_cnt_t sso_iq_com_cnt;
+
+		sso_iq_com_cnt.u64 = csr_rd(CVMX_SSO_IQ_COM_CNT);
+		return (sso_iq_com_cnt.s.iq_cnt);
+	} else {
+		cvmx_pow_iq_com_cnt_t pow_iq_com_cnt;
+
+		pow_iq_com_cnt.u64 = csr_rd(CVMX_POW_IQ_COM_CNT);
+		return (pow_iq_com_cnt.s.iq_cnt);
+	}
+}
+
+/**
+ * Store the current POW internal state into the supplied
+ * buffer. It is recommended that you pass a buffer of at least
+ * 128KB. The format of the capture may change based on SDK
+ * version and Octeon chip.
+ *
+ * @param buffer Buffer to store capture into
+ * @param buffer_size The size of the supplied buffer
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pow_capture(void *buffer, int buffer_size);
+
+/**
+ * Dump a POW capture to the console in a human readable format.
+ *
+ * @param buffer POW capture from cvmx_pow_capture()
+ * @param buffer_size Size of the buffer
+ */
+void cvmx_pow_display(void *buffer, int buffer_size);
+
+/**
+ * Return the number of POW entries supported by this chip
+ *
+ * @return Number of POW entries
+ */
+int cvmx_pow_get_num_entries(void);
+int cvmx_pow_get_dump_size(void);
+
+/**
+ * This will allocate count number of SSO groups on the specified node to the
+ * calling application. These groups will be for exclusive use of the
+ * application until they are freed.
+ * @param node The numa node for the allocation.
+ * @param base_group Pointer to the initial group, -1 to allocate anywhere.
+ * @param count  The number of consecutive groups to allocate.
+ * @return 0 on success and -1 on failure.
+ */
+int cvmx_sso_reserve_group_range(int node, int *base_group, int count);
+#define cvmx_sso_allocate_group_range cvmx_sso_reserve_group_range
+int cvmx_sso_reserve_group(int node);
+#define cvmx_sso_allocate_group cvmx_sso_reserve_group
+int cvmx_sso_release_group_range(int node, int base_group, int count);
+int cvmx_sso_release_group(int node, int group);
+
+/**
+ * Show integrated SSO configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_config_dump(unsigned int node);
+
+/**
+ * Show integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_dump(unsigned int node);
+
+/**
+ * Clear integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_clear(unsigned int node);
+
+/**
+ * Show SSO core-group affinity and priority per node (multi-node systems)
+ */
+void cvmx_pow_mask_priority_dump_node(unsigned int node, struct cvmx_coremask *avail_coremask);
+
+/**
+ * Show POW/SSO core-group affinity and priority (legacy, single-node systems)
+ */
+static inline void cvmx_pow_mask_priority_dump(struct cvmx_coremask *avail_coremask)
+{
+	cvmx_pow_mask_priority_dump_node(0 /*node */, avail_coremask);
+}
+
+/**
+ * Show SSO performance counters (multi-node systems)
+ */
+void cvmx_pow_show_perf_counters_node(unsigned int node);
+
+/**
+ * Show POW/SSO performance counters (legacy, single-node systems)
+ */
+static inline void cvmx_pow_show_perf_counters(void)
+{
+	cvmx_pow_show_perf_counters_node(0 /*node */);
+}
+
+#endif /* __CVMX_POW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-qlm.h b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
new file mode 100644
index 000000000000..19915eb82c51
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_QLM_H__
+#define __CVMX_QLM_H__
+
+/*
+ * Interface 0 on the 78xx can be connected to qlm 0 or qlm 2. When interface
+ * 0 is connected to qlm 0, this macro must be set to 0. When interface 0 is
+ * connected to qlm 2, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE0 0
+
+/*
+ * Interface 1 on the 78xx can be connected to qlm 1 or qlm 3. When interface
+ * 1 is connected to qlm 1, this macro must be set to 0. When interface 1 is
+ * connected to qlm 3, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE1 0
+
+/* Uncomment this line to print QLM JTAG state */
+/* #define CVMX_QLM_DUMP_STATE 1 */
+
+typedef struct {
+	const char *name;
+	int stop_bit;
+	int start_bit;
+} __cvmx_qlm_jtag_field_t;
+
+/**
+ * Return the number of QLMs supported by the chip
+ *
+ * @return  Number of QLMs
+ */
+int cvmx_qlm_get_num(void);
+
+/**
+ * Return the qlm number based on the interface
+ *
+ * @param xiface  Interface to look up
+ */
+int cvmx_qlm_interface(int xiface);
+
+/**
+ * Return the qlm number for a port in the interface
+ *
+ * @param xiface  interface to look up
+ * @param index  index in an interface
+ *
+ * @return the qlm number based on the xiface
+ */
+int cvmx_qlm_lmac(int xiface, int index);
+
+/**
+ * Return which DLM muxes (DLM5, DLM6 or DLM5+DLM6) are used by a BGX
+ *
+ * @param bgx  BGX to search for.
+ *
+ * @return muxes used: 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
+ */
+int cvmx_qlm_mux_interface(int bgx);
+
+/**
+ * Return number of lanes for a given qlm
+ *
+ * @param qlm QLM block to query
+ *
+ * @return  Number of lanes
+ */
+int cvmx_qlm_get_lanes(int qlm);
+
+/**
+ * Get the QLM JTAG fields based on the Octeon model, on the supported chips.
+ *
+ * @return  qlm_jtag_field_t structure
+ */
+const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void);
+
+/**
+ * Get the QLM JTAG length by going through qlm_jtag_field for each
+ * Octeon model that is supported
+ *
+ * @return the length.
+ */
+int cvmx_qlm_jtag_get_length(void);
+
+/**
+ * Initialize the QLM layer
+ */
+void cvmx_qlm_init(void);
+
+/**
+ * Get a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to get
+ * @param lane   Lane in QLM to get
+ * @param name   String name of field
+ *
+ * @return JTAG field value
+ */
+u64 cvmx_qlm_jtag_get(int qlm, int lane, const char *name);
+
+/**
+ * Set a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to set
+ * @param lane   Lane in QLM to set, or -1 for all lanes
+ * @param name   String name of field
+ * @param value  Value of the field
+ */
+void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, u64 value);
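+
+/*
+ * Illustrative sketch (not from the original sources): the get/set pair
+ * addresses fields by string name; the field name below is only an
+ * example and may not exist on a given model:
+ *
+ *	u64 v = cvmx_qlm_jtag_get(0, 2, "biasdrv_hs_ls_byp");
+ *	cvmx_qlm_jtag_set(0, -1, "biasdrv_hs_ls_byp", v);	// all lanes
+ */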
+
+/**
+ * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
+ * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
+ * JTAG settings for the QLMs to run better at 5 and 6.25 GHz.
+ */
+void __cvmx_qlm_speed_tweak(void);
+
+/**
+ * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
+ * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
+ * This function tweaks the JTAG settings for the QLMs so that PCIe runs better.
+ */
+void __cvmx_qlm_pcie_idle_dac_tweak(void);
+
+void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane);
+
+/**
+ * Get the speed (Gbaud) of the QLM in MHz.
+ *
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz(int qlm);
+/**
+ * Get the speed (Gbaud) of the QLM in MHz on a specific node.
+ *
+ * @param node   Target QLM node
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm);
+
+enum cvmx_qlm_mode {
+	CVMX_QLM_MODE_DISABLED = -1,
+	CVMX_QLM_MODE_SGMII = 1,
+	CVMX_QLM_MODE_XAUI,
+	CVMX_QLM_MODE_RXAUI,
+	CVMX_QLM_MODE_PCIE,	/* gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X2, /* 1x2 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_2X1, /* 2x1 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X1, /* 1x1 gen2 / gen1 */
+	CVMX_QLM_MODE_SRIO_1X4, /* 1x4 short / long */
+	CVMX_QLM_MODE_SRIO_2X2, /* 2x2 short / long */
+	CVMX_QLM_MODE_SRIO_4X1, /* 4x1 short / long */
+	CVMX_QLM_MODE_ILK,
+	CVMX_QLM_MODE_QSGMII,
+	CVMX_QLM_MODE_SGMII_SGMII,
+	CVMX_QLM_MODE_SGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_SGMII,
+	CVMX_QLM_MODE_SGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_QSGMII,
+	CVMX_QLM_MODE_QSGMII_SGMII,
+	CVMX_QLM_MODE_RXAUI_1X2,
+	CVMX_QLM_MODE_SATA_2X1,
+	CVMX_QLM_MODE_XLAUI,
+	CVMX_QLM_MODE_XFI,
+	CVMX_QLM_MODE_10G_KR,
+	CVMX_QLM_MODE_40G_KR4,
+	CVMX_QLM_MODE_PCIE_1X8, /* 1x8 gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_RGMII_SGMII,
+	CVMX_QLM_MODE_RGMII_XFI,
+	CVMX_QLM_MODE_RGMII_10G_KR,
+	CVMX_QLM_MODE_RGMII_RXAUI,
+	CVMX_QLM_MODE_RGMII_XAUI,
+	CVMX_QLM_MODE_RGMII_XLAUI,
+	CVMX_QLM_MODE_RGMII_40G_KR4,
+	CVMX_QLM_MODE_MIXED,		/* BGX2 is mixed mode, DLM5(SGMII) & DLM6(XFI) */
+	CVMX_QLM_MODE_SGMII_2X1,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_10G_KR_1X2,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_XFI_1X2,		/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_RGMII_SGMII_1X1,	/* Configure BGX2, applies to DLM5 */
+	CVMX_QLM_MODE_RGMII_SGMII_2X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_10G_KR_1X1, /* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_XFI_1X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_SDL,		/* RMAC Pipe */
+	CVMX_QLM_MODE_CPRI,		/* RMAC */
+	CVMX_QLM_MODE_OCI
+};
+
+enum cvmx_gmx_inf_mode {
+	CVMX_GMX_INF_MODE_DISABLED = 0,
+	CVMX_GMX_INF_MODE_SGMII = 1,  /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_QSGMII = 2, /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_RXAUI = 3,  /* Only interface 0, interface 1 must be DISABLED */
+};
+
+/**
+ * Eye diagram captures are stored in the following structure
+ */
+typedef struct {
+	int width;	   /* Width in the x direction (time) */
+	int height;	   /* Height in the y direction (voltage) */
+	u32 data[64][128]; /* Error count at location, saturates at max */
+} cvmx_qlm_eye_t;
+
+/**
+ * These apply to DLM1 and DLM2 if it's not in SATA mode.
+ * The manual refers to lanes as follows:
+ *  DLM 0 lane 0 == GSER0 lane 0
+ *  DLM 0 lane 1 == GSER0 lane 1
+ *  DLM 1 lane 2 == GSER1 lane 0
+ *  DLM 1 lane 3 == GSER1 lane 1
+ *  DLM 2 lane 4 == GSER2 lane 0
+ *  DLM 2 lane 5 == GSER2 lane 1
+ */
+enum cvmx_pemx_cfg_mode {
+	CVMX_PEM_MD_GEN2_2LANE = 0, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN2_1LANE = 1, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN2_4LANE = 2, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+	CVMX_PEM_MD_GEN1_2LANE = 4, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN1_1LANE = 5, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN1_4LANE = 6, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+};
+
+/*
+ * Read QLM and return mode.
+ */
+enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int dlm_mode, int interface);
+void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier);
+
+void cvmx_qlm_display_registers(int qlm);
+
+int cvmx_qlm_measure_clock(int qlm);
+
+/**
+ * Measure the reference clock of a QLM on a multi-node setup
+ *
+ * @param node   node to measure
+ * @param qlm    QLM to measure
+ *
+ * @return Clock rate in Hz
+ */
+int cvmx_qlm_measure_clock_node(int node, int qlm);
+
+/**
+ * Perform RX equalization on a QLM
+ *
+ * @param node	Node the QLM is on
+ * @param qlm	QLM to perform RX equalization on
+ * @param lane	Lane to use, or -1 for all lanes
+ *
+ * @return Zero on success, negative if any lane failed RX equalization
+ */
+int __cvmx_qlm_rx_equalization(int node, int qlm, int lane);
+
+/**
+ * Errata GSER-27882: GSER 10GBASE-KR Transmit Equalizer
+ * Training may not update PHY Tx Taps. This function is not static
+ * so that it can be shared with BGX KR.
+ *
+ * @param node	Node to apply errata workaround
+ * @param qlm	QLM to apply errata workaround
+ * @param lane	Lane to apply the errata
+ */
+int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane);
+
+void cvmx_qlm_gser_errata_25992(int node, int qlm);
+
+#ifdef CVMX_DUMP_GSER
+/**
+ * Dump GSER configuration for node 0
+ */
+int cvmx_dump_gser_config(unsigned int gser);
+/**
+ * Dump GSER status for node 0
+ */
+int cvmx_dump_gser_status(unsigned int gser);
+/**
+ * Dump GSER configuration
+ */
+int cvmx_dump_gser_config_node(unsigned int node, unsigned int gser);
+/**
+ * Dump GSER status
+ */
+int cvmx_dump_gser_status_node(unsigned int node, unsigned int gser);
+#endif
+
+int cvmx_qlm_eye_display(int node, int qlm, int qlm_lane, int format, const cvmx_qlm_eye_t *eye);
+
+void cvmx_prbs_process_cmd(int node, int qlm, int mode);
+
+#endif /* __CVMX_QLM_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-scratch.h b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
new file mode 100644
index 000000000000..d567a8453b7a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file provides support for the processor local scratch memory.
+ * Scratch memory is byte addressable - all addresses are byte addresses.
+ */
+
+#ifndef __CVMX_SCRATCH_H__
+#define __CVMX_SCRATCH_H__
+
+/*
+ * Note: This define must be a long, not a long long, in order to compile
+ * without warnings for both 32-bit and 64-bit.
+ */
+#define CVMX_SCRATCH_BASE (-32768l) /* 0xffffffffffff8000 */
+
+/* Scratch line for LMTST/LMTDMA on Octeon3 models */
+#ifdef CVMX_CAVIUM_OCTEON3
+#define CVMX_PKO_LMTLINE 2ull
+#endif
+
+/**
+ * Reads an 8 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u8 cvmx_scratch_read8(u64 address)
+{
+	return *CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 16 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u16 cvmx_scratch_read16(u64 address)
+{
+	return *CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 32 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u32 cvmx_scratch_read32(u64 address)
+{
+	return *CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 64 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u64 cvmx_scratch_read64(u64 address)
+{
+	return *CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Writes an 8 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write8(u64 address, u64 value)
+{
+	*CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address) = (u8)value;
+}
+
+/**
+ * Writes a 16 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write16(u64 address, u64 value)
+{
+	*CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address) = (u16)value;
+}
+
+/**
+ * Writes a 32 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write32(u64 address, u64 value)
+{
+	*CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address) = (u32)value;
+}
+
+/**
+ * Writes a 64 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write64(u64 address, u64 value)
+{
+	*CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address) = value;
+}
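+
+/*
+ * Illustrative sketch (not from the original sources): scratch memory is
+ * byte addressable, so narrower reads can pick apart a wider write; the
+ * byte offset below is an arbitrary example:
+ *
+ *	cvmx_scratch_write64(8, 0x0123456789abcdefull);
+ *	u64 v = cvmx_scratch_read64(8);	// 0x0123456789abcdef
+ *	u8 b = cvmx_scratch_read8(8);	// 0x01 on big-endian Octeon
+ */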
+
+#endif /* __CVMX_SCRATCH_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-wqe.h b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
new file mode 100644
index 000000000000..c9e3c8312a65
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
@@ -0,0 +1,1462 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This header file defines the work queue entry (wqe) data structure.
+ * Since this is a commonly used structure that depends on structures
+ * from several hardware blocks, those definitions have been placed
+ * in this file to create a single point of definition of the wqe
+ * format.
+ * Data structures are still named according to the block that they
+ * relate to.
+ */
+
+#ifndef __CVMX_WQE_H__
+#define __CVMX_WQE_H__
+
+#include "cvmx-packet.h"
+#include "cvmx-csr-enums.h"
+#include "cvmx-pki-defs.h"
+#include "cvmx-pip-defs.h"
+#include "octeon-feature.h"
+
+#define OCT_TAG_TYPE_STRING(x)						\
+	(((x) == CVMX_POW_TAG_TYPE_ORDERED) ?				\
+	 "ORDERED" :							\
+	 (((x) == CVMX_POW_TAG_TYPE_ATOMIC) ?				\
+	  "ATOMIC" :							\
+	  (((x) == CVMX_POW_TAG_TYPE_NULL) ? "NULL" : "NULL_NULL")))
+
+/* Error levels in WQE WORD2 (ERRLEV).*/
+#define PKI_ERRLEV_E__RE_M 0x0
+#define PKI_ERRLEV_E__LA_M 0x1
+#define PKI_ERRLEV_E__LB_M 0x2
+#define PKI_ERRLEV_E__LC_M 0x3
+#define PKI_ERRLEV_E__LD_M 0x4
+#define PKI_ERRLEV_E__LE_M 0x5
+#define PKI_ERRLEV_E__LF_M 0x6
+#define PKI_ERRLEV_E__LG_M 0x7
+
+enum cvmx_pki_errlevel {
+	CVMX_PKI_ERRLEV_E_RE = PKI_ERRLEV_E__RE_M,
+	CVMX_PKI_ERRLEV_E_LA = PKI_ERRLEV_E__LA_M,
+	CVMX_PKI_ERRLEV_E_LB = PKI_ERRLEV_E__LB_M,
+	CVMX_PKI_ERRLEV_E_LC = PKI_ERRLEV_E__LC_M,
+	CVMX_PKI_ERRLEV_E_LD = PKI_ERRLEV_E__LD_M,
+	CVMX_PKI_ERRLEV_E_LE = PKI_ERRLEV_E__LE_M,
+	CVMX_PKI_ERRLEV_E_LF = PKI_ERRLEV_E__LF_M,
+	CVMX_PKI_ERRLEV_E_LG = PKI_ERRLEV_E__LG_M
+};
+
+#define CVMX_PKI_ERRLEV_MAX BIT(3) /* The size of WORD2:ERRLEV field.*/
+
+/* Error code in WQE WORD2 (OPCODE).*/
+#define CVMX_PKI_OPCODE_RE_NONE	      0x0
+#define CVMX_PKI_OPCODE_RE_PARTIAL    0x1
+#define CVMX_PKI_OPCODE_RE_JABBER     0x2
+#define CVMX_PKI_OPCODE_RE_FCS	      0x7
+#define CVMX_PKI_OPCODE_RE_FCS_RCV    0x8
+#define CVMX_PKI_OPCODE_RE_TERMINATE  0x9
+#define CVMX_PKI_OPCODE_RE_RX_CTL     0xb
+#define CVMX_PKI_OPCODE_RE_SKIP	      0xc
+#define CVMX_PKI_OPCODE_RE_DMAPKT     0xf
+#define CVMX_PKI_OPCODE_RE_PKIPAR     0x13
+#define CVMX_PKI_OPCODE_RE_PKIPCAM    0x14
+#define CVMX_PKI_OPCODE_RE_MEMOUT     0x15
+#define CVMX_PKI_OPCODE_RE_BUFS_OFLOW 0x16
+#define CVMX_PKI_OPCODE_L2_FRAGMENT   0x20
+#define CVMX_PKI_OPCODE_L2_OVERRUN    0x21
+#define CVMX_PKI_OPCODE_L2_PFCS	      0x22
+#define CVMX_PKI_OPCODE_L2_PUNY	      0x23
+#define CVMX_PKI_OPCODE_L2_MAL	      0x24
+#define CVMX_PKI_OPCODE_L2_OVERSIZE   0x25
+#define CVMX_PKI_OPCODE_L2_UNDERSIZE  0x26
+#define CVMX_PKI_OPCODE_L2_LENMISM    0x27
+#define CVMX_PKI_OPCODE_IP_NOT	      0x41
+#define CVMX_PKI_OPCODE_IP_CHK	      0x42
+#define CVMX_PKI_OPCODE_IP_MAL	      0x43
+#define CVMX_PKI_OPCODE_IP_MALD	      0x44
+#define CVMX_PKI_OPCODE_IP_HOP	      0x45
+#define CVMX_PKI_OPCODE_L4_MAL	      0x61
+#define CVMX_PKI_OPCODE_L4_CHK	      0x62
+#define CVMX_PKI_OPCODE_L4_LEN	      0x63
+#define CVMX_PKI_OPCODE_L4_PORT	      0x64
+#define CVMX_PKI_OPCODE_TCP_FLAG      0x65
+
+#define CVMX_PKI_OPCODE_MAX BIT(8) /* The size of WORD2:OPCODE field.*/
+
+/* Layer types in pki */
+#define CVMX_PKI_LTYPE_E_NONE_M	      0x0
+#define CVMX_PKI_LTYPE_E_ENET_M	      0x1
+#define CVMX_PKI_LTYPE_E_VLAN_M	      0x2
+#define CVMX_PKI_LTYPE_E_SNAP_PAYLD_M 0x5
+#define CVMX_PKI_LTYPE_E_ARP_M	      0x6
+#define CVMX_PKI_LTYPE_E_RARP_M	      0x7
+#define CVMX_PKI_LTYPE_E_IP4_M	      0x8
+#define CVMX_PKI_LTYPE_E_IP4_OPT_M    0x9
+#define CVMX_PKI_LTYPE_E_IP6_M	      0xA
+#define CVMX_PKI_LTYPE_E_IP6_OPT_M    0xB
+#define CVMX_PKI_LTYPE_E_IPSEC_ESP_M  0xC
+#define CVMX_PKI_LTYPE_E_IPFRAG_M     0xD
+#define CVMX_PKI_LTYPE_E_IPCOMP_M     0xE
+#define CVMX_PKI_LTYPE_E_TCP_M	      0x10
+#define CVMX_PKI_LTYPE_E_UDP_M	      0x11
+#define CVMX_PKI_LTYPE_E_SCTP_M	      0x12
+#define CVMX_PKI_LTYPE_E_UDP_VXLAN_M  0x13
+#define CVMX_PKI_LTYPE_E_GRE_M	      0x14
+#define CVMX_PKI_LTYPE_E_NVGRE_M      0x15
+#define CVMX_PKI_LTYPE_E_GTP_M	      0x16
+#define CVMX_PKI_LTYPE_E_SW28_M	      0x1C
+#define CVMX_PKI_LTYPE_E_SW29_M	      0x1D
+#define CVMX_PKI_LTYPE_E_SW30_M	      0x1E
+#define CVMX_PKI_LTYPE_E_SW31_M	      0x1F
+
+enum cvmx_pki_layer_type {
+	CVMX_PKI_LTYPE_E_NONE = CVMX_PKI_LTYPE_E_NONE_M,
+	CVMX_PKI_LTYPE_E_ENET = CVMX_PKI_LTYPE_E_ENET_M,
+	CVMX_PKI_LTYPE_E_VLAN = CVMX_PKI_LTYPE_E_VLAN_M,
+	CVMX_PKI_LTYPE_E_SNAP_PAYLD = CVMX_PKI_LTYPE_E_SNAP_PAYLD_M,
+	CVMX_PKI_LTYPE_E_ARP = CVMX_PKI_LTYPE_E_ARP_M,
+	CVMX_PKI_LTYPE_E_RARP = CVMX_PKI_LTYPE_E_RARP_M,
+	CVMX_PKI_LTYPE_E_IP4 = CVMX_PKI_LTYPE_E_IP4_M,
+	CVMX_PKI_LTYPE_E_IP4_OPT = CVMX_PKI_LTYPE_E_IP4_OPT_M,
+	CVMX_PKI_LTYPE_E_IP6 = CVMX_PKI_LTYPE_E_IP6_M,
+	CVMX_PKI_LTYPE_E_IP6_OPT = CVMX_PKI_LTYPE_E_IP6_OPT_M,
+	CVMX_PKI_LTYPE_E_IPSEC_ESP = CVMX_PKI_LTYPE_E_IPSEC_ESP_M,
+	CVMX_PKI_LTYPE_E_IPFRAG = CVMX_PKI_LTYPE_E_IPFRAG_M,
+	CVMX_PKI_LTYPE_E_IPCOMP = CVMX_PKI_LTYPE_E_IPCOMP_M,
+	CVMX_PKI_LTYPE_E_TCP = CVMX_PKI_LTYPE_E_TCP_M,
+	CVMX_PKI_LTYPE_E_UDP = CVMX_PKI_LTYPE_E_UDP_M,
+	CVMX_PKI_LTYPE_E_SCTP = CVMX_PKI_LTYPE_E_SCTP_M,
+	CVMX_PKI_LTYPE_E_UDP_VXLAN = CVMX_PKI_LTYPE_E_UDP_VXLAN_M,
+	CVMX_PKI_LTYPE_E_GRE = CVMX_PKI_LTYPE_E_GRE_M,
+	CVMX_PKI_LTYPE_E_NVGRE = CVMX_PKI_LTYPE_E_NVGRE_M,
+	CVMX_PKI_LTYPE_E_GTP = CVMX_PKI_LTYPE_E_GTP_M,
+	CVMX_PKI_LTYPE_E_SW28 = CVMX_PKI_LTYPE_E_SW28_M,
+	CVMX_PKI_LTYPE_E_SW29 = CVMX_PKI_LTYPE_E_SW29_M,
+	CVMX_PKI_LTYPE_E_SW30 = CVMX_PKI_LTYPE_E_SW30_M,
+	CVMX_PKI_LTYPE_E_SW31 = CVMX_PKI_LTYPE_E_SW31_M,
+	CVMX_PKI_LTYPE_E_MAX = CVMX_PKI_LTYPE_E_SW31
+};
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 ptr_vlan : 8;
+		u64 ptr_layer_g : 8;
+		u64 ptr_layer_f : 8;
+		u64 ptr_layer_e : 8;
+		u64 ptr_layer_d : 8;
+		u64 ptr_layer_c : 8;
+		u64 ptr_layer_b : 8;
+		u64 ptr_layer_a : 8;
+	};
+} cvmx_pki_wqe_word4_t;
+
+/**
+ * HW decode / err_code in work queue entry
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 4;
+		u64 unassigned2 : 4;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn38xx;
+	struct {
+		u64 unused1 : 16;
+		u64 vlan : 16;
+		u64 unused2 : 32;
+	} svlan;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 8;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn38xx;
+} cvmx_pip_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 software : 1;
+		u64 lg_hdr_type : 5;
+		u64 lf_hdr_type : 5;
+		u64 le_hdr_type : 5;
+		u64 ld_hdr_type : 5;
+		u64 lc_hdr_type : 5;
+		u64 lb_hdr_type : 5;
+		u64 is_la_ether : 1;
+		u64 rsvd_0 : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 stat_inc : 1;
+		u64 pcam_flag4 : 1;
+		u64 pcam_flag3 : 1;
+		u64 pcam_flag2 : 1;
+		u64 pcam_flag1 : 1;
+		u64 is_frag : 1;
+		u64 is_l3_bcast : 1;
+		u64 is_l3_mcast : 1;
+		u64 is_l2_bcast : 1;
+		u64 is_l2_mcast : 1;
+		u64 is_raw : 1;
+		u64 err_level : 3;
+		u64 err_code : 8;
+	};
+} cvmx_pki_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	cvmx_pki_wqe_word2_t pki;
+	cvmx_pip_wqe_word2_t pip;
+} cvmx_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u16 hw_chksum;
+		u8 unused;
+		u64 next_ptr : 40;
+	} cn38xx;
+	struct {
+		u64 l4ptr : 8;	  /* 56..63 */
+		u64 unused0 : 8;  /* 48..55 */
+		u64 l3ptr : 8;	  /* 40..47 */
+		u64 l2ptr : 8;	  /* 32..39 */
+		u64 unused1 : 18; /* 14..31 */
+		u64 bpid : 6;	  /* 8..13 */
+		u64 unused2 : 2;  /* 6..7 */
+		u64 pknd : 6;	  /* 0..5 */
+	} cn68xx;
+} cvmx_pip_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 4;
+		u64 aura : 12;
+		u64 rsvd_1 : 1;
+		u64 apad : 3;
+		u64 channel : 12;
+		u64 bufs : 8;
+		u64 style : 8;
+		u64 rsvd_2 : 10;
+		u64 pknd : 6;
+	};
+} cvmx_pki_wqe_word0_t;
+
+/* Use reserved bit, set by HW to 0, to indicate buf_ptr legacy translation */
+#define pki_wqe_translated word0.rsvd_1
+
+typedef union {
+	u64 u64;
+	cvmx_pip_wqe_word0_t pip;
+	cvmx_pki_wqe_word0_t pki;
+	struct {
+		u64 unused : 24;
+		u64 next_ptr : 40; /* On cn68xx this is unused as well */
+	} raw;
+} cvmx_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 rsvd_0 : 2;
+		u64 rsvd_1 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+} cvmx_pki_wqe_word1_t;
+
+#define pki_errata20776 word1.rsvd_0
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 varies : 14;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+	cvmx_pki_wqe_word1_t cn78xx;
+	struct {
+		u64 len : 16;
+		u64 zero_0 : 1;
+		u64 qos : 3;
+		u64 zero_1 : 1;
+		u64 grp : 6;
+		u64 zero_2 : 3;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn68xx;
+	struct {
+		u64 len : 16;
+		u64 ipprt : 6;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 zero_2 : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn38xx;
+} cvmx_wqe_word1_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 8;
+		u64 hwerr : 8;
+		u64 rsvd_1 : 24;
+		u64 sqid : 8;
+		u64 rsvd_2 : 4;
+		u64 vfnum : 12;
+	};
+} cvmx_wqe_word3_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 21;
+		u64 sqfc : 11;
+		u64 rsvd_1 : 5;
+		u64 sqtail : 11;
+		u64 rsvd_2 : 3;
+		u64 sqhead : 13;
+	};
+} cvmx_wqe_word4_t;
+
+/**
+ * Work queue entry format.
+ * Must be 8-byte aligned.
+ */
+typedef struct cvmx_wqe_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 * conditions.
+	 */
+	cvmx_pip_wqe_word2_t word2;
+
+	/* Pointer to the first segment of the packet. */
+	cvmx_buf_ptr_t packet_ptr;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts (except that the IPv4 header is padded for
+	 * appropriate alignment) writing here where the IP header starts.
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[96];
+
+	/* If desired, SW can make the work Q entry any length. For the purposes
+	 * of discussion here, assume 128B always, as this is all that the hardware
+	 * deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_t;
+
+/**
+ * Work queue entry format for NQM
+ * Must be 8-byte aligned
+ */
+typedef struct cvmx_wqe_nqm_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Reserved */
+	u64 word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word3_t word3;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word4_t word4;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts (except that the IPv4 header is padded for
+	 * appropriate alignment) writing here where the IP header starts.
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[88];
+
+	/* If desired, SW can make the work Q entry any length.
+	 * For the purposes of discussion here, assume 128B always, as this is
+	 * all that the hardware deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_nqm_t;
+
+/**
+ * Work queue entry format for 78XX.
+ * In 78XX packet data always resides in WQE buffer unless option
+ * DIS_WQ_DAT=1 in PKI_STYLE_BUF, which causes packet data to use a separate buffer.
+ *
+ * Must be 8-byte aligned.
+ */
+typedef struct {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 * conditions.
+	 */
+	cvmx_pki_wqe_word2_t word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Pointer to the first segment of the packet.*/
+	cvmx_buf_ptr_pki_t packet_ptr;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives; they contain byte pointers to the start of Layers
+	 * A/B/C/D/E/F/G relative to the start of the packet.
+	 */
+	cvmx_pki_wqe_word4_t word4;
+
+	/*-------------------------------------------------------------------*/
+	/* WORDs 5/6/7 may be extended there, if WQE_HSZ is set.             */
+	/*-------------------------------------------------------------------*/
+	u64 wqe_data[11];
+
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_78xx_t;
+
+/* Node LS-bit position in the WQE[grp] or PKI_QPG_TBL[grp_ok].*/
+#define CVMX_WQE_GRP_NODE_SHIFT 8
+
+/*
+ * This is an accessor function into the WQE that retrieves the
+ * ingress port number, which can also be used as a destination
+ * port number for the same port.
+ *
+ * @param work - Work Queue Entry pointer
+ * @return the normalized port number, also known as "ipd" port
+ */
+static inline int cvmx_wqe_get_port(cvmx_wqe_t *work)
+{
+	int port;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* In 78xx the wqe entry has the channel number, not the port */
+		port = work->word0.pki.channel;
+		/* For BGX interfaces (0x800 - 0xdff) the 4 LSBs indicate
+		 * the PFC channel, must be cleared to normalize to "ipd"
+		 */
+		if (port & 0x800)
+			port &= 0xff0;
+		/* Node number is in AURA field, make it part of port # */
+		port |= (work->word0.pki.aura >> 10) << 12;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		port = work->word2.s_cn68xx.port;
+	} else {
+		port = work->word1.cn38xx.ipprt;
+	}
+
+	return port;
+}
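+
+/*
+ * Illustrative sketch (not from the original sources): the accessors in
+ * this file hide the per-model word layouts, so receive code can stay
+ * model-agnostic, e.g. once a work request has returned a WQE:
+ *
+ *	int port = cvmx_wqe_get_port(work);
+ *	int len = cvmx_wqe_get_len(work);
+ *	u32 tag = cvmx_wqe_get_tag(work);
+ */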
+
+static inline void cvmx_wqe_set_port(cvmx_wqe_t *work, int port)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = port;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word2.s_cn68xx.port = port;
+	else
+		work->word1.cn38xx.ipprt = port;
+}
+
+static inline int cvmx_wqe_get_grp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		/* legacy: GRP[0..2] :=QOS */
+		grp = (0xff & work->word1.cn78xx.grp) >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_xgrp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.grp = grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.grp = grp;
+	else
+		work->word1.cn38xx.grp = grp;
+}
+
+static inline int cvmx_wqe_get_xgrp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		grp = work->word1.cn78xx.grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_grp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		/* Legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= 0x7;
+		work->word1.cn78xx.grp |= 0xff & (grp << 3);
+		work->word1.cn78xx.grp |= (node << 8);
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.grp = grp;
+	} else {
+		work->word1.cn38xx.grp = grp;
+	}
+}
+
+static inline int cvmx_wqe_get_qos(cvmx_wqe_t *work)
+{
+	int qos;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* Legacy: GRP[0..2] :=QOS */
+		qos = work->word1.cn78xx.grp & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		qos = work->word1.cn68xx.qos;
+	} else {
+		qos = work->word1.cn38xx.qos;
+	}
+
+	return qos;
+}
+
+static inline void cvmx_wqe_set_qos(cvmx_wqe_t *work, int qos)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= ~0x7;
+		work->word1.cn78xx.grp |= qos & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.qos = qos;
+	} else {
+		work->word1.cn38xx.qos = qos;
+	}
+}
+
+static inline int cvmx_wqe_get_len(cvmx_wqe_t *work)
+{
+	int len;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		len = work->word1.cn78xx.len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		len = work->word1.cn68xx.len;
+	else
+		len = work->word1.cn38xx.len;
+
+	return len;
+}
+
+static inline void cvmx_wqe_set_len(cvmx_wqe_t *work, int len)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.len = len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.len = len;
+	else
+		work->word1.cn38xx.len = len;
+}
+
+/**
+ * This function returns whether L1/L2 errors were detected in the packet.
+ *
+ * @param work	pointer to work queue entry
+ *
+ * @return	0 if the packet had no error, non-zero to indicate the error code.
+ *
+ * Please refer to the HRM for the specific model for a full enumeration of error codes.
+ * With Octeon1/Octeon2 models, the returned code indicates L1/L2 errors.
+ * On CN73XX/CN78XX, the return code is the value of PKI_OPCODE_E,
+ * if it is non-zero, otherwise the returned code will be derived from
+ * PKI_ERRLEV_E such that an error indicated in LayerA will return 0x20,
+ * LayerB - 0x30, LayerC - 0x40 and so forth.
+ */
+static inline int cvmx_wqe_get_rcv_err(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_RE || wqe->word2.err_code != 0)
+			return wqe->word2.err_code;
+		else
+			return (wqe->word2.err_level << 4) + 0x10;
+	} else if (work->word2.snoip.rcv_error) {
+		return work->word2.snoip.err_code;
+	}
+
+	return 0;
+}
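+
+/*
+ * Illustrative sketch (not from the original sources): a receive path
+ * would typically check the error code before touching packet data
+ * (drop_packet() is a hypothetical handler):
+ *
+ *	int err = cvmx_wqe_get_rcv_err(work);
+ *	if (err)
+ *		drop_packet(work);
+ */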
+
+static inline u32 cvmx_wqe_get_tag(cvmx_wqe_t *work)
+{
+	return work->word1.tag;
+}
+
+static inline void cvmx_wqe_set_tag(cvmx_wqe_t *work, u32 tag)
+{
+	work->word1.tag = tag;
+}
+
+static inline int cvmx_wqe_get_tt(cvmx_wqe_t *work)
+{
+	return work->word1.tag_type;
+}
+
+static inline void cvmx_wqe_set_tt(cvmx_wqe_t *work, int tt)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word1.cn78xx.tag_type = (cvmx_pow_tag_type_t)tt;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn68xx.zero_2 = 0;
+	} else {
+		work->word1.cn38xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn38xx.zero_2 = 0;
+	}
+}
+
+static inline u8 cvmx_wqe_get_unused8(cvmx_wqe_t *work)
+{
+	u8 bits;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		bits = wqe->word2.rsvd_0;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		bits = work->word0.pip.cn68xx.unused1;
+	} else {
+		bits = work->word0.pip.cn38xx.unused;
+	}
+
+	return bits;
+}
+
+static inline void cvmx_wqe_set_unused8(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.rsvd_0 = v;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word0.pip.cn68xx.unused1 = v;
+	} else {
+		work->word0.pip.cn38xx.unused = v;
+	}
+}
+
+static inline u8 cvmx_wqe_get_user_flags(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return work->word0.pki.rsvd_2;
+	else
+		return 0;
+}
+
+static inline void cvmx_wqe_set_user_flags(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.rsvd_2 = v;
+}
+
+static inline int cvmx_wqe_get_channel(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.channel);
+	else
+		return cvmx_wqe_get_port(work);
+}
+
+static inline void cvmx_wqe_set_channel(cvmx_wqe_t *work, int channel)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = channel;
+	else
+		debug("%s: ERROR: not supported for model\n", __func__);
+}
+
+static inline int cvmx_wqe_get_aura(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.aura);
+	else
+		return (work->packet_ptr.s.pool);
+}
+
+static inline void cvmx_wqe_set_aura(cvmx_wqe_t *work, int aura)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.aura = aura;
+	else
+		work->packet_ptr.s.pool = aura;
+}
+
+static inline int cvmx_wqe_get_style(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.style);
+	return 0;
+}
+
+static inline void cvmx_wqe_set_style(cvmx_wqe_t *work, int style)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.style = style;
+}
+
+static inline int cvmx_wqe_is_l3_ip(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match all 4 values for v4/v6 with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return !work->word2.s_cn38xx.not_IP;
+	}
+}
+
+static inline int cvmx_wqe_is_l3_ipv4(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			!work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline int cvmx_wqe_is_l3_ipv6(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline bool cvmx_wqe_is_l4_udp_or_tcp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_TCP)
+			return true;
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_UDP)
+			return true;
+		return false;
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return false;
+
+	return (work->word2.s_cn38xx.tcp_or_udp != 0);
+}
+
+static inline int cvmx_wqe_is_l2_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_bcast;
+	} else {
+		return work->word2.s_cn38xx.is_bcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l2_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_mcast;
+	} else {
+		return work->word2.s_cn38xx.is_mcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_bcast(cvmx_wqe_t *work, bool bcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_bcast = bcast;
+	} else {
+		work->word2.s_cn38xx.is_bcast = bcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_mcast(cvmx_wqe_t *work, bool mcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_mcast = mcast;
+	} else {
+		work->word2.s_cn38xx.is_mcast = mcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l3_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_bcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+static inline int cvmx_wqe_is_l3_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_mcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+/**
+ * This function returns whether an IP error was detected in the packet.
+ * For 78XX it does not flag IPv4 options and IPv6 extensions.
+ * For older chips, if PIP_GBL_CTL was provisioned to flag IPv4 options
+ * and IPv6 extensions, it will flag them.
+ * @param work	pointer to work queue entry
+ * @return	1 -- If an IP error was found in the packet
+ *          0 -- If no IP error was found in the packet.
+ */
+static inline int cvmx_wqe_is_ip_exception(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LC)
+			return 1;
+		else
+			return 0;
+	}
+
+	return work->word2.s.IP_exc;
+}
+
+static inline int cvmx_wqe_is_l4_error(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LF)
+			return 1;
+		else
+			return 0;
+	} else {
+		return work->word2.s.L4_error;
+	}
+}
+
+static inline void cvmx_wqe_set_vlan(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.vlan_valid = set;
+	} else {
+		work->word2.s.vlan_valid = set;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_valid;
+	} else {
+		return work->word2.s.vlan_valid;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan_stacked(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_stacked;
+	} else {
+		return work->word2.s.vlan_stacked;
+	}
+}
+
+/**
+ * Extract packet data buffer pointer from work queue entry.
+ *
+ * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
+ * for the linked buffer list.
+ * On CN78XX, the native buffer pointer structure is converted into
+ * the legacy format.
+ * The legacy buf_ptr is then stored in the WQE, and word0 reserved
+ * field is set to indicate that the buffer pointers were translated.
+ * If the packet data is only found inside the work queue entry,
+ * a standard buffer pointer structure is created for it.
+ */
+cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work);
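+
+/*
+ * Illustrative usage sketch, walking the linked buffer list of a
+ * received packet (process_segment() is a hypothetical consumer).
+ * The legacy convention, also visible in cvmx_wqe_pki_errata_20776()
+ * below, is that the pointer to the next buffer is stored 8 bytes
+ * before the current segment's packet data:
+ *
+ *	cvmx_buf_ptr_t ptr = cvmx_wqe_get_packet_ptr(work);
+ *	int bufs = cvmx_wqe_get_bufs(work);
+ *
+ *	while (bufs-- > 0) {
+ *		process_segment(cvmx_phys_to_ptr(ptr.s.addr), ptr.s.size);
+ *		if (bufs > 0)
+ *			ptr = *(cvmx_buf_ptr_t *)cvmx_phys_to_ptr(ptr.s.addr - 8);
+ *	}
+ */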
+
+static inline int cvmx_wqe_get_bufs(cvmx_wqe_t *work)
+{
+	int bufs;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		bufs = work->word0.pki.bufs;
+	} else {
+		/* Adjust for packet-in-WQE cases */
+		if (cvmx_unlikely(work->word2.s_cn38xx.bufs == 0 && !work->word2.s.software))
+			(void)cvmx_wqe_get_packet_ptr(work);
+		bufs = work->word2.s_cn38xx.bufs;
+	}
+	return bufs;
+}
+
+/**
+ * Free Work Queue Entry memory
+ *
+ * Will return the WQE buffer to its pool, unless the WQE contains
+ * non-redundant packet data.
+ * This function is intended to be called AFTER the packet data
+ * has been passed along to PKO for transmission and release.
+ * It can also follow a call to cvmx_helper_free_packet_data()
+ * to release the WQE after associated data was released.
+ */
+void cvmx_wqe_free(cvmx_wqe_t *work);
+
+/**
+ * Check if a work entry has been initiated by software
+ *
+ */
+static inline bool cvmx_wqe_is_soft(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.software;
+	} else {
+		return work->word2.s.software;
+	}
+}
+
+/**
+ * Allocate a work-queue entry for delivering software-initiated
+ * event notifications.
+ * The application data is copied into the work-queue entry,
+ * if the space is sufficient.
+ */
+cvmx_wqe_t *cvmx_wqe_soft_create(void *data_p, unsigned int data_sz);
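+
+/*
+ * Illustrative sketch of the software-WQE round trip; struct my_event
+ * and handle_event() are hypothetical:
+ *
+ *	struct my_event ev = { .code = 1 };
+ *	cvmx_wqe_t *work = cvmx_wqe_soft_create(&ev, sizeof(ev));
+ *
+ *	if (work && cvmx_wqe_is_soft(work)) {
+ *		handle_event(work);
+ *		cvmx_wqe_free(work);	/* return the WQE buffer to its pool */
+ *	}
+ */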
+
+/* Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped
+ * CN78XX pass 1.x has a bug where the packet pointer in each segment is
+ * written in the opposite endianness of the configured mode. Fix these here.
+ */
+static inline void cvmx_wqe_pki_errata_20776(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && !wqe->pki_errata20776) {
+		u64 bufs;
+		cvmx_buf_ptr_pki_t buffer_next;
+
+		bufs = wqe->word0.bufs;
+		buffer_next = wqe->packet_ptr;
+		while (bufs > 1) {
+			cvmx_buf_ptr_pki_t next;
+			void *nextaddr = cvmx_phys_to_ptr(buffer_next.addr - 8);
+
+			memcpy(&next, nextaddr, sizeof(next));
+			next.u64 = __builtin_bswap64(next.u64);
+			memcpy(nextaddr, &next, sizeof(next));
+			buffer_next = next;
+			bufs--;
+		}
+		wqe->pki_errata20776 = 1;
+	}
+}
+
+/**
+ * @INTERNAL
+ *
+ * Extract the native PKI-specific buffer pointer from WQE.
+ *
+ * NOTE: Provisional, may be superseded.
+ */
+static inline cvmx_buf_ptr_pki_t cvmx_wqe_get_pki_pkt_ptr(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_buf_ptr_pki_t x = { 0 };
+		return x;
+	}
+
+	cvmx_wqe_pki_errata_20776(work);
+	return wqe->packet_ptr;
+}
+
+/**
+ * Set the buffer segment count for a packet.
+ *
+ * @return Returns the actual resulting value in the WQE field
+ */
+static inline unsigned int cvmx_wqe_set_bufs(cvmx_wqe_t *work, unsigned int bufs)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word0.pki.bufs = bufs;
+		return work->word0.pki.bufs;
+	}
+
+	work->word2.s.bufs = bufs;
+	return work->word2.s.bufs;
+}
+
+/**
+ * Get the offset of Layer-3 header,
+ * only supported when Layer-3 protocol is IPv4 or IPv6.
+ *
+ * @return Returns the offset, or 0 if the offset is not known or unsupported.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_get_l3_offset(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return wqe->word4.ptr_layer_c;
+	} else {
+		return work->word2.s.ip_offset;
+	}
+
+	return 0;
+}
+
+/**
+ * Set the offset of Layer-3 header in a packet.
+ * Typically used when an IP packet is generated by software
+ * or when the Layer-2 header length is modified, and
+ * a subsequent recalculation of checksums is anticipated.
+ *
+ * @return Returns the actual value of the work entry offset field.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_set_l3_offset(cvmx_wqe_t *work, unsigned int ip_off)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			wqe->word4.ptr_layer_c = ip_off;
+	} else {
+		work->word2.s.ip_offset = ip_off;
+	}
+
+	return cvmx_wqe_get_l3_offset(work);
+}
+
+/**
+ * Set the indication that the packet contains an IPv4 Layer-3 header.
+ * Use 'cvmx_wqe_set_l3_ipv6()' if the protocol is IPv6.
+ * When 'set' is false, the call will result in an indication
+ * that the Layer-3 protocol is neither IPv4 nor IPv6.
+ *
+ * FIXME: Add IPV4_OPT handling based on L3 header length.
+ */
+static inline void cvmx_wqe_set_l3_ipv4(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP4;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 0;
+	}
+}
+
+/**
+ * Set packet Layer-3 protocol to IPv6.
+ *
+ * FIXME: Add IPV6_OPT handling based on presence of extended headers.
+ */
+static inline void cvmx_wqe_set_l3_ipv6(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP6;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s_cn38xx.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 1;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to UDP.
+ */
+static inline void cvmx_wqe_set_l4_udp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_UDP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to TCP.
+ */
+static inline void cvmx_wqe_set_l4_tcp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_TCP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set the "software" flag in a work entry.
+ */
+static inline void cvmx_wqe_set_soft(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.software = set;
+	} else {
+		work->word2.s.software = set;
+	}
+}
+
+/**
+ * Return true if the packet is an IP fragment.
+ */
+static inline bool cvmx_wqe_is_l3_frag(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.is_frag != 0);
+	}
+
+	if (!work->word2.s_cn38xx.not_IP)
+		return (work->word2.s.is_frag != 0);
+
+	return false;
+}
+
+/**
+ * Set the indicator that the packet is a fragmented IP packet.
+ */
+static inline void cvmx_wqe_set_l3_frag(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_frag = set;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s.is_frag = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to RARP.
+ */
+static inline void cvmx_wqe_set_l3_rarp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_RARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_rarp = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to ARP.
+ */
+static inline void cvmx_wqe_set_l3_arp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_ARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_arp = set;
+	}
+}
+
+/**
+ * Return true if the packet Layer-3 protocol is ARP.
+ */
+static inline bool cvmx_wqe_is_l3_arp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.lc_hdr_type == CVMX_PKI_LTYPE_E_ARP);
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return (work->word2.snoip.is_arp != 0);
+
+	return false;
+}
+
+#endif /* __CVMX_WQE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon_qlm.h b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
new file mode 100644
index 000000000000..219625b25688
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __OCTEON_QLM_H__
+#define __OCTEON_QLM_H__
+
+/* Reference clock selector values for ref_clk_sel */
+#define OCTEON_QLM_REF_CLK_100MHZ 0 /** 100 MHz */
+#define OCTEON_QLM_REF_CLK_125MHZ 1 /** 125 MHz */
+#define OCTEON_QLM_REF_CLK_156MHZ 2 /** 156.25 MHz */
+#define OCTEON_QLM_REF_CLK_161MHZ 3 /** 161.1328125 MHz */
+
+/**
+ * Configure qlm/dlm speed and mode.
+ * @param qlm     The QLM or DLM to configure
+ * @param speed   The speed the QLM needs to be configured in MHz.
+ * @param mode    The QLM to be configured as SGMII/XAUI/PCIe.
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP
+ *		  mode.
+ * @param pcie_mode Only used when qlm/dlm are in pcie mode.
+ * @param ref_clk_sel Reference clock to use for 70XX where:
+ *			0: 100MHz
+ *			1: 125MHz
+ *			2: 156.25MHz
+ *			3: 161.1328125MHz (CN73XX and CN78XX only)
+ * @param ref_clk_input	This selects which reference clock input to use.  For
+ *			cn70xx:
+ *				0: DLMC_REF_CLK0
+ *				1: DLMC_REF_CLK1
+ *				2: DLM0_REF_CLK
+ *			cn61xx: (not used)
+ *			cn78xx/cn76xx/cn73xx:
+ *				0: Internal clock (QLM[0-7]_REF_CLK)
+ *				1: QLMC_REF_CLK0
+ *				2: QLMC_REF_CLK1
+ *
+ * @return	Return 0 on success or -1.
+ *
+ * @note	When the 161MHz clock is used, it can only be used for
+ *		XLAUI mode with a 6316 speed or XFI mode with a 103125 speed.
+ *		This rate is also only supported for CN73XX and CN78XX.
+ */
+int octeon_configure_qlm(int qlm, int speed, int mode, int rc, int pcie_mode, int ref_clk_sel,
+			 int ref_clk_input);
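+
+/*
+ * Example call (values are illustrative only): configure QLM 2 as a
+ * 5000 MHz PCIe root complex using the 100 MHz reference on clock
+ * input 0. The mode constant is assumed to come from cvmx-qlm.h.
+ *
+ *	octeon_configure_qlm(2, 5000, CVMX_QLM_MODE_PCIE, 1, 1,
+ *			     OCTEON_QLM_REF_CLK_100MHZ, 0);
+ */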
+
+int octeon_configure_qlm_cn78xx(int node, int qlm, int speed, int mode, int rc, int pcie_mode,
+				int ref_clk_sel, int ref_clk_input);
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param lane     Lane to apply the tuning parameters to
+ * @param tx_swing Voltage swing.  The higher the value the lower the voltage,
+ *		   the default value is 7.
+ * @param tx_pre   pre-cursor pre-emphasis
+ * @param tx_post  post-cursor pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ */
+void octeon_qlm_tune_per_lane_v3(int node, int qlm, int baud_mhz, int lane, int tx_swing,
+				 int tx_pre, int tx_post, int tx_gain, int tx_vboost);
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param tx_swing Voltage swing.  The higher the value the lower the voltage,
+ *		   the default value is 7.
+ * @param tx_premptap bits [0:3] pre-cursor pre-emphasis, bits[4:8] post-cursor
+ *		      pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ */
+void octeon_qlm_tune_v3(int node, int qlm, int baud_mhz, int tx_swing, int tx_premptap, int tx_gain,
+			int tx_vboost);
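+
+/*
+ * The tx_premptap argument packs both emphasis values into one word; for
+ * instance (illustrative values), tx_pre = 2 and tx_post = 12 would be
+ * passed as tx_premptap = (12 << 4) | 2 = 0xc2.
+ */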
+
+/**
+ * Disables DFE for the specified QLM lane(s).
+ * This function should only be called for low-loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param lane     Lane to configure, or -1 all lanes
+ * @param baud_mhz The speed the QLM needs to be configured in MHz.
+ * @param mode     The QLM to be configured as SGMII/XAUI/PCIe.
+ */
+void octeon_qlm_dfe_disable(int node, int qlm, int lane, int baud_mhz, int mode);
+
+/**
+ * Some QLMs need to override the default pre-ctle for low loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param pre_ctle pre-ctle settings for low loss channels
+ */
+void octeon_qlm_set_channel_v3(int node, int qlm, int pre_ctle);
+
+void octeon_init_qlm(int node);
+
+int octeon_mcu_probe(int node);
+
+#endif /* __OCTEON_QLM_H__ */
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 33/50] mips: octeon: Add misc remaining header files
  2021-04-23  3:56 ` [PATCH v2 33/50] mips: octeon: Add misc remaining header files Stefan Roese
@ 2021-04-23 16:38   ` Daniel Schwierzeck
  2021-04-23 17:57     ` Stefan Roese
  0 siblings, 1 reply; 57+ messages in thread
From: Daniel Schwierzeck @ 2021-04-23 16:38 UTC (permalink / raw)
  To: u-boot

Hi Stefan,

On Friday, 23.04.2021 at 05:56 +0200, Stefan Roese wrote:
> From: Aaron Williams <awilliams@marvell.com>
> 
> Import misc remaining header files from 2013 U-Boot. These will be
> used
> by the later added drivers to support PCIe and networking on the MIPS
> Octeon II / III platforms.
> 
> Signed-off-by: Aaron Williams <awilliams@marvell.com>
> Signed-off-by: Stefan Roese <sr@denx.de>
> Cc: Aaron Williams <awilliams@marvell.com>
> Cc: Chandrakala Chavva <cchavva@marvell.com>
> Cc: Daniel Schwierzeck <daniel.schwierzeck@gmail.com>
> ---
> v2:
> - Add missing mach/octeon_qlm.h file (forgot to commit it in v1)
> 

the patch didn't show up in patchwork. But when manually applying,
there is still a build error due to missing mach/octeon_fdt.h

>  .../mach-octeon/include/mach/cvmx-address.h   |  209 ++
>  .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +++
>  .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
>  arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
>  .../mach-octeon/include/mach/cvmx-error.h     |  456 +++
>  arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 ++
>  .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 ++
>  .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 ++++
>  .../include/mach/cvmx-global-resources.h      |  213 ++
>  arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
>  .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++++
>  .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 ++++
>  arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
>  arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 ++
>  .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
>  .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 ++
>  arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++++++
>  .../include/mach/cvmx-pki-resources.h         |  157 +
>  arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++++++
>  .../mach/cvmx-pko-internal-ports-range.h      |   43 +
>  .../include/mach/cvmx-pko3-queue.h            |  175 +
>  arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 +++++
>  arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 ++
>  .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
>  arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 ++++++++
>  .../mach-octeon/include/mach/octeon_qlm.h     |  109 +
>  26 files changed, 11765 insertions(+)
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h
>  create mode 100644 arch/mips/mach-octeon/include/mach/octeon_qlm.h
> 
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-address.h b/arch/mips/mach-octeon/include/mach/cvmx-address.h
> new file mode 100644
> index 000000000000..984f574a75bb
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-address.h
> @@ -0,0 +1,209 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Typedefs and defines for working with Octeon physical addresses.
> + */
> +
> +#ifndef __CVMX_ADDRESS_H__
> +#define __CVMX_ADDRESS_H__
> +
> +typedef enum {
> +	CVMX_MIPS_SPACE_XKSEG = 3LL,
> +	CVMX_MIPS_SPACE_XKPHYS = 2LL,
> +	CVMX_MIPS_SPACE_XSSEG = 1LL,
> +	CVMX_MIPS_SPACE_XUSEG = 0LL
> +} cvmx_mips_space_t;
> +
> +typedef enum {
> +	CVMX_MIPS_XKSEG_SPACE_KSEG0 = 0LL,
> +	CVMX_MIPS_XKSEG_SPACE_KSEG1 = 1LL,
> +	CVMX_MIPS_XKSEG_SPACE_SSEG = 2LL,
> +	CVMX_MIPS_XKSEG_SPACE_KSEG3 = 3LL
> +} cvmx_mips_xkseg_space_t;
> +
> +/* decodes <14:13> of a kseg3 window address */
> +typedef enum {
> +	CVMX_ADD_WIN_SCR = 0L,
> +	CVMX_ADD_WIN_DMA = 1L,
> +	CVMX_ADD_WIN_UNUSED = 2L,
> +	CVMX_ADD_WIN_UNUSED2 = 3L
> +} cvmx_add_win_dec_t;
> +
> +/* decode within DMA space */
> +typedef enum {
> +	CVMX_ADD_WIN_DMA_ADD = 0L,
> +	CVMX_ADD_WIN_DMA_SENDMEM = 1L,
> +	/* store data must be normal DRAM memory space address in this case */
> +	CVMX_ADD_WIN_DMA_SENDDMA = 2L,
> +	/* see CVMX_ADD_WIN_DMA_SEND_DEC for data contents */
> +	CVMX_ADD_WIN_DMA_SENDIO = 3L,
> +	/* store data must be normal IO space address in this case */
> +	CVMX_ADD_WIN_DMA_SENDSINGLE = 4L,
> +	/* no write buffer data needed/used */
> +} cvmx_add_win_dma_dec_t;
> +
> +/**
> + *   Physical Address Decode
> + *
> + * Octeon-I HW never interprets this X (<39:36> reserved
> + * for future expansion), software should set to 0.
> + *
> + *  - 0x0 XXX0 0000 0000 to      DRAM         Cached
> + *  - 0x0 XXX0 0FFF FFFF
> + *
> + *  - 0x0 XXX0 1000 0000 to      Boot Bus     Uncached  (Converted to 0x1 00X0 1000 0000
> + *  - 0x0 XXX0 1FFF FFFF         + EJTAG                           to 0x1 00X0 1FFF FFFF)
> + *
> + *  - 0x0 XXX0 2000 0000 to      DRAM         Cached
> + *  - 0x0 XXXF FFFF FFFF
> + *
> + *  - 0x1 00X0 0000 0000 to      Boot Bus     Uncached
> + *  - 0x1 00XF FFFF FFFF
> + *
> + *  - 0x1 01X0 0000 0000 to      Other NCB    Uncached
> + *  - 0x1 FFXF FFFF FFFF         devices
> + *
> + * Decode of all Octeon addresses
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		cvmx_mips_space_t R : 2;
> +		u64 offset : 62;
> +	} sva;
> +
> +	struct {
> +		u64 zeroes : 33;
> +		u64 offset : 31;
> +	} suseg;
> +
> +	struct {
> +		u64 ones : 33;
> +		cvmx_mips_xkseg_space_t sp : 2;
> +		u64 offset : 29;
> +	} sxkseg;
> +
> +	struct {
> +		cvmx_mips_space_t R : 2;
> +		u64 cca : 3;
> +		u64 mbz : 10;
> +		u64 pa : 49;
> +	} sxkphys;
> +
> +	struct {
> +		u64 mbz : 15;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 unaddr : 4;
> +		u64 offset : 36;
> +	} sphys;
> +
> +	struct {
> +		u64 zeroes : 24;
> +		u64 unaddr : 4;
> +		u64 offset : 36;
> +	} smem;
> +
> +	struct {
> +		u64 mem_region : 2;
> +		u64 mbz : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 unaddr : 4;
> +		u64 offset : 36;
> +	} sio;
> +
> +	struct {
> +		u64 ones : 49;
> +		cvmx_add_win_dec_t csrdec : 2;
> +		u64 addr : 13;
> +	} sscr;
> +
> +	/* there should only be stores to IOBDMA space, no loads */
> +	struct {
> +		u64 ones : 49;
> +		cvmx_add_win_dec_t csrdec : 2;
> +		u64 unused2 : 3;
> +		cvmx_add_win_dma_dec_t type : 3;
> +		u64 addr : 7;
> +	} sdma;
> +
> +	struct {
> +		u64 didspace : 24;
> +		u64 unused : 40;
> +	} sfilldidspace;
> +} cvmx_addr_t;
> +
> +/* These macros are used by 32-bit applications */
> +
> +#define CVMX_MIPS32_SPACE_KSEG0	     1l
> +#define CVMX_ADD_SEG32(segment, add) (((s32)segment << 31) | (s32)(add))
> +
> +/*
> + * Currently all IOs are performed using XKPHYS addressing. Linux uses the
> + * CvmMemCtl register to enable XKPHYS addressing to IO space from user mode.
> + * Future OSes may need to change the upper bits of IO addresses. The
> + * following define controls the upper two bits for all IO addresses generated
> + * by the simple executive library
> + */
> +#define CVMX_IO_SEG CVMX_MIPS_SPACE_XKPHYS
> +
> +/* These macros simplify the process of creating common IO addresses */
> +#define CVMX_ADD_SEG(segment, add) ((((u64)segment) << 62) | (add))
> +
> +#define CVMX_ADD_IO_SEG(add) (add)
> +
> +#define CVMX_ADDR_DIDSPACE(did)	   (((CVMX_IO_SEG) << 22) | ((1ULL) << 8) | (did))
> +#define CVMX_ADDR_DID(did)	   (CVMX_ADDR_DIDSPACE(did) << 40)
> +#define CVMX_FULL_DID(did, subdid) (((did) << 3) | (subdid))
> +
> +/* from include/ncb_rsl_id.v */
> +#define CVMX_OCT_DID_MIS  0ULL /* misc stuff */
> +#define CVMX_OCT_DID_GMX0 1ULL
> +#define CVMX_OCT_DID_GMX1 2ULL
> +#define CVMX_OCT_DID_PCI  3ULL
> +#define CVMX_OCT_DID_KEY  4ULL
> +#define CVMX_OCT_DID_FPA  5ULL
> +#define CVMX_OCT_DID_DFA  6ULL
> +#define CVMX_OCT_DID_ZIP  7ULL
> +#define CVMX_OCT_DID_RNG  8ULL
> +#define CVMX_OCT_DID_IPD  9ULL
> +#define CVMX_OCT_DID_PKT  10ULL
> +#define CVMX_OCT_DID_TIM  11ULL
> +#define CVMX_OCT_DID_TAG  12ULL
> +/* the rest are not on the IO bus */
> +#define CVMX_OCT_DID_L2C  16ULL
> +#define CVMX_OCT_DID_LMC  17ULL
> +#define CVMX_OCT_DID_SPX0 18ULL
> +#define CVMX_OCT_DID_SPX1 19ULL
> +#define CVMX_OCT_DID_PIP  20ULL
> +#define CVMX_OCT_DID_ASX0 22ULL
> +#define CVMX_OCT_DID_ASX1 23ULL
> +#define CVMX_OCT_DID_IOB  30ULL
> +
> +#define CVMX_OCT_DID_PKT_SEND	 CVMX_FULL_DID(CVMX_OCT_DID_PKT, 2ULL)
> +#define CVMX_OCT_DID_TAG_SWTAG	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 0ULL)
> +#define CVMX_OCT_DID_TAG_TAG1	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 1ULL)
> +#define CVMX_OCT_DID_TAG_TAG2	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 2ULL)
> +#define CVMX_OCT_DID_TAG_TAG3	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 3ULL)
> +#define CVMX_OCT_DID_TAG_NULL_RD CVMX_FULL_DID(CVMX_OCT_DID_TAG, 4ULL)
> +#define CVMX_OCT_DID_TAG_TAG5	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 5ULL)
> +#define CVMX_OCT_DID_TAG_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL)
> +#define CVMX_OCT_DID_FAU_FAI	 CVMX_FULL_DID(CVMX_OCT_DID_IOB, 0ULL)
> +#define CVMX_OCT_DID_TIM_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TIM, 0ULL)
> +#define CVMX_OCT_DID_KEY_RW	 CVMX_FULL_DID(CVMX_OCT_DID_KEY, 0ULL)
> +#define CVMX_OCT_DID_PCI_6	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 6ULL)
> +#define CVMX_OCT_DID_MIS_BOO	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 0ULL)
> +#define CVMX_OCT_DID_PCI_RML	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 0ULL)
> +#define CVMX_OCT_DID_IPD_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_IPD, 7ULL)
> +#define CVMX_OCT_DID_DFA_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_DFA, 7ULL)
> +#define CVMX_OCT_DID_MIS_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 7ULL)
> +#define CVMX_OCT_DID_ZIP_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_ZIP, 0ULL)
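
As a quick check of these encodings: CVMX_OCT_DID_TAG_CSR above expands to
CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL) = (12 << 3) | 7 = 0x67, which
CVMX_ADDR_DID() then shifts into the upper bits of the final IO address.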
> +
> +/* Cast to unsigned long long, mainly for use in printfs. */
> +#define CAST_ULL(v) ((unsigned long long)(v))
> +
> +#define UNMAPPED_PTR(x) ((1ULL << 63) | (x))
> +
> +#endif /* __CVMX_ADDRESS_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
> new file mode 100644
> index 000000000000..ddc294348cb4
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
> @@ -0,0 +1,441 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Support functions for managing command queues used for
> + * various hardware blocks.
> + *
> + * The common command queue infrastructure abstracts out the
> + * software necessary for adding to Octeon's chained queue
> + * structures. These structures are used for commands to the
> + * PKO, ZIP, DFA, RAID, HNA, and DMA engine blocks. Although each
> + * hardware unit takes commands and CSRs of different types,
> + * they all use basic linked command buffers to store the
> + * pending request. In general, users of the CVMX API don't
> + * call cvmx-cmd-queue functions directly. Instead the hardware
> + * unit specific wrapper should be used. The wrappers perform
> + * unit specific validation and CSR writes to submit the
> + * commands.
> + *
> + * Even though most software will never directly interact with
> + * cvmx-cmd-queue, knowledge of its internal workings can help
> + * in diagnosing performance problems and help with debugging.
> + *
> + * Command queue pointers are stored in a global named block
> + * called "cvmx_cmd_queues". Except for the PKO queues, each
> + * hardware queue is stored in its own cache line to reduce SMP
> + * contention on spin locks. The PKO queues are stored such that
> + * every 16th queue is next to each other in memory. This scheme
> + * allows for queues being in separate cache lines when there
> + * are low number of queues per port. With 16 queues per port,
> + * the first queue for each port is in the same cache area. The
> + * second queues for each port are in another area, etc. This
> + * allows software to implement very efficient lockless PKO with
> + * 16 queues per port using a minimum of cache lines per core.
> + * All queues for a given core will be isolated in the same
> + * cache area.
> + *
> + * In addition to the memory pointer layout, cvmx-cmd-queue
> + * provides an optimized fair ll/sc locking mechanism for the
> + * queues. The lock uses a "ticket / now serving" model to
> + * maintain fair order on contended locks. In addition, it uses
> + * predicted locking time to limit cache contention. When a core
> + * knows it must wait in line for a lock, it spins on the
> + * internal cycle counter to completely eliminate any causes of
> + * bus traffic.
> + */
> +
> +#ifndef __CVMX_CMD_QUEUE_H__
> +#define __CVMX_CMD_QUEUE_H__
> +
> +/**
> + * By default we disable the max depth support. Most programs
> + * don't use it and it slows down the command queue processing
> + * significantly.
> + */
> +#ifndef CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH
> +#define CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH 0
> +#endif
> +
> +/**
> + * Enumeration representing all hardware blocks that use command
> + * queues. Each hardware block has up to 65536 sub identifiers for
> + * multiple command queues. Not all chips support all hardware
> + * units.
> + */
> +typedef enum {
> +	CVMX_CMD_QUEUE_PKO_BASE = 0x00000,
> +#define CVMX_CMD_QUEUE_PKO(queue)                                                                  \
> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_PKO_BASE + (0xffff & (queue))))
> +	CVMX_CMD_QUEUE_ZIP = 0x10000,
> +#define CVMX_CMD_QUEUE_ZIP_QUE(queue)                                                              \
> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_ZIP + (0xffff & (queue))))
> +	CVMX_CMD_QUEUE_DFA = 0x20000,
> +	CVMX_CMD_QUEUE_RAID = 0x30000,
> +	CVMX_CMD_QUEUE_DMA_BASE = 0x40000,
> +#define CVMX_CMD_QUEUE_DMA(queue)                                                                  \
> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_DMA_BASE + (0xffff & (queue))))
> +	CVMX_CMD_QUEUE_BCH = 0x50000,
> +#define CVMX_CMD_QUEUE_BCH(queue) ((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_BCH + (0xffff & (queue))))
> +	CVMX_CMD_QUEUE_HNA = 0x60000,
> +	CVMX_CMD_QUEUE_END = 0x70000,
> +} cvmx_cmd_queue_id_t;
> +
> +#define CVMX_CMD_QUEUE_ZIP3_QUE(node, queue)                                                       \
> +	((cvmx_cmd_queue_id_t)((node) << 24 | CVMX_CMD_QUEUE_ZIP | (0xffff & (queue))))
> +
> +/**
> + * Command write operations can fail if the command queue needs
> + * a new buffer and the associated FPA pool is empty. It can also
> + * fail if the number of queued command words reaches the maximum
> + * set at initialization.
> + */
> +typedef enum {
> +	CVMX_CMD_QUEUE_SUCCESS = 0,
> +	CVMX_CMD_QUEUE_NO_MEMORY = -1,
> +	CVMX_CMD_QUEUE_FULL = -2,
> +	CVMX_CMD_QUEUE_INVALID_PARAM = -3,
> +	CVMX_CMD_QUEUE_ALREADY_SETUP = -4,
> +} cvmx_cmd_queue_result_t;
> +
> +typedef struct {
> +	/* First 64-bit word: */
> +	u64 fpa_pool : 16;
> +	u64 base_paddr : 48;
> +	s32 index;
> +	u16 max_depth;
> +	u16 pool_size_m1;
> +} __cvmx_cmd_queue_state_t;
> +
> +/**
> + * command-queue locking uses a fair ticket spinlock algo,
> + * with 64-bit tickets for endianness-neutrality and
> + * counter overflow protection.
> + * Lock is free when both counters are of equal value.
> + */
> +typedef struct {
> +	u64 ticket;
> +	u64 now_serving;
> +} __cvmx_cmd_queue_lock_t;
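
(Side note for readers: __cvmx_cmd_queue_lock()/unlock() further down are
stubbed out for U-Boot, but the ticket scheme this struct describes would
classically be implemented along these lines - a rough sketch, not the SDK
code, and atomic_fetch_add64() is a placeholder for the real atomic:

	u64 my_ticket = atomic_fetch_add64(&lock->ticket, 1);
	while (lock->now_serving != my_ticket)
		;	/* optionally wait (my_ticket - now_serving) cycles */
	/* ... critical section ... */
	lock->now_serving++;

i.e. the lock is free when both counters are equal, as the comment says.)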
> +
> +/**
> + * @INTERNAL
> + * This structure contains the global state of all command queues.
> + * It is stored in a bootmem named block and shared by all
> + * applications running on Octeon. Tickets are stored in a different
> + * cache line than the queue information to reduce the contention on the
> + * ll/sc used to get a ticket. If this is not the case, the update
> + * of queue state causes the ll/sc to fail quite often.
> + */
> +typedef struct {
> +	__cvmx_cmd_queue_lock_t lock[(CVMX_CMD_QUEUE_END >> 16) * 256];
> +	__cvmx_cmd_queue_state_t state[(CVMX_CMD_QUEUE_END >> 16) *
> 256];
> +} __cvmx_cmd_queue_all_state_t;
> +
> +extern __cvmx_cmd_queue_all_state_t *__cvmx_cmd_queue_state_ptrs[CVMX_MAX_NODES];
> +
> +/**
> + * @INTERNAL
> + * Internal function to handle the corner cases
> + * of adding command words to a queue when the current
> + * block is getting full.
> + */
> +cvmx_cmd_queue_result_t __cvmx_cmd_queue_write_raw(cvmx_cmd_queue_id_t queue_id,
> +						   __cvmx_cmd_queue_state_t *qptr, int cmd_count,
> +						   const u64 *cmds);
> +
> +/**
> + * Initialize a command queue for use. The initial FPA buffer is
> + * allocated and the hardware unit is configured to point to the
> + * new command queue.
> + *
> + * @param queue_id  Hardware command queue to initialize.
> + * @param max_depth Maximum outstanding commands that can be queued.
> + * @param fpa_pool  FPA pool the command queues should come from.
> + * @param pool_size Size of each buffer in the FPA pool (bytes)
> + *
> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
> + */
> +cvmx_cmd_queue_result_t cvmx_cmd_queue_initialize(cvmx_cmd_queue_id_t queue_id, int max_depth,
> +						  int fpa_pool, int pool_size);
> +
> +/**
> + * Shutdown a queue a free it's command buffers to the FPA. The
> + * hardware connected to the queue must be stopped before this
> + * function is called.
> + *
> + * @param queue_id Queue to shutdown
> + *
> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
> + */
> +cvmx_cmd_queue_result_t cvmx_cmd_queue_shutdown(cvmx_cmd_queue_id_t queue_id);
> +
> +/**
> + * Return the number of command words pending in the queue. This
> + * function may be relatively slow for some hardware units.
> + *
> + * @param queue_id Hardware command queue to query
> + *
> + * @return Number of outstanding commands
> + */
> +int cvmx_cmd_queue_length(cvmx_cmd_queue_id_t queue_id);
> +
> +/**
> + * Return the command buffer to be written to. The purpose of this
> + * function is to allow CVMX routine access to the low level buffer
> + * for initial hardware setup. User applications should not call this
> + * function directly.
> + *
> + * @param queue_id Command queue to query
> + *
> + * @return Command buffer or NULL on failure
> + */
> +void *cvmx_cmd_queue_buffer(cvmx_cmd_queue_id_t queue_id);
> +
> +/**
> + * @INTERNAL
> + * Retrieve or allocate command queue state named block
> + */
> +cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(unsigned int node);
> +
> +/**
> + * @INTERNAL
> + * Get the index into the state arrays for the supplied queue id.
> + *
> + * @param queue_id Queue ID to get an index for
> + *
> + * @return Index into the state arrays
> + */
> +static inline unsigned int __cvmx_cmd_queue_get_index(cvmx_cmd_queue_id_t queue_id)
> +{
> +	/* Warning: This code currently only works with devices that have 256
> +	 * queues or less. Devices with more than 16 queues are laid out in
> +	 * memory to allow cores quick access to every 16th queue. This reduces
> +	 * cache thrashing when you are running 16 queues per port to support
> +	 * lockless operation
> +	 */
> +	unsigned int unit = (queue_id >> 16) & 0xff;
> +	unsigned int q = (queue_id >> 4) & 0xf;
> +	unsigned int core = queue_id & 0xf;
> +
> +	return (unit << 8) | (core << 4) | q;
> +}
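
As a sanity check of the layout described above: queue_id =
CVMX_CMD_QUEUE_PKO(18) is 0x12, so unit = 0, q = 1, core = 2 and the index is
(0 << 8) | (2 << 4) | 1 = 0x21. Queues 2, 18, 34, ... therefore land in
adjacent state slots (0x20, 0x21, 0x22, ...), matching the "every 16th queue"
comment. The math looks right to me.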
> +
> +static inline int __cvmx_cmd_queue_get_node(cvmx_cmd_queue_id_t queue_id)
> +{
> +	unsigned int node = queue_id >> 24;
> +	return node;
> +}
> +
> +/**
> + * @INTERNAL
> + * Lock the supplied queue so nobody else is updating it at the same
> + * time as us.
> + *
> + * @param queue_id Queue ID to lock
> + *
> + */
> +static inline void __cvmx_cmd_queue_lock(cvmx_cmd_queue_id_t queue_id)
> +{
> +}
> +
> +/**
> + * @INTERNAL
> + * Unlock the queue, flushing all writes.
> + *
> + * @param queue_id Queue ID to lock
> + *
> + */
> +static inline void __cvmx_cmd_queue_unlock(cvmx_cmd_queue_id_t queue_id)
> +{
> +	CVMX_SYNCWS; /* nudge out the unlock. */
> +}
> +
> +/**
> + * @INTERNAL
> + * Initialize a command-queue lock to "unlocked" state.
> + */
> +static inline void __cvmx_cmd_queue_lock_init(cvmx_cmd_queue_id_t queue_id)
> +{
> +	unsigned int index = __cvmx_cmd_queue_get_index(queue_id);
> +	unsigned int node = __cvmx_cmd_queue_get_node(queue_id);
> +
> +	__cvmx_cmd_queue_state_ptrs[node]->lock[index] = (__cvmx_cmd_queue_lock_t){ 0, 0 };
> +	CVMX_SYNCWS;
> +}
> +
> +/**
> + * @INTERNAL
> + * Get the queue state structure for the given queue id
> + *
> + * @param queue_id Queue id to get
> + *
> + * @return Queue structure or NULL on failure
> + */
> +static inline __cvmx_cmd_queue_state_t *__cvmx_cmd_queue_get_state(cvmx_cmd_queue_id_t queue_id)
> +{
> +	unsigned int index;
> +	unsigned int node;
> +	__cvmx_cmd_queue_state_t *qptr;
> +
> +	node = __cvmx_cmd_queue_get_node(queue_id);
> +	index = __cvmx_cmd_queue_get_index(queue_id);
> +
> +	if (cvmx_unlikely(!__cvmx_cmd_queue_state_ptrs[node]))
> +		__cvmx_cmd_queue_init_state_ptr(node);
> +
> +	qptr = &__cvmx_cmd_queue_state_ptrs[node]->state[index];
> +	return qptr;
> +}
> +
> +/**
> + * Write an arbitrary number of command words to a command queue.
> + * This is a generic function; the fixed number of command word
> + * functions yield higher performance.
> + *
> + * @param queue_id  Hardware command queue to write to
> + * @param use_locking
> + *                  Use internal locking to ensure exclusive access for queue
> + *                  updates. If you don't use this locking you must ensure
> + *                  exclusivity some other way. Locking is strongly
> + *                  recommended.
> + * @param cmd_count Number of command words to write
> + * @param cmds      Array of commands to write
> + *
> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
> + */
> +static inline cvmx_cmd_queue_result_t
> +cvmx_cmd_queue_write(cvmx_cmd_queue_id_t queue_id, bool use_locking, int cmd_count, const u64 *cmds)
> +{
> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
> +	u64 *cmd_ptr;
> +
> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
> +
> +	/* Make sure nobody else is updating the same queue */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_lock(queue_id);
> +
> +	/* Most of the time there are lots of free words in the current block */
> +	if (cvmx_unlikely((qptr->index + cmd_count) >= qptr->pool_size_m1)) {
> +		/* The rare case when nearing end of block */
> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, cmd_count, cmds);
> +	} else {
> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
> +		/* Loop easy for compiler to unroll for the likely case */
> +		while (cmd_count > 0) {
> +			cmd_ptr[qptr->index++] = *cmds++;
> +			cmd_count--;
> +		}
> +	}
> +
> +	/* All updates are complete. Release the lock and return */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_unlock(queue_id);
> +	else
> +		CVMX_SYNCWS;
> +
> +	return ret;
> +}
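
For readers following the API: a minimal usage sketch would be something like
the following (the queue id, max depth, FPA pool and command words are purely
illustrative, and a real caller would check the result codes more carefully):

	u64 cmds[2] = { cmd_word0, cmd_word1 };
	cvmx_cmd_queue_result_t ret;

	ret = cvmx_cmd_queue_initialize(CVMX_CMD_QUEUE_PKO(port), 0,
					fpa_pool, 1024);
	if (ret == CVMX_CMD_QUEUE_SUCCESS)
		ret = cvmx_cmd_queue_write(CVMX_CMD_QUEUE_PKO(port), true,
					   2, cmds);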
> +
> +/**
> + * Simple function to write two command words to a command queue.
> + *
> + * @param queue_id Hardware command queue to write to
> + * @param use_locking
> + *                 Use internal locking to ensure exclusive access for queue
> + *                 updates. If you don't use this locking you must ensure
> + *                 exclusivity some other way. Locking is strongly
> + *                 recommended.
> + * @param cmd1     Command
> + * @param cmd2     Command
> + *
> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
> + */
> +static inline cvmx_cmd_queue_result_t cvmx_cmd_queue_write2(cvmx_cmd_queue_id_t queue_id,
> +							     bool use_locking, u64 cmd1, u64 cmd2)
> +{
> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
> +	u64 *cmd_ptr;
> +
> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
> +
> +	/* Make sure nobody else is updating the same queue */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_lock(queue_id);
> +
> +	if (cvmx_unlikely((qptr->index + 2) >= qptr->pool_size_m1)) {
> +		/* The rare case when nearing end of block */
> +		u64 cmds[2];
> +
> +		cmds[0] = cmd1;
> +		cmds[1] = cmd2;
> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 2, cmds);
> +	} else {
> +		/* Likely case to work fast */
> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
> +		cmd_ptr += qptr->index;
> +		qptr->index += 2;
> +		cmd_ptr[0] = cmd1;
> +		cmd_ptr[1] = cmd2;
> +	}
> +
> +	/* All updates are complete. Release the lock and return */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_unlock(queue_id);
> +	else
> +		CVMX_SYNCWS;
> +
> +	return ret;
> +}
> +
> +/**
> + * Simple function to write three command words to a command queue.
> + *
> + * @param queue_id Hardware command queue to write to
> + * @param use_locking
> + *                 Use internal locking to ensure exclusive access for queue
> + *                 updates. If you don't use this locking you must ensure
> + *                 exclusivity some other way. Locking is strongly
> + *                 recommended.
> + * @param cmd1     Command
> + * @param cmd2     Command
> + * @param cmd3     Command
> + *
> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
> + */
> +static inline cvmx_cmd_queue_result_t
> +cvmx_cmd_queue_write3(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2, u64 cmd3)
> +{
> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
> +	u64 *cmd_ptr;
> +
> +	/* Make sure nobody else is updating the same queue */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_lock(queue_id);
> +
> +	if (cvmx_unlikely((qptr->index + 3) >= qptr->pool_size_m1)) {
> +		/* The rare case when nearing end of block */
> +		u64 cmds[3];
> +
> +		cmds[0] = cmd1;
> +		cmds[1] = cmd2;
> +		cmds[2] = cmd3;
> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 3, cmds);
> +	} else {
> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
> +		cmd_ptr += qptr->index;
> +		qptr->index += 3;
> +		cmd_ptr[0] = cmd1;
> +		cmd_ptr[1] = cmd2;
> +		cmd_ptr[2] = cmd3;
> +	}
> +
> +	/* All updates are complete. Release the lock and return */
> +	if (cvmx_likely(use_locking))
> +		__cvmx_cmd_queue_unlock(queue_id);
> +	else
> +		CVMX_SYNCWS;
> +
> +	return ret;
> +}
> +
> +#endif /* __CVMX_CMD_QUEUE_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
> new file mode 100644
> index 000000000000..a8625b4228ac
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
> @@ -0,0 +1,87 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Definitions for enumerations used with Octeon CSRs.
> + */
> +
> +#ifndef __CVMX_CSR_ENUMS_H__
> +#define __CVMX_CSR_ENUMS_H__
> +
> +typedef enum {
> +	CVMX_IPD_OPC_MODE_STT = 0LL,
> +	CVMX_IPD_OPC_MODE_STF = 1LL,
> +	CVMX_IPD_OPC_MODE_STF1_STT = 2LL,
> +	CVMX_IPD_OPC_MODE_STF2_STT = 3LL
> +} cvmx_ipd_mode_t;
> +
> +/**
> + * Enumeration representing the amount of packet processing
> + * and validation performed by the input hardware.
> + */
> +typedef enum {
> +	CVMX_PIP_PORT_CFG_MODE_NONE = 0ull,
> +	CVMX_PIP_PORT_CFG_MODE_SKIPL2 = 1ull,
> +	CVMX_PIP_PORT_CFG_MODE_SKIPIP = 2ull
> +} cvmx_pip_port_parse_mode_t;
> +
> +/**
> + * This enumeration controls how a QoS watcher matches a packet.
> + *
> + * @deprecated  This enumeration was used with cvmx_pip_config_watcher
> + *              which has been deprecated.
> + */
> +typedef enum {
> +	CVMX_PIP_QOS_WATCH_DISABLE = 0ull,
> +	CVMX_PIP_QOS_WATCH_PROTNH = 1ull,
> +	CVMX_PIP_QOS_WATCH_TCP = 2ull,
> +	CVMX_PIP_QOS_WATCH_UDP = 3ull
> +} cvmx_pip_qos_watch_types;
> +
> +/**
> + * This enumeration is used in PIP tag config to control how
> + * POW tags are generated by the hardware.
> + */
> +typedef enum {
> +	CVMX_PIP_TAG_MODE_TUPLE = 0ull,
> +	CVMX_PIP_TAG_MODE_MASK = 1ull,
> +	CVMX_PIP_TAG_MODE_IP_OR_MASK = 2ull,
> +	CVMX_PIP_TAG_MODE_TUPLE_XOR_MASK = 3ull
> +} cvmx_pip_tag_mode_t;
> +
> +/**
> + * Tag type definitions
> + */
> +typedef enum {
> +	CVMX_POW_TAG_TYPE_ORDERED = 0L,
> +	CVMX_POW_TAG_TYPE_ATOMIC = 1L,
> +	CVMX_POW_TAG_TYPE_NULL = 2L,
> +	CVMX_POW_TAG_TYPE_NULL_NULL = 3L
> +} cvmx_pow_tag_type_t;
> +
> +/**
> + * LCR bits 0 and 1 control the number of bits per character. See the
> + * following table for encodings:
> + *
> + * - 00 = 5 bits (bits 0-4 sent)
> + * - 01 = 6 bits (bits 0-5 sent)
> + * - 10 = 7 bits (bits 0-6 sent)
> + * - 11 = 8 bits (all bits sent)
> + */
> +typedef enum {
> +	CVMX_UART_BITS5 = 0,
> +	CVMX_UART_BITS6 = 1,
> +	CVMX_UART_BITS7 = 2,
> +	CVMX_UART_BITS8 = 3
> +} cvmx_uart_bits_t;
> +
> +typedef enum {
> +	CVMX_UART_IID_NONE = 1,
> +	CVMX_UART_IID_RX_ERROR = 6,
> +	CVMX_UART_IID_RX_DATA = 4,
> +	CVMX_UART_IID_RX_TIMEOUT = 12,
> +	CVMX_UART_IID_TX_EMPTY = 2,
> +	CVMX_UART_IID_MODEM = 0,
> +	CVMX_UART_IID_BUSY = 7
> +} cvmx_uart_iid_t;
> +
> +#endif /* __CVMX_CSR_ENUMS_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr.h b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
> new file mode 100644
> index 000000000000..730d54bb9278
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Configuration and status register (CSR) address and type definitions for
> + * Octeon.
> + */
> +
> +#ifndef __CVMX_CSR_H__
> +#define __CVMX_CSR_H__
> +
> +#include "cvmx-csr-enums.h"
> +#include "cvmx-pip-defs.h"
> +
> +typedef cvmx_pip_prt_cfgx_t cvmx_pip_port_cfg_t;
> +
> +/* The CSRs for bootbus region zero used to be independent of the
> +    other 1-7. As of SDK 1.7.0 these were combined. These macros
> +    are for backwards compatibility */
> +#define CVMX_MIO_BOOT_REG_CFG0 CVMX_MIO_BOOT_REG_CFGX(0)
> +#define CVMX_MIO_BOOT_REG_TIM0 CVMX_MIO_BOOT_REG_TIMX(0)
> +
> +/* The CN3XXX and CN58XX chips used to not have a LMC number
> +    passed to the address macros. These are here to supply backwards
> +    compatibility with old code. Code should really use the new addresses
> +    with bus arguments for support on other chips */
> +#define CVMX_LMC_BIST_CTL	  CVMX_LMCX_BIST_CTL(0)
> +#define CVMX_LMC_BIST_RESULT	  CVMX_LMCX_BIST_RESULT(0)
> +#define CVMX_LMC_COMP_CTL	  CVMX_LMCX_COMP_CTL(0)
> +#define CVMX_LMC_CTL		  CVMX_LMCX_CTL(0)
> +#define CVMX_LMC_CTL1		  CVMX_LMCX_CTL1(0)
> +#define CVMX_LMC_DCLK_CNT_HI	  CVMX_LMCX_DCLK_CNT_HI(0)
> +#define CVMX_LMC_DCLK_CNT_LO	  CVMX_LMCX_DCLK_CNT_LO(0)
> +#define CVMX_LMC_DCLK_CTL	  CVMX_LMCX_DCLK_CTL(0)
> +#define CVMX_LMC_DDR2_CTL	  CVMX_LMCX_DDR2_CTL(0)
> +#define CVMX_LMC_DELAY_CFG	  CVMX_LMCX_DELAY_CFG(0)
> +#define CVMX_LMC_DLL_CTL	  CVMX_LMCX_DLL_CTL(0)
> +#define CVMX_LMC_DUAL_MEMCFG	  CVMX_LMCX_DUAL_MEMCFG(0)
> +#define CVMX_LMC_ECC_SYND	  CVMX_LMCX_ECC_SYND(0)
> +#define CVMX_LMC_FADR		  CVMX_LMCX_FADR(0)
> +#define CVMX_LMC_IFB_CNT_HI	  CVMX_LMCX_IFB_CNT_HI(0)
> +#define CVMX_LMC_IFB_CNT_LO	  CVMX_LMCX_IFB_CNT_LO(0)
> +#define CVMX_LMC_MEM_CFG0	  CVMX_LMCX_MEM_CFG0(0)
> +#define CVMX_LMC_MEM_CFG1	  CVMX_LMCX_MEM_CFG1(0)
> +#define CVMX_LMC_OPS_CNT_HI	  CVMX_LMCX_OPS_CNT_HI(0)
> +#define CVMX_LMC_OPS_CNT_LO	  CVMX_LMCX_OPS_CNT_LO(0)
> +#define CVMX_LMC_PLL_BWCTL	  CVMX_LMCX_PLL_BWCTL(0)
> +#define CVMX_LMC_PLL_CTL	  CVMX_LMCX_PLL_CTL(0)
> +#define CVMX_LMC_PLL_STATUS	  CVMX_LMCX_PLL_STATUS(0)
> +#define CVMX_LMC_READ_LEVEL_CTL	  CVMX_LMCX_READ_LEVEL_CTL(0)
> +#define CVMX_LMC_READ_LEVEL_DBG	  CVMX_LMCX_READ_LEVEL_DBG(0)
> +#define CVMX_LMC_READ_LEVEL_RANKX CVMX_LMCX_READ_LEVEL_RANKX(0)
> +#define CVMX_LMC_RODT_COMP_CTL	  CVMX_LMCX_RODT_COMP_CTL(0)
> +#define CVMX_LMC_RODT_CTL	  CVMX_LMCX_RODT_CTL(0)
> +#define CVMX_LMC_WODT_CTL	  CVMX_LMCX_WODT_CTL0(0)
> +#define CVMX_LMC_WODT_CTL0	  CVMX_LMCX_WODT_CTL0(0)
> +#define CVMX_LMC_WODT_CTL1	  CVMX_LMCX_WODT_CTL1(0)
> +
> +/* The CN3XXX and CN58XX chips used to not have a TWSI bus number
> +    passed to the address macros. These are here to supply backwards
> +    compatibility with old code. Code should really use the new addresses
> +    with bus arguments for support on other chips */
> +#define CVMX_MIO_TWS_INT	 CVMX_MIO_TWSX_INT(0)
> +#define CVMX_MIO_TWS_SW_TWSI	 CVMX_MIO_TWSX_SW_TWSI(0)
> +#define CVMX_MIO_TWS_SW_TWSI_EXT CVMX_MIO_TWSX_SW_TWSI_EXT(0)
> +#define CVMX_MIO_TWS_TWSI_SW	 CVMX_MIO_TWSX_TWSI_SW(0)
> +
> +/* The CN3XXX and CN58XX chips used to not have a SMI/MDIO bus number
> +    passed to the address macros. These are here to supply backwards
> +    compatibility with old code. Code should really use the new addresses
> +    with bus arguments for support on other chips */
> +#define CVMX_SMI_CLK	CVMX_SMIX_CLK(0)
> +#define CVMX_SMI_CMD	CVMX_SMIX_CMD(0)
> +#define CVMX_SMI_EN	CVMX_SMIX_EN(0)
> +#define CVMX_SMI_RD_DAT CVMX_SMIX_RD_DAT(0)
> +#define CVMX_SMI_WR_DAT CVMX_SMIX_WR_DAT(0)
> +
> +#endif /* __CVMX_CSR_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-error.h b/arch/mips/mach-octeon/include/mach/cvmx-error.h
> new file mode 100644
> index 000000000000..9a13ed422484
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-error.h
> @@ -0,0 +1,456 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the Octeon extended error status.
> + */
> +
> +#ifndef __CVMX_ERROR_H__
> +#define __CVMX_ERROR_H__
> +
> +/**
> + * There are generally many error status bits associated with a
> + * single logical group. The enumeration below is used to
> + * communicate high level groups to the error infrastructure so
> + * error status bits can be enabled or disabled in large groups.
> + */
> +typedef enum {
> +	CVMX_ERROR_GROUP_INTERNAL,
> +	CVMX_ERROR_GROUP_L2C,
> +	CVMX_ERROR_GROUP_ETHERNET,
> +	CVMX_ERROR_GROUP_MGMT_PORT,
> +	CVMX_ERROR_GROUP_PCI,
> +	CVMX_ERROR_GROUP_SRIO,
> +	CVMX_ERROR_GROUP_USB,
> +	CVMX_ERROR_GROUP_LMC,
> +	CVMX_ERROR_GROUP_ILK,
> +	CVMX_ERROR_GROUP_DFM,
> +	CVMX_ERROR_GROUP_ILA,
> +} cvmx_error_group_t;
> +
> +/**
> + * Flags representing special handling for some error registers.
> + * These flags are passed to cvmx_error_initialize() to control
> + * the handling of bits where the same flags were passed to the
> + * added cvmx_error_info_t.
> + */
> +typedef enum {
> +	CVMX_ERROR_TYPE_NONE = 0,
> +	CVMX_ERROR_TYPE_SBE = 1 << 0,
> +	CVMX_ERROR_TYPE_DBE = 1 << 1,
> +} cvmx_error_type_t;
> +
> +/**
> + * When registering for interest in an error status register, the
> + * type of the register needs to be known by cvmx-error. Most
> + * registers are either IO64 or IO32, but some blocks contain
> + * registers that can't be directly accessed. A good example
> + * would be PCIe extended error state stored in config space.
> + */
> +typedef enum {
> +	__CVMX_ERROR_REGISTER_NONE,
> +	CVMX_ERROR_REGISTER_IO64,
> +	CVMX_ERROR_REGISTER_IO32,
> +	CVMX_ERROR_REGISTER_PCICONFIG,
> +	CVMX_ERROR_REGISTER_SRIOMAINT,
> +} cvmx_error_register_t;
> +
> +struct cvmx_error_info;
> +/**
> + * Error handling functions must have the following prototype.
> + */
> +typedef int (*cvmx_error_func_t)(const struct cvmx_error_info *info);
> +
> +/**
> + * This structure is passed to all error handling functions.
> + */
> +typedef struct cvmx_error_info {
> +	cvmx_error_register_t reg_type;
> +	u64 status_addr;
> +	u64 status_mask;
> +	u64 enable_addr;
> +	u64 enable_mask;
> +	cvmx_error_type_t flags;
> +	cvmx_error_group_t group;
> +	int group_index;
> +	cvmx_error_func_t func;
> +	u64 user_info;
> +	struct {
> +		cvmx_error_register_t reg_type;
> +		u64 status_addr;
> +		u64 status_mask;
> +	} parent;
> +} cvmx_error_info_t;
> +
> +/**
> + * Initialize the error status system. This should be called once
> + * before any other functions are called. This function adds default
> + * handlers for nearly all error events but does not enable them. Later
> + * calls to cvmx_error_enable() are needed.
> + *
> + * @param flags  Optional flags.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_initialize(void);
> +
> +/**
> + * Poll the error status registers and call the appropriate error
> + * handlers. This should be called in the RSL interrupt handler
> + * for your application or operating system.
> + *
> + * @return Number of error handlers called. Zero means this call
> + *         found no errors and was spurious.
> + */
> +int cvmx_error_poll(void);
> +
> +/**
> + * Register to be called when an error status bit is set. Most users
> + * will not need to call this function as cvmx_error_initialize()
> + * registers default handlers for most error conditions. This function
> + * is normally used to add more handlers without changing the existing
> + * handlers.
> + *
> + * @param new_info Information about the handler for an error register. The
> + *                 structure passed is copied and can be destroyed after the
> + *                 call. All members of the structure must be populated, even
> + *                 the parent information.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_add(const cvmx_error_info_t *new_info);
> +
> +/**
> + * Remove all handlers for a status register and mask. Normally
> + * this function should not be called. Instead a new handler should be
> + * installed to replace the existing handler. In the event that all
> + * reporting of an error bit should be removed, then use this
> + * function.
> + *
> + * @param reg_type Type of the status register to remove
> + * @param status_addr
> + *                 Status register to remove.
> + * @param status_mask
> + *                 All handlers for this status register with this mask will
> + *                 be removed.
> + * @param old_info If not NULL, this is filled with information about the
> + *                 handler that was removed.
> + *
> + * @return Zero on success, negative on failure (not found).
> + */
> +int cvmx_error_remove(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
> +		      cvmx_error_info_t *old_info);
> +
> +/**
> + * Change the function and user_info for an existing error status
> + * register. This function should be used to replace the default
> + * handler with an application specific version as needed.
> + *
> + * @param reg_type Type of the status register to change
> + * @param status_addr
> + *                 Status register to change.
> + * @param status_mask
> + *                 All handlers for this status register with this
> + *                 mask will be changed.
> + * @param new_func New function to use to handle the error status
> + * @param new_user_info
> + *                 New user info parameter for the function
> + * @param old_func If not NULL, the old function is returned. Useful
> + *                 for restoring the old handler.
> + * @param old_user_info
> + *                 If not NULL, the old user info parameter.
> + *
> + * @return Zero on success, negative on failure
> + */
> +int cvmx_error_change_handler(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
> +			      cvmx_error_func_t new_func, u64 new_user_info,
> +			      cvmx_error_func_t *old_func, u64 *old_user_info);
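
A short sketch of the replace-and-restore pattern this enables, reusing
the hypothetical my_error_counter from above (status_addr/status_mask
are placeholders):

	cvmx_error_func_t old_func;
	u64 old_user_info;

	/* Swap in an application handler, keeping the default around */
	if (!cvmx_error_change_handler(CVMX_ERROR_REGISTER_IO64,
				       status_addr, status_mask,
				       my_error_counter, 0,
				       &old_func, &old_user_info)) {
		/* ... run with the custom handler ... */

		/* Put the original handler back */
		cvmx_error_change_handler(CVMX_ERROR_REGISTER_IO64,
					  status_addr, status_mask,
					  old_func, old_user_info,
					  NULL, NULL);
	}
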
> +
> +/**
> + * Enable all error registers for a logical group. This should be
> + * called whenever a logical group is brought online.
> + *
> + * @param group  Logical group to enable
> + * @param group_index
> + *               Index for the group as defined in the
> + *               cvmx_error_group_t comments.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +/*
> + * Rather than conditionalize the calls throughout the executive to
> + * not enable interrupts in U-Boot, simply make the enable function
> + * do nothing.
> + */
> +static inline int cvmx_error_enable_group(cvmx_error_group_t group, int group_index)
> +{
> +	return 0;
> +}
> +
> +/**
> + * Disable all error registers for a logical group. This should be
> + * called whenever a logical group is brought offline. Many blocks
> + * will report spurious errors when offline unless this function
> + * is called.
> + *
> + * @param group  Logical group to disable
> + * @param group_index
> + *               Index for the group as defined in the
> + *               cvmx_error_group_t comments.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +/*
> + * Rather than conditionalize the calls throughout the executive to
> + * not disable interrupts in U-Boot, simply make the disable function
> + * do nothing.
> + */
> +static inline int cvmx_error_disable_group(cvmx_error_group_t group, int group_index)
> +{
> +	return 0;
> +}
> +
> +/**
> + * Enable all handlers for a specific status register mask.
> + *
> + * @param reg_type Type of the status register
> + * @param status_addr
> + *                 Status register address
> + * @param status_mask
> + *                 All handlers for this status register with this
> + *                 mask will be enabled.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_enable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
> +
> +/**
> + * Disable all handlers for a specific status register and mask.
> + *
> + * @param reg_type Type of the status register
> + * @param status_addr
> + *                 Status register address
> + * @param status_mask
> + *                 All handlers for this status register with this
> + *                 mask will be disabled.
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_disable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
> +
> +/**
> + * @INTERNAL
> + * Function for processing non-leaf error status registers. This
> + * function calls all handlers for the passed register and all
> + * children linked to it.
> + *
> + * @param info   Error register to check
> + *
> + * @return Number of error status bits found or zero if no bits
> + *         were set.
> + */
> +int __cvmx_error_decode(const cvmx_error_info_t *info);
> +
> +/**
> + * @INTERNAL
> + * This error bit handler simply prints a message and clears the
> + * status bit.
> + *
> + * @param info   Error register to check
> + */
> +int __cvmx_error_display(const cvmx_error_info_t *info);
> +
> +/**
> + * Find the handler for a specific status register
> + *
> + * @param status_addr
> + *                Status register address
> + *
> + * @return  Return the handler on success or NULL on failure.
> + */
> +cvmx_error_info_t *cvmx_error_get_index(u64 status_addr);
> +
> +void __cvmx_install_gmx_error_handler_for_xaui(void);
> +
> +/**
> + * 78xx related
> + */
> +/**
> + * Compare two INTSN values.
> + *
> + * @param key INTSN value to search for
> + * @param data current entry from the searched array
> + *
> + * @return Negative, zero or positive when key is less than, equal
> + *		to or greater than data, respectively.
> + */
> +int cvmx_error_intsn_cmp(const void *key, const void *data);
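
The key/data prototype suggests this is meant as a bsearch() comparator
over an INTSN-sorted table; a sketch under that assumption (the element
layout and table are illustrative only, and <stdlib.h> provides
bsearch()):

	struct intsn_entry {
		u32 intsn;
		/* ... per-entry error data ... */
	};

	static struct intsn_entry table[NUM_ENTRIES];	/* sorted by intsn */

	u32 key = 0x12345;	/* INTSN to look up (placeholder) */
	struct intsn_entry *hit = bsearch(&key, table, NUM_ENTRIES,
					  sizeof(table[0]),
					  cvmx_error_intsn_cmp);
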
> +
> +/**
> + * @INTERNAL
> + *
> + * @param node    Node number
> + * @param intsn   Interrupt source number to display
> + *
> + * @return Zero on success, -1 on error
> + */
> +int cvmx_error_intsn_display_v3(int node, u32 intsn);
> +
> +/**
> + * Initialize the error status system for cn78xx. This should be
> + * called once before any other functions are called. This function
> + * enables the interrupts described in the array.
> + *
> + * @param node Node number
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_initialize_cn78xx(int node);
> +
> +/**
> + * Enable interrupt for a specific INTSN.
> + *
> + * @param node Node number
> + * @param intsn Interrupt source number
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_intsn_enable_v3(int node, u32 intsn);
> +
> +/**
> + * Disable interrupt for a specific INTSN.
> + *
> + * @param node Node number
> + * @param intsn Interrupt source number
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_intsn_disable_v3(int node, u32 intsn);
> +
> +/**
> + * Clear interrupt for a specific INTSN.
> + *
> + * @param node Node number
> + * @param intsn Interrupt source number
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_intsn_clear_v3(int node, u32 intsn);
> +
> +/**
> + * Enable interrupts for a specific CSR (all the bits/intsn in the
> + * csr).
> + *
> + * @param node Node number
> + * @param csr_address CSR address
> + *
> + * @return Zero on success, negative on failure.
> + */
> +int cvmx_error_csr_enable_v3(int node, u64 csr_address);
> +
> +/**
> + * Disable interrupts for a specific CSR (all the bits/intsn in the
> csr).
> + *
> + * @param node Node number
> + * @param csr_address CSR address
> + *
> + * @return Zero
> + */
> +int cvmx_error_csr_disable_v3(int node, u64 csr_address);
> +
> +/**
> + * Enable all error registers for a logical group. This should be
> + * called whenever a logical group is brought online.
> + *
> + * @param group  Logical group to enable
> + * @param xipd_port  The IPD port value
> + *
> + * @return Zero.
> + */
> +int cvmx_error_enable_group_v3(cvmx_error_group_t group, int xipd_port);
> +
> +/**
> + * Disable all error registers for a logical group.
> + *
> + * @param group  Logical group to disable
> + * @param xipd_port  The IPD port value
> + *
> + * @return Zero.
> + */
> +int cvmx_error_disable_group_v3(cvmx_error_group_t group, int xipd_port);
> +
> +/**
> + * Enable all error registers for a specific category in a logical
> group.
> + * This should be called whenever a logical group is brought online.
> + *
> + * @param group  Logical group to enable
> + * @param type   Category in a logical group to enable
> + * @param xipd_port  The IPD port value
> + *
> + * @return Zero.
> + */
> +int cvmx_error_enable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
> +				    int xipd_port);
> +
> +/**
> + * Disable all error registers for a specific category in a logical
> group.
> + * This should be called whenever a logical group is brought offline.
> + *
> + * @param group  Logical group to disable
> + * @param type   Category in a logical group to disable
> + * @param xipd_port  The IPD port value
> + *
> + * @return Zero.
> + */
> +int cvmx_error_disable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
> +				     int xipd_port);
> +
> +/**
> + * Clear all error registers for a logical group.
> + *
> + * @param group  Logical group to clear
> + * @param xipd_port  The IPD port value
> + *
> + * @return Zero.
> + */
> +int cvmx_error_clear_group_v3(cvmx_error_group_t group, int xipd_port);
> +
> +/**
> + * Enable all error registers for a particular category.
> + *
> + * @param node  CCPI node
> + * @param type  category to enable
> + *
> + * @return Zero.
> + */
> +int cvmx_error_enable_type_v3(int node, cvmx_error_type_t type);
> +
> +/**
> + * Disable all error registers for a particular category.
> + *
> + * @param node  CCPI node
> + * @param type  category to disable
> + *
> + * @return Zero.
> + */
> +int cvmx_error_disable_type_v3(int node, cvmx_error_type_t type);
> +
> +void cvmx_octeon_hang(void) __attribute__((__noreturn__));
> +
> +/**
> + * @INTERNAL
> + *
> + * Process L2C single and multi-bit ECC errors
> + *
> + */
> +int __cvmx_cn7xxx_l2c_l2d_ecc_error_display(int node, int intsn);
> +
> +/**
> + * Handle L2 cache TAG ECC errors and noway errors
> + *
> + * @param	node	CCPI node
> + * @param	intsn	intsn from error array.
> + * @param	remote	true for remote node (cn78xx only)
> + *
> + * @return	1 if handled, 0 if not handled
> + */
> +int __cvmx_cn7xxx_l2c_tag_error_display(int node, int intsn, bool remote);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
> new file mode 100644
> index 000000000000..297fb3f4a28c
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
> @@ -0,0 +1,217 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Free Pool Allocator.
> + */
> +
> +#ifndef __CVMX_FPA_H__
> +#define __CVMX_FPA_H__
> +
> +#include "cvmx-scratch.h"
> +#include "cvmx-fpa-defs.h"
> +#include "cvmx-fpa1.h"
> +#include "cvmx-fpa3.h"
> +
> +#define CVMX_FPA_MIN_BLOCK_SIZE 128
> +#define CVMX_FPA_ALIGNMENT	128
> +#define CVMX_FPA_POOL_NAME_LEN	16
> +
> +/* On CN78XX in backward-compatible mode, pool is mapped to AURA */
> +#define CVMX_FPA_NUM_POOLS						\
> +	(octeon_has_feature(OCTEON_FEATURE_FPA3) ? cvmx_fpa3_num_auras() : CVMX_FPA1_NUM_POOLS)
> +
> +/**
> + * Structure to store FPA pool configuration parameters.
> + */
> +struct cvmx_fpa_pool_config {
> +	s64 pool_num;
> +	u64 buffer_size;
> +	u64 buffer_count;
> +};
> +
> +typedef struct cvmx_fpa_pool_config cvmx_fpa_pool_config_t;
> +
> +/**
> + * Return the name of the pool
> + *
> + * @param pool_num   Pool to get the name of
> + * @return The name
> + */
> +const char *cvmx_fpa_get_name(int pool_num);
> +
> +/**
> + * Initialize FPA per node
> + */
> +int cvmx_fpa_global_init_node(int node);
> +
> +/**
> + * Enable the FPA
> + */
> +static inline void cvmx_fpa_enable(void)
> +{
> +	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		cvmx_fpa1_enable();
> +	else
> +		cvmx_fpa_global_init_node(cvmx_get_node_num());
> +}
> +
> +/**
> + * Disable the FPA
> + */
> +static inline void cvmx_fpa_disable(void)
> +{
> +	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		cvmx_fpa1_disable();
> +	/* FPA3 does not have a disable function */
> +}
> +
> +/**
> + * @INTERNAL
> + * @deprecated OBSOLETE
> + *
> + * Kept for transition assistance only
> + */
> +static inline void cvmx_fpa_global_initialize(void)
> +{
> +	cvmx_fpa_global_init_node(cvmx_get_node_num());
> +}
> +
> +/**
> + * @INTERNAL
> + *
> + * Convert FPA1 style POOL into FPA3 AURA in
> + * backward compatibility mode.
> + */
> +static inline cvmx_fpa3_gaura_t cvmx_fpa1_pool_to_fpa3_aura(cvmx_fpa1_pool_t pool)
> +{
> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
> +		unsigned int node = cvmx_get_node_num();
> +		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(node, pool);
> +		return aura;
> +	}
> +	return CVMX_FPA3_INVALID_GAURA;
> +}
> +
> +/**
> + * Get a new block from the FPA
> + *
> + * @param pool   Pool to get the block from
> + * @return Pointer to the block or NULL on failure
> + */
> +static inline void *cvmx_fpa_alloc(u64 pool)
> +{
> +	/* FPA3 is handled differently */
> +	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		return cvmx_fpa3_alloc(cvmx_fpa1_pool_to_fpa3_aura(pool));
> +	else
> +		return cvmx_fpa1_alloc(pool);
> +}
> +
> +/**
> + * Asynchronously get a new block from the FPA
> + *
> + * The result of cvmx_fpa_async_alloc() may be retrieved using
> + * cvmx_fpa_async_alloc_finish().
> + *
> + * @param scr_addr Local scratch address to put response in. This is
> + *		   a byte address but must be 8 byte aligned.
> + * @param pool      Pool to get the block from
> + */
> +static inline void cvmx_fpa_async_alloc(u64 scr_addr, u64 pool)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		cvmx_fpa3_async_alloc(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
> +	else
> +		cvmx_fpa1_async_alloc(scr_addr, pool);
> +}
> +
> +/**
> + * Retrieve the result of cvmx_fpa_async_alloc
> + *
> + * @param scr_addr The local scratch address. Must be the same value
> + * passed to cvmx_fpa_async_alloc().
> + *
> + * @param pool Pool the block came from. Must be the same value
> + * passed to cvmx_fpa_async_alloc().
> + *
> + * @return Pointer to the block or NULL on failure
> + */
> +static inline void *cvmx_fpa_async_alloc_finish(u64 scr_addr, u64 pool)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		return cvmx_fpa3_async_alloc_finish(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
> +	else
> +		return cvmx_fpa1_async_alloc_finish(scr_addr, pool);
> +}
> +
> +/**
> + * Free a block allocated with a FPA pool.
> + * Does NOT provide memory ordering in cases where the memory block
> + * was modified by the core.
> + *
> + * @param ptr    Block to free
> + * @param pool   Pool to put it in
> + * @param num_cache_lines
> + *               Cache lines to invalidate
> + */
> +static inline void cvmx_fpa_free_nosync(void *ptr, u64 pool, u64 num_cache_lines)
> +{
> +	/* FPA3 is handled differently */
> +	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		cvmx_fpa3_free_nosync(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
> +	else
> +		cvmx_fpa1_free_nosync(ptr, pool, num_cache_lines);
> +}
> +
> +/**
> + * Free a block allocated with a FPA pool.  Provides required memory
> + * ordering in cases where memory block was modified by core.
> + *
> + * @param ptr    Block to free
> + * @param pool   Pool to put it in
> + * @param num_cache_lines
> + *               Cache lines to invalidate
> + */
> +static inline void cvmx_fpa_free(void *ptr, u64 pool, u64 num_cache_lines)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
> +		cvmx_fpa3_free(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
> +	else
> +		cvmx_fpa1_free(ptr, pool, num_cache_lines);
> +}
> +
> +/**
> + * Setup a FPA pool to control a new block of memory.
> + * This can only be called once per pool. Make sure proper
> + * locking enforces this.
> + *
> + * @param pool       Pool to initialize
> + * @param name       Constant character string to name this pool.
> + *                   String is not copied.
> + * @param buffer     Pointer to the block of memory to use. This must
> + *                   be accessible by all processors and external
> + *                   hardware.
> + * @param block_size Size for each block controlled by the FPA
> + * @param num_blocks Number of blocks
> + *
> + * @return the pool number on Success,
> + *         -1 on failure
> + */
> +int cvmx_fpa_setup_pool(int pool, const char *name, void *buffer, u64 block_size, u64 num_blocks);
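
A setup sketch; the bootmem backing and the use of -1 to let the driver
pick a free pool (mirroring cvmx_fpa1_reserve_pool() semantics) are
assumptions for illustration:

	#define BUF_SIZE 2048
	#define NUM_BUFS 128

	void *base = cvmx_bootmem_alloc(BUF_SIZE * NUM_BUFS,
					CVMX_FPA_ALIGNMENT);
	int pool = -1;

	if (base)
		pool = cvmx_fpa_setup_pool(-1, "pkt-bufs", base,
					   BUF_SIZE, NUM_BUFS);
	if (pool < 0)
		printf("FPA pool setup failed\n");
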
> +
> +int cvmx_fpa_shutdown_pool(int pool);
> +
> +/**
> + * Gets the block size of buffers in the specified pool
> + * @param pool	 Pool to get the block size from
> + * @return       Size of buffers in the specified pool
> + */
> +unsigned int cvmx_fpa_get_block_size(int pool);
> +
> +int cvmx_fpa_is_pool_available(int pool_num);
> +u64 cvmx_fpa_get_pool_owner(int pool_num);
> +int cvmx_fpa_get_max_pools(void);
> +int cvmx_fpa_get_current_count(int pool_num);
> +int cvmx_fpa_validate_pool(int pool);
> +
> +#endif /*  __CVM_FPA_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
> new file mode 100644
> index 000000000000..6985083a5d66
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
> @@ -0,0 +1,196 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Free Pool Allocator on Octeon chips.
> + * These are the legacy models, i.e. prior to CN78XX/CN76XX.
> + */
> +
> +#ifndef __CVMX_FPA1_HW_H__
> +#define __CVMX_FPA1_HW_H__
> +
> +#include "cvmx-scratch.h"
> +#include "cvmx-fpa-defs.h"
> +#include "cvmx-fpa3.h"
> +
> +/* Legacy pool range is 0..7, plus pool 8 on CN68XX */
> +typedef int cvmx_fpa1_pool_t;
> +
> +#define CVMX_FPA1_NUM_POOLS    8
> +#define CVMX_FPA1_INVALID_POOL ((cvmx_fpa1_pool_t)-1)
> +#define CVMX_FPA1_NAME_SIZE    16
> +
> +/**
> + * Structure describing the data format used for stores to the FPA.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 scraddr : 8;
> +		u64 len : 8;
> +		u64 did : 8;
> +		u64 addr : 40;
> +	} s;
> +} cvmx_fpa1_iobdma_data_t;
> +
> +/*
> + * Allocate or reserve the specified fpa pool.
> + *
> + * @param pool	  FPA pool to allocate/reserve. If -1 it
> + *                finds an empty pool to allocate.
> + * @return        Allocated pool number or CVMX_FPA1_INVALID_POOL
> + *                if it fails to allocate the pool
> + */
> +cvmx_fpa1_pool_t cvmx_fpa1_reserve_pool(cvmx_fpa1_pool_t pool);
> +
> +/**
> + * Free the specified fpa pool.
> + * @param pool	   Pool to free
> + * @return         0 on success, -1 on failure
> + */
> +int cvmx_fpa1_release_pool(cvmx_fpa1_pool_t pool);
> +
> +static inline void cvmx_fpa1_free(void *ptr, cvmx_fpa1_pool_t pool, u64 num_cache_lines)
> +{
> +	cvmx_addr_t newptr;
> +
> +	newptr.u64 = cvmx_ptr_to_phys(ptr);
> +	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
> +	/* Make sure that any previous writes to memory go out before we
> +	 * free this buffer. This also serves as a barrier to prevent GCC
> +	 * from reordering operations to after the free.
> +	 */
> +	CVMX_SYNCWS;
> +	/* value written is number of cache lines not written back */
> +	cvmx_write_io(newptr.u64, num_cache_lines);
> +}
> +
> +static inline void cvmx_fpa1_free_nosync(void *ptr, cvmx_fpa1_pool_t pool,
> +					 unsigned int num_cache_lines)
> +{
> +	cvmx_addr_t newptr;
> +
> +	newptr.u64 = cvmx_ptr_to_phys(ptr);
> +	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
> +	/* Prevent GCC from reordering around free */
> +	asm volatile("" : : : "memory");
> +	/* value written is number of cache lines not written back */
> +	cvmx_write_io(newptr.u64, num_cache_lines);
> +}
> +
> +/**
> + * Enable the FPA for use. Must be performed after any CSR
> + * configuration but before any other FPA functions.
> + */
> +static inline void cvmx_fpa1_enable(void)
> +{
> +	cvmx_fpa_ctl_status_t status;
> +
> +	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
> +	if (status.s.enb) {
> +		/*
> +		 * CN68XXP1 should not reset the FPA (doing so may break
> +		 * the SSO), so we may end up enabling it more than once.
> +		 * Just return and don't spew messages.
> +		 */
> +
> +	status.u64 = 0;
> +	status.s.enb = 1;
> +	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
> +}
> +
> +/**
> + * Reset FPA to disable. Make sure buffers from all FPA pools are
> freed
> + * before disabling FPA.
> + */
> +static inline void cvmx_fpa1_disable(void)
> +{
> +	cvmx_fpa_ctl_status_t status;
> +
> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
> +		return;
> +
> +	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
> +	status.s.reset = 1;
> +	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
> +}
> +
> +static inline void *cvmx_fpa1_alloc(cvmx_fpa1_pool_t pool)
> +{
> +	u64 address;
> +
> +	for (;;) {
> +		address = csr_rd(CVMX_ADDR_DID(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool)));
> +		if (cvmx_likely(address))
> +			return cvmx_phys_to_ptr(address);
> +
> +		if (csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool)) > 0)
> +			udelay(50);
> +		else
> +			return NULL;
> +	}
> +}
> +
> +/**
> + * Asynchronously get a new block from the FPA
> + * @INTERNAL
> + *
> + * The result of cvmx_fpa_async_alloc() may be retrieved using
> + * cvmx_fpa_async_alloc_finish().
> + *
> + * @param scr_addr Local scratch address to put response in. This is
> + *		   a byte address but must be 8 byte aligned.
> + * @param pool      Pool to get the block from
> + */
> +static inline void cvmx_fpa1_async_alloc(u64 scr_addr, cvmx_fpa1_pool_t pool)
> +{
> +	cvmx_fpa1_iobdma_data_t data;
> +
> +	/* Hardware only uses 64-bit aligned locations, so convert from
> +	 * byte address to 64-bit index
> +	 */
> +	data.u64 = 0ull;
> +	data.s.scraddr = scr_addr >> 3;
> +	data.s.len = 1;
> +	data.s.did = CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool);
> +	data.s.addr = 0;
> +
> +	cvmx_scratch_write64(scr_addr, 0ull);
> +	CVMX_SYNCW;
> +	cvmx_send_single(data.u64);
> +}
> +
> +/**
> + * Retrieve the result of cvmx_fpa_async_alloc
> + * @INTERNAL
> + *
> + * @param scr_addr The local scratch address. Must be the same value
> + * passed to cvmx_fpa_async_alloc().
> + *
> + * @param pool Pool the block came from.  Must be the same value
> + * passed to cvmx_fpa_async_alloc.
> + *
> + * @return Pointer to the block or NULL on failure
> + */
> +static inline void *cvmx_fpa1_async_alloc_finish(u64 scr_addr, cvmx_fpa1_pool_t pool)
> +{
> +	u64 address;
> +
> +	CVMX_SYNCIOBDMA;
> +
> +	address = cvmx_scratch_read64(scr_addr);
> +	if (cvmx_likely(address))
> +		return cvmx_phys_to_ptr(address);
> +	else
> +		return cvmx_fpa1_alloc(pool);
> +}
> +
> +static inline u64 cvmx_fpa1_get_available(cvmx_fpa1_pool_t pool)
> +{
> +	return csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool));
> +}
> +
> +#endif /* __CVMX_FPA1_HW_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
> new file mode 100644
> index 000000000000..229982b83163
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
> @@ -0,0 +1,566 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the CN78XX Free Pool Allocator, a.k.a. FPA3
> + */
> +
> +#include "cvmx-address.h"
> +#include "cvmx-fpa-defs.h"
> +#include "cvmx-scratch.h"
> +
> +#ifndef __CVMX_FPA3_H__
> +#define __CVMX_FPA3_H__
> +
> +typedef struct {
> +	unsigned res0 : 6;
> +	unsigned node : 2;
> +	unsigned res1 : 2;
> +	unsigned lpool : 6;
> +	unsigned valid_magic : 16;
> +} cvmx_fpa3_pool_t;
> +
> +typedef struct {
> +	unsigned res0 : 6;
> +	unsigned node : 2;
> +	unsigned res1 : 6;
> +	unsigned laura : 10;
> +	unsigned valid_magic : 16;
> +} cvmx_fpa3_gaura_t;
> +
> +#define CVMX_FPA3_VALID_MAGIC	0xf9a3
> +#define CVMX_FPA3_INVALID_GAURA ((cvmx_fpa3_gaura_t){ 0, 0, 0, 0, 0 })
> +#define CVMX_FPA3_INVALID_POOL	((cvmx_fpa3_pool_t){ 0, 0, 0, 0, 0 })
> +
> +static inline bool __cvmx_fpa3_aura_valid(cvmx_fpa3_gaura_t aura)
> +{
> +	if (aura.valid_magic != CVMX_FPA3_VALID_MAGIC)
> +		return false;
> +	return true;
> +}
> +
> +static inline bool __cvmx_fpa3_pool_valid(cvmx_fpa3_pool_t pool)
> +{
> +	if (pool.valid_magic != CVMX_FPA3_VALID_MAGIC)
> +		return false;
> +	return true;
> +}
> +
> +static inline cvmx_fpa3_gaura_t __cvmx_fpa3_gaura(int node, int laura)
> +{
> +	cvmx_fpa3_gaura_t aura;
> +
> +	if (node < 0)
> +		node = cvmx_get_node_num();
> +	if (laura < 0)
> +		return CVMX_FPA3_INVALID_GAURA;
> +
> +	aura.node = node;
> +	aura.laura = laura;
> +	aura.valid_magic = CVMX_FPA3_VALID_MAGIC;
> +	return aura;
> +}
> +
> +static inline cvmx_fpa3_pool_t __cvmx_fpa3_pool(int node, int lpool)
> +{
> +	cvmx_fpa3_pool_t pool;
> +
> +	if (node < 0)
> +		node = cvmx_get_node_num();
> +	if (lpool < 0)
> +		return CVMX_FPA3_INVALID_POOL;
> +
> +	pool.node = node;
> +	pool.lpool = lpool;
> +	pool.valid_magic = CVMX_FPA3_VALID_MAGIC;
> +	return pool;
> +}
> +
> +#undef CVMX_FPA3_VALID_MAGIC
> +
> +/**
> + * Structure describing the data format used for stores to the FPA.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 scraddr : 8;
> +		u64 len : 8;
> +		u64 did : 8;
> +		u64 addr : 40;
> +	} s;
> +	struct {
> +		u64 scraddr : 8;
> +		u64 len : 8;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 red : 1;
> +		u64 reserved2 : 9;
> +		u64 aura : 10;
> +		u64 reserved3 : 16;
> +	} cn78xx;
> +} cvmx_fpa3_iobdma_data_t;
> +
> +/**
> + * Struct describing load allocate operation addresses for FPA pool.
> + */
> +union cvmx_fpa3_load_data {
> +	u64 u64;
> +	struct {
> +		u64 seg : 2;
> +		u64 reserved1 : 13;
> +		u64 io : 1;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 red : 1;
> +		u64 reserved2 : 9;
> +		u64 aura : 10;
> +		u64 reserved3 : 16;
> +	};
> +};
> +
> +typedef union cvmx_fpa3_load_data cvmx_fpa3_load_data_t;
> +
> +/**
> + * Struct describing store free operation addresses from FPA pool.
> + */
> +union cvmx_fpa3_store_addr {
> +	u64 u64;
> +	struct {
> +		u64 seg : 2;
> +		u64 reserved1 : 13;
> +		u64 io : 1;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 reserved2 : 10;
> +		u64 aura : 10;
> +		u64 fabs : 1;
> +		u64 reserved3 : 3;
> +		u64 dwb_count : 9;
> +		u64 reserved4 : 3;
> +	};
> +};
> +
> +typedef union cvmx_fpa3_store_addr cvmx_fpa3_store_addr_t;
> +
> +enum cvmx_fpa3_pool_alignment_e {
> +	FPA_NATURAL_ALIGNMENT,
> +	FPA_OFFSET_ALIGNMENT,
> +	FPA_OPAQUE_ALIGNMENT
> +};
> +
> +#define CVMX_FPA3_AURAX_LIMIT_MAX ((1ull << 40) - 1)
> +
> +/**
> + * @INTERNAL
> + * Accessor function to return the number of POOLs in an FPA3,
> + * depending on the SoC model.
> + * The number is per-node for models supporting multi-node
> configurations.
> + */
> +static inline int cvmx_fpa3_num_pools(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 64;
> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
> +		return 32;
> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
> +		return 32;
> +	printf("ERROR: %s: Unknowm model\n", __func__);
> +	return -1;
> +}
> +
> +/**
> + * @INTERNAL
> + * Accessor function to return the number of AURAs in an FPA3,
> + * depending on the SoC model.
> + * The number is per-node for models supporting multi-node
> configurations.
> + */
> +static inline int cvmx_fpa3_num_auras(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 1024;
> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
> +		return 512;
> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
> +		return 512;
> +	printf("ERROR: %s: Unknowm model\n", __func__);
> +	return -1;
> +}
> +
> +/**
> + * Get the FPA3 POOL underneath an FPA3 AURA, containing all its
> + * buffers
> + *
> + */
> +static inline cvmx_fpa3_pool_t cvmx_fpa3_aura_to_pool(cvmx_fpa3_gaura_t aura)
> +{
> +	cvmx_fpa3_pool_t pool;
> +	cvmx_fpa_aurax_pool_t aurax_pool;
> +
> +	aurax_pool.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_POOL(aura.laura));
> +
> +	pool = __cvmx_fpa3_pool(aura.node, aurax_pool.s.pool);
> +	return pool;
> +}
> +
> +/**
> + * Get a new block from the FPA pool
> + *
> + * @param aura  - aura number
> + * @return pointer to the block or NULL on failure
> + */
> +static inline void *cvmx_fpa3_alloc(cvmx_fpa3_gaura_t aura)
> +{
> +	u64 address;
> +	cvmx_fpa3_load_data_t load_addr;
> +
> +	load_addr.u64 = 0;
> +	load_addr.seg = CVMX_MIPS_SPACE_XKPHYS;
> +	load_addr.io = 1;
> +	load_addr.did = 0x29; /* Device ID. Indicates FPA. */
> +	load_addr.node = aura.node;
> +	load_addr.red = 0; /* Perform RED on allocation.
> +				  * FIXME to use config option
> +				  */
> +	load_addr.aura = aura.laura;
> +
> +	address = cvmx_read64_uint64(load_addr.u64);
> +	if (!address)
> +		return NULL;
> +	return cvmx_phys_to_ptr(address);
> +}
> +
> +/**
> + * Asynchronously get a new block from the FPA
> + *
> + * The result of cvmx_fpa_async_alloc() may be retrieved using
> + * cvmx_fpa_async_alloc_finish().
> + *
> + * @param scr_addr Local scratch address to put response in. This is
> + *		   a byte address but must be 8 byte aligned.
> + * @param aura     Global aura to get the block from
> + */
> +static inline void cvmx_fpa3_async_alloc(u64 scr_addr, cvmx_fpa3_gaura_t aura)
> +{
> +	cvmx_fpa3_iobdma_data_t data;
> +
> +	/* Hardware only uses 64-bit aligned locations, so convert from
> +	 * byte address to 64-bit index
> +	 */
> +	data.u64 = 0ull;
> +	data.cn78xx.scraddr = scr_addr >> 3;
> +	data.cn78xx.len = 1;
> +	data.cn78xx.did = 0x29;
> +	data.cn78xx.node = aura.node;
> +	data.cn78xx.aura = aura.laura;
> +	cvmx_scratch_write64(scr_addr, 0ull);
> +
> +	CVMX_SYNCW;
> +	cvmx_send_single(data.u64);
> +}
> +
> +/**
> + * Retrieve the result of cvmx_fpa3_async_alloc
> + *
> + * @param scr_addr The local scratch address. Must be the same value
> + * passed to cvmx_fpa_async_alloc().
> + *
> + * @param aura Global aura the block came from.  Must be the same
> value
> + * passed to cvmx_fpa_async_alloc.
> + *
> + * @return Pointer to the block or NULL on failure
> + */
> +static inline void *cvmx_fpa3_async_alloc_finish(u64 scr_addr, cvmx_fpa3_gaura_t aura)
> +{
> +	u64 address;
> +
> +	CVMX_SYNCIOBDMA;
> +
> +	address = cvmx_scratch_read64(scr_addr);
> +	if (cvmx_likely(address))
> +		return cvmx_phys_to_ptr(address);
> +	else
> +		/* Try regular alloc if async failed */
> +		return cvmx_fpa3_alloc(aura);
> +}
> +
> +/**
> + * Free a pointer back to the pool.
> + *
> + * @param aura   global aura number
> + * @param ptr    pointer to the block to free
> + * @param num_cache_lines Cache lines to invalidate
> + */
> +static inline void cvmx_fpa3_free(void *ptr, cvmx_fpa3_gaura_t aura, unsigned int num_cache_lines)
> +{
> +	cvmx_fpa3_store_addr_t newptr;
> +	cvmx_addr_t newdata;
> +
> +	newdata.u64 = cvmx_ptr_to_phys(ptr);
> +
> +	/* Make sure that any previous writes to memory go out before we
> +	 * free this buffer. This also serves as a barrier to prevent GCC
> +	 * from reordering operations to after the free.
> +	 */
> +	CVMX_SYNCWS;
> +
> +	newptr.u64 = 0;
> +	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
> +	newptr.io = 1;
> +	newptr.did = 0x29; /* Device ID, indicates FPA */
> +	newptr.node = aura.node;
> +	newptr.aura = aura.laura;
> +	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
> +	newptr.dwb_count = num_cache_lines;
> +
> +	cvmx_write_io(newptr.u64, newdata.u64);
> +}
> +
> +/**
> + * Free a pointer back to the pool without flushing the write buffer.
> + *
> + * @param aura   global aura number
> + * @param ptr    pointer to the block to free
> + * @param num_cache_lines Cache lines to invalidate
> + */
> +static inline void cvmx_fpa3_free_nosync(void *ptr, cvmx_fpa3_gaura_t aura,
> +					 unsigned int num_cache_lines)
> +{
> +	cvmx_fpa3_store_addr_t newptr;
> +	cvmx_addr_t newdata;
> +
> +	newdata.u64 = cvmx_ptr_to_phys(ptr);
> +
> +	/* Prevent GCC from reordering writes to (*ptr) */
> +	asm volatile("" : : : "memory");
> +
> +	newptr.u64 = 0;
> +	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
> +	newptr.io = 1;
> +	newptr.did = 0x29; /* Device ID, indicates FPA */
> +	newptr.node = aura.node;
> +	newptr.aura = aura.laura;
> +	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
> +	newptr.dwb_count = num_cache_lines;
> +
> +	cvmx_write_io(newptr.u64, newdata.u64);
> +}
> +
> +static inline int cvmx_fpa3_pool_is_enabled(cvmx_fpa3_pool_t pool)
> +{
> +	cvmx_fpa_poolx_cfg_t pool_cfg;
> +
> +	if (!__cvmx_fpa3_pool_valid(pool))
> +		return -1;
> +
> +	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
> +	return pool_cfg.cn78xx.ena;
> +}
> +
> +static inline int cvmx_fpa3_config_red_params(unsigned int node, int qos_avg_en,
> +					      int red_lvl_dly, int avg_dly)
> +{
> +	cvmx_fpa_gen_cfg_t fpa_cfg;
> +	cvmx_fpa_red_delay_t red_delay;
> +
> +	fpa_cfg.u64 = cvmx_read_csr_node(node, CVMX_FPA_GEN_CFG);
> +	fpa_cfg.s.avg_en = qos_avg_en;
> +	fpa_cfg.s.lvl_dly = red_lvl_dly;
> +	cvmx_write_csr_node(node, CVMX_FPA_GEN_CFG, fpa_cfg.u64);
> +
> +	red_delay.u64 = cvmx_read_csr_node(node, CVMX_FPA_RED_DELAY);
> +	red_delay.s.avg_dly = avg_dly;
> +	cvmx_write_csr_node(node, CVMX_FPA_RED_DELAY, red_delay.u64);
> +	return 0;
> +}
> +
> +/**
> + * Gets the buffer size of the pool underlying the specified aura.
> + *
> + * @param aura Global aura number
> + * @return Returns the size of the buffers in the underlying pool.
> + */
> +static inline int cvmx_fpa3_get_aura_buf_size(cvmx_fpa3_gaura_t aura)
> +{
> +	cvmx_fpa3_pool_t pool;
> +	cvmx_fpa_poolx_cfg_t pool_cfg;
> +	int block_size;
> +
> +	pool = cvmx_fpa3_aura_to_pool(aura);
> +
> +	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
> +	block_size = pool_cfg.cn78xx.buf_size << 7;
> +	return block_size;
> +}
> +
> +/**
> + * Return the number of available buffers in an AURA
> + *
> + * @param aura   Aura to receive the count for
> + * @return available buffer count
> + */
> +static inline long long cvmx_fpa3_get_available(cvmx_fpa3_gaura_t aura)
> +{
> +	cvmx_fpa3_pool_t pool;
> +	cvmx_fpa_poolx_available_t avail_reg;
> +	cvmx_fpa_aurax_cnt_t cnt_reg;
> +	cvmx_fpa_aurax_cnt_limit_t limit_reg;
> +	long long ret;
> +
> +	pool = cvmx_fpa3_aura_to_pool(aura);
> +
> +	/* Get POOL available buffer count */
> +	avail_reg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_AVAILABLE(pool.lpool));
> +
> +	/* Get AURA current available count */
> +	cnt_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT(aura.laura));
> +	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
> +
> +	if (limit_reg.cn78xx.limit < cnt_reg.cn78xx.cnt)
> +		return 0;
> +
> +	/* Calculate AURA-based buffer allowance */
> +	ret = limit_reg.cn78xx.limit - cnt_reg.cn78xx.cnt;
> +
> +	/* Use POOL real buffer availability when less than allowance */
> +	if (ret > (long long)avail_reg.cn78xx.count)
> +		ret = avail_reg.cn78xx.count;
> +
> +	return ret;
> +}
> +
> +/**
> + * Configure the QoS parameters of an FPA3 AURA
> + *
> + * @param aura is the FPA3 AURA handle
> + * @param ena_red enables random early discard when the outstanding
> + *	count exceeds 'pass_thresh'
> + * @param pass_thresh is the maximum count to invoke flow control
> + * @param drop_thresh is the count threshold to begin dropping packets
> + * @param ena_bp enables backpressure when the outstanding count
> + *	exceeds 'bp_thresh'
> + * @param bp_thresh is the back-pressure threshold
> + */
> +static inline void cvmx_fpa3_setup_aura_qos(cvmx_fpa3_gaura_t aura, bool ena_red,
> +					    u64 pass_thresh, u64 drop_thresh,
> +					    bool ena_bp, u64 bp_thresh)
> +{
> +	unsigned int shift = 0;
> +	u64 shift_thresh;
> +	cvmx_fpa_aurax_cnt_limit_t limit_reg;
> +	cvmx_fpa_aurax_cnt_levels_t aura_level;
> +
> +	if (!__cvmx_fpa3_aura_valid(aura))
> +		return;
> +
> +	/* Get AURAX count limit for validation */
> +	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
> +
> +	if (pass_thresh < 256)
> +		pass_thresh = 255;
> +
> +	if (drop_thresh <= pass_thresh || drop_thresh > limit_reg.cn78xx.limit)
> +		drop_thresh = limit_reg.cn78xx.limit;
> +
> +	if (bp_thresh < 256 || bp_thresh > limit_reg.cn78xx.limit)
> +		bp_thresh = limit_reg.cn78xx.limit >> 1;
> +
> +	shift_thresh = (bp_thresh > drop_thresh) ? bp_thresh : drop_thresh;
> +
> +	/* Calculate shift so that the largest threshold fits in 8 bits */
> +	for (shift = 0; shift < (1 << 6); shift++) {
> +		if (0 == ((shift_thresh >> shift) & ~0xffull))
> +			break;
> +	}
> +
> +	aura_level.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura));
> +	aura_level.s.pass = pass_thresh >> shift;
> +	aura_level.s.drop = drop_thresh >> shift;
> +	aura_level.s.bp = bp_thresh >> shift;
> +	aura_level.s.shift = shift;
> +	aura_level.s.red_ena = ena_red;
> +	aura_level.s.bp_ena = ena_bp;
> +	cvmx_write_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura), aura_level.u64);
> +}
> +
> +cvmx_fpa3_gaura_t cvmx_fpa3_reserve_aura(int node, int desired_aura_num);
> +int cvmx_fpa3_release_aura(cvmx_fpa3_gaura_t aura);
> +cvmx_fpa3_pool_t cvmx_fpa3_reserve_pool(int node, int desired_pool_num);
> +int cvmx_fpa3_release_pool(cvmx_fpa3_pool_t pool);
> +int cvmx_fpa3_is_aura_available(int node, int aura_num);
> +int cvmx_fpa3_is_pool_available(int node, int pool_num);
> +
> +cvmx_fpa3_pool_t cvmx_fpa3_setup_fill_pool(int node, int desired_pool, const char *name,
> +					   unsigned int block_size, unsigned int num_blocks,
> +					   void *buffer);
> +
> +/**
> + * Function to attach an aura to an existing pool
> + *
> + * @param pool - configured pool to attach aura to
> + * @param desired_aura - aura to use, set to -1 to allocate
> + * @param name - name to register
> + * @param block_size - size of buffers to use
> + * @param num_blocks - number of blocks to allocate
> + *
> + * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on
> + * failure
> + */
> +cvmx_fpa3_gaura_t cvmx_fpa3_set_aura_for_pool(cvmx_fpa3_pool_t pool, int desired_aura,
> +					      const char *name, unsigned int block_size,
> +					      unsigned int num_blocks);
> +
> +/**
> + * Function to setup and initialize a pool.
> + *
> + * @param node - configure fpa on this node
> + * @param desired_aura - aura to use, -1 for dynamic allocation
> + * @param name - name to register
> + * @param buffer - block of memory backing the pool
> + * @param block_size - size of buffers in pool
> + * @param num_blocks - max number of buffers allowed
> + */
> +cvmx_fpa3_gaura_t cvmx_fpa3_setup_aura_and_pool(int node, int desired_aura, const char *name,
> +						void *buffer, unsigned int block_size,
> +						unsigned int num_blocks);
> +
> +int cvmx_fpa3_shutdown_aura_and_pool(cvmx_fpa3_gaura_t aura);
> +int cvmx_fpa3_shutdown_aura(cvmx_fpa3_gaura_t aura);
> +int cvmx_fpa3_shutdown_pool(cvmx_fpa3_pool_t pool);
> +const char *cvmx_fpa3_get_pool_name(cvmx_fpa3_pool_t pool);
> +int cvmx_fpa3_get_pool_buf_size(cvmx_fpa3_pool_t pool);
> +const char *cvmx_fpa3_get_aura_name(cvmx_fpa3_gaura_t aura);
> +
> +/* FIXME: Need a different macro for stage2 of U-Boot */
> +
> +static inline void cvmx_fpa3_stage2_init(int aura, int pool, u64 stack_paddr, int stacklen,
> +					 int buffer_sz, int buf_cnt)
> +{
> +	cvmx_fpa_poolx_cfg_t pool_cfg;
> +
> +	/* Configure pool stack */
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), stack_paddr);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), stack_paddr);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), stack_paddr + stacklen);
> +
> +	/* Configure pool with buffer size */
> +	pool_cfg.u64 = 0;
> +	pool_cfg.cn78xx.nat_align = 1;
> +	pool_cfg.cn78xx.buf_size = buffer_sz >> 7;
> +	pool_cfg.cn78xx.l_type = 0x2;
> +	pool_cfg.cn78xx.ena = 0;
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
> +	/* Reset pool before starting */
> +	pool_cfg.cn78xx.ena = 1;
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
> +
> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CFG(aura), 0);
> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CNT_ADD(aura), buf_cnt);
> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), (u64)pool);
> +}
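
A bring-up/tear-down sketch for the stage-2 helpers; the address and
stack length are placeholders (the stack must be large enough for
buf_cnt buffer pointers, its exact sizing is not derived here):

	u64 stack_paddr = 0x20000000ull;	/* placeholder */
	int stacklen = 4096;			/* assumed sufficient */

	cvmx_fpa3_stage2_init(/* aura */ 0, /* pool */ 0, stack_paddr,
			      stacklen, /* buffer_sz */ 2048,
			      /* buf_cnt */ 64);
	/* ... stage 2 runs ... */
	cvmx_fpa3_stage2_disable(0, 0);
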
> +
> +static inline void cvmx_fpa3_stage2_disable(int aura, int pool)
> +{
> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), 0);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), 0);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), 0);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), 0);
> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), 0);
> +}
> +
> +#endif /* __CVMX_FPA3_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
> new file mode 100644
> index 000000000000..28c32ddbe17a
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
> @@ -0,0 +1,213 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef _CVMX_GLOBAL_RESOURCES_T_
> +#define _CVMX_GLOBAL_RESOURCES_T_
> +
> +#define CVMX_GLOBAL_RESOURCES_DATA_NAME "cvmx-global-resources"
> +
> +/* In the macros below the abbreviation GR stands for global resources. */
> +#define CVMX_GR_TAG_INVALID                                                                        \
> +	cvmx_get_gr_tag('i', 'n', 'v', 'a', 'l', 'i', 'd', '.', '.', '.', '.', '.', '.', '.', '.', '.')
> +/* Tag for the PKO queue table range. */
> +#define CVMX_GR_TAG_PKO_QUEUES                                                                     \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'q', 'u', 'e', 'u', 's', '.', '.', '.')
> +/* Tag for a PKO internal ports range */
> +#define CVMX_GR_TAG_PKO_IPORTS                                                                     \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'i', 'p', 'o', 'r', 't', '.', '.', '.')
> +#define CVMX_GR_TAG_FPA                                                                            \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.', '.', '.', '.', '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_FAU                                                                            \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'a', 'u', '.', '.', '.', '.', '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_SSO_GRP(n)                                                                     \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 's', 'o', '_', '0', (n) + '0', '.', '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_TIM(n)                                                                         \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 't', 'i', 'm', '_', (n) + '0', '.', '.', '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_CLUSTERS(x)                                                                    \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'u', 's', 't', 'e', 'r', '_', (x + '0'), '.', '.', '.')
> +#define CVMX_GR_TAG_CLUSTER_GRP(x)                                                                 \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'g', 'r', 'p', '_', (x + '0'), '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_STYLE(x)                                                                       \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 't', 'y', 'l', 'e', '_', (x + '0'), '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_QPG_ENTRY(x)                                                                   \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'q', 'p', 'g', 'e', 't', '_', (x + '0'), '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_BPID(x)                                                                        \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'b', 'p', 'i', 'd', 's', '_', (x + '0'), '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_MTAG_IDX(x)                                                                    \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'm', 't', 'a', 'g', 'x', '_', (x + '0'), '.', '.', '.', '.', '.')
> +#define CVMX_GR_TAG_PCAM(x, y, z)                                                                  \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'c', 'a', 'm', '_', (x + '0'), (y + '0'),         \
> +			(z + '0'), '.', '.', '.', '.')
> +
> +#define CVMX_GR_TAG_CIU3_IDT(_n)                                                                   \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 'i', 'd', 't', '.', '.')
> +
> +/* Allocation of the 512 SW INTSNs (in the 12-bit SW INTSN space) */
> +#define CVMX_GR_TAG_CIU3_SWINTSN(_n)                                                               \
> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 's', 'w', 'i', 's', 'n')
> +
> +#define TAG_INIT_PART(A, B, C, D, E, F, G, H)                                                      \
> +	((((u64)(A) & 0xff) << 56) | (((u64)(B) & 0xff) << 48) | (((u64)(C) & 0xff) << 40) |       \
> +	 (((u64)(D) & 0xff) << 32) | (((u64)(E) & 0xff) << 24) | (((u64)(F) & 0xff) << 16) |       \
> +	 (((u64)(G) & 0xff) << 8) | (((u64)(H) & 0xff)))
> +
> +struct global_resource_tag {
> +	u64 lo;
> +	u64 hi;
> +};
> +
> +enum cvmx_resource_err { CVMX_RESOURCE_ALLOC_FAILED = -1,
> CVMX_RESOURCE_ALREADY_RESERVED = -2 };
> +
> +/*
> + * @INTERNAL
> + * Creates a tag from the specified characters.
> + */
> +static inline struct global_resource_tag cvmx_get_gr_tag(char a, char b, char c, char d,
> +							 char e, char f, char g, char h,
> +							 char i, char j, char k, char l,
> +							 char m, char n, char o, char p)
> +{
> +	struct global_resource_tag tag;
> +
> +	tag.lo = TAG_INIT_PART(a, b, c, d, e, f, g, h);
> +	tag.hi = TAG_INIT_PART(i, j, k, l, m, n, o, p);
> +	return tag;
> +}
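
A worked example of the packing, verifiable by hand from
TAG_INIT_PART: the first eight characters land in .lo, high byte
first, and the remaining eight in .hi:

	struct global_resource_tag t =
		cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.',
				'.', '.', '.', '.', '.', '.', '.', '.');
	/* t.lo == 0x63766d5f6670612eull  ("cvm_fpa.")
	 * t.hi == 0x2e2e2e2e2e2e2e2eull  ("........")
	 */
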
> +
> +static inline int cvmx_gr_same_tag(struct global_resource_tag gr1, struct global_resource_tag gr2)
> +{
> +	return (gr1.hi == gr2.hi) && (gr1.lo == gr2.lo);
> +}
> +
> +/*
> + * @INTERNAL
> + * Creates a global resource range that can hold the specified number
> + * of elements.
> + * @param tag is the tag of the range. The tag is created using the
> + * method cvmx_get_gr_tag().
> + * @param nelements is the number of elements to be held in the
> + * resource range.
> + */
> +int cvmx_create_global_resource_range(struct global_resource_tag tag, int nelements);
> +
> +/*
> + * @INTERNAL
> + * Allocate nelements in the global resource range with the specified
> + * tag. It is assumed that prior to calling this the global resource
> + * range has already been created using
> + * cvmx_create_global_resource_range().
> + * @param tag is the tag of the global resource range.
> + * @param nelements is the number of elements to be allocated.
> + * @param owner is a 64 bit number that identifies the owner of this
> + * range.
> + * @param alignment specifies the required alignment of the returned
> + * base number.
> + * @return returns the base of the allocated range. A -1 return value
> + * indicates failure.
> + */
> +int cvmx_allocate_global_resource_range(struct global_resource_tag tag, u64 owner, int nelements,
> +					int alignment);
> +
> +/*
> + * @INTERNAL
> + * Allocate nelements in the global resource range with the specified
> + * tag. The elements allocated need not be contiguous. It is assumed
> + * that prior to calling this the global resource range has already
> + * been created using cvmx_create_global_resource_range().
> + * @param tag is the tag of the global resource range.
> + * @param nelements is the number of elements to be allocated.
> + * @param owner is a 64 bit number that identifies the owner of the
> + * allocated elements.
> + * @param allocated_elements returns the indices of the allocated
> + * entries.
> + * @return returns 0 on success and -1 on failure.
> + */
> +int cvmx_resource_alloc_many(struct global_resource_tag tag, u64 owner, int nelements,
> +			     int allocated_elements[]);
> +int cvmx_resource_alloc_reverse(struct global_resource_tag, u64 owner);
> +/*
> + * @INTERNAL
> + * Reserve nelements starting from base in the global resource range
> + * with the specified tag.
> + * It is assumed that prior to calling this the global resource range
> + * has already been created using cvmx_create_global_resource_range().
> + * @param tag is the tag of the global resource range.
> + * @param nelements is the number of elements to be allocated.
> + * @param owner is a 64 bit number that identifies the owner of this
> + * range.
> + * @param base specifies the base start of nelements.
> + * @return returns the base of the allocated range. A -1 return value
> + * indicates failure.
> + */
> +int cvmx_reserve_global_resource_range(struct global_resource_tag tag, u64 owner, int base,
> +				       int nelements);
> +/*
> + * @INTERNAL
> + * Free nelements starting at base in the global resource range with
> + * the specified tag.
> + * @param tag is the tag of the global resource range.
> + * @param base is the base number
> + * @param nelements is the number of elements that are to be freed.
> + * @return returns 0 if successful and -1 on failure.
> + */
> +int cvmx_free_global_resource_range_with_base(struct global_resource_tag tag, int base,
> +					      int nelements);
> +
> +/*
> + * @INTERNAL
> + * Free nelements with the bases specified in bases[] with the
> + * specified tag.
> + * @param tag is the tag of the global resource range.
> + * @param bases is an array containing the bases to be freed.
> + * @param nelements is the number of elements that are to be freed.
> + * @return returns 0 if successful and -1 on failure.
> + */
> +int cvmx_free_global_resource_range_multiple(struct global_resource_tag tag, int bases[],
> +					     int nelements);
> +/*
> + * @INTERNAL
> + * Free elements from the specified owner in the global resource range
> + * with the specified tag.
> + * @param tag is the tag of the global resource range.
> + * @param owner is the owner of resources that are to be freed.
> + * @return returns 0 if successful and -1 on failure.
> + */
> +int cvmx_free_global_resource_range_with_owner(struct global_resource_tag tag, int owner);
> +
> +/*
> + * @INTERNAL
> + * Frees all the global resources that have been created.
> + * For use only from the bootloader, when it shuts down and boots up
> + * the application or kernel.
> + */
> +int free_global_resources(void);
> +
> +u64 cvmx_get_global_resource_owner(struct global_resource_tag tag, int base);
> +/*
> + * @INTERNAL
> + * Shows the global resource range with the specified tag. Used
> + * mainly for debug.
> + */
> +void cvmx_show_global_resource_range(struct global_resource_tag tag);
> +
> +/*
> + * @INTERNAL
> + * Shows all the global resources. Used mainly for debug.
> + */
> +void cvmx_global_resources_show(void);
> +
> +u64 cvmx_allocate_app_id(void);
> +u64 cvmx_get_app_id(void);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmx.h b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
> new file mode 100644
> index 000000000000..2df7da102a0f
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the GMX hardware.
> + */
> +
> +#ifndef __CVMX_GMX_H__
> +#define __CVMX_GMX_H__
> +
> +/* CSR typedefs have been moved to cvmx-gmx-defs.h */
> +
> +int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask);
> +int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
> new file mode 100644
> index 000000000000..59772190aa3b
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
> @@ -0,0 +1,606 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Fetch and Add Unit.
> + */
> +
> +/**
> + * @file
> + *
> + * Interface to the hardware Fetch and Add Unit.
> + *
> + */
> +
> +#ifndef __CVMX_HWFAU_H__
> +#define __CVMX_HWFAU_H__
> +
> +typedef int cvmx_fau_reg64_t;
> +typedef int cvmx_fau_reg32_t;
> +typedef int cvmx_fau_reg16_t;
> +typedef int cvmx_fau_reg8_t;
> +
> +#define CVMX_FAU_REG_ANY -1
> +
> +/*
> + * Octeon Fetch and Add Unit (FAU)
> + */
> +
> +#define CVMX_FAU_LOAD_IO_ADDRESS cvmx_build_io_address(0x1e, 0)
> +#define CVMX_FAU_BITS_SCRADDR	 63, 56
> +#define CVMX_FAU_BITS_LEN	 55, 48
> +#define CVMX_FAU_BITS_INEVAL	 35, 14
> +#define CVMX_FAU_BITS_TAGWAIT	 13, 13
> +#define CVMX_FAU_BITS_NOADD	 13, 13
> +#define CVMX_FAU_BITS_SIZE	 12, 11
> +#define CVMX_FAU_BITS_REGISTER	 10, 0
> +
> +#define CVMX_FAU_MAX_REGISTERS_8 (2048)
> +
> +typedef enum {
> +	CVMX_FAU_OP_SIZE_8 = 0,
> +	CVMX_FAU_OP_SIZE_16 = 1,
> +	CVMX_FAU_OP_SIZE_32 = 2,
> +	CVMX_FAU_OP_SIZE_64 = 3
> +} cvmx_fau_op_size_t;
> +
> +/**
> + * Tagwait return definition. If a timeout occurs, the error
> + * bit will be set. Otherwise the value of the register before
> + * the update will be returned.
> + */
> +typedef struct {
> +	u64 error : 1;
> +	s64 value : 63;
> +} cvmx_fau_tagwait64_t;
> +
> +/**
> + * Tagwait return definition. If a timeout occurs, the error
> + * bit will be set. Otherwise the value of the register before
> + * the update will be returned.
> + */
> +typedef struct {
> +	u64 error : 1;
> +	s32 value : 31;
> +} cvmx_fau_tagwait32_t;
> +
> +/**
> + * Tagwait return definition. If a timeout occurs, the error
> + * bit will be set. Otherwise the value of the register before
> + * the update will be returned.
> + */
> +typedef struct {
> +	u64 error : 1;
> +	s16 value : 15;
> +} cvmx_fau_tagwait16_t;
> +
> +/**
> + * Tagwait return definition. If a timeout occurs, the error
> + * bit will be set. Otherwise the value of the register before
> + * the update will be returned.
> + */
> +typedef struct {
> +	u64 error : 1;
> +	int8_t value : 7;
> +} cvmx_fau_tagwait8_t;
> +
> +/**
> + * Asynchronous tagwait return definition. If a timeout occurs,
> + * the error bit will be set. Otherwise the value of the
> + * register before the update will be returned.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 invalid : 1;
> +		u64 data : 63; /* unpredictable if invalid is set */
> +	} s;
> +} cvmx_fau_async_tagwait_result_t;
> +
> +#define SWIZZLE_8  0
> +#define SWIZZLE_16 0
> +#define SWIZZLE_32 0
> +
> +/**
> + * @INTERNAL
> + * Builds a store I/O address for writing to the FAU
> + *
> + * @param noadd  0 = Store value is atomically added to the current
> value
> + *               1 = Store value is atomically written over the
> current value
> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
> + *               - Step by 2 for 16 bit access.
> + *               - Step by 4 for 32 bit access.
> + *               - Step by 8 for 64 bit access.
> + * @return Address to store for atomic update
> + */
> +static inline u64 __cvmx_hwfau_store_address(u64 noadd, u64 reg)
> +{
> +	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
> +		cvmx_build_bits(CVMX_FAU_BITS_NOADD, noadd) |
> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
> +}
> +
> +/**
> + * @INTERNAL
> + * Builds a I/O address for accessing the FAU
> + *
> + * @param tagwait Should the atomic add wait for the current tag
> switch
> + *                operation to complete.
> + *                - 0 = Don't wait
> + *                - 1 = Wait for tag switch to complete
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + *                - Step by 4 for 32 bit access.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to add.
> + *                Note: When performing 32 and 64 bit access, only
> the low
> + *                22 bits are available.
> + * @return Address to read from for atomic update
> + */
> +static inline u64 __cvmx_hwfau_atomic_address(u64 tagwait, u64 reg, s64 value)
> +{
> +	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
> +		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
> +		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
> +}
> +
> +/**
> + * Perform an atomic 64 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Value of the register before the update
> + */
> +static inline s64 cvmx_hwfau_fetch_and_add64(cvmx_fau_reg64_t reg, s64 value)
> +{
> +	return cvmx_read64_int64(__cvmx_hwfau_atomic_address(0, reg, value));
> +}
> +
> +/**
> + * Perform an atomic 32 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 4 for 32 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Value of the register before the update
> + */
> +static inline s32 cvmx_hwfau_fetch_and_add32(cvmx_fau_reg32_t reg, s32 value)
> +{
> +	reg ^= SWIZZLE_32;
> +	return cvmx_read64_int32(__cvmx_hwfau_atomic_address(0, reg, value));
> +}
> +
> +/**
> + * Perform an atomic 16 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + * @param value   Signed value to add.
> + * @return Value of the register before the update
> + */
> +static inline s16 cvmx_hwfau_fetch_and_add16(cvmx_fau_reg16_t reg, s16 value)
> +{
> +	reg ^= SWIZZLE_16;
> +	return cvmx_read64_int16(__cvmx_hwfau_atomic_address(0, reg, value));
> +}
> +
> +/**
> + * Perform an atomic 8 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + * @param value   Signed value to add.
> + * @return Value of the register before the update
> + */
> +static inline int8_t cvmx_hwfau_fetch_and_add8(cvmx_fau_reg8_t reg, int8_t value)
> +{
> +	reg ^= SWIZZLE_8;
> +	return cvmx_read64_int8(__cvmx_hwfau_atomic_address(0, reg, value));
> +}
> +
> +/**
> + * Perform an atomic 64 bit add after the current tag switch
> + * completes
> + *
> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
> + *               - Step by 8 for 64 bit access.
> + * @param value  Signed value to add.
> + *               Note: Only the low 22 bits are available.
> + * @return If a timeout occurs, the error bit will be set. Otherwise
> + *         the value of the register before the update will be returned
> + */
> +static inline cvmx_fau_tagwait64_t cvmx_hwfau_tagwait_fetch_and_add64(cvmx_fau_reg64_t reg,
> +								      s64 value)
> +{
> +	union {
> +		u64 i64;
> +		cvmx_fau_tagwait64_t t;
> +	} result;
> +	result.i64 = cvmx_read64_int64(__cvmx_hwfau_atomic_address(1, reg, value));
> +	return result.t;
> +}
> +
> +/**
> + * Perform an atomic 32 bit add after the current tag switch
> + * completes
> + *
> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
> + *               - Step by 4 for 32 bit access.
> + * @param value  Signed value to add.
> + *               Note: Only the low 22 bits are available.
> + * @return If a timeout occurs, the error bit will be set. Otherwise
> + *         the value of the register before the update will be returned
> + */
> +static inline cvmx_fau_tagwait32_t cvmx_hwfau_tagwait_fetch_and_add32(cvmx_fau_reg32_t reg,
> +								      s32 value)
> +{
> +	union {
> +		u64 i32;
> +		cvmx_fau_tagwait32_t t;
> +	} result;
> +	reg ^= SWIZZLE_32;
> +	result.i32 = cvmx_read64_int32(__cvmx_hwfau_atomic_address(1, reg, value));
> +	return result.t;
> +}
> +
> +/**
> + * Perform an atomic 16 bit add after the current tag switch
> + * completes
> + *
> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
> + *               - Step by 2 for 16 bit access.
> + * @param value  Signed value to add.
> + * @return If a timeout occurs, the error bit will be set. Otherwise
> + *         the value of the register before the update will be returned
> + */
> +static inline cvmx_fau_tagwait16_t cvmx_hwfau_tagwait_fetch_and_add16(cvmx_fau_reg16_t reg,
> +								      s16 value)
> +{
> +	union {
> +		u64 i16;
> +		cvmx_fau_tagwait16_t t;
> +	} result;
> +	reg ^= SWIZZLE_16;
> +	result.i16 = cvmx_read64_int16(__cvmx_hwfau_atomic_address(1, reg, value));
> +	return result.t;
> +}
> +
> +/**
> + * Perform an atomic 8 bit add after the current tag switch
> + * completes
> + *
> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
> + * @param value  Signed value to add.
> + * @return If a timeout occurs, the error bit will be set. Otherwise
> + *         the value of the register before the update will be returned
> + */
> +static inline cvmx_fau_tagwait8_t cvmx_hwfau_tagwait_fetch_and_add8(cvmx_fau_reg8_t reg,
> +								    int8_t value)
> +{
> +	union {
> +		u64 i8;
> +		cvmx_fau_tagwait8_t t;
> +	} result;
> +	reg ^= SWIZZLE_8;
> +	result.i8 = cvmx_read64_int8(__cvmx_hwfau_atomic_address(1, reg, value));
> +	return result.t;
> +}
> +
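
The tagwait variants return the pre-update value plus an error flag; a
sketch of the intended use, assuming the core has a pending tag switch
and reusing the hypothetical FAU_PACKET_COUNT from above:

	cvmx_fau_tagwait64_t r;

	r = cvmx_hwfau_tagwait_fetch_and_add64(FAU_PACKET_COUNT, 1);
	if (r.error) {
		/* the tag switch timed out; do not trust r.value */
	} else {
		/* r.value holds the register value before the update */
	}
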
> +/**
> + * @INTERNAL
> + * Builds I/O data for async operations
> + *
> + * @param scraddr Scratch pad byte address to write to.  Must be 8 byte aligned
> + * @param value   Signed value to add.
> + *                Note: When performing 32 and 64 bit access, only the low
> + *                22 bits are available.
> + * @param tagwait Should the atomic add wait for the current tag switch
> + *                operation to complete.
> + *                - 0 = Don't wait
> + *                - 1 = Wait for tag switch to complete
> + * @param size    The size of the operation:
> + *                - CVMX_FAU_OP_SIZE_8  (0) = 8 bits
> + *                - CVMX_FAU_OP_SIZE_16 (1) = 16 bits
> + *                - CVMX_FAU_OP_SIZE_32 (2) = 32 bits
> + *                - CVMX_FAU_OP_SIZE_64 (3) = 64 bits
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + *                - Step by 4 for 32 bit access.
> + *                - Step by 8 for 64 bit access.
> + * @return Data to write using cvmx_send_single
> + */
> +static inline u64 __cvmx_fau_iobdma_data(u64 scraddr, s64 value, u64 tagwait,
> +					 cvmx_fau_op_size_t size, u64 reg)
> +{
> +	return (CVMX_FAU_LOAD_IO_ADDRESS |
> +		cvmx_build_bits(CVMX_FAU_BITS_SCRADDR, scraddr >> 3) |
> +		cvmx_build_bits(CVMX_FAU_BITS_LEN, 1) |
> +		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
> +		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
> +		cvmx_build_bits(CVMX_FAU_BITS_SIZE, size) |
> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
> +}
> +
> +/**
> + * Perform an async atomic 64 bit add. The old value is
> + * placed in the scratch memory at byte address scraddr.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg, s64 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_64, reg));
> +}
> +
> +/**
> + * Perform an async atomic 32 bit add. The old value is
> + * placed in the scratch memory at byte address scraddr.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 4 for 32 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg, s32 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_32, reg));
> +}
> +
> +/**
> + * Perform an async atomic 16 bit add. The old value is
> + * placed in the scratch memory at byte address scraddr.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + * @param value   Signed value to add.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg, s16 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_16, reg));
> +}
> +
> +/**
> + * Perform an async atomic 8 bit add. The old value is
> + * placed in the scratch memory at byte address scraddr.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + * @param value   Signed value to add.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg, int8_t value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_8, reg));
> +}
> +
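
The async variants deliver the old value through the IOBDMA scratchpad
instead of a register read, so the caller must synchronize before using
the result. A sketch, assuming scratchpad byte offset 0 is free and that
CVMX_SYNCIOBDMA and cvmx_scratch_read64() are available from the usual
cvmx headers:

	#define SCR_OLD_COUNT 0	/* hypothetical scratchpad byte offset */

	cvmx_hwfau_async_fetch_and_add64(SCR_OLD_COUNT, FAU_PACKET_COUNT, 1);
	/* other work can overlap with the FAU access here */
	CVMX_SYNCIOBDMA;	/* wait for the IOBDMA result to land */
	s64 old = (s64)cvmx_scratch_read64(SCR_OLD_COUNT);
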
> +/**
> + * Perform an async atomic 64 bit add after the current tag
> + * switch completes.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + *                If a timeout occurs, the error bit (63) will be set.
> + *                Otherwise the value of the register before the update
> + *                will be returned
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg,
> +							    s64 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_64, reg));
> +}
> +
> +/**
> + * Perform an async atomic 32 bit add after the current tag
> + * switch completes.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + *                If a timeout occurs, the error bit (63) will be set.
> + *                Otherwise the value of the register before the update
> + *                will be returned
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 4 for 32 bit access.
> + * @param value   Signed value to add.
> + *                Note: Only the low 22 bits are available.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg,
> +							    s32 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_32, reg));
> +}
> +
> +/**
> + * Perform an async atomic 16 bit add after the current tag
> + * switch completes.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + *                If a timeout occurs, the error bit (63) will be set.
> + *                Otherwise the value of the register before the update
> + *                will be returned
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + * @param value   Signed value to add.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg,
> +							    s16 value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_16, reg));
> +}
> +
> +/**
> + * Perform an async atomic 8 bit add after the current tag
> + * switch completes.
> + *
> + * @param scraddr Scratch memory byte address to put response in.
> + *                Must be 8 byte aligned.
> + *                If a timeout occurs, the error bit (63) will be set.
> + *                Otherwise the value of the register before the update
> + *                will be returned
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + * @param value   Signed value to add.
> + * @return Placed in the scratch pad register
> + */
> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg,
> +							   int8_t value)
> +{
> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_8, reg));
> +}
> +
> +/**
> + * Perform an atomic 64 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to add.
> + */
> +static inline void cvmx_hwfau_atomic_add64(cvmx_fau_reg64_t reg, s64 value)
> +{
> +	cvmx_write64_int64(__cvmx_hwfau_store_address(0, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 32 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 4 for 32 bit access.
> + * @param value   Signed value to add.
> + */
> +static inline void cvmx_hwfau_atomic_add32(cvmx_fau_reg32_t reg, s32 value)
> +{
> +	reg ^= SWIZZLE_32;
> +	cvmx_write64_int32(__cvmx_hwfau_store_address(0, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 16 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + * @param value   Signed value to add.
> + */
> +static inline void cvmx_hwfau_atomic_add16(cvmx_fau_reg16_t reg, s16 value)
> +{
> +	reg ^= SWIZZLE_16;
> +	cvmx_write64_int16(__cvmx_hwfau_store_address(0, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 8 bit add
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + * @param value   Signed value to add.
> + */
> +static inline void cvmx_hwfau_atomic_add8(cvmx_fau_reg8_t reg, int8_t value)
> +{
> +	reg ^= SWIZZLE_8;
> +	cvmx_write64_int8(__cvmx_hwfau_store_address(0, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 64 bit write
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 8 for 64 bit access.
> + * @param value   Signed value to write.
> + */
> +static inline void cvmx_hwfau_atomic_write64(cvmx_fau_reg64_t reg, s64 value)
> +{
> +	cvmx_write64_int64(__cvmx_hwfau_store_address(1, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 32 bit write
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 4 for 32 bit access.
> + * @param value   Signed value to write.
> + */
> +static inline void cvmx_hwfau_atomic_write32(cvmx_fau_reg32_t reg, s32 value)
> +{
> +	reg ^= SWIZZLE_32;
> +	cvmx_write64_int32(__cvmx_hwfau_store_address(1, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 16 bit write
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + *                - Step by 2 for 16 bit access.
> + * @param value   Signed value to write.
> + */
> +static inline void cvmx_hwfau_atomic_write16(cvmx_fau_reg16_t reg, s16 value)
> +{
> +	reg ^= SWIZZLE_16;
> +	cvmx_write64_int16(__cvmx_hwfau_store_address(1, reg), value);
> +}
> +
> +/**
> + * Perform an atomic 8 bit write
> + *
> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
> + * @param value   Signed value to write.
> + */
> +static inline void cvmx_hwfau_atomic_write8(cvmx_fau_reg8_t reg, int8_t value)
> +{
> +	reg ^= SWIZZLE_8;
> +	cvmx_write64_int8(__cvmx_hwfau_store_address(1, reg), value);
> +}
> +
> +/** Allocates a 64bit FAU register.
> + *  @return the base address of the allocated FAU register
> + */
> +int cvmx_fau64_alloc(int reserve);
> +
> +/** Allocates a 32bit FAU register.
> + *  @return the base address of the allocated FAU register
> + */
> +int cvmx_fau32_alloc(int reserve);
> +
> +/** Allocates a 16bit FAU register.
> + *  @return the base address of the allocated FAU register
> + */
> +int cvmx_fau16_alloc(int reserve);
> +
> +/** Allocates an 8bit FAU register.
> + *  @return the base address of the allocated FAU register
> + */
> +int cvmx_fau8_alloc(int reserve);
> +
> +/** Frees the specified FAU register.
> + *  @param address Base address of register to release.
> + *  @return 0 on success; -1 on failure
> + */
> +int cvmx_fau_free(int address);
> +
> +/** Display the fau registers array
> + */
> +void cvmx_fau_show(void);
> +
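
The allocator API pairs naturally with the inline accessors above; a
sketch of the intended flow (the register is initialized explicitly,
since it is an assumption here that allocation does not clear it):

	int reg = cvmx_fau64_alloc(0);

	if (reg >= 0) {
		cvmx_hwfau_atomic_write64(reg, 0);	/* initialize */
		cvmx_hwfau_atomic_add64(reg, 5);
		/* use the register */
		cvmx_fau_free(reg);
	}
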
> +#endif /* __CVMX_HWFAU_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
> new file mode 100644
> index 000000000000..459c19bbc0f1
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
> @@ -0,0 +1,570 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Packet Output unit.
> + *
> + * Starting with SDK 1.7.0, the PKO output functions now support
> + * two types of locking. CVMX_PKO_LOCK_ATOMIC_TAG continues to
> + * function similarly to previous SDKs by using POW atomic tags
> + * to preserve ordering and exclusivity. As a new option, you
> + * can now pass CVMX_PKO_LOCK_CMD_QUEUE which uses a ll/sc
> + * memory based locking instead. This locking has the advantage
> + * of not affecting the tag state but doesn't preserve packet
> + * ordering. CVMX_PKO_LOCK_CMD_QUEUE is appropriate in most
> + * generic code while CVMX_PKO_LOCK_ATOMIC_TAG should be used
> + * with hand tuned fast path code.
> + *
> + * Some other SDK differences visible to the command queuing:
> + * - PKO indexes are no longer stored in the FAU. A large
> + *   percentage of the FAU register block used to be tied up
> + *   maintaining PKO queue pointers. These are now stored in a
> + *   global named block.
> + * - The PKO <b>use_locking</b> parameter can now have a global
> + *   effect. Since all applications use the same named block,
> + *   queue locking correctly applies across all operating
> + *   systems when using CVMX_PKO_LOCK_CMD_QUEUE.
> + * - PKO 3 word commands are now supported. Use
> + *   cvmx_pko_send_packet_finish3().
> + */
> +
> +#ifndef __CVMX_HWPKO_H__
> +#define __CVMX_HWPKO_H__
> +
> +#include "cvmx-hwfau.h"
> +#include "cvmx-fpa.h"
> +#include "cvmx-pow.h"
> +#include "cvmx-cmd-queue.h"
> +#include "cvmx-helper.h"
> +#include "cvmx-helper-util.h"
> +#include "cvmx-helper-cfg.h"
> +
> +/*
> + * Adjust the command buffer size by 1 word so that in the case of using only
> + * two word PKO commands no command words straddle buffers.  The useful values
> + * for this are 0 and 1.
> + */
> +#define CVMX_PKO_COMMAND_BUFFER_SIZE_ADJUST (1)
> +
> +#define CVMX_PKO_MAX_OUTPUT_QUEUES_STATIC 256
> +#define CVMX_PKO_MAX_OUTPUT_QUEUES                                                                 \
> +	((OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) ? 256 : 128)
> +#define CVMX_PKO_NUM_OUTPUT_PORTS                                                                  \
> +	((OCTEON_IS_MODEL(OCTEON_CN63XX)) ? 44 : (OCTEON_IS_MODEL(OCTEON_CN66XX) ? 48 : 40))
> +#define CVMX_PKO_MEM_QUEUE_PTRS_ILLEGAL_PID 63
> +#define CVMX_PKO_QUEUE_STATIC_PRIORITY	    9
> +#define CVMX_PKO_ILLEGAL_QUEUE		    0xFFFF
> +#define CVMX_PKO_MAX_QUEUE_DEPTH	    0
> +
> +typedef enum {
> +	CVMX_PKO_SUCCESS,
> +	CVMX_PKO_INVALID_PORT,
> +	CVMX_PKO_INVALID_QUEUE,
> +	CVMX_PKO_INVALID_PRIORITY,
> +	CVMX_PKO_NO_MEMORY,
> +	CVMX_PKO_PORT_ALREADY_SETUP,
> +	CVMX_PKO_CMD_QUEUE_INIT_ERROR
> +} cvmx_pko_return_value_t;
> +
> +/**
> + * This enumeration represents the different locking modes supported by PKO.
> + */
> +typedef enum {
> +	CVMX_PKO_LOCK_NONE = 0,
> +	CVMX_PKO_LOCK_ATOMIC_TAG = 1,
> +	CVMX_PKO_LOCK_CMD_QUEUE = 2,
> +} cvmx_pko_lock_t;
> +
> +typedef struct cvmx_pko_port_status {
> +	u32 packets;
> +	u64 octets;
> +	u64 doorbell;
> +} cvmx_pko_port_status_t;
> +
> +/**
> + * This structure defines the address to use on a packet enqueue
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		cvmx_mips_space_t mem_space : 2;
> +		u64 reserved : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved2 : 4;
> +		u64 reserved3 : 15;
> +		u64 port : 9;
> +		u64 queue : 9;
> +		u64 reserved4 : 3;
> +	} s;
> +} cvmx_pko_doorbell_address_t;
> +
> +/**
> + * Structure of the first packet output command word.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		cvmx_fau_op_size_t size1 : 2;
> +		cvmx_fau_op_size_t size0 : 2;
> +		u64 subone1 : 1;
> +		u64 reg1 : 11;
> +		u64 subone0 : 1;
> +		u64 reg0 : 11;
> +		u64 le : 1;
> +		u64 n2 : 1;
> +		u64 wqp : 1;
> +		u64 rsp : 1;
> +		u64 gather : 1;
> +		u64 ipoffp1 : 7;
> +		u64 ignore_i : 1;
> +		u64 dontfree : 1;
> +		u64 segs : 6;
> +		u64 total_bytes : 16;
> +	} s;
> +} cvmx_pko_command_word0_t;
> +
> +/**
> + * Call before any other calls to initialize the packet
> + * output system.
> + */
> +
> +void cvmx_pko_hw_init(u8 pool, unsigned int bufsize);
> +
> +/**
> + * Enables the packet output hardware. It must already be
> + * configured.
> + */
> +void cvmx_pko_enable(void);
> +
> +/**
> + * Disables the packet output. Does not affect any configuration.
> + */
> +void cvmx_pko_disable(void);
> +
> +/**
> + * Shutdown and free resources required by packet output.
> + */
> +
> +void cvmx_pko_shutdown(void);
> +
> +/**
> + * Configure an output port and the associated queues for use.
> + *
> + * @param port       Port to configure.
> + * @param base_queue First queue number to associate with this port.
> + * @param num_queues Number of queues to associate with this port
> + * @param priority   Array of priority levels for each queue. Values are
> + *                   allowed to be 1-8. A value of 8 gets 8 times the traffic
> + *                   of a value of 1. There must be num_queues elements in the
> + *                   array.
> + */
> +cvmx_pko_return_value_t cvmx_pko_config_port(int port, int base_queue, int num_queues,
> +					     const u8 priority[]);
> +
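
For example, giving a port four equally weighted queues could look like
this (the port and base_queue values are hypothetical and board
specific):

	static const u8 prio[4] = { 1, 1, 1, 1 };

	if (cvmx_pko_config_port(2, 8, 4, prio) != CVMX_PKO_SUCCESS)
		printf("PKO port configuration failed\n");
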
> +/**
> + * Ring the packet output doorbell. This tells the packet
> + * output hardware that "len" command words have been added
> + * to its pending list.  This command includes the required
> + * CVMX_SYNCWS before the doorbell ring.
> + *
> + * WARNING: This function may have to look up the proper PKO port in
> + * the IPD port to PKO port map, and is thus slower than calling
> + * cvmx_pko_doorbell_pkoid() directly if the PKO port identifier is
> + * known.
> + *
> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
> + * @param queue  Queue the packet is for
> + * @param len    Length of the command in 64 bit words
> + */
> +static inline void cvmx_pko_doorbell(u64 ipd_port, u64 queue, u64 len)
> +{
> +	cvmx_pko_doorbell_address_t ptr;
> +	u64 pko_port;
> +
> +	pko_port = ipd_port;
> +	if (octeon_has_feature(OCTEON_FEATURE_PKND))
> +		pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
> +
> +	ptr.u64 = 0;
> +	ptr.s.mem_space = CVMX_IO_SEG;
> +	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
> +	ptr.s.is_io = 1;
> +	ptr.s.port = pko_port;
> +	ptr.s.queue = queue;
> +	/* Need to make sure output queue data is in DRAM before doorbell write */
> +	CVMX_SYNCWS;
> +	cvmx_write_io(ptr.u64, len);
> +}
> +
> +/**
> + * Prepare to send a packet.  This may initiate a tag switch to
> + * get exclusive access to the output queue structure, and
> + * performs other prep work for the packet send operation.
> + *
> + * cvmx_pko_send_packet_finish() MUST be called after this function is called,
> + * and must be called with the same port/queue/use_locking arguments.
> + *
> + * The use_locking parameter allows the caller to use three
> + * possible locking modes.
> + * - CVMX_PKO_LOCK_NONE
> + *      - PKO doesn't do any locking. It is the responsibility
> + *          of the application to make sure that no other core
> + *          is accessing the same queue at the same time.
> + * - CVMX_PKO_LOCK_ATOMIC_TAG
> + *      - PKO performs an atomic tagswitch to ensure exclusive
> + *          access to the output queue. This will maintain
> + *          packet ordering on output.
> + * - CVMX_PKO_LOCK_CMD_QUEUE
> + *      - PKO uses the common command queue locks to ensure
> + *          exclusive access to the output queue. This is a
> + *          memory based ll/sc. This is the most portable
> + *          locking mechanism.
> + *
> + * NOTE: If atomic locking is used, the POW entry CANNOT be
> + * descheduled, as it does not contain a valid WQE pointer.
> + *
> + * @param port   Port to send it on, this can be either IPD port or
> + *		 PKO port.
> + * @param queue  Queue to use
> + * @param use_locking
> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
> + */
> +static inline void cvmx_pko_send_packet_prepare(u64 port __attribute__((unused)), u64 queue,
> +						cvmx_pko_lock_t use_locking)
> +{
> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG) {
> +		/*
> +		 * Must do a full switch here to handle all cases.  We use a
> +		 * fake WQE pointer, as the POW does not access this memory.
> +		 * The WQE pointer and group are only used if this work is
> +		 * descheduled, which is not supported by the
> +		 * cvmx_pko_send_packet_prepare/cvmx_pko_send_packet_finish
> +		 * combination. Note that this is a special case in which these
> +		 * fake values can be used - this is not a general technique.
> +		 */
> +		u32 tag = CVMX_TAG_SW_BITS_INTERNAL << CVMX_TAG_SW_SHIFT |
> +			  CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT |
> +			  (CVMX_TAG_SUBGROUP_MASK & queue);
> +		cvmx_pow_tag_sw_full((cvmx_wqe_t *)cvmx_phys_to_ptr(0x80), tag,
> +				     CVMX_POW_TAG_TYPE_ATOMIC, 0);
> +	}
> +}
> +
> +#define cvmx_pko_send_packet_prepare_pkoid cvmx_pko_send_packet_prepare
> +
> +/**
> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
> + * cvmx_pko_send_packet_finish().
> + *
> + * WARNING: This function may have to look up the proper PKO port in
> + * the IPD port to PKO port map, and is thus slower than calling
> + * cvmx_pko_send_packet_finish_pkoid() directly if the PKO port
> + * identifier is known.
> + *
> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
> + * @param queue  Queue to use
> + * @param pko_command
> + *               PKO HW command word
> + * @param packet Packet to send
> + * @param use_locking
> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
> + *
> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
> + */
> +static inline cvmx_pko_return_value_t
> +cvmx_hwpko_send_packet_finish(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
> +			      cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
> +{
> +	cvmx_cmd_queue_result_t result;
> +
> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
> +		cvmx_pow_tag_sw_wait();
> +
> +	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
> +				       packet.u64);
> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
> +		cvmx_pko_doorbell(ipd_port, queue, 2);
> +		return CVMX_PKO_SUCCESS;
> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
> +		return CVMX_PKO_NO_MEMORY;
> +	} else {
> +		return CVMX_PKO_INVALID_QUEUE;
> +	}
> +}
> +
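
Putting the pieces together, a single-segment transmit is the usual
prepare / build-command / finish sequence. A sketch, assuming ipd_port,
queue, data and len are already in scope and that CVMX_FPA_PACKET_POOL
names the packet buffer pool (both assumptions, not taken from this
patch):

	cvmx_pko_command_word0_t cmd;
	cvmx_buf_ptr_t pkt;

	cvmx_pko_send_packet_prepare(ipd_port, queue, CVMX_PKO_LOCK_CMD_QUEUE);

	cmd.u64 = 0;
	cmd.s.segs = 1;			/* one linked buffer */
	cmd.s.total_bytes = len;	/* packet length in bytes */

	pkt.u64 = 0;
	pkt.s.addr = cvmx_ptr_to_phys(data);	/* physical buffer address */
	pkt.s.size = len;
	pkt.s.pool = CVMX_FPA_PACKET_POOL;	/* pool to free the buffer to */

	if (cvmx_hwpko_send_packet_finish(ipd_port, queue, cmd, pkt,
					  CVMX_PKO_LOCK_CMD_QUEUE) != CVMX_PKO_SUCCESS)
		printf("PKO transmit failed\n");
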
> +/**
> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
> + * cvmx_pko_send_packet_finish().
> + *
> + * WARNING: This function may have to look up the proper PKO port in
> + * the IPD port to PKO port map, and is thus slower than calling
> + * cvmx_pko_send_packet_finish3_pkoid() directly if the PKO port
> + * identifier is known.
> + *
> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
> + * @param queue  Queue to use
> + * @param pko_command
> + *               PKO HW command word
> + * @param packet Packet to send
> + * @param addr   Physical address of a work queue entry or physical address to zero on complete.
> + * @param use_locking
> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
> + *
> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
> + */
> +static inline cvmx_pko_return_value_t
> +cvmx_hwpko_send_packet_finish3(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
> +			       cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
> +{
> +	cvmx_cmd_queue_result_t result;
> +
> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
> +		cvmx_pow_tag_sw_wait();
> +
> +	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
> +				       packet.u64, addr);
> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
> +		cvmx_pko_doorbell(ipd_port, queue, 3);
> +		return CVMX_PKO_SUCCESS;
> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
> +		return CVMX_PKO_NO_MEMORY;
> +	} else {
> +		return CVMX_PKO_INVALID_QUEUE;
> +	}
> +}
> +
> +/**
> + * Get the first pko_port for the (interface, index)
> + *
> + * @param interface
> + * @param index
> + */
> +int cvmx_pko_get_base_pko_port(int interface, int index);
> +
> +/**
> + * Get the number of pko_ports for the (interface, index)
> + *
> + * @param interface
> + * @param index
> + */
> +int cvmx_pko_get_num_pko_ports(int interface, int index);
> +
> +/**
> + * For a given port number, return the base pko output queue
> + * for the port.
> + *
> + * @param port   IPD port number
> + * @return Base output queue
> + */
> +int cvmx_pko_get_base_queue(int port);
> +
> +/**
> + * For a given port number, return the number of pko output queues.
> + *
> + * @param port   IPD port number
> + * @return Number of output queues
> + */
> +int cvmx_pko_get_num_queues(int port);
> +
> +/**
> + * Sets the internal FPA pool data structure for the PKO command queue.
> + * @param pool	fpa pool number to use
> + * @param buffer_size	buffer size of pool
> + * @param buffer_count	number of buffers to allocate to pool
> + *
> + * @note the caller is responsible for setting up the pool with
> + * an appropriate buffer size and sufficient buffer count.
> + */
> +void cvmx_pko_set_cmd_que_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
> +
> +/**
> + * Get the status counters for a port.
> + *
> + * @param ipd_port Port number (ipd_port) to get statistics for.
> + * @param clear    Set to 1 to clear the counters after they are read
> + * @param status   Where to put the results.
> + *
> + * Note:
> + *     - Only the doorbell for the base queue of the ipd_port is
> + *       collected.
> + *     - Retrieving the stats involves writing the index through
> + *       CVMX_PKO_REG_READ_IDX and reading the stat CSRs, in that
> + *       order. It is not MP-safe and caller should guarantee
> + *       atomicity.
> + */
> +void cvmx_pko_get_port_status(u64 ipd_port, u64 clear, cvmx_pko_port_status_t *status);
> +
> +/**
> + * Rate limit a PKO port to a max packets/sec. This function is only
> + * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
> + *
> + * @param port      Port to rate limit
> + * @param packets_s Maximum packet/sec
> + * @param burst     Maximum number of packets to burst in a row before rate
> + *                  limiting cuts in.
> + *
> + * @return Zero on success, negative on failure
> + */
> +int cvmx_pko_rate_limit_packets(int port, int packets_s, int burst);
> +
> +/**
> + * Rate limit a PKO port to a max bits/sec. This function is only
> + * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
> + *
> + * @param port   Port to rate limit
> + * @param bits_s PKO rate limit in bits/sec
> + * @param burst  Maximum number of bits to burst before rate
> + *               limiting cuts in.
> + *
> + * @return Zero on success, negative on failure
> + */
> +int cvmx_pko_rate_limit_bits(int port, u64 bits_s, int burst);
> +
> +/**
> + * @INTERNAL
> + *
> + * Retrieve the PKO pipe number for a port
> + *
> + * @param interface
> + * @param index
> + *
> + * @return negative on error.
> + *
> + * This applies only to the non-loopback interfaces.
> + *
> + */
> +int __cvmx_pko_get_pipe(int interface, int index);
> +
> +/**
> + * For a given PKO port number, return the base output queue
> + * for the port.
> + *
> + * @param pko_port   PKO port number
> + * @return           Base output queue
> + */
> +int cvmx_pko_get_base_queue_pkoid(int pko_port);
> +
> +/**
> + * For a given PKO port number, return the number of output queues
> + * for the port.
> + *
> + * @param pko_port	PKO port number
> + * @return		the number of output queues
> + */
> +int cvmx_pko_get_num_queues_pkoid(int pko_port);
> +
> +/**
> + * Ring the packet output doorbell. This tells the packet
> + * output hardware that "len" command words have been added
> + * to its pending list.  This command includes the required
> + * CVMX_SYNCWS before the doorbell ring.
> + *
> + * @param pko_port   Port the packet is for
> + * @param queue  Queue the packet is for
> + * @param len    Length of the command in 64 bit words
> + */
> +static inline void cvmx_pko_doorbell_pkoid(u64 pko_port, u64 queue, u64 len)
> +{
> +	cvmx_pko_doorbell_address_t ptr;
> +
> +	ptr.u64 = 0;
> +	ptr.s.mem_space = CVMX_IO_SEG;
> +	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
> +	ptr.s.is_io = 1;
> +	ptr.s.port = pko_port;
> +	ptr.s.queue = queue;
> +	/* Need to make sure output queue data is in DRAM before doorbell write */
> +	CVMX_SYNCWS;
> +	cvmx_write_io(ptr.u64, len);
> +}
> +
> +/**
> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
> + * cvmx_pko_send_packet_finish_pkoid().
> + *
> + * @param pko_port   Port to send it on
> + * @param queue  Queue to use
> + * @param pko_command
> + *               PKO HW command word
> + * @param packet Packet to send
> + * @param use_locking
> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
> + *
> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
> + */
> +static inline cvmx_pko_return_value_t
> +cvmx_hwpko_send_packet_finish_pkoid(int pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
> +				    cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
> +{
> +	cvmx_cmd_queue_result_t result;
> +
> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
> +		cvmx_pow_tag_sw_wait();
> +
> +	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
> +				       packet.u64);
> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
> +		cvmx_pko_doorbell_pkoid(pko_port, queue, 2);
> +		return CVMX_PKO_SUCCESS;
> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
> +		return CVMX_PKO_NO_MEMORY;
> +	} else {
> +		return CVMX_PKO_INVALID_QUEUE;
> +	}
> +}
> +
> +/**
> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
> + * cvmx_pko_send_packet_finish_pkoid().
> + *
> + * @param pko_port   The PKO port the packet is for
> + * @param queue  Queue to use
> + * @param pko_command
> + *               PKO HW command word
> + * @param packet Packet to send
> + * @param addr   Physical address of a work queue entry or physical address to zero on complete.
> + * @param use_locking
> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
> + *
> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
> + */
> +static inline cvmx_pko_return_value_t
> +cvmx_hwpko_send_packet_finish3_pkoid(u64 pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
> +				     cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
> +{
> +	cvmx_cmd_queue_result_t result;
> +
> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
> +		cvmx_pow_tag_sw_wait();
> +
> +	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
> +				       packet.u64, addr);
> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
> +		cvmx_pko_doorbell_pkoid(pko_port, queue, 3);
> +		return CVMX_PKO_SUCCESS;
> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
> +		return CVMX_PKO_NO_MEMORY;
> +	} else {
> +		return CVMX_PKO_INVALID_QUEUE;
> +	}
> +}
> +
> +/*
> + * Obtain the number of PKO commands pending in a queue
> + *
> + * @param queue is the queue identifier to be queried
> + * @return the number of commands pending transmission or -1 on error
> + */
> +int cvmx_pko_queue_pend_count(cvmx_cmd_queue_id_t queue);
> +
> +void cvmx_pko_set_cmd_queue_pool_buffer_count(u64 buffer_count);
> +
> +#endif /* __CVMX_HWPKO_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
> new file mode 100644
> index 000000000000..727298352c28
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
> @@ -0,0 +1,154 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * This file contains defines for the ILK interface
> + */
> +
> +#ifndef __CVMX_ILK_H__
> +#define __CVMX_ILK_H__
> +
> +/* CSR typedefs have been moved to cvmx-ilk-defs.h */
> +
> +/*
> + * Note: this macro must match the first ilk port in the ipd_port_map_68xx[]
> + * and ipd_port_map_78xx[] arrays.
> + */
> +static inline int CVMX_ILK_GBL_BASE(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
> +		return 5;
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 6;
> +	return -1;
> +}
> +
> +static inline int CVMX_ILK_QLM_BASE(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
> +		return 1;
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 4;
> +	return -1;
> +}
> +
> +typedef struct {
> +	int intf_en : 1;
> +	int la_mode : 1;
> +	int reserved : 14; /* unused */
> +	int lane_speed : 16;
> +	/* add more here */
> +} cvmx_ilk_intf_t;
> +
> +#define CVMX_NUM_ILK_INTF 2
> +static inline int CVMX_ILK_MAX_LANES(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
> +		return 8;
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 16;
> +	return -1;
> +}
> +
> +extern unsigned short cvmx_ilk_lane_mask[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
> +
> +typedef struct {
> +	unsigned int pipe;
> +	unsigned int chan;
> +} cvmx_ilk_pipe_chan_t;
> +
> +#define CVMX_ILK_MAX_PIPES 45
> +/* Max number of channels allowed */
> +#define CVMX_ILK_MAX_CHANS 256
> +
> +extern int cvmx_ilk_chans[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
> +
> +typedef struct {
> +	unsigned int chan;
> +	unsigned int pknd;
> +} cvmx_ilk_chan_pknd_t;
> +
> +#define CVMX_ILK_MAX_PKNDS 16 /* must be <45 */
> +
> +typedef struct {
> +	int *chan_list; /* for discrete channels. or, must be null */
> +	unsigned int num_chans;
> +
> +	unsigned int chan_start; /* for continuous channels */
> +	unsigned int chan_end;
> +	unsigned int chan_step;
> +
> +	unsigned int clr_on_rd;
> +} cvmx_ilk_stats_ctrl_t;
> +
> +#define CVMX_ILK_MAX_CAL      288
> +#define CVMX_ILK_MAX_CAL_IDX  (CVMX_ILK_MAX_CAL / 8)
> +#define CVMX_ILK_TX_MIN_CAL   1
> +#define CVMX_ILK_RX_MIN_CAL   1
> +#define CVMX_ILK_CAL_GRP_SZ   8
> +#define CVMX_ILK_PIPE_BPID_SZ 7
> +#define CVMX_ILK_ENT_CTRL_SZ  2
> +#define CVMX_ILK_RX_FIFO_WM   0x200
> +
> +typedef enum { PIPE_BPID = 0, LINK, XOFF, XON } cvmx_ilk_cal_ent_ctrl_t;
> +
> +typedef struct {
> +	unsigned char pipe_bpid;
> +	cvmx_ilk_cal_ent_ctrl_t ent_ctrl;
> +} cvmx_ilk_cal_entry_t;
> +
> +typedef enum { CVMX_ILK_LPBK_DISA = 0, CVMX_ILK_LPBK_ENA } cvmx_ilk_lpbk_ena_t;
> +
> +typedef enum { CVMX_ILK_LPBK_INT = 0, CVMX_ILK_LPBK_EXT } cvmx_ilk_lpbk_mode_t;
> +
> +/**
> + * This header is placed in front of all received ILK look-aside mode packets
> + */
> +typedef union {
> +	u64 u64;
> +
> +	struct {
> +		u32 reserved_63_57 : 7;	  /* bits 63...57 */
> +		u32 nsp_cmd : 5;	  /* bits 56...52 */
> +		u32 nsp_flags : 4;	  /* bits 51...48 */
> +		u32 nsp_grp_id_upper : 6; /* bits 47...42 */
> +		u32 reserved_41_40 : 2;	  /* bits 41...40 */
> +		/* Protocol type, 1 for LA mode packet */
> +		u32 la_mode : 1;	  /* bit  39      */
> +		u32 nsp_grp_id_lower : 2; /* bits 38...37 */
> +		u32 nsp_xid_upper : 4;	  /* bits 36...33 */
> +		/* ILK channel number, 0 or 1 */
> +		u32 ilk_channel : 1;   /* bit  32      */
> +		u32 nsp_xid_lower : 8; /* bits 31...24 */
> +		/* Unpredictable, may be any value */
> +		u32 reserved_23_0 : 24; /* bits 23...0  */
> +	} s;
> +} cvmx_ilk_la_nsp_compact_hdr_t;
> +
> +typedef struct cvmx_ilk_LA_mode_struct {
> +	int ilk_LA_mode;
> +	int ilk_LA_mode_cal_ena;
> +} cvmx_ilk_LA_mode_t;
> +
> +extern cvmx_ilk_LA_mode_t cvmx_ilk_LA_mode[CVMX_NUM_ILK_INTF];
> +
> +int cvmx_ilk_use_la_mode(int interface, int channel);
> +int cvmx_ilk_start_interface(int interface, unsigned short num_lanes);
> +int cvmx_ilk_start_interface_la(int interface, unsigned char num_lanes);
> +int cvmx_ilk_set_pipe(int interface, int pipe_base, unsigned int pipe_len);
> +int cvmx_ilk_tx_set_channel(int interface, cvmx_ilk_pipe_chan_t *pch, unsigned int num_chs);
> +int cvmx_ilk_rx_set_pknd(int interface, cvmx_ilk_chan_pknd_t *chpknd, unsigned int num_pknd);
> +int cvmx_ilk_enable(int interface);
> +int cvmx_ilk_disable(int interface);
> +int cvmx_ilk_get_intf_ena(int interface);
> +int cvmx_ilk_get_chan_info(int interface, unsigned char **chans, unsigned char *num_chan);
> +cvmx_ilk_la_nsp_compact_hdr_t cvmx_ilk_enable_la_header(int ipd_port, int mode);
> +void cvmx_ilk_show_stats(int interface, cvmx_ilk_stats_ctrl_t *pstats);
> +int cvmx_ilk_cal_setup_rx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent, int hi_wm,
> +			  unsigned char cal_ena);
> +int cvmx_ilk_cal_setup_tx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent,
> +			  unsigned char cal_ena);
> +int cvmx_ilk_lpbk(int interface, cvmx_ilk_lpbk_ena_t enable, cvmx_ilk_lpbk_mode_t mode);
> +int cvmx_ilk_la_mode_enable_rx_calendar(int interface);
> +
> +#endif /* __CVMX_ILK_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
> new file mode 100644
> index 000000000000..cdff36fffb56
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
> @@ -0,0 +1,233 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Input Packet Data unit.
> + */
> +
> +#ifndef __CVMX_IPD_H__
> +#define __CVMX_IPD_H__
> +
> +#include "cvmx-pki.h"
> +
> +/* CSR typedefs have been moved to cvmx-ipd-defs.h */
> +
> +typedef cvmx_ipd_1st_mbuff_skip_t cvmx_ipd_mbuff_not_first_skip_t;
> +typedef cvmx_ipd_1st_next_ptr_back_t cvmx_ipd_second_next_ptr_back_t;
> +
> +typedef struct cvmx_ipd_tag_fields {
> +	u64 ipv6_src_ip : 1;
> +	u64 ipv6_dst_ip : 1;
> +	u64 ipv6_src_port : 1;
> +	u64 ipv6_dst_port : 1;
> +	u64 ipv6_next_header : 1;
> +	u64 ipv4_src_ip : 1;
> +	u64 ipv4_dst_ip : 1;
> +	u64 ipv4_src_port : 1;
> +	u64 ipv4_dst_port : 1;
> +	u64 ipv4_protocol : 1;
> +	u64 input_port : 1;
> +} cvmx_ipd_tag_fields_t;
> +
> +typedef struct cvmx_pip_port_config {
> +	u64 parse_mode;
> +	u64 tag_type;
> +	u64 tag_mode;
> +	cvmx_ipd_tag_fields_t tag_fields;
> +} cvmx_pip_port_config_t;
> +
> +typedef struct cvmx_ipd_config_struct {
> +	u64 first_mbuf_skip;
> +	u64 not_first_mbuf_skip;
> +	u64 ipd_enable;
> +	u64 enable_len_M8_fix;
> +	u64 cache_mode;
> +	cvmx_fpa_pool_config_t packet_pool;
> +	cvmx_fpa_pool_config_t wqe_pool;
> +	cvmx_pip_port_config_t port_config;
> +} cvmx_ipd_config_t;
> +
> +extern cvmx_ipd_config_t cvmx_ipd_cfg;
> +
> +/**
> + * Gets the fpa pool number of packet pool
> + */
> +static inline s64 cvmx_fpa_get_packet_pool(void)
> +{
> +	return (cvmx_ipd_cfg.packet_pool.pool_num);
> +}
> +
> +/**
> + * Gets the buffer size of packet pool buffer
> + */
> +static inline u64 cvmx_fpa_get_packet_pool_block_size(void)
> +{
> +	return (cvmx_ipd_cfg.packet_pool.buffer_size);
> +}
> +
> +/**
> + * Gets the buffer count of packet pool
> + */
> +static inline u64 cvmx_fpa_get_packet_pool_buffer_count(void)
> +{
> +	return (cvmx_ipd_cfg.packet_pool.buffer_count);
> +}
> +
> +/**
> + * Gets the fpa pool number of wqe pool
> + */
> +static inline s64 cvmx_fpa_get_wqe_pool(void)
> +{
> +	return (cvmx_ipd_cfg.wqe_pool.pool_num);
> +}
> +
> +/**
> + * Gets the buffer size of wqe pool buffer
> + */
> +static inline u64 cvmx_fpa_get_wqe_pool_block_size(void)
> +{
> +	return (cvmx_ipd_cfg.wqe_pool.buffer_size);
> +}
> +
> +/**
> + * Gets the buffer count of wqe pool
> + */
> +static inline u64 cvmx_fpa_get_wqe_pool_buffer_count(void)
> +{
> +	return (cvmx_ipd_cfg.wqe_pool.buffer_count);
> +}
> +
> +/**
> + * Sets the ipd related configuration in an internal structure which is
> + * then used for setting up the IPD hardware block
> + */
> +int cvmx_ipd_set_config(cvmx_ipd_config_t ipd_config);
> +
> +/**
> + * Gets the ipd related configuration from internal structure.
> + */
> +void cvmx_ipd_get_config(cvmx_ipd_config_t *ipd_config);
> +
> +/**
> + * Sets the internal FPA pool data structure for the packet buffer pool.
> + * @param pool	fpa pool number to use
> + * @param buffer_size	buffer size of pool
> + * @param buffer_count	number of buffers to allocate to pool
> + */
> +void cvmx_ipd_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
> +
> +/**
> + * Sets the internal FPA pool data structure for the wqe pool.
> + * @param pool	fpa pool number to use
> + * @param buffer_size	buffer size of pool
> + * @param buffer_count	number of buffers to allocate to pool
> + */
> +void cvmx_ipd_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
> +
> +/**
> + * Gets the FPA packet buffer pool parameters.
> + */
> +static inline void cvmx_fpa_get_packet_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
> +{
> +	if (pool)
> +		*pool = cvmx_ipd_cfg.packet_pool.pool_num;
> +	if (buffer_size)
> +		*buffer_size = cvmx_ipd_cfg.packet_pool.buffer_size;
> +	if (buffer_count)
> +		*buffer_count = cvmx_ipd_cfg.packet_pool.buffer_count;
> +}
> +
> +/**
> + * Sets the FPA packet buffer pool parameters.
> + */
> +static inline void cvmx_fpa_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
> +{
> +	cvmx_ipd_set_packet_pool_config(pool, buffer_size, buffer_count);
> +}
> +
> +/**
> + * Gets the FPA WQE pool parameters.
> + */
> +static inline void cvmx_fpa_get_wqe_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
> +{
> +	if (pool)
> +		*pool = cvmx_ipd_cfg.wqe_pool.pool_num;
> +	if (buffer_size)
> +		*buffer_size = cvmx_ipd_cfg.wqe_pool.buffer_size;
> +	if (buffer_count)
> +		*buffer_count = cvmx_ipd_cfg.wqe_pool.buffer_count;
> +}
> +
> +/**
> + * Sets the FPA WQE pool parameters.
> + */
> +static inline void cvmx_fpa_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
> +{
> +	cvmx_ipd_set_wqe_pool_config(pool, buffer_size, buffer_count);
> +}
> +
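
A minimal pool setup sketch using the wrappers above (pool numbers,
buffer sizes and counts are hypothetical and must match the board's FPA
layout):

	/* 2KB packet buffers in pool 0, 128-byte WQEs in pool 1 */
	cvmx_fpa_set_packet_pool_config(0, 2048, 1024);
	cvmx_fpa_set_wqe_pool_config(1, 128, 1024);
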
> +/**
> + * Configure IPD
> + *
> + * @param mbuff_size Packet buffer size in 8 byte words
> + * @param first_mbuff_skip
> + *                   Number of 8 byte words to skip in the first buffer
> + * @param not_first_mbuff_skip
> + *                   Number of 8 byte words to skip in each following buffer
> + * @param first_back Must be same as first_mbuff_skip / 128
> + * @param second_back
> + *                   Must be same as not_first_mbuff_skip / 128
> + * @param wqe_fpa_pool
> + *                   FPA pool to get work entries from
> + * @param cache_mode
> + * @param back_pres_enable_flag
> + *                   Enable or disable port back pressure at a global level.
> + *                   This should always be 1 as more accurate control can be
> + *                   found in IPD_PORTX_BP_PAGE_CNT[BP_ENB].
> + */
> +void cvmx_ipd_config(u64 mbuff_size, u64 first_mbuff_skip, u64 not_first_mbuff_skip, u64 first_back,
> +		     u64 second_back, u64 wqe_fpa_pool, cvmx_ipd_mode_t cache_mode,
> +		     u64 back_pres_enable_flag);
> +/**
> + * Enable IPD
> + */
> +void cvmx_ipd_enable(void);
> +
> +/**
> + * Disable IPD
> + */
> +void cvmx_ipd_disable(void);
> +
> +void __cvmx_ipd_free_ptr(void);
> +
> +void cvmx_ipd_set_packet_pool_buffer_count(u64 buffer_count);
> +void cvmx_ipd_set_wqe_pool_buffer_count(u64 buffer_count);
> +
> +/**
> + * Setup Random Early Drop on a specific input queue
> + *
> + * @param queue  Input queue to setup RED on (0-7)
> + * @param pass_thresh
> + *               Packets will begin slowly dropping when there are less than
> + *               this many packet buffers free in FPA 0.
> + * @param drop_thresh
> + *               All incoming packets will be dropped when there are less
> + *               than this many free packet buffers in FPA 0.
> + * @return Zero on success. Negative on failure
> + */
> +int cvmx_ipd_setup_red_queue(int queue, int pass_thresh, int drop_thresh);
> +
> +/**
> + * Setup Random Early Drop to automatically begin dropping packets.
> + *
> + * @param pass_thresh
> + *               Packets will begin slowly dropping when there are less than
> + *               this many packet buffers free in FPA 0.
> + * @param drop_thresh
> + *               All incoming packets will be dropped when there are less
> + *               than this many free packet buffers in FPA 0.
> + * @return Zero on success. Negative on failure
> + */
> +int cvmx_ipd_setup_red(int pass_thresh, int drop_thresh);
> +
> +#endif /*  __CVMX_IPD_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-packet.h b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
> new file mode 100644
> index 000000000000..f3cfe9c64f43
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
> @@ -0,0 +1,40 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Packet buffer defines.
> + */
> +
> +#ifndef __CVMX_PACKET_H__
> +#define __CVMX_PACKET_H__
> +
> +union cvmx_buf_ptr_pki {
> +	u64 u64;
> +	struct {
> +		u64 size : 16;
> +		u64 packet_outside_wqe : 1;
> +		u64 rsvd0 : 5;
> +		u64 addr : 42;
> +	};
> +};
> +
> +typedef union cvmx_buf_ptr_pki cvmx_buf_ptr_pki_t;
> +
> +/**
> + * This structure defines a buffer pointer on Octeon
> + */
> +union cvmx_buf_ptr {
> +	void *ptr;
> +	u64 u64;
> +	struct {
> +		u64 i : 1;
> +		u64 back : 4;
> +		u64 pool : 3;
> +		u64 size : 16;
> +		u64 addr : 40;
> +	} s;
> +};
> +
> +typedef union cvmx_buf_ptr cvmx_buf_ptr_t;
> +
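
The `back' field records how many 128-byte cache lines the start of the
underlying FPA buffer lies before `addr'; the hardware uses it when
freeing the buffer. Given a cvmx_buf_ptr_t buf, decoding looks like
this:

	/* virtual address of the packet data itself */
	void *data = cvmx_phys_to_ptr(buf.s.addr);

	/* virtual address of the start of the underlying FPA buffer */
	void *start = cvmx_phys_to_ptr(((buf.s.addr >> 7) - buf.s.back) << 7);
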
> +#endif /*  __CVMX_PACKET_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcie.h b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
> new file mode 100644
> index 000000000000..a819196c021c
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
> @@ -0,0 +1,279 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef __CVMX_PCIE_H__
> +#define __CVMX_PCIE_H__
> +
> +#define CVMX_PCIE_MAX_PORTS 4
> +#define CVMX_PCIE_PORTS                                                                            \
> +	((OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) ?                      \
> +		       CVMX_PCIE_MAX_PORTS :                                                       \
> +		       (OCTEON_IS_MODEL(OCTEON_CN70XX) ? 3 : 2))
> +
> +/*
> + * The physical memory base mapped by BAR1.  256MB at the end of the
> + * first 4GB.
> + */
> +#define CVMX_PCIE_BAR1_PHYS_BASE ((1ull << 32) - (1ull << 28))
> +#define CVMX_PCIE_BAR1_PHYS_SIZE BIT_ULL(28)
> +
> +/*
> + * The RC base of BAR1.  gen1 has a 39-bit BAR2, gen2 has a 41-bit BAR2;
> + * place BAR1 so it is the same for both.
> + */
> +#define CVMX_PCIE_BAR1_RC_BASE BIT_ULL(41)
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
> +		u64 reserved_49_61 : 13; /* Must be zero */
> +		u64 io : 1;		 /* 1 for IO space access */
> +		u64 did : 5;		 /* PCIe DID = 3 */
> +		u64 subdid : 3;		 /* PCIe SubDID = 1 */
> +		u64 reserved_38_39 : 2;	 /* Must be zero */
> +		u64 node : 2;		 /* Numa node number */
> +		u64 es : 2;		 /* Endian swap = 1 */
> +		u64 port : 2;		 /* PCIe port 0,1 */
> +		u64 reserved_29_31 : 3;	 /* Must be zero */
> +		u64 ty : 1;
> +		u64 bus : 8;
> +		u64 dev : 5;
> +		u64 func : 3;
> +		u64 reg : 12;
> +	} config;
> +	struct {
> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
> +		u64 reserved_49_61 : 13; /* Must be zero */
> +		u64 io : 1;		 /* 1 for IO space access */
> +		u64 did : 5;		 /* PCIe DID = 3 */
> +		u64 subdid : 3;		 /* PCIe SubDID = 2 */
> +		u64 reserved_38_39 : 2;	 /* Must be zero */
> +		u64 node : 2;		 /* Numa node number */
> +		u64 es : 2;		 /* Endian swap = 1 */
> +		u64 port : 2;		 /* PCIe port 0,1 */
> +		u64 address : 32;	 /* PCIe IO address */
> +	} io;
> +	struct {
> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
> +		u64 reserved_49_61 : 13; /* Must be zero */
> +		u64 io : 1;		 /* 1 for IO space access */
> +		u64 did : 5;		 /* PCIe DID = 3 */
> +		u64 subdid : 3;		 /* PCIe SubDID = 3-6 */
> +		u64 reserved_38_39 : 2;	 /* Must be zero */
> +		u64 node : 2;		 /* Numa node number */
> +		u64 address : 36;	 /* PCIe Mem address */
> +	} mem;
> +} cvmx_pcie_address_t;
> +
> +/**
> + * Return the Core virtual base address for PCIe IO access. IOs are
> + * read/written as an offset from this address.
> + *
> + * @param pcie_port PCIe port the IO is for
> + *
> + * @return 64bit Octeon IO base address for read/write
> + */
> +u64 cvmx_pcie_get_io_base_address(int pcie_port);
> +
> +/**
> + * Size of the IO address region returned at address
> + * cvmx_pcie_get_io_base_address()
> + *
> + * @param pcie_port PCIe port the IO is for
> + *
> + * @return Size of the IO window
> + */
> +u64 cvmx_pcie_get_io_size(int pcie_port);
> +
> +/**
> + * Return the Core virtual base address for PCIe MEM access. Memory
> is
> + * read/written as an offset from this address.
> + *
> + * @param pcie_port PCIe port the IO is for
> + *
> + * @return 64bit Octeon IO base address for read/write
> + */
> +u64 cvmx_pcie_get_mem_base_address(int pcie_port);
> +
> +/**
> + * Size of the Mem address region returned at address
> + * cvmx_pcie_get_mem_base_address()
> + *
> + * @param pcie_port PCIe port the IO is for
> + *
> + * @return Size of the Mem window
> + */
> +u64 cvmx_pcie_get_mem_size(int pcie_port);
> +
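
These window accessors combine with the mapped BARs in the obvious way;
e.g. a 32-bit MMIO read from a device BAR could look roughly like this
(bar_offset is a hypothetical offset into the port's memory window):

	u64 base = cvmx_pcie_get_mem_base_address(pcie_port);
	u32 val = cvmx_read64_uint32(base + bar_offset);
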
> +/**
> + * Initialize a PCIe port for use in host(RC) mode. It doesn't
> enumerate the bus.
> + *
> + * @param pcie_port PCIe port to initialize
> + *
> + * @return Zero on success
> + */
> +int cvmx_pcie_rc_initialize(int pcie_port);
> +
> +/**
> + * Shutdown a PCIe port and put it in reset
> + *
> + * @param pcie_port PCIe port to shutdown
> + *
> + * @return Zero on success
> + */
> +int cvmx_pcie_rc_shutdown(int pcie_port);
> +
> +/**
> + * Read 8bits from a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + *
> + * @return Result of the read
> + */
> +u8 cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg);
> +
> +/**
> + * Read 16bits from a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + *
> + * @return Result of the read
> + */
> +u16 cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg);
> +
> +/**
> + * Read 32bits from a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + *
> + * @return Result of the read
> + */
> +u32 cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg);
> +
> +/**
> + * Write 8bits to a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + * @param val       Value to write
> + */
> +void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, u8 val);
> +
> +/**
> + * Write 16bits to a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + * @param val       Value to write
> + */
> +void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, u16 val);
> +
> +/**
> + * Write 32bits to a Device's config space
> + *
> + * @param pcie_port PCIe port the device is on
> + * @param bus       Sub bus
> + * @param dev       Device ID
> + * @param fn        Device sub function
> + * @param reg       Register to access
> + * @param val       Value to write
> + */
> +void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, u32 val);
> +
> +/**
> + * Read a PCIe config space register indirectly. This is used for
> + * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
> + *
> + * @param pcie_port  PCIe port to read from
> + * @param cfg_offset Address to read
> + *
> + * @return Value read
> + */
> +u32 cvmx_pcie_cfgx_read(int pcie_port, u32 cfg_offset);
> +u32 cvmx_pcie_cfgx_read_node(int node, int pcie_port, u32 cfg_offset);
> +
> +/**
> + * Write a PCIe config space register indirectly. This is used for
> + * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
> + *
> + * @param pcie_port  PCIe port to write to
> + * @param cfg_offset Address to write
> + * @param val        Value to write
> + */
> +void cvmx_pcie_cfgx_write(int pcie_port, u32 cfg_offset, u32 val);
> +void cvmx_pcie_cfgx_write_node(int node, int pcie_port, u32 cfg_offset, u32 val);
> +
> +/**
> + * Write a 32bit value to the Octeon NPEI register space
> + *
> + * @param address Address to write to
> + * @param val     Value to write
> + */
> +static inline void cvmx_pcie_npei_write32(u64 address, u32 val)
> +{
> +	cvmx_write64_uint32(address ^ 4, val);
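> +	/* Read back to force the posted write to complete */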
> +	cvmx_read64_uint32(address ^ 4);
> +}
> +
> +/**
> + * Read a 32bit value from the Octeon NPEI register space
> + *
> + * @param address Address to read
> + * @return The result
> + */
> +static inline u32 cvmx_pcie_npei_read32(u64 address)
> +{
> +	return cvmx_read64_uint32(address ^ 4);
> +}
> +
> +/**
> + * Initialize a PCIe port for use in target(EP) mode.
> + *
> + * @param pcie_port PCIe port to initialize
> + *
> + * @return Zero on success
> + */
> +int cvmx_pcie_ep_initialize(int pcie_port);
> +
> +/**
> + * Wait for posted PCIe read/writes to reach the other side of
> + * the internal PCIe switch. This will ensure that core
> + * read/writes are posted before anything after this function
> + * is called. This may be necessary when writing to memory that
> + * will later be read using the DMA/PKT engines.
> + *
> + * @param pcie_port PCIe port to wait for
> + */
> +void cvmx_pcie_wait_for_pending(int pcie_port);
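
As an illustration of the intended ordering (a sketch; the buffer and the
DMA kick-off are hypothetical):

	/* Fill a buffer that the DMA/PKT engine will read later */
	buf[0] = 0x12345678;
	/* Make sure the posted writes have crossed the internal switch */
	cvmx_pcie_wait_for_pending(pcie_port);
	/* Only now start the DMA engine on that buffer */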
> +
> +/**
> + * Returns whether a PCIe port is in host or target mode.
> + *
> + * @param pcie_port PCIe port number (PEM number)
> + *
> + * @return 0 if PCIe port is in target mode, !0 if in host mode.
> + */
> +int cvmx_pcie_is_host_mode(int pcie_port);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip.h b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
> new file mode 100644
> index 000000000000..013f533fb7bb
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
> @@ -0,0 +1,1080 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Packet Input Processing unit.
> + */
> +
> +#ifndef __CVMX_PIP_H__
> +#define __CVMX_PIP_H__
> +
> +#include "cvmx-wqe.h"
> +#include "cvmx-pki.h"
> +#include "cvmx-helper-pki.h"
> +
> +#include "cvmx-helper.h"
> +#include "cvmx-helper-util.h"
> +#include "cvmx-pki-resources.h"
> +
> +#define CVMX_PIP_NUM_INPUT_PORTS 46
> +#define CVMX_PIP_NUM_WATCHERS	 8
> +
> +/*
> + * Encodes the different error and exception codes
> + */
> +typedef enum {
> +	CVMX_PIP_L4_NO_ERR = 0ull,
> +	/* 1  = TCP (UDP) packet not long enough to cover TCP (UDP) header */
> +	CVMX_PIP_L4_MAL_ERR = 1ull,
> +	/* 2  = TCP/UDP checksum failure */
> +	CVMX_PIP_CHK_ERR = 2ull,
> +	/* 3  = TCP/UDP length check (TCP/UDP length does not match IP length) */
> +	CVMX_PIP_L4_LENGTH_ERR = 3ull,
> +	/* 4  = illegal TCP/UDP port (either source or dest port is zero) */
> +	CVMX_PIP_BAD_PRT_ERR = 4ull,
> +	/* 8  = TCP flags = FIN only */
> +	CVMX_PIP_TCP_FLG8_ERR = 8ull,
> +	/* 9  = TCP flags = 0 */
> +	CVMX_PIP_TCP_FLG9_ERR = 9ull,
> +	/* 10 = TCP flags = FIN+RST+* */
> +	CVMX_PIP_TCP_FLG10_ERR = 10ull,
> +	/* 11 = TCP flags = SYN+URG+* */
> +	CVMX_PIP_TCP_FLG11_ERR = 11ull,
> +	/* 12 = TCP flags = SYN+RST+* */
> +	CVMX_PIP_TCP_FLG12_ERR = 12ull,
> +	/* 13 = TCP flags = SYN+FIN+* */
> +	CVMX_PIP_TCP_FLG13_ERR = 13ull
> +} cvmx_pip_l4_err_t;
> +
> +typedef enum {
> +	CVMX_PIP_IP_NO_ERR = 0ull,
> +	/* 1 = not IPv4 or IPv6 */
> +	CVMX_PIP_NOT_IP = 1ull,
> +	/* 2 = IPv4 header checksum violation */
> +	CVMX_PIP_IPV4_HDR_CHK = 2ull,
> +	/* 3 = malformed (packet not long enough to cover IP hdr) */
> +	CVMX_PIP_IP_MAL_HDR = 3ull,
> +	/* 4 = malformed (packet not long enough to cover len in IP hdr) */
> +	CVMX_PIP_IP_MAL_PKT = 4ull,
> +	/* 5 = TTL / hop count equal zero */
> +	CVMX_PIP_TTL_HOP = 5ull,
> +	/* 6 = IPv4 options / IPv6 early extension headers */
> +	CVMX_PIP_OPTS = 6ull
> +} cvmx_pip_ip_exc_t;
> +
> +/**
> + * NOTES
> + *       late collision (data received before collision)
> + *            late collisions cannot be detected by the receiver
> + *            they would appear as JAM bits which would appear as bad FCS
> + *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
> + */
> +typedef enum {
> +	/* No error */
> +	CVMX_PIP_RX_NO_ERR = 0ull,
> +	/* RGM+SPI      1 = partially received packet (buffering/bandwidth not adequate) */
> +	CVMX_PIP_PARTIAL_ERR = 1ull,
> +	/* RGM+SPI      2 = receive packet too large and truncated */
> +	CVMX_PIP_JABBER_ERR = 2ull,
> +	/* RGM          3 = max frame error (pkt len > max frame len) (with FCS error) */
> +	CVMX_PIP_OVER_FCS_ERR = 3ull,
> +	/* RGM+SPI      4 = max frame error (pkt len > max frame len) */
> +	CVMX_PIP_OVER_ERR = 4ull,
> +	/* RGM          5 = nibble error (data not byte multiple - 100M and 10M only) */
> +	CVMX_PIP_ALIGN_ERR = 5ull,
> +	/* RGM          6 = min frame error (pkt len < min frame len) (with FCS error) */
> +	CVMX_PIP_UNDER_FCS_ERR = 6ull,
> +	/* RGM          7 = FCS error */
> +	CVMX_PIP_GMX_FCS_ERR = 7ull,
> +	/* RGM+SPI      8 = min frame error (pkt len < min frame len) */
> +	CVMX_PIP_UNDER_ERR = 8ull,
> +	/* RGM          9 = Frame carrier extend error */
> +	CVMX_PIP_EXTEND_ERR = 9ull,
> +	/* XAUI         9 = Packet was terminated with an idle cycle */
> +	CVMX_PIP_TERMINATE_ERR = 9ull,
> +	/* RGM         10 = length mismatch (len did not match len in L2 length/type) */
> +	CVMX_PIP_LENGTH_ERR = 10ull,
> +	/* RGM         11 = Frame error (some or all data bits marked err) */
> +	CVMX_PIP_DAT_ERR = 11ull,
> +	/* SPI         11 = DIP4 error */
> +	CVMX_PIP_DIP_ERR = 11ull,
> +	/* RGM         12 = packet was not large enough to pass the skipper - no inspection could occur */
> +	CVMX_PIP_SKIP_ERR = 12ull,
> +	/* RGM         13 = studder error (data not repeated - 100M and 10M only) */
> +	CVMX_PIP_NIBBLE_ERR = 13ull,
> +	/* RGM+SPI     16 = FCS error */
> +	CVMX_PIP_PIP_FCS = 16L,
> +	/* RGM+SPI+PCI 17 = packet was not large enough to pass the skipper - no inspection could occur */
> +	CVMX_PIP_PIP_SKIP_ERR = 17L,
> +	/* RGM+SPI+PCI 18 = malformed l2 (packet not long enough to cover L2 hdr) */
> +	CVMX_PIP_PIP_L2_MAL_HDR = 18L,
> +	/* SGMII       47 = PUNY error (packet was 4B or less when FCS stripping is enabled) */
> +	CVMX_PIP_PUNY_ERR = 47L
> +	/* NOTES
> +	 *       xx = late collision (data received before collision)
> +	 *            late collisions cannot be detected by the receiver
> +	 *            they would appear as JAM bits which would appear as bad FCS
> +	 *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
> +	 */
> +} cvmx_pip_rcv_err_t;
> +
> +/**
> + * This defines the err_code field errors in the work Q entry
> + */
> +typedef union {
> +	cvmx_pip_l4_err_t l4_err;
> +	cvmx_pip_ip_exc_t ip_exc;
> +	cvmx_pip_rcv_err_t rcv_err;
> +} cvmx_pip_err_t;
> +
> +/**
> + * Status statistics for a port
> + */
> +typedef struct {
> +	u64 dropped_octets;
> +	u64 dropped_packets;
> +	u64 pci_raw_packets;
> +	u64 octets;
> +	u64 packets;
> +	u64 multicast_packets;
> +	u64 broadcast_packets;
> +	u64 len_64_packets;
> +	u64 len_65_127_packets;
> +	u64 len_128_255_packets;
> +	u64 len_256_511_packets;
> +	u64 len_512_1023_packets;
> +	u64 len_1024_1518_packets;
> +	u64 len_1519_max_packets;
> +	u64 fcs_align_err_packets;
> +	u64 runt_packets;
> +	u64 runt_crc_packets;
> +	u64 oversize_packets;
> +	u64 oversize_crc_packets;
> +	u64 inb_packets;
> +	u64 inb_octets;
> +	u64 inb_errors;
> +	u64 mcast_l2_red_packets;
> +	u64 bcast_l2_red_packets;
> +	u64 mcast_l3_red_packets;
> +	u64 bcast_l3_red_packets;
> +} cvmx_pip_port_status_t;
> +
> +/**
> + * Definition of the PIP custom header that can be prepended
> + * to a packet by external hardware.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 rawfull : 1;
> +		u64 reserved0 : 5;
> +		cvmx_pip_port_parse_mode_t parse_mode : 2;
> +		u64 reserved1 : 1;
> +		u64 skip_len : 7;
> +		u64 grpext : 2;
> +		u64 nqos : 1;
> +		u64 ngrp : 1;
> +		u64 ntt : 1;
> +		u64 ntag : 1;
> +		u64 qos : 3;
> +		u64 grp : 4;
> +		u64 rs : 1;
> +		cvmx_pow_tag_type_t tag_type : 2;
> +		u64 tag : 32;
> +	} s;
> +} cvmx_pip_pkt_inst_hdr_t;
> +
> +enum cvmx_pki_pcam_match {
> +	CVMX_PKI_PCAM_MATCH_IP,
> +	CVMX_PKI_PCAM_MATCH_IPV4,
> +	CVMX_PKI_PCAM_MATCH_IPV6,
> +	CVMX_PKI_PCAM_MATCH_TCP
> +};
> +
> +/* CSR typedefs have been moved to cvmx-pip-defs.h */
> +static inline int cvmx_pip_config_watcher(int index, int type, u16 match,
> +					  u16 mask, int grp, int qos)
> +{
> +	if (index >= CVMX_PIP_NUM_WATCHERS) {
> +		debug("ERROR: pip watcher %d is > than supported\n", index);
> +		return -1;
> +	}
> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
> +		/* Store in software for now; only program the entry when
> +		 * the watcher is enabled.
> +		 */
> +		if (type == CVMX_PIP_QOS_WATCH_PROTNH) {
> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L3_FLAGS;
> +			qos_watcher[index].data = (u32)(match << 16);
> +			qos_watcher[index].data_mask = (u32)(mask << 16);
> +			qos_watcher[index].advance = 0;
> +		} else if (type == CVMX_PIP_QOS_WATCH_TCP) {
> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
> +			qos_watcher[index].data = 0x060000;
> +			qos_watcher[index].data |= (u32)match;
> +			qos_watcher[index].data_mask = (u32)(mask);
> +			qos_watcher[index].advance = 0;
> +		} else if (type == CVMX_PIP_QOS_WATCH_UDP) {
> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
> +			qos_watcher[index].data = 0x110000;
> +			qos_watcher[index].data |= (u32)match;
> +			qos_watcher[index].data_mask = (u32)(mask);
> +			qos_watcher[index].advance = 0;
> +		} else if (type == 0x4 /* CVMX_PIP_QOS_WATCH_ETHERTYPE */) {
> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
> +			if (match == 0x8100) {
> +				debug("ERROR: default vlan entry already exists, can't set watcher\n");
> +				return -1;
> +			}
> +			qos_watcher[index].data = (u32)(match << 16);
> +			qos_watcher[index].data_mask = (u32)(mask << 16);
> +			qos_watcher[index].advance = 4;
> +		} else {
> +			debug("ERROR: Unsupported watcher type %d\n", type);
> +			return -1;
> +		}
> +		if (grp >= 32) {
> +			debug("ERROR: grp %d out of range for backward compat 78xx\n", grp);
> +			return -1;
> +		}
> +		qos_watcher[index].sso_grp = (u8)(grp << 3 | qos);
> +		qos_watcher[index].configured = 1;
> +	} else {
> +		/* Implement it later */
> +	}
> +	return 0;
> +}
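
For example, a caller could steer TCP destination port 80 traffic to SSO
group 2 at QoS 1 with something like this (the index and values are
hypothetical):

	/* Watcher 0: match L4 port 80 with a full 16-bit mask */
	if (cvmx_pip_config_watcher(0, CVMX_PIP_QOS_WATCH_TCP, 80, 0xffff, 2, 1))
		debug("watcher setup failed\n");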
> +
> +static inline int __cvmx_pip_set_tag_type(int node, int style, int tag_type,
> +					  int field)
> +{
> +	struct cvmx_pki_style_config style_cfg;
> +	int style_num;
> +	int pcam_offset;
> +	int bank;
> +	struct cvmx_pki_pcam_input pcam_input;
> +	struct cvmx_pki_pcam_action pcam_action;
> +
> +	/* All other style parameters remain the same, except tag type */
> +	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
> +	style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tag_type;
> +	style_num = cvmx_pki_style_alloc(node, -1);
> +	if (style_num < 0) {
> +		debug("ERROR: style not available to set tag type\n");
> +		return -1;
> +	}
> +	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
> +	memset(&pcam_input, 0, sizeof(pcam_input));
> +	memset(&pcam_action, 0, sizeof(pcam_action));
> +	pcam_input.style = style;
> +	pcam_input.style_mask = 0xff;
> +	if (field == CVMX_PKI_PCAM_MATCH_IP) {
> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
> +		pcam_input.field_mask = 0xff;
> +		pcam_input.data = 0x08000000;
> +		pcam_input.data_mask = 0xffff0000;
> +		pcam_action.pointer_advance = 4;
> +		/* legacy will write to all clusters */
> +		bank = 0;
> +		pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY,
> +							bank, CVMX_PKI_CLUSTER_ALL);
> +		if (pcam_offset < 0) {
> +			debug("ERROR: pcam entry not available to enable qos watcher\n");
> +			cvmx_pki_style_free(node, style_num);
> +			return -1;
> +		}
> +		pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
> +		pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
> +		pcam_action.style_add = (u8)(style_num - style);
> +		cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL,
> +					  pcam_input, pcam_action);
> +		field = CVMX_PKI_PCAM_MATCH_IPV6;
> +	}
> +	if (field == CVMX_PKI_PCAM_MATCH_IPV4) {
> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
> +		pcam_input.field_mask = 0xff;
> +		pcam_input.data = 0x08000000;
> +		pcam_input.data_mask = 0xffff0000;
> +		pcam_action.pointer_advance = 4;
> +	} else if (field == CVMX_PKI_PCAM_MATCH_IPV6) {
> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
> +		pcam_input.field_mask = 0xff;
> +		pcam_input.data = 0x86dd0000;
> +		pcam_input.data_mask = 0xffff0000;
> +		pcam_action.pointer_advance = 4;
> +	} else if (field == CVMX_PKI_PCAM_MATCH_TCP) {
> +		pcam_input.field = CVMX_PKI_PCAM_TERM_L4_PORT;
> +		pcam_input.field_mask = 0xff;
> +		pcam_input.data = 0x60000;
> +		pcam_input.data_mask = 0xff0000;
> +		pcam_action.pointer_advance = 0;
> +	}
> +	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
> +	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
> +	pcam_action.style_add = (u8)(style_num - style);
> +	bank = pcam_input.field & 0x01;
> +	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY,
> +						bank, CVMX_PKI_CLUSTER_ALL);
> +	if (pcam_offset < 0) {
> +		debug("ERROR: pcam entry not available to enable qos watcher\n");
> +		cvmx_pki_style_free(node, style_num);
> +		return -1;
> +	}
> +	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL,
> +				  pcam_input, pcam_action);
> +	return style_num;
> +}
> +
> +/* Only for legacy internal use */
> +static inline int __cvmx_pip_enable_watcher_78xx(int node, int index, int style)
> +{
> +	struct cvmx_pki_style_config style_cfg;
> +	struct cvmx_pki_qpg_config qpg_cfg;
> +	struct cvmx_pki_pcam_input pcam_input;
> +	struct cvmx_pki_pcam_action pcam_action;
> +	int style_num;
> +	int qpg_offset;
> +	int pcam_offset;
> +	int bank;
> +
> +	if (!qos_watcher[index].configured) {
> +		debug("ERROR: qos watcher %d should be configured before enable\n", index);
> +		return -1;
> +	}
> +	/* All other style parameters remain the same, except grp, qos and qpg base */
> +	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
> +	cvmx_pki_read_qpg_entry(node, style_cfg.parm_cfg.qpg_base, &qpg_cfg);
> +	qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
> +	qpg_cfg.grp_ok = qos_watcher[index].sso_grp;
> +	qpg_cfg.grp_bad = qos_watcher[index].sso_grp;
> +	qpg_offset = cvmx_helper_pki_set_qpg_entry(node, &qpg_cfg);
> +	if (qpg_offset == -1) {
> +		debug("Warning: no new qpg entry available to enable watcher\n");
> +		return -1;
> +	}
> +	/* Try to reserve the style; if it is not configured already,
> +	 * reserve and configure it.
> +	 */
> +	style_cfg.parm_cfg.qpg_base = qpg_offset;
> +	style_num = cvmx_pki_style_alloc(node, -1);
> +	if (style_num < 0) {
> +		debug("ERROR: style not available to enable qos watcher\n");
> +		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
> +		return -1;
> +	}
> +	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
> +	/* legacy will write to all clusters */
> +	bank = qos_watcher[index].field & 0x01;
> +	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY,
> +						bank, CVMX_PKI_CLUSTER_ALL);
> +	if (pcam_offset < 0) {
> +		debug("ERROR: pcam entry not available to enable qos watcher\n");
> +		cvmx_pki_style_free(node, style_num);
> +		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
> +		return -1;
> +	}
> +	memset(&pcam_input, 0, sizeof(pcam_input));
> +	memset(&pcam_action, 0, sizeof(pcam_action));
> +	pcam_input.style = style;
> +	pcam_input.style_mask = 0xff;
> +	pcam_input.field = qos_watcher[index].field;
> +	pcam_input.field_mask = 0xff;
> +	pcam_input.data = qos_watcher[index].data;
> +	pcam_input.data_mask = qos_watcher[index].data_mask;
> +	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
> +	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
> +	pcam_action.style_add = (u8)(style_num - style);
> +	pcam_action.pointer_advance = qos_watcher[index].advance;
> +	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL,
> +				  pcam_input, pcam_action);
> +	return 0;
> +}
> +
> +/**
> + * Configure an ethernet input port
> + *
> + * @param ipd_port Port number to configure
> + * @param port_cfg Port hardware configuration
> + * @param port_tag_cfg Port POW tagging configuration
> + */
> +static inline void cvmx_pip_config_port(u64 ipd_port, cvmx_pip_prt_cfgx_t port_cfg,
> +					cvmx_pip_prt_tagx_t port_tag_cfg)
> +{
> +	struct cvmx_pki_qpg_config qpg_cfg;
> +	int qpg_offset;
> +	u8 tcp_tag = 0xff;
> +	u8 ip_tag = 0xaa;
> +	int style, nstyle, n4style, n6style;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
> +		struct cvmx_pki_port_config pki_prt_cfg;
> +		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
> +
> +		cvmx_pki_get_port_config(ipd_port, &pki_prt_cfg);
> +		style = pki_prt_cfg.pkind_cfg.initial_style;
> +		if (port_cfg.s.ih_pri || port_cfg.s.vlan_len || port_cfg.s.pad_len)
> +			debug("Warning: 78xx: use different config for this option\n");
> +		pki_prt_cfg.style_cfg.parm_cfg.minmax_sel = port_cfg.s.len_chk_sel;
> +		pki_prt_cfg.style_cfg.parm_cfg.lenerr_en = port_cfg.s.lenerr_en;
> +		pki_prt_cfg.style_cfg.parm_cfg.maxerr_en = port_cfg.s.maxerr_en;
> +		pki_prt_cfg.style_cfg.parm_cfg.minerr_en = port_cfg.s.minerr_en;
> +		pki_prt_cfg.style_cfg.parm_cfg.fcs_chk = port_cfg.s.crc_en;
> +		if (port_cfg.s.grp_wat || port_cfg.s.qos_wat || port_cfg.s.grp_wat_47 ||
> +		    port_cfg.s.qos_wat_47) {
> +			u8 group_mask = (u8)(port_cfg.s.grp_wat | (u8)(port_cfg.s.grp_wat_47 << 4));
> +			u8 qos_mask = (u8)(port_cfg.s.qos_wat | (u8)(port_cfg.s.qos_wat_47 << 4));
> +			int i;
> +
> +			for (i = 0; i < CVMX_PIP_NUM_WATCHERS; i++) {
> +				if ((group_mask & (1 << i)) || (qos_mask & (1 << i)))
> +					__cvmx_pip_enable_watcher_78xx(xp.node, i, style);
> +			}
> +		}
> +		if (port_tag_cfg.s.tag_mode) {
> +			/* need to implement for 78xx */
> +			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
> +				cvmx_printf("Warning: mask tag is not supported in 78xx pass1\n");
> +		}
> +		if (port_cfg.s.tag_inc)
> +			debug("Warning: 78xx uses different method for tag generation\n");
> +		pki_prt_cfg.style_cfg.parm_cfg.rawdrp = port_cfg.s.rawdrp;
> +		pki_prt_cfg.pkind_cfg.parse_en.inst_hdr = port_cfg.s.inst_hdr;
> +		if (port_cfg.s.hg_qos)
> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_HIGIG;
> +		else if (port_cfg.s.qos_vlan)
> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
> +		else if (port_cfg.s.qos_diff)
> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_DIFFSERV;
> +		if (port_cfg.s.qos_vod)
> +			debug("Warning: 78xx needs pcam entries installed to achieve qos_vod\n");
> +		if (port_cfg.s.qos) {
> +			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
> +						&qpg_cfg);
> +			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
> +			qpg_cfg.grp_ok |= port_cfg.s.qos;
> +			qpg_cfg.grp_bad |= port_cfg.s.qos;
> +			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
> +			if (qpg_offset == -1)
> +				debug("Warning: no new qpg entry available, will not modify qos\n");
> +			else
> +				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
> +		}
> +		if (port_tag_cfg.s.grp != pki_dflt_sso_grp[xp.node].group) {
> +			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
> +						&qpg_cfg);
> +			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
> +			qpg_cfg.grp_ok |= (u8)(port_tag_cfg.s.grp << 3);
> +			qpg_cfg.grp_bad |= (u8)(port_tag_cfg.s.grp << 3);
> +			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
> +			if (qpg_offset == -1)
> +				debug("Warning: no new qpg entry available, will not modify group\n");
> +			else
> +				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
> +		}
> +		pki_prt_cfg.pkind_cfg.parse_en.dsa_en = port_cfg.s.dsa_en;
> +		pki_prt_cfg.pkind_cfg.parse_en.hg_en = port_cfg.s.higig_en;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_src =
> +			port_tag_cfg.s.ip6_src_flag | port_tag_cfg.s.ip4_src_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_dst =
> +			port_tag_cfg.s.ip6_dst_flag | port_tag_cfg.s.ip4_dst_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.ip_prot_nexthdr =
> +			port_tag_cfg.s.ip6_nxth_flag | port_tag_cfg.s.ip4_pctl_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_src =
> +			port_tag_cfg.s.ip6_sprt_flag | port_tag_cfg.s.ip4_sprt_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_dst =
> +			port_tag_cfg.s.ip6_dprt_flag | port_tag_cfg.s.ip4_dprt_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.input_port = port_tag_cfg.s.inc_prt_flag;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.first_vlan = port_tag_cfg.s.inc_vlan;
> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.second_vlan = port_tag_cfg.s.inc_vs;
> +
> +		if (port_tag_cfg.s.tcp6_tag_type == port_tag_cfg.s.tcp4_tag_type)
> +			tcp_tag = port_tag_cfg.s.tcp6_tag_type;
> +		if (port_tag_cfg.s.ip6_tag_type == port_tag_cfg.s.ip4_tag_type)
> +			ip_tag = port_tag_cfg.s.ip6_tag_type;
> +		pki_prt_cfg.style_cfg.parm_cfg.tag_type =
> +			(enum cvmx_sso_tag_type)port_tag_cfg.s.non_tag_type;
> +		if (tcp_tag == ip_tag && tcp_tag == port_tag_cfg.s.non_tag_type)
> +			pki_prt_cfg.style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tcp_tag;
> +		else if (tcp_tag == ip_tag) {
> +			/* allocate and copy style, modify tag type,
> +			 * pcam entry for ip6 && ip4 match,
> +			 * default is non tag type
> +			 */
> +			__cvmx_pip_set_tag_type(xp.node, style, ip_tag, CVMX_PKI_PCAM_MATCH_IP);
> +		} else if (ip_tag == port_tag_cfg.s.non_tag_type) {
> +			/* allocate and copy style, modify tag type,
> +			 * pcam entry for tcp6 & tcp4 match,
> +			 * default is non tag type
> +			 */
> +			__cvmx_pip_set_tag_type(xp.node, style, tcp_tag, CVMX_PKI_PCAM_MATCH_TCP);
> +		} else {
> +			if (ip_tag != 0xaa) {
> +				nstyle = __cvmx_pip_set_tag_type(xp.node, style, ip_tag,
> +								 CVMX_PKI_PCAM_MATCH_IP);
> +				if (tcp_tag != 0xff)
> +					__cvmx_pip_set_tag_type(xp.node, nstyle, tcp_tag,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +				else {
> +					n4style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
> +									  CVMX_PKI_PCAM_MATCH_IPV4);
> +					__cvmx_pip_set_tag_type(xp.node, n4style,
> +								port_tag_cfg.s.tcp4_tag_type,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +					n6style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
> +									  CVMX_PKI_PCAM_MATCH_IPV6);
> +					__cvmx_pip_set_tag_type(xp.node, n6style,
> +								port_tag_cfg.s.tcp6_tag_type,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +				}
> +			} else {
> +				n4style = __cvmx_pip_set_tag_type(xp.node, style,
> +								  port_tag_cfg.s.ip4_tag_type,
> +								  CVMX_PKI_PCAM_MATCH_IPV4);
> +				n6style = __cvmx_pip_set_tag_type(xp.node, style,
> +								  port_tag_cfg.s.ip6_tag_type,
> +								  CVMX_PKI_PCAM_MATCH_IPV6);
> +				if (tcp_tag != 0xff) {
> +					__cvmx_pip_set_tag_type(xp.node, n4style, tcp_tag,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +					__cvmx_pip_set_tag_type(xp.node, n6style, tcp_tag,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +				} else {
> +					__cvmx_pip_set_tag_type(xp.node, n4style,
> +								port_tag_cfg.s.tcp4_tag_type,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +					__cvmx_pip_set_tag_type(xp.node, n6style,
> +								port_tag_cfg.s.tcp6_tag_type,
> +								CVMX_PKI_PCAM_MATCH_TCP);
> +				}
> +			}
> +		}
> +		pki_prt_cfg.style_cfg.parm_cfg.qpg_dis_padd = !port_tag_cfg.s.portadd_en;
> +
> +		if (port_cfg.s.mode == 0x1)
> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
> +		else if (port_cfg.s.mode == 0x2)
> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LC_TO_LG;
> +		else
> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_NOTHING;
> +		/* This is only for backward compatibility; not all the
> +		 * parameters are supported in 78xx.
> +		 */
> +		cvmx_pki_set_port_config(ipd_port, &pki_prt_cfg);
> +	} else {
> +		if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +			int interface, index, pknd;
> +
> +			interface = cvmx_helper_get_interface_num(ipd_port);
> +			index = cvmx_helper_get_interface_index_num(ipd_port);
> +			pknd = cvmx_helper_get_pknd(interface, index);
> +
> +			ipd_port = pknd; /* overload port_num with pknd */
> +		}
> +		csr_wr(CVMX_PIP_PRT_CFGX(ipd_port), port_cfg.u64);
> +		csr_wr(CVMX_PIP_PRT_TAGX(ipd_port), port_tag_cfg.u64);
> +	}
> +}
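
A minimal sketch of how this legacy entry point is typically driven on the
pre-PKI path (the ipd_port value is hypothetical):

	cvmx_pip_prt_cfgx_t port_cfg;
	cvmx_pip_prt_tagx_t tag_cfg;

	port_cfg.u64 = csr_rd(CVMX_PIP_PRT_CFGX(ipd_port));
	tag_cfg.u64 = csr_rd(CVMX_PIP_PRT_TAGX(ipd_port));
	port_cfg.s.crc_en = 1;		/* enable FCS checking */
	tag_cfg.s.inc_prt_flag = 1;	/* include input port in the tag */
	cvmx_pip_config_port(ipd_port, port_cfg, tag_cfg);

On PKI-capable chips the same call is translated into the style/pkind
configuration shown above instead of raw CSR writes.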
> +
> +/**
> + * Configure the VLAN priority to QoS queue mapping.
> + *
> + * @param vlan_priority
> + *               VLAN priority (0-7)
> + * @param qos    QoS queue for packets matching this watcher
> + */
> +static inline void cvmx_pip_config_vlan_qos(u64 vlan_priority, u64 qos)
> +{
> +	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +		cvmx_pip_qos_vlanx_t pip_qos_vlanx;
> +
> +		pip_qos_vlanx.u64 = 0;
> +		pip_qos_vlanx.s.qos = qos;
> +		csr_wr(CVMX_PIP_QOS_VLANX(vlan_priority), pip_qos_vlanx.u64);
> +	}
> +}
> +
> +/**
> + * Configure the Diffserv to QoS queue mapping.
> + *
> + * @param diffserv Diffserv field value (0-63)
> + * @param qos      QoS queue for packets matching this watcher
> + */
> +static inline void cvmx_pip_config_diffserv_qos(u64 diffserv, u64 qos)
> +{
> +	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +		cvmx_pip_qos_diffx_t pip_qos_diffx;
> +
> +		pip_qos_diffx.u64 = 0;
> +		pip_qos_diffx.s.qos = qos;
> +		csr_wr(CVMX_PIP_QOS_DIFFX(diffserv), pip_qos_diffx.u64);
> +	}
> +}
> +
> +/**
> + * Get the status counters for a port for older non PKI chips.
> + *
> + * @param port_num Port number (ipd_port) to get statistics for.
> + * @param clear    Set to 1 to clear the counters after they are read
> + * @param status   Where to put the results.
> + */
> +static inline void cvmx_pip_get_port_stats(u64 port_num, u64 clear,
> +					   cvmx_pip_port_status_t *status)
> +{
> +	cvmx_pip_stat_ctl_t pip_stat_ctl;
> +	cvmx_pip_stat0_prtx_t stat0;
> +	cvmx_pip_stat1_prtx_t stat1;
> +	cvmx_pip_stat2_prtx_t stat2;
> +	cvmx_pip_stat3_prtx_t stat3;
> +	cvmx_pip_stat4_prtx_t stat4;
> +	cvmx_pip_stat5_prtx_t stat5;
> +	cvmx_pip_stat6_prtx_t stat6;
> +	cvmx_pip_stat7_prtx_t stat7;
> +	cvmx_pip_stat8_prtx_t stat8;
> +	cvmx_pip_stat9_prtx_t stat9;
> +	cvmx_pip_stat10_x_t stat10;
> +	cvmx_pip_stat11_x_t stat11;
> +	cvmx_pip_stat_inb_pktsx_t pip_stat_inb_pktsx;
> +	cvmx_pip_stat_inb_octsx_t pip_stat_inb_octsx;
> +	cvmx_pip_stat_inb_errsx_t pip_stat_inb_errsx;
> +	int interface = cvmx_helper_get_interface_num(port_num);
> +	int index = cvmx_helper_get_interface_index_num(port_num);
> +
> +	pip_stat_ctl.u64 = 0;
> +	pip_stat_ctl.s.rdclr = clear;
> +	csr_wr(CVMX_PIP_STAT_CTL, pip_stat_ctl.u64);
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +		int pknd = cvmx_helper_get_pknd(interface, index);
> +		/*
> +		 * PIP_STAT_CTL[MODE] 0 means pkind.
> +		 */
> +		stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
> +		stat1.u64 = csr_rd(CVMX_PIP_STAT1_X(pknd));
> +		stat2.u64 = csr_rd(CVMX_PIP_STAT2_X(pknd));
> +		stat3.u64 = csr_rd(CVMX_PIP_STAT3_X(pknd));
> +		stat4.u64 = csr_rd(CVMX_PIP_STAT4_X(pknd));
> +		stat5.u64 = csr_rd(CVMX_PIP_STAT5_X(pknd));
> +		stat6.u64 = csr_rd(CVMX_PIP_STAT6_X(pknd));
> +		stat7.u64 = csr_rd(CVMX_PIP_STAT7_X(pknd));
> +		stat8.u64 = csr_rd(CVMX_PIP_STAT8_X(pknd));
> +		stat9.u64 = csr_rd(CVMX_PIP_STAT9_X(pknd));
> +		stat10.u64 = csr_rd(CVMX_PIP_STAT10_X(pknd));
> +		stat11.u64 = csr_rd(CVMX_PIP_STAT11_X(pknd));
> +	} else {
> +		if (port_num >= 40) {
> +			stat0.u64 = csr_rd(CVMX_PIP_XSTAT0_PRTX(port_num));
> +			stat1.u64 = csr_rd(CVMX_PIP_XSTAT1_PRTX(port_num));
> +			stat2.u64 = csr_rd(CVMX_PIP_XSTAT2_PRTX(port_num));
> +			stat3.u64 = csr_rd(CVMX_PIP_XSTAT3_PRTX(port_num));
> +			stat4.u64 = csr_rd(CVMX_PIP_XSTAT4_PRTX(port_num));
> +			stat5.u64 = csr_rd(CVMX_PIP_XSTAT5_PRTX(port_num));
> +			stat6.u64 = csr_rd(CVMX_PIP_XSTAT6_PRTX(port_num));
> +			stat7.u64 = csr_rd(CVMX_PIP_XSTAT7_PRTX(port_num));
> +			stat8.u64 = csr_rd(CVMX_PIP_XSTAT8_PRTX(port_num));
> +			stat9.u64 = csr_rd(CVMX_PIP_XSTAT9_PRTX(port_num));
> +			if (OCTEON_IS_MODEL(OCTEON_CN6XXX)) {
> +				stat10.u64 = csr_rd(CVMX_PIP_XSTAT10_PRTX(port_num));
> +				stat11.u64 = csr_rd(CVMX_PIP_XSTAT11_PRTX(port_num));
> +			}
> +		} else {
> +			stat0.u64 = csr_rd(CVMX_PIP_STAT0_PRTX(port_num));
> +			stat1.u64 = csr_rd(CVMX_PIP_STAT1_PRTX(port_num));
> +			stat2.u64 = csr_rd(CVMX_PIP_STAT2_PRTX(port_num));
> +			stat3.u64 = csr_rd(CVMX_PIP_STAT3_PRTX(port_num));
> +			stat4.u64 = csr_rd(CVMX_PIP_STAT4_PRTX(port_num));
> +			stat5.u64 = csr_rd(CVMX_PIP_STAT5_PRTX(port_num));
> +			stat6.u64 = csr_rd(CVMX_PIP_STAT6_PRTX(port_num));
> +			stat7.u64 = csr_rd(CVMX_PIP_STAT7_PRTX(port_num));
> +			stat8.u64 = csr_rd(CVMX_PIP_STAT8_PRTX(port_num));
> +			stat9.u64 = csr_rd(CVMX_PIP_STAT9_PRTX(port_num));
> +			if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
> +				stat10.u64 = csr_rd(CVMX_PIP_STAT10_PRTX(port_num));
> +				stat11.u64 = csr_rd(CVMX_PIP_STAT11_PRTX(port_num));
> +			}
> +		}
> +	}
> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +		int pknd = cvmx_helper_get_pknd(interface, index);
> +
> +		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTS_PKNDX(pknd));
> +		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTS_PKNDX(pknd));
> +		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRS_PKNDX(pknd));
> +	} else {
> +		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTSX(port_num));
> +		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTSX(port_num));
> +		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port_num));
> +	}
> +
> +	status->dropped_octets = stat0.s.drp_octs;
> +	status->dropped_packets = stat0.s.drp_pkts;
> +	status->octets = stat1.s.octs;
> +	status->pci_raw_packets = stat2.s.raw;
> +	status->packets = stat2.s.pkts;
> +	status->multicast_packets = stat3.s.mcst;
> +	status->broadcast_packets = stat3.s.bcst;
> +	status->len_64_packets = stat4.s.h64;
> +	status->len_65_127_packets = stat4.s.h65to127;
> +	status->len_128_255_packets = stat5.s.h128to255;
> +	status->len_256_511_packets = stat5.s.h256to511;
> +	status->len_512_1023_packets = stat6.s.h512to1023;
> +	status->len_1024_1518_packets = stat6.s.h1024to1518;
> +	status->len_1519_max_packets = stat7.s.h1519;
> +	status->fcs_align_err_packets = stat7.s.fcs;
> +	status->runt_packets = stat8.s.undersz;
> +	status->runt_crc_packets = stat8.s.frag;
> +	status->oversize_packets = stat9.s.oversz;
> +	status->oversize_crc_packets = stat9.s.jabber;
> +	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
> +		status->mcast_l2_red_packets = stat10.s.mcast;
> +		status->bcast_l2_red_packets = stat10.s.bcast;
> +		status->mcast_l3_red_packets = stat11.s.mcast;
> +		status->bcast_l3_red_packets = stat11.s.bcast;
> +	}
> +	status->inb_packets = pip_stat_inb_pktsx.s.pkts;
> +	status->inb_octets = pip_stat_inb_octsx.s.octs;
> +	status->inb_errors = pip_stat_inb_errsx.s.errs;
> +}
> +
> +/**
> + * Get the status counters for a port.
> + *
> + * @param port_num Port number (ipd_port) to get statistics for.
> + * @param clear    Set to 1 to clear the counters after they are read
> + * @param status   Where to put the results.
> + */
> +static inline void cvmx_pip_get_port_status(u64 port_num, u64 clear,
> +					    cvmx_pip_port_status_t *status)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
> +		unsigned int node = cvmx_get_node_num();
> +
> +		cvmx_pki_get_port_stats(node, port_num,
> +					(struct cvmx_pki_port_stats *)status);
> +	} else {
> +		cvmx_pip_get_port_stats(port_num, clear, status);
> +	}
> +}
> +
> +/**
> + * Configure the hardware CRC engine
> + *
> + * @param interface Interface to configure (0 or 1)
> + * @param invert_result
> + *                 Invert the result of the CRC
> + * @param reflect  Reflect
> + * @param initialization_vector
> + *                 CRC initialization vector
> + */
> +static inline void cvmx_pip_config_crc(u64 interface, u64 invert_result,
> +				       u64 reflect, u32 initialization_vector)
> +{
> +	/* Only CN38XX & CN58XX */
> +}
> +
> +/**
> + * Clear all bits in a tag mask. This should be called on
> + * startup before any calls to cvmx_pip_tag_mask_set. Each bit
> + * set in the final mask represents a byte used in the packet for
> + * tag generation.
> + *
> + * @param mask_index Which tag mask to clear (0..3)
> + */
> +static inline void cvmx_pip_tag_mask_clear(u64 mask_index)
> +{
> +	u64 index;
> +	cvmx_pip_tag_incx_t pip_tag_incx;
> +
> +	pip_tag_incx.u64 = 0;
> +	pip_tag_incx.s.en = 0;
> +	for (index = mask_index * 16; index < (mask_index + 1) * 16; index++)
> +		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
> +}
> +
> +/**
> + * Sets a range of bits in the tag mask. The tag mask is used
> + * when the cvmx_pip_port_tag_cfg_t tag_mode is non zero.
> + * There are four separate masks that can be configured.
> + *
> + * @param mask_index Which tag mask to modify (0..3)
> + * @param offset     Offset into the bitmask to set bits at. Use the GCC macro
> + *                   offsetof() to determine the offsets into packet headers.
> + *                   For example, offsetof(ethhdr, protocol) returns the offset
> + *                   of the ethernet protocol field. The bitmask selects which
> + *                   bytes to include in the tag, with bit offset X selecting
> + *                   the byte at offset X from the beginning of the packet data.
> + * @param len        Number of bytes to include. Usually this is the sizeof()
> + *                   the field.
> + */
> +static inline void cvmx_pip_tag_mask_set(u64 mask_index, u64 offset, u64 len)
> +{
> +	while (len--) {
> +		cvmx_pip_tag_incx_t pip_tag_incx;
> +		u64 index = mask_index * 16 + offset / 8;
> +
> +		pip_tag_incx.u64 = csr_rd(CVMX_PIP_TAG_INCX(index));
> +		pip_tag_incx.s.en |= 0x80 >> (offset & 0x7);
> +		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
> +		offset++;
> +	}
> +}
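
Following the offsetof() hint above, a sketch that includes only the
ethertype bytes in tag generation (struct ethhdr with the Linux h_proto
field naming is assumed to be visible to the caller):

	cvmx_pip_tag_mask_clear(0);
	/* Hash the 2-byte ethertype at its offset within the L2 header */
	cvmx_pip_tag_mask_set(0, offsetof(struct ethhdr, h_proto), 2);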
> +
> +/**
> + * Set byte count for the max-sized frame check.
> + *
> + * @param interface   Which interface to set the limit on
> + * @param max_size    Byte count for the max-size frame check
> + */
> +static inline void cvmx_pip_set_frame_check(int interface, u32 max_size)
> +{
> +	cvmx_pip_frm_len_chkx_t frm_len;
> +
> +	/* max_size and min_size passed as 0 reset to the default values */
> +	if (max_size < 1536)
> +		max_size = 1536;
> +
> +	/* On CN68XX the frame check is enabled per pkind and
> +	 * PIP_PRT_CFG[len_chk_sel] selects which set of
> +	 * MAXLEN/MINLEN to use.
> +	 */
> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
> +		int port;
> +		int num_ports = cvmx_helper_ports_on_interface(interface);
> +
> +		for (port = 0; port < num_ports; port++) {
> +			if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
> +				int ipd_port;
> +
> +				ipd_port = cvmx_helper_get_ipd_port(interface, port);
> +				cvmx_pki_set_max_frm_len(ipd_port, max_size);
> +			} else {
> +				int pknd;
> +				int sel;
> +				cvmx_pip_prt_cfgx_t config;
> +
> +				pknd = cvmx_helper_get_pknd(interface, port);
> +				config.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
> +				sel = config.s.len_chk_sel;
> +				frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(sel));
> +				frm_len.s.maxlen = max_size;
> +				csr_wr(CVMX_PIP_FRM_LEN_CHKX(sel), frm_len.u64);
> +			}
> +		}
> +	} else if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
> +		/* On cn6xxx and cn7xxx models, PIP_FRM_LEN_CHK0 applies
> +		 * to all incoming traffic.
> +		 */
> +		frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(0));
> +		frm_len.s.maxlen = max_size;
> +		csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
> +	}
> +}
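
For instance, permitting jumbo frames on interface 0 could look like this
(the 9216-byte limit is an arbitrary example):

	cvmx_pip_set_frame_check(0, 9216);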
> +
> +/**
> + * Initialize Bit Select Extractor config. There are 8 bit positions and
> + * valid flags to be used with the corresponding extractor.
> + *
> + * @param bit     Bit Select Extractor to use
> + * @param pos     Which position to update
> + * @param val     The value to update the position with
> + */
> +static inline void cvmx_pip_set_bsel_pos(int bit, int pos, int val)
> +{
> +	cvmx_pip_bsel_ext_posx_t bsel_pos;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return;
> +
> +	if (bit < 0 || bit > 3) {
> +		debug("ERROR: cvmx_pip_set_bsel_pos: Invalid Bit-Select 
> Extractor (%d) passed\n",
> +		      bit);
> +		return;
> +	}
> +
> +	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
> +	switch (pos) {
> +	case 0:
> +		bsel_pos.s.pos0_val = 1;
> +		bsel_pos.s.pos0 = val & 0x7f;
> +		break;
> +	case 1:
> +		bsel_pos.s.pos1_val = 1;
> +		bsel_pos.s.pos1 = val & 0x7f;
> +		break;
> +	case 2:
> +		bsel_pos.s.pos2_val = 1;
> +		bsel_pos.s.pos2 = val & 0x7f;
> +		break;
> +	case 3:
> +		bsel_pos.s.pos3_val = 1;
> +		bsel_pos.s.pos3 = val & 0x7f;
> +		break;
> +	case 4:
> +		bsel_pos.s.pos4_val = 1;
> +		bsel_pos.s.pos4 = val & 0x7f;
> +		break;
> +	case 5:
> +		bsel_pos.s.pos5_val = 1;
> +		bsel_pos.s.pos5 = val & 0x7f;
> +		break;
> +	case 6:
> +		bsel_pos.s.pos6_val = 1;
> +		bsel_pos.s.pos6 = val & 0x7f;
> +		break;
> +	case 7:
> +		bsel_pos.s.pos7_val = 1;
> +		bsel_pos.s.pos7 = val & 0x7f;
> +		break;
> +	default:
> +		debug("Warning: cvmx_pip_set_bsel_pos: Invalid
> pos(%d)\n", pos);
> +		break;
> +	}
> +	csr_wr(CVMX_PIP_BSEL_EXT_POSX(bit), bsel_pos.u64);
> +}
> +
> +/**
> + * Initialize offset and skip values used by the bit select extractor.
> + *
> + * @param bit		Bit Select Extractor to use
> + * @param offset	Offset to add to the extractor mem addr to get the
> + *			final address into the lookup table.
> + * @param skip		Number of bytes to skip from the start of the
> + *			packet (0-64)
> + */
> +static inline void cvmx_pip_bsel_config(int bit, int offset, int skip)
> +{
> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return;
> +
> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
> +	bsel_cfg.s.offset = offset;
> +	bsel_cfg.s.skip = skip;
> +	csr_wr(CVMX_PIP_BSEL_EXT_CFGX(bit), bsel_cfg.u64);
> +}
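
Putting the two helpers together, a sketch of programming Bit Select
Extractor 0 (the positions and offsets here are hypothetical):

	int pos;

	/* Extract 8 consecutive bits starting at packet bit 16 */
	for (pos = 0; pos < 8; pos++)
		cvmx_pip_set_bsel_pos(0, pos, 16 + pos);
	/* Lookup table offset 0, skip the 14-byte L2 header */
	cvmx_pip_bsel_config(0, 0, 14);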
> +
> +/**
> + * Get the entry for the Bit Select Extractor Table.
> + * @param work   pointer to work queue entry
> + * @return       Index of the Bit Select Extractor Table
> + */
> +static inline int cvmx_pip_get_bsel_table_index(cvmx_wqe_t *work)
> +{
> +	int bit = cvmx_wqe_get_port(work) & 0x3;
> +	/* The Bit select table index, accumulated bit by bit below */
> +	int index = 0;
> +	int y;
> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
> +	cvmx_pip_bsel_ext_posx_t bsel_pos;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return -1;
> +
> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
> +	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
> +
> +	for (y = 0; y < 8; y++) {
> +		char *ptr = (char *)cvmx_phys_to_ptr(work->packet_ptr.s.addr);
> +		int bit_loc = 0;
> +		int bit;
> +
> +		ptr += bsel_cfg.s.skip;
> +		switch (y) {
> +		case 0:
> +			ptr += (bsel_pos.s.pos0 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos0 & 0x3);
> +			break;
> +		case 1:
> +			ptr += (bsel_pos.s.pos1 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos1 & 0x3);
> +			break;
> +		case 2:
> +			ptr += (bsel_pos.s.pos2 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos2 & 0x3);
> +			break;
> +		case 3:
> +			ptr += (bsel_pos.s.pos3 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos3 & 0x3);
> +			break;
> +		case 4:
> +			ptr += (bsel_pos.s.pos4 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos4 & 0x3);
> +			break;
> +		case 5:
> +			ptr += (bsel_pos.s.pos5 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos5 & 0x3);
> +			break;
> +		case 6:
> +			ptr += (bsel_pos.s.pos6 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos6 & 0x3);
> +			break;
> +		case 7:
> +			ptr += (bsel_pos.s.pos7 >> 3);
> +			bit_loc = 7 - (bsel_pos.s.pos7 & 0x3);
> +			break;
> +		}
> +		bit = (*ptr >> bit_loc) & 1;
> +		index |= bit << y;
> +	}
> +	index += bsel_cfg.s.offset;
> +	index &= 0x1ff;
> +	return index;
> +}
> +
> +static inline int cvmx_pip_get_bsel_qos(cvmx_wqe_t *work)
> +{
> +	int index = cvmx_pip_get_bsel_table_index(work);
> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return -1;
> +
> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
> +
> +	return bsel_tbl.s.qos;
> +}
> +
> +static inline int cvmx_pip_get_bsel_grp(cvmx_wqe_t *work)
> +{
> +	int index = cvmx_pip_get_bsel_table_index(work);
> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return -1;
> +
> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
> +
> +	return bsel_tbl.s.grp;
> +}
> +
> +static inline int cvmx_pip_get_bsel_tt(cvmx_wqe_t *work)
> +{
> +	int index = cvmx_pip_get_bsel_table_index(work);
> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return -1;
> +
> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
> +
> +	return bsel_tbl.s.tt;
> +}
> +
> +static inline int cvmx_pip_get_bsel_tag(cvmx_wqe_t *work)
> +{
> +	int index = cvmx_pip_get_bsel_table_index(work);
> +	int port = cvmx_wqe_get_port(work);
> +	int bit = port & 0x3;
> +	int upper_tag = 0;
> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
> +	cvmx_pip_prt_tagx_t prt_tag;
> +
> +	/* The bit select extractor is available in CN61XX and CN68XX
> +	 * pass2.0 onwards.
> +	 */
> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
> +		return -1;
> +
> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
> +
> +	prt_tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(port));
> +	if (prt_tag.s.inc_prt_flag == 0)
> +		upper_tag = bsel_cfg.s.upper_tag;
> +	return bsel_tbl.s.tag | ((bsel_cfg.s.tag << 8) & 0xff00) |
> +	       ((upper_tag << 16) & 0xffff0000);
> +}
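
In a work dispatch loop the getters above might be used like this (a
sketch; the work pointer comes from the caller's SSO get-work path):

	int qos = cvmx_pip_get_bsel_qos(work);
	int grp = cvmx_pip_get_bsel_grp(work);

	/* Both return -1 when the bit select extractor is not present */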
> +
> +#endif /*  __CVMX_PIP_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
> new file mode 100644
> index 000000000000..79b99b0bd7c2
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
> @@ -0,0 +1,157 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Resource management for PKI resources.
> + */
> +
> +#ifndef __CVMX_PKI_RESOURCES_H__
> +#define __CVMX_PKI_RESOURCES_H__
> +
> +/**
> + * This function allocates/reserves a style from the pool of global styles
> + * per node.
> + * @param node	 node to allocate style from.
> + * @param style	 style to allocate; if -1, the first available style from
> + *		 the style resource is allocated. If it is a positive number
> + *		 in range, that specified style is reserved.
> + * @return	 style number on success, -1 on failure.
> + */
> +int cvmx_pki_style_alloc(int node, int style);
> +
> +/**
> + * This function allocates/reserves a cluster group from the per node
> + * cluster group resources.
> + * @param node		node to allocate cluster group from.
> + * @param cl_grp	cluster group to allocate/reserve; if -1,
> + *			allocate any available cluster group.
> + * @return		cluster group number or -1 on failure
> + */
> +int cvmx_pki_cluster_grp_alloc(int node, int cl_grp);
> +
> +/**
> + * This function allocates/reserves clusters from the per node
> + * cluster resources.
> + * @param node		node to allocate clusters from.
> + * @param num_clusters	number of clusters that will be allocated
> + * @param cluster_mask	mask of clusters to allocate/reserve; if -1,
> + *			allocate any available clusters.
> + */
> +int cvmx_pki_cluster_alloc(int node, int num_clusters, u64 *cluster_mask);
> +
> +/**
> + * This function allocates/reserves a pcam entry from a node.
> + * @param node		node to allocate pcam entry from.
> + * @param index		index of pcam entry (0-191); if -1,
> + *			allocate any available pcam entry.
> + * @param bank		pcam bank to allocate/reserve the pcam entry from
> + * @param cluster_mask	mask of clusters from which the pcam entry is needed.
> + * @return		pcam entry, or -1 on failure
> + */
> +int cvmx_pki_pcam_entry_alloc(int node, int index, int bank, u64 cluster_mask);
> +
> +/**
> + * This function allocates/reserves QPG table entries per node.
> + * @param node		node number.
> + * @param base_offset	base offset in the qpg table. If -1, the first
> + *			available base offset is allocated. If it is a
> + *			positive number in range, that specified base offset
> + *			is reserved.
> + * @param count		number of consecutive qpg entries to allocate,
> + *			starting from the base offset.
> + * @return		qpg table base offset number on success, -1 on failure.
> + */
> +int cvmx_pki_qpg_entry_alloc(int node, int base_offset, int count);
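
These allocators share a common pattern: pass -1 (CVMX_PKI_FIND_AVAL_ENTRY)
to take the first free resource, or a specific index to reserve exactly that
one. A sketch with roll-back on partial failure (node and counts here are
hypothetical):

	int style = cvmx_pki_style_alloc(0, -1);	/* any free style */
	int qpg = cvmx_pki_qpg_entry_alloc(0, -1, 8);	/* 8 consecutive entries */

	if (style < 0 || qpg < 0) {
		if (style >= 0)
			cvmx_pki_style_free(0, style);
		if (qpg >= 0)
			cvmx_pki_qpg_entry_free(0, qpg, 8);
	}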
> +
> +/**
> + * This function frees a style from pool of global styles per node.
> + * @param node	 node to free style from.
> + * @param style	 style to free
> + * @return	 0 on success, -1 on failure.
> + */
> +int cvmx_pki_style_free(int node, int style);
> +
> +/**
> + * This function frees a cluster group from the per node
> + * cluster group resources.
> + * @param node		node to free cluster group from.
> + * @param cl_grp	cluster group to free
> + * @return		0 on success or -1 on failure
> + */
> +int cvmx_pki_cluster_grp_free(int node, int cl_grp);
> +
> +/**
> + * This function frees QPG table entries per node.
> + * @param node		node number.
> + * @param base_offset	base offset in the qpg table of the entries
> + *			to free.
> + * @param count		number of consecutive qpg entries to free,
> + *			starting from the base offset.
> + * @return		0 on success, -1 on failure.
> + */
> +int cvmx_pki_qpg_entry_free(int node, int base_offset, int count);
> +
> +/**
> + * This function frees clusters from the per node
> + * cluster resources.
> + * @param node		node to free clusters from.
> + * @param cluster_mask	mask of clusters that need freeing
> + * @return		0 on success or -1 on failure
> + */
> +int cvmx_pki_cluster_free(int node, u64 cluster_mask);
> +
> +/**
> + * This function frees a pcam entry from a node.
> + * @param node		node to free the pcam entry from.
> + * @param index		index of the pcam entry (0-191) to be freed.
> + * @param bank		pcam bank to free the pcam entry from
> + * @param cluster_mask	mask of clusters from which the pcam entry is freed.
> + * @return		0 on success or -1 on failure
> + */
> +int cvmx_pki_pcam_entry_free(int node, int index, int bank, u64 cluster_mask);
> +
> +/**
> + * This function allocates/reserves a bpid from the pool of global bpids
> + * per node.
> + * @param node	node to allocate bpid from.
> + * @param bpid	bpid to allocate; if -1, the first available bpid from the
> + *		bpid resource is allocated. If it is a positive number in
> + *		range, that specified bpid is reserved.
> + * @return	bpid number on success,
> + *		-1 on alloc failure,
> + *		-2 if the resource is already reserved.
> + */
> +int cvmx_pki_bpid_alloc(int node, int bpid);
> +
> +/**
> + * This function frees a bpid from the pool of global bpids per node.
> + * @param node	 node to free bpid from.
> + * @param bpid	 bpid to free
> + * @return	 0 on success, -1 on failure
> + */
> +int cvmx_pki_bpid_free(int node, int bpid);
> +
> +/**
> + * __cvmx_pki_global_rsrc_free(), declared at the end of this header,
> + * frees all the PKI software resources (clusters, styles, qpg_entry,
> + * pcam_entry etc) for the specified node.
> + */
> +
> +/**
> + * This function allocates/reserves an index from the pool of global
> + * MTAG-IDX per node.
> + * @param node	node to allocate index from.
> + * @param idx	index to allocate; if -1, the first available index is
> + *		allocated.
> + * @return	MTAG index number on success,
> + *		-1 on alloc failure,
> + *		-2 if the resource is already reserved.
> + */
> +int cvmx_pki_mtag_idx_alloc(int node, int idx);
> +
> +/**
> + * This function frees an index from the pool of global MTAG-IDX per node.
> + * @param node	 node to free the index from.
> + * @param idx	 index to free
> + * @return	 0 on success, -1 on failure
> + */
> +int cvmx_pki_mtag_idx_free(int node, int idx);
> +
> +void __cvmx_pki_global_rsrc_free(int node);
> +
> +#endif /* __CVMX_PKI_RESOURCES_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
> new file mode 100644
> index 000000000000..c1feb55a1f01
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
> @@ -0,0 +1,970 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Packet Input Data unit.
> + */
> +
> +#ifndef __CVMX_PKI_H__
> +#define __CVMX_PKI_H__
> +
> +#include "cvmx-fpa3.h"
> +#include "cvmx-helper-util.h"
> +#include "cvmx-helper-cfg.h"
> +#include "cvmx-error.h"
> +
> +/* PKI AURA and BPID count are equal to FPA AURA count */
> +#define CVMX_PKI_NUM_AURA	       (cvmx_fpa3_num_auras())
> +#define CVMX_PKI_NUM_BPID	       (cvmx_fpa3_num_auras())
> +#define CVMX_PKI_NUM_SSO_GROUP	       (cvmx_sso_num_xgrp())
> +#define CVMX_PKI_NUM_CLUSTER_GROUP_MAX 1
> +#define CVMX_PKI_NUM_CLUSTER_GROUP     (cvmx_pki_num_cl_grp())
> +#define CVMX_PKI_NUM_CLUSTER	       (cvmx_pki_num_clusters())
> +
> +/* FIXME: Reduce some of these values, convert to routines XXX */
> +#define CVMX_PKI_NUM_CHANNEL	    4096
> +#define CVMX_PKI_NUM_PKIND	    64
> +#define CVMX_PKI_NUM_INTERNAL_STYLE 256
> +#define CVMX_PKI_NUM_FINAL_STYLE    64
> +#define CVMX_PKI_NUM_QPG_ENTRY	    2048
> +#define CVMX_PKI_NUM_MTAG_IDX	    (32 / 4) /* 32 registers grouped by 4 */
> +#define CVMX_PKI_NUM_LTYPE	    32
> +#define CVMX_PKI_NUM_PCAM_BANK	    2
> +#define CVMX_PKI_NUM_PCAM_ENTRY	    192
> +#define CVMX_PKI_NUM_FRAME_CHECK    2
> +#define CVMX_PKI_NUM_BELTYPE	    32
> +#define CVMX_PKI_MAX_FRAME_SIZE	    65535
> +#define CVMX_PKI_FIND_AVAL_ENTRY    (-1)
> +#define CVMX_PKI_CLUSTER_ALL	    0xf
> +
> +#ifdef CVMX_SUPPORT_SEPARATE_CLUSTER_CONFIG
> +#define CVMX_PKI_TOTAL_PCAM_ENTRY					\
> +	((CVMX_PKI_NUM_CLUSTER) * (CVMX_PKI_NUM_PCAM_BANK) * (CVMX_PKI_NUM_PCAM_ENTRY))
> +#else
> +#define CVMX_PKI_TOTAL_PCAM_ENTRY (CVMX_PKI_NUM_PCAM_BANK *
> CVMX_PKI_NUM_PCAM_ENTRY)
> +#endif
> +
> +static inline unsigned int cvmx_pki_num_clusters(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
> +		return 2;
> +	return 4;
> +}
> +
> +static inline unsigned int cvmx_pki_num_cl_grp(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
> +	    OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 1;
> +	return 0;
> +}
> +
> +enum cvmx_pki_pkind_parse_mode {
> +	CVMX_PKI_PARSE_LA_TO_LG = 0,  /* Parse LA(L2) to LG */
> +	CVMX_PKI_PARSE_LB_TO_LG = 1,  /* Parse LB(custom) to LG */
> +	CVMX_PKI_PARSE_LC_TO_LG = 3,  /* Parse LC(L3) to LG */
> +	CVMX_PKI_PARSE_LG = 0x3f,     /* Parse LG */
> +	CVMX_PKI_PARSE_NOTHING = 0x7f /* Parse nothing */
> +};
> +
> +enum cvmx_pki_parse_mode_chg {
> +	CVMX_PKI_PARSE_NO_CHG = 0x0,
> +	CVMX_PKI_PARSE_SKIP_TO_LB = 0x1,
> +	CVMX_PKI_PARSE_SKIP_TO_LC = 0x3,
> +	CVMX_PKI_PARSE_SKIP_TO_LD = 0x7,
> +	CVMX_PKI_PARSE_SKIP_TO_LG = 0x3f,
> +	CVMX_PKI_PARSE_SKIP_ALL = 0x7f,
> +};
> +
> +enum cvmx_pki_l2_len_mode {
> +	PKI_L2_LENCHK_EQUAL_GREATER = 0,
> +	PKI_L2_LENCHK_EQUAL_ONLY
> +};
> +
> +enum cvmx_pki_cache_mode {
> +	CVMX_PKI_OPC_MODE_STT = 0LL,	  /* All blocks write through DRAM */
> +	CVMX_PKI_OPC_MODE_STF = 1LL,	  /* All blocks into L2 */
> +	CVMX_PKI_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
> +	CVMX_PKI_OPC_MODE_STF2_STT = 3LL  /* 1st, 2nd blocks L2, rest DRAM */
> +};
> +
> +/**
> + * Tag type definitions
> + */
> +enum cvmx_sso_tag_type {
> +	CVMX_SSO_TAG_TYPE_ORDERED = 0L,
> +	CVMX_SSO_TAG_TYPE_ATOMIC = 1L,
> +	CVMX_SSO_TAG_TYPE_UNTAGGED = 2L,
> +	CVMX_SSO_TAG_TYPE_EMPTY = 3L
> +};
> +
> +enum cvmx_pki_qpg_qos {
> +	CVMX_PKI_QPG_QOS_NONE = 0,
> +	CVMX_PKI_QPG_QOS_VLAN,
> +	CVMX_PKI_QPG_QOS_MPLS,
> +	CVMX_PKI_QPG_QOS_DSA_SRC,
> +	CVMX_PKI_QPG_QOS_DIFFSERV,
> +	CVMX_PKI_QPG_QOS_HIGIG,
> +};
> +
> +enum cvmx_pki_wqe_vlan {
> +	CVMX_PKI_USE_FIRST_VLAN = 0,
> +	CVMX_PKI_USE_SECOND_VLAN
> +};
> +
> +/**
> + * Controls how the PKI statistics counters are handled.
> + * The PKI_STAT*_X registers can be indexed either by port kind (pkind),
> + * or final style. (Does not apply to the PKI_STAT_INB* registers.)
> + *    0 = X represents the packet's pkind
> + *    1 = X represents the low 6-bits of the packet's final style
> + */
> +enum cvmx_pki_stats_mode { CVMX_PKI_STAT_MODE_PKIND, CVMX_PKI_STAT_MODE_STYLE };
> +
> +enum cvmx_pki_fpa_wait { CVMX_PKI_DROP_PKT, CVMX_PKI_WAIT_PKT };
> +
> +#define PKI_BELTYPE_E__NONE_M 0x0
> +#define PKI_BELTYPE_E__MISC_M 0x1
> +#define PKI_BELTYPE_E__IP4_M  0x2
> +#define PKI_BELTYPE_E__IP6_M  0x3
> +#define PKI_BELTYPE_E__TCP_M  0x4
> +#define PKI_BELTYPE_E__UDP_M  0x5
> +#define PKI_BELTYPE_E__SCTP_M 0x6
> +#define PKI_BELTYPE_E__SNAP_M 0x7
> +
> +/* PKI_BELTYPE_E_t */
> +enum cvmx_pki_beltype {
> +	CVMX_PKI_BELTYPE_NONE = PKI_BELTYPE_E__NONE_M,
> +	CVMX_PKI_BELTYPE_MISC = PKI_BELTYPE_E__MISC_M,
> +	CVMX_PKI_BELTYPE_IP4 = PKI_BELTYPE_E__IP4_M,
> +	CVMX_PKI_BELTYPE_IP6 = PKI_BELTYPE_E__IP6_M,
> +	CVMX_PKI_BELTYPE_TCP = PKI_BELTYPE_E__TCP_M,
> +	CVMX_PKI_BELTYPE_UDP = PKI_BELTYPE_E__UDP_M,
> +	CVMX_PKI_BELTYPE_SCTP = PKI_BELTYPE_E__SCTP_M,
> +	CVMX_PKI_BELTYPE_SNAP = PKI_BELTYPE_E__SNAP_M,
> +	CVMX_PKI_BELTYPE_MAX = CVMX_PKI_BELTYPE_SNAP
> +};
> +
> +struct cvmx_pki_frame_len {
> +	u16 maxlen;
> +	u16 minlen;
> +};
> +
> +struct cvmx_pki_tag_fields {
> +	u64 layer_g_src : 1;
> +	u64 layer_f_src : 1;
> +	u64 layer_e_src : 1;
> +	u64 layer_d_src : 1;
> +	u64 layer_c_src : 1;
> +	u64 layer_b_src : 1;
> +	u64 layer_g_dst : 1;
> +	u64 layer_f_dst : 1;
> +	u64 layer_e_dst : 1;
> +	u64 layer_d_dst : 1;
> +	u64 layer_c_dst : 1;
> +	u64 layer_b_dst : 1;
> +	u64 input_port : 1;
> +	u64 mpls_label : 1;
> +	u64 first_vlan : 1;
> +	u64 second_vlan : 1;
> +	u64 ip_prot_nexthdr : 1;
> +	u64 tag_sync : 1;
> +	u64 tag_spi : 1;
> +	u64 tag_gtp : 1;
> +	u64 tag_vni : 1;
> +};
> +
> +struct cvmx_pki_pkind_parse {
> +	u64 mpls_en : 1;
> +	u64 inst_hdr : 1;
> +	u64 lg_custom : 1;
> +	u64 fulc_en : 1;
> +	u64 dsa_en : 1;
> +	u64 hg2_en : 1;
> +	u64 hg_en : 1;
> +};
> +
> +struct cvmx_pki_pool_config {
> +	int pool_num;
> +	cvmx_fpa3_pool_t pool;
> +	u64 buffer_size;
> +	u64 buffer_count;
> +};
> +
> +struct cvmx_pki_qpg_config {
> +	int qpg_base;
> +	int port_add;
> +	int aura_num;
> +	int grp_ok;
> +	int grp_bad;
> +	int grptag_ok;
> +	int grptag_bad;
> +};
> +
> +struct cvmx_pki_aura_config {
> +	int aura_num;
> +	int pool_num;
> +	cvmx_fpa3_pool_t pool;
> +	cvmx_fpa3_gaura_t aura;
> +	int buffer_count;
> +};
> +
> +struct cvmx_pki_cluster_grp_config {
> +	int grp_num;
> +	u64 cluster_mask; /* Bit mask of clusters assigned to this cluster group */
> +};
> +
> +struct cvmx_pki_sso_grp_config {
> +	int group;
> +	int priority;
> +	int weight;
> +	int affinity;
> +	u64 core_mask;
> +	u8 core_mask_set;
> +};
> +
> +/* This is a per-style structure for configuring port parameters.
> + * It is a kind of profile which can be assigned to any port.
> + * If multiple ports are assigned the same style, be aware that
> + * modifying that style will modify the respective parameters for
> + * all the ports which are using this style.
> + */
> +struct cvmx_pki_style_parm {
> +	bool ip6_udp_opt;
> +	bool lenerr_en;
> +	bool maxerr_en;
> +	bool minerr_en;
> +	u8 lenerr_eqpad;
> +	u8 minmax_sel;
> +	bool qpg_dis_grptag;
> +	bool fcs_strip;
> +	bool fcs_chk;
> +	bool rawdrp;
> +	bool force_drop;
> +	bool nodrop;
> +	bool qpg_dis_padd;
> +	bool qpg_dis_grp;
> +	bool qpg_dis_aura;
> +	u16 qpg_base;
> +	enum cvmx_pki_qpg_qos qpg_qos;
> +	u8 qpg_port_sh;
> +	u8 qpg_port_msb;
> +	u8 apad_nip;
> +	u8 wqe_vs;
> +	enum cvmx_sso_tag_type tag_type;
> +	bool pkt_lend;
> +	u8 wqe_hsz;
> +	u16 wqe_skip;
> +	u16 first_skip;
> +	u16 later_skip;
> +	enum cvmx_pki_cache_mode cache_mode;
> +	u8 dis_wq_dat;
> +	u64 mbuff_size;
> +	bool len_lg;
> +	bool len_lf;
> +	bool len_le;
> +	bool len_ld;
> +	bool len_lc;
> +	bool len_lb;
> +	bool csum_lg;
> +	bool csum_lf;
> +	bool csum_le;
> +	bool csum_ld;
> +	bool csum_lc;
> +	bool csum_lb;
> +};
> +
> +/* This is a per-style structure for configuring a port's tag
> + * configuration. It is a kind of profile which can be assigned to any
> + * port. If multiple ports are assigned the same style, be aware that
> + * modifying that style will modify the respective parameters for all
> + * the ports which are using this style.
> + */
> +enum cvmx_pki_mtag_ptrsel {
> +	CVMX_PKI_MTAG_PTRSEL_SOP = 0,
> +	CVMX_PKI_MTAG_PTRSEL_LA = 8,
> +	CVMX_PKI_MTAG_PTRSEL_LB = 9,
> +	CVMX_PKI_MTAG_PTRSEL_LC = 10,
> +	CVMX_PKI_MTAG_PTRSEL_LD = 11,
> +	CVMX_PKI_MTAG_PTRSEL_LE = 12,
> +	CVMX_PKI_MTAG_PTRSEL_LF = 13,
> +	CVMX_PKI_MTAG_PTRSEL_LG = 14,
> +	CVMX_PKI_MTAG_PTRSEL_VL = 15,
> +};
> +
> +struct cvmx_pki_mask_tag {
> +	bool enable;
> +	int base;   /* CVMX_PKI_MTAG_PTRSEL_XXX */
> +	int offset; /* Offset from base. */
> +	u64 val;    /* Bitmask: 1 = enabled, 0 = disabled for each byte in the 64-byte array. */
> +};
> +
> +struct cvmx_pki_style_tag_cfg {
> +	struct cvmx_pki_tag_fields tag_fields;
> +	struct cvmx_pki_mask_tag mask_tag[4];
> +};
> +
> +struct cvmx_pki_style_config {
> +	struct cvmx_pki_style_parm parm_cfg;
> +	struct cvmx_pki_style_tag_cfg tag_cfg;
> +};
> +
> +struct cvmx_pki_pkind_config {
> +	u8 cluster_grp;
> +	bool fcs_pres;
> +	struct cvmx_pki_pkind_parse parse_en;
> +	enum cvmx_pki_pkind_parse_mode initial_parse_mode;
> +	u8 fcs_skip;
> +	u8 inst_skip;
> +	int initial_style;
> +	bool custom_l2_hdr;
> +	u8 l2_scan_offset;
> +	u64 lg_scan_offset;
> +};
> +
> +struct cvmx_pki_port_config {
> +	struct cvmx_pki_pkind_config pkind_cfg;
> +	struct cvmx_pki_style_config style_cfg;
> +};
> +
> +struct cvmx_pki_global_parse {
> +	u64 virt_pen : 1;
> +	u64 clg_pen : 1;
> +	u64 cl2_pen : 1;
> +	u64 l4_pen : 1;
> +	u64 il3_pen : 1;
> +	u64 l3_pen : 1;
> +	u64 mpls_pen : 1;
> +	u64 fulc_pen : 1;
> +	u64 dsa_pen : 1;
> +	u64 hg_pen : 1;
> +};
> +
> +struct cvmx_pki_tag_sec {
> +	u16 dst6;
> +	u16 src6;
> +	u16 dst;
> +	u16 src;
> +};
> +
> +struct cvmx_pki_global_config {
> +	u64 cluster_mask[CVMX_PKI_NUM_CLUSTER_GROUP_MAX];
> +	enum cvmx_pki_stats_mode stat_mode;
> +	enum cvmx_pki_fpa_wait fpa_wait;
> +	struct cvmx_pki_global_parse gbl_pen;
> +	struct cvmx_pki_tag_sec tag_secret;
> +	struct cvmx_pki_frame_len frm_len[CVMX_PKI_NUM_FRAME_CHECK];
> +	enum cvmx_pki_beltype ltype_map[CVMX_PKI_NUM_BELTYPE];
> +	int pki_enable;
> +};
> +
> +#define CVMX_PKI_PCAM_TERM_E_NONE_M	 0x0
> +#define CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M 0x2
> +#define CVMX_PKI_PCAM_TERM_E_HIGIGD_M	 0x4
> +#define CVMX_PKI_PCAM_TERM_E_HIGIG_M	 0x5
> +#define CVMX_PKI_PCAM_TERM_E_SMACH_M	 0x8
> +#define CVMX_PKI_PCAM_TERM_E_SMACL_M	 0x9
> +#define CVMX_PKI_PCAM_TERM_E_DMACH_M	 0xA
> +#define CVMX_PKI_PCAM_TERM_E_DMACL_M	 0xB
> +#define CVMX_PKI_PCAM_TERM_E_GLORT_M	 0x12
> +#define CVMX_PKI_PCAM_TERM_E_DSA_M	 0x13
> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M	 0x18
> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M	 0x19
> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M	 0x1A
> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M	 0x1B
> +#define CVMX_PKI_PCAM_TERM_E_MPLS0_M	 0x1E
> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M	 0x1F
> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M	 0x20
> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPML_M	 0x21
> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M	 0x22
> +#define CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M	 0x23
> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M	 0x24
> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M	 0x25
> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPML_M	 0x26
> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M	 0x27
> +#define CVMX_PKI_PCAM_TERM_E_LD_VNI_M	 0x28
> +#define CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M 0x2B
> +#define CVMX_PKI_PCAM_TERM_E_LF_SPI_M	 0x2E
> +#define CVMX_PKI_PCAM_TERM_E_L4_SPORT_M	 0x2f
> +#define CVMX_PKI_PCAM_TERM_E_L4_PORT_M	 0x30
> +#define CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M 0x39
> +
> +enum cvmx_pki_term {
> +	CVMX_PKI_PCAM_TERM_NONE = CVMX_PKI_PCAM_TERM_E_NONE_M,
> +	CVMX_PKI_PCAM_TERM_L2_CUSTOM = CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M,
> +	CVMX_PKI_PCAM_TERM_HIGIGD = CVMX_PKI_PCAM_TERM_E_HIGIGD_M,
> +	CVMX_PKI_PCAM_TERM_HIGIG = CVMX_PKI_PCAM_TERM_E_HIGIG_M,
> +	CVMX_PKI_PCAM_TERM_SMACH = CVMX_PKI_PCAM_TERM_E_SMACH_M,
> +	CVMX_PKI_PCAM_TERM_SMACL = CVMX_PKI_PCAM_TERM_E_SMACL_M,
> +	CVMX_PKI_PCAM_TERM_DMACH = CVMX_PKI_PCAM_TERM_E_DMACH_M,
> +	CVMX_PKI_PCAM_TERM_DMACL = CVMX_PKI_PCAM_TERM_E_DMACL_M,
> +	CVMX_PKI_PCAM_TERM_GLORT = CVMX_PKI_PCAM_TERM_E_GLORT_M,
> +	CVMX_PKI_PCAM_TERM_DSA = CVMX_PKI_PCAM_TERM_E_DSA_M,
> +	CVMX_PKI_PCAM_TERM_ETHTYPE0 = CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M,
> +	CVMX_PKI_PCAM_TERM_ETHTYPE1 = CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M,
> +	CVMX_PKI_PCAM_TERM_ETHTYPE2 = CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M,
> +	CVMX_PKI_PCAM_TERM_ETHTYPE3 = CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M,
> +	CVMX_PKI_PCAM_TERM_MPLS0 = CVMX_PKI_PCAM_TERM_E_MPLS0_M,
> +	CVMX_PKI_PCAM_TERM_L3_SIPHH = CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M,
> +	CVMX_PKI_PCAM_TERM_L3_SIPMH = CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M,
> +	CVMX_PKI_PCAM_TERM_L3_SIPML = CVMX_PKI_PCAM_TERM_E_L3_SIPML_M,
> +	CVMX_PKI_PCAM_TERM_L3_SIPLL = CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M,
> +	CVMX_PKI_PCAM_TERM_L3_FLAGS = CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M,
> +	CVMX_PKI_PCAM_TERM_L3_DIPHH = CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M,
> +	CVMX_PKI_PCAM_TERM_L3_DIPMH = CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M,
> +	CVMX_PKI_PCAM_TERM_L3_DIPML = CVMX_PKI_PCAM_TERM_E_L3_DIPML_M,
> +	CVMX_PKI_PCAM_TERM_L3_DIPLL = CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M,
> +	CVMX_PKI_PCAM_TERM_LD_VNI = CVMX_PKI_PCAM_TERM_E_LD_VNI_M,
> +	CVMX_PKI_PCAM_TERM_IL3_FLAGS = CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M,
> +	CVMX_PKI_PCAM_TERM_LF_SPI = CVMX_PKI_PCAM_TERM_E_LF_SPI_M,
> +	CVMX_PKI_PCAM_TERM_L4_PORT = CVMX_PKI_PCAM_TERM_E_L4_PORT_M,
> +	CVMX_PKI_PCAM_TERM_L4_SPORT = CVMX_PKI_PCAM_TERM_E_L4_SPORT_M,
> +	CVMX_PKI_PCAM_TERM_LG_CUSTOM = CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M
> +};
> +
> +#define CVMX_PKI_DMACH_SHIFT	  32
> +#define CVMX_PKI_DMACH_MASK	  cvmx_build_mask(16)
> +#define CVMX_PKI_DMACL_MASK	  CVMX_PKI_DATA_MASK_32
> +#define CVMX_PKI_DATA_MASK_32	  cvmx_build_mask(32)
> +#define CVMX_PKI_DATA_MASK_16	  cvmx_build_mask(16)
> +#define CVMX_PKI_DMAC_MATCH_EXACT cvmx_build_mask(48)
> +
> +struct cvmx_pki_pcam_input {
> +	u64 style;
> +	u64 style_mask; /* bits: 1-match, 0-dont care */
> +	enum cvmx_pki_term field;
> +	u32 field_mask; /* bits: 1-match, 0-dont care */
> +	u64 data;
> +	u64 data_mask; /* bits: 1-match, 0-dont care */
> +};
> +
> +struct cvmx_pki_pcam_action {
> +	enum cvmx_pki_parse_mode_chg parse_mode_chg;
> +	enum cvmx_pki_layer_type layer_type_set;
> +	int style_add;
> +	int parse_flag_set;
> +	int pointer_advance;
> +};
> +
> +struct cvmx_pki_pcam_config {
> +	int in_use;
> +	int entry_num;
> +	u64 cluster_mask;
> +	struct cvmx_pki_pcam_input pcam_input;
> +	struct cvmx_pki_pcam_action pcam_action;
> +};
> +
> +/**
> + * Status statistics for a port
> + */
> +struct cvmx_pki_port_stats {
> +	u64 dropped_octets;
> +	u64 dropped_packets;
> +	u64 pci_raw_packets;
> +	u64 octets;
> +	u64 packets;
> +	u64 multicast_packets;
> +	u64 broadcast_packets;
> +	u64 len_64_packets;
> +	u64 len_65_127_packets;
> +	u64 len_128_255_packets;
> +	u64 len_256_511_packets;
> +	u64 len_512_1023_packets;
> +	u64 len_1024_1518_packets;
> +	u64 len_1519_max_packets;
> +	u64 fcs_align_err_packets;
> +	u64 runt_packets;
> +	u64 runt_crc_packets;
> +	u64 oversize_packets;
> +	u64 oversize_crc_packets;
> +	u64 inb_packets;
> +	u64 inb_octets;
> +	u64 inb_errors;
> +	u64 mcast_l2_red_packets;
> +	u64 bcast_l2_red_packets;
> +	u64 mcast_l3_red_packets;
> +	u64 bcast_l3_red_packets;
> +};
> +
> +/**
> + * PKI Packet Instruction Header Structure (PKI_INST_HDR_S)
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 w : 1;    /* INST_HDR size: 0 = 2 bytes, 1 = 4 or 8 bytes */
> +		u64 raw : 1;  /* RAW packet indicator in WQE[RAW]: 1 = enable */
> +		u64 utag : 1; /* Use INST_HDR[TAG] to compute WQE[TAG]: 1 = enable */
> +		u64 uqpg : 1; /* Use INST_HDR[QPG] to compute QPG: 1 = enable */
> +		u64 rsvd1 : 1;
> +		u64 pm : 3; /* Packet parsing mode. Legal values = 0x0..0x7 */
> +		u64 sl : 8; /* Number of bytes in INST_HDR. */
> +		/* The following fields are not present, if INST_HDR[W] = 0: */
> +		u64 utt : 1; /* Use INST_HDR[TT] to compute WQE[TT]: 1 = enable */
> +		u64 tt : 2;  /* INST_HDR[TT] => WQE[TT], if INST_HDR[UTT] = 1 */
> +		u64 rsvd2 : 2;
> +		u64 qpg : 11; /* INST_HDR[QPG] => QPG, if INST_HDR[UQPG] = 1 */
> +		u64 tag : 32; /* INST_HDR[TAG] => WQE[TAG], if INST_HDR[UTAG] = 1 */
> +	} s;
> +} cvmx_pki_inst_hdr_t;
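
[Not part of the patch, just a note for reviewers: a minimal sketch of
how this union is meant to be used. The include name and all field
values below are assumptions for illustration, not taken from the
patch.]

	#include "mach/cvmx-pki.h"

	/* Build an extended INST_HDR that forces an ATOMIC tag of 0x1234 */
	static u64 example_inst_hdr(void)
	{
		cvmx_pki_inst_hdr_t hdr;

		hdr.u64 = 0;
		hdr.s.w = 1;	/* extended (4 or 8 byte) header */
		hdr.s.utag = 1;	/* WQE[TAG] taken from INST_HDR[TAG] */
		hdr.s.utt = 1;	/* WQE[TT] taken from INST_HDR[TT] */
		hdr.s.tt = CVMX_SSO_TAG_TYPE_ATOMIC;
		hdr.s.tag = 0x1234;
		return hdr.u64;
	}
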
> +
> +/**
> + * This function assigns the clusters to a group; later a pkind can be
> + * configured to use that group, depending on the number of clusters the
> + * pkind would use. A given cluster can only be enabled in a single
> + * cluster group. The number of clusters assigned to the group determines
> + * how many engines can work in parallel to process the packet. Each
> + * cluster can process x MPPS.
> + *
> + * @param node	Node
> + * @param cluster_group Group to attach clusters to.
> + * @param cluster_mask The mask of clusters which need to be assigned
> + *     to the group.
> + */
> +static inline int cvmx_pki_attach_cluster_to_group(int node, u64 cluster_group, u64 cluster_mask)
> +{
> +	cvmx_pki_icgx_cfg_t pki_cl_grp;
> +
> +	if (cluster_group >= CVMX_PKI_NUM_CLUSTER_GROUP) {
> +		debug("ERROR: config cluster group %d", (int)cluster_group);
> +		return -1;
> +	}
> +	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group));
> +	pki_cl_grp.s.clusters = cluster_mask;
> +	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group), pki_cl_grp.u64);
> +	return 0;
> +}
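
[Reviewer aside: typical usage of the two cluster-group helpers, with
made-up node/group/mask values -- a sketch only:]

	/* put clusters 0 and 1 into cluster group 0 on node 0, then
	 * let the group start parsing */
	cvmx_pki_attach_cluster_to_group(0, 0, 0x3);
	cvmx_pki_parse_enable(0, 0);
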
> +
> +static inline void cvmx_pki_write_global_parse(int node, struct cvmx_pki_global_parse gbl_pen)
> +{
> +	cvmx_pki_gbl_pen_t gbl_pen_reg;
> +
> +	gbl_pen_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_GBL_PEN);
> +	gbl_pen_reg.s.virt_pen = gbl_pen.virt_pen;
> +	gbl_pen_reg.s.clg_pen = gbl_pen.clg_pen;
> +	gbl_pen_reg.s.cl2_pen = gbl_pen.cl2_pen;
> +	gbl_pen_reg.s.l4_pen = gbl_pen.l4_pen;
> +	gbl_pen_reg.s.il3_pen = gbl_pen.il3_pen;
> +	gbl_pen_reg.s.l3_pen = gbl_pen.l3_pen;
> +	gbl_pen_reg.s.mpls_pen = gbl_pen.mpls_pen;
> +	gbl_pen_reg.s.fulc_pen = gbl_pen.fulc_pen;
> +	gbl_pen_reg.s.dsa_pen = gbl_pen.dsa_pen;
> +	gbl_pen_reg.s.hg_pen = gbl_pen.hg_pen;
> +	cvmx_write_csr_node(node, CVMX_PKI_GBL_PEN, gbl_pen_reg.u64);
> +}
> +
> +static inline void cvmx_pki_write_tag_secret(int node, struct cvmx_pki_tag_sec tag_secret)
> +{
> +	cvmx_pki_tag_secret_t tag_secret_reg;
> +
> +	tag_secret_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_TAG_SECRET);
> +	tag_secret_reg.s.dst6 = tag_secret.dst6;
> +	tag_secret_reg.s.src6 = tag_secret.src6;
> +	tag_secret_reg.s.dst = tag_secret.dst;
> +	tag_secret_reg.s.src = tag_secret.src;
> +	cvmx_write_csr_node(node, CVMX_PKI_TAG_SECRET, tag_secret_reg.u64);
> +}
> +
> +static inline void cvmx_pki_write_ltype_map(int node, enum cvmx_pki_layer_type layer,
> +					    enum cvmx_pki_beltype backend)
> +{
> +	cvmx_pki_ltypex_map_t ltype_map;
> +
> +	if (layer > CVMX_PKI_LTYPE_E_MAX || backend > CVMX_PKI_BELTYPE_MAX) {
> +		debug("ERROR: invalid ltype beltype mapping\n");
> +		return;
> +	}
> +	ltype_map.u64 = cvmx_read_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer));
> +	ltype_map.s.beltype = backend;
> +	cvmx_write_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer), ltype_map.u64);
> +}
> +
> +/**
> + * This function enables the cluster group to start parsing.
> + *
> + * @param node    Node number.
> + * @param cl_grp  Cluster group to enable parsing.
> + */
> +static inline int cvmx_pki_parse_enable(int node, unsigned int cl_grp)
> +{
> +	cvmx_pki_icgx_cfg_t pki_cl_grp;
> +
> +	if (cl_grp >= CVMX_PKI_NUM_CLUSTER_GROUP) {
> +		debug("ERROR: pki parse en group %d", (int)cl_grp);
> +		return -1;
> +	}
> +	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp));
> +	pki_cl_grp.s.pena = 1;
> +	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp), pki_cl_grp.u64);
> +	return 0;
> +}
> +
> +/**
> + * This function enables the PKI to send bpid level backpressure to CN78XX inputs.
> + *
> + * @param node Node number.
> + */
> +static inline void cvmx_pki_enable_backpressure(int node)
> +{
> +	cvmx_pki_buf_ctl_t pki_buf_ctl;
> +
> +	pki_buf_ctl.u64 = cvmx_read_csr_node(node, CVMX_PKI_BUF_CTL);
> +	pki_buf_ctl.s.pbp_en = 1;
> +	cvmx_write_csr_node(node, CVMX_PKI_BUF_CTL, pki_buf_ctl.u64);
> +}
> +
> +/**
> + * Clear the statistics counters for a port.
> + *
> + * @param node Node number.
> + * @param port Port number (ipd_port) to get statistics for.
> + *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
> + */
> +void cvmx_pki_clear_port_stats(int node, u64 port);
> +
> +/**
> + * Get the status counters for index from PKI.
> + *
> + * @param node	  Node number.
> + * @param index   PKIND number, if PKI_STATS_CTL:mode = 0 or
> + *     style(flow) number, if PKI_STATS_CTL:mode = 1
> + * @param status  Where to put the results.
> + */
> +void cvmx_pki_get_stats(int node, int index, struct cvmx_pki_port_stats *status);
> +
> +/**
> + * Get the statistics counters for a port.
> + *
> + * @param node	 Node number
> + * @param port   Port number (ipd_port) to get statistics for.
> + *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
> + * @param status Where to put the results.
> + */
> +static inline void cvmx_pki_get_port_stats(int node, u64 port, struct cvmx_pki_port_stats *status)
> +{
> +	int xipd = cvmx_helper_node_to_ipd_port(node, port);
> +	int xiface = cvmx_helper_get_interface_num(xipd);
> +	int index = cvmx_helper_get_interface_index_num(port);
> +	int pknd = cvmx_helper_get_pknd(xiface, index);
> +
> +	cvmx_pki_get_stats(node, pknd, status);
> +}
> +
> +/**
> + * Get the statistics counters for a flow represented by style in PKI.
> + *
> + * @param node Node number.
> + * @param style_num Style number to get statistics for.
> + *    Make sure PKI_STATS_CTL:mode is set to 1 for collecting per style/flow stats.
> + * @param status Where to put the results.
> + */
> +static inline void cvmx_pki_get_flow_stats(int node, u64 style_num,
> +					   struct cvmx_pki_port_stats *status)
> +{
> +	cvmx_pki_get_stats(node, style_num, status);
> +}
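
[Aside: how the stats helpers fit together, assuming PKI_STATS_CTL:mode
was left at 0 (per-pkind counters); the ipd_port value is hypothetical:]

	struct cvmx_pki_port_stats stats;

	cvmx_pki_get_port_stats(0 /* node */, ipd_port, &stats);
	printf("rx %llu pkts, %llu dropped\n",
	       (unsigned long long)stats.packets,
	       (unsigned long long)stats.dropped_packets);
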
> +
> +/**
> + * Show integrated PKI configuration.
> + *
> + * @param node	   node number
> + */
> +int cvmx_pki_config_dump(unsigned int node);
> +
> +/**
> + * Show integrated PKI statistics.
> + *
> + * @param node	   node number
> + */
> +int cvmx_pki_stats_dump(unsigned int node);
> +
> +/**
> + * Clear PKI statistics.
> + *
> + * @param node	   node number
> + */
> +void cvmx_pki_stats_clear(unsigned int node);
> +
> +/**
> + * This function enables PKI.
> + *
> + * @param node	 node to enable pki in.
> + */
> +void cvmx_pki_enable(int node);
> +
> +/**
> + * This function disables PKI.
> + *
> + * @param node	node to disable pki in.
> + */
> +void cvmx_pki_disable(int node);
> +
> +/**
> + * This function soft resets PKI.
> + *
> + * @param node	node to reset pki in.
> + */
> +void cvmx_pki_reset(int node);
> +
> +/**
> + * This function sets the clusters in PKI.
> + *
> + * @param node	node to set clusters in.
> + */
> +int cvmx_pki_setup_clusters(int node);
> +
> +/**
> + * This function reads global configuration of PKI block.
> + *
> + * @param node    Node number.
> + * @param gbl_cfg Pointer to struct to read global configuration
> + */
> +void cvmx_pki_read_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
> +
> +/**
> + * This function writes global configuration of PKI into hw.
> + *
> + * @param node    Node number.
> + * @param gbl_cfg Pointer to struct to global configuration
> + */
> +void cvmx_pki_write_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
> +
> +/**
> + * This function reads per pkind parameters in hardware which define how
> + * the incoming packet is processed.
> + *
> + * @param node   Node number.
> + * @param pkind  PKI supports a large number of incoming interfaces and
> + *     packets arriving on different interfaces or channels may want to
> + *     be processed differently. PKI uses the pkind to determine how the
> + *     incoming packet is processed.
> + * @param pkind_cfg	Pointer to struct containing pkind configuration
> + *     read from hardware.
> + */
> +int cvmx_pki_read_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
> +
> +/**
> + * This function writes per pkind parameters in hardware which define how
> + * the incoming packet is processed.
> + *
> + * @param node   Node number.
> + * @param pkind  PKI supports a large number of incoming interfaces and
> + *     packets arriving on different interfaces or channels may want to
> + *     be processed differently. PKI uses the pkind to determine how the
> + *     incoming packet is processed.
> + * @param pkind_cfg	Pointer to struct containing the pkind configuration
> + *     to be written to hardware.
> + */
> +int cvmx_pki_write_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
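
[Sketch of the read-modify-write pattern these two functions suggest;
the field chosen and the node/pkind values are illustrative only:]

	struct cvmx_pki_pkind_config cfg;

	if (cvmx_pki_read_pkind_config(node, pkind, &cfg) == 0) {
		cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
		cvmx_pki_write_pkind_config(node, pkind, &cfg);
	}
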
> +
> +/**
> + * This function reads parameters associated with tag configuration in hardware.
> + *
> + * @param node	 Node number.
> + * @param style  Style to configure tag for.
> + * @param cluster_mask  Mask of clusters to configure the style for.
> + * @param tag_cfg  Pointer to tag configuration struct.
> + */
> +void cvmx_pki_read_tag_config(int node, int style, u64 cluster_mask,
> +			      struct cvmx_pki_style_tag_cfg *tag_cfg);
> +
> +/**
> + * This function writes/configures parameters associated with tag
> + * configuration in hardware.
> + *
> + * @param node  Node number.
> + * @param style  Style to configure tag for.
> + * @param cluster_mask  Mask of clusters to configure the style for.
> + * @param tag_cfg  Pointer to tag configuration struct.
> + */
> +void cvmx_pki_write_tag_config(int node, int style, u64 cluster_mask,
> +			       struct cvmx_pki_style_tag_cfg *tag_cfg);
> +
> +/**
> + * This function reads parameters associated with style in hardware.
> + *
> + * @param node	Node number.
> + * @param style  Style to read from.
> + * @param cluster_mask  Mask of clusters style belongs to.
> + * @param style_cfg  Pointer to style config struct.
> + */
> +void cvmx_pki_read_style_config(int node, int style, u64 cluster_mask,
> +				struct cvmx_pki_style_config *style_cfg);
> +
> +/**
> + * This function writes/configures parameters associated with style in hardware.
> + *
> + * @param node  Node number.
> + * @param style  Style to configure.
> + * @param cluster_mask  Mask of clusters to configure the style for.
> + * @param style_cfg  Pointer to style config struct.
> + */
> +void cvmx_pki_write_style_config(int node, u64 style, u64 cluster_mask,
> +				 struct cvmx_pki_style_config *style_cfg);
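
[The same read-modify-write pattern applies to styles; a hedged sketch,
the field choice is arbitrary:]

	struct cvmx_pki_style_config style_cfg;

	cvmx_pki_read_style_config(node, style, cluster_mask, &style_cfg);
	style_cfg.parm_cfg.fcs_strip = true;	/* e.g. strip L2 FCS */
	cvmx_pki_write_style_config(node, style, cluster_mask, &style_cfg);
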
> +/**
> + * This function reads qpg entry at specified offset from qpg table
> + *
> + * @param node  Node number.
> + * @param offset  Offset in qpg table to read from.
> + * @param qpg_cfg  Pointer to structure containing qpg values
> + */
> +int cvmx_pki_read_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
> +
> +/**
> + * This function writes qpg entry at specified offset in qpg table
> + *
> + * @param node  Node number.
> + * @param offset  Offset in qpg table to write to.
> + * @param qpg_cfg  Pointer to structure containing qpg values.
> + */
> +void cvmx_pki_write_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
> +
> +/**
> + * This function writes a pcam entry at the given offset in the pcam
> + * table in hardware.
> + *
> + * @param node  Node number.
> + * @param index	 Offset in pcam table.
> + * @param cluster_mask  Mask of clusters in which to write the pcam entry.
> + * @param input  Input keys to pcam match passed as struct.
> + * @param action  PCAM match action passed as struct
> + */
> +int cvmx_pki_pcam_write_entry(int node, int index, u64 cluster_mask,
> +			      struct cvmx_pki_pcam_input input, struct cvmx_pki_pcam_action action);
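
[Illustration only: a PCAM rule that bumps the style of packets whose
first Ethertype matches. The per-TERM data/mask layout is not spelled
out in this patch, so the data/data_mask values below are placeholders,
not a statement of the real encoding:]

	struct cvmx_pki_pcam_input in = { 0 };
	struct cvmx_pki_pcam_action act = { 0 };

	in.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
	in.field_mask = 0xff;		/* match the field number exactly */
	in.data = /* ethertype, in its TERM-specific position */ 0;
	in.data_mask = /* corresponding match mask */ 0;
	act.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
	act.style_add = 1;		/* matching packets get style + 1 */
	cvmx_pki_pcam_write_entry(node, index, cluster_mask, in, act);
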
> +/**
> + * Configures the channel which will receive backpressure from the specified bpid.
> + * Each channel listens for backpressure on a specific bpid.
> + * Each bpid can backpressure multiple channels.
> + * @param node  Node number.
> + * @param bpid  BPID from which channel will receive backpressure.
> + * @param channel  Channel number to receive backpressure.
> + */
> +int cvmx_pki_write_channel_bpid(int node, int channel, int bpid);
> +
> +/**
> + * Configures the bpid on which the specified channel will
> + * assert backpressure.
> + * Each bpid receives backpressure from auras.
> + * Multiple auras can backpressure single bpid.
> + * @param node  Node number.
> + * @param aura  Number which will assert backpressure on that bpid.
> + * @param bpid  To assert backpressure on.
> + */
> +int cvmx_pki_write_aura_bpid(int node, int aura, int bpid);
> +
> +/**
> + * Enables/disables QoS (RED drop, tail drop & backpressure) for the
> + * PKI aura.
> + *
> + * @param node  Node number
> + * @param aura  To enable/disable QoS on.
> + * @param ena_red  Enable/disable RED drop between pass and drop level
> + *    1-enable 0-disable
> + * @param ena_drop  Enable/disable tail drop when max drop level exceeds
> + *    1-enable 0-disable
> + * @param ena_bp  Enable/disable asserting backpressure on bpid when
> + *    max DROP level exceeds.
> + *    1-enable 0-disable
> + */
> +int cvmx_pki_enable_aura_qos(int node, int aura, bool ena_red, bool ena_drop, bool ena_bp);
> +
> +/**
> + * This function gives the initial style used by that pkind.
> + *
> + * @param node  Node number.
> + * @param pkind  PKIND number.
> + */
> +int cvmx_pki_get_pkind_style(int node, int pkind);
> +
> +/**
> + * This function sets the wqe buffer mode. The first packet data buffer
> + * can reside either in the same buffer as the wqe OR it can go in a
> + * separate buffer. If the latter mode is used, make sure software
> + * allocates enough buffers to have the wqe separate from the packet data.
> + *
> + * @param node  Node number.
> + * @param style  Style to configure.
> + * @param pkt_outside_wqe
> + *    0 = The packet link pointer will be at word [FIRST_SKIP] immediately
> + *    followed by packet data, in the same buffer as the work queue entry.
> + *    1 = The packet link pointer will be at word [FIRST_SKIP] in a new
> + *    buffer separate from the work queue entry. Words following the
> + *    WQE in the same cache line will be zeroed, other lines in the
> + *    buffer will not be modified and will retain stale data (from the
> + *    buffer's previous use). This setting may decrease the peak PKI
> + *    performance by up to half on small packets.
> + */
> +void cvmx_pki_set_wqe_mode(int node, u64 style, bool pkt_outside_wqe);
> +
> +/**
> + * This function sets the packet mode of all ports and styles to
> + * little-endian. It changes write operations of packet data to L2C to
> + * be in little-endian. It does not change the WQE header format, which
> + * is properly endian neutral.
> + *
> + * @param node  Node number.
> + * @param style  Style to configure.
> + */
> +void cvmx_pki_set_little_endian(int node, u64 style);
> +
> +/**
> + * Enables/Disables L2 length error check and max & min frame length checks.
> + *
> + * @param node  Node number.
> + * @param pknd  PKIND to disable error for.
> + * @param l2len_err	 L2 length error check enable.
> + * @param maxframe_err	Max frame error check enable.
> + * @param minframe_err	Min frame error check enable.
> + *    1 -- Enable error checks
> + *    0 -- Disable error checks
> + */
> +void cvmx_pki_endis_l2_errs(int node, int pknd, bool l2len_err, bool maxframe_err,
> +			    bool minframe_err);
> +
> +/**
> + * Enables/Disables fcs check and fcs stripping on the pkind.
> + *
> + * @param node  Node number.
> + * @param pknd  PKIND to apply settings on.
> + * @param fcs_chk  Enable/disable fcs check.
> + *    1 -- enable fcs error check.
> + *    0 -- disable fcs error check.
> + * @param fcs_strip	 Strip L2 FCS bytes from packet, decrease WQE[LEN] by 4 bytes
> + *    1 -- strip L2 FCS.
> + *    0 -- Do not strip L2 FCS.
> + */
> +void cvmx_pki_endis_fcs_check(int node, int pknd, bool fcs_chk, bool fcs_strip);
> +
> +/**
> + * This function shows the qpg table entries, read directly from hardware.
> + *
> + * @param node  Node number.
> + * @param num_entry  Number of entries to print.
> + */
> +void cvmx_pki_show_qpg_entries(int node, u16 num_entry);
> +
> +/**
> + * This function shows the pcam table in raw format read directly from hardware.
> + *
> + * @param node  Node number.
> + */
> +void cvmx_pki_show_pcam_entries(int node);
> +
> +/**
> + * This function shows the valid entries in readable format,
> + * read directly from hardware.
> + *
> + * @param node  Node number.
> + */
> +void cvmx_pki_show_valid_pcam_entries(int node);
> +
> +/**
> + * This function shows the pkind attributes in readable format,
> + * read directly from hardware.
> + * @param node  Node number.
> + * @param pkind  PKIND number to print.
> + */
> +void cvmx_pki_show_pkind_attributes(int node, int pkind);
> +
> +/**
> + * @INTERNAL
> + * This function is called by cvmx_helper_shutdown() to extract all FPA
> + * buffers out of the PKI. After this function completes, all FPA buffers
> + * that were prefetched by PKI will be in the appropriate FPA pool.
> + * This function does not reset the PKI.
> + * WARNING: It is very important that PKI be reset soon after a call to
> + * this function.
> + *
> + * @param node  Node number.
> + */
> +void __cvmx_pki_free_ptr(int node);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
> new file mode 100644
> index 000000000000..1fb49b3fb6de
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef __CVMX_INTERNAL_PORTS_RANGE__
> +#define __CVMX_INTERNAL_PORTS_RANGE__
> +
> +/*
> + * Allocate a block of internal ports for the specified interface/port
> + *
> + * @param  interface  the interface for which the internal ports are requested
> + * @param  port       the index of the port within the interface for which
> + *                    the internal ports are requested.
> + * @param  count      the number of internal ports requested
> + *
> + * @return  0 on success
> + *         -1 on failure
> + */
> +int cvmx_pko_internal_ports_alloc(int interface, int port, u64 count);
> +
> +/*
> + * Free the internal ports associated with the specified interface/port
> + *
> + * @param  interface  the interface for which the internal ports are requested
> + * @param  port       the index of the port within the interface for which
> + *                    the internal ports are requested.
> + *
> + * @return  0 on success
> + *         -1 on failure
> + */
> +int cvmx_pko_internal_ports_free(int interface, int port);
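
[Usage sketch for the alloc/free pair above; the interface/port values
and the count of 4 are invented:]

	if (cvmx_pko_internal_ports_alloc(interface, port, 4) != 0)
		return -1;	/* no internal ports available */
	/* ... configure and use the ports ... */
	cvmx_pko_internal_ports_free(interface, port);
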
> +
> +/*
> + * Frees up all the allocated internal ports.
> + */
> +void cvmx_pko_internal_ports_range_free_all(void);
> +
> +void cvmx_pko_internal_ports_range_show(void);
> +
> +int __cvmx_pko_internal_ports_range_init(void);
> +
> +#endif
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
> new file mode 100644
> index 000000000000..5f8398904953
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
> @@ -0,0 +1,175 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef __CVMX_PKO3_QUEUE_H__
> +#define __CVMX_PKO3_QUEUE_H__
> +
> +/**
> + * @INTERNAL
> + *
> + * Find or allocate the global port/dq map table, which is a named
> + * table containing entries for all possible OCI nodes.
> + *
> + * The table's global pointer is stored in a core-local variable
> + * so that every core will call this function once, on first use.
> + */
> +int __cvmx_pko3_dq_table_setup(void);
> +
> +/*
> + * Get the base Descriptor Queue number for an IPD port on the local node
> + */
> +int cvmx_pko3_get_queue_base(int ipd_port);
> +
> +/*
> + * Get the number of Descriptor Queues assigned for an IPD port
> + */
> +int cvmx_pko3_get_queue_num(int ipd_port);
> +
> +/**
> + * Get L1/Port Queue number assigned to interface port.
> + *
> + * @param xiface is interface number.
> + * @param index is port index.
> + */
> +int cvmx_pko3_get_port_queue(int xiface, int index);
> +
> +/*
> + * Configure L3 through L5 Scheduler Queues and Descriptor Queues
> + *
> + * The Scheduler Queues in Levels 3 to 5 and Descriptor Queues are
> + * configured one-to-one or many-to-one to a single parent Scheduler
> + * Queue. The level of the parent SQ is specified in an argument,
> + * as well as the number of children to attach to the specific parent.
> + * The children can have fair round-robin or priority-based scheduling
> + * when multiple children are assigned a single parent.
> + *
> + * @param node is the OCI node location for the queues to be configured
> + * @param parent_level is the level of the parent queue, 2 to 5.
> + * @param parent_queue is the number of the parent Scheduler Queue
> + * @param child_base is the number of the first child SQ or DQ to
> + *     assign to the parent
> + * @param child_count is the number of consecutive children to assign
> + * @param stat_prio_count is the priority setting for the children L2 SQs
> + *
> + * If <stat_prio_count> is -1, the Ln children will have equal Round-Robin
> + * relationship with each other. If <stat_prio_count> is 0, all Ln children
> + * will be arranged in Weighted-Round-Robin, with the first having the most
> + * precedence. If <stat_prio_count> is between 1 and 8, it indicates how
> + * many children will have static priority settings (with the first having
> + * the most precedence), with the remaining Ln children having WRR scheduling.
> + *
> + * @returns 0 on success, -1 on failure.
> + *
> + * Note: this function supports the configuration of node-local units.
> + */
> +int cvmx_pko3_sq_config_children(unsigned int node, unsigned int parent_level,
> +				 unsigned int parent_queue, unsigned int child_base,
> +				 unsigned int child_count, int stat_prio_count);
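
[A hedged example of building one level of the scheduler tree with this
call: attach 8 L3 SQs under L2 SQ 0, the first 2 with static priority
and the remaining 6 in WRR. All queue numbers are invented:]

	if (cvmx_pko3_sq_config_children(node, 2 /* parent is an L2 SQ */,
					 0 /* parent_queue */,
					 0 /* child_base */,
					 8 /* child_count */,
					 2 /* static-priority children */) < 0)
		debug("pko3: SQ tree setup failed\n");
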
> +
> +/*
> + * @INTERNAL
> + * Register a range of Descriptor Queues with an interface port
> + *
> + * This function populates the DQ-to-IPD translation table
> + * used by the application to retrieve the DQ range (typically ordered
> + * by priority) for a given IPD-port, which is either a physical port,
> + * or a channel on a channelized interface (i.e. ILK).
> + *
> + * @param xiface is the physical interface number
> + * @param index is either a physical port on an interface
> + *     or a channel of an ILK interface
> + * @param dq_base is the first Descriptor Queue number in a consecutive range
> + * @param dq_count is the number of consecutive Descriptor Queues leading
> + *     to the same channel or port.
> + *
> + * Only a consecutive range of Descriptor Queues can be associated with any
> + * given channel/port, and usually they are ordered from most to least
> + * in terms of scheduling priority.
> + *
> + * Note: this function only populates the node-local translation table.
> + *
> + * @returns 0 on success, -1 on failure.
> + */
> +int __cvmx_pko3_ipd_dq_register(int xiface, int index, unsigned int dq_base, unsigned int dq_count);
> +
> +/**
> + * @INTERNAL
> + *
> + * Unregister DQs associated with CHAN_E (IPD port)
> + */
> +int __cvmx_pko3_ipd_dq_unregister(int xiface, int index);
> +
> +/*
> + * Map channel number in PKO
> + *
> + * @param node is to specify the node to which this configuration is applied.
> + * @param pq_num specifies the Port Queue (i.e. L1) queue number.
> + * @param l2_l3_q_num  specifies L2/L3 queue number.
> + * @param channel specifies the channel number to map to the queue.
> + *
> + * The channel assignment applies to L2 or L3 Shaper Queues depending
> + * on the setting of channel credit level.
> + *
> + * @return returns none.
> + */
> +void cvmx_pko3_map_channel(unsigned int node, unsigned int pq_num, unsigned int l2_l3_q_num,
> +			   u16 channel);
> +
> +int cvmx_pko3_pq_config(unsigned int node, unsigned int mac_num, unsigned int pq_num);
> +
> +int cvmx_pko3_port_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
> +			   unsigned int burst_bytes, int adj_bytes);
> +int cvmx_pko3_dq_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
> +			 unsigned int burst_bytes);
> +int cvmx_pko3_dq_pir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
> +			 unsigned int burst_bytes);
> +typedef enum {
> +	CVMX_PKO3_SHAPE_RED_STALL,
> +	CVMX_PKO3_SHAPE_RED_DISCARD,
> +	CVMX_PKO3_SHAPE_RED_PASS
> +} red_action_t;
> +
> +void cvmx_pko3_dq_red(unsigned int node, unsigned int dq_num, red_action_t red_act,
> +		      int8_t len_adjust);
> +
> +/**
> + * Macros to deal with short floating point numbers, where an unsigned
> + * exponent and an unsigned normalized mantissa are each represented
> + * with a defined field width.
> + *
> + */
> +#define CVMX_SHOFT_MANT_BITS 8
> +#define CVMX_SHOFT_EXP_BITS  4
> +
> +/**
> + * Convert short-float to an unsigned integer
> + * Note that it will lose precision.
> + */
> +#define CVMX_SHOFT_TO_U64(m, e)                                                            \
> +	((((1ull << CVMX_SHOFT_MANT_BITS) | (m)) << (e)) >> CVMX_SHOFT_MANT_BITS)
> +
> +/**
> + * Convert to short-float from an unsigned integer
> + */
> +#define CVMX_SHOFT_FROM_U64(ui, m, e)                                                      \
> +	do {                                                                               \
> +		unsigned long long u;                                                      \
> +		unsigned int k;                                                            \
> +		k = (1ull << (CVMX_SHOFT_MANT_BITS + 1)) - 1;                              \
> +		(e) = 0;                                                                   \
> +		u = (ui) << CVMX_SHOFT_MANT_BITS;                                          \
> +		while ((u) > k) {                                                          \
> +			u >>= 1;                                                           \
> +			(e)++;                                                             \
> +		}                                                                          \
> +		(m) = u & (k >> 1);                                                        \
> +	} while (0);
> +
> +#define CVMX_SHOFT_MAX()                                                                   \
> +	CVMX_SHOFT_TO_U64((1 << CVMX_SHOFT_MANT_BITS) - 1, (1 << CVMX_SHOFT_EXP_BITS) - 1)
> +#define CVMX_SHOFT_MIN() CVMX_SHOFT_TO_U64(0, 0)
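
[Quick round-trip example of the macros above; note the precision loss
once the value needs more significant bits than the mantissa width:]

	unsigned int m, e;
	unsigned long long approx;

	CVMX_SHOFT_FROM_U64(1000000ull, m, e);
	approx = CVMX_SHOFT_TO_U64(m, e);	/* ~999424, close to 1000000 */
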
> +
> +#endif /* __CVMX_PKO3_QUEUE_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow.h b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
> new file mode 100644
> index 000000000000..0680ca258f12
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
> @@ -0,0 +1,2991 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * Interface to the hardware Scheduling unit.
> + *
> + * New, starting with SDK 1.7.0, cvmx-pow supports a number of
> + * extended consistency checks. The define
> + * CVMX_ENABLE_POW_CHECKS controls the runtime insertion of POW
> + * internal state checks to find common programming errors. If
> + * CVMX_ENABLE_POW_CHECKS is not defined, checks are by default
> + * enabled. For example, cvmx-pow will check for the following
> + * program errors or POW state inconsistency.
> + * - Requesting a POW operation with an active tag switch in
> + *   progress.
> + * - Waiting for a tag switch to complete for an excessively
> + *   long period. This is normally a sign of an error in locking
> + *   causing deadlock.
> + * - Illegal tag switches from NULL_NULL.
> + * - Illegal tag switches from NULL.
> + * - Illegal deschedule request.
> + * - WQE pointer not matching the one attached to the core by
> + *   the POW.
> + */
> +
> +#ifndef __CVMX_POW_H__
> +#define __CVMX_POW_H__
> +
> +#include "cvmx-wqe.h"
> +#include "cvmx-pow-defs.h"
> +#include "cvmx-sso-defs.h"
> +#include "cvmx-address.h"
> +#include "cvmx-coremask.h"
> +
> +/* Default to having all POW consistency checks turned on */
> +#ifndef CVMX_ENABLE_POW_CHECKS
> +#define CVMX_ENABLE_POW_CHECKS 1
> +#endif
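
[Side note: given the #ifndef guard, a build can override this before
the header is pulled in; whether defining it to 0 actually disables the
checks depends on how the macro is tested (#if vs #ifdef), which is
outside this hunk:]

	#define CVMX_ENABLE_POW_CHECKS 0
	#include <mach/cvmx-pow.h>
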
> +
> +/*
> + * Special type for CN78XX style SSO groups (0..255),
> + * for distinction from legacy-style groups (0..15)
> + */
> +typedef union {
> +	u8 xgrp;
> +	/* Fields that map XGRP for backwards compatibility */
> +	struct __attribute__((__packed__)) {
> +		u8 group : 5;
> +		u8 qus : 3;
> +	};
> +} cvmx_xgrp_t;
> +
> +/*
> + * Software-only structure to convey a return value
> + * containing multiple information fields about a work queue entry
> + */
> +typedef struct {
> +	u32 tag;
> +	u16 index;
> +	u8 grp; /* Legacy group # (0..15) */
> +	u8 tag_type;
> +} cvmx_pow_tag_info_t;
> +
> +/**
> + * Wait flag values for pow functions.
> + */
> +typedef enum {
> +	CVMX_POW_WAIT = 1,
> +	CVMX_POW_NO_WAIT = 0,
> +} cvmx_pow_wait_t;
> +
> +/**
> + *  POW tag operations.  These are used in the data stored to the POW.
> + */
> +typedef enum {
> +	CVMX_POW_TAG_OP_SWTAG = 0L,
> +	CVMX_POW_TAG_OP_SWTAG_FULL = 1L,
> +	CVMX_POW_TAG_OP_SWTAG_DESCH = 2L,
> +	CVMX_POW_TAG_OP_DESCH = 3L,
> +	CVMX_POW_TAG_OP_ADDWQ = 4L,
> +	CVMX_POW_TAG_OP_UPDATE_WQP_GRP = 5L,
> +	CVMX_POW_TAG_OP_SET_NSCHED = 6L,
> +	CVMX_POW_TAG_OP_CLR_NSCHED = 7L,
> +	CVMX_POW_TAG_OP_NOP = 15L
> +} cvmx_pow_tag_op_t;
> +
> +/**
> + * This structure defines the store data on a store to POW
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 no_sched : 1;
> +		u64 unused : 2;
> +		u64 index : 13;
> +		cvmx_pow_tag_op_t op : 4;
> +		u64 unused2 : 2;
> +		u64 qos : 3;
> +		u64 grp : 4;
> +		cvmx_pow_tag_type_t type : 3;
> +		u64 tag : 32;
> +	} s_cn38xx;
> +	struct {
> +		u64 no_sched : 1;
> +		cvmx_pow_tag_op_t op : 4;
> +		u64 unused1 : 4;
> +		u64 index : 11;
> +		u64 unused2 : 1;
> +		u64 grp : 6;
> +		u64 unused3 : 3;
> +		cvmx_pow_tag_type_t type : 2;
> +		u64 tag : 32;
> +	} s_cn68xx_clr;
> +	struct {
> +		u64 no_sched : 1;
> +		cvmx_pow_tag_op_t op : 4;
> +		u64 unused1 : 12;
> +		u64 qos : 3;
> +		u64 unused2 : 1;
> +		u64 grp : 6;
> +		u64 unused3 : 3;
> +		cvmx_pow_tag_type_t type : 2;
> +		u64 tag : 32;
> +	} s_cn68xx_add;
> +	struct {
> +		u64 no_sched : 1;
> +		cvmx_pow_tag_op_t op : 4;
> +		u64 unused1 : 16;
> +		u64 grp : 6;
> +		u64 unused3 : 3;
> +		cvmx_pow_tag_type_t type : 2;
> +		u64 tag : 32;
> +	} s_cn68xx_other;
> +	struct {
> +		u64 rsvd_62_63 : 2;
> +		u64 grp : 10;
> +		cvmx_pow_tag_type_t type : 2;
> +		u64 no_sched : 1;
> +		u64 rsvd_48 : 1;
> +		cvmx_pow_tag_op_t op : 4;
> +		u64 rsvd_42_43 : 2;
> +		u64 wqp : 42;
> +	} s_cn78xx_other;
> +
> +} cvmx_pow_tag_req_t;
> +
> +union cvmx_pow_tag_req_addr {
> +	u64 u64;
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 addr : 40;
> +	} s;
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 tag : 32;
> +		u64 reserved_0_3 : 4;
> +	} s_cn78xx;
> +};
> +
> +/**
> + * This structure describes the address to load stuff from POW
> + */
> +typedef union {
> +	u64 u64;
> +	/**
> +	 * Address for new work request loads (did<2:0> == 0)
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_4_39 : 36;
> +		u64 wait : 1;
> +		u64 reserved_0_2 : 3;
> +	} swork;
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 reserved_32_35 : 4;
> +		u64 indexed : 1;
> +		u64 grouped : 1;
> +		u64 rtngrp : 1;
> +		u64 reserved_16_28 : 13;
> +		u64 index : 12;
> +		u64 wait : 1;
> +		u64 reserved_0_2 : 3;
> +	} swork_78xx;
> +	/**
> +	 * Address for loads to get POW internal status
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_10_39 : 30;
> +		u64 coreid : 4;
> +		u64 get_rev : 1;
> +		u64 get_cur : 1;
> +		u64 get_wqp : 1;
> +		u64 reserved_0_2 : 3;
> +	} sstatus;
> +	/**
> +	 * Address for loads to get 68XX SSO internal status
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_14_39 : 26;
> +		u64 coreid : 5;
> +		u64 reserved_6_8 : 3;
> +		u64 opcode : 3;
> +		u64 reserved_0_2 : 3;
> +	} sstatus_cn68xx;
> +	/**
> +	 * Address for memory loads to get POW internal state
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_16_39 : 24;
> +		u64 index : 11;
> +		u64 get_des : 1;
> +		u64 get_wqp : 1;
> +		u64 reserved_0_2 : 3;
> +	} smemload;
> +	/**
> +	 * Address for memory loads to get SSO internal state
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_20_39 : 20;
> +		u64 index : 11;
> +		u64 reserved_6_8 : 3;
> +		u64 opcode : 3;
> +		u64 reserved_0_2 : 3;
> +	} smemload_cn68xx;
> +	/**
> +	 * Address for index/pointer loads
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_9_39 : 31;
> +		u64 qosgrp : 4;
> +		u64 get_des_get_tail : 1;
> +		u64 get_rmt : 1;
> +		u64 reserved_0_2 : 3;
> +	} sindexload;
> +	/**
> +	 * Address for Index/Pointer loads to get SSO internal state
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_15_39 : 25;
> +		u64 qos_grp : 6;
> +		u64 reserved_6_8 : 3;
> +		u64 opcode : 3;
> +		u64 reserved_0_2 : 3;
> +	} sindexload_cn68xx;
> +	/**
> +	 * Address for NULL_RD request (did<2:0> == 4)
> +	 * when this is read, HW attempts to change the state to NULL if it is NULL_NULL
> +	 * (the hardware cannot switch from NULL_NULL to NULL if a POW entry is not available -
> +	 * software may need to recover by finishing another piece of work before a POW
> +	 * entry can ever become available.)
> +	 */
> +	struct {
> +		u64 mem_region : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 reserved_0_39 : 40;
> +	} snull_rd;
> +} cvmx_pow_load_addr_t;
> +
> +/**
> + * This structure defines the response to a load/SENDSINGLE to POW (except CSR reads)
> + */
> +typedef union {
> +	u64 u64;
> +	/**
> +	 * Response to new work request loads
> +	 */
> +	struct {
> +		u64 no_work : 1;
> +		u64 pend_switch : 1;
> +		u64 tt : 2;
> +		u64 reserved_58_59 : 2;
> +		u64 grp : 10;
> +		u64 reserved_42_47 : 6;
> +		u64 addr : 42;
> +	} s_work;
> +
> +	/**
> +	 * Result for a POW Status Load (when get_cur==0 and get_wqp==0)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 pend_switch : 1;
> +		u64 pend_switch_full : 1;
> +		u64 pend_switch_null : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_desched_switch : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_new_work : 1;
> +		u64 pend_new_work_wait : 1;
> +		u64 pend_null_rd : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 reserved_51 : 1;
> +		u64 pend_index : 11;
> +		u64 pend_grp : 4;
> +		u64 reserved_34_35 : 2;
> +		u64 pend_type : 2;
> +		u64 pend_tag : 32;
> +	} s_sstatus0;
> +	/**
> +	 * Result for a SSO Status Load (when opcode is SL_PENDTAG)
> +	 */
> +	struct {
> +		u64 pend_switch : 1;
> +		u64 pend_get_work : 1;
> +		u64 pend_get_work_wait : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_alloc_we : 1;
> +		u64 reserved_48_56 : 9;
> +		u64 pend_index : 11;
> +		u64 reserved_34_36 : 3;
> +		u64 pend_type : 2;
> +		u64 pend_tag : 32;
> +	} s_sstatus0_cn68xx;
> +	/**
> +	 * Result for a POW Status Load (when get_cur==0 and get_wqp==1)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 pend_switch : 1;
> +		u64 pend_switch_full : 1;
> +		u64 pend_switch_null : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_desched_switch : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_new_work : 1;
> +		u64 pend_new_work_wait : 1;
> +		u64 pend_null_rd : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 reserved_51 : 1;
> +		u64 pend_index : 11;
> +		u64 pend_grp : 4;
> +		u64 pend_wqp : 36;
> +	} s_sstatus1;
> +	/**
> +	 * Result for a SSO Status Load (when opcode is SL_PENDWQP)
> +	 */
> +	struct {
> +		u64 pend_switch : 1;
> +		u64 pend_get_work : 1;
> +		u64 pend_get_work_wait : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_alloc_we : 1;
> +		u64 reserved_51_56 : 6;
> +		u64 pend_index : 11;
> +		u64 reserved_38_39 : 2;
> +		u64 pend_wqp : 38;
> +	} s_sstatus1_cn68xx;
> +
> +	struct {
> +		u64 pend_switch : 1;
> +		u64 pend_get_work : 1;
> +		u64 pend_get_work_wait : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_alloc_we : 1;
> +		u64 reserved_56 : 1;
> +		u64 prep_index : 12;
> +		u64 reserved_42_43 : 2;
> +		u64 pend_tag : 42;
> +	} s_sso_ppx_pendwqp_cn78xx;
> +	/**
> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==0)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 link_index : 11;
> +		u64 index : 11;
> +		u64 grp : 4;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_sstatus2;
> +	/**
> +	 * Result for a SSO Status Load (when opcode is SL_TAG)
> +	 */
> +	struct {
> +		u64 reserved_57_63 : 7;
> +		u64 index : 11;
> +		u64 reserved_45 : 1;
> +		u64 grp : 6;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_sstatus2_cn68xx;
> +
> +	struct {
> +		u64 tailc : 1;
> +		u64 reserved_60_62 : 3;
> +		u64 index : 12;
> +		u64 reserved_46_47 : 2;
> +		u64 grp : 10;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 tt : 2;
> +		u64 tag : 32;
> +	} s_sso_ppx_tag_cn78xx;
> +	/**
> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==1)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 revlink_index : 11;
> +		u64 index : 11;
> +		u64 grp : 4;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_sstatus3;
> +	/**
> +	 * Result for a SSO Status Load (when opcode is SL_WQP)
> +	 */
> +	struct {
> +		u64 reserved_58_63 : 6;
> +		u64 index : 11;
> +		u64 reserved_46 : 1;
> +		u64 grp : 6;
> +		u64 reserved_38_39 : 2;
> +		u64 wqp : 38;
> +	} s_sstatus3_cn68xx;
> +
> +	struct {
> +		u64 reserved_58_63 : 6;
> +		u64 grp : 10;
> +		u64 reserved_42_47 : 6;
> +		u64 tag : 42;
> +	} s_sso_ppx_wqp_cn78xx;
> +	/**
> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==0)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 link_index : 11;
> +		u64 index : 11;
> +		u64 grp : 4;
> +		u64 wqp : 36;
> +	} s_sstatus4;
> +	/**
> +	 * Result for a SSO Status Load (when opcode is SL_LINKS)
> +	 */
> +	struct {
> +		u64 reserved_46_63 : 18;
> +		u64 index : 11;
> +		u64 reserved_34 : 1;
> +		u64 grp : 6;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 reserved_24_25 : 2;
> +		u64 revlink_index : 11;
> +		u64 reserved_11_12 : 2;
> +		u64 link_index : 11;
> +	} s_sstatus4_cn68xx;
> +
> +	struct {
> +		u64 tailc : 1;
> +		u64 reserved_60_62 : 3;
> +		u64 index : 12;
> +		u64 reserved_38_47 : 10;
> +		u64 grp : 10;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 reserved_25 : 1;
> +		u64 revlink_index : 12;
> +		u64 link_index_vld : 1;
> +		u64 link_index : 12;
> +	} s_sso_ppx_links_cn78xx;
> +	/**
> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==1)
> +	 */
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 revlink_index : 11;
> +		u64 index : 11;
> +		u64 grp : 4;
> +		u64 wqp : 36;
> +	} s_sstatus5;
> +	/**
> +	 * Result For POW Memory Load (get_des == 0 and get_wqp == 0)
> +	 */
> +	struct {
> +		u64 reserved_51_63 : 13;
> +		u64 next_index : 11;
> +		u64 grp : 4;
> +		u64 reserved_35 : 1;
> +		u64 tail : 1;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_smemload0;
> +	/**
> +	 * Result For SSO Memory Load (opcode is ML_TAG)
> +	 */
> +	struct {
> +		u64 reserved_38_63 : 26;
> +		u64 tail : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_smemload0_cn68xx;
> +
> +	struct {
> +		u64 reserved_39_63 : 25;
> +		u64 tail : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s_sso_iaq_ppx_tag_cn78xx;
> +	/**
> +	 * Result For POW Memory Load (get_des == 0 and get_wqp == 1)
> +	 */
> +	struct {
> +		u64 reserved_51_63 : 13;
> +		u64 next_index : 11;
> +		u64 grp : 4;
> +		u64 wqp : 36;
> +	} s_smemload1;
> +	/**
> +	 * Result For SSO Memory Load (opcode is ML_WQPGRP)
> +	 */
> +	struct {
> +		u64 reserved_48_63 : 16;
> +		u64 nosched : 1;
> +		u64 reserved_46 : 1;
> +		u64 grp : 6;
> +		u64 reserved_38_39 : 2;
> +		u64 wqp : 38;
> +	} s_smemload1_cn68xx;
> +
> +	/**
> +	 * Entry structures for the CN7XXX chips.
> +	 */
> +	struct {
> +		u64 reserved_39_63 : 25;
> +		u64 tailc : 1;
> +		u64 tail : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 tt : 2;
> +		u64 tag : 32;
> +	} s_sso_ientx_tag_cn78xx;
> +
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 head : 1;
> +		u64 nosched : 1;
> +		u64 reserved_56_59 : 4;
> +		u64 grp : 8;
> +		u64 reserved_42_47 : 6;
> +		u64 wqp : 42;
> +	} s_sso_ientx_wqpgrp_cn73xx;
> +
> +	struct {
> +		u64 reserved_62_63 : 2;
> +		u64 head : 1;
> +		u64 nosched : 1;
> +		u64 reserved_58_59 : 2;
> +		u64 grp : 10;
> +		u64 reserved_42_47 : 6;
> +		u64 wqp : 42;
> +	} s_sso_ientx_wqpgrp_cn78xx;
> +
> +	struct {
> +		u64 reserved_38_63 : 26;
> +		u64 pend_switch : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 pend_tt : 2;
> +		u64 pend_tag : 32;
> +	} s_sso_ientx_pendtag_cn78xx;
> +
> +	struct {
> +		u64 reserved_26_63 : 38;
> +		u64 prev_index : 10;
> +		u64 reserved_11_15 : 5;
> +		u64 next_index_vld : 1;
> +		u64 next_index : 10;
> +	} s_sso_ientx_links_cn73xx;
> +
> +	struct {
> +		u64 reserved_28_63 : 36;
> +		u64 prev_index : 12;
> +		u64 reserved_13_15 : 3;
> +		u64 next_index_vld : 1;
> +		u64 next_index : 12;
> +	} s_sso_ientx_links_cn78xx;
> +
> +	/**
> +	 * Result For POW Memory Load (get_des == 1)
> +	 */
> +	struct {
> +		u64 reserved_51_63 : 13;
> +		u64 fwd_index : 11;
> +		u64 grp : 4;
> +		u64 nosched : 1;
> +		u64 pend_switch : 1;
> +		u64 pend_type : 2;
> +		u64 pend_tag : 32;
> +	} s_smemload2;
> +	/**
> +	 * Result For SSO Memory Load (opcode is ML_PENTAG)
> +	 */
> +	struct {
> +		u64 reserved_38_63 : 26;
> +		u64 pend_switch : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 pend_type : 2;
> +		u64 pend_tag : 32;
> +	} s_smemload2_cn68xx;
> +
> +	struct {
> +		u64 pend_switch : 1;
> +		u64 pend_get_work : 1;
> +		u64 pend_get_work_wait : 1;
> +		u64 pend_nosched : 1;
> +		u64 pend_nosched_clr : 1;
> +		u64 pend_desched : 1;
> +		u64 pend_alloc_we : 1;
> +		u64 reserved_34_56 : 23;
> +		u64 pend_tt : 2;
> +		u64 pend_tag : 32;
> +	} s_sso_ppx_pendtag_cn78xx;
> +	/**
> +	 * Result For SSO Memory Load (opcode is ML_LINKS)
> +	 */
> +	struct {
> +		u64 reserved_24_63 : 40;
> +		u64 fwd_index : 11;
> +		u64 reserved_11_12 : 2;
> +		u64 next_index : 11;
> +	} s_smemload3_cn68xx;
> +
> +	/**
> +	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 0)
> +	 */
> +	struct {
> +		u64 reserved_52_63 : 12;
> +		u64 free_val : 1;
> +		u64 free_one : 1;
> +		u64 reserved_49 : 1;
> +		u64 free_head : 11;
> +		u64 reserved_37 : 1;
> +		u64 free_tail : 11;
> +		u64 loc_val : 1;
> +		u64 loc_one : 1;
> +		u64 reserved_23 : 1;
> +		u64 loc_head : 11;
> +		u64 reserved_11 : 1;
> +		u64 loc_tail : 11;
> +	} sindexload0;
> +	/**
> +	 * Result for SSO Index/Pointer Load(opcode ==
> +	 * IPL_IQ/IPL_DESCHED/IPL_NOSCHED)
> +	 */
> +	struct {
> +		u64 reserved_28_63 : 36;
> +		u64 queue_val : 1;
> +		u64 queue_one : 1;
> +		u64 reserved_24_25 : 2;
> +		u64 queue_head : 11;
> +		u64 reserved_11_12 : 2;
> +		u64 queue_tail : 11;
> +	} sindexload0_cn68xx;
> +	/**
> +	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 1)
> +	 */
> +	struct {
> +		u64 reserved_52_63 : 12;
> +		u64 nosched_val : 1;
> +		u64 nosched_one : 1;
> +		u64 reserved_49 : 1;
> +		u64 nosched_head : 11;
> +		u64 reserved_37 : 1;
> +		u64 nosched_tail : 11;
> +		u64 des_val : 1;
> +		u64 des_one : 1;
> +		u64 reserved_23 : 1;
> +		u64 des_head : 11;
> +		u64 reserved_11 : 1;
> +		u64 des_tail : 11;
> +	} sindexload1;
> +	/**
> +	 * Result for SSO Index/Pointer Load(opcode == IPL_FREE0/IPL_FREE1/IPL_FREE2)
> +	 */
> +	struct {
> +		u64 reserved_60_63 : 4;
> +		u64 qnum_head : 2;
> +		u64 qnum_tail : 2;
> +		u64 reserved_28_55 : 28;
> +		u64 queue_val : 1;
> +		u64 queue_one : 1;
> +		u64 reserved_24_25 : 2;
> +		u64 queue_head : 11;
> +		u64 reserved_11_12 : 2;
> +		u64 queue_tail : 11;
> +	} sindexload1_cn68xx;
> +	/**
> +	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 0)
> +	 */
> +	struct {
> +		u64 reserved_39_63 : 25;
> +		u64 rmt_is_head : 1;
> +		u64 rmt_val : 1;
> +		u64 rmt_one : 1;
> +		u64 rmt_head : 36;
> +	} sindexload2;
> +	/**
> +	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 1)
> +	 */
> +	struct {
> +		u64 reserved_39_63 : 25;
> +		u64 rmt_is_head : 1;
> +		u64 rmt_val : 1;
> +		u64 rmt_one : 1;
> +		u64 rmt_tail : 36;
> +	} sindexload3;
> +	/**
> +	 * Response to NULL_RD request loads
> +	 */
> +	struct {
> +		u64 unused : 62;
> +		u64 state : 2;
> +	} s_null_rd;
> +
> +} cvmx_pow_tag_load_resp_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 reserved_57_63 : 7;
> +		u64 index : 11;
> +		u64 reserved_45 : 1;
> +		u64 grp : 6;
> +		u64 head : 1;
> +		u64 tail : 1;
> +		u64 reserved_34_36 : 3;
> +		u64 tag_type : 2;
> +		u64 tag : 32;
> +	} s;
> +} cvmx_pow_sl_tag_resp_t;
> +
> +/**
> + * This structure describes the address used for stores to the POW.
> + *  The store address is meaningful on stores to the POW.  The hardware
> + *  assumes that an aligned 64-bit store was used for all these stores.
> + *  Note the assumption that the work queue entry is aligned on an 8-byte
> + *  boundary (since the low-order 3 address bits must be zero).
> + *  Note that not all fields are used by all operations.
> + *
> + *  NOTE: The following is the behavior of the pending switch bit at the PP
> + *       for POW stores (i.e. when did<7:3> == 0xc)
> + *     - did<2:0> == 0      => pending switch bit is set
> + *     - did<2:0> == 1      => no effect on the pending switch bit
> + *     - did<2:0> == 3      => pending switch bit is cleared
> + *     - did<2:0> == 7      => no effect on the pending switch bit
> + *     - did<2:0> == others => must not be used
> + *     - No other loads/stores have an effect on the pending switch bit
> + *     - The switch bus from POW can clear the pending switch bit
> + *
> + *  NOTE: did<2:0> == 2 is used by the HW for a special single-cycle ADDWQ
> + *  command (that only contains the pointer). SW must never use did<2:0> == 2.
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 mem_reg : 2;
> +		u64 reserved_49_61 : 13;
> +		u64 is_io : 1;
> +		u64 did : 8;
> +		u64 addr : 40;
> +	} stag;
> +} cvmx_pow_tag_store_addr_t; /* FIXME- this type is unused */
> +
> +/**
> + * Decode of the store data when an IOBDMA SENDSINGLE is sent to POW
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 scraddr : 8;
> +		u64 len : 8;
> +		u64 did : 8;
> +		u64 unused : 36;
> +		u64 wait : 1;
> +		u64 unused2 : 3;
> +	} s;
> +	struct {
> +		u64 scraddr : 8;
> +		u64 len : 8;
> +		u64 did : 8;
> +		u64 node : 4;
> +		u64 unused1 : 4;
> +		u64 indexed : 1;
> +		u64 grouped : 1;
> +		u64 rtngrp : 1;
> +		u64 unused2 : 13;
> +		u64 index_grp_mask : 12;
> +		u64 wait : 1;
> +		u64 unused3 : 3;
> +	} s_cn78xx;
> +} cvmx_pow_iobdma_store_t;
> +
> +/* CSR typedefs have been moved to cvmx-pow-defs.h */
> +
> +/* Enum for group priority parameters which need modification */
> +enum cvmx_sso_group_modify_mask {
> +	CVMX_SSO_MODIFY_GROUP_PRIORITY = 0x01,
> +	CVMX_SSO_MODIFY_GROUP_WEIGHT = 0x02,
> +	CVMX_SSO_MODIFY_GROUP_AFFINITY = 0x04
> +};
> +
> +/**
> + * @INTERNAL
> + * Return the number of SSO groups for a given SoC model
> + */
> +static inline unsigned int cvmx_sso_num_xgrp(void)
> +{
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
> +		return 256;
> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
> +		return 64;
> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
> +		return 64;
> +	printf("ERROR: %s: Unknown model\n", __func__);
> +	return 0;
> +}
> +
> +/**
> + * @INTERNAL
> + * Return the number of POW groups on current model.
> + * In case of CN78XX/CN73XX this is the number of equivalent
> + * "legacy groups" on the chip when it is used in backward
> + * compatible mode.
> + */
> +static inline unsigned int cvmx_pow_num_groups(void)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return cvmx_sso_num_xgrp() >> 3;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		return 64;
> +	else
> +		return 16;
> +}
> +
> +/**
> + * @INTERNAL
> + * Return the number of mask-set registers.
> + */
> +static inline unsigned int cvmx_sso_num_maskset(void)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return 2;
> +	else
> +		return 1;
> +}
> +
> +/**
> + * Get the POW tag for this core. This returns the current
> + * tag type, tag, group, and POW entry index associated with
> + * this core. Index is only valid if the tag type isn't NULL_NULL.
> + * If a tag switch is pending this routine returns the tag before
> + * the tag switch, not after.
> + *
> + * @return Current tag
> + */
> +static inline cvmx_pow_tag_info_t cvmx_pow_get_current_tag(void)
> +{
> +	cvmx_pow_load_addr_t load_addr;
> +	cvmx_pow_tag_info_t result;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_sso_sl_ppx_tag_t sl_ppx_tag;
> +		cvmx_xgrp_t xgrp;
> +		int node, core;
> +
> +		CVMX_SYNCS;
> +		node = cvmx_get_node_num();
> +		core = cvmx_get_local_core_num();
> +		sl_ppx_tag.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_TAG(core));
> +		result.index = sl_ppx_tag.s.index;
> +		result.tag_type = sl_ppx_tag.s.tt;
> +		result.tag = sl_ppx_tag.s.tag;
> +
> +		/* Get native XGRP value */
> +		xgrp.xgrp = sl_ppx_tag.s.grp;
> +
> +		/* Return legacy style group 0..15 */
> +		result.grp = xgrp.group;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		cvmx_pow_sl_tag_resp_t load_resp;
> +
> +		load_addr.u64 = 0;
> +		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
> +		load_addr.sstatus_cn68xx.is_io = 1;
> +		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
> +		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
> +		load_addr.sstatus_cn68xx.opcode = 3;
> +		load_resp.u64 = csr_rd(load_addr.u64);
> +		result.grp = load_resp.s.grp;
> +		result.index = load_resp.s.index;
> +		result.tag_type = load_resp.s.tag_type;
> +		result.tag = load_resp.s.tag;
> +	} else {
> +		cvmx_pow_tag_load_resp_t load_resp;
> +
> +		load_addr.u64 = 0;
> +		load_addr.sstatus.mem_region = CVMX_IO_SEG;
> +		load_addr.sstatus.is_io = 1;
> +		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
> +		load_addr.sstatus.coreid = cvmx_get_core_num();
> +		load_addr.sstatus.get_cur = 1;
> +		load_resp.u64 = csr_rd(load_addr.u64);
> +		result.grp = load_resp.s_sstatus2.grp;
> +		result.index = load_resp.s_sstatus2.index;
> +		result.tag_type = load_resp.s_sstatus2.tag_type;
> +		result.tag = load_resp.s_sstatus2.tag;
> +	}
> +	return result;
> +}
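
A minimal usage sketch for the accessor above, assuming only what this
header already provides (cvmx_pow_tag_info_t and the cvmx tag type
enumerators):

	/* Check whether the calling core currently holds an atomic tag */
	cvmx_pow_tag_info_t info = cvmx_pow_get_current_tag();

	if (info.tag_type == CVMX_POW_TAG_TYPE_ATOMIC)
		printf("core holds atomic tag 0x%x in group %d\n",
		       (unsigned int)info.tag, (int)info.grp);
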
> +
> +/**
> + * Get the POW WQE for this core. This returns the work queue
> + * entry currently associated with this core.
> + *
> + * @return WQE pointer
> + */
> +static inline cvmx_wqe_t *cvmx_pow_get_current_wqp(void)
> +{
> +	cvmx_pow_load_addr_t load_addr;
> +	cvmx_pow_tag_load_resp_t load_resp;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_sso_sl_ppx_wqp_t sso_wqp;
> +		int node = cvmx_get_node_num();
> +		int core = cvmx_get_local_core_num();
> +
> +		sso_wqp.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_WQP(core));
> +		if (sso_wqp.s.wqp)
> +			return (cvmx_wqe_t *)cvmx_phys_to_ptr(sso_wqp.s.wqp);
> +		return (cvmx_wqe_t *)0;
> +	}
> +	if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		load_addr.u64 = 0;
> +		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
> +		load_addr.sstatus_cn68xx.is_io = 1;
> +		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
> +		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
> +		load_addr.sstatus_cn68xx.opcode = 4;
> +		load_resp.u64 = csr_rd(load_addr.u64);
> +		if (load_resp.s_sstatus3_cn68xx.wqp)
> +			return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus3_cn68xx.wqp);
> +		else
> +			return (cvmx_wqe_t *)0;
> +	} else {
> +		load_addr.u64 = 0;
> +		load_addr.sstatus.mem_region = CVMX_IO_SEG;
> +		load_addr.sstatus.is_io = 1;
> +		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
> +		load_addr.sstatus.coreid = cvmx_get_core_num();
> +		load_addr.sstatus.get_cur = 1;
> +		load_addr.sstatus.get_wqp = 1;
> +		load_resp.u64 = csr_rd(load_addr.u64);
> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus4.wqp);
> +	}
> +}
> +
> +/**
> + * @INTERNAL
> + * Print a warning if a tag switch is pending for this core
> + *
> + * @param function Function name checking for a pending tag switch
> + */
> +static inline void __cvmx_pow_warn_if_pending_switch(const char *function)
> +{
> +	u64 switch_complete;
> +
> +	CVMX_MF_CHORD(switch_complete);
> +	cvmx_warn_if(!switch_complete, "%s called with tag switch in progress\n", function);
> +}
> +
> +/**
> + * Waits for a tag switch to complete by polling the completion bit.
> + * Note that switches to NULL complete immediately and do not need
> + * to be waited for.
> + */
> +static inline void cvmx_pow_tag_sw_wait(void)
> +{
> +	const u64 TIMEOUT_MS = 10; /* 10ms timeout */
> +	u64 switch_complete;
> +	u64 start_cycle;
> +
> +	if (CVMX_ENABLE_POW_CHECKS)
> +		start_cycle = get_timer(0);
> +
> +	while (1) {
> +		CVMX_MF_CHORD(switch_complete);
> +		if (cvmx_likely(switch_complete))
> +			break;
> +
> +		if (CVMX_ENABLE_POW_CHECKS) {
> +			if (cvmx_unlikely(get_timer(start_cycle) > TIMEOUT_MS)) {
> +				debug("WARNING: %s: Tag switch is taking a long time, possible deadlock\n",
> +				      __func__);
> +			}
> +		}
> +	}
> +}
> +
> +/**
> + * Synchronous work request.  Requests work from the POW.
> + * This function does NOT wait for previous tag switches to complete,
> + * so the caller must ensure that there is not a pending tag switch.
> + *
> + * @param wait   When set, call stalls until work becomes available,
> + *               or times out. If not set, returns immediately.
> + *
> + * @return Returns the WQE pointer from POW. Returns NULL if no work
> + *         was available.
> + */
> +static inline cvmx_wqe_t *cvmx_pow_work_request_sync_nocheck(cvmx_pow_wait_t wait)
> +{
> +	cvmx_pow_load_addr_t ptr;
> +	cvmx_pow_tag_load_resp_t result;
> +
> +	if (CVMX_ENABLE_POW_CHECKS)
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.swork_78xx.node = cvmx_get_node_num();
> +		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
> +		ptr.swork_78xx.is_io = 1;
> +		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +		ptr.swork_78xx.wait = wait;
> +	} else {
> +		ptr.swork.mem_region = CVMX_IO_SEG;
> +		ptr.swork.is_io = 1;
> +		ptr.swork.did = CVMX_OCT_DID_TAG_SWTAG;
> +		ptr.swork.wait = wait;
> +	}
> +
> +	result.u64 = csr_rd(ptr.u64);
> +	if (result.s_work.no_work)
> +		return NULL;
> +	else
> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
> +}
> +
> +/**
> + * Synchronous work request.  Requests work from the POW.
> + * This function waits for any previous tag switch to complete before
> + * requesting the new work.
> + *
> + * @param wait   When set, call stalls until work becomes available,
> + *               or times out. If not set, returns immediately.
> + *
> + * @return Returns the WQE pointer from POW. Returns NULL if no work
> + *         was available.
> + */
> +static inline cvmx_wqe_t *cvmx_pow_work_request_sync(cvmx_pow_wait_t wait)
> +{
> +	/* Must not have a switch pending when requesting work */
> +	cvmx_pow_tag_sw_wait();
> +	return (cvmx_pow_work_request_sync_nocheck(wait));
> +}
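
For context, the classic synchronous receive loop built on this call
looks roughly like the sketch below; CVMX_POW_WAIT is assumed to be the
usual cvmx_pow_wait_t "wait" enumerator from the SDK headers:

	static void example_work_loop(void)
	{
		while (1) {
			/* waits for pending tag switches, then requests work */
			cvmx_wqe_t *wqe = cvmx_pow_work_request_sync(CVMX_POW_WAIT);

			if (!wqe)
				continue;	/* timed out, no work available */

			/* ... process the work item here ... */

			/* release ordering before asking for the next WQE */
			cvmx_pow_tag_sw_null();
		}
	}
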
> +
> +/**
> + * Synchronous null_rd request.  Requests a switch out of NULL_NULL
> + * POW state.
> + * This function waits for any previous tag switch to complete before
> + * requesting the null_rd.
> + *
> + * @return Returns the POW state of type cvmx_pow_tag_type_t.
> + */
> +static inline cvmx_pow_tag_type_t cvmx_pow_work_request_null_rd(void)
> +{
> +	cvmx_pow_load_addr_t ptr;
> +	cvmx_pow_tag_load_resp_t result;
> +
> +	/* Must not have a switch pending when requesting work */
> +	cvmx_pow_tag_sw_wait();
> +
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
> +		ptr.swork_78xx.is_io = 1;
> +		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_NULL_RD;
> +		ptr.swork_78xx.node = cvmx_get_node_num();
> +	} else {
> +		ptr.snull_rd.mem_region = CVMX_IO_SEG;
> +		ptr.snull_rd.is_io = 1;
> +		ptr.snull_rd.did = CVMX_OCT_DID_TAG_NULL_RD;
> +	}
> +	result.u64 = csr_rd(ptr.u64);
> +	return (cvmx_pow_tag_type_t)result.s_null_rd.state;
> +}
> +
> +/**
> + * Asynchronous work request.
> + * Work is requested from the POW unit, and should later be checked
> + * with function cvmx_pow_work_response_async.
> + * This function does NOT wait for previous tag switches to complete,
> + * so the caller must ensure that there is not a pending tag switch.
> + *
> + * @param scr_addr Scratch memory address that response will be
> + *     returned to, which is either a valid WQE, or a response with
> + *     the invalid bit set. Byte address, must be 8 byte aligned.
> + * @param wait 1 to cause response to wait for work to become
> + *               available (or timeout)
> + *             0 to cause response to return immediately
> + */
> +static inline void cvmx_pow_work_request_async_nocheck(int scr_addr, cvmx_pow_wait_t wait)
> +{
> +	cvmx_pow_iobdma_store_t data;
> +
> +	if (CVMX_ENABLE_POW_CHECKS)
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +
> +	/* scr_addr must be 8 byte aligned */
> +	data.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		data.s_cn78xx.node = cvmx_get_node_num();
> +		data.s_cn78xx.scraddr = scr_addr >> 3;
> +		data.s_cn78xx.len = 1;
> +		data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +		data.s_cn78xx.wait = wait;
> +	} else {
> +		data.s.scraddr = scr_addr >> 3;
> +		data.s.len = 1;
> +		data.s.did = CVMX_OCT_DID_TAG_SWTAG;
> +		data.s.wait = wait;
> +	}
> +	cvmx_send_single(data.u64);
> +}
> +
> +/**
> + * Asynchronous work request.
> + * Work is requested from the POW unit, and should later be checked
> + * with function cvmx_pow_work_response_async.
> + * This function waits for any previous tag switch to complete before
> + * requesting the new work.
> + *
> + * @param scr_addr Scratch memory address that response will be
> + *     returned to, which is either a valid WQE, or a response with
> + *     the invalid bit set. Byte address, must be 8 byte aligned.
> + * @param wait 1 to cause response to wait for work to become
> + *               available (or timeout)
> + *             0 to cause response to return immediately
> + */
> +static inline void cvmx_pow_work_request_async(int scr_addr, cvmx_pow_wait_t wait)
> +{
> +	/* Must not have a switch pending when requesting work */
> +	cvmx_pow_tag_sw_wait();
> +	cvmx_pow_work_request_async_nocheck(scr_addr, wait);
> +}
> +
> +/**
> + * Gets result of asynchronous work request.  Performs an IOBDMA sync
> + * to wait for the response.
> + *
> + * @param scr_addr Scratch memory address to get result from
> + *                  Byte address, must be 8 byte aligned.
> + * @return Returns the WQE from the scratch register, or NULL if no
> + *         work was available.
> + */
> +static inline cvmx_wqe_t *cvmx_pow_work_response_async(int scr_addr)
> +{
> +	cvmx_pow_tag_load_resp_t result;
> +
> +	CVMX_SYNCIOBDMA;
> +	result.u64 = cvmx_scratch_read64(scr_addr);
> +	if (result.s_work.no_work)
> +		return NULL;
> +	else
> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
> +}
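
Taken together, the async request/response pair lets the POW response
latency hide behind other processing. A sketch, with scratch offset 0
and CVMX_POW_WAIT chosen as illustrative assumptions:

	#define EXAMPLE_SCR_WORK 0	/* 8-byte aligned scratch offset */

	static cvmx_wqe_t *example_get_work_overlapped(void)
	{
		cvmx_pow_work_request_async(EXAMPLE_SCR_WORK, CVMX_POW_WAIT);

		/* ... do unrelated processing while the SSO responds ... */

		/* the SYNCIOBDMA inside ensures the response has landed */
		return cvmx_pow_work_response_async(EXAMPLE_SCR_WORK);
	}
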
> +
> +/**
> + * Checks if a work queue entry pointer returned by a work
> + * request is valid.  It may be invalid due to no work
> + * being available or due to a timeout.
> + *
> + * @param wqe_ptr pointer to a work queue entry returned by the POW
> + *
> + * @return 0 if pointer is valid
> + *         1 if invalid (no work was returned)
> + */
> +static inline u64 cvmx_pow_work_invalid(cvmx_wqe_t *wqe_ptr)
> +{
> +	return (!wqe_ptr); /* FIXME: improve */
> +}
> +
> +/**
> + * Starts a tag switch to the provided tag value and tag type.
> + * Completion for the tag switch must be checked for separately.
> + * This function does NOT update the work queue entry in DRAM to
> + * match tag value and type, so the application must keep track of
> + * these if they are important to the application.
> + * This tag switch command must not be used for switches to NULL, as
> + * the tag switch pending bit will be set by the switch request, but
> + * never cleared by the hardware.
> + *
> + * NOTE: This should not be used when switching from a NULL tag.  Use
> + * cvmx_pow_tag_sw_full() instead.
> + *
> + * This function does no checks, so the caller must ensure that any
> + * previous tag switch has completed.
> + *
> + * @param tag      new tag value
> + * @param tag_type new tag type (ordered or atomic)
> + */
> +static inline void cvmx_pow_tag_sw_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called with NULL tag\n", __func__);
> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
> +			     "%s called to perform a tag switch to the same tag\n", __func__);
> +		cvmx_warn_if(
> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
> +			__func__);
> +	}
> +
> +	/*
> +	 * Note that WQE in DRAM is not updated here, as the POW does
> +	 * not read from DRAM once the WQE is in flight.  See hardware
> +	 * manual for complete details.
> +	 * It is the application's responsibility to keep track of the
> +	 * current tag value if that is important.
> +	 */
> +	tag_req.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn78xx_other.type = tag_type;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn68xx_other.tag = tag;
> +		tag_req.s_cn68xx_other.type = tag_type;
> +	} else {
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn38xx.tag = tag;
> +		tag_req.s_cn38xx.type = tag_type;
> +	}
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +		ptr.s_cn78xx.is_io = 1;
> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +		ptr.s_cn78xx.node = cvmx_get_node_num();
> +		ptr.s_cn78xx.tag = tag;
> +	} else {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
> +	}
> +	/* Once this store arrives at POW, it will attempt the switch;
> +	   software must wait for the switch to complete separately */
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Starts a tag switch to the provided tag value and tag type.
> + * Completion for the tag switch must be checked for separately.
> + * This function does NOT update the work queue entry in DRAM to
> + * match tag value and type, so the application must keep track of
> + * these if they are important to the application.
> + * This tag switch command must not be used for switches to NULL, as
> + * the tag switch pending bit will be set by the switch request, but
> + * never cleared by the hardware.
> + *
> + * NOTE: This should not be used when switching from a NULL tag.  Use
> + * cvmx_pow_tag_sw_full() instead.
> + *
> + * This function waits for any previous tag switch to complete, and
> + * also displays an error on tag switches to NULL.
> + *
> + * @param tag      new tag value
> + * @param tag_type new tag type (ordered or atomic)
> + */
> +static inline void cvmx_pow_tag_sw(u32 tag, cvmx_pow_tag_type_t tag_type)
> +{
> +	/*
> +	 * Note that WQE in DRAM is not updated here, as the POW does
> +	 * not read from DRAM once the WQE is in flight.  See hardware
> +	 * manual for complete details. It is the application's
> +	 * responsibility to keep track of the current tag value if
> +	 * that is important.
> +	 */
> +
> +	/*
> +	 * Ensure that there is not a pending tag switch, as a tag switch
> +	 * cannot be started if a previous switch is still pending.
> +	 */
> +	cvmx_pow_tag_sw_wait();
> +	cvmx_pow_tag_sw_nocheck(tag, tag_type);
> +}
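
A common use of this call is a per-flow critical section entered by
switching from ORDERED to ATOMIC on the same tag value; this is only a
sketch, with the tag type enumerators assumed in scope:

	static void example_critical_section(u32 flow_tag)
	{
		/* this only starts the switch ... */
		cvmx_pow_tag_sw(flow_tag, CVMX_POW_TAG_TYPE_ATOMIC);
		/* ... so wait before touching the protected state */
		cvmx_pow_tag_sw_wait();

		/* at most one core per tag value executes here */

		cvmx_pow_tag_sw(flow_tag, CVMX_POW_TAG_TYPE_ORDERED);
	}
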
> +
> +/**
> + * Starts a tag switch to the provided tag value and tag type.
> + * Completion for the tag switch must be checked for separately.
> + * This function does NOT update the work queue entry in DRAM to
> + * match tag value and type, so the application must keep track of
> + * these if they are important to the application.
> + * This tag switch command must not be used for switches to NULL, as
> + * the tag switch pending bit will be set by the switch request, but
> + * never cleared by the hardware.
> + *
> + * This function must be used for tag switches from NULL.
> + *
> + * This function does no checks, so the caller must ensure that any
> + * previous tag switch has completed.
> + *
> + * @param wqp      pointer to work queue entry to submit.  This entry
> + *                 is updated to match the other parameters
> + * @param tag      tag value to be assigned to work queue entry
> + * @param tag_type type of tag
> + * @param group    group value for the work queue entry.
> + */
> +static inline void cvmx_pow_tag_sw_full_nocheck(cvmx_wqe_t *wqp, u32 tag,
> +						cvmx_pow_tag_type_t tag_type, u64 group)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +	unsigned int node = cvmx_get_node_num();
> +	u64 wqp_phys = cvmx_ptr_to_phys(wqp);
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
> +			     "%s called to perform a tag switch to the same tag\n", __func__);
> +		cvmx_warn_if(
> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
> +			__func__);
> +		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
> +			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
> +				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
> +				     __func__, wqp, cvmx_pow_get_current_wqp());
> +	}
> +
> +	/*
> +	 * Note that WQE in DRAM is not updated here, as the POW does
> +	 * not read from DRAM once the WQE is in flight.  See hardware
> +	 * manual for complete details. It is the application's
> +	 * responsibility to keep track of the current tag value if
> +	 * that is important.
> +	 */
> +	tag_req.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int xgrp;
> +
> +		if (wqp_phys != 0x80) {
> +			/* If WQE is valid, use its XGRP:
> +			 * WQE GRP is 10 bits, and is mapped
> +			 * to legacy GRP + QoS, includes node number.
> +			 */
> +			xgrp = wqp->word1.cn78xx.grp;
> +			/* Use XGRP[node] too */
> +			node = xgrp >> 8;
> +			/* Modify XGRP with legacy group # from arg */
> +			xgrp &= ~0xf8;
> +			xgrp |= 0xf8 & (group << 3);
> +
> +		} else {
> +			/* If no WQE, build XGRP with QoS=0 and current node */
> +			xgrp = group << 3;
> +			xgrp |= node << 8;
> +		}
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
> +		tag_req.s_cn78xx_other.type = tag_type;
> +		tag_req.s_cn78xx_other.grp = xgrp;
> +		tag_req.s_cn78xx_other.wqp = wqp_phys;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
> +		tag_req.s_cn68xx_other.tag = tag;
> +		tag_req.s_cn68xx_other.type = tag_type;
> +		tag_req.s_cn68xx_other.grp = group;
> +	} else {
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_FULL;
> +		tag_req.s_cn38xx.tag = tag;
> +		tag_req.s_cn38xx.type = tag_type;
> +		tag_req.s_cn38xx.grp = group;
> +	}
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +		ptr.s_cn78xx.is_io = 1;
> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +		ptr.s_cn78xx.node = node;
> +		ptr.s_cn78xx.tag = tag;
> +	} else {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
> +		ptr.s.addr = wqp_phys;
> +	}
> +	/* Once this store arrives at POW, it will attempt the switch;
> +	   software must wait for the switch to complete separately */
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Starts a tag switch to the provided tag value and tag type.
> + * Completion for the tag switch must be checked for separately.
> + * This function does NOT update the work queue entry in DRAM to
> + * match tag value and type, so the application must keep track of
> + * these if they are important to the application. This tag switch
> + * command must not be used for switches to NULL, as the tag switch
> + * pending bit will be set by the switch request, but never cleared
> + * by the hardware.
> + *
> + * This function must be used for tag switches from NULL.
> + *
> + * This function waits for any pending tag switches to complete
> + * before requesting the tag switch.
> + *
> + * @param wqp      Pointer to work queue entry to submit.
> + *     This entry is updated to match the other parameters
> + * @param tag      Tag value to be assigned to work queue entry
> + * @param tag_type Type of tag
> + * @param group    Group value for the work queue entry.
> + */
> +static inline void cvmx_pow_tag_sw_full(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
> +					u64 group)
> +{
> +	/*
> +	 * Ensure that there is not a pending tag switch, as a tag
> +	 * switch cannot be started if a previous switch is still
> +	 * pending.
> +	 */
> +	cvmx_pow_tag_sw_wait();
> +	cvmx_pow_tag_sw_full_nocheck(wqp, tag, tag_type, group);
> +}
> +
> +/**
> + * Switch to a NULL tag, which ends any ordering or
> + * synchronization provided by the POW for the current
> + * work queue entry.  This operation completes immediately,
> + * so completion should not be waited for.
> + * This function does NOT wait for previous tag switches to
> + * complete, so the caller must ensure that any previous tag
> + * switches have completed.
> + */
> +static inline void cvmx_pow_tag_sw_null_nocheck(void)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called when we already have a NULL tag\n", __func__);
> +	}
> +	tag_req.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn78xx_other.type = CVMX_POW_TAG_TYPE_NULL;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn68xx_other.type = CVMX_POW_TAG_TYPE_NULL;
> +	} else {
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
> +		tag_req.s_cn38xx.type = CVMX_POW_TAG_TYPE_NULL;
> +	}
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +		ptr.s_cn78xx.is_io = 1;
> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
> +		ptr.s_cn78xx.node = cvmx_get_node_num();
> +	} else {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
> +	}
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Switch to a NULL tag, which ends any ordering or
> + * synchronization provided by the POW for the current
> + * work queue entry.  This operation completes immediately,
> + * so completion should not be waited for.
> + * This function waits for any pending tag switches to complete
> + * before requesting the switch to NULL.
> + */
> +static inline void cvmx_pow_tag_sw_null(void)
> +{
> +	/*
> +	 * Ensure that there is not a pending tag switch, as a tag
> +	 * switch cannot be started if a previous switch is still
> +	 * pending.
> +	 */
> +	cvmx_pow_tag_sw_wait();
> +	cvmx_pow_tag_sw_null_nocheck();
> +}
> +
> +/**
> + * Submits work to an input queue.
> + * This function updates the work queue entry in DRAM to match the
> + * arguments given.
> + * Note that the tag provided is for the work queue entry submitted,
> + * and is unrelated to the tag that the core currently holds.
> + *
> + * @param wqp      pointer to work queue entry to submit.
> + *                 This entry is updated to match the other
> + *                 parameters
> + * @param tag      tag value to be assigned to work queue entry
> + * @param tag_type type of tag
> + * @param qos      Input queue to add to.
> + * @param grp      group value for the work queue entry.
> + */
> +static inline void cvmx_pow_work_submit(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
> +					u64 qos, u64 grp)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	tag_req.u64 = 0;
> +	ptr.u64 = 0;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int node = cvmx_get_node_num();
> +		unsigned int xgrp;
> +
> +		xgrp = (grp & 0x1f) << 3;
> +		xgrp |= (qos & 7);
> +		xgrp |= 0x300 & (node << 8);
> +
> +		wqp->word1.cn78xx.rsvd_0 = 0;
> +		wqp->word1.cn78xx.rsvd_1 = 0;
> +		wqp->word1.cn78xx.tag = tag;
> +		wqp->word1.cn78xx.tag_type = tag_type;
> +		wqp->word1.cn78xx.grp = xgrp;
> +		CVMX_SYNCWS;
> +
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
> +		tag_req.s_cn78xx_other.type = tag_type;
> +		tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
> +		tag_req.s_cn78xx_other.grp = xgrp;
> +
> +		ptr.s_cn78xx.did = 0x66; /* CVMX_OCT_DID_TAG_TAG6 */
> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +		ptr.s_cn78xx.is_io = 1;
> +		ptr.s_cn78xx.node = node;
> +		ptr.s_cn78xx.tag = tag;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		/* Reset all reserved bits */
> +		wqp->word1.cn68xx.zero_0 = 0;
> +		wqp->word1.cn68xx.zero_1 = 0;
> +		wqp->word1.cn68xx.zero_2 = 0;
> +		wqp->word1.cn68xx.qos = qos;
> +		wqp->word1.cn68xx.grp = grp;
> +
> +		wqp->word1.tag = tag;
> +		wqp->word1.tag_type = tag_type;
> +
> +		tag_req.s_cn68xx_add.op = CVMX_POW_TAG_OP_ADDWQ;
> +		tag_req.s_cn68xx_add.type = tag_type;
> +		tag_req.s_cn68xx_add.tag = tag;
> +		tag_req.s_cn68xx_add.qos = qos;
> +		tag_req.s_cn68xx_add.grp = grp;
> +
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
> +		ptr.s.addr = cvmx_ptr_to_phys(wqp);
> +	} else {
> +		/* Reset all reserved bits */
> +		wqp->word1.cn38xx.zero_2 = 0;
> +		wqp->word1.cn38xx.qos = qos;
> +		wqp->word1.cn38xx.grp = grp;
> +
> +		wqp->word1.tag = tag;
> +		wqp->word1.tag_type = tag_type;
> +
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_ADDWQ;
> +		tag_req.s_cn38xx.type = tag_type;
> +		tag_req.s_cn38xx.tag = tag;
> +		tag_req.s_cn38xx.qos = qos;
> +		tag_req.s_cn38xx.grp = grp;
> +
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
> +		ptr.s.addr = cvmx_ptr_to_phys(wqp);
> +	}
> +	/* SYNC write to memory before the work submit.
> +	 * This is necessary as POW may read values from DRAM at this
> +	 * time */
> +	CVMX_SYNCWS;
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
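
A short sketch of injecting locally created work; the tag, QoS queue
and group values here are arbitrary example numbers:

	static void example_submit(cvmx_wqe_t *wqe)
	{
		/* fill in packet data, word2 etc. before submitting */
		cvmx_pow_work_submit(wqe, 0x1234 /* tag */,
				     CVMX_POW_TAG_TYPE_ORDERED,
				     0 /* qos */, 0 /* grp */);
	}
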
> +
> +/**
> + * This function sets the group mask for a core.  The group mask
> + * indicates which groups each core will accept work from. There are
> + * 16 groups.
> + *
> + * @param core_num   core to apply mask to
> + * @param mask   Group mask, one bit for up to 64 groups.
> + *               Each 1 bit in the mask enables the core to accept
> + *               work from the corresponding group.
> + *               The CN68XX supports 64 groups, earlier models only
> + *               support 16 groups.
> + *
> + * The CN78XX in backwards compatibility mode allows up to 32 groups,
> + * so the 'mask' argument has one bit for each of the legacy groups,
> + * and a '1' in the mask causes a total of 8 groups, which share the
> + * legacy group number and 8 QoS levels, to be enabled for the
> + * calling processor core.
> + * A '0' in the mask will disable the current core
> + * from receiving work from the associated group.
> + */
> +static inline void cvmx_pow_set_group_mask(u64 core_num, u64 mask)
> +{
> +	u64 valid_mask;
> +	int num_groups = cvmx_pow_num_groups();
> +
> +	if (num_groups >= 64)
> +		valid_mask = ~0ull;
> +	else
> +		valid_mask = (1ull << num_groups) - 1;
> +
> +	if ((mask & valid_mask) == 0) {
> +		printf("ERROR: %s empty group mask disables work on core# %llu, ignored.\n",
> +		       __func__, (unsigned long long)core_num);
> +		return;
> +	}
> +	cvmx_warn_if(mask & (~valid_mask), "%s group number range exceeded: %#llx\n", __func__,
> +		     (unsigned long long)mask);
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int mask_set;
> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
> +		unsigned int core, node;
> +		unsigned int rix;  /* Register index */
> +		unsigned int grp;  /* Legacy group # */
> +		unsigned int bit;  /* bit index */
> +		unsigned int xgrp; /* native group # */
> +
> +		node = cvmx_coremask_core_to_node(core_num);
> +		core = cvmx_coremask_core_on_node(core_num);
> +
> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
> +		/* 73xx: 64 groups are in one register */
> +		for (rix = 0; rix < (cvmx_sso_num_xgrp() >> 6); rix++) {
> +			grp_msk.u64 = 0;
> +			for (bit = 0; bit < 64; bit++) {
> +				/* 8-bit native XGRP number */
> +				xgrp = (rix << 6) | bit;
> +				/* Legacy 5-bit group number */
> +				grp = (xgrp >> 3) & 0x1f;
> +				/* Inspect legacy mask by legacy group */
> +				if (mask & (1ull << grp))
> +					grp_msk.s.grp_msk |= 1ull << bit;
> +				/* Pre-set to all 0's */
> +			}
> +			for (mask_set = 0; mask_set < cvmx_sso_num_maskset(); mask_set++) {
> +				csr_wr_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, mask_set, rix),
> +					    grp_msk.u64);
> +			}
> +		}
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		cvmx_sso_ppx_grp_msk_t grp_msk;
> +
> +		grp_msk.s.grp_msk = mask;
> +		csr_wr(CVMX_SSO_PPX_GRP_MSK(core_num), grp_msk.u64);
> +	} else {
> +		cvmx_pow_pp_grp_mskx_t grp_msk;
> +
> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
> +		grp_msk.s.grp_msk = mask & 0xffff;
> +		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
> +	}
> +}
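
As a concrete illustration (group numbers arbitrary), restricting the
calling core to legacy groups 0 and 1 would look like:

	cvmx_pow_set_group_mask(cvmx_get_core_num(),
				(1ull << 0) | (1ull << 1));
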
> +
> +/**
> + * This function gets the group mask for a core.  The group mask
> + * indicates which groups each core will accept work from.
> + *
> + * @param core_num   core to get the mask for
> + * @return	Group mask, one bit for up to 64 groups.
> + *               Each 1 bit in the mask enables the core to accept
> + *               work from the corresponding group.
> + *               The CN68XX supports 64 groups, earlier models only
> + *               support 16 groups.
> + *
> + * The CN78XX in backwards compatibility mode allows up to 32 groups,
> + * so the returned mask has one bit for each of the legacy groups,
> + * and a '1' in the mask means that a total of 8 groups, which share
> + * the legacy group number and 8 QoS levels, are enabled for the
> + * calling processor core.
> + * A '0' in the mask means the current core is disabled
> + * from receiving work from the associated group.
> + */
> +static inline u64 cvmx_pow_get_group_mask(u64 core_num)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
> +		unsigned int core, node, i;
> +		int rix; /* Register index */
> +		u64 mask = 0;
> +
> +		node = cvmx_coremask_core_to_node(core_num);
> +		core = cvmx_coremask_core_on_node(core_num);
> +
> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
> +		/* 73xx: 64 groups are in one register */
> +		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
> +			/* read only mask_set=0 (both sets were written the same) */
> +			grp_msk.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
> +			/* ASSUME: (this is how mask bits got written) */
> +			/* grp_mask[7:0]: all bits 0..7 are same */
> +			/* grp_mask[15:8]: all bits 8..15 are same, etc */
> +			/* DO: mask[7:0] = grp_mask.u64[56,48,40,32,24,16,8,0] */
> +			for (i = 0; i < 8; i++)
> +				mask |= (grp_msk.u64 & ((u64)1 << (i * 8))) >> (7 * i);
> +			/* we collected 8 MSBs in mask[7:0], <<= 8 and continue */
> +			if (cvmx_likely(rix != 0))
> +				mask <<= 8;
> +		}
> +		return mask & 0xFFFFFFFF;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		cvmx_sso_ppx_grp_msk_t grp_msk;
> +
> +		grp_msk.u64 = csr_rd(CVMX_SSO_PPX_GRP_MSK(core_num));
> +		return grp_msk.u64;
> +	} else {
> +		cvmx_pow_pp_grp_mskx_t grp_msk;
> +
> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
> +		return grp_msk.u64 & 0xffff;
> +	}
> +}
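
The getter pairs with the setter above for read-modify-write updates;
e.g. this sketch adds one group without disturbing the rest:

	static void example_add_group(u64 core_num, unsigned int grp)
	{
		u64 mask = cvmx_pow_get_group_mask(core_num);

		cvmx_pow_set_group_mask(core_num, mask | (1ull << grp));
	}
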
> +
> +/*
> + * Returns 0 if 78xx(73xx,75xx) is not programmed in legacy
> + * compatible mode.
> + * Returns 1 if 78xx(73xx,75xx) is programmed in legacy compatible
> + * mode.
> + * Returns 1 if octeon model is not 78xx(73xx,75xx)
> + */
> +static inline u64 cvmx_pow_is_legacy78mode(u64 core_num)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk0, grp_msk1;
> +		unsigned int core, node, i;
> +		int rix; /* Register index */
> +		u64 mask = 0;
> +
> +		node = cvmx_coremask_core_to_node(core_num);
> +		core = cvmx_coremask_core_on_node(core_num);
> +
> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
> +		/* 73xx: 64 groups are in one register */
> +		/* 1) in order for the 78_SSO to be in legacy compatible
> +		 * mode, both mask_sets should be programmed the same */
> +		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
> +			/* read mask_set=0 (both sets were written the same) */
> +			grp_msk0.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
> +			grp_msk1.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 1, rix));
> +			if (grp_msk0.u64 != grp_msk1.u64) {
> +				return 0;
> +			}
> +			/* (this is how mask bits should be written) */
> +			/* grp_mask[7:0]: all bits 0..7 are same */
> +			/* grp_mask[15:8]: all bits 8..15 are same, etc */
> +			/* 2) in order for the 78_SSO to be in legacy
> +			 * compatible mode, the above should be true
> +			 * (test only mask_set=0) */
> +			for (i = 0; i < 8; i++) {
> +				mask = (grp_msk0.u64 >> (i << 3)) & 0xFF;
> +				if (!(mask == 0 || mask == 0xFF)) {
> +					return 0;
> +				}
> +			}
> +		}
> +		/* if we come here, the 78_SSO is in legacy compatible mode */
> +	}
> +	return 1; /* the SSO/POW is in legacy (or compatible) mode */
> +}
> +
> +/**
> + * This function sets POW static priorities for a core. Each input
> + * queue has an associated priority value.
> + *
> + * @param core_num   core to apply priorities to
> + * @param priority   Vector of 8 priorities, one per POW Input Queue
> + *                   (0-7). Highest priority is 0 and lowest is 7. A
> + *                   priority value of 0xF instructs POW to skip the
> + *                   Input Queue when scheduling to this specific core.
> + *                   NOTE: priorities should not have gaps in values,
> + *                         meaning {0,1,1,1,1,1,1,1} is a valid
> + *                         configuration while {0,2,2,2,2,2,2,2} is not.
> + */
> +static inline void cvmx_pow_set_priority(u64 core_num, const u8 priority[])
> +{
> +	/* Detect gaps between priorities and flag error */
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		int i;
> +		u32 prio_mask = 0;
> +
> +		for (i = 0; i < 8; i++)
> +			if (priority[i] != 0xF)
> +				prio_mask |= 1 << priority[i];
> +
> +		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
> +			debug("ERROR: POW static priorities should be contiguous (0x%llx)\n",
> +			      (unsigned long long)prio_mask);
> +			return;
> +		}
> +	}
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int group;
> +		unsigned int node = cvmx_get_node_num();
> +		cvmx_sso_grpx_pri_t grp_pri;
> +
> +		/* grp_pri.s.weight = 0x3f and grp_pri.s.affinity = 0xf
> +		 * need not be set here; they would be overwritten by
> +		 * the next csr_rd_node() anyway */
> +
> +		for (group = 0; group < cvmx_sso_num_xgrp(); group++) {
> +			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
> +			grp_pri.s.pri = priority[group & 0x7];
> +			csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grp_pri.u64);
> +		}
> +
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		cvmx_sso_ppx_qos_pri_t qos_pri;
> +
> +		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
> +		qos_pri.s.qos0_pri = priority[0];
> +		qos_pri.s.qos1_pri = priority[1];
> +		qos_pri.s.qos2_pri = priority[2];
> +		qos_pri.s.qos3_pri = priority[3];
> +		qos_pri.s.qos4_pri = priority[4];
> +		qos_pri.s.qos5_pri = priority[5];
> +		qos_pri.s.qos6_pri = priority[6];
> +		qos_pri.s.qos7_pri = priority[7];
> +		csr_wr(CVMX_SSO_PPX_QOS_PRI(core_num), qos_pri.u64);
> +	} else {
> +		/* POW priorities on CN5xxx .. CN66XX */
> +		cvmx_pow_pp_grp_mskx_t grp_msk;
> +
> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
> +		grp_msk.s.qos0_pri = priority[0];
> +		grp_msk.s.qos1_pri = priority[1];
> +		grp_msk.s.qos2_pri = priority[2];
> +		grp_msk.s.qos3_pri = priority[3];
> +		grp_msk.s.qos4_pri = priority[4];
> +		grp_msk.s.qos5_pri = priority[5];
> +		grp_msk.s.qos6_pri = priority[6];
> +		grp_msk.s.qos7_pri = priority[7];
> +
> +		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
> +	}
> +}
> +
> +/**
> + * This function gets POW static priorities for a core. Each input
> + * queue has an associated priority value.
> + *
> + * @param[in]  core_num core to get priorities for
> + * @param[out] priority Pointer to u8[] where to return priorities
> + *			Vector of 8 priorities, one per POW Input Queue
> + *			(0-7). Highest priority is 0 and lowest is 7.
> + *			A priority value of 0xF instructs POW to skip
> + *			the Input Queue when scheduling to this
> + *			specific core.
> + *                   NOTE: priorities should not have gaps in values,
> + *                         meaning {0,1,1,1,1,1,1,1} is a valid
> + *                         configuration while {0,2,2,2,2,2,2,2} is not.
> + */
> +static inline void cvmx_pow_get_priority(u64 core_num, u8 priority[])
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int group;
> +		unsigned int node = cvmx_get_node_num();
> +		cvmx_sso_grpx_pri_t grp_pri;
> +
> +		/* read priority only from the first 8 groups */
> +		/* the next groups are programmed the same (periodically) */
> +		for (group = 0; group < 8 /* cvmx_sso_num_xgrp() */; group++) {
> +			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
> +			priority[group /* & 0x7 */] = grp_pri.s.pri;
> +		}
> +
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		cvmx_sso_ppx_qos_pri_t qos_pri;
> +
> +		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
> +		priority[0] = qos_pri.s.qos0_pri;
> +		priority[1] = qos_pri.s.qos1_pri;
> +		priority[2] = qos_pri.s.qos2_pri;
> +		priority[3] = qos_pri.s.qos3_pri;
> +		priority[4] = qos_pri.s.qos4_pri;
> +		priority[5] = qos_pri.s.qos5_pri;
> +		priority[6] = qos_pri.s.qos6_pri;
> +		priority[7] = qos_pri.s.qos7_pri;
> +	} else {
> +		/* POW priorities on CN5xxx .. CN66XX */
> +		cvmx_pow_pp_grp_mskx_t grp_msk;
> +
> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
> +		priority[0] = grp_msk.s.qos0_pri;
> +		priority[1] = grp_msk.s.qos1_pri;
> +		priority[2] = grp_msk.s.qos2_pri;
> +		priority[3] = grp_msk.s.qos3_pri;
> +		priority[4] = grp_msk.s.qos4_pri;
> +		priority[5] = grp_msk.s.qos5_pri;
> +		priority[6] = grp_msk.s.qos6_pri;
> +		priority[7] = grp_msk.s.qos7_pri;
> +	}
> +
> +	/* Detect gaps between priorities and flag error (optional) */
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		int i;
> +		u32 prio_mask = 0;
> +
> +		for (i = 0; i < 8; i++)
> +			if (priority[i] != 0xF)
> +				prio_mask |= 1 << priority[i];
> +
> +		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
> +			debug("ERROR: %s: POW static priorities should be contiguous (0x%llx)\n",
> +			      __func__, (unsigned long long)prio_mask);
> +			return;
> +		}
> +	}
> +}
> +
> +static inline void cvmx_sso_get_group_priority(int node, cvmx_xgrp_t xgrp, int *priority,
> +					       int *weight, int *affinity)
> +{
> +	cvmx_sso_grpx_pri_t grp_pri;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +
> +	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
> +	*affinity = grp_pri.s.affinity;
> +	*priority = grp_pri.s.pri;
> +	*weight = grp_pri.s.weight;
> +}
> +
> +/**
> + * Performs a tag switch and then an immediate deschedule. This
> + * completes immediately, so completion must not be waited for.  This
> + * function does NOT update the wqe in DRAM to match arguments.
> + *
> + * This function does NOT wait for any prior tag switches to
> + * complete, so the calling code must do this.
> + *
> + * Note the following CAVEAT of the Octeon HW behavior when
> + * re-scheduling DE-SCHEDULEd items whose (next) state is
> + * ORDERED:
> + *   - If there are no switches pending at the time that the
> + *     HW executes the de-schedule, the HW will only re-schedule
> + *     the head of the FIFO associated with the given tag. This
> + *     means that in many respects, the HW treats this ORDERED
> + *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
> + *     case (to an ORDERED tag), the HW will do the switch
> + *     before the deschedule whenever it is possible to do
> + *     the switch immediately, so it may often look like
> + *     this case.
> + *   - If there is a pending switch to ORDERED at the time
> + *     the HW executes the de-schedule, the HW will perform
> + *     the switch at the time it re-schedules, and will be
> + *     able to reschedule any/all of the entries with the
> + *     same tag.
> + * Due to this behavior, the RECOMMENDATION to software is
> + * that they have a (next) state of ATOMIC when they
> + * DE-SCHEDULE. If an ORDERED tag is what was really desired,
> + * SW can choose to immediately switch to an ORDERED tag
> + * after the work (that has an ATOMIC tag) is re-scheduled.
> + * Note that since there are never any tag switches pending
> + * when the HW re-schedules, this switch can be IMMEDIATE upon
> + * the reception of the pointer during the re-schedule.
> + *
> + * @param tag      New tag value
> + * @param tag_type New tag type
> + * @param group    New group value
> + * @param no_sched Control whether this work queue entry will be
> + *                 rescheduled.
> + *                 - 1 : don't schedule this work
> + *                 - 0 : allow this work to be scheduled.
> + */
> +static inline void cvmx_pow_tag_sw_desched_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
> +						   u64 no_sched)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
> +			     __func__);
> +		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
> +			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
> +			     "%s called where neither the before nor after tag is ATOMIC\n",
> +			     __func__);
> +	}
> +	tag_req.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_t *wqp = cvmx_pow_get_current_wqp();
> +
> +		if (!wqp) {
> +			debug("ERROR: Failed to get WQE, %s\n", __func__);
> +			return;
> +		}
> +		group &= 0x1f;
> +		wqp->word1.cn78xx.tag = tag;
> +		wqp->word1.cn78xx.tag_type = tag_type;
> +		wqp->word1.cn78xx.grp = group << 3;
> +		CVMX_SYNCWS;
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
> +		tag_req.s_cn78xx_other.type = tag_type;
> +		tag_req.s_cn78xx_other.grp = group << 3;
> +		tag_req.s_cn78xx_other.no_sched = no_sched;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		group &= 0x3f;
> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
> +		tag_req.s_cn68xx_other.tag = tag;
> +		tag_req.s_cn68xx_other.type = tag_type;
> +		tag_req.s_cn68xx_other.grp = group;
> +		tag_req.s_cn68xx_other.no_sched = no_sched;
> +	} else {
> +		group &= 0x0f;
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
> +		tag_req.s_cn38xx.tag = tag;
> +		tag_req.s_cn38xx.type = tag_type;
> +		tag_req.s_cn38xx.grp = group;
> +		tag_req.s_cn38xx.no_sched = no_sched;
> +	}
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
> +		ptr.s_cn78xx.node = cvmx_get_node_num();
> +		ptr.s_cn78xx.tag = tag;
> +	} else {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
> +	}
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Performs a tag switch and then an immediate deschedule. This
> + * completes immediately, so completion must not be waited for.  This
> + * function does NOT update the wqe in DRAM to match arguments.
> + *
> + * This function waits for any prior tag switches to complete, so the
> + * calling code may call this function with a pending tag switch.
> + *
> + * Note the following CAVEAT of the Octeon HW behavior when
> + * re-scheduling DE-SCHEDULEd items whose (next) state is
> + * ORDERED:
> + *   - If there are no switches pending at the time that the
> + *     HW executes the de-schedule, the HW will only re-schedule
> + *     the head of the FIFO associated with the given tag. This
> + *     means that in many respects, the HW treats this ORDERED
> + *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
> + *     case (to an ORDERED tag), the HW will do the switch
> + *     before the deschedule whenever it is possible to do
> + *     the switch immediately, so it may often look like
> + *     this case.
> + *   - If there is a pending switch to ORDERED at the time
> + *     the HW executes the de-schedule, the HW will perform
> + *     the switch at the time it re-schedules, and will be
> + *     able to reschedule any/all of the entries with the
> + *     same tag.
> + * Due to this behavior, the RECOMMENDATION to software is
> + * that they have a (next) state of ATOMIC when they
> + * DE-SCHEDULE. If an ORDERED tag is what was really desired,
> + * SW can choose to immediately switch to an ORDERED tag
> + * after the work (that has an ATOMIC tag) is re-scheduled.
> + * Note that since there are never any tag switches pending
> + * when the HW re-schedules, this switch can be IMMEDIATE upon
> + * the reception of the pointer during the re-schedule.
> + *
> + * @param tag      New tag value
> + * @param tag_type New tag type
> + * @param group    New group value
> + * @param no_sched Control whether this work queue entry will be
> + *                 rescheduled.
> + *                 - 1 : don't schedule this work
> + *                 - 0 : allow this work to be scheduled.
> + */
> +static inline void cvmx_pow_tag_sw_desched(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
> +					   u64 no_sched)
> +{
> +	/* Need to make sure any writes to the work queue entry are
> +	 * complete */
> +	CVMX_SYNCWS;
> +	/* Ensure that there is not a pending tag switch, as a tag
> +	 * switch cannot be started if a previous switch is still
> +	 * pending. */
> +	cvmx_pow_tag_sw_wait();
> +	cvmx_pow_tag_sw_desched_nocheck(tag, tag_type, group, no_sched);
> +}
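
Following the CAVEAT above, a deferred-work sketch would deschedule
with an ATOMIC next state so the rescheduled work stays serialized per
tag:

	static void example_defer(u32 tag, u64 group)
	{
		cvmx_pow_tag_sw_desched(tag, CVMX_POW_TAG_TYPE_ATOMIC,
					group, 0 /* allow reschedule */);
	}
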
> +
> +/**
> + * Deschedules the current work queue entry.
> + *
> + * @param no_sched no schedule flag value to be set on the work
> + *     queue entry.
> + *     If this is set the entry will not be rescheduled.
> + */
> +static inline void cvmx_pow_desched(u64 no_sched)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called with NULL tag. Deschedule not expected from NULL state\n",
> +			     __func__);
> +	}
> +	/* Need to make sure any writes to the work queue entry are
> +	 * complete */
> +	CVMX_SYNCWS;
> +
> +	tag_req.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_DESCH;
> +		tag_req.s_cn78xx_other.no_sched = no_sched;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_DESCH;
> +		tag_req.s_cn68xx_other.no_sched = no_sched;
> +	} else {
> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_DESCH;
> +		tag_req.s_cn38xx.no_sched = no_sched;
> +	}
> +	ptr.u64 = 0;
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +		ptr.s_cn78xx.is_io = 1;
> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG3;
> +		ptr.s_cn78xx.node = cvmx_get_node_num();
> +	} else {
> +		ptr.s.mem_region = CVMX_IO_SEG;
> +		ptr.s.is_io = 1;
> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
> +	}
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/******************************************************************************/
> +/* OCTEON3-specific functions.                                                */
> +/******************************************************************************/
> +/**
> + * This function sets the affinity of a group to the cores in 78xx.
> + * It sets up all the cores in core_mask to accept work from the
> + * specified group.
> + *
> + * @param xgrp	Group to accept work from, 0 - 255.
> + * @param core_mask	Mask of all the cores which will accept work
> + *     from this group
> + * @param mask_set	Every core has a set of 2 masks which can be set
> + *     to accept work from 256 groups. At the time of get_work, cores
> + *     can choose which mask_set to get work from. 'mask_set' values
> + *     range from 0 to 3, where each of the two bits represents a
> + *     mask set. Cores will be added to the mask set with the
> + *     corresponding bit set, and removed from the mask set with the
> + *     corresponding bit clear.
> + * Note: cores can only accept work from SSO groups on the same node,
> + * so the node number for the group is derived from the core number.
> + */
> +static inline void cvmx_sso_set_group_core_affinity(cvmx_xgrp_t xgrp,
> +						    const struct cvmx_coremask *core_mask,
> +						    u8 mask_set)
> +{
> +	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
> +	int core;
> +	int grp_index = xgrp.xgrp >> 6;
> +	int bit_pos = xgrp.xgrp % 64;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +	cvmx_coremask_for_each_core(core, core_mask)
> +	{
> +		unsigned int node, ncore;
> +		u64 reg_addr;
> +
> +		node = cvmx_coremask_core_to_node(core);
> +		ncore = cvmx_coremask_core_on_node(core);
> +
> +		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 0, grp_index);
> +		grp_msk.u64 = csr_rd_node(node, reg_addr);
> +
> +		if (mask_set & 1)
> +			grp_msk.s.grp_msk |= (1ull << bit_pos);
> +		else
> +			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
> +
> +		csr_wr_node(node, reg_addr, grp_msk.u64);
> +
> +		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 1, grp_index);
> +		grp_msk.u64 = csr_rd_node(node, reg_addr);
> +
> +		if (mask_set & 2)
> +			grp_msk.s.grp_msk |= (1ull << bit_pos);
> +		else
> +			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
> +
> +		csr_wr_node(node, reg_addr, grp_msk.u64);
> +	}
> +}
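
A usage sketch, with group 10 and mask-set selection picked arbitrarily
(the designated initializer of cvmx_xgrp_t is an assumption here):

	static void example_affinity(const struct cvmx_coremask *cm)
	{
		cvmx_xgrp_t g = { .xgrp = 10 };

		/* enable group 10 in mask-set 0 (bit0), clear it in set 1 */
		cvmx_sso_set_group_core_affinity(g, cm, 1);
	}
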
> +
> +/**
> + * This function sets the priority and group affinity arbitration
> + * for each group.
> + *
> + * @param node		Node number
> + * @param xgrp	Group 0 - 255 to apply mask parameters to
> + * @param priority	Priority of the group relative to other groups
> + *     0x0 - highest priority
> + *     0x7 - lowest priority
> + * @param weight	Cross-group arbitration weight to apply to this
> + *     group.
> + *     valid values are 1-63
> + *     h/w default is 0x3f
> + * @param affinity	Processor affinity arbitration weight to apply
> + *     to this group. If zero, affinity is disabled.
> + *     valid values are 0-15
> + *     h/w default is 0xf.
> + * @param modify_mask   mask of the parameters which need to be
> + *     modified. See enum cvmx_sso_group_modify_mask:
> + *     to modify only priority -- set bit0
> + *     to modify only weight   -- set bit1
> + *     to modify only affinity -- set bit2
> + */
> +static inline void cvmx_sso_set_group_priority(int node, cvmx_xgrp_t xgrp, int priority, int weight,
> +					       int affinity,
> +					       enum cvmx_sso_group_modify_mask modify_mask)
> +{
> +	cvmx_sso_grpx_pri_t grp_pri;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +	if (weight <= 0)
> +		weight = 0x3f; /* Force HW default when out of range */
> +
> +	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
> +	if (grp_pri.s.weight == 0)
> +		grp_pri.s.weight = 0x3f;
> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_PRIORITY)
> +		grp_pri.s.pri = priority;
> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_WEIGHT)
> +		grp_pri.s.weight = weight;
> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_AFFINITY)
> +		grp_pri.s.affinity = affinity;
> +	csr_wr_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp), grp_pri.u64);
> +}
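
As a hedged example of the modify_mask usage, raising only the priority of a
group while leaving weight and affinity untouched (all values arbitrary):

    cvmx_xgrp_t grp;

    grp.xgrp = 12;
    /* Only CVMX_SSO_MODIFY_GROUP_PRIORITY is set, so the weight and
     * affinity arguments below are ignored by the function.
     */
    cvmx_sso_set_group_priority(0, grp, 0, 0x3f, 0xf,
    			    CVMX_SSO_MODIFY_GROUP_PRIORITY);
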
> +
> +/**
> + * Asynchronous work request.
> + * Only works on CN78XX style SSO.
> + *
> + * Work is requested from the SSO unit, and should later be checked with
> + * function cvmx_pow_work_response_async.
> + * This function does NOT wait for previous tag switches to complete,
> + * so the caller must ensure that there is not a pending tag switch.
> + *
> + * @param scr_addr Scratch memory address that response will be returned to,
> + *     which is either a valid WQE, or a response with the invalid bit set.
> + *     Byte address, must be 8 byte aligned.
> + * @param xgrp  Group to receive work for (0-255).
> + * @param wait
> + *     1 to cause response to wait for work to become available (or timeout)
> + *     0 to cause response to return immediately
> + */
> +static inline void cvmx_sso_work_request_grp_async_nocheck(int scr_addr, cvmx_xgrp_t xgrp,
> +							   cvmx_pow_wait_t wait)
> +{
> +	cvmx_pow_iobdma_store_t data;
> +	unsigned int node = cvmx_get_node_num();
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
> +	}
> +	/* scr_addr must be 8 byte aligned */
> +	data.u64 = 0;
> +	data.s_cn78xx.scraddr = scr_addr >> 3;
> +	data.s_cn78xx.len = 1;
> +	data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +	data.s_cn78xx.grouped = 1;
> +	data.s_cn78xx.index_grp_mask = (node << 8) | xgrp.xgrp;
> +	data.s_cn78xx.wait = wait;
> +	data.s_cn78xx.node = node;
> +
> +	cvmx_send_single(data.u64);
> +}
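
The intended pattern (a sketch, untested) is to issue the IOBDMA request,
overlap other processing, then collect the result via the
cvmx_pow_work_response_async() helper named above; scratch offset 0 and
group 12 are arbitrary, and CVMX_POW_WAIT is assumed from cvmx_pow_wait_t:

    #define SCR_WORK 0	/* 8-byte aligned scratch byte offset */

    cvmx_xgrp_t grp;
    cvmx_wqe_t *wqe;

    grp.xgrp = 12;
    cvmx_sso_work_request_grp_async_nocheck(SCR_WORK, grp, CVMX_POW_WAIT);
    /* ... do unrelated processing while the SSO responds ... */
    wqe = cvmx_pow_work_response_async(SCR_WORK);
    if (wqe) {
    	/* process the returned work queue entry */
    }
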
> +
> +/**
> + * Synchronous work request from the node-local SSO without verifying
> + * pending tag switch. It requests work from a specific SSO group.
> + *
> + * @param lgrp The local group number (within the SSO of the node of the caller)
> + *     from which to get the work.
> + * @param wait When set, call stalls until work becomes available, or times out.
> + *     If not set, returns immediately.
> + *
> + * @return Returns the WQE pointer from SSO.
> + *     Returns NULL if no work was available.
> + */
> +static inline void *cvmx_sso_work_request_grp_sync_nocheck(unsigned int lgrp, cvmx_pow_wait_t wait)
> +{
> +	cvmx_pow_load_addr_t ptr;
> +	cvmx_pow_tag_load_resp_t result;
> +	unsigned int node = cvmx_get_node_num() & 3;
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
> +	}
> +	ptr.u64 = 0;
> +	ptr.swork_78xx.mem_region = CVMX_IO_SEG;
> +	ptr.swork_78xx.is_io = 1;
> +	ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +	ptr.swork_78xx.node = node;
> +	ptr.swork_78xx.grouped = 1;
> +	ptr.swork_78xx.index = (lgrp & 0xff) | node << 8;
> +	ptr.swork_78xx.wait = wait;
> +
> +	result.u64 = csr_rd(ptr.u64);
> +	if (result.s_work.no_work)
> +		return NULL;
> +	else
> +		return cvmx_phys_to_ptr(result.s_work.addr);
> +}
> +
> +/**
> + * Synchronous work request from the node-local SSO.
> + * It requests work from a specific SSO group.
> + * This function waits for any previous tag switch to complete before
> + * requesting the new work.
> + *
> + * @param lgrp The node-local group number from which to get the work.
> + * @param wait When set, call stalls until work becomes available, or times out.
> + *     If not set, returns immediately.
> + *
> + * @return The WQE pointer or NULL, if work is not available.
> + */
> +static inline void *cvmx_sso_work_request_grp_sync(unsigned int lgrp, cvmx_pow_wait_t wait)
> +{
> +	cvmx_pow_tag_sw_wait();
> +	return cvmx_sso_work_request_grp_sync_nocheck(lgrp, wait);
> +}
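
A simple (untested) polling loop against local group 12, assuming
CVMX_POW_NO_WAIT from cvmx_pow_wait_t:

    cvmx_wqe_t *work;

    do {
    	work = cvmx_sso_work_request_grp_sync(12, CVMX_POW_NO_WAIT);
    } while (!work);
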
> +
> +/**
> + * This function sets the group mask for a core.  The group mask bits
> + * indicate which groups each core will accept work from.
> + *
> + * @param core_num	Processor core to apply mask to.
> + * @param mask_set	7XXX has 2 sets of masks per core.
> + *     Bit 0 represents the first mask set, bit 1 -- the second.
> + * @param xgrp_mask	Group mask array.
> + *     Total number of groups is divided into a number of
> + *     64-bits mask sets. Each bit in the mask, if set, enables
> + *     the core to accept work from the corresponding group.
> + *
> + * NOTE: Each core can be configured to accept work in accordance to both
> + * mask sets, with the first having higher precedence over the second,
> + * or to accept work in accordance to just one of the two mask sets.
> + * The 'core_num' argument represents a processor core on any node
> + * in a coherent multi-chip system.
> + *
> + * If the 'mask_set' argument is 3, both mask sets are configured
> + * with the same value (which is not typically the intention),
> + * so keep in mind the function needs to be called twice
> + * to set a different value into each of the mask sets,
> + * once with 'mask_set=1' and second time with 'mask_set=2'.
> + */
> +static inline void cvmx_pow_set_xgrp_mask(u64 core_num, u8 mask_set, const u64 xgrp_mask[])
> +{
> +	unsigned int grp, node, core;
> +	u64 reg_addr;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_warn_if(((mask_set < 1) || (mask_set > 3)), "Invalid mask set");
> +	}
> +
> +	if ((mask_set < 1) || (mask_set > 3))
> +		mask_set = 3;
> +
> +	node = cvmx_coremask_core_to_node(core_num);
> +	core = cvmx_coremask_core_on_node(core_num);
> +
> +	for (grp = 0; grp < (cvmx_sso_num_xgrp() >> 6); grp++) {
> +		if (mask_set & 1) {
> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
> +			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
> +		}
> +		if (mask_set & 2) {
> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
> +			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
> +		}
> +	}
> +}
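
With 256 groups the mask array spans four 64-bit words; e.g. to let core 0
accept work only from groups 0-7 via the first mask set (untested sketch):

    u64 xgrp_mask[4] = { 0 };	/* 256 groups / 64 bits per word */

    xgrp_mask[0] = 0xffull;	/* groups 0-7 */
    cvmx_pow_set_xgrp_mask(0, 1, xgrp_mask);	/* core 0, first mask set */
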
> +
> +/**
> + * This function gets the group mask for a core.  The group mask bits
> + * indicate which groups each core will accept work from.
> + *
> + * @param core_num	Processor core to read the mask for.
> + * @param mask_set	7XXX has 2 sets of masks per core.
> + *     Bit 0 represents the first mask set, bit 1 -- the second.
> + * @param xgrp_mask	Provide pointer to u64 mask[8] output array.
> + *     Total number of groups is divided into a number of
> + *     64-bits mask sets. Each bit in the mask, if set, indicates
> + *     that the core accepts work from the corresponding group.
> + *
> + * NOTE: Each core can be configured to accept work in accordance to both
> + * mask sets, with the first having higher precedence over the second,
> + * or to accept work in accordance to just one of the two mask sets.
> + * The 'core_num' argument represents a processor core on any node
> + * in a coherent multi-chip system.
> + */
> +static inline void cvmx_pow_get_xgrp_mask(u64 core_num, u8 mask_set, u64 *xgrp_mask)
> +{
> +	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
> +	unsigned int grp, node, core;
> +	u64 reg_addr;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_warn_if(mask_set != 1 && mask_set != 2, "Invalid mask set");
> +	}
> +
> +	node = cvmx_coremask_core_to_node(core_num);
> +	core = cvmx_coremask_core_on_node(core_num);
> +
> +	for (grp = 0; grp < cvmx_sso_num_xgrp() >> 6; grp++) {
> +		if (mask_set & 1) {
> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
> +			grp_msk.u64 = csr_rd_node(node, reg_addr);
> +			xgrp_mask[grp] = grp_msk.s.grp_msk;
> +		}
> +		if (mask_set & 2) {
> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
> +			grp_msk.u64 = csr_rd_node(node, reg_addr);
> +			xgrp_mask[grp] = grp_msk.s.grp_msk;
> +		}
> +	}
> +}
> +
> +/**
> + * Executes SSO SWTAG command.
> + * This is similar to cvmx_pow_tag_sw() function, but uses linear
> + * (vs. integrated group-qos) group index.
> + */
> +static inline void cvmx_pow_tag_sw_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
> +					int node)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +
> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
> +		return;
> +	}
> +	CVMX_SYNCWS;
> +	cvmx_pow_tag_sw_wait();
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called with NULL tag\n", __func__);
> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
> +			     "%s called to perform a tag switch to the same tag\n", __func__);
> +		cvmx_warn_if(tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
> +			     __func__);
> +	}
> +	wqp->word1.cn78xx.tag = tag;
> +	wqp->word1.cn78xx.tag_type = tag_type;
> +	CVMX_SYNCWS;
> +
> +	tag_req.u64 = 0;
> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
> +	tag_req.s_cn78xx_other.type = tag_type;
> +
> +	ptr.u64 = 0;
> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +	ptr.s_cn78xx.is_io = 1;
> +	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +	ptr.s_cn78xx.node = node;
> +	ptr.s_cn78xx.tag = tag;
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Executes SSO SWTAG_FULL command.
> + * This is similar to cvmx_pow_tag_sw_full() function, but
> + * uses linear (vs. integrated group-qos) group index.
> + */
> +static inline void cvmx_pow_tag_sw_full_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
> +					     u8 xgrp, int node)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +	u16 gxgrp;
> +
> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
> +		return;
> +	}
> +	/* Ensure that there is not a pending tag switch, as a tag switch
> +	 * cannot be started, if a previous switch is still pending.
> +	 */
> +	CVMX_SYNCWS;
> +	cvmx_pow_tag_sw_wait();
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
> +			     "%s called to perform a tag switch to the same tag\n", __func__);
> +		cvmx_warn_if(tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
> +			     __func__);
> +		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
> +			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
> +				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
> +				     __func__, wqp, cvmx_pow_get_current_wqp());
> +	}
> +	gxgrp = node;
> +	gxgrp = gxgrp << 8 | xgrp;
> +	wqp->word1.cn78xx.grp = gxgrp;
> +	wqp->word1.cn78xx.tag = tag;
> +	wqp->word1.cn78xx.tag_type = tag_type;
> +	CVMX_SYNCWS;
> +
> +	tag_req.u64 = 0;
> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
> +	tag_req.s_cn78xx_other.type = tag_type;
> +	tag_req.s_cn78xx_other.grp = gxgrp;
> +	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
> +
> +	ptr.u64 = 0;
> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +	ptr.s_cn78xx.is_io = 1;
> +	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
> +	ptr.s_cn78xx.node = node;
> +	ptr.s_cn78xx.tag = tag;
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/**
> + * Submits work to an SSO group on any OCI node.
> + * This function updates the work queue entry in DRAM to match
> + * the arguments given.
> + * Note that the tag provided is for the work queue entry submitted,
> + * and is unrelated to the tag that the core currently holds.
> + *
> + * @param wqp pointer to work queue entry to submit.
> + * This entry is updated to match the other parameters
> + * @param tag tag value to be assigned to work queue entry
> + * @param tag_type type of tag
> + * @param xgrp native CN78XX group in the range 0..255
> + * @param node The OCI node number for the target group
> + *
> + * When this function is called on a model prior to CN78XX, which does
> + * not support OCI nodes, the 'node' argument is ignored, and the 'xgrp'
> + * parameter is converted into 'qos' (the lower 3 bits) and 'grp' (the higher
> + * 5 bits), following the backward-compatibility scheme of translating
> + * between new and old style group numbers.
> + */
> +static inline void cvmx_pow_work_submit_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
> +					     u8 xgrp, u8 node)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +	u16 group;
> +
> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
> +		return;
> +	}
> +	group = node;
> +	group = group << 8 | xgrp;
> +	wqp->word1.cn78xx.tag = tag;
> +	wqp->word1.cn78xx.tag_type = tag_type;
> +	wqp->word1.cn78xx.grp = group;
> +	CVMX_SYNCWS;
> +
> +	tag_req.u64 = 0;
> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
> +	tag_req.s_cn78xx_other.type = tag_type;
> +	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
> +	tag_req.s_cn78xx_other.grp = group;
> +
> +	ptr.u64 = 0;
> +	ptr.s_cn78xx.did = 0x66; // CVMX_OCT_DID_TAG_TAG6;
> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +	ptr.s_cn78xx.is_io = 1;
> +	ptr.s_cn78xx.node = node;
> +	ptr.s_cn78xx.tag = tag;
> +
> +	/* SYNC write to memory before the work submit. This is necessary
> +	 * as POW may read values from DRAM at this time.
> +	 */
> +	CVMX_SYNCWS;
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
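
For example (sketch, untested; the tag, tag type, group and node values are
arbitrary):

    static void submit_example(cvmx_wqe_t *wqe)
    {
    	/* Queue the WQE with tag 0x100, ORDERED type, group 5 on node 0 */
    	cvmx_pow_work_submit_node(wqe, 0x100, CVMX_POW_TAG_TYPE_ORDERED,
    				  5, 0);
    }
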
> +
> +/**
> + * Executes the SSO SWTAG_DESCHED operation.
> + * This is similar to the cvmx_pow_tag_sw_desched() function, but
> + * uses linear (vs. unified group-qos) group index.
> + */
> +static inline void cvmx_pow_tag_sw_desched_node(cvmx_wqe_t *wqe, u32 tag,
> +						cvmx_pow_tag_type_t tag_type, u8 xgrp, u64 no_sched,
> +						u8 node)
> +{
> +	union cvmx_pow_tag_req_addr ptr;
> +	cvmx_pow_tag_req_t tag_req;
> +	u16 group;
> +
> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
> +		return;
> +	}
> +	/* Need to make sure any writes to the work queue entry are complete */
> +	CVMX_SYNCWS;
> +	/*
> +	 * Ensure that there is not a pending tag switch, as a tag switch cannot
> +	 * be started if a previous switch is still pending.
> +	 */
> +	cvmx_pow_tag_sw_wait();
> +
> +	if (CVMX_ENABLE_POW_CHECKS) {
> +		cvmx_pow_tag_info_t current_tag;
> +
> +		__cvmx_pow_warn_if_pending_switch(__func__);
> +		current_tag = cvmx_pow_get_current_tag();
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
> +			     "%s called with NULL_NULL tag\n", __func__);
> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
> +			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
> +			     __func__);
> +		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
> +			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
> +			     "%s called where neither the before nor after tag is ATOMIC\n",
> +			     __func__);
> +	}
> +	group = node;
> +	group = group << 8 | xgrp;
> +	wqe->word1.cn78xx.tag = tag;
> +	wqe->word1.cn78xx.tag_type = tag_type;
> +	wqe->word1.cn78xx.grp = group;
> +	CVMX_SYNCWS;
> +
> +	tag_req.u64 = 0;
> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
> +	tag_req.s_cn78xx_other.type = tag_type;
> +	tag_req.s_cn78xx_other.grp = group;
> +	tag_req.s_cn78xx_other.no_sched = no_sched;
> +
> +	ptr.u64 = 0;
> +	ptr.s.mem_region = CVMX_IO_SEG;
> +	ptr.s.is_io = 1;
> +	ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
> +	ptr.s_cn78xx.node = node;
> +	ptr.s_cn78xx.tag = tag;
> +	cvmx_write_io(ptr.u64, tag_req.u64);
> +}
> +
> +/* Executes the UPD_WQP_GRP SSO operation.
> + *
> + * @param wqp  Pointer to the new work queue entry to switch to.
> + * @param xgrp SSO group in the range 0..255
> + *
> + * NOTE: The operation can be performed only on the local node.
> + */
> +static inline void cvmx_sso_update_wqp_group(cvmx_wqe_t *wqp, u8 xgrp)
> +{
> +	union cvmx_pow_tag_req_addr addr;
> +	cvmx_pow_tag_req_t data;
> +	int node = cvmx_get_node_num();
> +	int group = node << 8 | xgrp;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		debug("ERROR: %s is not supported on this chip\n", __func__);
> +		return;
> +	}
> +	wqp->word1.cn78xx.grp = group;
> +	CVMX_SYNCWS;
> +
> +	data.u64 = 0;
> +	data.s_cn78xx_other.op = CVMX_POW_TAG_OP_UPDATE_WQP_GRP;
> +	data.s_cn78xx_other.grp = group;
> +	data.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
> +
> +	addr.u64 = 0;
> +	addr.s_cn78xx.mem_region = CVMX_IO_SEG;
> +	addr.s_cn78xx.is_io = 1;
> +	addr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
> +	addr.s_cn78xx.node = node;
> +	cvmx_write_io(addr.u64, data.u64);
> +}
> +
> +/******************************************************************************/
> +/* Define usage of bits within the 32 bit tag values.                         */
> +/******************************************************************************/
> +/*
> + * Number of bits of the tag used by software.  The SW bits
> + * are always a contiguous block of the high bits, starting at bit 31.
> + * The hardware bits are always the low bits.  By default, the top 8 bits
> + * of the tag are reserved for software, and the low 24 are set by the IPD unit.
> + */
> +#define CVMX_TAG_SW_BITS  (8)
> +#define CVMX_TAG_SW_SHIFT (32 - CVMX_TAG_SW_BITS)
> +
> +/* Below is the list of values for the top 8 bits of the tag. */
> +/*
> + * Tag values with top byte of this value are reserved for internal executive
> + * uses
> + */
> +#define CVMX_TAG_SW_BITS_INTERNAL 0x1
> +
> +/*
> + * The executive divides the remaining 24 bits as follows:
> + * the upper 8 bits (bits 23 - 16 of the tag) define a subgroup,
> + * the lower 16 bits (bits 15 - 0 of the tag) define the value within
> + * the subgroup. Note that this section describes the format of tags generated
> + * by software - refer to the hardware documentation for a description of the
> + * tag values generated by the packet input hardware.
> + * Subgroups are defined here
> + */
> +
> +/* Mask for the value portion of the tag */
> +#define CVMX_TAG_SUBGROUP_MASK	0xFFFF
> +#define CVMX_TAG_SUBGROUP_SHIFT 16
> +#define CVMX_TAG_SUBGROUP_PKO	0x1
> +
> +/* End of executive tag subgroup definitions */
> +
> +/* The remaining software bit values 0x2 - 0xff are available
> + * for application use */
> +
> +/**
> + * This function creates a 32 bit tag value from the two values provided.
> + *
> + * @param sw_bits The upper bits (number depends on configuration) are set
> + *     to this value.  The remainder of bits are set by the hw_bits parameter.
> + * @param hw_bits The lower bits (number depends on configuration) are set
> + *     to this value.  The remainder of bits are set by the sw_bits parameter.
> + *
> + * @return 32 bit value of the combined hw and sw bits.
> + */
> +static inline u32 cvmx_pow_tag_compose(u64 sw_bits, u64 hw_bits)
> +{
> +	return (((sw_bits & cvmx_build_mask(CVMX_TAG_SW_BITS)) << CVMX_TAG_SW_SHIFT) |
> +		(hw_bits & cvmx_build_mask(32 - CVMX_TAG_SW_BITS)));
> +}
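
A worked example with the default CVMX_TAG_SW_BITS = 8, where the software
byte lands in bits 31:24 (the two extractors used below are defined just
after this function):

    u32 tag = cvmx_pow_tag_compose(CVMX_TAG_SW_BITS_INTERNAL, 0x123456);

    /* tag == 0x01123456                         */
    /* cvmx_pow_tag_get_sw_bits(tag) == 0x01     */
    /* cvmx_pow_tag_get_hw_bits(tag) == 0x123456 */
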
> +
> +/**
> + * Extracts the bits allocated for software use from the tag
> + *
> + * @param tag    32 bit tag value
> + *
> + * @return N bit software tag value, where N is configurable with
> + *     the CVMX_TAG_SW_BITS define
> + */
> +static inline u32 cvmx_pow_tag_get_sw_bits(u64 tag)
> +{
> +	return ((tag >> (32 - CVMX_TAG_SW_BITS)) & cvmx_build_mask(CVMX_TAG_SW_BITS));
> +}
> +
> +/**
> + *
> + * Extracts the bits allocated for hardware use from the tag
> + *
> + * @param tag    32 bit tag value
> + *
> + * @return (32 - N) bit hardware tag value, where N is configurable with
> + *     the CVMX_TAG_SW_BITS define
> + */
> +static inline u32 cvmx_pow_tag_get_hw_bits(u64 tag)
> +{
> +	return (tag & cvmx_build_mask(32 - CVMX_TAG_SW_BITS));
> +}
> +
> +static inline u64 cvmx_sso3_get_wqe_count(int node)
> +{
> +	cvmx_sso_grpx_aq_cnt_t aq_cnt;
> +	unsigned int grp = 0;
> +	u64 cnt = 0;
> +
> +	for (grp = 0; grp < cvmx_sso_num_xgrp(); grp++) {
> +		aq_cnt.u64 = csr_rd_node(node, CVMX_SSO_GRPX_AQ_CNT(grp));
> +		cnt += aq_cnt.s.aq_cnt;
> +	}
> +	return cnt;
> +}
> +
> +static inline u64 cvmx_sso_get_total_wqe_count(void)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		int node = cvmx_get_node_num();
> +
> +		return cvmx_sso3_get_wqe_count(node);
> +	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
> +		cvmx_sso_iq_com_cnt_t sso_iq_com_cnt;
> +
> +		sso_iq_com_cnt.u64 = csr_rd(CVMX_SSO_IQ_COM_CNT);
> +		return (sso_iq_com_cnt.s.iq_cnt);
> +	} else {
> +		cvmx_pow_iq_com_cnt_t pow_iq_com_cnt;
> +
> +		pow_iq_com_cnt.u64 = csr_rd(CVMX_POW_IQ_COM_CNT);
> +		return (pow_iq_com_cnt.s.iq_cnt);
> +	}
> +}
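
E.g. a quick backlog check (untested sketch):

    u64 backlog = cvmx_sso_get_total_wqe_count();

    printf("SSO/POW input queues hold %llu entries\n",
           (unsigned long long)backlog);
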
> +
> +/**
> + * Store the current POW internal state into the supplied
> + * buffer. It is recommended that you pass a buffer of at least
> + * 128KB. The format of the capture may change based on SDK
> + * version and Octeon chip.
> + *
> + * @param buffer Buffer to store capture into
> + * @param buffer_size The size of the supplied buffer
> + *
> + * @return Zero on success, negative on failure
> + */
> +int cvmx_pow_capture(void *buffer, int buffer_size);
> +
> +/**
> + * Dump a POW capture to the console in a human readable format.
> + *
> + * @param buffer POW capture from cvmx_pow_capture()
> + * @param buffer_size Size of the buffer
> + */
> +void cvmx_pow_display(void *buffer, int buffer_size);
> +
> +/**
> + * Return the number of POW entries supported by this chip
> + *
> + * @return Number of POW entries
> + */
> +int cvmx_pow_get_num_entries(void);
> +int cvmx_pow_get_dump_size(void);
> +
> +/**
> + * This will allocate count number of SSO groups on the specified node to the
> + * calling application. These groups will be for exclusive use of the
> + * application until they are freed.
> + * @param node The numa node for the allocation.
> + * @param base_group Pointer to the initial group, -1 to allocate anywhere.
> + * @param count  The number of consecutive groups to allocate.
> + * @return 0 on success and -1 on failure.
> + */
> +int cvmx_sso_reserve_group_range(int node, int *base_group, int count);
> +#define cvmx_sso_allocate_group_range cvmx_sso_reserve_group_range
> +int cvmx_sso_reserve_group(int node);
> +#define cvmx_sso_allocate_group cvmx_sso_reserve_group
> +int cvmx_sso_release_group_range(int node, int base_group, int count);
> +int cvmx_sso_release_group(int node, int group);
> +
> +/**
> + * Show integrated SSO configuration.
> + *
> + * @param node	   node number
> + */
> +int cvmx_sso_config_dump(unsigned int node);
> +
> +/**
> + * Show integrated SSO statistics.
> + *
> + * @param node	   node number
> + */
> +int cvmx_sso_stats_dump(unsigned int node);
> +
> +/**
> + * Clear integrated SSO statistics.
> + *
> + * @param node	   node number
> + */
> +int cvmx_sso_stats_clear(unsigned int node);
> +
> +/**
> + * Show SSO core-group affinity and priority per node (multi-node systems)
> + */
> +void cvmx_pow_mask_priority_dump_node(unsigned int node, struct cvmx_coremask *avail_coremask);
> +
> +/**
> + * Show POW/SSO core-group affinity and priority (legacy, single-node systems)
> + */
> +static inline void cvmx_pow_mask_priority_dump(struct cvmx_coremask *avail_coremask)
> +{
> +	cvmx_pow_mask_priority_dump_node(0 /*node */, avail_coremask);
> +}
> +
> +/**
> + * Show SSO performance counters (multi-node systems)
> + */
> +void cvmx_pow_show_perf_counters_node(unsigned int node);
> +
> +/**
> + * Show POW/SSO performance counters (legacy, single-node systems)
> + */
> +static inline void cvmx_pow_show_perf_counters(void)
> +{
> +	cvmx_pow_show_perf_counters_node(0 /*node */);
> +}
> +
> +#endif /* __CVMX_POW_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-qlm.h b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
> new file mode 100644
> index 000000000000..19915eb82c51
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
> @@ -0,0 +1,304 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef __CVMX_QLM_H__
> +#define __CVMX_QLM_H__
> +
> +/*
> + * Interface 0 on the 78xx can be connected to qlm 0 or qlm 2. When interface
> + * 0 is connected to qlm 0, this macro must be set to 0. When interface 0 is
> + * connected to qlm 2, this macro must be set to 1.
> + */
> +#define MUX_78XX_IFACE0 0
> +
> +/*
> + * Interface 1 on the 78xx can be connected to qlm 1 or qlm 3. When interface
> + * 1 is connected to qlm 1, this macro must be set to 0. When interface 1 is
> + * connected to qlm 3, this macro must be set to 1.
> + */
> +#define MUX_78XX_IFACE1 0
> +
> +/* Uncomment this line to print QLM JTAG state */
> +/* #define CVMX_QLM_DUMP_STATE 1 */
> +
> +typedef struct {
> +	const char *name;
> +	int stop_bit;
> +	int start_bit;
> +} __cvmx_qlm_jtag_field_t;
> +
> +/**
> + * Return the number of QLMs supported by the chip
> + *
> + * @return  Number of QLMs
> + */
> +int cvmx_qlm_get_num(void);
> +
> +/**
> + * Return the qlm number based on the interface
> + *
> + * @param xiface  Interface to look up
> + */
> +int cvmx_qlm_interface(int xiface);
> +
> +/**
> + * Return the qlm number for a port in the interface
> + *
> + * @param xiface  interface to look up
> + * @param index  index in an interface
> + *
> + * @return the qlm number based on the xiface
> + */
> +int cvmx_qlm_lmac(int xiface, int index);
> +
> +/**
> + * Return if only DLM5/DLM6/DLM5+DLM6 is used by BGX
> + *
> + * @param BGX  BGX to search for.
> + *
> + * @return muxes used 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
> + */
> +int cvmx_qlm_mux_interface(int bgx);
> +
> +/**
> + * Return number of lanes for a given qlm
> + *
> + * @param qlm QLM block to query
> + *
> + * @return  Number of lanes
> + */
> +int cvmx_qlm_get_lanes(int qlm);
> +
> +/**
> + * Get the QLM JTAG fields based on Octeon model on the supported chips.
> + *
> + * @return  qlm_jtag_field_t structure
> + */
> +const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void);
> +
> +/**
> + * Get the QLM JTAG length by going through qlm_jtag_field for each
> + * Octeon model that is supported
> + *
> + * @return return the length.
> + */
> +int cvmx_qlm_jtag_get_length(void);
> +
> +/**
> + * Initialize the QLM layer
> + */
> +void cvmx_qlm_init(void);
> +
> +/**
> + * Get a field in a QLM JTAG chain
> + *
> + * @param qlm    QLM to get
> + * @param lane   Lane in QLM to get
> + * @param name   String name of field
> + *
> + * @return JTAG field value
> + */
> +u64 cvmx_qlm_jtag_get(int qlm, int lane, const char *name);
> +
> +/**
> + * Set a field in a QLM JTAG chain
> + *
> + * @param qlm    QLM to set
> + * @param lane   Lane in QLM to set, or -1 for all lanes
> + * @param name   String name of field
> + * @param value  Value of the field
> + */
> +void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, u64 value);
> +
> +/**
> + * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
> + * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
> + * JTAG settings for the QLMs to run better at 5 and 6.25 GHz.
> + */
> +void __cvmx_qlm_speed_tweak(void);
> +
> +/**
> + * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
> + * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
> + * This function tweaks the JTAG settings for the QLMs for PCIe to run better.
> + */
> +void __cvmx_qlm_pcie_idle_dac_tweak(void);
> +
> +void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane);
> +
> +/**
> + * Get the speed (Gbaud) of the QLM in MHz.
> + *
> + * @param qlm    QLM to examine
> + *
> + * @return Speed in MHz
> + */
> +int cvmx_qlm_get_gbaud_mhz(int qlm);
> +
> +/**
> + * Get the speed (Gbaud) of the QLM in MHz on a specific node.
> + *
> + * @param node   Target QLM node
> + * @param qlm    QLM to examine
> + *
> + * @return Speed in MHz
> + */
> +int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm);
> +
> +enum cvmx_qlm_mode {
> +	CVMX_QLM_MODE_DISABLED = -1,
> +	CVMX_QLM_MODE_SGMII = 1,
> +	CVMX_QLM_MODE_XAUI,
> +	CVMX_QLM_MODE_RXAUI,
> +	CVMX_QLM_MODE_PCIE,	/* gen3 / gen2 / gen1 */
> +	CVMX_QLM_MODE_PCIE_1X2, /* 1x2 gen2 / gen1 */
> +	CVMX_QLM_MODE_PCIE_2X1, /* 2x1 gen2 / gen1 */
> +	CVMX_QLM_MODE_PCIE_1X1, /* 1x1 gen2 / gen1 */
> +	CVMX_QLM_MODE_SRIO_1X4, /* 1x4 short / long */
> +	CVMX_QLM_MODE_SRIO_2X2, /* 2x2 short / long */
> +	CVMX_QLM_MODE_SRIO_4X1, /* 4x1 short / long */
> +	CVMX_QLM_MODE_ILK,
> +	CVMX_QLM_MODE_QSGMII,
> +	CVMX_QLM_MODE_SGMII_SGMII,
> +	CVMX_QLM_MODE_SGMII_DISABLED,
> +	CVMX_QLM_MODE_DISABLED_SGMII,
> +	CVMX_QLM_MODE_SGMII_QSGMII,
> +	CVMX_QLM_MODE_QSGMII_QSGMII,
> +	CVMX_QLM_MODE_QSGMII_DISABLED,
> +	CVMX_QLM_MODE_DISABLED_QSGMII,
> +	CVMX_QLM_MODE_QSGMII_SGMII,
> +	CVMX_QLM_MODE_RXAUI_1X2,
> +	CVMX_QLM_MODE_SATA_2X1,
> +	CVMX_QLM_MODE_XLAUI,
> +	CVMX_QLM_MODE_XFI,
> +	CVMX_QLM_MODE_10G_KR,
> +	CVMX_QLM_MODE_40G_KR4,
> +	CVMX_QLM_MODE_PCIE_1X8, /* 1x8 gen3 / gen2 / gen1 */
> +	CVMX_QLM_MODE_RGMII_SGMII,
> +	CVMX_QLM_MODE_RGMII_XFI,
> +	CVMX_QLM_MODE_RGMII_10G_KR,
> +	CVMX_QLM_MODE_RGMII_RXAUI,
> +	CVMX_QLM_MODE_RGMII_XAUI,
> +	CVMX_QLM_MODE_RGMII_XLAUI,
> +	CVMX_QLM_MODE_RGMII_40G_KR4,
> +	CVMX_QLM_MODE_MIXED,		/* BGX2 is mixed mode, DLM5(SGMII) & DLM6(XFI) */
> +	CVMX_QLM_MODE_SGMII_2X1,	/* Configure BGX2 separate for DLM5 & DLM6 */
> +	CVMX_QLM_MODE_10G_KR_1X2,	/* Configure BGX2 separate for DLM5 & DLM6 */
> +	CVMX_QLM_MODE_XFI_1X2,		/* Configure BGX2 separate for DLM5 & DLM6 */
> +	CVMX_QLM_MODE_RGMII_SGMII_1X1,	/* Configure BGX2, applies to DLM5 */
> +	CVMX_QLM_MODE_RGMII_SGMII_2X1,	/* Configure BGX2, applies to DLM6 */
> +	CVMX_QLM_MODE_RGMII_10G_KR_1X1, /* Configure BGX2, applies to DLM6 */
> +	CVMX_QLM_MODE_RGMII_XFI_1X1,	/* Configure BGX2, applies to DLM6 */
> +	CVMX_QLM_MODE_SDL,		/* RMAC Pipe */
> +	CVMX_QLM_MODE_CPRI,		/* RMAC */
> +	CVMX_QLM_MODE_OCI
> +};
> +
> +enum cvmx_gmx_inf_mode {
> +	CVMX_GMX_INF_MODE_DISABLED = 0,
> +	CVMX_GMX_INF_MODE_SGMII = 1,  /* Other interface can be SGMII or QSGMII */
> +	CVMX_GMX_INF_MODE_QSGMII = 2, /* Other interface can be SGMII or QSGMII */
> +	CVMX_GMX_INF_MODE_RXAUI = 3,  /* Only interface 0, interface 1 must be DISABLED */
> +};
> +
> +/**
> + * Eye diagram captures are stored in the following structure
> + */
> +typedef struct {
> +	int width;	   /* Width in the x direction (time) */
> +	int height;	   /* Height in the y direction (voltage) */
> +	u32 data[64][128]; /* Error count at location, saturates as max */
> +} cvmx_qlm_eye_t;
> +
> +/**
> + * These apply to DLM1 and DLM2 if it's not in SATA mode
> + * Manual refers to lanes as follows:
> + *  DLM 0 lane 0 == GSER0 lane 0
> + *  DLM 0 lane 1 == GSER0 lane 1
> + *  DLM 1 lane 2 == GSER1 lane 0
> + *  DLM 1 lane 3 == GSER1 lane 1
> + *  DLM 2 lane 4 == GSER2 lane 0
> + *  DLM 2 lane 5 == GSER2 lane 1
> + */
> +enum cvmx_pemx_cfg_mode {
> +	CVMX_PEM_MD_GEN2_2LANE = 0, /* Valid for PEM0(DLM1), PEM1(DLM2) */
> +	CVMX_PEM_MD_GEN2_1LANE = 1, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
> +	CVMX_PEM_MD_GEN2_4LANE = 2, /* Valid for PEM0(DLM1-2) */
> +	/* Reserved */
> +	CVMX_PEM_MD_GEN1_2LANE = 4, /* Valid for PEM0(DLM1), PEM1(DLM2) */
> +	CVMX_PEM_MD_GEN1_1LANE = 5, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
> +	CVMX_PEM_MD_GEN1_4LANE = 6, /* Valid for PEM0(DLM1-2) */
> +	/* Reserved */
> +};
> +
> +/*
> + * Read QLM and return mode.
> + */
> +enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm);
> +enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm);
> +enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int dlm_mode, int interface);
> +void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier);
> +
> +void cvmx_qlm_display_registers(int qlm);
> +
> +int cvmx_qlm_measure_clock(int qlm);
> +
> +/**
> + * Measure the reference clock of a QLM on a multi-node setup
> + *
> + * @param node   node to measure
> + * @param qlm    QLM to measure
> + *
> + * @return Clock rate in Hz
> + */
> +int cvmx_qlm_measure_clock_node(int node, int qlm);
> +
> +/*
> + * Perform RX equalization on a QLM
> + *
> + * @param node	Node the QLM is on
> + * @param qlm	QLM to perform RX equalization on
> + * @param lane	Lane to use, or -1 for all lanes
> + *
> + * @return Zero on success, negative if any lane failed RX equalization
> + */
> +int __cvmx_qlm_rx_equalization(int node, int qlm, int lane);
> +
> +/**
> + * Errata GSER-27882 -GSER 10GBASE-KR Transmit Equalizer
> + * Training may not update PHY Tx Taps. This function is not static
> + * so we can share it with BGX KR
> + *
> + * @param node	Node to apply errata workaround
> + * @param qlm	QLM to apply errata workaround
> + * @param lane	Lane to apply the errata
> + */
> +int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane);
> +
> +void cvmx_qlm_gser_errata_25992(int node, int qlm);
> +
> +#ifdef CVMX_DUMP_GSER
> +/**
> + * Dump GSER configuration for node 0
> + */
> +int cvmx_dump_gser_config(unsigned int gser);
> +/**
> + * Dump GSER status for node 0
> + */
> +int cvmx_dump_gser_status(unsigned int gser);
> +/**
> + * Dump GSER configuration
> + */
> +int cvmx_dump_gser_config_node(unsigned int node, unsigned int gser);
> +/**
> + * Dump GSER status
> + */
> +int cvmx_dump_gser_status_node(unsigned int node, unsigned int gser);
> +#endif
> +
> +int cvmx_qlm_eye_display(int node, int qlm, int qlm_lane, int format, const cvmx_qlm_eye_t *eye);
> +
> +void cvmx_prbs_process_cmd(int node, int qlm, int mode);
> +
> +#endif /* __CVMX_QLM_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-scratch.h b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
> new file mode 100644
> index 000000000000..d567a8453b7a
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
> @@ -0,0 +1,113 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * This file provides support for the processor local scratch memory.
> + * Scratch memory is byte addressable - all addresses are byte addresses.
> + */
> +
> +#ifndef __CVMX_SCRATCH_H__
> +#define __CVMX_SCRATCH_H__
> +
> +/* Note: This define must be a long, not a long long in order to compile
> + * without warnings for both 32bit and 64bit.
> + */
> +#define CVMX_SCRATCH_BASE (-32768l) /* 0xffffffffffff8000 */
> +
> +/* Scratch line for LMTST/LMTDMA on Octeon3 models */
> +#ifdef CVMX_CAVIUM_OCTEON3
> +#define CVMX_PKO_LMTLINE 2ull
> +#endif
> +
> +/**
> + * Reads an 8 bit value from the processor local scratchpad memory.
> + *
> + * @param address byte address to read from
> + *
> + * @return value read
> + */
> +static inline u8 cvmx_scratch_read8(u64 address)
> +{
> +	return *CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address);
> +}
> +
> +/**
> + * Reads a 16 bit value from the processor local scratchpad memory.
> + *
> + * @param address byte address to read from
> + *
> + * @return value read
> + */
> +static inline u16 cvmx_scratch_read16(u64 address)
> +{
> +	return *CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address);
> +}
> +
> +/**
> + * Reads a 32 bit value from the processor local scratchpad memory.
> + *
> + * @param address byte address to read from
> + *
> + * @return value read
> + */
> +static inline u32 cvmx_scratch_read32(u64 address)
> +{
> +	return *CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address);
> +}
> +
> +/**
> + * Reads a 64 bit value from the processor local scratchpad memory.
> + *
> + * @param address byte address to read from
> + *
> + * @return value read
> + */
> +static inline u64 cvmx_scratch_read64(u64 address)
> +{
> +	return *CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address);
> +}
> +
> +/**
> + * Writes an 8 bit value to the processor local scratchpad memory.
> + *
> + * @param address byte address to write to
> + * @param value   value to write
> + */
> +static inline void cvmx_scratch_write8(u64 address, u64 value)
> +{
> +	*CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address) = (u8)value;
> +}
> +
> +/**
> + * Writes a 16 bit value to the processor local scratchpad memory.
> + *
> + * @param address byte address to write to
> + * @param value   value to write
> + */
> +static inline void cvmx_scratch_write16(u64 address, u64 value)
> +{
> +	*CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address) = (u16)value;
> +}
> +
> +/**
> + * Writes a 32 bit value to the processor local scratchpad memory.
> + *
> + * @param address byte address to write to
> + * @param value   value to write
> + */
> +static inline void cvmx_scratch_write32(u64 address, u64 value)
> +{
> +	*CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address) = (u32)value;
> +}
> +
> +/**
> + * Writes a 64 bit value to the processor local scratchpad memory.
> + *
> + * @param address byte address to write to
> + * @param value   value to write
> + */
> +static inline void cvmx_scratch_write64(u64 address, u64 value)
> +{
> +	*CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address) = value;
> +}
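
A trivial (untested) round trip through the scratchpad; byte offset 8 is
arbitrary but must be aligned to the access width:

    cvmx_scratch_write64(8, 0x0123456789abcdefull);
    if (cvmx_scratch_read64(8) != 0x0123456789abcdefull)
    	printf("scratch read-back mismatch\n");
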
> +
> +#endif /* __CVMX_SCRATCH_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-wqe.h b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
> new file mode 100644
> index 000000000000..c9e3c8312a65
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
> @@ -0,0 +1,1462 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + *
> + * This header file defines the work queue entry (wqe) data structure.
> + * Since this is a commonly used structure that depends on structures
> + * from several hardware blocks, those definitions have been placed
> + * in this file to create a single point of definition of the wqe
> + * format.
> + * Data structures are still named according to the block that they
> + * relate to.
> + */
> +
> +#ifndef __CVMX_WQE_H__
> +#define __CVMX_WQE_H__
> +
> +#include "cvmx-packet.h"
> +#include "cvmx-csr-enums.h"
> +#include "cvmx-pki-defs.h"
> +#include "cvmx-pip-defs.h"
> +#include "octeon-feature.h"
> +
> +#define OCT_TAG_TYPE_STRING(x)						\
> +	(((x) == CVMX_POW_TAG_TYPE_ORDERED) ?				\
> +	 "ORDERED" :							\
> +	 (((x) == CVMX_POW_TAG_TYPE_ATOMIC) ?				\
> +	  "ATOMIC" :							\
> +	  (((x) == CVMX_POW_TAG_TYPE_NULL) ? "NULL" : "NULL_NULL")))
> +
> +/* Error levels in WQE WORD2 (ERRLEV).*/
> +#define PKI_ERRLEV_E__RE_M 0x0
> +#define PKI_ERRLEV_E__LA_M 0x1
> +#define PKI_ERRLEV_E__LB_M 0x2
> +#define PKI_ERRLEV_E__LC_M 0x3
> +#define PKI_ERRLEV_E__LD_M 0x4
> +#define PKI_ERRLEV_E__LE_M 0x5
> +#define PKI_ERRLEV_E__LF_M 0x6
> +#define PKI_ERRLEV_E__LG_M 0x7
> +
> +enum cvmx_pki_errlevel {
> +	CVMX_PKI_ERRLEV_E_RE = PKI_ERRLEV_E__RE_M,
> +	CVMX_PKI_ERRLEV_E_LA = PKI_ERRLEV_E__LA_M,
> +	CVMX_PKI_ERRLEV_E_LB = PKI_ERRLEV_E__LB_M,
> +	CVMX_PKI_ERRLEV_E_LC = PKI_ERRLEV_E__LC_M,
> +	CVMX_PKI_ERRLEV_E_LD = PKI_ERRLEV_E__LD_M,
> +	CVMX_PKI_ERRLEV_E_LE = PKI_ERRLEV_E__LE_M,
> +	CVMX_PKI_ERRLEV_E_LF = PKI_ERRLEV_E__LF_M,
> +	CVMX_PKI_ERRLEV_E_LG = PKI_ERRLEV_E__LG_M
> +};
> +
> +#define CVMX_PKI_ERRLEV_MAX BIT(3) /* The size of WORD2:ERRLEV field. */
> +
> +/* Error code in WQE WORD2 (OPCODE).*/
> +#define CVMX_PKI_OPCODE_RE_NONE	      0x0
> +#define CVMX_PKI_OPCODE_RE_PARTIAL    0x1
> +#define CVMX_PKI_OPCODE_RE_JABBER     0x2
> +#define CVMX_PKI_OPCODE_RE_FCS	      0x7
> +#define CVMX_PKI_OPCODE_RE_FCS_RCV    0x8
> +#define CVMX_PKI_OPCODE_RE_TERMINATE  0x9
> +#define CVMX_PKI_OPCODE_RE_RX_CTL     0xb
> +#define CVMX_PKI_OPCODE_RE_SKIP	      0xc
> +#define CVMX_PKI_OPCODE_RE_DMAPKT     0xf
> +#define CVMX_PKI_OPCODE_RE_PKIPAR     0x13
> +#define CVMX_PKI_OPCODE_RE_PKIPCAM    0x14
> +#define CVMX_PKI_OPCODE_RE_MEMOUT     0x15
> +#define CVMX_PKI_OPCODE_RE_BUFS_OFLOW 0x16
> +#define CVMX_PKI_OPCODE_L2_FRAGMENT   0x20
> +#define CVMX_PKI_OPCODE_L2_OVERRUN    0x21
> +#define CVMX_PKI_OPCODE_L2_PFCS	      0x22
> +#define CVMX_PKI_OPCODE_L2_PUNY	      0x23
> +#define CVMX_PKI_OPCODE_L2_MAL	      0x24
> +#define CVMX_PKI_OPCODE_L2_OVERSIZE   0x25
> +#define CVMX_PKI_OPCODE_L2_UNDERSIZE  0x26
> +#define CVMX_PKI_OPCODE_L2_LENMISM    0x27
> +#define CVMX_PKI_OPCODE_IP_NOT	      0x41
> +#define CVMX_PKI_OPCODE_IP_CHK	      0x42
> +#define CVMX_PKI_OPCODE_IP_MAL	      0x43
> +#define CVMX_PKI_OPCODE_IP_MALD	      0x44
> +#define CVMX_PKI_OPCODE_IP_HOP	      0x45
> +#define CVMX_PKI_OPCODE_L4_MAL	      0x61
> +#define CVMX_PKI_OPCODE_L4_CHK	      0x62
> +#define CVMX_PKI_OPCODE_L4_LEN	      0x63
> +#define CVMX_PKI_OPCODE_L4_PORT	      0x64
> +#define CVMX_PKI_OPCODE_TCP_FLAG      0x65
> +
> +#define CVMX_PKI_OPCODE_MAX BIT(8) /* The size of WORD2:OPCODE field. */
> +
> +/* Layer types in pki */
> +#define CVMX_PKI_LTYPE_E_NONE_M	      0x0
> +#define CVMX_PKI_LTYPE_E_ENET_M	      0x1
> +#define CVMX_PKI_LTYPE_E_VLAN_M	      0x2
> +#define CVMX_PKI_LTYPE_E_SNAP_PAYLD_M 0x5
> +#define CVMX_PKI_LTYPE_E_ARP_M	      0x6
> +#define CVMX_PKI_LTYPE_E_RARP_M	      0x7
> +#define CVMX_PKI_LTYPE_E_IP4_M	      0x8
> +#define CVMX_PKI_LTYPE_E_IP4_OPT_M    0x9
> +#define CVMX_PKI_LTYPE_E_IP6_M	      0xA
> +#define CVMX_PKI_LTYPE_E_IP6_OPT_M    0xB
> +#define CVMX_PKI_LTYPE_E_IPSEC_ESP_M  0xC
> +#define CVMX_PKI_LTYPE_E_IPFRAG_M     0xD
> +#define CVMX_PKI_LTYPE_E_IPCOMP_M     0xE
> +#define CVMX_PKI_LTYPE_E_TCP_M	      0x10
> +#define CVMX_PKI_LTYPE_E_UDP_M	      0x11
> +#define CVMX_PKI_LTYPE_E_SCTP_M	      0x12
> +#define CVMX_PKI_LTYPE_E_UDP_VXLAN_M  0x13
> +#define CVMX_PKI_LTYPE_E_GRE_M	      0x14
> +#define CVMX_PKI_LTYPE_E_NVGRE_M      0x15
> +#define CVMX_PKI_LTYPE_E_GTP_M	      0x16
> +#define CVMX_PKI_LTYPE_E_SW28_M	      0x1C
> +#define CVMX_PKI_LTYPE_E_SW29_M	      0x1D
> +#define CVMX_PKI_LTYPE_E_SW30_M	      0x1E
> +#define CVMX_PKI_LTYPE_E_SW31_M	      0x1F
> +
> +enum cvmx_pki_layer_type {
> +	CVMX_PKI_LTYPE_E_NONE = CVMX_PKI_LTYPE_E_NONE_M,
> +	CVMX_PKI_LTYPE_E_ENET = CVMX_PKI_LTYPE_E_ENET_M,
> +	CVMX_PKI_LTYPE_E_VLAN = CVMX_PKI_LTYPE_E_VLAN_M,
> +	CVMX_PKI_LTYPE_E_SNAP_PAYLD = CVMX_PKI_LTYPE_E_SNAP_PAYLD_M,
> +	CVMX_PKI_LTYPE_E_ARP = CVMX_PKI_LTYPE_E_ARP_M,
> +	CVMX_PKI_LTYPE_E_RARP = CVMX_PKI_LTYPE_E_RARP_M,
> +	CVMX_PKI_LTYPE_E_IP4 = CVMX_PKI_LTYPE_E_IP4_M,
> +	CVMX_PKI_LTYPE_E_IP4_OPT = CVMX_PKI_LTYPE_E_IP4_OPT_M,
> +	CVMX_PKI_LTYPE_E_IP6 = CVMX_PKI_LTYPE_E_IP6_M,
> +	CVMX_PKI_LTYPE_E_IP6_OPT = CVMX_PKI_LTYPE_E_IP6_OPT_M,
> +	CVMX_PKI_LTYPE_E_IPSEC_ESP = CVMX_PKI_LTYPE_E_IPSEC_ESP_M,
> +	CVMX_PKI_LTYPE_E_IPFRAG = CVMX_PKI_LTYPE_E_IPFRAG_M,
> +	CVMX_PKI_LTYPE_E_IPCOMP = CVMX_PKI_LTYPE_E_IPCOMP_M,
> +	CVMX_PKI_LTYPE_E_TCP = CVMX_PKI_LTYPE_E_TCP_M,
> +	CVMX_PKI_LTYPE_E_UDP = CVMX_PKI_LTYPE_E_UDP_M,
> +	CVMX_PKI_LTYPE_E_SCTP = CVMX_PKI_LTYPE_E_SCTP_M,
> +	CVMX_PKI_LTYPE_E_UDP_VXLAN = CVMX_PKI_LTYPE_E_UDP_VXLAN_M,
> +	CVMX_PKI_LTYPE_E_GRE = CVMX_PKI_LTYPE_E_GRE_M,
> +	CVMX_PKI_LTYPE_E_NVGRE = CVMX_PKI_LTYPE_E_NVGRE_M,
> +	CVMX_PKI_LTYPE_E_GTP = CVMX_PKI_LTYPE_E_GTP_M,
> +	CVMX_PKI_LTYPE_E_SW28 = CVMX_PKI_LTYPE_E_SW28_M,
> +	CVMX_PKI_LTYPE_E_SW29 = CVMX_PKI_LTYPE_E_SW29_M,
> +	CVMX_PKI_LTYPE_E_SW30 = CVMX_PKI_LTYPE_E_SW30_M,
> +	CVMX_PKI_LTYPE_E_SW31 = CVMX_PKI_LTYPE_E_SW31_M,
> +	CVMX_PKI_LTYPE_E_MAX = CVMX_PKI_LTYPE_E_SW31
> +};
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 ptr_vlan : 8;
> +		u64 ptr_layer_g : 8;
> +		u64 ptr_layer_f : 8;
> +		u64 ptr_layer_e : 8;
> +		u64 ptr_layer_d : 8;
> +		u64 ptr_layer_c : 8;
> +		u64 ptr_layer_b : 8;
> +		u64 ptr_layer_a : 8;
> +	};
> +} cvmx_pki_wqe_word4_t;
> +
> +/**
> + * HW decode / err_code in work queue entry
> + */
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 bufs : 8;
> +		u64 ip_offset : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 varies : 12;
> +		u64 dec_ipcomp : 1;
> +		u64 tcp_or_udp : 1;
> +		u64 dec_ipsec : 1;
> +		u64 is_v6 : 1;
> +		u64 software : 1;
> +		u64 L4_error : 1;
> +		u64 is_frag : 1;
> +		u64 IP_exc : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} s;
> +	struct {
> +		u64 bufs : 8;
> +		u64 ip_offset : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 port : 12;
> +		u64 dec_ipcomp : 1;
> +		u64 tcp_or_udp : 1;
> +		u64 dec_ipsec : 1;
> +		u64 is_v6 : 1;
> +		u64 software : 1;
> +		u64 L4_error : 1;
> +		u64 is_frag : 1;
> +		u64 IP_exc : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} s_cn68xx;
> +	struct {
> +		u64 bufs : 8;
> +		u64 ip_offset : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 pr : 4;
> +		u64 unassigned2a : 4;
> +		u64 unassigned2 : 4;
> +		u64 dec_ipcomp : 1;
> +		u64 tcp_or_udp : 1;
> +		u64 dec_ipsec : 1;
> +		u64 is_v6 : 1;
> +		u64 software : 1;
> +		u64 L4_error : 1;
> +		u64 is_frag : 1;
> +		u64 IP_exc : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} s_cn38xx;
> +	struct {
> +		u64 unused1 : 16;
> +		u64 vlan : 16;
> +		u64 unused2 : 32;
> +	} svlan;
> +	struct {
> +		u64 bufs : 8;
> +		u64 unused : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 varies : 12;
> +		u64 unassigned2 : 4;
> +		u64 software : 1;
> +		u64 unassigned3 : 1;
> +		u64 is_rarp : 1;
> +		u64 is_arp : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} snoip;
> +	struct {
> +		u64 bufs : 8;
> +		u64 unused : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 port : 12;
> +		u64 unassigned2 : 4;
> +		u64 software : 1;
> +		u64 unassigned3 : 1;
> +		u64 is_rarp : 1;
> +		u64 is_arp : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} snoip_cn68xx;
> +	struct {
> +		u64 bufs : 8;
> +		u64 unused : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 unassigned : 1;
> +		u64 vlan_cfi : 1;
> +		u64 vlan_id : 12;
> +		u64 pr : 4;
> +		u64 unassigned2a : 8;
> +		u64 unassigned2 : 4;
> +		u64 software : 1;
> +		u64 unassigned3 : 1;
> +		u64 is_rarp : 1;
> +		u64 is_arp : 1;
> +		u64 is_bcast : 1;
> +		u64 is_mcast : 1;
> +		u64 not_IP : 1;
> +		u64 rcv_error : 1;
> +		u64 err_code : 8;
> +	} snoip_cn38xx;
> +} cvmx_pip_wqe_word2_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 software : 1;
> +		u64 lg_hdr_type : 5;
> +		u64 lf_hdr_type : 5;
> +		u64 le_hdr_type : 5;
> +		u64 ld_hdr_type : 5;
> +		u64 lc_hdr_type : 5;
> +		u64 lb_hdr_type : 5;
> +		u64 is_la_ether : 1;
> +		u64 rsvd_0 : 8;
> +		u64 vlan_valid : 1;
> +		u64 vlan_stacked : 1;
> +		u64 stat_inc : 1;
> +		u64 pcam_flag4 : 1;
> +		u64 pcam_flag3 : 1;
> +		u64 pcam_flag2 : 1;
> +		u64 pcam_flag1 : 1;
> +		u64 is_frag : 1;
> +		u64 is_l3_bcast : 1;
> +		u64 is_l3_mcast : 1;
> +		u64 is_l2_bcast : 1;
> +		u64 is_l2_mcast : 1;
> +		u64 is_raw : 1;
> +		u64 err_level : 3;
> +		u64 err_code : 8;
> +	};
> +} cvmx_pki_wqe_word2_t;
> +
> +typedef union {
> +	u64 u64;
> +	cvmx_pki_wqe_word2_t pki;
> +	cvmx_pip_wqe_word2_t pip;
> +} cvmx_wqe_word2_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u16 hw_chksum;
> +		u8 unused;
> +		u64 next_ptr : 40;
> +	} cn38xx;
> +	struct {
> +		u64 l4ptr : 8;	  /* 56..63 */
> +		u64 unused0 : 8;  /* 48..55 */
> +		u64 l3ptr : 8;	  /* 40..47 */
> +		u64 l2ptr : 8;	  /* 32..39 */
> +		u64 unused1 : 18; /* 14..31 */
> +		u64 bpid : 6;	  /* 8..13 */
> +		u64 unused2 : 2;  /* 6..7 */
> +		u64 pknd : 6;	  /* 0..5 */
> +	} cn68xx;
> +} cvmx_pip_wqe_word0_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 rsvd_0 : 4;
> +		u64 aura : 12;
> +		u64 rsvd_1 : 1;
> +		u64 apad : 3;
> +		u64 channel : 12;
> +		u64 bufs : 8;
> +		u64 style : 8;
> +		u64 rsvd_2 : 10;
> +		u64 pknd : 6;
> +	};
> +} cvmx_pki_wqe_word0_t;
> +
> +/* Use reserved bit, set by HW to 0, to indicate buf_ptr legacy translation */
> +#define pki_wqe_translated word0.rsvd_1
> +
> +typedef union {
> +	u64 u64;
> +	cvmx_pip_wqe_word0_t pip;
> +	cvmx_pki_wqe_word0_t pki;
> +	struct {
> +		u64 unused : 24;
> +		u64 next_ptr : 40; /* On cn68xx this is unused as well */
> +	} raw;
> +} cvmx_wqe_word0_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 len : 16;
> +		u64 rsvd_0 : 2;
> +		u64 rsvd_1 : 2;
> +		u64 grp : 10;
> +		cvmx_pow_tag_type_t tag_type : 2;
> +		u64 tag : 32;
> +	};
> +} cvmx_pki_wqe_word1_t;
> +
> +#define pki_errata20776 word1.rsvd_0
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 len : 16;
> +		u64 varies : 14;
> +		cvmx_pow_tag_type_t tag_type : 2;
> +		u64 tag : 32;
> +	};
> +	cvmx_pki_wqe_word1_t cn78xx;
> +	struct {
> +		u64 len : 16;
> +		u64 zero_0 : 1;
> +		u64 qos : 3;
> +		u64 zero_1 : 1;
> +		u64 grp : 6;
> +		u64 zero_2 : 3;
> +		cvmx_pow_tag_type_t tag_type : 2;
> +		u64 tag : 32;
> +	} cn68xx;
> +	struct {
> +		u64 len : 16;
> +		u64 ipprt : 6;
> +		u64 qos : 3;
> +		u64 grp : 4;
> +		u64 zero_2 : 1;
> +		cvmx_pow_tag_type_t tag_type : 2;
> +		u64 tag : 32;
> +	} cn38xx;
> +} cvmx_wqe_word1_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 rsvd_0 : 8;
> +		u64 hwerr : 8;
> +		u64 rsvd_1 : 24;
> +		u64 sqid : 8;
> +		u64 rsvd_2 : 4;
> +		u64 vfnum : 12;
> +	};
> +} cvmx_wqe_word3_t;
> +
> +typedef union {
> +	u64 u64;
> +	struct {
> +		u64 rsvd_0 : 21;
> +		u64 sqfc : 11;
> +		u64 rsvd_1 : 5;
> +		u64 sqtail : 11;
> +		u64 rsvd_2 : 3;
> +		u64 sqhead : 13;
> +	};
> +} cvmx_wqe_word4_t;
> +
> +/**
> + * Work queue entry format.
> + * Must be 8-byte aligned.
> + */
> +typedef struct cvmx_wqe_s {
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 0                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_wqe_word0_t word0;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 1                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_wqe_word1_t word1;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 2                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64-bits are filled in by hardware when a
> +	 * packet arrives. This indicates a variety of status and error
> +	 * conditions.
> +	 */
> +	cvmx_pip_wqe_word2_t word2;
> +
> +	/* Pointer to the first segment of the packet. */
> +	cvmx_buf_ptr_t packet_ptr;
> +
> +	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
> +	 * up to (at most, but perhaps less) the amount needed to fill the work
> +	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
> +	 * hardware starts (except that the IPv4 header is padded for
> +	 * appropriate alignment) writing here where the IP header starts.
> +	 * If the packet is not recognized to be IP, the hardware starts
> +	 * writing the beginning of the packet here.
> +	 */
> +	u8 packet_data[96];
> +
> +	/* If desired, SW can make the work Q entry any length. For the purposes
> +	 * of discussion here, assume 128B always, as this is all that the
> +	 * hardware deals with.
> +	 */
> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_t;
> +
> +/**
> + * Work queue entry format for NQM
> + * Must be 8-byte aligned
> + */
> +typedef struct cvmx_wqe_nqm_s {
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 0                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_wqe_word0_t word0;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 1                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_wqe_word1_t word1;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 2                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* Reserved */
> +	u64 word2;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 3                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* NVMe specific information.*/
> +	cvmx_wqe_word3_t word3;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 4                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* NVMe specific information.*/
> +	cvmx_wqe_word4_t word4;
> +
> +	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
> +	 * up to (at most, but perhaps less) the amount needed to fill the work
> +	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
> +	 * hardware starts (except that the IPv4 header is padded for
> +	 * appropriate alignment) writing here where the IP header starts.
> +	 * If the packet is not recognized to be IP, the hardware starts
> +	 * writing the beginning of the packet here.
> +	 */
> +	u8 packet_data[88];
> +
> +	/* If desired, SW can make the work Q entry any length.
> +	 * For the purposes of discussion here, assume 128B always, as this is
> +	 * all that the hardware deals with.
> +	 */
> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_nqm_t;
> +
> +/**
> + * Work queue entry format for 78XX.
> + * In 78XX packet data always resides in WQE buffer unless option
> + * DIS_WQ_DAT=1 in PKI_STYLE_BUF, which causes packet data to use a
> + * separate buffer.
> + *
> + * Must be 8-byte aligned.
> + */
> +typedef struct {
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 0                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_pki_wqe_word0_t word0;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 1                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
> +	 * arrives.
> +	 */
> +	cvmx_pki_wqe_word1_t word1;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 2                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64-bits are filled in by hardware when a
> +	 * packet arrives. This indicates a variety of status and error
> +	 * conditions.
> +	 */
> +	cvmx_pki_wqe_word2_t word2;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 3                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* Pointer to the first segment of the packet.*/
> +	cvmx_buf_ptr_pki_t packet_ptr;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORD 4                                                            */
> +	/*-------------------------------------------------------------------*/
> +	/* HW WRITE: the following 64-bits are filled in by hardware when a
> +	 * packet arrives; they contain byte pointers to the start of Layers
> +	 * A/B/C/D/E/F/G, relative to the start of the packet.
> +	 */
> +	cvmx_pki_wqe_word4_t word4;
> +
> +	/*-------------------------------------------------------------------*/
> +	/* WORDs 5/6/7 may be extended there, if WQE_HSZ is set.             */
> +	/*-------------------------------------------------------------------*/
> +	u64 wqe_data[11];
> +
> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_78xx_t;
> +
> +/* Node LS-bit position in the WQE[grp] or PKI_QPG_TBL[grp_ok]. */
> +#define CVMX_WQE_GRP_NODE_SHIFT 8
> +
> +/*
> + * This is an accessor function into the WQE that retrieves the
> + * ingress port number, which can also be used as a destination
> + * port number for the same port.
> + *
> + * @param work - Work Queue Entry pointer
> + * @returns the normalized port number, also known as the "ipd" port
> + */
> +static inline int cvmx_wqe_get_port(cvmx_wqe_t *work)
> +{
> +	int port;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		/* In 78xx the wqe entry has the channel number, not the port */
> +		port = work->word0.pki.channel;
> +		/* For BGX interfaces (0x800 - 0xdff) the 4 LSBs indicate
> +		 * the PFC channel, which must be cleared to normalize to
> +		 * "ipd"
> +		 */
> +		if (port & 0x800)
> +			port &= 0xff0;
> +		/* Node number is in AURA field, make it part of port # */
> +		port |= (work->word0.pki.aura >> 10) << 12;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		port = work->word2.s_cn68xx.port;
> +	} else {
> +		port = work->word1.cn38xx.ipprt;
> +	}
> +
> +	return port;
> +}
> +
> +static inline void cvmx_wqe_set_port(cvmx_wqe_t *work, int port)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word0.pki.channel = port;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		work->word2.s_cn68xx.port = port;
> +	else
> +		work->word1.cn38xx.ipprt = port;
> +}
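> +
> +/* Usage sketch (illustrative only, not part of the original header):
> + * reflect a received packet back out its ingress interface. Assumes
> + * "work" came from the POW/SSO work-request path; the accessors above
> + * hide the per-model WQE layout differences.
> + */
> +static inline void cvmx_wqe_example_reflect_port(cvmx_wqe_t *work)
> +{
> +	/* The normalized "ipd" port is also valid as an egress port */
> +	int ipd_port = cvmx_wqe_get_port(work);
> +
> +	cvmx_wqe_set_port(work, ipd_port);
> +}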
> +
> +static inline int cvmx_wqe_get_grp(cvmx_wqe_t *work)
> +{
> +	int grp;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		/* legacy: GRP[0..2] :=QOS */
> +		grp = (0xff & work->word1.cn78xx.grp) >> 3;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		grp = work->word1.cn68xx.grp;
> +	else
> +		grp = work->word1.cn38xx.grp;
> +
> +	return grp;
> +}
> +
> +static inline void cvmx_wqe_set_xgrp(cvmx_wqe_t *work, int grp)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word1.cn78xx.grp = grp;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		work->word1.cn68xx.grp = grp;
> +	else
> +		work->word1.cn38xx.grp = grp;
> +}
> +
> +static inline int cvmx_wqe_get_xgrp(cvmx_wqe_t *work)
> +{
> +	int grp;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		grp = work->word1.cn78xx.grp;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		grp = work->word1.cn68xx.grp;
> +	else
> +		grp = work->word1.cn38xx.grp;
> +
> +	return grp;
> +}
> +
> +static inline void cvmx_wqe_set_grp(cvmx_wqe_t *work, int grp)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		unsigned int node = cvmx_get_node_num();
> +		/* Legacy: GRP[0..2] :=QOS */
> +		work->word1.cn78xx.grp &= 0x7;
> +		work->word1.cn78xx.grp |= 0xff & (grp << 3);
> +		work->word1.cn78xx.grp |= (node << 8);
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		work->word1.cn68xx.grp = grp;
> +	} else {
> +		work->word1.cn38xx.grp = grp;
> +	}
> +}
> +
> +static inline int cvmx_wqe_get_qos(cvmx_wqe_t *work)
> +{
> +	int qos;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		/* Legacy: GRP[0..2] :=QOS */
> +		qos = work->word1.cn78xx.grp & 0x7;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		qos = work->word1.cn68xx.qos;
> +	} else {
> +		qos = work->word1.cn38xx.qos;
> +	}
> +
> +	return qos;
> +}
> +
> +static inline void cvmx_wqe_set_qos(cvmx_wqe_t *work, int qos)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		/* legacy: GRP[0..2] :=QOS */
> +		work->word1.cn78xx.grp &= ~0x7;
> +		work->word1.cn78xx.grp |= qos & 0x7;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		work->word1.cn68xx.qos = qos;
> +	} else {
> +		work->word1.cn38xx.qos = qos;
> +	}
> +}
> +
> +static inline int cvmx_wqe_get_len(cvmx_wqe_t *work)
> +{
> +	int len;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		len = work->word1.cn78xx.len;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		len = work->word1.cn68xx.len;
> +	else
> +		len = work->word1.cn38xx.len;
> +
> +	return len;
> +}
> +
> +static inline void cvmx_wqe_set_len(cvmx_wqe_t *work, int len)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word1.cn78xx.len = len;
> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
> +		work->word1.cn68xx.len = len;
> +	else
> +		work->word1.cn38xx.len = len;
> +}
> +
> +/**
> + * This function returns whether L1/L2 errors were detected in the
> + * packet.
> + *
> + * @param work	pointer to work queue entry
> + *
> + * @return	0 if the packet had no error, non-zero to indicate the
> + *		error code.
> + *
> + * Please refer to the HRM for the specific model for the full
> + * enumeration of error codes.
> + * With Octeon1/Octeon2 models, the returned code indicates L1/L2
> + * errors.
> + * On CN73XX/CN78XX, the return code is the value of PKI_OPCODE_E,
> + * if it is non-zero, otherwise the returned code will be derived from
> + * PKI_ERRLEV_E such that an error indicated in LayerA will return
> + * 0x20, LayerB - 0x30, LayerC - 0x40 and so forth.
> + */
> +static inline int cvmx_wqe_get_rcv_err(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_RE || wqe->word2.err_code != 0)
> +			return wqe->word2.err_code;
> +		else
> +			return (wqe->word2.err_level << 4) + 0x10;
> +	} else if (work->word2.snoip.rcv_error) {
> +		return work->word2.snoip.err_code;
> +	}
> +
> +	return 0;
> +}
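> +
> +/* Worked example (illustrative): on CN78XX, with PKI_OPCODE_E equal
> + * to zero and an error level of LayerB (CVMX_PKI_ERRLEV_E_LB == 2 is
> + * assumed here), the function above returns (2 << 4) + 0x10 = 0x30,
> + * matching the LayerB mapping described in its comment.
> + */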
> +
> +static inline u32 cvmx_wqe_get_tag(cvmx_wqe_t *work)
> +{
> +	return work->word1.tag;
> +}
> +
> +static inline void cvmx_wqe_set_tag(cvmx_wqe_t *work, u32 tag)
> +{
> +	work->word1.tag = tag;
> +}
> +
> +static inline int cvmx_wqe_get_tt(cvmx_wqe_t *work)
> +{
> +	return work->word1.tag_type;
> +}
> +
> +static inline void cvmx_wqe_set_tt(cvmx_wqe_t *work, int tt)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		work->word1.cn78xx.tag_type = (cvmx_pow_tag_type_t)tt;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		work->word1.cn68xx.tag_type = (cvmx_pow_tag_type_t)tt;
> +		work->word1.cn68xx.zero_2 = 0;
> +	} else {
> +		work->word1.cn38xx.tag_type = (cvmx_pow_tag_type_t)tt;
> +		work->word1.cn38xx.zero_2 = 0;
> +	}
> +}
> +
> +static inline u8 cvmx_wqe_get_unused8(cvmx_wqe_t *work)
> +{
> +	u8 bits;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		bits = wqe->word2.rsvd_0;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		bits = work->word0.pip.cn68xx.unused1;
> +	} else {
> +		bits = work->word0.pip.cn38xx.unused;
> +	}
> +
> +	return bits;
> +}
> +
> +static inline void cvmx_wqe_set_unused8(cvmx_wqe_t *work, u8 v)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.rsvd_0 = v;
> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
> +		work->word0.pip.cn68xx.unused1 = v;
> +	} else {
> +		work->word0.pip.cn38xx.unused = v;
> +	}
> +}
> +
> +static inline u8 cvmx_wqe_get_user_flags(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return work->word0.pki.rsvd_2;
> +	else
> +		return 0;
> +}
> +
> +static inline void cvmx_wqe_set_user_flags(cvmx_wqe_t *work, u8 v)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word0.pki.rsvd_2 = v;
> +}
> +
> +static inline int cvmx_wqe_get_channel(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return (work->word0.pki.channel);
> +	else
> +		return cvmx_wqe_get_port(work);
> +}
> +
> +static inline void cvmx_wqe_set_channel(cvmx_wqe_t *work, int channel)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word0.pki.channel = channel;
> +	else
> +		debug("%s: ERROR: not supported for model\n", __func__);
> +}
> +
> +static inline int cvmx_wqe_get_aura(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return (work->word0.pki.aura);
> +	else
> +		return (work->packet_ptr.s.pool);
> +}
> +
> +static inline void cvmx_wqe_set_aura(cvmx_wqe_t *work, int aura)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word0.pki.aura = aura;
> +	else
> +		work->packet_ptr.s.pool = aura;
> +}
> +
> +static inline int cvmx_wqe_get_style(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		return (work->word0.pki.style);
> +	return 0;
> +}
> +
> +static inline void cvmx_wqe_set_style(cvmx_wqe_t *work, int style)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
> +		work->word0.pki.style = style;
> +}
> +
> +static inline int cvmx_wqe_is_l3_ip(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +		/* Match all 4 values for v4/v6 with/without options */
> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
> +			return 1;
> +		if ((wqe->word2.le_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
> +			return 1;
> +		return 0;
> +	} else {
> +		return !work->word2.s_cn38xx.not_IP;
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_l3_ipv4(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +		/* Match 2 values - with/without options */
> +		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
> +			return 1;
> +		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
> +			return 1;
> +		return 0;
> +	} else {
> +		return (!work->word2.s_cn38xx.not_IP &&
> +			!work->word2.s_cn38xx.is_v6);
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_l3_ipv6(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +		/* Match 2 values - with/without options */
> +		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
> +			return 1;
> +		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
> +			return 1;
> +		return 0;
> +	} else {
> +		return (!work->word2.s_cn38xx.not_IP &&
> +			work->word2.s_cn38xx.is_v6);
> +	}
> +}
> +
> +static inline bool cvmx_wqe_is_l4_udp_or_tcp(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_TCP)
> +			return true;
> +		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_UDP)
> +			return true;
> +		return false;
> +	}
> +
> +	if (work->word2.s_cn38xx.not_IP)
> +		return false;
> +
> +	return (work->word2.s_cn38xx.tcp_or_udp != 0);
> +}
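> +
> +/* Illustrative sketch (not in the original header): a model-independent
> + * receive-path classification built from the accessors above.
> + */
> +static inline bool cvmx_wqe_example_is_ipv4_l4(cvmx_wqe_t *work)
> +{
> +	/* True only for IPv4 packets carrying TCP or UDP */
> +	return cvmx_wqe_is_l3_ipv4(work) &&
> +	       cvmx_wqe_is_l4_udp_or_tcp(work);
> +}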
> +
> +static inline int cvmx_wqe_is_l2_bcast(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.is_l2_bcast;
> +	} else {
> +		return work->word2.s_cn38xx.is_bcast;
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_l2_mcast(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.is_l2_mcast;
> +	} else {
> +		return work->word2.s_cn38xx.is_mcast;
> +	}
> +}
> +
> +static inline void cvmx_wqe_set_l2_bcast(cvmx_wqe_t *work, bool bcast)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.is_l2_bcast = bcast;
> +	} else {
> +		work->word2.s_cn38xx.is_bcast = bcast;
> +	}
> +}
> +
> +static inline void cvmx_wqe_set_l2_mcast(cvmx_wqe_t *work, bool mcast)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.is_l2_mcast = mcast;
> +	} else {
> +		work->word2.s_cn38xx.is_mcast = mcast;
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_l3_bcast(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.is_l3_bcast;
> +	}
> +	debug("%s: ERROR: not supported for model\n", __func__);
> +	return 0;
> +}
> +
> +static inline int cvmx_wqe_is_l3_mcast(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.is_l3_mcast;
> +	}
> +	debug("%s: ERROR: not supported for model\n", __func__);
> +	return 0;
> +}
> +
> +/**
> + * This function returns whether an IP error was detected in the
> + * packet.
> + * For 78XX it does not flag ipv4 options and ipv6 extensions.
> + * For older chips, if PIP_GBL_CTL was provisioned to flag ip4_options
> + * and ipv6 extensions, they will be flagged.
> + * @param work	pointer to work queue entry
> + * @return	1 -- If an IP error was found in the packet
> + *		0 -- If no IP error was found in the packet.
> + */
> +static inline int cvmx_wqe_is_ip_exception(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LC)
> +			return 1;
> +		else
> +			return 0;
> +	}
> +
> +	return work->word2.s.IP_exc;
> +}
> +
> +static inline int cvmx_wqe_is_l4_error(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LF)
> +			return 1;
> +		else
> +			return 0;
> +	} else {
> +		return work->word2.s.L4_error;
> +	}
> +}
> +
> +static inline void cvmx_wqe_set_vlan(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.vlan_valid = set;
> +	} else {
> +		work->word2.s.vlan_valid = set;
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_vlan(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.vlan_valid;
> +	} else {
> +		return work->word2.s.vlan_valid;
> +	}
> +}
> +
> +static inline int cvmx_wqe_is_vlan_stacked(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.vlan_stacked;
> +	} else {
> +		return work->word2.s.vlan_stacked;
> +	}
> +}
> +
> +/**
> + * Extract packet data buffer pointer from work queue entry.
> + *
> + * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
> + * for the linked buffer list.
> + * On CN78XX, the native buffer pointer structure is converted into
> + * the legacy format.
> + * The legacy buf_ptr is then stored in the WQE, and word0 reserved
> + * field is set to indicate that the buffer pointers were translated.
> + * If the packet data is only found inside the work queue entry,
> + * a standard buffer pointer structure is created for it.
> + */
> +cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work);
> +
> +static inline int cvmx_wqe_get_bufs(cvmx_wqe_t *work)
> +{
> +	int bufs;
> +
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		bufs = work->word0.pki.bufs;
> +	} else {
> +		/* Adjust for packet-in-WQE cases */
> +		if (cvmx_unlikely(work->word2.s_cn38xx.bufs == 0 &&
> +				  !work->word2.s.software))
> +			(void)cvmx_wqe_get_packet_ptr(work);
> +		bufs = work->word2.s_cn38xx.bufs;
> +	}
> +	return bufs;
> +}
> +
> +/**
> + * Free Work Queue Entry memory
> + *
> + * Will return the WQE buffer to its pool, unless the WQE contains
> + * non-redundant packet data.
> + * This function is intended to be called AFTER the packet data
> + * has been passed along to PKO for transmission and release.
> + * It can also follow a call to cvmx_helper_free_packet_data()
> + * to release the WQE after associated data was released.
> + */
> +void cvmx_wqe_free(cvmx_wqe_t *work);
> +
> +/**
> + * Check if a work entry has been initiated by software
> + *
> + */
> +static inline bool cvmx_wqe_is_soft(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return wqe->word2.software;
> +	} else {
> +		return work->word2.s.software;
> +	}
> +}
> +
> +/**
> + * Allocate a work-queue entry for delivering software-initiated
> + * event notifications.
> + * The application data is copied into the work-queue entry,
> + * if the space is sufficient.
> + */
> +cvmx_wqe_t *cvmx_wqe_soft_create(void *data_p, unsigned int data_sz);
> +
> +/* Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped.
> + * CN78XX pass 1.x has a bug where the packet pointer in each segment
> + * is written in the opposite endianness of the configured mode. Fix
> + * these here.
> + */
> +static inline void cvmx_wqe_pki_errata_20776(cvmx_wqe_t *work)
> +{
> +	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && !wqe->pki_errata20776) {
> +		u64 bufs;
> +		cvmx_buf_ptr_pki_t buffer_next;
> +
> +		bufs = wqe->word0.bufs;
> +		buffer_next = wqe->packet_ptr;
> +		while (bufs > 1) {
> +			cvmx_buf_ptr_pki_t next;
> +			void *nextaddr = cvmx_phys_to_ptr(buffer_next.addr - 8);
> +
> +			memcpy(&next, nextaddr, sizeof(next));
> +			next.u64 = __builtin_bswap64(next.u64);
> +			memcpy(nextaddr, &next, sizeof(next));
> +			buffer_next = next;
> +			bufs--;
> +		}
> +		wqe->pki_errata20776 = 1;
> +	}
> +}
> +
> +/**
> + * @INTERNAL
> + *
> + * Extract the native PKI-specific buffer pointer from WQE.
> + *
> + * NOTE: Provisional, may be superseded.
> + */
> +static inline cvmx_buf_ptr_pki_t cvmx_wqe_get_pki_pkt_ptr(cvmx_wqe_t *work)
> +{
> +	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_buf_ptr_pki_t x = { 0 };
> +		return x;
> +	}
> +
> +	cvmx_wqe_pki_errata_20776(work);
> +	return wqe->packet_ptr;
> +}
> +
> +/**
> + * Set the buffer segment count for a packet.
> + *
> + * @return Returns the actual resulting value in the WQE field
> + *
> + */
> +static inline unsigned int cvmx_wqe_set_bufs(cvmx_wqe_t *work, unsigned int bufs)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		work->word0.pki.bufs = bufs;
> +		return work->word0.pki.bufs;
> +	}
> +
> +	work->word2.s.bufs = bufs;
> +	return work->word2.s.bufs;
> +}
> +
> +/**
> + * Get the offset of Layer-3 header,
> + * only supported when Layer-3 protocol is IPv4 or IPv6.
> + *
> + * @return Returns the offset, or 0 if the offset is not known or unsupported.
> + *
> + * FIXME: Assuming word4 is present.
> + */
> +static inline unsigned int cvmx_wqe_get_l3_offset(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +		/* Match 4 values: IPv4/v6 w/wo options */
> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
> +			return wqe->word4.ptr_layer_c;
> +	} else {
> +		return work->word2.s.ip_offset;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * Set the offset of Layer-3 header in a packet.
> + * Typically used when an IP packet is generated by software
> + * or when the Layer-2 header length is modified, and
> + * a subsequent recalculation of checksums is anticipated.
> + *
> + * @return Returns the actual value of the work entry offset field.
> + *
> + * FIXME: Assuming word4 is present.
> + */
> +static inline unsigned int cvmx_wqe_set_l3_offset(cvmx_wqe_t *work, unsigned int ip_off)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +		/* Match 4 values: IPv4/v6 w/wo options */
> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
> +			wqe->word4.ptr_layer_c = ip_off;
> +	} else {
> +		work->word2.s.ip_offset = ip_off;
> +	}
> +
> +	return cvmx_wqe_get_l3_offset(work);
> +}
> +
> +/**
> + * Set the indication that the packet contains an IPv4 Layer-3 header.
> + * Use 'cvmx_wqe_set_l3_ipv6()' if the protocol is IPv6.
> + * When 'set' is false, the call will result in an indication
> + * that the Layer-3 protocol is neither IPv4 nor IPv6.
> + *
> + * FIXME: Add IPV4_OPT handling based on L3 header length.
> + */
> +static inline void cvmx_wqe_set_l3_ipv4(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP4;
> +		else
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		work->word2.s.not_IP = !set;
> +		if (set)
> +			work->word2.s_cn38xx.is_v6 = 0;
> +	}
> +}
> +
> +/**
> + * Set packet Layer-3 protocol to IPv6.
> + *
> + * FIXME: Add IPV6_OPT handling based on presence of extended headers.
> + */
> +static inline void cvmx_wqe_set_l3_ipv6(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP6;
> +		else
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		work->word2.s_cn38xx.not_IP = !set;
> +		if (set)
> +			work->word2.s_cn38xx.is_v6 = 1;
> +	}
> +}
> +
> +/**
> + * Set a packet Layer-4 protocol type to UDP.
> + */
> +static inline void cvmx_wqe_set_l4_udp(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_UDP;
> +		else
> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		if (!work->word2.s_cn38xx.not_IP)
> +			work->word2.s_cn38xx.tcp_or_udp = set;
> +	}
> +}
> +
> +/**
> + * Set a packet Layer-4 protocol type to TCP.
> + */
> +static inline void cvmx_wqe_set_l4_tcp(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_TCP;
> +		else
> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		if (!work->word2.s_cn38xx.not_IP)
> +			work->word2.s_cn38xx.tcp_or_udp = set;
> +	}
> +}
> +
> +/**
> + * Set the "software" flag in a work entry.
> + */
> +static inline void cvmx_wqe_set_soft(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.software = set;
> +	} else {
> +		work->word2.s.software = set;
> +	}
> +}
> +
> +/**
> + * Return true if the packet is an IP fragment.
> + */
> +static inline bool cvmx_wqe_is_l3_frag(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return (wqe->word2.is_frag != 0);
> +	}
> +
> +	if (!work->word2.s_cn38xx.not_IP)
> +		return (work->word2.s.is_frag != 0);
> +
> +	return false;
> +}
> +
> +/**
> + * Set the indicator that the packet is a fragmented IP packet.
> + */
> +static inline void cvmx_wqe_set_l3_frag(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		wqe->word2.is_frag = set;
> +	} else {
> +		if (!work->word2.s_cn38xx.not_IP)
> +			work->word2.s.is_frag = set;
> +	}
> +}
> +
> +/**
> + * Set the packet Layer-3 protocol to RARP.
> + */
> +static inline void cvmx_wqe_set_l3_rarp(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_RARP;
> +		else
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		work->word2.snoip.is_rarp = set;
> +	}
> +}
> +
> +/**
> + * Set the packet Layer-3 protocol to ARP.
> + */
> +static inline void cvmx_wqe_set_l3_arp(cvmx_wqe_t *work, bool set)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		if (set)
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_ARP;
> +		else
> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
> +	} else {
> +		work->word2.snoip.is_arp = set;
> +	}
> +}
> +
> +/**
> + * Return true if the packet Layer-3 protocol is ARP.
> + */
> +static inline bool cvmx_wqe_is_l3_arp(cvmx_wqe_t *work)
> +{
> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
> +
> +		return (wqe->word2.lc_hdr_type == CVMX_PKI_LTYPE_E_ARP);
> +	}
> +
> +	if (work->word2.s_cn38xx.not_IP)
> +		return (work->word2.snoip.is_arp != 0);
> +
> +	return false;
> +}
> +
> +#endif /* __CVMX_WQE_H__ */
> diff --git a/arch/mips/mach-octeon/include/mach/octeon_qlm.h b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
> new file mode 100644
> index 000000000000..219625b25688
> --- /dev/null
> +++ b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
> @@ -0,0 +1,109 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 Marvell International Ltd.
> + */
> +
> +#ifndef __OCTEON_QLM_H__
> +#define __OCTEON_QLM_H__
> +
> +/* Reference clock selector values for ref_clk_sel */
> +#define OCTEON_QLM_REF_CLK_100MHZ 0 /** 100 MHz */
> +#define OCTEON_QLM_REF_CLK_125MHZ 1 /** 125 MHz */
> +#define OCTEON_QLM_REF_CLK_156MHZ 2 /** 156.25 MHz */
> +#define OCTEON_QLM_REF_CLK_161MHZ 3 /** 161.1328125 MHz */
> +
> +/**
> + * Configure qlm/dlm speed and mode.
> + * @param qlm     The QLM or DLM to configure
> + * @param speed   The speed the QLM needs to be configured at, in MHz.
> + * @param mode    The QLM to be configured as SGMII/XAUI/PCIe.
> + * @param rc      Only used for PCIe, rc = 1 for root complex mode,
> + *		  0 for EP mode.
> + * @param pcie_mode Only used when qlm/dlm are in pcie mode.
> + * @param ref_clk_sel Reference clock to use for 70XX where:
> + *			0: 100MHz
> + *			1: 125MHz
> + *			2: 156.25MHz
> + *			3: 161.1328125MHz (CN73XX and CN78XX only)
> + * @param ref_clk_input	This selects which reference clock input
> + *			to use. For
> + *			cn70xx:
> + *				0: DLMC_REF_CLK0
> + *				1: DLMC_REF_CLK1
> + *				2: DLM0_REF_CLK
> + *			cn61xx: (not used)
> + *			cn78xx/cn76xx/cn73xx:
> + *				0: Internal clock (QLM[0-7]_REF_CLK)
> + *				1: QLMC_REF_CLK0
> + *				2: QLMC_REF_CLK1
> + *
> + * @return	Return 0 on success or -1.
> + *
> + * @note	When the 161MHz clock is used it can only be used for
> + *		XLAUI mode with a 6316 speed or XFI mode with a 103125 speed.
> + *		This rate is also only supported for CN73XX and CN78XX.
> + */
> +int octeon_configure_qlm(int qlm, int speed, int mode, int rc, int pcie_mode,
> +			 int ref_clk_sel, int ref_clk_input);
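> +
> +/* Illustrative sketch (not from the original sources): bring up QLM 2
> + * as a PCIe root complex at 5000 MHz (Gen2). CVMX_QLM_MODE_PCIE is
> + * assumed to come from cvmx-qlm.h; the pcie_mode argument is only
> + * meaningful for PCIe and its value here is an assumption.
> + */
> +static inline int octeon_qlm_example_pcie_rc(void)
> +{
> +	return octeon_configure_qlm(2, 5000, CVMX_QLM_MODE_PCIE, 1, 0,
> +				    OCTEON_QLM_REF_CLK_100MHZ, 0);
> +}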
> +
> +int octeon_configure_qlm_cn78xx(int node, int qlm, int speed, int mode,
> +				int rc, int pcie_mode, int ref_clk_sel,
> +				int ref_clk_input);
> +
> +/**
> + * Some QLM speeds need to override the default tuning parameters
> + *
> + * @param node     Node to configure
> + * @param qlm      QLM to configure
> + * @param baud_mhz Desired speed in MHz
> + * @param lane     Lane to apply the tuning parameters to
> + * @param tx_swing Voltage swing. The higher the value the lower the
> + *		   voltage; the default value is 7.
> + * @param tx_pre   pre-cursor pre-emphasis
> + * @param tx_post  post-cursor pre-emphasis.
> + * @param tx_gain   Transmit gain. Range 0-7
> + * @param tx_vboost Transmit voltage boost. Range 0-1
> + */
> +void octeon_qlm_tune_per_lane_v3(int node, int qlm, int baud_mhz, int lane,
> +				 int tx_swing, int tx_pre, int tx_post,
> +				 int tx_gain, int tx_vboost);
> +
> +/**
> + * Some QLM speeds need to override the default tuning parameters
> + *
> + * @param node     Node to configure
> + * @param qlm      QLM to configure
> + * @param baud_mhz Desired speed in MHz
> + * @param tx_swing Voltage swing. The higher the value the lower the
> + *		   voltage; the default value is 7.
> + * @param tx_premptap bits [0:3] pre-cursor pre-emphasis, bits [4:8]
> + *		      post-cursor pre-emphasis.
> + * @param tx_gain   Transmit gain. Range 0-7
> + * @param tx_vboost Transmit voltage boost. Range 0-1
> + */
> +void octeon_qlm_tune_v3(int node, int qlm, int baud_mhz, int tx_swing,
> +			int tx_premptap, int tx_gain, int tx_vboost);
> +
> +/**
> + * Disables DFE for the specified QLM lane(s).
> + * This function should only be called for low-loss channels.
> + *
> + * @param node     Node to configure
> + * @param qlm      QLM to configure
> + * @param lane     Lane to configure, or -1 all lanes
> + * @param baud_mhz The speed the QLM needs to be configured at, in MHz.
> + * @param mode     The QLM to be configured as SGMII/XAUI/PCIe.
> + */
> +void octeon_qlm_dfe_disable(int node, int qlm, int lane, int baud_mhz,
> +			    int mode);
> +
> +/**
> + * Some QLMs need to override the default pre-ctle for low loss channels.
> + *
> + * @param node     Node to configure
> + * @param qlm      QLM to configure
> + * @param pre_ctle pre-ctle settings for low loss channels
> + */
> +void octeon_qlm_set_channel_v3(int node, int qlm, int pre_ctle);
> +
> +void octeon_init_qlm(int node);
> +
> +int octeon_mcu_probe(int node);
> +
> +#endif /* __OCTEON_QLM_H__ */
-- 
- Daniel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH v3 33/50] mips: octeon: Add misc remaining header files
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (50 preceding siblings ...)
  2021-04-23  3:56 ` [PATCH v2 33/50] mips: octeon: Add misc remaining header files Stefan Roese
@ 2021-04-23 17:56 ` Stefan Roese
  2021-04-24 22:49 ` [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Daniel Schwierzeck
  52 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2021-04-23 17:56 UTC (permalink / raw)
  To: u-boot

From: Aaron Williams <awilliams@marvell.com>

Import misc remaining header files from 2013 U-Boot. These will be used
by the later added drivers to support PCIe and networking on the MIPS
Octeon II / III platforms.

Signed-off-by: Aaron Williams <awilliams@marvell.com>
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Aaron Williams <awilliams@marvell.com>
Cc: Chandrakala Chavva <cchavva@marvell.com>
Cc: Daniel Schwierzeck <daniel.schwierzeck@gmail.com>
---
v3:
- Add missing mach/octeon_eth.h and mach/octeon_pci.h files
  (forgot to commit it in v1 and v2)

v2:
- Add missing mach/octeon_qlm.h file (forgot to commit it in v1)


 .../mach-octeon/include/mach/cvmx-address.h   |  209 ++
 .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +++
 .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
 arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
 .../mach-octeon/include/mach/cvmx-error.h     |  456 +++
 arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 ++
 .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 ++++
 .../include/mach/cvmx-global-resources.h      |  213 ++
 arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
 .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++++
 .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 ++++
 arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
 arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 ++
 .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
 .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 ++
 arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++++++
 .../include/mach/cvmx-pki-resources.h         |  157 +
 arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++++++
 .../mach/cvmx-pko-internal-ports-range.h      |   43 +
 .../include/mach/cvmx-pko3-queue.h            |  175 +
 arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 +++++++++++++++++
 arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 ++
 .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
 arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 ++++++++
 .../mach-octeon/include/mach/octeon_eth.h     |  141 +
 .../mach-octeon/include/mach/octeon_fdt.h     |  268 ++
 .../mach-octeon/include/mach/octeon_pci.h     |   68 +
 .../mach-octeon/include/mach/octeon_qlm.h     |  109 +
 29 files changed, 12242 insertions(+)
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
 create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h
 create mode 100644 arch/mips/mach-octeon/include/mach/octeon_eth.h
 create mode 100644 arch/mips/mach-octeon/include/mach/octeon_fdt.h
 create mode 100644 arch/mips/mach-octeon/include/mach/octeon_pci.h
 create mode 100644 arch/mips/mach-octeon/include/mach/octeon_qlm.h

diff --git a/arch/mips/mach-octeon/include/mach/cvmx-address.h b/arch/mips/mach-octeon/include/mach/cvmx-address.h
new file mode 100644
index 000000000000..984f574a75bb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-address.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Typedefs and defines for working with Octeon physical addresses.
+ */
+
+#ifndef __CVMX_ADDRESS_H__
+#define __CVMX_ADDRESS_H__
+
+typedef enum {
+	CVMX_MIPS_SPACE_XKSEG = 3LL,
+	CVMX_MIPS_SPACE_XKPHYS = 2LL,
+	CVMX_MIPS_SPACE_XSSEG = 1LL,
+	CVMX_MIPS_SPACE_XUSEG = 0LL
+} cvmx_mips_space_t;
+
+typedef enum {
+	CVMX_MIPS_XKSEG_SPACE_KSEG0 = 0LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG1 = 1LL,
+	CVMX_MIPS_XKSEG_SPACE_SSEG = 2LL,
+	CVMX_MIPS_XKSEG_SPACE_KSEG3 = 3LL
+} cvmx_mips_xkseg_space_t;
+
+/* decodes <14:13> of a kseg3 window address */
+typedef enum {
+	CVMX_ADD_WIN_SCR = 0L,
+	CVMX_ADD_WIN_DMA = 1L,
+	CVMX_ADD_WIN_UNUSED = 2L,
+	CVMX_ADD_WIN_UNUSED2 = 3L
+} cvmx_add_win_dec_t;
+
+/* decode within DMA space */
+typedef enum {
+	CVMX_ADD_WIN_DMA_ADD = 0L,
+	CVMX_ADD_WIN_DMA_SENDMEM = 1L,
+	/* store data must be normal DRAM memory space address in this case */
+	CVMX_ADD_WIN_DMA_SENDDMA = 2L,
+	/* see CVMX_ADD_WIN_DMA_SEND_DEC for data contents */
+	CVMX_ADD_WIN_DMA_SENDIO = 3L,
+	/* store data must be normal IO space address in this case */
+	CVMX_ADD_WIN_DMA_SENDSINGLE = 4L,
+	/* no write buffer data needed/used */
+} cvmx_add_win_dma_dec_t;
+
+/**
+ *   Physical Address Decode
+ *
+ * Octeon-I HW never interprets this X (<39:36> reserved
+ * for future expansion), software should set to 0.
+ *
+ *  - 0x0 XXX0 0000 0000 to      DRAM         Cached
+ *  - 0x0 XXX0 0FFF FFFF
+ *
+ *  - 0x0 XXX0 1000 0000 to      Boot Bus     Uncached  (Converted to 0x1 00X0 1000 0000
+ *  - 0x0 XXX0 1FFF FFFF         + EJTAG                           to 0x1 00X0 1FFF FFFF)
+ *
+ *  - 0x0 XXX0 2000 0000 to      DRAM         Cached
+ *  - 0x0 XXXF FFFF FFFF
+ *
+ *  - 0x1 00X0 0000 0000 to      Boot Bus     Uncached
+ *  - 0x1 00XF FFFF FFFF
+ *
+ *  - 0x1 01X0 0000 0000 to      Other NCB    Uncached
+ *  - 0x1 FFXF FFFF FFFF         devices
+ *
+ * Decode of all Octeon addresses
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 offset : 62;
+	} sva;
+
+	struct {
+		u64 zeroes : 33;
+		u64 offset : 31;
+	} suseg;
+
+	struct {
+		u64 ones : 33;
+		cvmx_mips_xkseg_space_t sp : 2;
+		u64 offset : 29;
+	} sxkseg;
+
+	struct {
+		cvmx_mips_space_t R : 2;
+		u64 cca : 3;
+		u64 mbz : 10;
+		u64 pa : 49;
+	} sxkphys;
+
+	struct {
+		u64 mbz : 15;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sphys;
+
+	struct {
+		u64 zeroes : 24;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} smem;
+
+	struct {
+		u64 mem_region : 2;
+		u64 mbz : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 unaddr : 4;
+		u64 offset : 36;
+	} sio;
+
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 addr : 13;
+	} sscr;
+
+	/* there should only be stores to IOBDMA space, no loads */
+	struct {
+		u64 ones : 49;
+		cvmx_add_win_dec_t csrdec : 2;
+		u64 unused2 : 3;
+		cvmx_add_win_dma_dec_t type : 3;
+		u64 addr : 7;
+	} sdma;
+
+	struct {
+		u64 didspace : 24;
+		u64 unused : 40;
+	} sfilldidspace;
+} cvmx_addr_t;
+
+/* These macros are for use by 32-bit applications */
+
+#define CVMX_MIPS32_SPACE_KSEG0	     1l
+#define CVMX_ADD_SEG32(segment, add) (((s32)segment << 31) | (s32)(add))
+
+/*
+ * Currently all IOs are performed using XKPHYS addressing. Linux uses the
+ * CvmMemCtl register to enable XKPHYS addressing to IO space from user mode.
+ * Future OSes may need to change the upper bits of IO addresses. The
+ * following define controls the upper two bits for all IO addresses generated
+ * by the simple executive library
+ */
+#define CVMX_IO_SEG CVMX_MIPS_SPACE_XKPHYS
+
+/* These macros simplify the process of creating common IO addresses */
+#define CVMX_ADD_SEG(segment, add) ((((u64)segment) << 62) | (add))
+
+#define CVMX_ADD_IO_SEG(add) (add)
+
+#define CVMX_ADDR_DIDSPACE(did)	   (((CVMX_IO_SEG) << 22) | ((1ULL) << 8) | (did))
+#define CVMX_ADDR_DID(did)	   (CVMX_ADDR_DIDSPACE(did) << 40)
+#define CVMX_FULL_DID(did, subdid) (((did) << 3) | (subdid))
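+
+/* Illustrative example (not in the original header): the XKPHYS form
+ * of a physical address follows directly from the decode above, the
+ * two MSBs selecting CVMX_MIPS_SPACE_XKPHYS. The CSR physical address
+ * used here is only a placeholder.
+ */
+#define CVMX_EXAMPLE_CSR_PADDR	0x0001070000000800ull
+#define CVMX_EXAMPLE_CSR_VADDR \
+	CVMX_ADD_SEG(CVMX_IO_SEG, CVMX_EXAMPLE_CSR_PADDR)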
+
+/* from include/ncb_rsl_id.v */
+#define CVMX_OCT_DID_MIS  0ULL /* misc stuff */
+#define CVMX_OCT_DID_GMX0 1ULL
+#define CVMX_OCT_DID_GMX1 2ULL
+#define CVMX_OCT_DID_PCI  3ULL
+#define CVMX_OCT_DID_KEY  4ULL
+#define CVMX_OCT_DID_FPA  5ULL
+#define CVMX_OCT_DID_DFA  6ULL
+#define CVMX_OCT_DID_ZIP  7ULL
+#define CVMX_OCT_DID_RNG  8ULL
+#define CVMX_OCT_DID_IPD  9ULL
+#define CVMX_OCT_DID_PKT  10ULL
+#define CVMX_OCT_DID_TIM  11ULL
+#define CVMX_OCT_DID_TAG  12ULL
+/* the rest are not on the IO bus */
+#define CVMX_OCT_DID_L2C  16ULL
+#define CVMX_OCT_DID_LMC  17ULL
+#define CVMX_OCT_DID_SPX0 18ULL
+#define CVMX_OCT_DID_SPX1 19ULL
+#define CVMX_OCT_DID_PIP  20ULL
+#define CVMX_OCT_DID_ASX0 22ULL
+#define CVMX_OCT_DID_ASX1 23ULL
+#define CVMX_OCT_DID_IOB  30ULL
+
+#define CVMX_OCT_DID_PKT_SEND	 CVMX_FULL_DID(CVMX_OCT_DID_PKT, 2ULL)
+#define CVMX_OCT_DID_TAG_SWTAG	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 0ULL)
+#define CVMX_OCT_DID_TAG_TAG1	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 1ULL)
+#define CVMX_OCT_DID_TAG_TAG2	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 2ULL)
+#define CVMX_OCT_DID_TAG_TAG3	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 3ULL)
+#define CVMX_OCT_DID_TAG_NULL_RD CVMX_FULL_DID(CVMX_OCT_DID_TAG, 4ULL)
+#define CVMX_OCT_DID_TAG_TAG5	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 5ULL)
+#define CVMX_OCT_DID_TAG_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL)
+#define CVMX_OCT_DID_FAU_FAI	 CVMX_FULL_DID(CVMX_OCT_DID_IOB, 0ULL)
+#define CVMX_OCT_DID_TIM_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TIM, 0ULL)
+#define CVMX_OCT_DID_KEY_RW	 CVMX_FULL_DID(CVMX_OCT_DID_KEY, 0ULL)
+#define CVMX_OCT_DID_PCI_6	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 6ULL)
+#define CVMX_OCT_DID_MIS_BOO	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 0ULL)
+#define CVMX_OCT_DID_PCI_RML	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 0ULL)
+#define CVMX_OCT_DID_IPD_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_IPD, 7ULL)
+#define CVMX_OCT_DID_DFA_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_DFA, 7ULL)
+#define CVMX_OCT_DID_MIS_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 7ULL)
+#define CVMX_OCT_DID_ZIP_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_ZIP, 0ULL)
+
+/* Cast to unsigned long long, mainly for use in printfs. */
+#define CAST_ULL(v) ((unsigned long long)(v))
+
+#define UNMAPPED_PTR(x) ((1ULL << 63) | (x))
+
+#endif /* __CVMX_ADDRESS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
new file mode 100644
index 000000000000..ddc294348cb4
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
@@ -0,0 +1,441 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Support functions for managing command queues used for
+ * various hardware blocks.
+ *
+ * The common command queue infrastructure abstracts out the
+ * software necessary for adding to Octeon's chained queue
+ * structures. These structures are used for commands to the
+ * PKO, ZIP, DFA, RAID, HNA, and DMA engine blocks. Although each
+ * hardware unit takes commands and CSRs of different types,
+ * they all use basic linked command buffers to store the
+ * pending request. In general, users of the CVMX API don't
+ * call cvmx-cmd-queue functions directly. Instead the hardware
+ * unit specific wrapper should be used. The wrappers perform
+ * unit specific validation and CSR writes to submit the
+ * commands.
+ *
+ * Even though most software will never directly interact with
+ * cvmx-cmd-queue, knowledge of its internal workings can help
+ * in diagnosing performance problems and help with debugging.
+ *
+ * Command queue pointers are stored in a global named block
+ * called "cvmx_cmd_queues". Except for the PKO queues, each
+ * hardware queue is stored in its own cache line to reduce SMP
+ * contention on spin locks. The PKO queues are stored such that
+ * every 16th queue is next to each other in memory. This scheme
+ * allows for queues being in separate cache lines when there
+ * is a low number of queues per port. With 16 queues per port,
+ * the first queue for each port is in the same cache area. The
+ * second queues for each port are in another area, etc. This
+ * allows software to implement very efficient lockless PKO with
+ * 16 queues per port using a minimum of cache lines per core.
+ * All queues for a given core will be isolated in the same
+ * cache area.
+ *
+ * In addition to the memory pointer layout, cvmx-cmd-queue
+ * provides an optimized fair ll/sc locking mechanism for the
+ * queues. The lock uses a "ticket / now serving" model to
+ * maintain fair order on contended locks. In addition, it uses
+ * predicted locking time to limit cache contention. When a core
+ * knows it must wait in line for a lock, it spins on the
+ * internal cycle counter to completely eliminate any causes of
+ * bus traffic.
+ */
+
+#ifndef __CVMX_CMD_QUEUE_H__
+#define __CVMX_CMD_QUEUE_H__
+
+/**
+ * By default we disable the max depth support. Most programs
+ * don't use it and it slows down the command queue processing
+ * significantly.
+ */
+#ifndef CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH
+#define CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH 0
+#endif
+
+/**
+ * Enumeration representing all hardware blocks that use command
+ * queues. Each hardware block has up to 65536 sub identifiers for
+ * multiple command queues. Not all chips support all hardware
+ * units.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_PKO_BASE = 0x00000,
+#define CVMX_CMD_QUEUE_PKO(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_PKO_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_ZIP = 0x10000,
+#define CVMX_CMD_QUEUE_ZIP_QUE(queue)                                                              \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_ZIP + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_DFA = 0x20000,
+	CVMX_CMD_QUEUE_RAID = 0x30000,
+	CVMX_CMD_QUEUE_DMA_BASE = 0x40000,
+#define CVMX_CMD_QUEUE_DMA(queue)                                                                  \
+	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_DMA_BASE + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_BCH = 0x50000,
+#define CVMX_CMD_QUEUE_BCH(queue) ((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_BCH + (0xffff & (queue))))
+	CVMX_CMD_QUEUE_HNA = 0x60000,
+	CVMX_CMD_QUEUE_END = 0x70000,
+} cvmx_cmd_queue_id_t;
+
+#define CVMX_CMD_QUEUE_ZIP3_QUE(node, queue)                                                       \
+	((cvmx_cmd_queue_id_t)((node) << 24 | CVMX_CMD_QUEUE_ZIP | (0xffff & (queue))))
+
+/**
+ * Command write operations can fail if the command queue needs
+ * a new buffer and the associated FPA pool is empty. It can also
+ * fail if the number of queued command words reaches the maximum
+ * set at initialization.
+ */
+typedef enum {
+	CVMX_CMD_QUEUE_SUCCESS = 0,
+	CVMX_CMD_QUEUE_NO_MEMORY = -1,
+	CVMX_CMD_QUEUE_FULL = -2,
+	CVMX_CMD_QUEUE_INVALID_PARAM = -3,
+	CVMX_CMD_QUEUE_ALREADY_SETUP = -4,
+} cvmx_cmd_queue_result_t;
+
+typedef struct {
+	/* First 64-bit word: */
+	u64 fpa_pool : 16;
+	u64 base_paddr : 48;
+	s32 index;
+	u16 max_depth;
+	u16 pool_size_m1;
+} __cvmx_cmd_queue_state_t;
+
+/**
+ * command-queue locking uses a fair ticket spinlock algo,
+ * with 64-bit tickets for endianness-neutrality and
+ * counter overflow protection.
+ * Lock is free when both counters are of equal value.
+ */
+typedef struct {
+	u64 ticket;
+	u64 now_serving;
+} __cvmx_cmd_queue_lock_t;
+
+/**
+ * @INTERNAL
+ * This structure contains the global state of all command queues.
+ * It is stored in a bootmem named block and shared by all
+ * applications running on Octeon. Tickets are stored in a different
+ * cache line than the queue information to reduce the contention on the
+ * ll/sc used to get a ticket. If this is not the case, the update
+ * of queue state causes the ll/sc to fail quite often.
+ */
+typedef struct {
+	__cvmx_cmd_queue_lock_t lock[(CVMX_CMD_QUEUE_END >> 16) * 256];
+	__cvmx_cmd_queue_state_t state[(CVMX_CMD_QUEUE_END >> 16) * 256];
+} __cvmx_cmd_queue_all_state_t;
+
+extern __cvmx_cmd_queue_all_state_t *__cvmx_cmd_queue_state_ptrs[CVMX_MAX_NODES];
+
+/**
+ * @INTERNAL
+ * Internal function to handle the corner cases
+ * of adding command words to a queue when the current
+ * block is getting full.
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_write_raw(cvmx_cmd_queue_id_t queue_id,
+						   __cvmx_cmd_queue_state_t *qptr, int cmd_count,
+						   const u64 *cmds);
+
+/**
+ * Initialize a command queue for use. The initial FPA buffer is
+ * allocated and the hardware unit is configured to point to the
+ * new command queue.
+ *
+ * @param queue_id  Hardware command queue to initialize.
+ * @param max_depth Maximum outstanding commands that can be queued.
+ * @param fpa_pool  FPA pool the command queues should come from.
+ * @param pool_size Size of each buffer in the FPA pool (bytes)
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_initialize(cvmx_cmd_queue_id_t queue_id, int max_depth,
+						  int fpa_pool, int pool_size);
+
+/**
+ * Shut down a queue and free its command buffers to the FPA. The
+ * hardware connected to the queue must be stopped before this
+ * function is called.
+ *
+ * @param queue_id Queue to shutdown
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+cvmx_cmd_queue_result_t cvmx_cmd_queue_shutdown(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the number of command words pending in the queue. This
+ * function may be relatively slow for some hardware units.
+ *
+ * @param queue_id Hardware command queue to query
+ *
+ * @return Number of outstanding commands
+ */
+int cvmx_cmd_queue_length(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * Return the command buffer to be written to. The purpose of this
+ * function is to allow CVMX routine access to the low level buffer
+ * for initial hardware setup. User applications should not call this
+ * function directly.
+ *
+ * @param queue_id Command queue to query
+ *
+ * @return Command buffer or NULL on failure
+ */
+void *cvmx_cmd_queue_buffer(cvmx_cmd_queue_id_t queue_id);
+
+/**
+ * @INTERNAL
+ * Retrieve or allocate command queue state named block
+ */
+cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(unsigned int node);
+
+/**
+ * @INTERNAL
+ * Get the index into the state arrays for the supplied queue id.
+ *
+ * @param queue_id Queue ID to get an index for
+ *
+ * @return Index into the state arrays
+ */
+static inline unsigned int __cvmx_cmd_queue_get_index(cvmx_cmd_queue_id_t queue_id)
+{
+	/* Warning: This code currently only works with devices that have 256
+	 * queues or less.  Devices with more than 16 queues are laid out in
+	 * memory to allow cores quick access to every 16th queue. This reduces
+	 * cache thrashing when you are running 16 queues per port to support
+	 * lockless operation
+	 */
+	unsigned int unit = (queue_id >> 16) & 0xff;
+	unsigned int q = (queue_id >> 4) & 0xf;
+	unsigned int core = queue_id & 0xf;
+
+	return (unit << 8) | (core << 4) | q;
+}
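+
+/* Worked example (illustrative): queue IDs 0x25 and 0x26 decode to
+ * unit=0, q=2, core=5 and unit=0, q=2, core=6, i.e. state indexes
+ * 0x52 and 0x62. Consecutive queues are thus 16 entries apart, while
+ * queues 16 apart (e.g. 0x25 and 0x35) become neighbors, matching the
+ * "every 16th queue" layout described above.
+ */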
+
+static inline int __cvmx_cmd_queue_get_node(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int node = queue_id >> 24;
+	return node;
+}
+
+/**
+ * @INTERNAL
+ * Lock the supplied queue so nobody else is updating it at the same
+ * time as us.
+ *
+ * @param queue_id Queue ID to lock
+ *
+ */
+static inline void __cvmx_cmd_queue_lock(cvmx_cmd_queue_id_t queue_id)
+{
+}
+
+/**
+ * @INTERNAL
+ * Unlock the queue, flushing all writes.
+ *
+ * @param queue_id Queue ID to lock
+ *
+ */
+static inline void __cvmx_cmd_queue_unlock(cvmx_cmd_queue_id_t queue_id)
+{
+	CVMX_SYNCWS; /* nudge out the unlock. */
+}
+
+/**
+ * @INTERNAL
+ * Initialize a command-queue lock to "unlocked" state.
+ */
+static inline void __cvmx_cmd_queue_lock_init(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index = __cvmx_cmd_queue_get_index(queue_id);
+	unsigned int node = __cvmx_cmd_queue_get_node(queue_id);
+
+	__cvmx_cmd_queue_state_ptrs[node]->lock[index] = (__cvmx_cmd_queue_lock_t){ 0, 0 };
+	CVMX_SYNCWS;
+}
+
+/**
+ * @INTERNAL
+ * Get the queue state structure for the given queue id
+ *
+ * @param queue_id Queue id to get
+ *
+ * @return Queue structure or NULL on failure
+ */
+static inline __cvmx_cmd_queue_state_t *__cvmx_cmd_queue_get_state(cvmx_cmd_queue_id_t queue_id)
+{
+	unsigned int index;
+	unsigned int node;
+	__cvmx_cmd_queue_state_t *qptr;
+
+	node = __cvmx_cmd_queue_get_node(queue_id);
+	index = __cvmx_cmd_queue_get_index(queue_id);
+
+	if (cvmx_unlikely(!__cvmx_cmd_queue_state_ptrs[node]))
+		__cvmx_cmd_queue_init_state_ptr(node);
+
+	qptr = &__cvmx_cmd_queue_state_ptrs[node]->state[index];
+	return qptr;
+}
+
+/**
+ * Write an arbitrary number of command words to a command queue.
+ * This is a generic function; the fixed number of command word
+ * functions yield higher performance.
+ *
+ * @param queue_id  Hardware command queue to write to
+ * @param use_locking
+ *                  Use internal locking to ensure exclusive access for queue
+ *                  updates. If you don't use this locking you must ensure
+ *                  exclusivity some other way. Locking is strongly recommended.
+ * @param cmd_count Number of command words to write
+ * @param cmds      Array of commands to write
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write(cvmx_cmd_queue_id_t queue_id, bool use_locking, int cmd_count, const u64 *cmds)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	/* Most of the time there are lots of free words in the current block */
+	if (cvmx_unlikely((qptr->index + cmd_count) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, cmd_count, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		/* Loop easy for compiler to unroll for the likely case */
+		while (cmd_count > 0) {
+			cmd_ptr[qptr->index++] = *cmds++;
+			cmd_count--;
+		}
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
+/**
+ * Simple function to write two command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t cvmx_cmd_queue_write2(cvmx_cmd_queue_id_t queue_id,
+							    bool use_locking, u64 cmd1, u64 cmd2)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	u64 *cmd_ptr;
+
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 2) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		u64 cmds[2];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 2, cmds);
+	} else {
+		/* Likely case to work fast */
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 2;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
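+
+/* Illustrative sketch (not part of the original file): set up a DMA
+ * engine command queue and push one two-word command. The depth, FPA
+ * pool number, and buffer size below are example assumptions only.
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_example(u64 cmd1, u64 cmd2)
+{
+	cvmx_cmd_queue_id_t q = CVMX_CMD_QUEUE_DMA(0);
+	cvmx_cmd_queue_result_t ret;
+
+	/* Up to 256 outstanding commands, FPA pool 3, 2 KiB buffers */
+	ret = cvmx_cmd_queue_initialize(q, 256, 3, 2048);
+	if (ret != CVMX_CMD_QUEUE_SUCCESS &&
+	    ret != CVMX_CMD_QUEUE_ALREADY_SETUP)
+		return ret;
+
+	/* A locked write keeps concurrent cores from corrupting the queue */
+	return cvmx_cmd_queue_write2(q, true, cmd1, cmd2);
+}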
+
+/**
+ * Simple function to write three command words to a command queue.
+ *
+ * @param queue_id Hardware command queue to write to
+ * @param use_locking
+ *                 Use internal locking to ensure exclusive access for queue
+ *                 updates. If you don't use this locking you must ensure
+ *                 exclusivity some other way. Locking is strongly recommended.
+ * @param cmd1     Command
+ * @param cmd2     Command
+ * @param cmd3     Command
+ *
+ * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
+ */
+static inline cvmx_cmd_queue_result_t
+cvmx_cmd_queue_write3(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2, u64 cmd3)
+{
+	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
+	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
+	u64 *cmd_ptr;
+
+	/* Make sure nobody else is updating the same queue */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_lock(queue_id);
+
+	if (cvmx_unlikely((qptr->index + 3) >= qptr->pool_size_m1)) {
+		/* The rare case when nearing end of block */
+		u64 cmds[3];
+
+		cmds[0] = cmd1;
+		cmds[1] = cmd2;
+		cmds[2] = cmd3;
+		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 3, cmds);
+	} else {
+		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
+		cmd_ptr += qptr->index;
+		qptr->index += 3;
+		cmd_ptr[0] = cmd1;
+		cmd_ptr[1] = cmd2;
+		cmd_ptr[2] = cmd3;
+	}
+
+	/* All updates are complete. Release the lock and return */
+	if (cvmx_likely(use_locking))
+		__cvmx_cmd_queue_unlock(queue_id);
+	else
+		CVMX_SYNCWS;
+
+	return ret;
+}
+
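+/*
+ * Example usage (an illustrative sketch, not part of the original API
+ * docs): write a two-word command with locking enabled and check the
+ * result. The queue id CVMX_CMD_QUEUE_PKO(0) and the command word
+ * values are assumptions for illustration only; the command encoding is
+ * hardware-specific.
+ *
+ *	u64 cmd1 = 0;	// first command word
+ *	u64 cmd2 = 0;	// second command word
+ *
+ *	if (cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(0), true, cmd1, cmd2) !=
+ *	    CVMX_CMD_QUEUE_SUCCESS)
+ *		printf("command queue write failed\n");
+ */
+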
+#endif /* __CVMX_CMD_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
new file mode 100644
index 000000000000..a8625b4228ac
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Definitions for enumerations used with Octeon CSRs.
+ */
+
+#ifndef __CVMX_CSR_ENUMS_H__
+#define __CVMX_CSR_ENUMS_H__
+
+typedef enum {
+	CVMX_IPD_OPC_MODE_STT = 0LL,
+	CVMX_IPD_OPC_MODE_STF = 1LL,
+	CVMX_IPD_OPC_MODE_STF1_STT = 2LL,
+	CVMX_IPD_OPC_MODE_STF2_STT = 3LL
+} cvmx_ipd_mode_t;
+
+/**
+ * Enumeration representing the amount of packet processing
+ * and validation performed by the input hardware.
+ */
+typedef enum {
+	CVMX_PIP_PORT_CFG_MODE_NONE = 0ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPL2 = 1ull,
+	CVMX_PIP_PORT_CFG_MODE_SKIPIP = 2ull
+} cvmx_pip_port_parse_mode_t;
+
+/**
+ * This enumeration controls how a QoS watcher matches a packet.
+ *
+ * @deprecated  This enumeration was used with cvmx_pip_config_watcher which has
+ *              been deprecated.
+ */
+typedef enum {
+	CVMX_PIP_QOS_WATCH_DISABLE = 0ull,
+	CVMX_PIP_QOS_WATCH_PROTNH = 1ull,
+	CVMX_PIP_QOS_WATCH_TCP = 2ull,
+	CVMX_PIP_QOS_WATCH_UDP = 3ull
+} cvmx_pip_qos_watch_types;
+
+/**
+ * This enumeration is used in PIP tag config to control how
+ * POW tags are generated by the hardware.
+ */
+typedef enum {
+	CVMX_PIP_TAG_MODE_TUPLE = 0ull,
+	CVMX_PIP_TAG_MODE_MASK = 1ull,
+	CVMX_PIP_TAG_MODE_IP_OR_MASK = 2ull,
+	CVMX_PIP_TAG_MODE_TUPLE_XOR_MASK = 3ull
+} cvmx_pip_tag_mode_t;
+
+/**
+ * Tag type definitions
+ */
+typedef enum {
+	CVMX_POW_TAG_TYPE_ORDERED = 0L,
+	CVMX_POW_TAG_TYPE_ATOMIC = 1L,
+	CVMX_POW_TAG_TYPE_NULL = 2L,
+	CVMX_POW_TAG_TYPE_NULL_NULL = 3L
+} cvmx_pow_tag_type_t;
+
+/**
+ * LCR bits 0 and 1 control the number of bits per character. See the
+ * following table for encodings:
+ *
+ * - 00 = 5 bits (bits 0-4 sent)
+ * - 01 = 6 bits (bits 0-5 sent)
+ * - 10 = 7 bits (bits 0-6 sent)
+ * - 11 = 8 bits (all bits sent)
+ */
+typedef enum {
+	CVMX_UART_BITS5 = 0,
+	CVMX_UART_BITS6 = 1,
+	CVMX_UART_BITS7 = 2,
+	CVMX_UART_BITS8 = 3
+} cvmx_uart_bits_t;
+
+typedef enum {
+	CVMX_UART_IID_NONE = 1,
+	CVMX_UART_IID_RX_ERROR = 6,
+	CVMX_UART_IID_RX_DATA = 4,
+	CVMX_UART_IID_RX_TIMEOUT = 12,
+	CVMX_UART_IID_TX_EMPTY = 2,
+	CVMX_UART_IID_MODEM = 0,
+	CVMX_UART_IID_BUSY = 7
+} cvmx_uart_iid_t;
+
+#endif /* __CVMX_CSR_ENUMS_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr.h b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
new file mode 100644
index 000000000000..730d54bb9278
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Configuration and status register (CSR) address and type definitions for
+ * Octeon.
+ */
+
+#ifndef __CVMX_CSR_H__
+#define __CVMX_CSR_H__
+
+#include "cvmx-csr-enums.h"
+#include "cvmx-pip-defs.h"
+
+typedef cvmx_pip_prt_cfgx_t cvmx_pip_port_cfg_t;
+
+/* The CSRs for bootbus region zero used to be independent of the
+    other regions 1-7. As of SDK 1.7.0 these were combined. These macros
+    are for backwards compatibility */
+#define CVMX_MIO_BOOT_REG_CFG0 CVMX_MIO_BOOT_REG_CFGX(0)
+#define CVMX_MIO_BOOT_REG_TIM0 CVMX_MIO_BOOT_REG_TIMX(0)
+
+/* The CN3XXX and CN58XX chips used to not have a LMC number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_LMC_BIST_CTL	  CVMX_LMCX_BIST_CTL(0)
+#define CVMX_LMC_BIST_RESULT	  CVMX_LMCX_BIST_RESULT(0)
+#define CVMX_LMC_COMP_CTL	  CVMX_LMCX_COMP_CTL(0)
+#define CVMX_LMC_CTL		  CVMX_LMCX_CTL(0)
+#define CVMX_LMC_CTL1		  CVMX_LMCX_CTL1(0)
+#define CVMX_LMC_DCLK_CNT_HI	  CVMX_LMCX_DCLK_CNT_HI(0)
+#define CVMX_LMC_DCLK_CNT_LO	  CVMX_LMCX_DCLK_CNT_LO(0)
+#define CVMX_LMC_DCLK_CTL	  CVMX_LMCX_DCLK_CTL(0)
+#define CVMX_LMC_DDR2_CTL	  CVMX_LMCX_DDR2_CTL(0)
+#define CVMX_LMC_DELAY_CFG	  CVMX_LMCX_DELAY_CFG(0)
+#define CVMX_LMC_DLL_CTL	  CVMX_LMCX_DLL_CTL(0)
+#define CVMX_LMC_DUAL_MEMCFG	  CVMX_LMCX_DUAL_MEMCFG(0)
+#define CVMX_LMC_ECC_SYND	  CVMX_LMCX_ECC_SYND(0)
+#define CVMX_LMC_FADR		  CVMX_LMCX_FADR(0)
+#define CVMX_LMC_IFB_CNT_HI	  CVMX_LMCX_IFB_CNT_HI(0)
+#define CVMX_LMC_IFB_CNT_LO	  CVMX_LMCX_IFB_CNT_LO(0)
+#define CVMX_LMC_MEM_CFG0	  CVMX_LMCX_MEM_CFG0(0)
+#define CVMX_LMC_MEM_CFG1	  CVMX_LMCX_MEM_CFG1(0)
+#define CVMX_LMC_OPS_CNT_HI	  CVMX_LMCX_OPS_CNT_HI(0)
+#define CVMX_LMC_OPS_CNT_LO	  CVMX_LMCX_OPS_CNT_LO(0)
+#define CVMX_LMC_PLL_BWCTL	  CVMX_LMCX_PLL_BWCTL(0)
+#define CVMX_LMC_PLL_CTL	  CVMX_LMCX_PLL_CTL(0)
+#define CVMX_LMC_PLL_STATUS	  CVMX_LMCX_PLL_STATUS(0)
+#define CVMX_LMC_READ_LEVEL_CTL	  CVMX_LMCX_READ_LEVEL_CTL(0)
+#define CVMX_LMC_READ_LEVEL_DBG	  CVMX_LMCX_READ_LEVEL_DBG(0)
+#define CVMX_LMC_READ_LEVEL_RANKX CVMX_LMCX_READ_LEVEL_RANKX(0)
+#define CVMX_LMC_RODT_COMP_CTL	  CVMX_LMCX_RODT_COMP_CTL(0)
+#define CVMX_LMC_RODT_CTL	  CVMX_LMCX_RODT_CTL(0)
+#define CVMX_LMC_WODT_CTL	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL0	  CVMX_LMCX_WODT_CTL0(0)
+#define CVMX_LMC_WODT_CTL1	  CVMX_LMCX_WODT_CTL1(0)
+
+/* The CN3XXX and CN58XX chips used to not have a TWSI bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_MIO_TWS_INT	 CVMX_MIO_TWSX_INT(0)
+#define CVMX_MIO_TWS_SW_TWSI	 CVMX_MIO_TWSX_SW_TWSI(0)
+#define CVMX_MIO_TWS_SW_TWSI_EXT CVMX_MIO_TWSX_SW_TWSI_EXT(0)
+#define CVMX_MIO_TWS_TWSI_SW	 CVMX_MIO_TWSX_TWSI_SW(0)
+
+/* The CN3XXX and CN58XX chips used to not have a SMI/MDIO bus number
+    passed to the address macros. These are here to supply backwards
+    compatibility with old code. Code should really use the new addresses
+    with bus arguments for support on other chips */
+#define CVMX_SMI_CLK	CVMX_SMIX_CLK(0)
+#define CVMX_SMI_CMD	CVMX_SMIX_CMD(0)
+#define CVMX_SMI_EN	CVMX_SMIX_EN(0)
+#define CVMX_SMI_RD_DAT CVMX_SMIX_RD_DAT(0)
+#define CVMX_SMI_WR_DAT CVMX_SMIX_WR_DAT(0)
+
+#endif /* __CVMX_CSR_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-error.h b/arch/mips/mach-octeon/include/mach/cvmx-error.h
new file mode 100644
index 000000000000..9a13ed422484
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-error.h
@@ -0,0 +1,456 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the Octeon extended error status.
+ */
+
+#ifndef __CVMX_ERROR_H__
+#define __CVMX_ERROR_H__
+
+/**
+ * There are generally many error status bits associated with a
+ * single logical group. The enumeration below is used to
+ * communicate high level groups to the error infrastructure so
+ * error status bits can be enabled or disabled in large groups.
+ */
+typedef enum {
+	CVMX_ERROR_GROUP_INTERNAL,
+	CVMX_ERROR_GROUP_L2C,
+	CVMX_ERROR_GROUP_ETHERNET,
+	CVMX_ERROR_GROUP_MGMT_PORT,
+	CVMX_ERROR_GROUP_PCI,
+	CVMX_ERROR_GROUP_SRIO,
+	CVMX_ERROR_GROUP_USB,
+	CVMX_ERROR_GROUP_LMC,
+	CVMX_ERROR_GROUP_ILK,
+	CVMX_ERROR_GROUP_DFM,
+	CVMX_ERROR_GROUP_ILA,
+} cvmx_error_group_t;
+
+/**
+ * Flags representing special handling for some error registers.
+ * These flags are passed to cvmx_error_initialize() to control
+ * the handling of bits where the same flags were passed to the
+ * added cvmx_error_info_t.
+ */
+typedef enum {
+	CVMX_ERROR_TYPE_NONE = 0,
+	CVMX_ERROR_TYPE_SBE = 1 << 0,
+	CVMX_ERROR_TYPE_DBE = 1 << 1,
+} cvmx_error_type_t;
+
+/**
+ * When registering for interest in an error status register, the
+ * type of the register needs to be known by cvmx-error. Most
+ * registers are either IO64 or IO32, but some blocks contain
+ * registers that can't be directly accessed. A good example
+ * would be PCIe extended error state stored in config space.
+ */
+typedef enum {
+	__CVMX_ERROR_REGISTER_NONE,
+	CVMX_ERROR_REGISTER_IO64,
+	CVMX_ERROR_REGISTER_IO32,
+	CVMX_ERROR_REGISTER_PCICONFIG,
+	CVMX_ERROR_REGISTER_SRIOMAINT,
+} cvmx_error_register_t;
+
+struct cvmx_error_info;
+/**
+ * Error handling functions must have the following prototype.
+ */
+typedef int (*cvmx_error_func_t)(const struct cvmx_error_info *info);
+
+/**
+ * This structure is passed to all error handling functions.
+ */
+typedef struct cvmx_error_info {
+	cvmx_error_register_t reg_type;
+	u64 status_addr;
+	u64 status_mask;
+	u64 enable_addr;
+	u64 enable_mask;
+	cvmx_error_type_t flags;
+	cvmx_error_group_t group;
+	int group_index;
+	cvmx_error_func_t func;
+	u64 user_info;
+	struct {
+		cvmx_error_register_t reg_type;
+		u64 status_addr;
+		u64 status_mask;
+	} parent;
+} cvmx_error_info_t;
+
+/**
+ * Initialize the error status system. This should be called once
+ * before any other functions are called. This function adds default
+ * handlers for most all error events but does not enable them. Later
+ * calls to cvmx_error_enable() are needed.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize(void);
+
+/**
+ * Poll the error status registers and call the appropriate error
+ * handlers. This should be called in the RSL interrupt handler
+ * for your application or operating system.
+ *
+ * @return Number of error handlers called. Zero means this call
+ *         found no errors and was spurious.
+ */
+int cvmx_error_poll(void);
+
+/**
+ * Register to be called when an error status bit is set. Most users
+ * will not need to call this function as cvmx_error_initialize()
+ * registers default handlers for most error conditions. This function
+ * is normally used to add more handlers without changing the existing
+ * handlers.
+ *
+ * @param new_info Information about the handler for an error register. The
+ *                 structure passed is copied and can be destroyed after the
+ *                 call. All members of the structure must be populated, even the
+ *                 parent information.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_add(const cvmx_error_info_t *new_info);
+
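+/*
+ * Example usage (an illustrative sketch): registering an additional
+ * handler. The status address, mask and my_handler() are hypothetical
+ * placeholders; per the note above, every member, including the parent
+ * information, must be populated before calling cvmx_error_add().
+ *
+ *	cvmx_error_info_t info;
+ *
+ *	memset(&info, 0, sizeof(info));
+ *	info.reg_type = CVMX_ERROR_REGISTER_IO64;
+ *	info.status_addr = status_csr_address;	// hypothetical CSR address
+ *	info.status_mask = 1ull << 0;		// error bit of interest
+ *	info.func = my_handler;			// cvmx_error_func_t callback
+ *	if (cvmx_error_add(&info))
+ *		printf("cvmx_error_add failed\n");
+ */
+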
+/**
+ * Remove all handlers for a status register and mask. Normally
+ * this function should not be called. Instead a new handler should be
+ * installed to replace the existing handler. In the event that all
+ * reporting of an error bit should be removed, use this
+ * function.
+ *
+ * @param reg_type Type of the status register to remove
+ * @param status_addr
+ *                 Status register to remove.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 removed.
+ * @param old_info If not NULL, this is filled with information about the handler
+ *                 that was removed.
+ *
+ * @return Zero on success, negative on failure (not found).
+ */
+int cvmx_error_remove(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+		      cvmx_error_info_t *old_info);
+
+/**
+ * Change the function and user_info for an existing error status
+ * register. This function should be used to replace the default
+ * handler with an application specific version as needed.
+ *
+ * @param reg_type Type of the status register to change
+ * @param status_addr
+ *                 Status register to change.
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 changed.
+ * @param new_func New function to use to handle the error status
+ * @param new_user_info
+ *                 New user info parameter for the function
+ * @param old_func If not NULL, the old function is returned. Useful for restoring
+ *                 the old handler.
+ * @param old_user_info
+ *                 If not NULL, the old user info parameter.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_error_change_handler(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
+			      cvmx_error_func_t new_func, u64 new_user_info,
+			      cvmx_error_func_t *old_func, u64 *old_user_info);
+
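+/*
+ * Example usage (sketch): replacing the default handler with an
+ * application-specific one while saving the old handler for later
+ * restoration. addr, mask and app_handler() are placeholders chosen for
+ * illustration.
+ *
+ *	cvmx_error_func_t old_func;
+ *	u64 old_info;
+ *
+ *	cvmx_error_change_handler(CVMX_ERROR_REGISTER_IO64, addr, mask,
+ *				  app_handler, 0, &old_func, &old_info);
+ */
+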
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalizing the calls throughout the executive to not
+ * enable interrupts in U-Boot, simply make the enable function do nothing
+ */
+static inline int cvmx_error_enable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Disable all error registers for a logical group. This should be
+ * called whenever a logical group is brought offline. Many blocks
+ * will report spurious errors when offline unless this function
+ * is called.
+ *
+ * @param group  Logical group to disable
+ * @param group_index
+ *               Index for the group as defined in the cvmx_error_group_t
+ *               comments.
+ *
+ * @return Zero on success, negative on failure.
+ */
+/*
+ * Rather than conditionalizing the calls throughout the executive to not
+ * disable interrupts in U-Boot, simply make the disable function do nothing
+ */
+static inline int cvmx_error_disable_group(cvmx_error_group_t group, int group_index)
+{
+	return 0;
+}
+
+/**
+ * Enable all handlers for a specific status register mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 enabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_enable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * Disable all handlers for a specific status register and mask.
+ *
+ * @param reg_type Type of the status register
+ * @param status_addr
+ *                 Status register address
+ * @param status_mask
+ *                 All handlers for this status register with this mask will be
+ *                 disabled.
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_disable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
+
+/**
+ * @INTERNAL
+ * Function for processing non leaf error status registers. This function
+ * calls all handlers for this passed register and all children linked
+ * to it.
+ *
+ * @param info   Error register to check
+ *
+ * @return Number of error status bits found or zero if no bits were set.
+ */
+int __cvmx_error_decode(const cvmx_error_info_t *info);
+
+/**
+ * @INTERNAL
+ * This error bit handler simply prints a message and clears the status bit
+ *
+ * @param info   Error register to check
+ */
+int __cvmx_error_display(const cvmx_error_info_t *info);
+
+/**
+ * Find the handler for a specific status register and mask
+ *
+ * @param status_addr
+ *                Status register address
+ *
+ * @return  Return the handler on success or null on failure.
+ */
+cvmx_error_info_t *cvmx_error_get_index(u64 status_addr);
+
+void __cvmx_install_gmx_error_handler_for_xaui(void);
+
+/**
+ * 78xx related
+ */
+/**
+ * Compare two INTSN values.
+ *
+ * @param key INTSN value to search for
+ * @param data current entry from the searched array
+ *
+ * @return Negative, 0 or positive when respectively key is less than,
+ *		equal or greater than data.
+ */
+int cvmx_error_intsn_cmp(const void *key, const void *data);
+
+/**
+ * @INTERNAL
+ *
+ * @param intsn   Interrupt source number to display
+ *
+ * @param node Node number
+ *
+ * @return Zero on success, -1 on error
+ */
+int cvmx_error_intsn_display_v3(int node, u32 intsn);
+
+/**
+ * Initialize the error status system for cn78xx. This should be called once
+ * before any other functions are called. This function enables the interrupts
+ * described in the array.
+ *
+ * @param node Node number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_initialize_cn78xx(int node);
+
+/**
+ * Enable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_enable_v3(int node, u32 intsn);
+
+/**
+ * Disable interrupt for a specific INTSN.
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_disable_v3(int node, u32 intsn);
+
+/**
+ * Clear interrupt for a specific INTSN.
+ *
+ *
+ * @param node Node number
+ * @param intsn Interrupt source number
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_intsn_clear_v3(int node, u32 intsn);
+
+/**
+ * Enable interrupts for a specific CSR(all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero on success, negative on failure.
+ */
+int cvmx_error_csr_enable_v3(int node, u64 csr_address);
+
+/**
+ * Disable interrupts for a specific CSR (all the bits/intsn in the csr).
+ *
+ * @param node Node number
+ * @param csr_address CSR address
+ *
+ * @return Zero
+ */
+int cvmx_error_csr_disable_v3(int node, u64 csr_address);
+
+/**
+ * Enable all error registers for a logical group. This should be
+ * called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Disable all error registers for a logical group.
+ *
+ * @param group  Logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to enable
+ * @param type   Category in a logical group to enable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_enable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				    int xipd_port);
+
+/**
+ * Disable all error registers for a specific category in a logical group.
+ * This should be called whenever a logical group is brought online.
+ *
+ * @param group  Logical group to disable
+ * @param type   Category in a logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_disable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
+				     int xipd_port);
+
+/**
+ * Clear all error registers for a logical group.
+ *
+ * @param group  Logical group to disable
+ * @param xipd_port  The IPD port value
+ *
+ * @return Zero.
+ */
+int cvmx_error_clear_group_v3(cvmx_error_group_t group, int xipd_port);
+
+/**
+ * Enable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to enable
+ *
+ *@return Zero.
+ */
+int cvmx_error_enable_type_v3(int node, cvmx_error_type_t type);
+
+/**
+ * Disable all error registers for a particular category.
+ *
+ * @param node  CCPI node
+ * @param type  category to disable
+ *
+ *@return Zero.
+ */
+int cvmx_error_disable_type_v3(int node, cvmx_error_type_t type);
+
+void cvmx_octeon_hang(void) __attribute__((__noreturn__));
+
+/**
+ * @INTERNAL
+ *
+ * Process L2C single and multi-bit ECC errors
+ *
+ */
+int __cvmx_cn7xxx_l2c_l2d_ecc_error_display(int node, int intsn);
+
+/**
+ * Handle L2 cache TAG ECC errors and noway errors
+ *
+ * @param	node	CCPI node
+ * @param	intsn	intsn from error array.
+ * @param	remote	true for remote node (cn78xx only)
+ *
+ * @return	1 if handled, 0 if not handled
+ */
+int __cvmx_cn7xxx_l2c_tag_error_display(int node, int intsn, bool remote);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
new file mode 100644
index 000000000000..297fb3f4a28c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator.
+ */
+
+#ifndef __CVMX_FPA_H__
+#define __CVMX_FPA_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa1.h"
+#include "cvmx-fpa3.h"
+
+#define CVMX_FPA_MIN_BLOCK_SIZE 128
+#define CVMX_FPA_ALIGNMENT	128
+#define CVMX_FPA_POOL_NAME_LEN	16
+
+/* On CN78XX in backward-compatible mode, pool is mapped to AURA */
+#define CVMX_FPA_NUM_POOLS                                                                         \
+	(octeon_has_feature(OCTEON_FEATURE_FPA3) ? cvmx_fpa3_num_auras() : CVMX_FPA1_NUM_POOLS)
+
+/**
+ * Structure to store FPA pool configuration parameters.
+ */
+struct cvmx_fpa_pool_config {
+	s64 pool_num;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+typedef struct cvmx_fpa_pool_config cvmx_fpa_pool_config_t;
+
+/**
+ * Return the name of the pool
+ *
+ * @param pool_num   Pool to get the name of
+ * @return The name
+ */
+const char *cvmx_fpa_get_name(int pool_num);
+
+/**
+ * Initialize FPA per node
+ */
+int cvmx_fpa_global_init_node(int node);
+
+/**
+ * Enable the FPA
+ */
+static inline void cvmx_fpa_enable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_enable();
+	else
+		cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * Disable the FPA
+ */
+static inline void cvmx_fpa_disable(void)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa1_disable();
+	/* FPA3 does not have a disable function */
+}
+
+/**
+ * @INTERNAL
+ * @deprecated OBSOLETE
+ *
+ * Kept for transition assistance only
+ */
+static inline void cvmx_fpa_global_initialize(void)
+{
+	cvmx_fpa_global_init_node(cvmx_get_node_num());
+}
+
+/**
+ * @INTERNAL
+ *
+ * Convert FPA1 style POOL into FPA3 AURA in
+ * backward compatibility mode.
+ */
+static inline cvmx_fpa3_gaura_t cvmx_fpa1_pool_to_fpa3_aura(cvmx_fpa1_pool_t pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
+		unsigned int node = cvmx_get_node_num();
+		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(node, pool);
+		return aura;
+	}
+	return CVMX_FPA3_INVALID_GAURA;
+}
+
+/**
+ * Get a new block from the FPA
+ *
+ * @param pool   Pool to get the block from
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_alloc(u64 pool)
+{
+	/* FPA3 is handled differently */
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		return cvmx_fpa3_alloc(cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_alloc(pool);
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa_async_alloc(u64 scr_addr, u64 pool)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_FPA3))
+		cvmx_fpa3_async_alloc(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		cvmx_fpa1_async_alloc(scr_addr, pool);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa_async_alloc_finish(u64 scr_addr, u64 pool)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		return cvmx_fpa3_async_alloc_finish(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
+	else
+		return cvmx_fpa1_async_alloc_finish(scr_addr, pool);
+}
+
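+/*
+ * Example usage (sketch): overlapping the allocation latency with other
+ * work. The scratchpad offset SCR_OFF is a placeholder; it must be an
+ * 8-byte aligned scratch address owned by the caller.
+ *
+ *	cvmx_fpa_async_alloc(SCR_OFF, pool);
+ *	... do unrelated work while the request is in flight ...
+ *	void *buf = cvmx_fpa_async_alloc_finish(SCR_OFF, pool);
+ */
+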
+/**
+ * Free a block allocated with a FPA pool.
+ * Does NOT provide memory ordering in cases where the memory block was
+ * modified by the core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free_nosync(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	/* FPA3 is handled differently */
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free_nosync(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free_nosync(ptr, pool, num_cache_lines);
+}
+
+/**
+ * Free a block allocated with a FPA pool.  Provides required memory
+ * ordering in cases where memory block was modified by core.
+ *
+ * @param ptr    Block to free
+ * @param pool   Pool to put it in
+ * @param num_cache_lines
+ *               Cache lines to invalidate
+ */
+static inline void cvmx_fpa_free(void *ptr, u64 pool, u64 num_cache_lines)
+{
+	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
+		cvmx_fpa3_free(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
+	else
+		cvmx_fpa1_free(ptr, pool, num_cache_lines);
+}
+
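+/*
+ * Example usage (sketch): a typical allocate/use/free cycle. The pool
+ * number is a placeholder; passing 0 cache lines requests no
+ * don't-write-back operations on the freed buffer.
+ *
+ *	void *buf = cvmx_fpa_alloc(pool);
+ *
+ *	if (!buf)
+ *		return -1;	// pool exhausted
+ *	... use the buffer ...
+ *	cvmx_fpa_free(buf, pool, 0);
+ */
+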
+/**
+ * Setup a FPA pool to control a new block of memory.
+ * This can only be called once per pool. Make sure proper
+ * locking enforces this.
+ *
+ * @param pool       Pool to initialize
+ * @param name       Constant character string to name this pool.
+ *                   String is not copied.
+ * @param buffer     Pointer to the block of memory to use. This must be
+ *                   accessible by all processors and external hardware.
+ * @param block_size Size for each block controlled by the FPA
+ * @param num_blocks Number of blocks
+ *
+ * @return the pool number on Success,
+ *         -1 on failure
+ */
+int cvmx_fpa_setup_pool(int pool, const char *name, void *buffer, u64 block_size, u64 num_blocks);
+
+int cvmx_fpa_shutdown_pool(int pool);
+
+/**
+ * Gets the block size of buffer in specified pool
+ * @param pool	 Pool to get the block size from
+ * @return       Size of buffer in specified pool
+ */
+unsigned int cvmx_fpa_get_block_size(int pool);
+
+int cvmx_fpa_is_pool_available(int pool_num);
+u64 cvmx_fpa_get_pool_owner(int pool_num);
+int cvmx_fpa_get_max_pools(void);
+int cvmx_fpa_get_current_count(int pool_num);
+int cvmx_fpa_validate_pool(int pool);
+
+#endif /* __CVMX_FPA_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
new file mode 100644
index 000000000000..6985083a5d66
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Free Pool Allocator on Octeon chips.
+ * These are the legacy models, i.e. prior to CN78XX/CN76XX.
+ */
+
+#ifndef __CVMX_FPA1_HW_H__
+#define __CVMX_FPA1_HW_H__
+
+#include "cvmx-scratch.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-fpa3.h"
+
+/* Legacy pool range is 0..7, plus pool 8 on CN68XX */
+typedef int cvmx_fpa1_pool_t;
+
+#define CVMX_FPA1_NUM_POOLS    8
+#define CVMX_FPA1_INVALID_POOL ((cvmx_fpa1_pool_t)-1)
+#define CVMX_FPA1_NAME_SIZE    16
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+} cvmx_fpa1_iobdma_data_t;
+
+/**
+ * Allocate or reserve the specified FPA pool.
+ *
+ * @param pool	  FPA pool to allocate/reserve. If -1 it
+ *                finds an empty pool to allocate.
+ * @return        Allocated pool number or CVMX_FPA1_INVALID_POOL
+ *                if the pool could not be allocated
+ */
+cvmx_fpa1_pool_t cvmx_fpa1_reserve_pool(cvmx_fpa1_pool_t pool);
+
+/**
+ * Free the specified FPA pool.
+ * @param pool	   Pool to free
+ * @return         0 on success, -1 on failure
+ */
+int cvmx_fpa1_release_pool(cvmx_fpa1_pool_t pool);
+
+static inline void cvmx_fpa1_free(void *ptr, cvmx_fpa1_pool_t pool, u64 num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer.  This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+static inline void cvmx_fpa1_free_nosync(void *ptr, cvmx_fpa1_pool_t pool,
+					 unsigned int num_cache_lines)
+{
+	cvmx_addr_t newptr;
+
+	newptr.u64 = cvmx_ptr_to_phys(ptr);
+	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
+	/* Prevent GCC from reordering around free */
+	asm volatile("" : : : "memory");
+	/* value written is number of cache lines not written back */
+	cvmx_write_io(newptr.u64, num_cache_lines);
+}
+
+/**
+ * Enable the FPA for use. Must be performed after any CSR
+ * configuration but before any other FPA functions.
+ */
+static inline void cvmx_fpa1_enable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	if (status.s.enb) {
+		/*
+		 * CN68XXP1 should not reset the FPA (doing so may break
+		 * the SSO), so we may end up enabling it more than once.
+		 * Just return and don't spew messages.
+		 */
+		return;
+	}
+
+	status.u64 = 0;
+	status.s.enb = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+/**
+ * Reset FPA to disable. Make sure buffers from all FPA pools are freed
+ * before disabling FPA.
+ */
+static inline void cvmx_fpa1_disable(void)
+{
+	cvmx_fpa_ctl_status_t status;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
+		return;
+
+	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
+	status.s.reset = 1;
+	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
+}
+
+static inline void *cvmx_fpa1_alloc(cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	for (;;) {
+		address = csr_rd(CVMX_ADDR_DID(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool)));
+		if (cvmx_likely(address)) {
+			return cvmx_phys_to_ptr(address);
+		} else {
+			if (csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool)) > 0)
+				udelay(50);
+			else
+				return NULL;
+		}
+	}
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ * @INTERNAL
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param pool      Pool to get the block from
+ */
+static inline void cvmx_fpa1_async_alloc(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	cvmx_fpa1_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.s.scraddr = scr_addr >> 3;
+	data.s.len = 1;
+	data.s.did = CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool);
+	data.s.addr = 0;
+
+	cvmx_scratch_write64(scr_addr, 0ull);
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa_async_alloc
+ * @INTERNAL
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param pool Pool the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa1_async_alloc_finish(u64 scr_addr, cvmx_fpa1_pool_t pool)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		return cvmx_fpa1_alloc(pool);
+}
+
+static inline u64 cvmx_fpa1_get_available(cvmx_fpa1_pool_t pool)
+{
+	return csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool));
+}
+
+#endif /* __CVMX_FPA1_HW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
new file mode 100644
index 000000000000..229982b83163
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
@@ -0,0 +1,566 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the CN78XX Free Pool Allocator, a.k.a. FPA3
+ */
+
+#include "cvmx-address.h"
+#include "cvmx-fpa-defs.h"
+#include "cvmx-scratch.h"
+
+#ifndef __CVMX_FPA3_H__
+#define __CVMX_FPA3_H__
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 2;
+	unsigned lpool : 6;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_pool_t;
+
+typedef struct {
+	unsigned res0 : 6;
+	unsigned node : 2;
+	unsigned res1 : 6;
+	unsigned laura : 10;
+	unsigned valid_magic : 16;
+} cvmx_fpa3_gaura_t;
+
+#define CVMX_FPA3_VALID_MAGIC	0xf9a3
+#define CVMX_FPA3_INVALID_GAURA ((cvmx_fpa3_gaura_t){ 0, 0, 0, 0, 0 })
+#define CVMX_FPA3_INVALID_POOL	((cvmx_fpa3_pool_t){ 0, 0, 0, 0, 0 })
+
+static inline bool __cvmx_fpa3_aura_valid(cvmx_fpa3_gaura_t aura)
+{
+	if (aura.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline bool __cvmx_fpa3_pool_valid(cvmx_fpa3_pool_t pool)
+{
+	if (pool.valid_magic != CVMX_FPA3_VALID_MAGIC)
+		return false;
+	return true;
+}
+
+static inline cvmx_fpa3_gaura_t __cvmx_fpa3_gaura(int node, int laura)
+{
+	cvmx_fpa3_gaura_t aura;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (laura < 0)
+		return CVMX_FPA3_INVALID_GAURA;
+
+	aura.node = node;
+	aura.laura = laura;
+	aura.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return aura;
+}
+
+static inline cvmx_fpa3_pool_t __cvmx_fpa3_pool(int node, int lpool)
+{
+	cvmx_fpa3_pool_t pool;
+
+	if (node < 0)
+		node = cvmx_get_node_num();
+	if (lpool < 0)
+		return CVMX_FPA3_INVALID_POOL;
+
+	pool.node = node;
+	pool.lpool = lpool;
+	pool.valid_magic = CVMX_FPA3_VALID_MAGIC;
+	return pool;
+}
+
+#undef CVMX_FPA3_VALID_MAGIC
+
+/**
+ * Structure describing the data format used for stores to the FPA.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	} cn78xx;
+} cvmx_fpa3_iobdma_data_t;
+
+/**
+ * Struct describing load allocate operation addresses for FPA pool.
+ */
+union cvmx_fpa3_load_data {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 red : 1;
+		u64 reserved2 : 9;
+		u64 aura : 10;
+		u64 reserved3 : 16;
+	};
+};
+
+typedef union cvmx_fpa3_load_data cvmx_fpa3_load_data_t;
+
+/**
+ * Struct describing store free operation addresses from FPA pool.
+ */
+union cvmx_fpa3_store_addr {
+	u64 u64;
+	struct {
+		u64 seg : 2;
+		u64 reserved1 : 13;
+		u64 io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved2 : 10;
+		u64 aura : 10;
+		u64 fabs : 1;
+		u64 reserved3 : 3;
+		u64 dwb_count : 9;
+		u64 reserved4 : 3;
+	};
+};
+
+typedef union cvmx_fpa3_store_addr cvmx_fpa3_store_addr_t;
+
+enum cvmx_fpa3_pool_alignment_e {
+	FPA_NATURAL_ALIGNMENT,
+	FPA_OFFSET_ALIGNMENT,
+	FPA_OPAQUE_ALIGNMENT
+};
+
+#define CVMX_FPA3_AURAX_LIMIT_MAX ((1ull << 40) - 1)
+
+/**
+ * @INTERNAL
+ * Accessor function returning the number of POOLs in an FPA3,
+ * depending on the SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_pools(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 32;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 32;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * @INTERNAL
+ * Accessor function returning the number of AURAs in an FPA3,
+ * depending on the SoC model.
+ * The number is per-node for models supporting multi-node configurations.
+ */
+static inline int cvmx_fpa3_num_auras(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1024;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 512;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 512;
+	printf("ERROR: %s: Unknowm model\n", __func__);
+	return -1;
+}
+
+/**
+ * Get the FPA3 POOL underneath FPA3 AURA, containing all its buffers
+ *
+ */
+static inline cvmx_fpa3_pool_t cvmx_fpa3_aura_to_pool(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_aurax_pool_t aurax_pool;
+
+	aurax_pool.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_POOL(aura.laura));
+
+	pool = __cvmx_fpa3_pool(aura.node, aurax_pool.s.pool);
+	return pool;
+}
+
+/**
+ * Get a new block from the FPA pool
+ *
+ * @param aura  - aura number
+ * @return pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_alloc(cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+	cvmx_fpa3_load_data_t load_addr;
+
+	load_addr.u64 = 0;
+	load_addr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	load_addr.io = 1;
+	load_addr.did = 0x29; /* Device ID. Indicates FPA. */
+	load_addr.node = aura.node;
+	load_addr.red = 0; /* Perform RED on allocation.
+				  * FIXME to use config option
+				  */
+	load_addr.aura = aura.laura;
+
+	address = cvmx_read64_uint64(load_addr.u64);
+	if (!address)
+		return NULL;
+	return cvmx_phys_to_ptr(address);
+}
+
+/**
+ * Asynchronously get a new block from the FPA
+ *
+ * The result of cvmx_fpa_async_alloc() may be retrieved using
+ * cvmx_fpa_async_alloc_finish().
+ *
+ * @param scr_addr Local scratch address to put response in.  This is a byte
+ *		   address but must be 8 byte aligned.
+ * @param aura     Global aura to get the block from
+ */
+static inline void cvmx_fpa3_async_alloc(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_iobdma_data_t data;
+
+	/* Hardware only uses 64 bit aligned locations, so convert from byte
+	 * address to 64-bit index
+	 */
+	data.u64 = 0ull;
+	data.cn78xx.scraddr = scr_addr >> 3;
+	data.cn78xx.len = 1;
+	data.cn78xx.did = 0x29;
+	data.cn78xx.node = aura.node;
+	data.cn78xx.aura = aura.laura;
+	cvmx_scratch_write64(scr_addr, 0ull);
+
+	CVMX_SYNCW;
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Retrieve the result of cvmx_fpa3_async_alloc
+ *
+ * @param scr_addr The Local scratch address.  Must be the same value
+ * passed to cvmx_fpa_async_alloc().
+ *
+ * @param aura Global aura the block came from.  Must be the same value
+ * passed to cvmx_fpa_async_alloc.
+ *
+ * @return Pointer to the block or NULL on failure
+ */
+static inline void *cvmx_fpa3_async_alloc_finish(u64 scr_addr, cvmx_fpa3_gaura_t aura)
+{
+	u64 address;
+
+	CVMX_SYNCIOBDMA;
+
+	address = cvmx_scratch_read64(scr_addr);
+	if (cvmx_likely(address))
+		return cvmx_phys_to_ptr(address);
+	else
+		/* Try regular alloc if async failed */
+		return cvmx_fpa3_alloc(aura);
+}
+
+/**
+ * Free a pointer back to the pool.
+ *
+ * @param aura   global aura number
+ * @param ptr    physical address of block to free.
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free(void *ptr, cvmx_fpa3_gaura_t aura, unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Make sure that any previous writes to memory go out before we free
+	 * this buffer. This also serves as a barrier to prevent GCC from
+	 * reordering operations to after the free.
+	 */
+	CVMX_SYNCWS;
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+/**
+ * Free a pointer back to the pool without flushing the write buffer.
+ *
+ * @param aura   global aura number
+ * @param ptr    physical address of block to free.
+ * @param num_cache_lines Cache lines to invalidate
+ */
+static inline void cvmx_fpa3_free_nosync(void *ptr, cvmx_fpa3_gaura_t aura,
+					 unsigned int num_cache_lines)
+{
+	cvmx_fpa3_store_addr_t newptr;
+	cvmx_addr_t newdata;
+
+	newdata.u64 = cvmx_ptr_to_phys(ptr);
+
+	/* Prevent GCC from reordering writes to (*ptr) */
+	asm volatile("" : : : "memory");
+
+	newptr.u64 = 0;
+	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
+	newptr.io = 1;
+	newptr.did = 0x29; /* Device id, indicates FPA */
+	newptr.node = aura.node;
+	newptr.aura = aura.laura;
+	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
+	newptr.dwb_count = num_cache_lines;
+
+	cvmx_write_io(newptr.u64, newdata.u64);
+}
+
+static inline int cvmx_fpa3_pool_is_enabled(cvmx_fpa3_pool_t pool)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	if (!__cvmx_fpa3_pool_valid(pool))
+		return -1;
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	return pool_cfg.cn78xx.ena;
+}
+
+static inline int cvmx_fpa3_config_red_params(unsigned int node, int qos_avg_en, int red_lvl_dly,
+					      int avg_dly)
+{
+	cvmx_fpa_gen_cfg_t fpa_cfg;
+	cvmx_fpa_red_delay_t red_delay;
+
+	fpa_cfg.u64 = cvmx_read_csr_node(node, CVMX_FPA_GEN_CFG);
+	fpa_cfg.s.avg_en = qos_avg_en;
+	fpa_cfg.s.lvl_dly = red_lvl_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_GEN_CFG, fpa_cfg.u64);
+
+	red_delay.u64 = cvmx_read_csr_node(node, CVMX_FPA_RED_DELAY);
+	red_delay.s.avg_dly = avg_dly;
+	cvmx_write_csr_node(node, CVMX_FPA_RED_DELAY, red_delay.u64);
+	return 0;
+}
+
+/**
+ * Gets the buffer size of the pool underlying the specified aura.
+ *
+ * @param aura Global aura number
+ * @return Returns size of the buffers in the specified pool.
+ */
+static inline int cvmx_fpa3_get_aura_buf_size(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+	int block_size;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
+	block_size = pool_cfg.cn78xx.buf_size << 7;
+	return block_size;
+}
+
+/**
+ * Return the number of available buffers in an AURA
+ *
+ * @param aura   Aura to receive the count for
+ * @return available buffer count
+ */
+static inline long long cvmx_fpa3_get_available(cvmx_fpa3_gaura_t aura)
+{
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa_poolx_available_t avail_reg;
+	cvmx_fpa_aurax_cnt_t cnt_reg;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	long long ret;
+
+	pool = cvmx_fpa3_aura_to_pool(aura);
+
+	/* Get POOL available buffer count */
+	avail_reg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_AVAILABLE(pool.lpool));
+
+	/* Get AURA current available count */
+	cnt_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT(aura.laura));
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (limit_reg.cn78xx.limit < cnt_reg.cn78xx.cnt)
+		return 0;
+
+	/* Calculate AURA-based buffer allowance */
+	ret = limit_reg.cn78xx.limit - cnt_reg.cn78xx.cnt;
+
+	/* Use the POOL's real buffer availability when it is less than the allowance */
+	if (ret > (long long)avail_reg.cn78xx.count)
+		ret = avail_reg.cn78xx.count;
+
+	return ret;
+}
+
+/**
+ * Configure the QoS parameters of an FPA3 AURA
+ *
+ * @param aura is the FPA3 AURA handle
+ * @param ena_red enables random early discard when outstanding count exceeds 'pass_thresh'
+ * @param pass_thresh is the maximum count to invoke flow control
+ * @param drop_thresh is the count threshold to begin dropping packets
+ * @param ena_bp enables backpressure when outstanding count exceeds 'bp_thresh'
+ * @param bp_thresh is the back-pressure threshold
+ *
+ */
+static inline void cvmx_fpa3_setup_aura_qos(cvmx_fpa3_gaura_t aura, bool ena_red, u64 pass_thresh,
+					    u64 drop_thresh, bool ena_bp, u64 bp_thresh)
+{
+	unsigned int shift = 0;
+	u64 shift_thresh;
+	cvmx_fpa_aurax_cnt_limit_t limit_reg;
+	cvmx_fpa_aurax_cnt_levels_t aura_level;
+
+	if (!__cvmx_fpa3_aura_valid(aura))
+		return;
+
+	/* Get AURAX count limit for validation */
+	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
+
+	if (pass_thresh < 256)
+		pass_thresh = 255;
+
+	if (drop_thresh <= pass_thresh || drop_thresh > limit_reg.cn78xx.limit)
+		drop_thresh = limit_reg.cn78xx.limit;
+
+	if (bp_thresh < 256 || bp_thresh > limit_reg.cn78xx.limit)
+		bp_thresh = limit_reg.cn78xx.limit >> 1;
+
+	shift_thresh = (bp_thresh > drop_thresh) ? bp_thresh : drop_thresh;
+
+	/* Calculate shift so that the largest threshold fits in 8 bits */
+	for (shift = 0; shift < (1 << 6); shift++) {
+		if (((shift_thresh >> shift) & ~0xffull) == 0)
+			break;
+	}
+
+	aura_level.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura));
+	aura_level.s.pass = pass_thresh >> shift;
+	aura_level.s.drop = drop_thresh >> shift;
+	aura_level.s.bp = bp_thresh >> shift;
+	aura_level.s.shift = shift;
+	aura_level.s.red_ena = ena_red;
+	aura_level.s.bp_ena = ena_bp;
+	cvmx_write_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura), aura_level.u64);
+}
+
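+/*
+ * Example usage (sketch): enable RED and backpressure on an aura. The
+ * thresholds are placeholders; as the code above shows, out-of-range
+ * values are clamped against the aura's count limit and all levels are
+ * scaled by a common shift so the largest fits in 8 bits.
+ *
+ *	cvmx_fpa3_setup_aura_qos(aura, true, 1024, 2048, true, 4096);
+ */
+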
+cvmx_fpa3_gaura_t cvmx_fpa3_reserve_aura(int node, int desired_aura_num);
+int cvmx_fpa3_release_aura(cvmx_fpa3_gaura_t aura);
+cvmx_fpa3_pool_t cvmx_fpa3_reserve_pool(int node, int desired_pool_num);
+int cvmx_fpa3_release_pool(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_is_aura_available(int node, int aura_num);
+int cvmx_fpa3_is_pool_available(int node, int pool_num);
+
+cvmx_fpa3_pool_t cvmx_fpa3_setup_fill_pool(int node, int desired_pool, const char *name,
+					   unsigned int block_size, unsigned int num_blocks,
+					   void *buffer);
+
+/**
+ * Function to attach an aura to an existing pool
+ *
+ * @param node - configure fpa on this node
+ * @param pool - configured pool to attach aura to
+ * @param desired_aura - pointer to aura to use, set to -1 to allocate
+ * @param name - name to register
+ * @param block_size - size of buffers to use
+ * @param num_blocks - number of blocks to allocate
+ *
+ * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on failure
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_set_aura_for_pool(cvmx_fpa3_pool_t pool, int desired_aura,
+					      const char *name, unsigned int block_size,
+					      unsigned int num_blocks);
+
+/**
+ * Function to setup and initialize a pool.
+ *
+ * @param node - configure fpa on this node
+ * @param desired_aura - aura to use, -1 for dynamic allocation
+ * @param name - name to register
+ * @param block_size - size of buffers in pool
+ * @param num_blocks - max number of buffers allowed
+ */
+cvmx_fpa3_gaura_t cvmx_fpa3_setup_aura_and_pool(int node, int desired_aura, const char *name,
+						void *buffer, unsigned int block_size,
+						unsigned int num_blocks);
+
+int cvmx_fpa3_shutdown_aura_and_pool(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_aura(cvmx_fpa3_gaura_t aura);
+int cvmx_fpa3_shutdown_pool(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_pool_name(cvmx_fpa3_pool_t pool);
+int cvmx_fpa3_get_pool_buf_size(cvmx_fpa3_pool_t pool);
+const char *cvmx_fpa3_get_aura_name(cvmx_fpa3_gaura_t aura);
+
+/* FIXME: Need a different macro for stage2 of u-boot */
+
+static inline void cvmx_fpa3_stage2_init(int aura, int pool, u64 stack_paddr, int stacklen,
+					 int buffer_sz, int buf_cnt)
+{
+	cvmx_fpa_poolx_cfg_t pool_cfg;
+
+	/* Configure pool stack */
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), stack_paddr);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), stack_paddr + stacklen);
+
+	/* Configure pool with buffer size */
+	pool_cfg.u64 = 0;
+	pool_cfg.cn78xx.nat_align = 1;
+	pool_cfg.cn78xx.buf_size = buffer_sz >> 7;
+	pool_cfg.cn78xx.l_type = 0x2;
+	pool_cfg.cn78xx.ena = 0;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+	/* The write above with ENA clear reset the pool; now enable it */
+	pool_cfg.cn78xx.ena = 1;
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
+
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CFG(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CNT_ADD(aura), buf_cnt);
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), (u64)pool);
+}
+
+static inline void cvmx_fpa3_stage2_disable(int aura, int pool)
+{
+	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), 0);
+	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), 0);
+}
+
+#endif /* __CVMX_FPA3_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
new file mode 100644
index 000000000000..28c32ddbe17a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CVMX_GLOBAL_RESOURCES_T_
+#define _CVMX_GLOBAL_RESOURCES_T_
+
+#define CVMX_GLOBAL_RESOURCES_DATA_NAME "cvmx-global-resources"
+
+/* In the macros below, the abbreviation GR stands for global resources. */
+#define CVMX_GR_TAG_INVALID                                                                        \
+	cvmx_get_gr_tag('i', 'n', 'v', 'a', 'l', 'i', 'd', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+/* Tag for the PKO queue table range. */
+#define CVMX_GR_TAG_PKO_QUEUES                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'q', 'u', 'e', 'u', 's', '.', '.', \
+			'.')
+/* Tag for a PKO internal ports range. */
+#define CVMX_GR_TAG_PKO_IPORTS                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'i', 'p', 'o', 'r', 't', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FPA                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_FAU                                                                            \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'a', 'u', '.', '.', '.', '.', '.', '.', '.', '.', \
+			'.')
+#define CVMX_GR_TAG_SSO_GRP(n)                                                                     \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 's', 'o', '_', '0', (n) + '0', '.', '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_TIM(n)                                                                         \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 't', 'i', 'm', '_', (n) + '0', '.', '.', '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTERS(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'u', 's', 't', 'e', 'r', '_', (x + '0'),     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_CLUSTER_GRP(x)                                                                 \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'g', 'r', 'p', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_STYLE(x)                                                                       \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 't', 'y', 'l', 'e', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_QPG_ENTRY(x)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'q', 'p', 'g', 'e', 't', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_BPID(x)                                                                        \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'b', 'p', 'i', 'd', 's', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_MTAG_IDX(x)                                                                    \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'm', 't', 'a', 'g', 'x', '_', (x + '0'), '.', '.',     \
+			'.', '.', '.')
+#define CVMX_GR_TAG_PCAM(x, y, z)                                                                  \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'c', 'a', 'm', '_', (x + '0'), (y + '0'),         \
+			(z + '0'), '.', '.', '.', '.')
+
+#define CVMX_GR_TAG_CIU3_IDT(_n)                                                                   \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 'i', 'd',  \
+			't', '.', '.')
+
+/* Allocation of the 512 SW INTSNs (in the 12-bit SW INTSN space) */
+#define CVMX_GR_TAG_CIU3_SWINTSN(_n)                                                               \
+	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 's', 'w',  \
+			'i', 's', 'n')
+
+#define TAG_INIT_PART(A, B, C, D, E, F, G, H)                                                      \
+	((((u64)(A) & 0xff) << 56) | (((u64)(B) & 0xff) << 48) | (((u64)(C) & 0xff) << 40) |             \
+	 (((u64)(D) & 0xff) << 32) | (((u64)(E) & 0xff) << 24) | (((u64)(F) & 0xff) << 16) |             \
+	 (((u64)(G) & 0xff) << 8) | (((u64)(H) & 0xff)))
+
+struct global_resource_tag {
+	u64 lo;
+	u64 hi;
+};
+
+enum cvmx_resource_err { CVMX_RESOURCE_ALLOC_FAILED = -1, CVMX_RESOURCE_ALREADY_RESERVED = -2 };
+
+/*
+ * @INTERNAL
+ * Creates a tag from the specified characters.
+ */
+static inline struct global_resource_tag cvmx_get_gr_tag(char a, char b, char c, char d, char e,
+							 char f, char g, char h, char i, char j,
+							 char k, char l, char m, char n, char o,
+							 char p)
+{
+	struct global_resource_tag tag;
+
+	tag.lo = TAG_INIT_PART(a, b, c, d, e, f, g, h);
+	tag.hi = TAG_INIT_PART(i, j, k, l, m, n, o, p);
+	return tag;
+}
+
+static inline int cvmx_gr_same_tag(struct global_resource_tag gr1, struct global_resource_tag gr2)
+{
+	return (gr1.hi == gr2.hi) && (gr1.lo == gr2.lo);
+}
+
+/*
+ * @INTERNAL
+ * Creates a global resource range that can hold the specified number of
+ * elements.
+ * @param tag is the tag of the range. The tag is created using the function
+ * cvmx_get_gr_tag().
+ * @param nelements is the number of elements to be held in the resource range.
+ */
+int cvmx_create_global_resource_range(struct global_resource_tag tag, int nelements);
+
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag.
+ * It is assumed that prior to calling this the global resource range has
+ * already been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param alignment specifies the required alignment of the returned base
+ * number.
+ * @return returns the base of the allocated range. A return value of -1
+ * indicates failure.
+ */
+int cvmx_allocate_global_resource_range(struct global_resource_tag tag, u64 owner, int nelements,
+					int alignment);
+
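+/*
+ * Example usage (sketch): create a 64-element range for a tag and
+ * allocate 4 elements aligned to a multiple of 4. The owner value is an
+ * arbitrary application-chosen identifier.
+ *
+ *	struct global_resource_tag tag = CVMX_GR_TAG_FPA;
+ *
+ *	if (cvmx_create_global_resource_range(tag, 64) == 0) {
+ *		int base = cvmx_allocate_global_resource_range(tag, owner,
+ *							       4, 4);
+ *		if (base < 0)
+ *			printf("allocation failed\n");
+ *	}
+ */
+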
+/*
+ * @INTERNAL
+ * Allocate nelements in the global resource range with the specified tag.
+ * The elements allocated need not be contiguous. It is assumed that prior to
+ * calling this the global resource range has already
+ * been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of the allocated
+ * elements.
+ * @param allocated_elements returns the indexes of the allocated entries.
+ * @return returns 0 on success and -1 on failure.
+ */
+int cvmx_resource_alloc_many(struct global_resource_tag tag, u64 owner, int nelements,
+			     int allocated_elements[]);
+int cvmx_resource_alloc_reverse(struct global_resource_tag, u64 owner);
+/*
+ * @INTERNAL
+ * Reserve nelements starting from base in the global resource range with the
+ * specified tag.
+ * It is assumed that prior to calling this the global resource range has
+ * already been created using cvmx_create_global_resource_range().
+ * @param tag is the tag of the global resource range.
+ * @param nelements is the number of elements to be allocated.
+ * @param owner is a 64 bit number that identifies the owner of this range.
+ * @param base specifies the base start of nelements.
+ * @return returns the base of the allocated range. -1 return value indicates
+ * failure.
+ */
+int cvmx_reserve_global_resource_range(struct global_resource_tag tag, u64 owner, int base,
+				       int nelements);
+/*
+ * @INTERNAL
+ * Free nelements starting at base in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param base is the base number
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_base(struct global_resource_tag tag, int base,
+					      int nelements);
+
+/*
+ * @INTERNAL
+ * Free nelements with the bases specified in bases[] with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param bases is an array containing the bases to be freed.
+ * @param nelements is the number of elements that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_multiple(struct global_resource_tag tag, int bases[],
+					     int nelements);
+/*
+ * @INTERNAL
+ * Free elements from the specified owner in the global resource range with the
+ * specified tag.
+ * @param tag is the tag of the global resource range.
+ * @param owner is the owner of resources that are to be freed.
+ * @return returns 0 if successful and -1 on failure.
+ */
+int cvmx_free_global_resource_range_with_owner(struct global_resource_tag tag, int owner);
+
+/*
+ * @INTERNAL
+ * Frees all the global resources that have been created.
+ * For use only by the bootloader, when it shuts down and boots the
+ * application or kernel.
+ */
+int free_global_resources(void);
+
+u64 cvmx_get_global_resource_owner(struct global_resource_tag tag, int base);
+/*
+ * @INTERNAL
+ * Shows the global resource range with the specified tag. Use mainly for debug.
+ */
+void cvmx_show_global_resource_range(struct global_resource_tag tag);
+
+/*
+ * @INTERNAL
+ * Shows all the global resources. Used mainly for debug.
+ */
+void cvmx_global_resources_show(void);
+
+u64 cvmx_allocate_app_id(void);
+u64 cvmx_get_app_id(void);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmx.h b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
new file mode 100644
index 000000000000..2df7da102a0f
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the GMX hardware.
+ */
+
+#ifndef __CVMX_GMX_H__
+#define __CVMX_GMX_H__
+
+/* CSR typedefs have been moved to cvmx-gmx-defs.h */
+
+int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask);
+int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
new file mode 100644
index 000000000000..59772190aa3b
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ */
+
+/**
+ * @file
+ *
+ * Interface to the hardware Fetch and Add Unit.
+ *
+ */
+
+#ifndef __CVMX_HWFAU_H__
+#define __CVMX_HWFAU_H__
+
+typedef int cvmx_fau_reg64_t;
+typedef int cvmx_fau_reg32_t;
+typedef int cvmx_fau_reg16_t;
+typedef int cvmx_fau_reg8_t;
+
+#define CVMX_FAU_REG_ANY -1
+
+/*
+ * Octeon Fetch and Add Unit (FAU)
+ */
+
+#define CVMX_FAU_LOAD_IO_ADDRESS cvmx_build_io_address(0x1e, 0)
+#define CVMX_FAU_BITS_SCRADDR	 63, 56
+#define CVMX_FAU_BITS_LEN	 55, 48
+#define CVMX_FAU_BITS_INEVAL	 35, 14
+#define CVMX_FAU_BITS_TAGWAIT	 13, 13
+#define CVMX_FAU_BITS_NOADD	 13, 13
+#define CVMX_FAU_BITS_SIZE	 12, 11
+#define CVMX_FAU_BITS_REGISTER	 10, 0
+
+#define CVMX_FAU_MAX_REGISTERS_8 (2048)
+
+typedef enum {
+	CVMX_FAU_OP_SIZE_8 = 0,
+	CVMX_FAU_OP_SIZE_16 = 1,
+	CVMX_FAU_OP_SIZE_32 = 2,
+	CVMX_FAU_OP_SIZE_64 = 3
+} cvmx_fau_op_size_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s64 value : 63;
+} cvmx_fau_tagwait64_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s32 value : 31;
+} cvmx_fau_tagwait32_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	s16 value : 15;
+} cvmx_fau_tagwait16_t;
+
+/**
+ * Tagwait return definition. If a timeout occurs, the error
+ * bit will be set. Otherwise the value of the register before
+ * the update will be returned.
+ */
+typedef struct {
+	u64 error : 1;
+	int8_t value : 7;
+} cvmx_fau_tagwait8_t;
+
+/**
+ * Asynchronous tagwait return definition. If a timeout occurs,
+ * the error bit will be set. Otherwise the value of the
+ * register before the update will be returned.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 invalid : 1;
+		u64 data : 63; /* unpredictable if invalid is set */
+	} s;
+} cvmx_fau_async_tagwait_result_t;
+
+#define SWIZZLE_8  0
+#define SWIZZLE_16 0
+#define SWIZZLE_32 0
+
+/**
+ * @INTERNAL
+ * Builds a store I/O address for writing to the FAU
+ *
+ * @param noadd  0 = Store value is atomically added to the current value
+ *               1 = Store value is atomically written over the current value
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ *               - Step by 4 for 32 bit access.
+ *               - Step by 8 for 64 bit access.
+ * @return Address to store for atomic update
+ */
+static inline u64 __cvmx_hwfau_store_address(u64 noadd, u64 reg)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_NOADD, noadd) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * @INTERNAL
+ * Builds a I/O address for accessing the FAU
+ *
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @return Address to read from for atomic update
+ */
+static inline u64 __cvmx_hwfau_atomic_address(u64 tagwait, u64 reg, s64 value)
+{
+	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s64 cvmx_hwfau_fetch_and_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	return cvmx_read64_int64(__cvmx_hwfau_atomic_address(0, reg, value));
+}
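+
+/*
+ * Example (an illustrative sketch only; register 0 is an arbitrary choice
+ * and must have been set aside for this counter by the application):
+ *
+ *	cvmx_fau_reg64_t counter = 0;
+ *	s64 old = cvmx_hwfau_fetch_and_add64(counter, 1);
+ *
+ * "old" holds the counter value before the increment.
+ */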
+
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Value of the register before the update
+ */
+static inline s32 cvmx_hwfau_fetch_and_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	return cvmx_read64_int32(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline s16 cvmx_hwfau_fetch_and_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	return cvmx_read64_int16(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Value of the register before the update
+ */
+static inline int8_t cvmx_hwfau_fetch_and_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	return cvmx_read64_int8(__cvmx_hwfau_atomic_address(0, reg, value));
+}
+
+/**
+ * Perform an atomic 64 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 8 for 64 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait64_t cvmx_hwfau_tagwait_fetch_and_add64(cvmx_fau_reg64_t reg,
+								      s64 value)
+{
+	union {
+		u64 i64;
+		cvmx_fau_tagwait64_t t;
+	} result;
+	result.i64 = cvmx_read64_int64(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
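+
+/*
+ * Example (an illustrative sketch only; register 8 is hypothetical):
+ *
+ *	cvmx_fau_tagwait64_t r = cvmx_hwfau_tagwait_fetch_and_add64(8, 1);
+ *
+ * If r.error is set, the tag switch timed out and r.value is unpredictable;
+ * otherwise r.value holds the register contents before the add.
+ */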
+
+/**
+ * Perform an atomic 32 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 4 for 32 bit access.
+ * @param value  Signed value to add.
+ *               Note: Only the low 22 bits are available.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait32_t cvmx_hwfau_tagwait_fetch_and_add32(cvmx_fau_reg32_t reg,
+								      s32 value)
+{
+	union {
+		u64 i32;
+		cvmx_fau_tagwait32_t t;
+	} result;
+	reg ^= SWIZZLE_32;
+	result.i32 = cvmx_read64_int32(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 16 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ *               - Step by 2 for 16 bit access.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait16_t cvmx_hwfau_tagwait_fetch_and_add16(cvmx_fau_reg16_t reg,
+								      s16 value)
+{
+	union {
+		u64 i16;
+		cvmx_fau_tagwait16_t t;
+	} result;
+	reg ^= SWIZZLE_16;
+	result.i16 = cvmx_read64_int16(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * Perform an atomic 8 bit add after the current tag switch
+ * completes
+ *
+ * @param reg    FAU atomic register to access. 0 <= reg < 2048.
+ * @param value  Signed value to add.
+ * @return If a timeout occurs, the error bit will be set. Otherwise
+ *         the value of the register before the update will be
+ *         returned
+ */
+static inline cvmx_fau_tagwait8_t cvmx_hwfau_tagwait_fetch_and_add8(cvmx_fau_reg8_t reg,
+								    int8_t value)
+{
+	union {
+		u64 i8;
+		cvmx_fau_tagwait8_t t;
+	} result;
+	reg ^= SWIZZLE_8;
+	result.i8 = cvmx_read64_int8(__cvmx_hwfau_atomic_address(1, reg, value));
+	return result.t;
+}
+
+/**
+ * @INTERNAL
+ * Builds I/O data for async operations
+ *
+ * @param scraddr Scratch pad byte address to write to.  Must be 8 byte aligned
+ * @param value   Signed value to add.
+ *                Note: When performing 32 and 64 bit access, only the low
+ *                22 bits are available.
+ * @param tagwait Should the atomic add wait for the current tag switch
+ *                operation to complete.
+ *                - 0 = Don't wait
+ *                - 1 = Wait for tag switch to complete
+ * @param size    The size of the operation:
+ *                - CVMX_FAU_OP_SIZE_8  (0) = 8 bits
+ *                - CVMX_FAU_OP_SIZE_16 (1) = 16 bits
+ *                - CVMX_FAU_OP_SIZE_32 (2) = 32 bits
+ *                - CVMX_FAU_OP_SIZE_64 (3) = 64 bits
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ *                - Step by 4 for 32 bit access.
+ *                - Step by 8 for 64 bit access.
+ * @return Data to write using cvmx_send_single
+ */
+static inline u64 __cvmx_fau_iobdma_data(u64 scraddr, s64 value, u64 tagwait,
+					 cvmx_fau_op_size_t size, u64 reg)
+{
+	return (CVMX_FAU_LOAD_IO_ADDRESS | cvmx_build_bits(CVMX_FAU_BITS_SCRADDR, scraddr >> 3) |
+		cvmx_build_bits(CVMX_FAU_BITS_LEN, 1) |
+		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
+		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
+		cvmx_build_bits(CVMX_FAU_BITS_SIZE, size) |
+		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
+}
+
+/**
+ * Perform an async atomic 64 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_64, reg));
+}
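+
+/*
+ * Example (an illustrative sketch only; assumes cvmx_scratch_read64() and
+ * the CVMX_SYNCIOBDMA barrier are available, as in the SDK's cvmx-scratch.h
+ * and cvmx-asm.h, and that scratch offset 0 and FAU register 0 are free):
+ *
+ *	cvmx_hwfau_async_fetch_and_add64(0, 0, 1);
+ *	CVMX_SYNCIOBDMA;
+ *	s64 old = cvmx_scratch_read64(0);
+ */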
+
+/**
+ * Perform an async atomic 32 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg, s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg, s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add. The old value is
+ * placed in the scratch memory at byte address scraddr.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg, int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_8, reg));
+}
+
+/**
+ * Perform an async atomic 64 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg,
+							    s64 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_64, reg));
+}
+
+/**
+ * Perform an async atomic 32 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ *                Note: Only the low 22 bits are available.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg,
+							    s32 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_32, reg));
+}
+
+/**
+ * Perform an async atomic 16 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg,
+							    s16 value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_16, reg));
+}
+
+/**
+ * Perform an async atomic 8 bit add after the current tag
+ * switch completes.
+ *
+ * @param scraddr Scratch memory byte address to put response in.
+ *                Must be 8 byte aligned.
+ *                If a timeout occurs, the error bit (63) will be set. Otherwise
+ *                the value of the register before the update will be
+ *                returned
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ * @return Placed in the scratch pad register
+ */
+static inline void cvmx_hwfau_async_tagwait_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg,
+							   int8_t value)
+{
+	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_8, reg));
+}
+
+/**
+ * Perform an atomic 64 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit add
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to add.
+ */
+static inline void cvmx_hwfau_atomic_add8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(0, reg), value);
+}
+
+/**
+ * Perform an atomic 64 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 8 for 64 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write64(cvmx_fau_reg64_t reg, s64 value)
+{
+	cvmx_write64_int64(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 32 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 4 for 32 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write32(cvmx_fau_reg32_t reg, s32 value)
+{
+	reg ^= SWIZZLE_32;
+	cvmx_write64_int32(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 16 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ *                - Step by 2 for 16 bit access.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write16(cvmx_fau_reg16_t reg, s16 value)
+{
+	reg ^= SWIZZLE_16;
+	cvmx_write64_int16(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/**
+ * Perform an atomic 8 bit write
+ *
+ * @param reg     FAU atomic register to access. 0 <= reg < 2048.
+ * @param value   Signed value to write.
+ */
+static inline void cvmx_hwfau_atomic_write8(cvmx_fau_reg8_t reg, int8_t value)
+{
+	reg ^= SWIZZLE_8;
+	cvmx_write64_int8(__cvmx_hwfau_store_address(1, reg), value);
+}
+
+/** Allocates a 64-bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau64_alloc(int reserve);
+
+/** Allocates a 32-bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau32_alloc(int reserve);
+
+/** Allocates a 16-bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau16_alloc(int reserve);
+
+/** Allocates an 8-bit FAU register.
+ *  @return value is the base address of the allocated FAU register
+ */
+int cvmx_fau8_alloc(int reserve);
+
+/** Frees the specified FAU register.
+ *  @param address Base address of register to release.
+ *  @return 0 on success; -1 on failure
+ */
+int cvmx_fau_free(int address);
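+
+/*
+ * Example (an illustrative sketch only, assuming a negative return value
+ * from the allocators indicates failure):
+ *
+ *	int reg = cvmx_fau64_alloc(0);
+ *
+ *	if (reg >= 0) {
+ *		cvmx_hwfau_atomic_write64(reg, 0);
+ *		... use the register, e.g. cvmx_hwfau_fetch_and_add64(reg, 1) ...
+ *		cvmx_fau_free(reg);
+ *	}
+ */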
+
+/** Displays the FAU register array
+ */
+void cvmx_fau_show(void);
+
+#endif /* __CVMX_HWFAU_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
new file mode 100644
index 000000000000..459c19bbc0f1
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Output unit.
+ *
+ * Starting with SDK 1.7.0, the PKO output functions now support
+ * two types of locking. CVMX_PKO_LOCK_ATOMIC_TAG continues to
+ * function similarly to previous SDKs by using POW atomic tags
+ * to preserve ordering and exclusivity. As a new option, you
+ * can now pass CVMX_PKO_LOCK_CMD_QUEUE which uses a ll/sc
+ * memory based locking instead. This locking has the advantage
+ * of not affecting the tag state but doesn't preserve packet
+ * ordering. CVMX_PKO_LOCK_CMD_QUEUE is appropriate in most
+ * generic code, while CVMX_PKO_LOCK_NONE should be used
+ * with hand tuned fast path code.
+ *
+ * Some other SDK differences are visible with command
+ * queuing:
+ * - PKO indexes are no longer stored in the FAU. A large
+ *   percentage of the FAU register block used to be tied up
+ *   maintaining PKO queue pointers. These are now stored in a
+ *   global named block.
+ * - The PKO <b>use_locking</b> parameter can now have a global
+ *   effect. Since all applications use the same named block,
+ *   queue locking correctly applies across all operating
+ *   systems when using CVMX_PKO_LOCK_CMD_QUEUE.
+ * - PKO 3 word commands are now supported. Use
+ *   cvmx_pko_send_packet_finish3().
+ */
+
+#ifndef __CVMX_HWPKO_H__
+#define __CVMX_HWPKO_H__
+
+#include "cvmx-hwfau.h"
+#include "cvmx-fpa.h"
+#include "cvmx-pow.h"
+#include "cvmx-cmd-queue.h"
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+
+/*
+ * Adjust the command buffer size by 1 word so that in the case of using only
+ * two word PKO commands no command words straddle buffers. The useful values
+ * for this are 0 and 1.
+ */
+#define CVMX_PKO_COMMAND_BUFFER_SIZE_ADJUST (1)
+
+#define CVMX_PKO_MAX_OUTPUT_QUEUES_STATIC 256
+#define CVMX_PKO_MAX_OUTPUT_QUEUES                                                                 \
+	((OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) ? 256 : 128)
+#define CVMX_PKO_NUM_OUTPUT_PORTS                                                                  \
+	((OCTEON_IS_MODEL(OCTEON_CN63XX)) ? 44 : (OCTEON_IS_MODEL(OCTEON_CN66XX) ? 48 : 40))
+#define CVMX_PKO_MEM_QUEUE_PTRS_ILLEGAL_PID 63
+#define CVMX_PKO_QUEUE_STATIC_PRIORITY	    9
+#define CVMX_PKO_ILLEGAL_QUEUE		    0xFFFF
+#define CVMX_PKO_MAX_QUEUE_DEPTH	    0
+
+typedef enum {
+	CVMX_PKO_SUCCESS,
+	CVMX_PKO_INVALID_PORT,
+	CVMX_PKO_INVALID_QUEUE,
+	CVMX_PKO_INVALID_PRIORITY,
+	CVMX_PKO_NO_MEMORY,
+	CVMX_PKO_PORT_ALREADY_SETUP,
+	CVMX_PKO_CMD_QUEUE_INIT_ERROR
+} cvmx_pko_return_value_t;
+
+/**
+ * This enumeration represents the different locking modes supported by PKO.
+ */
+typedef enum {
+	CVMX_PKO_LOCK_NONE = 0,
+	CVMX_PKO_LOCK_ATOMIC_TAG = 1,
+	CVMX_PKO_LOCK_CMD_QUEUE = 2,
+} cvmx_pko_lock_t;
+
+typedef struct cvmx_pko_port_status {
+	u32 packets;
+	u64 octets;
+	u64 doorbell;
+} cvmx_pko_port_status_t;
+
+/**
+ * This structure defines the address to use on a packet enqueue
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_mips_space_t mem_space : 2;
+		u64 reserved : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved2 : 4;
+		u64 reserved3 : 15;
+		u64 port : 9;
+		u64 queue : 9;
+		u64 reserved4 : 3;
+	} s;
+} cvmx_pko_doorbell_address_t;
+
+/**
+ * Structure of the first packet output command word.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		cvmx_fau_op_size_t size1 : 2;
+		cvmx_fau_op_size_t size0 : 2;
+		u64 subone1 : 1;
+		u64 reg1 : 11;
+		u64 subone0 : 1;
+		u64 reg0 : 11;
+		u64 le : 1;
+		u64 n2 : 1;
+		u64 wqp : 1;
+		u64 rsp : 1;
+		u64 gather : 1;
+		u64 ipoffp1 : 7;
+		u64 ignore_i : 1;
+		u64 dontfree : 1;
+		u64 segs : 6;
+		u64 total_bytes : 16;
+	} s;
+} cvmx_pko_command_word0_t;
+
+/**
+ * Call before any other calls to initialize the packet
+ * output system.
+ */
+
+void cvmx_pko_hw_init(u8 pool, unsigned int bufsize);
+
+/**
+ * Enables the packet output hardware. It must already be
+ * configured.
+ */
+void cvmx_pko_enable(void);
+
+/**
+ * Disables the packet output. Does not affect any configuration.
+ */
+void cvmx_pko_disable(void);
+
+/**
+ * Shutdown and free resources required by packet output.
+ */
+
+void cvmx_pko_shutdown(void);
+
+/**
+ * Configure an output port and the associated queues for use.
+ *
+ * @param port       Port to configure.
+ * @param base_queue First queue number to associate with this port.
+ * @param num_queues Number of queues to associate with this port
+ * @param priority   Array of priority levels for each queue. Values are
+ *                   allowed to be 1-8. A value of 8 gets 8 times the traffic
+ *                   of a value of 1. There must be num_queues elements in the
+ *                   array.
+ */
+cvmx_pko_return_value_t cvmx_pko_config_port(int port, int base_queue, int num_queues,
+					     const u8 priority[]);
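+
+/*
+ * Example (an illustrative sketch only; the port number and priorities are
+ * hypothetical):
+ *
+ *	static const u8 prio[2] = { 8, 1 };
+ *
+ *	if (cvmx_pko_config_port(0, cvmx_pko_get_base_queue(0), 2, prio) !=
+ *	    CVMX_PKO_SUCCESS)
+ *		... handle the error ...
+ */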
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_doorbell_pkoid() directly if the PKO port identifier is
+ * known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell(u64 ipd_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+	u64 pko_port;
+
+	pko_port = ipd_port;
+	if (octeon_has_feature(OCTEON_FEATURE_PKND))
+		pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Prepare to send a packet.  This may initiate a tag switch to
+ * get exclusive access to the output queue structure, and
+ * performs other prep work for the packet send operation.
+ *
+ * cvmx_pko_send_packet_finish() MUST be called after this function is called,
+ * and must be called with the same port/queue/use_locking arguments.
+ *
+ * The use_locking parameter allows the caller to use three
+ * possible locking modes.
+ * - CVMX_PKO_LOCK_NONE
+ *      - PKO doesn't do any locking. It is the responsibility
+ *          of the application to make sure that no other core
+ *          is accessing the same queue at the same time.
+ * - CVMX_PKO_LOCK_ATOMIC_TAG
+ *      - PKO performs an atomic tagswitch to ensure exclusive
+ *          access to the output queue. This will maintain
+ *          packet ordering on output.
+ * - CVMX_PKO_LOCK_CMD_QUEUE
+ *      - PKO uses the common command queue locks to ensure
+ *          exclusive access to the output queue. This is a
+ *          memory based ll/sc. This is the most portable
+ *          locking mechanism.
+ *
+ * NOTE: If atomic locking is used, the POW entry CANNOT be
+ * descheduled, as it does not contain a valid WQE pointer.
+ *
+ * @param port   Port to send it on, this can be either IPD port or PKO
+ *		 port.
+ * @param queue  Queue to use
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ */
+static inline void cvmx_pko_send_packet_prepare(u64 port __attribute__((unused)), u64 queue,
+						cvmx_pko_lock_t use_locking)
+{
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG) {
+		/*
+		 * Must do a full switch here to handle all cases.  We use a
+		 * fake WQE pointer, as the POW does not access this memory.
+		 * The WQE pointer and group are only used if this work is
+		 * descheduled, which is not supported by the
+		 * cvmx_pko_send_packet_prepare/cvmx_pko_send_packet_finish
+		 * combination. Note that this is a special case in which these
+		 * fake values can be used - this is not a general technique.
+		 */
+		u32 tag = CVMX_TAG_SW_BITS_INTERNAL << CVMX_TAG_SW_SHIFT |
+			  CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT |
+			  (CVMX_TAG_SUBGROUP_MASK & queue);
+		cvmx_pow_tag_sw_full((cvmx_wqe_t *)cvmx_phys_to_ptr(0x80), tag,
+				     CVMX_POW_TAG_TYPE_ATOMIC, 0);
+	}
+}
+
+#define cvmx_pko_send_packet_prepare_pkoid cvmx_pko_send_packet_prepare
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			      cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
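+
+/*
+ * Example transmit flow (an illustrative sketch only; building a complete
+ * PKO command and buffer pointer is application specific, and "port",
+ * "queue", "data", "len" and "packet_pool" below are hypothetical):
+ *
+ *	cvmx_pko_command_word0_t cmd;
+ *	cvmx_buf_ptr_t pkt;
+ *
+ *	cmd.u64 = 0;
+ *	cmd.s.segs = 1;
+ *	cmd.s.total_bytes = len;
+ *	pkt.u64 = 0;
+ *	pkt.s.addr = cvmx_ptr_to_phys(data);
+ *	pkt.s.size = len;
+ *	pkt.s.pool = packet_pool;
+ *
+ *	cvmx_pko_send_packet_prepare(port, queue, CVMX_PKO_LOCK_CMD_QUEUE);
+ *	if (cvmx_hwpko_send_packet_finish(port, queue, cmd, pkt,
+ *					  CVMX_PKO_LOCK_CMD_QUEUE) !=
+ *	    CVMX_PKO_SUCCESS)
+ *		... drop or retry ...
+ */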
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish().
+ *
+ * WARNING: This function may have to look up the proper PKO port in
+ * the IPD port to PKO port map, and is thus slower than calling
+ * cvmx_pko_send_packet_finish3_pkoid() directly if the PKO port
+ * identifier is known.
+ *
+ * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+			       cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell(ipd_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Get the first pko_port for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_base_pko_port(int interface, int index);
+
+/**
+ * Get the number of pko_ports for the (interface, index)
+ *
+ * @param interface
+ * @param index
+ */
+int cvmx_pko_get_num_pko_ports(int interface, int index);
+
+/**
+ * For a given port number, return the base pko output queue
+ * for the port.
+ *
+ * @param port   IPD port number
+ * @return Base output queue
+ */
+int cvmx_pko_get_base_queue(int port);
+
+/**
+ * For a given port number, return the number of pko output queues.
+ *
+ * @param port   IPD port number
+ * @return Number of output queues
+ */
+int cvmx_pko_get_num_queues(int port);
+
+/**
+ * Sets the internal FPA pool data structure for the PKO command queue.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ *
+ * @note the caller is responsible for setting up the pool with
+ * an appropriate buffer size and sufficient buffer count.
+ */
+void cvmx_pko_set_cmd_que_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param ipd_port Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ *
+ * Note:
+ *     - Only the doorbell for the base queue of the ipd_port is
+ *       collected.
+ *     - Retrieving the stats involves writing the index through
+ *       CVMX_PKO_REG_READ_IDX and reading the stat CSRs, in that
+ *       order. It is not MP-safe and the caller should guarantee
+ *       atomicity.
+ */
+void cvmx_pko_get_port_status(u64 ipd_port, u64 clear, cvmx_pko_port_status_t *status);
+
+/**
+ * Rate limit a PKO port to a max packets/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port      Port to rate limit
+ * @param packets_s Maximum packet/sec
+ * @param burst     Maximum number of packets to burst in a row before rate
+ *                  limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_packets(int port, int packets_s, int burst);
+
+/**
+ * Rate limit a PKO port to a max bits/sec. This function is only
+ * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
+ *
+ * @param port   Port to rate limit
+ * @param bits_s PKO rate limit in bits/sec
+ * @param burst  Maximum number of bits to burst before rate
+ *               limiting cuts in.
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pko_rate_limit_bits(int port, u64 bits_s, int burst);
+
+/**
+ * @INTERNAL
+ *
+ * Retrieve the PKO pipe number for a port
+ *
+ * @param interface
+ * @param index
+ *
+ * @return negative on error.
+ *
+ * This applies only to the non-loopback interfaces.
+ *
+ */
+int __cvmx_pko_get_pipe(int interface, int index);
+
+/**
+ * For a given PKO port number, return the base output queue
+ * for the port.
+ *
+ * @param pko_port   PKO port number
+ * @return           Base output queue
+ */
+int cvmx_pko_get_base_queue_pkoid(int pko_port);
+
+/**
+ * For a given PKO port number, return the number of output queues
+ * for the port.
+ *
+ * @param pko_port	PKO port number
+ * @return		the number of output queues
+ */
+int cvmx_pko_get_num_queues_pkoid(int pko_port);
+
+/**
+ * Ring the packet output doorbell. This tells the packet
+ * output hardware that "len" command words have been added
+ * to its pending list.  This command includes the required
+ * CVMX_SYNCWS before the doorbell ring.
+ *
+ * @param pko_port   Port the packet is for
+ * @param queue  Queue the packet is for
+ * @param len    Length of the command in 64 bit words
+ */
+static inline void cvmx_pko_doorbell_pkoid(u64 pko_port, u64 queue, u64 len)
+{
+	cvmx_pko_doorbell_address_t ptr;
+
+	ptr.u64 = 0;
+	ptr.s.mem_space = CVMX_IO_SEG;
+	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
+	ptr.s.is_io = 1;
+	ptr.s.port = pko_port;
+	ptr.s.queue = queue;
+	/* Need to make sure output queue data is in DRAM before doorbell write */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, len);
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   Port to send it on
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish_pkoid(int pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				    cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 2);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/**
+ * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
+ * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
+ * cvmx_pko_send_packet_finish_pkoid().
+ *
+ * @param pko_port   The PKO port the packet is for
+ * @param queue  Queue to use
+ * @param pko_command
+ *               PKO HW command word
+ * @param packet Packet to send
+ * @param addr   Physical address of a work queue entry or physical address to zero on completion.
+ * @param use_locking
+ *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
+ *
+ * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
+ */
+static inline cvmx_pko_return_value_t
+cvmx_hwpko_send_packet_finish3_pkoid(u64 pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
+				     cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
+{
+	cvmx_cmd_queue_result_t result;
+
+	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
+		cvmx_pow_tag_sw_wait();
+
+	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
+				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
+				       packet.u64, addr);
+	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
+		cvmx_pko_doorbell_pkoid(pko_port, queue, 3);
+		return CVMX_PKO_SUCCESS;
+	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
+		return CVMX_PKO_NO_MEMORY;
+	} else {
+		return CVMX_PKO_INVALID_QUEUE;
+	}
+}
+
+/*
+ * Obtain the number of PKO commands pending in a queue
+ *
+ * @param queue is the queue identifier to be queried
+ * @return the number of commands pending transmission or -1 on error
+ */
+int cvmx_pko_queue_pend_count(cvmx_cmd_queue_id_t queue);
+
+void cvmx_pko_set_cmd_queue_pool_buffer_count(u64 buffer_count);
+
+#endif /* __CVMX_HWPKO_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
new file mode 100644
index 000000000000..727298352c28
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file contains defines for the ILK interface
+ */
+
+#ifndef __CVMX_ILK_H__
+#define __CVMX_ILK_H__
+
+/* CSR typedefs have been moved to cvmx-ilk-defs.h */
+
+/*
+ * Note: this macro must match the first ilk port in the ipd_port_map_68xx[]
+ * and ipd_port_map_78xx[] arrays.
+ */
+static inline int CVMX_ILK_GBL_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 5;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 6;
+	return -1;
+}
+
+static inline int CVMX_ILK_QLM_BASE(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 1;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 4;
+	return -1;
+}
+
+typedef struct {
+	int intf_en : 1;
+	int la_mode : 1;
+	int reserved : 14; /* unused */
+	int lane_speed : 16;
+	/* add more here */
+} cvmx_ilk_intf_t;
+
+#define CVMX_NUM_ILK_INTF 2
+static inline int CVMX_ILK_MAX_LANES(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+		return 8;
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 16;
+	return -1;
+}
+
+extern unsigned short cvmx_ilk_lane_mask[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int pipe;
+	unsigned int chan;
+} cvmx_ilk_pipe_chan_t;
+
+#define CVMX_ILK_MAX_PIPES 45
+/* Max number of channels allowed */
+#define CVMX_ILK_MAX_CHANS 256
+
+extern int cvmx_ilk_chans[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
+
+typedef struct {
+	unsigned int chan;
+	unsigned int pknd;
+} cvmx_ilk_chan_pknd_t;
+
+#define CVMX_ILK_MAX_PKNDS 16 /* must be <45 */
+
+typedef struct {
+	int *chan_list; /* list of discrete channels; otherwise must be NULL */
+	unsigned int num_chans;
+
+	unsigned int chan_start; /* for continuous channels */
+	unsigned int chan_end;
+	unsigned int chan_step;
+
+	unsigned int clr_on_rd;
+} cvmx_ilk_stats_ctrl_t;
+
+#define CVMX_ILK_MAX_CAL      288
+#define CVMX_ILK_MAX_CAL_IDX  (CVMX_ILK_MAX_CAL / 8)
+#define CVMX_ILK_TX_MIN_CAL   1
+#define CVMX_ILK_RX_MIN_CAL   1
+#define CVMX_ILK_CAL_GRP_SZ   8
+#define CVMX_ILK_PIPE_BPID_SZ 7
+#define CVMX_ILK_ENT_CTRL_SZ  2
+#define CVMX_ILK_RX_FIFO_WM   0x200
+
+typedef enum { PIPE_BPID = 0, LINK, XOFF, XON } cvmx_ilk_cal_ent_ctrl_t;
+
+typedef struct {
+	unsigned char pipe_bpid;
+	cvmx_ilk_cal_ent_ctrl_t ent_ctrl;
+} cvmx_ilk_cal_entry_t;
+
+typedef enum { CVMX_ILK_LPBK_DISA = 0, CVMX_ILK_LPBK_ENA } cvmx_ilk_lpbk_ena_t;
+
+typedef enum { CVMX_ILK_LPBK_INT = 0, CVMX_ILK_LPBK_EXT } cvmx_ilk_lpbk_mode_t;
+
+/**
+ * This header is placed in front of all received ILK look-aside mode packets
+ */
+typedef union {
+	u64 u64;
+
+	struct {
+		u32 reserved_63_57 : 7;	  /* bits 63...57 */
+		u32 nsp_cmd : 5;	  /* bits 56...52 */
+		u32 nsp_flags : 4;	  /* bits 51...48 */
+		u32 nsp_grp_id_upper : 6; /* bits 47...42 */
+		u32 reserved_41_40 : 2;	  /* bits 41...40 */
+		/* Protocol type, 1 for LA mode packet */
+		u32 la_mode : 1;	  /* bit  39      */
+		u32 nsp_grp_id_lower : 2; /* bits 38...37 */
+		u32 nsp_xid_upper : 4;	  /* bits 36...33 */
+		/* ILK channel number, 0 or 1 */
+		u32 ilk_channel : 1;   /* bit  32      */
+		u32 nsp_xid_lower : 8; /* bits 31...24 */
+		/* Unpredictable, may be any value */
+		u32 reserved_23_0 : 24; /* bits 23...0  */
+	} s;
+} cvmx_ilk_la_nsp_compact_hdr_t;
+
+typedef struct cvmx_ilk_LA_mode_struct {
+	int ilk_LA_mode;
+	int ilk_LA_mode_cal_ena;
+} cvmx_ilk_LA_mode_t;
+
+extern cvmx_ilk_LA_mode_t cvmx_ilk_LA_mode[CVMX_NUM_ILK_INTF];
+
+int cvmx_ilk_use_la_mode(int interface, int channel);
+int cvmx_ilk_start_interface(int interface, unsigned short num_lanes);
+int cvmx_ilk_start_interface_la(int interface, unsigned char num_lanes);
+int cvmx_ilk_set_pipe(int interface, int pipe_base, unsigned int pipe_len);
+int cvmx_ilk_tx_set_channel(int interface, cvmx_ilk_pipe_chan_t *pch, unsigned int num_chs);
+int cvmx_ilk_rx_set_pknd(int interface, cvmx_ilk_chan_pknd_t *chpknd, unsigned int num_pknd);
+int cvmx_ilk_enable(int interface);
+int cvmx_ilk_disable(int interface);
+int cvmx_ilk_get_intf_ena(int interface);
+int cvmx_ilk_get_chan_info(int interface, unsigned char **chans, unsigned char *num_chan);
+cvmx_ilk_la_nsp_compact_hdr_t cvmx_ilk_enable_la_header(int ipd_port, int mode);
+void cvmx_ilk_show_stats(int interface, cvmx_ilk_stats_ctrl_t *pstats);
+int cvmx_ilk_cal_setup_rx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent, int hi_wm,
+			  unsigned char cal_ena);
+int cvmx_ilk_cal_setup_tx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent,
+			  unsigned char cal_ena);
+int cvmx_ilk_lpbk(int interface, cvmx_ilk_lpbk_ena_t enable, cvmx_ilk_lpbk_mode_t mode);
+int cvmx_ilk_la_mode_enable_rx_calendar(int interface);
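+
+/*
+ * Example bring-up (an illustrative sketch only; the interface number,
+ * lane count and pipe range are hypothetical):
+ *
+ *	cvmx_ilk_start_interface(0, 8);
+ *	cvmx_ilk_set_pipe(0, 0, 8);
+ *	cvmx_ilk_enable(0);
+ */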
+
+#endif /* __CVMX_ILK_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
new file mode 100644
index 000000000000..cdff36fffb56
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Input Packet Data unit.
+ */
+
+#ifndef __CVMX_IPD_H__
+#define __CVMX_IPD_H__
+
+#include "cvmx-pki.h"
+
+/* CSR typedefs have been moved to cvmx-ipd-defs.h */
+
+typedef cvmx_ipd_1st_mbuff_skip_t cvmx_ipd_mbuff_not_first_skip_t;
+typedef cvmx_ipd_1st_next_ptr_back_t cvmx_ipd_second_next_ptr_back_t;
+
+typedef struct cvmx_ipd_tag_fields {
+	u64 ipv6_src_ip : 1;
+	u64 ipv6_dst_ip : 1;
+	u64 ipv6_src_port : 1;
+	u64 ipv6_dst_port : 1;
+	u64 ipv6_next_header : 1;
+	u64 ipv4_src_ip : 1;
+	u64 ipv4_dst_ip : 1;
+	u64 ipv4_src_port : 1;
+	u64 ipv4_dst_port : 1;
+	u64 ipv4_protocol : 1;
+	u64 input_port : 1;
+} cvmx_ipd_tag_fields_t;
+
+typedef struct cvmx_pip_port_config {
+	u64 parse_mode;
+	u64 tag_type;
+	u64 tag_mode;
+	cvmx_ipd_tag_fields_t tag_fields;
+} cvmx_pip_port_config_t;
+
+typedef struct cvmx_ipd_config_struct {
+	u64 first_mbuf_skip;
+	u64 not_first_mbuf_skip;
+	u64 ipd_enable;
+	u64 enable_len_M8_fix;
+	u64 cache_mode;
+	cvmx_fpa_pool_config_t packet_pool;
+	cvmx_fpa_pool_config_t wqe_pool;
+	cvmx_pip_port_config_t port_config;
+} cvmx_ipd_config_t;
+
+extern cvmx_ipd_config_t cvmx_ipd_cfg;
+
+/**
+ * Gets the fpa pool number of packet pool
+ */
+static inline s64 cvmx_fpa_get_packet_pool(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of packet pool buffer
+ */
+static inline u64 cvmx_fpa_get_packet_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of packet pool
+ */
+static inline u64 cvmx_fpa_get_packet_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.packet_pool.buffer_count);
+}
+
+/**
+ * Gets the fpa pool number of wqe pool
+ */
+static inline s64 cvmx_fpa_get_wqe_pool(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.pool_num);
+}
+
+/**
+ * Gets the buffer size of wqe pool buffer
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_block_size(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_size);
+}
+
+/**
+ * Gets the buffer count of wqe pool
+ */
+static inline u64 cvmx_fpa_get_wqe_pool_buffer_count(void)
+{
+	return (cvmx_ipd_cfg.wqe_pool.buffer_count);
+}
+
+/**
+ * Sets the IPD related configuration in the internal structure which is then
+ * used for setting up the IPD hardware block
+ */
+int cvmx_ipd_set_config(cvmx_ipd_config_t ipd_config);
+
+/**
+ * Gets the ipd related configuration from internal structure.
+ */
+void cvmx_ipd_get_config(cvmx_ipd_config_t *ipd_config);
+
+/**
+ * Sets the internal FPA pool data structure for packet buffer pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Sets the internal FPA pool data structure for wqe pool.
+ * @param pool	fpa pool number to use
+ * @param buffer_size	buffer size of pool
+ * @param buffer_count	number of buffers to allocate to pool
+ */
+void cvmx_ipd_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
+
+/**
+ * Gets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_get_packet_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.packet_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.packet_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.packet_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA packet buffer pool parameters.
+ */
+static inline void cvmx_fpa_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_packet_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Gets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_get_wqe_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
+{
+	if (pool)
+		*pool = cvmx_ipd_cfg.wqe_pool.pool_num;
+	if (buffer_size)
+		*buffer_size = cvmx_ipd_cfg.wqe_pool.buffer_size;
+	if (buffer_count)
+		*buffer_count = cvmx_ipd_cfg.wqe_pool.buffer_count;
+}
+
+/**
+ * Sets the FPA WQE pool parameters.
+ */
+static inline void cvmx_fpa_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
+{
+	cvmx_ipd_set_wqe_pool_config(pool, buffer_size, buffer_count);
+}
+
+/**
+ * Configure IPD
+ *
+ * @param mbuff_size Packet buffer size in 8 byte words
+ * @param first_mbuff_skip
+ *                   Number of 8 byte words to skip in the first buffer
+ * @param not_first_mbuff_skip
+ *                   Number of 8 byte words to skip in each following buffer
+ * @param first_back Must be same as first_mbuff_skip / 128
+ * @param second_back
+ *                   Must be same as not_first_mbuff_skip / 128
+ * @param wqe_fpa_pool
+ *                   FPA pool to get work entries from
+ * @param cache_mode
+ * @param back_pres_enable_flag
+ *                   Enable or disable port back pressure at a global level.
+ *                   This should always be 1 as more accurate control can be
+ *                   found in IPD_PORTX_BP_PAGE_CNT[BP_ENB].
+ */
+void cvmx_ipd_config(u64 mbuff_size, u64 first_mbuff_skip, u64 not_first_mbuff_skip, u64 first_back,
+		     u64 second_back, u64 wqe_fpa_pool, cvmx_ipd_mode_t cache_mode,
+		     u64 back_pres_enable_flag);
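+
+/*
+ * Example (an illustrative sketch only; all sizes are in 8-byte words, and
+ * the values, "wqe_pool" and the CVMX_IPD_OPC_MODE_STT cache mode are
+ * hypothetical defaults taken from typical SDK usage):
+ *
+ *	cvmx_ipd_config(2048 / 8, 32 / 8, 32 / 8,
+ *			(32 / 8) / 128, (32 / 8) / 128,
+ *			wqe_pool, CVMX_IPD_OPC_MODE_STT, 1);
+ */
+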
+/**
+ * Enable IPD
+ */
+void cvmx_ipd_enable(void);
+
+/**
+ * Disable IPD
+ */
+void cvmx_ipd_disable(void);
+
+void __cvmx_ipd_free_ptr(void);
+
+void cvmx_ipd_set_packet_pool_buffer_count(u64 buffer_count);
+void cvmx_ipd_set_wqe_pool_buffer_count(u64 buffer_count);
+
+/**
+ * Setup Random Early Drop on a specific input queue
+ *
+ * @param queue  Input queue to setup RED on (0-7)
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red_queue(int queue, int pass_thresh, int drop_thresh);
+
+/**
+ * Setup Random Early Drop to automatically begin dropping packets.
+ *
+ * @param pass_thresh
+ *               Packets will begin slowly dropping when there are less than
+ *               this many packet buffers free in FPA 0.
+ * @param drop_thresh
+ *               All incoming packets will be dropped when there are less
+ *               than this many free packet buffers in FPA 0.
+ * @return Zero on success. Negative on failure
+ */
+int cvmx_ipd_setup_red(int pass_thresh, int drop_thresh);
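+
+/*
+ * Example (an illustrative sketch only; the thresholds are hypothetical and
+ * must be tuned to the size of FPA pool 0):
+ *
+ *	cvmx_ipd_setup_red(1000, 256);
+ *
+ * Packets start dropping randomly below 1000 free buffers and drop
+ * unconditionally below 256.
+ */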
+
+#endif /*  __CVMX_IPD_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-packet.h b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
new file mode 100644
index 000000000000..f3cfe9c64f43
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Packet buffer defines.
+ */
+
+#ifndef __CVMX_PACKET_H__
+#define __CVMX_PACKET_H__
+
+union cvmx_buf_ptr_pki {
+	u64 u64;
+	struct {
+		u64 size : 16;
+		u64 packet_outside_wqe : 1;
+		u64 rsvd0 : 5;
+		u64 addr : 42;
+	};
+};
+
+typedef union cvmx_buf_ptr_pki cvmx_buf_ptr_pki_t;
+
+/**
+ * This structure defines a buffer pointer on Octeon
+ */
+union cvmx_buf_ptr {
+	void *ptr;
+	u64 u64;
+	struct {
+		u64 i : 1;
+		u64 back : 4;
+		u64 pool : 3;
+		u64 size : 16;
+		u64 addr : 40;
+	} s;
+};
+
+typedef union cvmx_buf_ptr cvmx_buf_ptr_t;
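+
+/*
+ * Example (an illustrative sketch only; cvmx_phys_to_ptr() converts a
+ * physical address to a usable pointer, and "buf" is assumed to have been
+ * filled in by hardware):
+ *
+ *	cvmx_buf_ptr_t buf;
+ *	void *data = cvmx_phys_to_ptr(buf.s.addr);
+ *
+ * The start of the underlying buffer can be recovered with
+ * (((buf.s.addr >> 7) - buf.s.back) << 7), since "back" counts the
+ * 128-byte cache lines between the buffer start and the data pointer.
+ */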
+
+#endif /*  __CVMX_PACKET_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcie.h b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
new file mode 100644
index 000000000000..a819196c021c
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PCIE_H__
+#define __CVMX_PCIE_H__
+
+#define CVMX_PCIE_MAX_PORTS 4
+#define CVMX_PCIE_PORTS                                                                            \
+	((OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) ?                      \
+		       CVMX_PCIE_MAX_PORTS :                                                             \
+		       (OCTEON_IS_MODEL(OCTEON_CN70XX) ? 3 : 2))
+
+/*
+ * The physical memory base mapped by BAR1.  256MB at the end of the
+ * first 4GB.
+ */
+#define CVMX_PCIE_BAR1_PHYS_BASE ((1ull << 32) - (1ull << 28))
+#define CVMX_PCIE_BAR1_PHYS_SIZE BIT_ULL(28)
+
+/*
+ * The RC base of BAR1.  gen1 has a 39-bit BAR2, gen2 has 41-bit BAR2,
+ * place BAR1 so it is the same for both.
+ */
+#define CVMX_PCIE_BAR1_RC_BASE BIT_ULL(41)
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 1 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 reserved_29_31 : 3;	 /* Must be zero */
+		u64 ty : 1;		 /* Config type (0 = type 0, 1 = type 1) */
+		u64 bus : 8;		 /* Target bus number */
+		u64 dev : 5;		 /* Target device number */
+		u64 func : 3;		 /* Target function number */
+		u64 reg : 12;		 /* Register offset */
+	} config;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 2 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 es : 2;		 /* Endian swap = 1 */
+		u64 port : 2;		 /* PCIe port 0,1 */
+		u64 address : 32;	 /* PCIe IO address */
+	} io;
+	struct {
+		u64 upper : 2;		 /* Normally 2 for XKPHYS */
+		u64 reserved_49_61 : 13; /* Must be zero */
+		u64 io : 1;		 /* 1 for IO space access */
+		u64 did : 5;		 /* PCIe DID = 3 */
+		u64 subdid : 3;		 /* PCIe SubDID = 3-6 */
+		u64 reserved_38_39 : 2;	 /* Must be zero */
+		u64 node : 2;		 /* Numa node number */
+		u64 address : 36;	 /* PCIe Mem address */
+	} mem;
+} cvmx_pcie_address_t;
+
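+/*
+ * Illustrative sketch (field values follow the comments above; this is not
+ * driver code): composing a config space address for bus 1, device 2,
+ * function 0, register 0 on PCIe port 0:
+ *
+ *	cvmx_pcie_address_t addr;
+ *
+ *	addr.u64 = 0;
+ *	addr.config.upper = 2;
+ *	addr.config.io = 1;
+ *	addr.config.did = 3;
+ *	addr.config.subdid = 1;
+ *	addr.config.es = 1;
+ *	addr.config.ty = 1;
+ *	addr.config.port = 0;
+ *	addr.config.bus = 1;
+ *	addr.config.dev = 2;
+ *	addr.config.func = 0;
+ *	addr.config.reg = 0;
+ */
+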
+/**
+ * Return the Core virtual base address for PCIe IO access. IOs are
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_io_base_address(int pcie_port);
+
+/**
+ * Size of the IO address region returned at address
+ * cvmx_pcie_get_io_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the IO window
+ */
+u64 cvmx_pcie_get_io_size(int pcie_port);
+
+/**
+ * Return the Core virtual base address for PCIe MEM access. Memory is
+ * read/written as an offset from this address.
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return 64bit Octeon IO base address for read/write
+ */
+u64 cvmx_pcie_get_mem_base_address(int pcie_port);
+
+/**
+ * Size of the Mem address region returned at address
+ * cvmx_pcie_get_mem_base_address()
+ *
+ * @param pcie_port PCIe port the IO is for
+ *
+ * @return Size of the Mem window
+ */
+u64 cvmx_pcie_get_mem_size(int pcie_port);
+
+/**
+ * Initialize a PCIe port for use in host (RC) mode. It doesn't enumerate the bus.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_initialize(int pcie_port);
+
+/**
+ * Shutdown a PCIe port and put it in reset
+ *
+ * @param pcie_port PCIe port to shutdown
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_rc_shutdown(int pcie_port);
+
+/**
+ * Read 8bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u8 cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 16bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u16 cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Read 32bits from a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ *
+ * @return Result of the read
+ */
+u32 cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg);
+
+/**
+ * Write 8bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, u8 val);
+
+/**
+ * Write 16bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, u16 val);
+
+/**
+ * Write 32bits to a Device's config space
+ *
+ * @param pcie_port PCIe port the device is on
+ * @param bus       Sub bus
+ * @param dev       Device ID
+ * @param fn        Device sub function
+ * @param reg       Register to access
+ * @param val       Value to write
+ */
+void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, u32 val);
+
+/**
+ * Read a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to read from
+ * @param cfg_offset Address to read
+ *
+ * @return Value read
+ */
+u32 cvmx_pcie_cfgx_read(int pcie_port, u32 cfg_offset);
+u32 cvmx_pcie_cfgx_read_node(int node, int pcie_port, u32 cfg_offset);
+
+/**
+ * Write a PCIe config space register indirectly. This is used for
+ * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
+ *
+ * @param pcie_port  PCIe port to write to
+ * @param cfg_offset Address to write
+ * @param val        Value to write
+ */
+void cvmx_pcie_cfgx_write(int pcie_port, u32 cfg_offset, u32 val);
+void cvmx_pcie_cfgx_write_node(int node, int pcie_port, u32 cfg_offset, u32 val);
+
+/**
+ * Write a 32bit value to the Octeon NPEI register space
+ *
+ * @param address Address to write to
+ * @param val     Value to write
+ */
+static inline void cvmx_pcie_npei_write32(u64 address, u32 val)
+{
+	cvmx_write64_uint32(address ^ 4, val);
+	cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Read a 32bit value from the Octeon NPEI register space
+ *
+ * @param address Address to read
+ * @return The result
+ */
+static inline u32 cvmx_pcie_npei_read32(u64 address)
+{
+	return cvmx_read64_uint32(address ^ 4);
+}
+
+/**
+ * Initialize a PCIe port for use in target (EP) mode.
+ *
+ * @param pcie_port PCIe port to initialize
+ *
+ * @return Zero on success
+ */
+int cvmx_pcie_ep_initialize(int pcie_port);
+
+/**
+ * Wait for posted PCIe read/writes to reach the other side of
+ * the internal PCIe switch. This will ensure that core
+ * read/writes are posted before anything after this function
+ * is called. This may be necessary when writing to memory that
+ * will later be read using the DMA/PKT engines.
+ *
+ * @param pcie_port PCIe port to wait for
+ */
+void cvmx_pcie_wait_for_pending(int pcie_port);
+
+/**
+ * Return whether a PCIe port is in host or target mode.
+ *
+ * @param pcie_port PCIe port number (PEM number)
+ *
+ * @return 0 if PCIe port is in target mode, !0 if in host mode.
+ */
+int cvmx_pcie_is_host_mode(int pcie_port);
+
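+/*
+ * Typical RC bring-up sketch (illustrative only, assuming the port comes up
+ * in host mode; reading config register 0 returns the vendor/device ID):
+ *
+ *	if (cvmx_pcie_is_host_mode(port) &&
+ *	    cvmx_pcie_rc_initialize(port) == 0) {
+ *		u32 id = cvmx_pcie_config_read32(port, 0, 0, 0, 0);
+ *		...
+ *	}
+ */
+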
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip.h b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
new file mode 100644
index 000000000000..013f533fb7bb
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
@@ -0,0 +1,1080 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Processing unit.
+ */
+
+#ifndef __CVMX_PIP_H__
+#define __CVMX_PIP_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pki.h"
+#include "cvmx-helper-pki.h"
+
+#include "cvmx-helper.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-pki-resources.h"
+
+#define CVMX_PIP_NUM_INPUT_PORTS 46
+#define CVMX_PIP_NUM_WATCHERS	 8
+
+/*
+ * Encodes the different error and exception codes
+ */
+typedef enum {
+	CVMX_PIP_L4_NO_ERR = 0ull,
+	/*        1  = TCP (UDP) packet not long enough to cover TCP (UDP) header */
+	CVMX_PIP_L4_MAL_ERR = 1ull,
+	/*        2  = TCP/UDP checksum failure */
+	CVMX_PIP_CHK_ERR = 2ull,
+	/*        3  = TCP/UDP length check (TCP/UDP length does not match IP length) */
+	CVMX_PIP_L4_LENGTH_ERR = 3ull,
+	/*        4  = illegal TCP/UDP port (either source or dest port is zero) */
+	CVMX_PIP_BAD_PRT_ERR = 4ull,
+	/*        8  = TCP flags = FIN only */
+	CVMX_PIP_TCP_FLG8_ERR = 8ull,
+	/*        9  = TCP flags = 0 */
+	CVMX_PIP_TCP_FLG9_ERR = 9ull,
+	/*        10 = TCP flags = FIN+RST+* */
+	CVMX_PIP_TCP_FLG10_ERR = 10ull,
+	/*        11 = TCP flags = SYN+URG+* */
+	CVMX_PIP_TCP_FLG11_ERR = 11ull,
+	/*        12 = TCP flags = SYN+RST+* */
+	CVMX_PIP_TCP_FLG12_ERR = 12ull,
+	/*        13 = TCP flags = SYN+FIN+* */
+	CVMX_PIP_TCP_FLG13_ERR = 13ull
+} cvmx_pip_l4_err_t;
+
+typedef enum {
+	CVMX_PIP_IP_NO_ERR = 0ull,
+	/*        1 = not IPv4 or IPv6 */
+	CVMX_PIP_NOT_IP = 1ull,
+	/*        2 = IPv4 header checksum violation */
+	CVMX_PIP_IPV4_HDR_CHK = 2ull,
+	/*        3 = malformed (packet not long enough to cover IP hdr) */
+	CVMX_PIP_IP_MAL_HDR = 3ull,
+	/*        4 = malformed (packet not long enough to cover len in IP hdr) */
+	CVMX_PIP_IP_MAL_PKT = 4ull,
+	/*        5 = TTL / hop count equal zero */
+	CVMX_PIP_TTL_HOP = 5ull,
+	/*        6 = IPv4 options / IPv6 early extension headers */
+	CVMX_PIP_OPTS = 6ull
+} cvmx_pip_ip_exc_t;
+
+/**
+ * NOTES
+ *       late collision (data received before collision)
+ *            late collisions cannot be detected by the receiver
+ *            they would appear as JAM bits which would appear as bad FCS
+ *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
+ */
+typedef enum {
+	/**
+	 * No error
+	 */
+	CVMX_PIP_RX_NO_ERR = 0ull,
+
+	CVMX_PIP_PARTIAL_ERR =
+		1ull, /* RGM+SPI            1 = partially received packet (buffering/bandwidth not adequate) */
+	CVMX_PIP_JABBER_ERR =
+		2ull, /* RGM+SPI            2 = receive packet too large and truncated */
+	CVMX_PIP_OVER_FCS_ERR =
+		3ull, /* RGM                3 = max frame error (pkt len > max frame len) (with FCS error) */
+	CVMX_PIP_OVER_ERR =
+		4ull, /* RGM+SPI            4 = max frame error (pkt len > max frame len) */
+	CVMX_PIP_ALIGN_ERR =
+		5ull, /* RGM                5 = nibble error (data not byte multiple - 100M and 10M only) */
+	CVMX_PIP_UNDER_FCS_ERR =
+		6ull, /* RGM                6 = min frame error (pkt len < min frame len) (with FCS error) */
+	CVMX_PIP_GMX_FCS_ERR = 7ull, /* RGM                7 = FCS error */
+	CVMX_PIP_UNDER_ERR =
+		8ull, /* RGM+SPI            8 = min frame error (pkt len < min frame len) */
+	CVMX_PIP_EXTEND_ERR = 9ull, /* RGM                9 = Frame carrier extend error */
+	CVMX_PIP_TERMINATE_ERR =
+		9ull, /* XAUI               9 = Packet was terminated with an idle cycle */
+	CVMX_PIP_LENGTH_ERR =
+		10ull, /* RGM               10 = length mismatch (len did not match len in L2 length/type) */
+	CVMX_PIP_DAT_ERR =
+		11ull, /* RGM               11 = Frame error (some or all data bits marked err) */
+	CVMX_PIP_DIP_ERR = 11ull, /*     SPI           11 = DIP4 error */
+	CVMX_PIP_SKIP_ERR =
+		12ull, /* RGM               12 = packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_NIBBLE_ERR =
+		13ull, /* RGM               13 = studder error (data not repeated - 100M and 10M only) */
+	CVMX_PIP_PIP_FCS = 16L, /* RGM+SPI           16 = FCS error */
+	CVMX_PIP_PIP_SKIP_ERR =
+		17L, /* RGM+SPI+PCI       17 = packet was not large enough to pass the skipper - no inspection could occur */
+	CVMX_PIP_PIP_L2_MAL_HDR =
+		18L, /* RGM+SPI+PCI       18 = malformed l2 (packet not long enough to cover L2 hdr) */
+	CVMX_PIP_PUNY_ERR =
+		47L /* SGMII             47 = PUNY error (packet was 4B or less when FCS stripping is enabled) */
+} cvmx_pip_rcv_err_t;
+
+/**
+ * This defines the err_code field errors in the work Q entry
+ */
+typedef union {
+	cvmx_pip_l4_err_t l4_err;
+	cvmx_pip_ip_exc_t ip_exc;
+	cvmx_pip_rcv_err_t rcv_err;
+} cvmx_pip_err_t;
+
+/**
+ * Status statistics for a port
+ */
+typedef struct {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+} cvmx_pip_port_status_t;
+
+/**
+ * Definition of the PIP custom header that can be prepended
+ * to a packet by external hardware.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 rawfull : 1;
+		u64 reserved0 : 5;
+		cvmx_pip_port_parse_mode_t parse_mode : 2;
+		u64 reserved1 : 1;
+		u64 skip_len : 7;
+		u64 grpext : 2;
+		u64 nqos : 1;
+		u64 ngrp : 1;
+		u64 ntt : 1;
+		u64 ntag : 1;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 rs : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pip_pkt_inst_hdr_t;
+
+enum cvmx_pki_pcam_match {
+	CVMX_PKI_PCAM_MATCH_IP,
+	CVMX_PKI_PCAM_MATCH_IPV4,
+	CVMX_PKI_PCAM_MATCH_IPV6,
+	CVMX_PKI_PCAM_MATCH_TCP
+};
+
+/* CSR typedefs have been moved to cvmx-pip-defs.h */
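+
+/**
+ * Configure a QoS watcher. With the PKI feature the watcher is only stored
+ * in software here; the PCAM entry is programmed when the watcher is
+ * enabled.
+ *
+ * @param index  watcher to configure (0 to CVMX_PIP_NUM_WATCHERS - 1)
+ * @param type   watcher type (CVMX_PIP_QOS_WATCH_TCP, _UDP, _PROTNH, ...)
+ * @param match  value the watched field must match
+ * @param mask   mask applied to the match value
+ * @param grp    SSO group for matching packets
+ * @param qos    QoS queue for matching packets
+ *
+ * @return Zero on success, -1 on failure
+ */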
+static inline int cvmx_pip_config_watcher(int index, int type, u16 match, u16 mask, int grp,
+					  int qos)
+{
+	if (index >= CVMX_PIP_NUM_WATCHERS) {
+		debug("ERROR: pip watcher %d is > than supported\n", index);
+		return -1;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		/* Store in software for now; program the entry only when the watcher is enabled */
+		if (type == CVMX_PIP_QOS_WATCH_PROTNH) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L3_FLAGS;
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_TCP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x060000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == CVMX_PIP_QOS_WATCH_UDP) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
+			qos_watcher[index].data = 0x110000;
+			qos_watcher[index].data |= (u32)match;
+			qos_watcher[index].data_mask = (u32)(mask);
+			qos_watcher[index].advance = 0;
+		} else if (type == 0x4 /*CVMX_PIP_QOS_WATCH_ETHERTYPE*/) {
+			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+			if (match == 0x8100) {
+				debug("ERROR: default vlan entry already exist, cant set watcher\n");
+				return -1;
+			}
+			qos_watcher[index].data = (u32)(match << 16);
+			qos_watcher[index].data_mask = (u32)(mask << 16);
+			qos_watcher[index].advance = 4;
+		} else {
+			debug("ERROR: Unsupported watcher type %d\n", type);
+			return -1;
+		}
+		if (grp >= 32) {
+			debug("ERROR: grp %d out of range for backward compat 78xx\n", grp);
+			return -1;
+		}
+		qos_watcher[index].sso_grp = (u8)(grp << 3 | qos);
+		qos_watcher[index].configured = 1;
+	} else {
+		/* Implement it later */
+	}
+	return 0;
+}
+
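+/**
+ * Allocate a new style that differs from @style only in its tag type and
+ * install PCAM entries that switch matching packets to the new style.
+ * Only for legacy internal use.
+ *
+ * @param node      node to configure
+ * @param style     base style to copy from
+ * @param tag_type  tag type the new style should use
+ * @param field     match type (CVMX_PKI_PCAM_MATCH_XXX)
+ *
+ * @return the new style number on success, -1 on failure.
+ */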
+static inline int __cvmx_pip_set_tag_type(int node, int style, int tag_type, int field)
+{
+	struct cvmx_pki_style_config style_cfg;
+	int style_num;
+	int pcam_offset;
+	int bank;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+
+	/* All other style parameters remain the same except the tag type */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tag_type;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to set tag type\n");
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	if (field == CVMX_PKI_PCAM_MATCH_IP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+		/* legacy will write to all clusters*/
+		bank = 0;
+		pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+							CVMX_PKI_CLUSTER_ALL);
+		if (pcam_offset < 0) {
+			debug("ERROR: pcam entry not available to enable qos watcher\n");
+			cvmx_pki_style_free(node, style_num);
+			return -1;
+		}
+		pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+		pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+		pcam_action.style_add = (u8)(style_num - style);
+		cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input,
+					  pcam_action);
+		field = CVMX_PKI_PCAM_MATCH_IPV6;
+	}
+	if (field == CVMX_PKI_PCAM_MATCH_IPV4) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x08000000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_IPV6) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x86dd0000;
+		pcam_input.data_mask = 0xffff0000;
+		pcam_action.pointer_advance = 4;
+	} else if (field == CVMX_PKI_PCAM_MATCH_TCP) {
+		pcam_input.field = CVMX_PKI_PCAM_TERM_L4_PORT;
+		pcam_input.field_mask = 0xff;
+		pcam_input.data = 0x60000;
+		pcam_input.data_mask = 0xff0000;
+		pcam_action.pointer_advance = 0;
+	}
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	bank = pcam_input.field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		return -1;
+	}
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return style_num;
+}
+
+/**
+ * Enable a previously configured QoS watcher by allocating a style, a qpg
+ * entry and a pcam entry for it. Only for legacy internal use.
+ *
+ * @param node   node the watcher is on
+ * @param index  watcher index (0 to CVMX_PIP_NUM_WATCHERS - 1)
+ * @param style  base style the watcher applies to
+ *
+ * @return Zero on success, -1 on failure
+ */
+static inline int __cvmx_pip_enable_watcher_78xx(int node, int index, int style)
+{
+	struct cvmx_pki_style_config style_cfg;
+	struct cvmx_pki_qpg_config qpg_cfg;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+	int style_num;
+	int qpg_offset;
+	int pcam_offset;
+	int bank;
+
+	if (!qos_watcher[index].configured) {
+		debug("ERROR: qos watcher %d should be configured before enable\n", index);
+		return -1;
+	}
+	/* All other style parameters remain the same except grp, qos and qpg base */
+	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	cvmx_pki_read_qpg_entry(node, style_cfg.parm_cfg.qpg_base, &qpg_cfg);
+	qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+	qpg_cfg.grp_ok = qos_watcher[index].sso_grp;
+	qpg_cfg.grp_bad = qos_watcher[index].sso_grp;
+	qpg_offset = cvmx_helper_pki_set_qpg_entry(node, &qpg_cfg);
+	if (qpg_offset == -1) {
+		debug("Warning: no new qpg entry available to enable watcher\n");
+		return -1;
+	}
+	/*
+	 * Try to reserve the style; if it is not configured already,
+	 * reserve and configure it.
+	 */
+	style_cfg.parm_cfg.qpg_base = qpg_offset;
+	style_num = cvmx_pki_style_alloc(node, -1);
+	if (style_num < 0) {
+		debug("ERROR: style not available to enable qos watcher\n");
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
+	/* legacy will write to all clusters*/
+	bank = qos_watcher[index].field & 0x01;
+	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
+						CVMX_PKI_CLUSTER_ALL);
+	if (pcam_offset < 0) {
+		debug("ERROR: pcam entry not available to enable qos watcher\n");
+		cvmx_pki_style_free(node, style_num);
+		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
+		return -1;
+	}
+	memset(&pcam_input, 0, sizeof(pcam_input));
+	memset(&pcam_action, 0, sizeof(pcam_action));
+	pcam_input.style = style;
+	pcam_input.style_mask = 0xff;
+	pcam_input.field = qos_watcher[index].field;
+	pcam_input.field_mask = 0xff;
+	pcam_input.data = qos_watcher[index].data;
+	pcam_input.data_mask = qos_watcher[index].data_mask;
+	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
+	pcam_action.style_add = (u8)(style_num - style);
+	pcam_action.pointer_advance = qos_watcher[index].advance;
+	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
+	return 0;
+}
+
+/**
+ * Configure an ethernet input port
+ *
+ * @param ipd_port Port number to configure
+ * @param port_cfg Port hardware configuration
+ * @param port_tag_cfg Port POW tagging configuration
+ */
+static inline void cvmx_pip_config_port(u64 ipd_port, cvmx_pip_prt_cfgx_t port_cfg,
+					cvmx_pip_prt_tagx_t port_tag_cfg)
+{
+	struct cvmx_pki_qpg_config qpg_cfg;
+	int qpg_offset;
+	u8 tcp_tag = 0xff;
+	u8 ip_tag = 0xaa;
+	int style, nstyle, n4style, n6style;
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		struct cvmx_pki_port_config pki_prt_cfg;
+		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
+
+		cvmx_pki_get_port_config(ipd_port, &pki_prt_cfg);
+		style = pki_prt_cfg.pkind_cfg.initial_style;
+		if (port_cfg.s.ih_pri || port_cfg.s.vlan_len || port_cfg.s.pad_len)
+			debug("Warning: 78xx: use different config for this option\n");
+		pki_prt_cfg.style_cfg.parm_cfg.minmax_sel = port_cfg.s.len_chk_sel;
+		pki_prt_cfg.style_cfg.parm_cfg.lenerr_en = port_cfg.s.lenerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.maxerr_en = port_cfg.s.maxerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.minerr_en = port_cfg.s.minerr_en;
+		pki_prt_cfg.style_cfg.parm_cfg.fcs_chk = port_cfg.s.crc_en;
+		if (port_cfg.s.grp_wat || port_cfg.s.qos_wat || port_cfg.s.grp_wat_47 ||
+		    port_cfg.s.qos_wat_47) {
+			u8 group_mask = (u8)(port_cfg.s.grp_wat | (u8)(port_cfg.s.grp_wat_47 << 4));
+			u8 qos_mask = (u8)(port_cfg.s.qos_wat | (u8)(port_cfg.s.qos_wat_47 << 4));
+			int i;
+
+			for (i = 0; i < CVMX_PIP_NUM_WATCHERS; i++) {
+				if ((group_mask & (1 << i)) || (qos_mask & (1 << i)))
+					__cvmx_pip_enable_watcher_78xx(xp.node, i, style);
+			}
+		}
+		if (port_tag_cfg.s.tag_mode) {
+			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
+				cvmx_printf("Warning: mask tag is not supported in 78xx pass1\n");
+			/* else: mask tag still needs to be implemented for 78xx */
+		}
+		if (port_cfg.s.tag_inc)
+			debug("Warning: 78xx uses differnet method for tag generation\n");
+		pki_prt_cfg.style_cfg.parm_cfg.rawdrp = port_cfg.s.rawdrp;
+		pki_prt_cfg.pkind_cfg.parse_en.inst_hdr = port_cfg.s.inst_hdr;
+		if (port_cfg.s.hg_qos)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_HIGIG;
+		else if (port_cfg.s.qos_vlan)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
+		else if (port_cfg.s.qos_diff)
+			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_DIFFSERV;
+		if (port_cfg.s.qos_vod)
+			debug("Warning: 78xx needs pcam entries installed to achieve qos_vod\n");
+		if (port_cfg.s.qos) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= port_cfg.s.qos;
+			qpg_cfg.grp_bad |= port_cfg.s.qos;
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify qos\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		if (port_tag_cfg.s.grp != pki_dflt_sso_grp[xp.node].group) {
+			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
+						&qpg_cfg);
+			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
+			qpg_cfg.grp_ok |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_cfg.grp_bad |= (u8)(port_tag_cfg.s.grp << 3);
+			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
+			if (qpg_offset == -1)
+				debug("Warning: no new qpg entry available, will not modify group\n");
+			else
+				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
+		}
+		pki_prt_cfg.pkind_cfg.parse_en.dsa_en = port_cfg.s.dsa_en;
+		pki_prt_cfg.pkind_cfg.parse_en.hg_en = port_cfg.s.higig_en;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_src =
+			port_tag_cfg.s.ip6_src_flag | port_tag_cfg.s.ip4_src_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_dst =
+			port_tag_cfg.s.ip6_dst_flag | port_tag_cfg.s.ip4_dst_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.ip_prot_nexthdr =
+			port_tag_cfg.s.ip6_nxth_flag | port_tag_cfg.s.ip4_pctl_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_src =
+			port_tag_cfg.s.ip6_sprt_flag | port_tag_cfg.s.ip4_sprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_dst =
+			port_tag_cfg.s.ip6_dprt_flag | port_tag_cfg.s.ip4_dprt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.input_port = port_tag_cfg.s.inc_prt_flag;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.first_vlan = port_tag_cfg.s.inc_vlan;
+		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.second_vlan = port_tag_cfg.s.inc_vs;
+
+		if (port_tag_cfg.s.tcp6_tag_type == port_tag_cfg.s.tcp4_tag_type)
+			tcp_tag = port_tag_cfg.s.tcp6_tag_type;
+		if (port_tag_cfg.s.ip6_tag_type == port_tag_cfg.s.ip4_tag_type)
+			ip_tag = port_tag_cfg.s.ip6_tag_type;
+		pki_prt_cfg.style_cfg.parm_cfg.tag_type =
+			(enum cvmx_sso_tag_type)port_tag_cfg.s.non_tag_type;
+		if (tcp_tag == ip_tag && tcp_tag == port_tag_cfg.s.non_tag_type)
+			pki_prt_cfg.style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tcp_tag;
+		else if (tcp_tag == ip_tag) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for ip6 && ip4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, ip_tag, CVMX_PKI_PCAM_MATCH_IP);
+		} else if (ip_tag == port_tag_cfg.s.non_tag_type) {
+			/* allocate and copy style */
+			/* modify tag type */
+			/*pcam entry for tcp6 & tcp4 match*/
+			/* default is non tag type */
+			__cvmx_pip_set_tag_type(xp.node, style, tcp_tag, CVMX_PKI_PCAM_MATCH_TCP);
+		} else {
+			if (ip_tag != 0xaa) {
+				nstyle = __cvmx_pip_set_tag_type(xp.node, style, ip_tag,
+								 CVMX_PKI_PCAM_MATCH_IP);
+				if (tcp_tag != 0xff)
+					__cvmx_pip_set_tag_type(xp.node, nstyle, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				else {
+					n4style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV4);
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					n6style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
+									  CVMX_PKI_PCAM_MATCH_IPV6);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			} else {
+				n4style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip4_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV4);
+				n6style = __cvmx_pip_set_tag_type(xp.node, style,
+								  port_tag_cfg.s.ip6_tag_type,
+								  CVMX_PKI_PCAM_MATCH_IPV6);
+				if (tcp_tag != 0xff) {
+					__cvmx_pip_set_tag_type(xp.node, n4style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style, tcp_tag,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				} else {
+					__cvmx_pip_set_tag_type(xp.node, n4style,
+								port_tag_cfg.s.tcp4_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+					__cvmx_pip_set_tag_type(xp.node, n6style,
+								port_tag_cfg.s.tcp6_tag_type,
+								CVMX_PKI_PCAM_MATCH_TCP);
+				}
+			}
+		}
+		pki_prt_cfg.style_cfg.parm_cfg.qpg_dis_padd = !port_tag_cfg.s.portadd_en;
+
+		if (port_cfg.s.mode == 0x1)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
+		else if (port_cfg.s.mode == 0x2)
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LC_TO_LG;
+		else
+			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_NOTHING;
+		/* This is only for backward compatibility, not all the parameters are supported in 78xx */
+		cvmx_pki_set_port_config(ipd_port, &pki_prt_cfg);
+	} else {
+		if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+			int interface, index, pknd;
+
+			interface = cvmx_helper_get_interface_num(ipd_port);
+			index = cvmx_helper_get_interface_index_num(ipd_port);
+			pknd = cvmx_helper_get_pknd(interface, index);
+
+			ipd_port = pknd; /* overload port_num with pknd */
+		}
+		csr_wr(CVMX_PIP_PRT_CFGX(ipd_port), port_cfg.u64);
+		csr_wr(CVMX_PIP_PRT_TAGX(ipd_port), port_tag_cfg.u64);
+	}
+}
+
+/**
+ * Configure the VLAN priority to QoS queue mapping.
+ *
+ * @param vlan_priority
+ *               VLAN priority (0-7)
+ * @param qos    QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_vlan_qos(u64 vlan_priority, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_vlanx_t pip_qos_vlanx;
+
+		pip_qos_vlanx.u64 = 0;
+		pip_qos_vlanx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_VLANX(vlan_priority), pip_qos_vlanx.u64);
+	}
+}
+
+/**
+ * Configure the Diffserv to QoS queue mapping.
+ *
+ * @param diffserv Diffserv field value (0-63)
+ * @param qos      QoS queue for packets matching this watcher
+ */
+static inline void cvmx_pip_config_diffserv_qos(u64 diffserv, u64 qos)
+{
+	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		cvmx_pip_qos_diffx_t pip_qos_diffx;
+
+		pip_qos_diffx.u64 = 0;
+		pip_qos_diffx.s.qos = qos;
+		csr_wr(CVMX_PIP_QOS_DIFFX(diffserv), pip_qos_diffx.u64);
+	}
+}
+
+/**
+ * Get the status counters for a port for older non PKI chips.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_stats(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	cvmx_pip_stat_ctl_t pip_stat_ctl;
+	cvmx_pip_stat0_prtx_t stat0;
+	cvmx_pip_stat1_prtx_t stat1;
+	cvmx_pip_stat2_prtx_t stat2;
+	cvmx_pip_stat3_prtx_t stat3;
+	cvmx_pip_stat4_prtx_t stat4;
+	cvmx_pip_stat5_prtx_t stat5;
+	cvmx_pip_stat6_prtx_t stat6;
+	cvmx_pip_stat7_prtx_t stat7;
+	cvmx_pip_stat8_prtx_t stat8;
+	cvmx_pip_stat9_prtx_t stat9;
+	cvmx_pip_stat10_x_t stat10;
+	cvmx_pip_stat11_x_t stat11;
+	cvmx_pip_stat_inb_pktsx_t pip_stat_inb_pktsx;
+	cvmx_pip_stat_inb_octsx_t pip_stat_inb_octsx;
+	cvmx_pip_stat_inb_errsx_t pip_stat_inb_errsx;
+	int interface = cvmx_helper_get_interface_num(port_num);
+	int index = cvmx_helper_get_interface_index_num(port_num);
+
+	pip_stat_ctl.u64 = 0;
+	pip_stat_ctl.s.rdclr = clear;
+	csr_wr(CVMX_PIP_STAT_CTL, pip_stat_ctl.u64);
+
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+		/*
+		 * PIP_STAT_CTL[MODE] 0 means pkind.
+		 */
+		stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
+		stat1.u64 = csr_rd(CVMX_PIP_STAT1_X(pknd));
+		stat2.u64 = csr_rd(CVMX_PIP_STAT2_X(pknd));
+		stat3.u64 = csr_rd(CVMX_PIP_STAT3_X(pknd));
+		stat4.u64 = csr_rd(CVMX_PIP_STAT4_X(pknd));
+		stat5.u64 = csr_rd(CVMX_PIP_STAT5_X(pknd));
+		stat6.u64 = csr_rd(CVMX_PIP_STAT6_X(pknd));
+		stat7.u64 = csr_rd(CVMX_PIP_STAT7_X(pknd));
+		stat8.u64 = csr_rd(CVMX_PIP_STAT8_X(pknd));
+		stat9.u64 = csr_rd(CVMX_PIP_STAT9_X(pknd));
+		stat10.u64 = csr_rd(CVMX_PIP_STAT10_X(pknd));
+		stat11.u64 = csr_rd(CVMX_PIP_STAT11_X(pknd));
+	} else {
+		if (port_num >= 40) {
+			stat0.u64 = csr_rd(CVMX_PIP_XSTAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_XSTAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_XSTAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_XSTAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_XSTAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_XSTAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_XSTAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_XSTAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_XSTAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_XSTAT9_PRTX(port_num));
+			if (OCTEON_IS_MODEL(OCTEON_CN6XXX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_XSTAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_XSTAT11_PRTX(port_num));
+			}
+		} else {
+			stat0.u64 = csr_rd(CVMX_PIP_STAT0_PRTX(port_num));
+			stat1.u64 = csr_rd(CVMX_PIP_STAT1_PRTX(port_num));
+			stat2.u64 = csr_rd(CVMX_PIP_STAT2_PRTX(port_num));
+			stat3.u64 = csr_rd(CVMX_PIP_STAT3_PRTX(port_num));
+			stat4.u64 = csr_rd(CVMX_PIP_STAT4_PRTX(port_num));
+			stat5.u64 = csr_rd(CVMX_PIP_STAT5_PRTX(port_num));
+			stat6.u64 = csr_rd(CVMX_PIP_STAT6_PRTX(port_num));
+			stat7.u64 = csr_rd(CVMX_PIP_STAT7_PRTX(port_num));
+			stat8.u64 = csr_rd(CVMX_PIP_STAT8_PRTX(port_num));
+			stat9.u64 = csr_rd(CVMX_PIP_STAT9_PRTX(port_num));
+			if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+				stat10.u64 = csr_rd(CVMX_PIP_STAT10_PRTX(port_num));
+				stat11.u64 = csr_rd(CVMX_PIP_STAT11_PRTX(port_num));
+			}
+		}
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int pknd = cvmx_helper_get_pknd(interface, index);
+
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTS_PKNDX(pknd));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTS_PKNDX(pknd));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRS_PKNDX(pknd));
+	} else {
+		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTSX(port_num));
+		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTSX(port_num));
+		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port_num));
+	}
+
+	status->dropped_octets = stat0.s.drp_octs;
+	status->dropped_packets = stat0.s.drp_pkts;
+	status->octets = stat1.s.octs;
+	status->pci_raw_packets = stat2.s.raw;
+	status->packets = stat2.s.pkts;
+	status->multicast_packets = stat3.s.mcst;
+	status->broadcast_packets = stat3.s.bcst;
+	status->len_64_packets = stat4.s.h64;
+	status->len_65_127_packets = stat4.s.h65to127;
+	status->len_128_255_packets = stat5.s.h128to255;
+	status->len_256_511_packets = stat5.s.h256to511;
+	status->len_512_1023_packets = stat6.s.h512to1023;
+	status->len_1024_1518_packets = stat6.s.h1024to1518;
+	status->len_1519_max_packets = stat7.s.h1519;
+	status->fcs_align_err_packets = stat7.s.fcs;
+	status->runt_packets = stat8.s.undersz;
+	status->runt_crc_packets = stat8.s.frag;
+	status->oversize_packets = stat9.s.oversz;
+	status->oversize_crc_packets = stat9.s.jabber;
+	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		status->mcast_l2_red_packets = stat10.s.mcast;
+		status->bcast_l2_red_packets = stat10.s.bcast;
+		status->mcast_l3_red_packets = stat11.s.mcast;
+		status->bcast_l3_red_packets = stat11.s.bcast;
+	}
+	status->inb_packets = pip_stat_inb_pktsx.s.pkts;
+	status->inb_octets = pip_stat_inb_octsx.s.octs;
+	status->inb_errors = pip_stat_inb_errsx.s.errs;
+}
+
+/**
+ * Get the status counters for a port.
+ *
+ * @param port_num Port number (ipd_port) to get statistics for.
+ * @param clear    Set to 1 to clear the counters after they are read
+ * @param status   Where to put the results.
+ */
+static inline void cvmx_pip_get_port_status(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+		unsigned int node = cvmx_get_node_num();
+
+		cvmx_pki_get_port_stats(node, port_num, (struct cvmx_pki_port_stats *)status);
+	} else {
+		cvmx_pip_get_port_stats(port_num, clear, status);
+	}
+}
+
+/**
+ * Configure the hardware CRC engine
+ *
+ * @param interface Interface to configure (0 or 1)
+ * @param invert_result
+ *                 Invert the result of the CRC
+ * @param reflect  Reflect
+ * @param initialization_vector
+ *                 CRC initialization vector
+ */
+static inline void cvmx_pip_config_crc(u64 interface, u64 invert_result, u64 reflect,
+				       u32 initialization_vector)
+{
+	/* Only CN38XX & CN58XX */
+}
+
+/**
+ * Clear all bits in a tag mask. This should be called on
+ * startup before any calls to cvmx_pip_tag_mask_set. Each bit
+ * set in the final mask represent a byte used in the packet for
+ * tag generation.
+ *
+ * @param mask_index Which tag mask to clear (0..3)
+ */
+static inline void cvmx_pip_tag_mask_clear(u64 mask_index)
+{
+	u64 index;
+	cvmx_pip_tag_incx_t pip_tag_incx;
+
+	pip_tag_incx.u64 = 0;
+	pip_tag_incx.s.en = 0;
+	for (index = mask_index * 16; index < (mask_index + 1) * 16; index++)
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+}
+
+/**
+ * Sets a range of bits in the tag mask. The tag mask is used
+ * when the cvmx_pip_port_tag_cfg_t tag_mode is non zero.
+ * There are four separate masks that can be configured.
+ *
+ * @param mask_index Which tag mask to modify (0..3)
+ * @param offset     Offset into the bitmask to set bits at. Use the GCC macro
+ *                   offsetof() to determine the offsets into packet headers.
+ *                   For example, offsetof(ethhdr, protocol) returns the offset
+ *                   of the ethernet protocol field.  The bitmask selects which bytes
+ *                   to include in the tag, with bit offset X selecting the byte at offset X
+ *                   from the beginning of the packet data.
+ * @param len        Number of bytes to include. Usually this is the sizeof()
+ *                   the field.
+ */
+static inline void cvmx_pip_tag_mask_set(u64 mask_index, u64 offset, u64 len)
+{
+	while (len--) {
+		cvmx_pip_tag_incx_t pip_tag_incx;
+		u64 index = mask_index * 16 + offset / 8;
+
+		pip_tag_incx.u64 = csr_rd(CVMX_PIP_TAG_INCX(index));
+		pip_tag_incx.s.en |= 0x80 >> (offset & 0x7);
+		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
+		offset++;
+	}
+}
+
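+/*
+ * Illustrative sketch (offsets are standard Ethernet, not from this patch):
+ * include the 2-byte ethertype at offset 12 in tag mask 0:
+ *
+ *	cvmx_pip_tag_mask_clear(0);
+ *	cvmx_pip_tag_mask_set(0, 12, 2);
+ */
+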
+/**
+ * Set the byte count for the max-sized frame check.
+ *
+ * @param interface   Which interface to set the limit
+ * @param max_size    Byte count for Max-Size frame check
+ */
+static inline void cvmx_pip_set_frame_check(int interface, u32 max_size)
+{
+	cvmx_pip_frm_len_chkx_t frm_len;
+
+	/* If max_size is passed as 0 (or below 1536), reset to the default. */
+	if (max_size < 1536)
+		max_size = 1536;
+
+	/*
+	 * On CN68XX the frame check is enabled per pkind and
+	 * PIP_PRT_CFG[len_chk_sel] selects which set of
+	 * MAXLEN/MINLEN to use.
+	 */
+	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
+		int port;
+		int num_ports = cvmx_helper_ports_on_interface(interface);
+
+		for (port = 0; port < num_ports; port++) {
+			if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
+				int ipd_port;
+
+				ipd_port = cvmx_helper_get_ipd_port(interface, port);
+				cvmx_pki_set_max_frm_len(ipd_port, max_size);
+			} else {
+				int pknd;
+				int sel;
+				cvmx_pip_prt_cfgx_t config;
+
+				pknd = cvmx_helper_get_pknd(interface, port);
+				config.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
+				sel = config.s.len_chk_sel;
+				frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(sel));
+				frm_len.s.maxlen = max_size;
+				csr_wr(CVMX_PIP_FRM_LEN_CHKX(sel), frm_len.u64);
+			}
+		}
+	}
+	/*
+	 * On cn6xxx and cn7xxx models, PIP_FRM_LEN_CHK0 applies to
+	 * all incoming traffic.
+	 */
+	else if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+		frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(0));
+		frm_len.s.maxlen = max_size;
+		csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
+	}
+}
+
+/**
+ * Initialize Bit Select Extractor config. There are 8 bit positions and
+ * valid bits to be used with the corresponding extractor.
+ *
+ * @param bit     Bit Select Extractor to use
+ * @param pos     Which position to update
+ * @param val     The value to update the position with
+ */
+static inline void cvmx_pip_set_bsel_pos(int bit, int pos, int val)
+{
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	if (bit < 0 || bit > 3) {
+		debug("ERROR: cvmx_pip_set_bsel_pos: Invalid Bit-Select Extractor (%d) passed\n",
+		      bit);
+		return;
+	}
+
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+	switch (pos) {
+	case 0:
+		bsel_pos.s.pos0_val = 1;
+		bsel_pos.s.pos0 = val & 0x7f;
+		break;
+	case 1:
+		bsel_pos.s.pos1_val = 1;
+		bsel_pos.s.pos1 = val & 0x7f;
+		break;
+	case 2:
+		bsel_pos.s.pos2_val = 1;
+		bsel_pos.s.pos2 = val & 0x7f;
+		break;
+	case 3:
+		bsel_pos.s.pos3_val = 1;
+		bsel_pos.s.pos3 = val & 0x7f;
+		break;
+	case 4:
+		bsel_pos.s.pos4_val = 1;
+		bsel_pos.s.pos4 = val & 0x7f;
+		break;
+	case 5:
+		bsel_pos.s.pos5_val = 1;
+		bsel_pos.s.pos5 = val & 0x7f;
+		break;
+	case 6:
+		bsel_pos.s.pos6_val = 1;
+		bsel_pos.s.pos6 = val & 0x7f;
+		break;
+	case 7:
+		bsel_pos.s.pos7_val = 1;
+		bsel_pos.s.pos7 = val & 0x7f;
+		break;
+	default:
+		debug("Warning: cvmx_pip_set_bsel_pos: Invalid pos(%d)\n", pos);
+		break;
+	}
+	csr_wr(CVMX_PIP_BSEL_EXT_POSX(bit), bsel_pos.u64);
+}
+
+/**
+ * Initialize offset and skip values used by the bit select extractor.
+ *
+ * @param bit	Bit Select Extractor to use
+ * @param offset	Offset to add to extractor mem addr to get final address
+ *			to lookup table.
+ * @param skip		Number of bytes to skip from start of packet 0-64
+ */
+static inline void cvmx_pip_bsel_config(int bit, int offset, int skip)
+{
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_cfg.s.offset = offset;
+	bsel_cfg.s.skip = skip;
+	csr_wr(CVMX_PIP_BSEL_EXT_CFGX(bit), bsel_cfg.u64);
+}
+
+/**
+ * Get the entry for the Bit Select Extractor Table.
+ * @param work   pointer to work queue entry
+ * @return       Index of the Bit Select Extractor Table
+ */
+static inline int cvmx_pip_get_bsel_table_index(cvmx_wqe_t *work)
+{
+	int bit = cvmx_wqe_get_port(work) & 0x3;
+	/* Get the Bit select table index. */
+	int index = 0;
+	int y;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_bsel_ext_posx_t bsel_pos;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
+
+	for (y = 0; y < 8; y++) {
+		char *ptr = (char *)cvmx_phys_to_ptr(work->packet_ptr.s.addr);
+		int bit_loc = 0;
+		int bit;
+
+		ptr += bsel_cfg.s.skip;
+		switch (y) {
+		case 0:
+			ptr += (bsel_pos.s.pos0 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos0 & 0x3);
+			break;
+		case 1:
+			ptr += (bsel_pos.s.pos1 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos1 & 0x3);
+			break;
+		case 2:
+			ptr += (bsel_pos.s.pos2 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos2 & 0x3);
+			break;
+		case 3:
+			ptr += (bsel_pos.s.pos3 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos3 & 0x3);
+			break;
+		case 4:
+			ptr += (bsel_pos.s.pos4 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos4 & 0x3);
+			break;
+		case 5:
+			ptr += (bsel_pos.s.pos5 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos5 & 0x3);
+			break;
+		case 6:
+			ptr += (bsel_pos.s.pos6 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos6 & 0x3);
+			break;
+		case 7:
+			ptr += (bsel_pos.s.pos7 >> 3);
+			bit_loc = 7 - (bsel_pos.s.pos7 & 0x3);
+			break;
+		}
+		bit = (*ptr >> bit_loc) & 1;
+		index |= bit << y;
+	}
+	index += bsel_cfg.s.offset;
+	index &= 0x1ff;
+	return index;
+}
+
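+/**
+ * Get the QoS value from the Bit Select Extractor Table entry
+ * selected by this work queue entry.
+ *
+ * @param work   pointer to work queue entry
+ * @return       QoS value, or -1 if the bit select extractor is unavailable
+ */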
+static inline int cvmx_pip_get_bsel_qos(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.qos;
+}
+
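+/**
+ * Get the group from the Bit Select Extractor Table entry
+ * selected by this work queue entry.
+ *
+ * @param work   pointer to work queue entry
+ * @return       group, or -1 if the bit select extractor is unavailable
+ */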
+static inline int cvmx_pip_get_bsel_grp(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.grp;
+}
+
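+/**
+ * Get the tag type from the Bit Select Extractor Table entry
+ * selected by this work queue entry.
+ *
+ * @param work   pointer to work queue entry
+ * @return       tag type, or -1 if the bit select extractor is unavailable
+ */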
+static inline int cvmx_pip_get_bsel_tt(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+
+	return bsel_tbl.s.tt;
+}
+
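+/**
+ * Get the tag from the Bit Select Extractor Table entry selected by this
+ * work queue entry, combined with the tag fields from the extractor config.
+ *
+ * @param work   pointer to work queue entry
+ * @return       tag, or -1 if the bit select extractor is unavailable
+ */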
+static inline int cvmx_pip_get_bsel_tag(cvmx_wqe_t *work)
+{
+	int index = cvmx_pip_get_bsel_table_index(work);
+	int port = cvmx_wqe_get_port(work);
+	int bit = port & 0x3;
+	int upper_tag = 0;
+	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
+	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
+	cvmx_pip_prt_tagx_t prt_tag;
+
+	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
+	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
+		return -1;
+
+	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
+	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
+
+	prt_tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(port));
+	if (prt_tag.s.inc_prt_flag == 0)
+		upper_tag = bsel_cfg.s.upper_tag;
+	return bsel_tbl.s.tag | ((bsel_cfg.s.tag << 8) & 0xff00) | ((upper_tag << 16) & 0xffff0000);
+}
+
+#endif /*  __CVMX_PIP_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
new file mode 100644
index 000000000000..79b99b0bd7c2
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Resource management for PKI resources.
+ */
+
+#ifndef __CVMX_PKI_RESOURCES_H__
+#define __CVMX_PKI_RESOURCES_H__
+
+/**
+ * This function allocates/reserves a style from pool of global styles per node.
+ * @param node	 node to allocate style from.
+ * @param style	 style to allocate. If -1, the first available style from
+ *		 the style resource is allocated. If style is a positive
+ *		 number in range, that specific style is reserved.
+ * @return	 style number on success, -1 on failure.
+ */
+int cvmx_pki_style_alloc(int node, int style);
+
+/**
+ * This function allocates/reserves a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to allocate cluster group from.
+ * @param cl_grp	cluster group to allocate/reserve. If -1,
+ *			allocate any available cluster group.
+ * @return		cluster group number or -1 on failure
+ */
+int cvmx_pki_cluster_grp_alloc(int node, int cl_grp);
+
+/**
+ * This function allocates/reserves clusters from the per node
+ * cluster resources.
+ * @param node		node to allocate clusters from.
+ * @param num_clusters	number of clusters to allocate.
+ * @param cluster_mask	mask of clusters to allocate/reserve. If -1,
+ *			allocate any available clusters.
+ */
+int cvmx_pki_cluster_alloc(int node, int num_clusters, u64 *cluster_mask);
+
+/**
+ * This function allocates/reserves a pcam entry from node
+ * @param node		node to allocate pcam entry from.
+ * @param index		index of pcam entry (0-191). If -1,
+ *			allocate any available pcam entry.
+ * @param bank		pcam bank to allocate/reserve the pcam entry from
+ * @param cluster_mask	mask of clusters from which the pcam entry is needed.
+ * @return		pcam entry or -1 on failure
+ */
+int cvmx_pki_pcam_entry_alloc(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base offset in the qpg table. If -1, the first
+ *			available qpg base offset is allocated. If base_offset
+ *			is a positive number in range, that specific base
+ *			offset is reserved.
+ * @param count		number of consecutive qpg entries to allocate,
+ *			starting from the base offset.
+ * @return		qpg table base offset number on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_alloc(int node, int base_offset, int count);
+
+/**
+ * This function frees a style from pool of global styles per node.
+ * @param node	 node to free style from.
+ * @param style	 style to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_style_free(int node, int style);
+
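+/*
+ * Typical pairing sketch (illustrative only): reserve any free style on
+ * node 0, use it, then release it:
+ *
+ *	int style = cvmx_pki_style_alloc(0, -1);
+ *
+ *	if (style >= 0) {
+ *		... use style ...
+ *		cvmx_pki_style_free(0, style);
+ *	}
+ */
+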
+/**
+ * This function frees a cluster group from the per node
+ * cluster group resources.
+ * @param node		node to free cluster group from.
+ * @param cl_grp	cluster group to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_grp_free(int node, int cl_grp);
+
+/**
+ * This function frees QPG table entries per node.
+ * @param node		node number.
+ * @param base_offset	base offset in the qpg table to free.
+ * @param count		number of consecutive qpg entries to free,
+ *			starting from the base offset.
+ * @return		0 on success, -1 on failure.
+ */
+int cvmx_pki_qpg_entry_free(int node, int base_offset, int count);
+
+/**
+ * This function frees clusters from the per node
+ * cluster resources.
+ * @param node		node to free clusters from.
+ * @param cluster_mask	mask of clusters to free
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_cluster_free(int node, u64 cluster_mask);
+
+/**
+ * This function frees a pcam entry from node
+ * @param node		node to free the pcam entry from.
+ * @param index		index of pcam entry (0-191) to free.
+ * @param bank		pcam bank to free the pcam entry from
+ * @param cluster_mask	mask of clusters from which the pcam entry is freed.
+ * @return		0 on success or -1 on failure
+ */
+int cvmx_pki_pcam_entry_free(int node, int index, int bank, u64 cluster_mask);
+
+/**
+ * This function allocates/reserves a bpid from pool of global bpid per node.
+ * @param node	node to allocate bpid from.
+ * @param bpid	bpid to allocate. If -1, the first available bpid from
+ *		the bpid resource is allocated. If bpid is a positive
+ *		number in range, that specific bpid is reserved.
+ * @return	bpid number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_bpid_alloc(int node, int bpid);
+
+/**
+ * This function frees a bpid from pool of global bpid per node.
+ * @param node	 node to free bpid from.
+ * @param bpid	 bpid to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_bpid_free(int node, int bpid);
+
+/**
+ * This function allocates/reserves an index from pool of global MTAG-IDX per node.
+ * @param node	node to allocate index from.
+ * @param idx	index to allocate. If -1, the first available index
+ *		is allocated.
+ * @return	MTAG index number on success,
+ *		-1 on alloc failure.
+ *		-2 on resource already reserved.
+ */
+int cvmx_pki_mtag_idx_alloc(int node, int idx);
+
+/**
+ * This function frees an index from pool of global MTAG-IDX per node.
+ * @param node	 node to free the index from.
+ * @param idx	 MTAG index to free
+ * @return	 0 on success, -1 on failure.
+ */
+int cvmx_pki_mtag_idx_free(int node, int idx);
+
+/**
+ * This function frees all the PKI software resources
+ * (clusters, styles, qpg entries, pcam entries etc.) for the specified node.
+ */
+void __cvmx_pki_global_rsrc_free(int node);
+
+#endif /*  __CVMX_PKI_RESOURCES_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
new file mode 100644
index 000000000000..c1feb55a1f01
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
@@ -0,0 +1,970 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Packet Input Data unit.
+ */
+
+#ifndef __CVMX_PKI_H__
+#define __CVMX_PKI_H__
+
+#include "cvmx-fpa3.h"
+#include "cvmx-helper-util.h"
+#include "cvmx-helper-cfg.h"
+#include "cvmx-error.h"
+
+/* PKI AURA and BPID count are equal to FPA AURA count */
+#define CVMX_PKI_NUM_AURA	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_BPID	       (cvmx_fpa3_num_auras())
+#define CVMX_PKI_NUM_SSO_GROUP	       (cvmx_sso_num_xgrp())
+#define CVMX_PKI_NUM_CLUSTER_GROUP_MAX 1
+#define CVMX_PKI_NUM_CLUSTER_GROUP     (cvmx_pki_num_cl_grp())
+#define CVMX_PKI_NUM_CLUSTER	       (cvmx_pki_num_clusters())
+
+/* FIXME: Reduce some of these values, convert to routines XXX */
+#define CVMX_PKI_NUM_CHANNEL	    4096
+#define CVMX_PKI_NUM_PKIND	    64
+#define CVMX_PKI_NUM_INTERNAL_STYLE 256
+#define CVMX_PKI_NUM_FINAL_STYLE    64
+#define CVMX_PKI_NUM_QPG_ENTRY	    2048
+#define CVMX_PKI_NUM_MTAG_IDX	    (32 / 4) /* 32 registers grouped by 4 */
+#define CVMX_PKI_NUM_LTYPE	    32
+#define CVMX_PKI_NUM_PCAM_BANK	    2
+#define CVMX_PKI_NUM_PCAM_ENTRY	    192
+#define CVMX_PKI_NUM_FRAME_CHECK    2
+#define CVMX_PKI_NUM_BELTYPE	    32
+#define CVMX_PKI_MAX_FRAME_SIZE	    65535
+#define CVMX_PKI_FIND_AVAL_ENTRY    (-1)
+#define CVMX_PKI_CLUSTER_ALL	    0xf
+
+#ifdef CVMX_SUPPORT_SEPARATE_CLUSTER_CONFIG
+#define CVMX_PKI_TOTAL_PCAM_ENTRY                                                                  \
+	((CVMX_PKI_NUM_CLUSTER) * (CVMX_PKI_NUM_PCAM_BANK) * (CVMX_PKI_NUM_PCAM_ENTRY))
+#else
+#define CVMX_PKI_TOTAL_PCAM_ENTRY (CVMX_PKI_NUM_PCAM_BANK * CVMX_PKI_NUM_PCAM_ENTRY)
+#endif
+
+static inline unsigned int cvmx_pki_num_clusters(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 2;
+	return 4;
+}
+
+static inline unsigned int cvmx_pki_num_cl_grp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
+	    OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 1;
+	return 0;
+}
+
+enum cvmx_pki_pkind_parse_mode {
+	CVMX_PKI_PARSE_LA_TO_LG = 0,  /* Parse LA(L2) to LG */
+	CVMX_PKI_PARSE_LB_TO_LG = 1,  /* Parse LB(custom) to LG */
+	CVMX_PKI_PARSE_LC_TO_LG = 3,  /* Parse LC(L3) to LG */
+	CVMX_PKI_PARSE_LG = 0x3f,     /* Parse LG */
+	CVMX_PKI_PARSE_NOTHING = 0x7f /* Parse nothing */
+};
+
+enum cvmx_pki_parse_mode_chg {
+	CVMX_PKI_PARSE_NO_CHG = 0x0,
+	CVMX_PKI_PARSE_SKIP_TO_LB = 0x1,
+	CVMX_PKI_PARSE_SKIP_TO_LC = 0x3,
+	CVMX_PKI_PARSE_SKIP_TO_LD = 0x7,
+	CVMX_PKI_PARSE_SKIP_TO_LG = 0x3f,
+	CVMX_PKI_PARSE_SKIP_ALL = 0x7f,
+};
+
+enum cvmx_pki_l2_len_mode { PKI_L2_LENCHK_EQUAL_GREATER = 0, PKI_L2_LENCHK_EQUAL_ONLY };
+
+enum cvmx_pki_cache_mode {
+	CVMX_PKI_OPC_MODE_STT = 0LL,	  /* All blocks write through to DRAM */
+	CVMX_PKI_OPC_MODE_STF = 1LL,	  /* All blocks into L2 */
+	CVMX_PKI_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
+	CVMX_PKI_OPC_MODE_STF2_STT = 3LL  /* 1st, 2nd blocks L2, rest DRAM */
+};
+
+/**
+ * Tag type definitions
+ */
+enum cvmx_sso_tag_type {
+	CVMX_SSO_TAG_TYPE_ORDERED = 0L,
+	CVMX_SSO_TAG_TYPE_ATOMIC = 1L,
+	CVMX_SSO_TAG_TYPE_UNTAGGED = 2L,
+	CVMX_SSO_TAG_TYPE_EMPTY = 3L
+};
+
+enum cvmx_pki_qpg_qos {
+	CVMX_PKI_QPG_QOS_NONE = 0,
+	CVMX_PKI_QPG_QOS_VLAN,
+	CVMX_PKI_QPG_QOS_MPLS,
+	CVMX_PKI_QPG_QOS_DSA_SRC,
+	CVMX_PKI_QPG_QOS_DIFFSERV,
+	CVMX_PKI_QPG_QOS_HIGIG,
+};
+
+enum cvmx_pki_wqe_vlan { CVMX_PKI_USE_FIRST_VLAN = 0, CVMX_PKI_USE_SECOND_VLAN };
+
+/**
+ * Controls how the PKI statistics counters are handled
+ * The PKI_STAT*_X registers can be indexed either by port kind (pkind), or
+ * final style. (Does not apply to the PKI_STAT_INB* registers.)
+ *    0 = X represents the packet's pkind
+ *    1 = X represents the low 6-bits of the packet's final style
+ */
+enum cvmx_pki_stats_mode { CVMX_PKI_STAT_MODE_PKIND, CVMX_PKI_STAT_MODE_STYLE };
+
+enum cvmx_pki_fpa_wait { CVMX_PKI_DROP_PKT, CVMX_PKI_WAIT_PKT };
+
+#define PKI_BELTYPE_E__NONE_M 0x0
+#define PKI_BELTYPE_E__MISC_M 0x1
+#define PKI_BELTYPE_E__IP4_M  0x2
+#define PKI_BELTYPE_E__IP6_M  0x3
+#define PKI_BELTYPE_E__TCP_M  0x4
+#define PKI_BELTYPE_E__UDP_M  0x5
+#define PKI_BELTYPE_E__SCTP_M 0x6
+#define PKI_BELTYPE_E__SNAP_M 0x7
+
+/* PKI_BELTYPE_E_t */
+enum cvmx_pki_beltype {
+	CVMX_PKI_BELTYPE_NONE = PKI_BELTYPE_E__NONE_M,
+	CVMX_PKI_BELTYPE_MISC = PKI_BELTYPE_E__MISC_M,
+	CVMX_PKI_BELTYPE_IP4 = PKI_BELTYPE_E__IP4_M,
+	CVMX_PKI_BELTYPE_IP6 = PKI_BELTYPE_E__IP6_M,
+	CVMX_PKI_BELTYPE_TCP = PKI_BELTYPE_E__TCP_M,
+	CVMX_PKI_BELTYPE_UDP = PKI_BELTYPE_E__UDP_M,
+	CVMX_PKI_BELTYPE_SCTP = PKI_BELTYPE_E__SCTP_M,
+	CVMX_PKI_BELTYPE_SNAP = PKI_BELTYPE_E__SNAP_M,
+	CVMX_PKI_BELTYPE_MAX = CVMX_PKI_BELTYPE_SNAP
+};
+
+struct cvmx_pki_frame_len {
+	u16 maxlen;
+	u16 minlen;
+};
+
+struct cvmx_pki_tag_fields {
+	u64 layer_g_src : 1;
+	u64 layer_f_src : 1;
+	u64 layer_e_src : 1;
+	u64 layer_d_src : 1;
+	u64 layer_c_src : 1;
+	u64 layer_b_src : 1;
+	u64 layer_g_dst : 1;
+	u64 layer_f_dst : 1;
+	u64 layer_e_dst : 1;
+	u64 layer_d_dst : 1;
+	u64 layer_c_dst : 1;
+	u64 layer_b_dst : 1;
+	u64 input_port : 1;
+	u64 mpls_label : 1;
+	u64 first_vlan : 1;
+	u64 second_vlan : 1;
+	u64 ip_prot_nexthdr : 1;
+	u64 tag_sync : 1;
+	u64 tag_spi : 1;
+	u64 tag_gtp : 1;
+	u64 tag_vni : 1;
+};
+
+struct cvmx_pki_pkind_parse {
+	u64 mpls_en : 1;
+	u64 inst_hdr : 1;
+	u64 lg_custom : 1;
+	u64 fulc_en : 1;
+	u64 dsa_en : 1;
+	u64 hg2_en : 1;
+	u64 hg_en : 1;
+};
+
+struct cvmx_pki_pool_config {
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	u64 buffer_size;
+	u64 buffer_count;
+};
+
+struct cvmx_pki_qpg_config {
+	int qpg_base;
+	int port_add;
+	int aura_num;
+	int grp_ok;
+	int grp_bad;
+	int grptag_ok;
+	int grptag_bad;
+};
+
+struct cvmx_pki_aura_config {
+	int aura_num;
+	int pool_num;
+	cvmx_fpa3_pool_t pool;
+	cvmx_fpa3_gaura_t aura;
+	int buffer_count;
+};
+
+struct cvmx_pki_cluster_grp_config {
+	int grp_num;
+	u64 cluster_mask; /* Bit mask of clusters assigned to this cluster group */
+};
+
+struct cvmx_pki_sso_grp_config {
+	int group;
+	int priority;
+	int weight;
+	int affinity;
+	u64 core_mask;
+	u8 core_mask_set;
+};
+
+/* This is a per-style structure for configuring port parameters.
+ * It is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that modifying
+ * that style will modify the respective parameters for all the ports
+ * which are using this style.
+ */
+struct cvmx_pki_style_parm {
+	bool ip6_udp_opt;
+	bool lenerr_en;
+	bool maxerr_en;
+	bool minerr_en;
+	u8 lenerr_eqpad;
+	u8 minmax_sel;
+	bool qpg_dis_grptag;
+	bool fcs_strip;
+	bool fcs_chk;
+	bool rawdrp;
+	bool force_drop;
+	bool nodrop;
+	bool qpg_dis_padd;
+	bool qpg_dis_grp;
+	bool qpg_dis_aura;
+	u16 qpg_base;
+	enum cvmx_pki_qpg_qos qpg_qos;
+	u8 qpg_port_sh;
+	u8 qpg_port_msb;
+	u8 apad_nip;
+	u8 wqe_vs;
+	enum cvmx_sso_tag_type tag_type;
+	bool pkt_lend;
+	u8 wqe_hsz;
+	u16 wqe_skip;
+	u16 first_skip;
+	u16 later_skip;
+	enum cvmx_pki_cache_mode cache_mode;
+	u8 dis_wq_dat;
+	u64 mbuff_size;
+	bool len_lg;
+	bool len_lf;
+	bool len_le;
+	bool len_ld;
+	bool len_lc;
+	bool len_lb;
+	bool csum_lg;
+	bool csum_lf;
+	bool csum_le;
+	bool csum_ld;
+	bool csum_lc;
+	bool csum_lb;
+};
+
+/* This is a per-style structure for configuring a port's tag configuration.
+ * It is a kind of profile which can be assigned to any port.
+ * If multiple ports are assigned the same style, be aware that modifying
+ * that style will modify the respective parameters for all the ports
+ * which are using this style.
+ */
+enum cvmx_pki_mtag_ptrsel {
+	CVMX_PKI_MTAG_PTRSEL_SOP = 0,
+	CVMX_PKI_MTAG_PTRSEL_LA = 8,
+	CVMX_PKI_MTAG_PTRSEL_LB = 9,
+	CVMX_PKI_MTAG_PTRSEL_LC = 10,
+	CVMX_PKI_MTAG_PTRSEL_LD = 11,
+	CVMX_PKI_MTAG_PTRSEL_LE = 12,
+	CVMX_PKI_MTAG_PTRSEL_LF = 13,
+	CVMX_PKI_MTAG_PTRSEL_LG = 14,
+	CVMX_PKI_MTAG_PTRSEL_VL = 15,
+};
+
+struct cvmx_pki_mask_tag {
+	bool enable;
+	int base;   /* CVMX_PKI_MTAG_PTRSEL_XXX */
+	int offset; /* Offset from base. */
+	u64 val;    /* Bitmask: 1 = enabled, 0 = disabled,
+		     * for each byte in the 64-byte array. */
+};
+
+struct cvmx_pki_style_tag_cfg {
+	struct cvmx_pki_tag_fields tag_fields;
+	struct cvmx_pki_mask_tag mask_tag[4];
+};
+
+struct cvmx_pki_style_config {
+	struct cvmx_pki_style_parm parm_cfg;
+	struct cvmx_pki_style_tag_cfg tag_cfg;
+};
+
+struct cvmx_pki_pkind_config {
+	u8 cluster_grp;
+	bool fcs_pres;
+	struct cvmx_pki_pkind_parse parse_en;
+	enum cvmx_pki_pkind_parse_mode initial_parse_mode;
+	u8 fcs_skip;
+	u8 inst_skip;
+	int initial_style;
+	bool custom_l2_hdr;
+	u8 l2_scan_offset;
+	u64 lg_scan_offset;
+};
+
+struct cvmx_pki_port_config {
+	struct cvmx_pki_pkind_config pkind_cfg;
+	struct cvmx_pki_style_config style_cfg;
+};
+
+struct cvmx_pki_global_parse {
+	u64 virt_pen : 1;
+	u64 clg_pen : 1;
+	u64 cl2_pen : 1;
+	u64 l4_pen : 1;
+	u64 il3_pen : 1;
+	u64 l3_pen : 1;
+	u64 mpls_pen : 1;
+	u64 fulc_pen : 1;
+	u64 dsa_pen : 1;
+	u64 hg_pen : 1;
+};
+
+struct cvmx_pki_tag_sec {
+	u16 dst6;
+	u16 src6;
+	u16 dst;
+	u16 src;
+};
+
+struct cvmx_pki_global_config {
+	u64 cluster_mask[CVMX_PKI_NUM_CLUSTER_GROUP_MAX];
+	enum cvmx_pki_stats_mode stat_mode;
+	enum cvmx_pki_fpa_wait fpa_wait;
+	struct cvmx_pki_global_parse gbl_pen;
+	struct cvmx_pki_tag_sec tag_secret;
+	struct cvmx_pki_frame_len frm_len[CVMX_PKI_NUM_FRAME_CHECK];
+	enum cvmx_pki_beltype ltype_map[CVMX_PKI_NUM_BELTYPE];
+	int pki_enable;
+};
+
+#define CVMX_PKI_PCAM_TERM_E_NONE_M	 0x0
+#define CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M 0x2
+#define CVMX_PKI_PCAM_TERM_E_HIGIGD_M	 0x4
+#define CVMX_PKI_PCAM_TERM_E_HIGIG_M	 0x5
+#define CVMX_PKI_PCAM_TERM_E_SMACH_M	 0x8
+#define CVMX_PKI_PCAM_TERM_E_SMACL_M	 0x9
+#define CVMX_PKI_PCAM_TERM_E_DMACH_M	 0xA
+#define CVMX_PKI_PCAM_TERM_E_DMACL_M	 0xB
+#define CVMX_PKI_PCAM_TERM_E_GLORT_M	 0x12
+#define CVMX_PKI_PCAM_TERM_E_DSA_M	 0x13
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M	 0x18
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M	 0x19
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M	 0x1A
+#define CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M	 0x1B
+#define CVMX_PKI_PCAM_TERM_E_MPLS0_M	 0x1E
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M	 0x1F
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M	 0x20
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPML_M	 0x21
+#define CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M	 0x22
+#define CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M	 0x23
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M	 0x24
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M	 0x25
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPML_M	 0x26
+#define CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M	 0x27
+#define CVMX_PKI_PCAM_TERM_E_LD_VNI_M	 0x28
+#define CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M 0x2B
+#define CVMX_PKI_PCAM_TERM_E_LF_SPI_M	 0x2E
+#define CVMX_PKI_PCAM_TERM_E_L4_SPORT_M	 0x2f
+#define CVMX_PKI_PCAM_TERM_E_L4_PORT_M	 0x30
+#define CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M 0x39
+
+enum cvmx_pki_term {
+	CVMX_PKI_PCAM_TERM_NONE = CVMX_PKI_PCAM_TERM_E_NONE_M,
+	CVMX_PKI_PCAM_TERM_L2_CUSTOM = CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M,
+	CVMX_PKI_PCAM_TERM_HIGIGD = CVMX_PKI_PCAM_TERM_E_HIGIGD_M,
+	CVMX_PKI_PCAM_TERM_HIGIG = CVMX_PKI_PCAM_TERM_E_HIGIG_M,
+	CVMX_PKI_PCAM_TERM_SMACH = CVMX_PKI_PCAM_TERM_E_SMACH_M,
+	CVMX_PKI_PCAM_TERM_SMACL = CVMX_PKI_PCAM_TERM_E_SMACL_M,
+	CVMX_PKI_PCAM_TERM_DMACH = CVMX_PKI_PCAM_TERM_E_DMACH_M,
+	CVMX_PKI_PCAM_TERM_DMACL = CVMX_PKI_PCAM_TERM_E_DMACL_M,
+	CVMX_PKI_PCAM_TERM_GLORT = CVMX_PKI_PCAM_TERM_E_GLORT_M,
+	CVMX_PKI_PCAM_TERM_DSA = CVMX_PKI_PCAM_TERM_E_DSA_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE0 = CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE1 = CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE2 = CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M,
+	CVMX_PKI_PCAM_TERM_ETHTYPE3 = CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M,
+	CVMX_PKI_PCAM_TERM_MPLS0 = CVMX_PKI_PCAM_TERM_E_MPLS0_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPHH = CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPMH = CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPML = CVMX_PKI_PCAM_TERM_E_L3_SIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_SIPLL = CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M,
+	CVMX_PKI_PCAM_TERM_L3_FLAGS = CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPHH = CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPMH = CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPML = CVMX_PKI_PCAM_TERM_E_L3_DIPML_M,
+	CVMX_PKI_PCAM_TERM_L3_DIPLL = CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M,
+	CVMX_PKI_PCAM_TERM_LD_VNI = CVMX_PKI_PCAM_TERM_E_LD_VNI_M,
+	CVMX_PKI_PCAM_TERM_IL3_FLAGS = CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M,
+	CVMX_PKI_PCAM_TERM_LF_SPI = CVMX_PKI_PCAM_TERM_E_LF_SPI_M,
+	CVMX_PKI_PCAM_TERM_L4_PORT = CVMX_PKI_PCAM_TERM_E_L4_PORT_M,
+	CVMX_PKI_PCAM_TERM_L4_SPORT = CVMX_PKI_PCAM_TERM_E_L4_SPORT_M,
+	CVMX_PKI_PCAM_TERM_LG_CUSTOM = CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M
+};
+
+#define CVMX_PKI_DMACH_SHIFT	  32
+#define CVMX_PKI_DMACH_MASK	  cvmx_build_mask(16)
+#define CVMX_PKI_DMACL_MASK	  CVMX_PKI_DATA_MASK_32
+#define CVMX_PKI_DATA_MASK_32	  cvmx_build_mask(32)
+#define CVMX_PKI_DATA_MASK_16	  cvmx_build_mask(16)
+#define CVMX_PKI_DMAC_MATCH_EXACT cvmx_build_mask(48)
+
+struct cvmx_pki_pcam_input {
+	u64 style;
+	u64 style_mask; /* bits: 1-match, 0-don't care */
+	enum cvmx_pki_term field;
+	u32 field_mask; /* bits: 1-match, 0-don't care */
+	u64 data;
+	u64 data_mask; /* bits: 1-match, 0-don't care */
+};
+
+struct cvmx_pki_pcam_action {
+	enum cvmx_pki_parse_mode_chg parse_mode_chg;
+	enum cvmx_pki_layer_type layer_type_set;
+	int style_add;
+	int parse_flag_set;
+	int pointer_advance;
+};
+
+struct cvmx_pki_pcam_config {
+	int in_use;
+	int entry_num;
+	u64 cluster_mask;
+	struct cvmx_pki_pcam_input pcam_input;
+	struct cvmx_pki_pcam_action pcam_action;
+};
+
+/**
+ * Statistics counters for a port
+ */
+struct cvmx_pki_port_stats {
+	u64 dropped_octets;
+	u64 dropped_packets;
+	u64 pci_raw_packets;
+	u64 octets;
+	u64 packets;
+	u64 multicast_packets;
+	u64 broadcast_packets;
+	u64 len_64_packets;
+	u64 len_65_127_packets;
+	u64 len_128_255_packets;
+	u64 len_256_511_packets;
+	u64 len_512_1023_packets;
+	u64 len_1024_1518_packets;
+	u64 len_1519_max_packets;
+	u64 fcs_align_err_packets;
+	u64 runt_packets;
+	u64 runt_crc_packets;
+	u64 oversize_packets;
+	u64 oversize_crc_packets;
+	u64 inb_packets;
+	u64 inb_octets;
+	u64 inb_errors;
+	u64 mcast_l2_red_packets;
+	u64 bcast_l2_red_packets;
+	u64 mcast_l3_red_packets;
+	u64 bcast_l3_red_packets;
+};
+
+/**
+ * PKI Packet Instruction Header Structure (PKI_INST_HDR_S)
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 w : 1;    /* INST_HDR size: 0 = 2 bytes, 1 = 4 or 8 bytes */
+		u64 raw : 1;  /* RAW packet indicator in WQE[RAW]: 1 = enable */
+		u64 utag : 1; /* Use INST_HDR[TAG] to compute WQE[TAG]: 1 = enable */
+		u64 uqpg : 1; /* Use INST_HDR[QPG] to compute QPG: 1 = enable */
+		u64 rsvd1 : 1;
+		u64 pm : 3; /* Packet parsing mode. Legal values = 0x0..0x7 */
+		u64 sl : 8; /* Number of bytes in INST_HDR. */
+		/* The following fields are not present, if INST_HDR[W] = 0: */
+		u64 utt : 1; /* Use INST_HDR[TT] to compute WQE[TT]: 1 = enable */
+		u64 tt : 2;  /* INST_HDR[TT] => WQE[TT], if INST_HDR[UTT] = 1 */
+		u64 rsvd2 : 2;
+		u64 qpg : 11; /* INST_HDR[QPG] => QPG, if INST_HDR[UQPG] = 1 */
+		u64 tag : 32; /* INST_HDR[TAG] => WQE[TAG], if INST_HDR[UTAG] = 1 */
+	} s;
+} cvmx_pki_inst_hdr_t;
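+
+/*
+ * Usage sketch (illustrative only; the field values below are hypothetical
+ * and not part of the original API docs): build a wide (W=1) instruction
+ * header that supplies the WQE tag and tag type:
+ *
+ *	cvmx_pki_inst_hdr_t hdr;
+ *
+ *	hdr.u64 = 0;
+ *	hdr.s.w = 1;
+ *	hdr.s.sl = 8;
+ *	hdr.s.utag = 1;
+ *	hdr.s.utt = 1;
+ *	hdr.s.tt = CVMX_SSO_TAG_TYPE_ORDERED;
+ *	hdr.s.tag = 0x1234;
+ */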
+
+/**
+ * This function assigns the clusters to a group; a pkind can later be
+ * configured to use that group, depending on the number of clusters the
+ * pkind would use. A given cluster can only be enabled in a single cluster
+ * group. The number of clusters assigned to the group determines how many
+ * engines can work in parallel to process the packet. Each cluster can
+ * process x MPPS.
+ *
+ * @param node	Node
+ * @param cluster_group Group to attach clusters to.
+ * @param cluster_mask The mask of clusters which need to be assigned to the group.
+ */
+static inline int cvmx_pki_attach_cluster_to_group(int node, u64 cluster_group, u64 cluster_mask)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cluster_group >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: config cluster group %d", (int)cluster_group);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group));
+	pki_cl_grp.s.clusters = cluster_mask;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group), pki_cl_grp.u64);
+	return 0;
+}
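+
+/*
+ * Usage sketch (illustrative only; node and mask values are hypothetical):
+ * attach clusters 0 and 1 (mask 0x3) to cluster group 0 on node 0, then
+ * enable parsing for the group once configuration is complete:
+ *
+ *	if (cvmx_pki_attach_cluster_to_group(0, 0, 0x3) == 0)
+ *		cvmx_pki_parse_enable(0, 0);
+ */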
+
+static inline void cvmx_pki_write_global_parse(int node, struct cvmx_pki_global_parse gbl_pen)
+{
+	cvmx_pki_gbl_pen_t gbl_pen_reg;
+
+	gbl_pen_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_GBL_PEN);
+	gbl_pen_reg.s.virt_pen = gbl_pen.virt_pen;
+	gbl_pen_reg.s.clg_pen = gbl_pen.clg_pen;
+	gbl_pen_reg.s.cl2_pen = gbl_pen.cl2_pen;
+	gbl_pen_reg.s.l4_pen = gbl_pen.l4_pen;
+	gbl_pen_reg.s.il3_pen = gbl_pen.il3_pen;
+	gbl_pen_reg.s.l3_pen = gbl_pen.l3_pen;
+	gbl_pen_reg.s.mpls_pen = gbl_pen.mpls_pen;
+	gbl_pen_reg.s.fulc_pen = gbl_pen.fulc_pen;
+	gbl_pen_reg.s.dsa_pen = gbl_pen.dsa_pen;
+	gbl_pen_reg.s.hg_pen = gbl_pen.hg_pen;
+	cvmx_write_csr_node(node, CVMX_PKI_GBL_PEN, gbl_pen_reg.u64);
+}
+
+static inline void cvmx_pki_write_tag_secret(int node, struct cvmx_pki_tag_sec tag_secret)
+{
+	cvmx_pki_tag_secret_t tag_secret_reg;
+
+	tag_secret_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_TAG_SECRET);
+	tag_secret_reg.s.dst6 = tag_secret.dst6;
+	tag_secret_reg.s.src6 = tag_secret.src6;
+	tag_secret_reg.s.dst = tag_secret.dst;
+	tag_secret_reg.s.src = tag_secret.src;
+	cvmx_write_csr_node(node, CVMX_PKI_TAG_SECRET, tag_secret_reg.u64);
+}
+
+static inline void cvmx_pki_write_ltype_map(int node, enum cvmx_pki_layer_type layer,
+					    enum cvmx_pki_beltype backend)
+{
+	cvmx_pki_ltypex_map_t ltype_map;
+
+	if (layer > CVMX_PKI_LTYPE_E_MAX || backend > CVMX_PKI_BELTYPE_MAX) {
+		debug("ERROR: invalid ltype beltype mapping\n");
+		return;
+	}
+	ltype_map.u64 = cvmx_read_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer));
+	ltype_map.s.beltype = backend;
+	cvmx_write_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer), ltype_map.u64);
+}
+
+/**
+ * This function enables the cluster group to start parsing.
+ *
+ * @param node    Node number.
+ * @param cl_grp  Cluster group to enable parsing.
+ */
+static inline int cvmx_pki_parse_enable(int node, unsigned int cl_grp)
+{
+	cvmx_pki_icgx_cfg_t pki_cl_grp;
+
+	if (cl_grp >= CVMX_PKI_NUM_CLUSTER_GROUP) {
+		debug("ERROR: pki parse en group %d", (int)cl_grp);
+		return -1;
+	}
+	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp));
+	pki_cl_grp.s.pena = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp), pki_cl_grp.u64);
+	return 0;
+}
+
+/**
+ * This function enables the PKI to send bpid level backpressure to CN78XX inputs.
+ *
+ * @param node Node number.
+ */
+static inline void cvmx_pki_enable_backpressure(int node)
+{
+	cvmx_pki_buf_ctl_t pki_buf_ctl;
+
+	pki_buf_ctl.u64 = cvmx_read_csr_node(node, CVMX_PKI_BUF_CTL);
+	pki_buf_ctl.s.pbp_en = 1;
+	cvmx_write_csr_node(node, CVMX_PKI_BUF_CTL, pki_buf_ctl.u64);
+}
+
+/**
+ * Clear the statistics counters for a port.
+ *
+ * @param node Node number.
+ * @param port Port number (ipd_port) to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ */
+void cvmx_pki_clear_port_stats(int node, u64 port);
+
+/**
+ * Get the status counters for index from PKI.
+ *
+ * @param node	  Node number.
+ * @param index   PKIND number, if PKI_STATS_CTL:mode = 0 or
+ *     style(flow) number, if PKI_STATS_CTL:mode = 1
+ * @param status  Where to put the results.
+ */
+void cvmx_pki_get_stats(int node, int index, struct cvmx_pki_port_stats *status);
+
+/**
+ * Get the statistics counters for a port.
+ *
+ * @param node	 Node number
+ * @param port   Port number (ipd_port) to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_port_stats(int node, u64 port, struct cvmx_pki_port_stats *status)
+{
+	int xipd = cvmx_helper_node_to_ipd_port(node, port);
+	int xiface = cvmx_helper_get_interface_num(xipd);
+	int index = cvmx_helper_get_interface_index_num(port);
+	int pknd = cvmx_helper_get_pknd(xiface, index);
+
+	cvmx_pki_get_stats(node, pknd, status);
+}
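+
+/*
+ * Usage sketch (illustrative only; assumes PKI_STATS_CTL:mode == 0 and
+ * that ipd_port holds a valid port number):
+ *
+ *	struct cvmx_pki_port_stats stats;
+ *
+ *	cvmx_pki_get_port_stats(0, ipd_port, &stats);
+ *	printf("rx: %llu packets, %llu octets\n",
+ *	       (unsigned long long)stats.packets,
+ *	       (unsigned long long)stats.octets);
+ */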
+
+/**
+ * Get the statistics counters for a flow represented by style in PKI.
+ *
+ * @param node Node number.
+ * @param style_num Style number to get statistics for.
+ *    Make sure PKI_STATS_CTL:mode is set to 1 for collecting per style/flow stats.
+ * @param status Where to put the results.
+ */
+static inline void cvmx_pki_get_flow_stats(int node, u64 style_num,
+					   struct cvmx_pki_port_stats *status)
+{
+	cvmx_pki_get_stats(node, style_num, status);
+}
+
+/**
+ * Show integrated PKI configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_config_dump(unsigned int node);
+
+/**
+ * Show integrated PKI statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_pki_stats_dump(unsigned int node);
+
+/**
+ * Clear PKI statistics.
+ *
+ * @param node	   node number
+ */
+void cvmx_pki_stats_clear(unsigned int node);
+
+/**
+ * This function enables PKI.
+ *
+ * @param node	 node to enable pki in.
+ */
+void cvmx_pki_enable(int node);
+
+/**
+ * This function disables PKI.
+ *
+ * @param node	node to disable pki in.
+ */
+void cvmx_pki_disable(int node);
+
+/**
+ * This function soft resets PKI.
+ *
+ * @param node	node to enable pki in.
+ */
+void cvmx_pki_reset(int node);
+
+/**
+ * This function sets the clusters in PKI.
+ *
+ * @param node	node to set clusters in.
+ */
+int cvmx_pki_setup_clusters(int node);
+
+/**
+ * This function reads global configuration of PKI block.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to read global configuration
+ */
+void cvmx_pki_read_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
+
+/**
+ * This function writes global configuration of PKI into hw.
+ *
+ * @param node    Node number.
+ * @param gbl_cfg Pointer to struct to global configuration
+ */
+void cvmx_pki_write_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
+
+/**
+ * This function reads per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing pkind configuration read
+ *     from hardware.
+ */
+int cvmx_pki_read_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
+
+/**
+ * This function writes per pkind parameters in hardware which defines how
+ * the incoming packet is processed.
+ *
+ * @param node   Node number.
+ * @param pkind  PKI supports a large number of incoming interfaces and packets
+ *     arriving on different interfaces or channels may want to be processed
+ *     differently. PKI uses the pkind to determine how the incoming packet
+ *     is processed.
+ * @param pkind_cfg	Pointer to struct containing pkind configuration that
+ *     needs to be written to hardware.
+ */
+int cvmx_pki_write_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
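+
+/*
+ * Usage sketch (illustrative only; pkind 0 on node 0 is a hypothetical
+ * example): read-modify-write of a pkind configuration, here enabling
+ * instruction header parsing:
+ *
+ *	struct cvmx_pki_pkind_config cfg;
+ *
+ *	if (cvmx_pki_read_pkind_config(0, 0, &cfg) == 0) {
+ *		cfg.parse_en.inst_hdr = 1;
+ *		cvmx_pki_write_pkind_config(0, 0, &cfg);
+ *	}
+ */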
+
+/**
+ * This function reads parameters associated with tag configuration in hardware.
+ *
+ * @param node	 Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_read_tag_config(int node, int style, u64 cluster_mask,
+			      struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function writes/configures parameters associated with tag
+ * configuration in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure tag for.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param tag_cfg  Pointer to tag configuration struct.
+ */
+void cvmx_pki_write_tag_config(int node, int style, u64 cluster_mask,
+			       struct cvmx_pki_style_tag_cfg *tag_cfg);
+
+/**
+ * This function reads parameters associated with style in hardware.
+ *
+ * @param node	Node number.
+ * @param style  Style to read from.
+ * @param cluster_mask  Mask of clusters style belongs to.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_read_style_config(int node, int style, u64 cluster_mask,
+				struct cvmx_pki_style_config *style_cfg);
+
+/**
+ * This function writes/configures parameters associated with style in hardware.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param cluster_mask  Mask of clusters to configure the style for.
+ * @param style_cfg  Pointer to style config struct.
+ */
+void cvmx_pki_write_style_config(int node, u64 style, u64 cluster_mask,
+				 struct cvmx_pki_style_config *style_cfg);
+/**
+ * This function reads qpg entry at specified offset from qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to read from.
+ * @param qpg_cfg  Pointer to structure containing qpg values
+ */
+int cvmx_pki_read_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function writes qpg entry at specified offset in qpg table
+ *
+ * @param node  Node number.
+ * @param offset  Offset in qpg table to write to.
+ * @param qpg_cfg  Pointer to structure containing qpg values.
+ */
+void cvmx_pki_write_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
+
+/**
+ * This function writes pcam entry at given offset in pcam table in hardware
+ *
+ * @param node  Node number.
+ * @param index	 Offset in pcam table.
+ * @param cluster_mask  Mask of clusters in which to write pcam entry.
+ * @param input  Input keys to pcam match passed as struct.
+ * @param action  PCAM match action passed as struct
+ */
+int cvmx_pki_pcam_write_entry(int node, int index, u64 cluster_mask,
+			      struct cvmx_pki_pcam_input input, struct cvmx_pki_pcam_action action);
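+
+/*
+ * Usage sketch (illustrative only; node and index are caller-supplied, and
+ * the data/mask layout below is a placeholder, as the exact bit positions
+ * depend on the selected PCAM term): match Ethertype slot 0 on the
+ * clusters in mask 0xf without changing the parse mode:
+ *
+ *	struct cvmx_pki_pcam_input in = { 0 };
+ *	struct cvmx_pki_pcam_action act = { 0 };
+ *
+ *	in.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
+ *	in.field_mask = 0xff;
+ *	in.data = 0x86dd0000ull;
+ *	in.data_mask = 0xffff0000ull;
+ *	act.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
+ *	cvmx_pki_pcam_write_entry(node, index, 0xf, in, act);
+ */
+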
+/**
+ * Configures the channel which will receive backpressure from the specified bpid.
+ * Each channel listens for backpressure on a specific bpid.
+ * Each bpid can backpressure multiple channels.
+ * @param node  Node number.
+ * @param bpid  BPID from which channel will receive backpressure.
+ * @param channel  Channel number to receive backpressure.
+ */
+int cvmx_pki_write_channel_bpid(int node, int channel, int bpid);
+
+/**
+ * Configures the bpid on which the specified aura will
+ * assert backpressure.
+ * Each bpid receives backpressure from auras.
+ * Multiple auras can backpressure a single bpid.
+ * @param node  Node number.
+ * @param aura  Aura number which will assert backpressure on the bpid.
+ * @param bpid  bpid to assert backpressure on.
+ */
+int cvmx_pki_write_aura_bpid(int node, int aura, int bpid);
+
+/**
+ * Enables/Disables QoS (RED drop, tail drop & backpressure) for the PKI aura.
+ *
+ * @param node  Node number
+ * @param aura  To enable/disable QoS on.
+ * @param ena_red  Enable/Disable RED drop between pass and drop level
+ *    1-enable 0-disable
+ * @param ena_drop  Enable/disable tail drop when max drop level is exceeded
+ *    1-enable 0-disable
+ * @param ena_bp  Enable/Disable asserting backpressure on bpid when
+ *    max DROP level is exceeded.
+ *    1-enable 0-disable
+ */
+int cvmx_pki_enable_aura_qos(int node, int aura, bool ena_red, bool ena_drop, bool ena_bp);
+
+/**
+ * This function gives the initial style used by that pkind.
+ *
+ * @param node  Node number.
+ * @param pkind  PKIND number.
+ */
+int cvmx_pki_get_pkind_style(int node, int pkind);
+
+/**
+ * This function sets the WQE buffer mode. The first packet data buffer can
+ * reside either in the same buffer as the WQE, OR it can go in a separate
+ * buffer. If the latter mode is used, make sure software allocates enough
+ * buffers to have the WQE separate from the packet data.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ * @param pkt_outside_wqe
+ *    0 = The packet link pointer will be at word [FIRST_SKIP] immediately
+ *    followed by packet data, in the same buffer as the work queue entry.
+ *    1 = The packet link pointer will be at word [FIRST_SKIP] in a new
+ *    buffer separate from the work queue entry. Words following the
+ *    WQE in the same cache line will be zeroed, other lines in the
+ *    buffer will not be modified and will retain stale data (from the
+ *    buffer's previous use). This setting may decrease the peak PKI
+ *    performance by up to half on small packets.
+ */
+void cvmx_pki_set_wqe_mode(int node, u64 style, bool pkt_outside_wqe);
+
+/**
+ * This function sets the packet mode of all ports and styles to
+ * little-endian. It changes write operations of packet data to L2C to
+ * be in little-endian. It does not change the WQE header format, which is
+ * properly endian-neutral.
+ *
+ * @param node  Node number.
+ * @param style  Style to configure.
+ */
+void cvmx_pki_set_little_endian(int node, u64 style);
+
+/**
+ * Enables/Disables L2 length error check and max & min frame length checks.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to disable error for.
+ * @param l2len_err	 L2 length error check enable.
+ * @param maxframe_err	Max frame error check enable.
+ * @param minframe_err	Min frame error check enable.
+ *    1 -- Enable error checks
+ *    0 -- Disable error checks
+ */
+void cvmx_pki_endis_l2_errs(int node, int pknd, bool l2len_err, bool maxframe_err,
+			    bool minframe_err);
+
+/**
+ * Enables/Disables fcs check and fcs stripping on the pkind.
+ *
+ * @param node  Node number.
+ * @param pknd  PKIND to apply settings on.
+ * @param fcs_chk  Enable/disable fcs check.
+ *    1 -- enable fcs error check.
+ *    0 -- disable fcs error check.
+ * @param fcs_strip	 Strip L2 FCS bytes from packet, decrease WQE[LEN] by 4 bytes
+ *    1 -- strip L2 FCS.
+ *    0 -- Do not strip L2 FCS.
+ */
+void cvmx_pki_endis_fcs_check(int node, int pknd, bool fcs_chk, bool fcs_strip);
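+
+/*
+ * Usage sketch (illustrative only): enable FCS checking and FCS stripping
+ * on a pkind, as an Ethernet driver typically would:
+ *
+ *	cvmx_pki_endis_fcs_check(node, pknd, true, true);
+ */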
+
+/**
+ * This function shows the qpg table entries, read directly from hardware.
+ *
+ * @param node  Node number.
+ * @param num_entry  Number of entries to print.
+ */
+void cvmx_pki_show_qpg_entries(int node, u16 num_entry);
+
+/**
+ * This function shows the pcam table in raw format read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_pcam_entries(int node);
+
+/**
+ * This function shows the valid entries in readable format,
+ * read directly from hardware.
+ *
+ * @param node  Node number.
+ */
+void cvmx_pki_show_valid_pcam_entries(int node);
+
+/**
+ * This function shows the pkind attributes in readable format,
+ * read directly from hardware.
+ * @param node  Node number.
+ * @param pkind  PKIND number to print.
+ */
+void cvmx_pki_show_pkind_attributes(int node, int pkind);
+
+/**
+ * @INTERNAL
+ * This function is called by cvmx_helper_shutdown() to extract all FPA buffers
+ * out of the PKI. After this function completes, all FPA buffers that were
+ * prefetched by PKI will be in the appropriate FPA pool.
+ * This function does not reset the PKI.
+ * WARNING: It is very important that PKI be reset soon after a call to this function.
+ *
+ * @param node  Node number.
+ */
+void __cvmx_pki_free_ptr(int node);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
new file mode 100644
index 000000000000..1fb49b3fb6de
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_INTERNAL_PORTS_RANGE__
+#define __CVMX_INTERNAL_PORTS_RANGE__
+
+/*
+ * Allocates a block of internal ports for the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ * @param  count      the number of internal ports requested
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_alloc(int interface, int port, u64 count);
+
+/*
+ * Free the internal ports associated with the specified interface/port
+ *
+ * @param  interface  the interface for which the internal ports are requested
+ * @param  port       the index of the port within the interface for which the internal ports
+ *                    are requested.
+ *
+ * @return  0 on success
+ *         -1 on failure
+ */
+int cvmx_pko_internal_ports_free(int interface, int port);
+
+/*
+ * Frees up all the allocated internal ports.
+ */
+void cvmx_pko_internal_ports_range_free_all(void);
+
+void cvmx_pko_internal_ports_range_show(void);
+
+int __cvmx_pko_internal_ports_range_init(void);
+
+#endif
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
new file mode 100644
index 000000000000..5f8398904953
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_PKO3_QUEUE_H__
+#define __CVMX_PKO3_QUEUE_H__
+
+/**
+ * @INTERNAL
+ *
+ * Find or allocate global port/dq map table
+ * which is a named table, contains entries for
+ * all possible OCI nodes.
+ *
+ * The table global pointer is stored in a core-local variable
+ * so that every core will call this function once, on first use.
+ */
+int __cvmx_pko3_dq_table_setup(void);
+
+/*
+ * Get the base Descriptor Queue number for an IPD port on the local node
+ */
+int cvmx_pko3_get_queue_base(int ipd_port);
+
+/*
+ * Get the number of Descriptor Queues assigned for an IPD port
+ */
+int cvmx_pko3_get_queue_num(int ipd_port);
+
+/**
+ * Get L1/Port Queue number assigned to interface port.
+ *
+ * @param xiface is interface number.
+ * @param index is port index.
+ */
+int cvmx_pko3_get_port_queue(int xiface, int index);
+
+/*
+ * Configure L3 through L5 Scheduler Queues and Descriptor Queues
+ *
+ * The Scheduler Queues in Levels 3 to 5 and Descriptor Queues are
+ * configured one-to-one or many-to-one to a single parent Scheduler
+ * Queue. The level of the parent SQ is specified in an argument,
+ * as well as the number of children to attach to the specific parent.
+ * The children can have fair round-robin or priority-based scheduling
+ * when multiple children are assigned a single parent.
+ *
+ * @param node is the OCI node location for the queues to be configured
+ * @param parent_level is the level of the parent queue, 2 to 5.
+ * @param parent_queue is the number of the parent Scheduler Queue
+ * @param child_base is the number of the first child SQ or DQ to assign
+ *	to the parent
+ * @param child_count is the number of consecutive children to assign
+ * @param stat_prio_count is the priority setting for the children L2 SQs
+ *
+ * If <stat_prio_count> is -1, the Ln children will have an equal Round-Robin
+ * relationship with each other. If <stat_prio_count> is 0, all Ln children
+ * will be arranged in Weighted-Round-Robin, with the first having the most
+ * precedence. If <stat_prio_count> is between 1 and 8, it indicates how
+ * many children will have static priority settings (with the first having
+ * the most precedence), with the remaining Ln children having WRR scheduling.
+ *
+ * @returns 0 on success, -1 on failure.
+ *
+ * Note: this function supports the configuration of the node-local unit.
+ */
+int cvmx_pko3_sq_config_children(unsigned int node, unsigned int parent_level,
+				 unsigned int parent_queue, unsigned int child_base,
+				 unsigned int child_count, int stat_prio_count);
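+
+/*
+ * Usage sketch (illustrative only; queue numbers are hypothetical): attach
+ * 8 Descriptor Queues to parent L5 Scheduler Queue 0 on node 0, the first
+ * two with static priority and the remaining six in WRR:
+ *
+ *	cvmx_pko3_sq_config_children(0, 5, 0, 0, 8, 2);
+ */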
+
+/*
+ * @INTERNAL
+ * Register a range of Descriptor Queues with an interface port
+ *
+ * This function populates the DQ-to-IPD translation table
+ * used by the application to retrieve the DQ range (typically ordered
+ * by priority) for a given IPD-port, which is either a physical port,
+ * or a channel on a channelized interface (i.e. ILK).
+ *
+ * @param xiface is the physical interface number
+ * @param index is either a physical port on an interface,
+ *	or a channel of an ILK interface
+ * @param dq_base is the first Descriptor Queue number in a consecutive range
+ * @param dq_count is the number of consecutive Descriptor Queues leading
+ *	to the same channel or port.
+ *
+ * Only a consecutive range of Descriptor Queues can be associated with any
+ * given channel/port, and usually they are ordered from most to least
+ * in terms of scheduling priority.
+ *
+ * Note: this function only populates the node-local translation table.
+ *
+ * @returns 0 on success, -1 on failure.
+ */
+int __cvmx_pko3_ipd_dq_register(int xiface, int index, unsigned int dq_base, unsigned int dq_count);
+
+/**
+ * @INTERNAL
+ *
+ * Unregister DQs associated with CHAN_E (IPD port)
+ */
+int __cvmx_pko3_ipd_dq_unregister(int xiface, int index);
+
+/*
+ * Map channel number in PKO
+ *
+ * @param node is to specify the node to which this configuration is applied.
+ * @param pq_num specifies the Port Queue (i.e. L1) queue number.
+ * @param l2_l3_q_num  specifies L2/L3 queue number.
+ * @param channel specifies the channel number to map to the queue.
+ *
+ * The channel assignment applies to L2 or L3 Shaper Queues depending
+ * on the setting of channel credit level.
+ *
+ * @return none.
+ */
+void cvmx_pko3_map_channel(unsigned int node, unsigned int pq_num, unsigned int l2_l3_q_num,
+			   u16 channel);
+
+int cvmx_pko3_pq_config(unsigned int node, unsigned int mac_num, unsigned int pq_num);
+
+int cvmx_pko3_port_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			   unsigned int burst_bytes, int adj_bytes);
+int cvmx_pko3_dq_cir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+int cvmx_pko3_dq_pir_set(unsigned int node, unsigned int pq_num, unsigned long rate_kbips,
+			 unsigned int burst_bytes);
+typedef enum {
+	CVMX_PKO3_SHAPE_RED_STALL,
+	CVMX_PKO3_SHAPE_RED_DISCARD,
+	CVMX_PKO3_SHAPE_RED_PASS
+} red_action_t;
+
+void cvmx_pko3_dq_red(unsigned int node, unsigned int dq_num, red_action_t red_act,
+		      int8_t len_adjust);
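+
+/*
+ * Usage sketch (illustrative only): configure DQ 0 on node 0 to discard
+ * packets under RED shaping, with no packet length adjustment:
+ *
+ *	cvmx_pko3_dq_red(0, 0, CVMX_PKO3_SHAPE_RED_DISCARD, 0);
+ */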
+
+/**
+ * Macros to deal with short floating point numbers,
+ * where unsigned exponent, and an unsigned normalized
+ * mantissa are represented each with a defined field width.
+ *
+ */
+#define CVMX_SHOFT_MANT_BITS 8
+#define CVMX_SHOFT_EXP_BITS  4
+
+/**
+ * Convert short-float to an unsigned integer
+ * Note that it will lose precision.
+ */
+#define CVMX_SHOFT_TO_U64(m, e)                                                                    \
+	((((1ull << CVMX_SHOFT_MANT_BITS) | (m)) << (e)) >> CVMX_SHOFT_MANT_BITS)
+
+/**
+ * Convert to short-float from an unsigned integer
+ */
+#define CVMX_SHOFT_FROM_U64(ui, m, e)                                                              \
+	do {                                                                                       \
+		unsigned long long u;                                                              \
+		unsigned int k;                                                                    \
+		k = (1ull << (CVMX_SHOFT_MANT_BITS + 1)) - 1;                                      \
+		(e) = 0;                                                                           \
+		u = (ui) << CVMX_SHOFT_MANT_BITS;                                                  \
+		while ((u) > k) {                                                                  \
+			u >>= 1;                                                                   \
+			(e)++;                                                                     \
+		}                                                                                  \
+		(m) = u & (k >> 1);                                                                \
+	} while (0)
+
+#define CVMX_SHOFT_MAX()                                                                           \
+	CVMX_SHOFT_TO_U64((1 << CVMX_SHOFT_MANT_BITS) - 1, (1 << CVMX_SHOFT_EXP_BITS) - 1)
+#define CVMX_SHOFT_MIN() CVMX_SHOFT_TO_U64(0, 0)
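+
+/*
+ * Worked example (a sketch): with the 8-bit mantissa and 4-bit exponent
+ * above, CVMX_SHOFT_FROM_U64() normalizes an integer into a (mantissa,
+ * exponent) pair and CVMX_SHOFT_TO_U64() converts back, losing low-order
+ * precision:
+ *
+ *	unsigned int m, e;
+ *
+ *	CVMX_SHOFT_FROM_U64(10000, m, e);
+ *
+ * m is now 56 and e is 13; CVMX_SHOFT_TO_U64(m, e) yields 9984, i.e.
+ * approximately the original 10000.
+ */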
+
+#endif /* __CVMX_PKO3_QUEUE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow.h b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
new file mode 100644
index 000000000000..0680ca258f12
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
@@ -0,0 +1,2991 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * Interface to the hardware Scheduling unit.
+ *
+ * Starting with SDK 1.7.0, cvmx-pow supports a number of
+ * extended consistency checks. The define
+ * CVMX_ENABLE_POW_CHECKS controls the runtime insertion of POW
+ * internal state checks to find common programming errors. If
+ * CVMX_ENABLE_POW_CHECKS is not defined, checks are by default
+ * enabled. For example, cvmx-pow will check for the following
+ * program errors or POW state inconsistency.
+ * - Requesting a POW operation with an active tag switch in
+ *   progress.
+ * - Waiting for a tag switch to complete for an excessively
+ *   long period. This is normally a sign of an error in locking
+ *   causing deadlock.
+ * - Illegal tag switches from NULL_NULL.
+ * - Illegal tag switches from NULL.
+ * - Illegal deschedule request.
+ * - WQE pointer not matching the one attached to the core by
+ *   the POW.
+ */
+
+#ifndef __CVMX_POW_H__
+#define __CVMX_POW_H__
+
+#include "cvmx-wqe.h"
+#include "cvmx-pow-defs.h"
+#include "cvmx-sso-defs.h"
+#include "cvmx-address.h"
+#include "cvmx-coremask.h"
+
+/* Default to having all POW consistency checks turned on */
+#ifndef CVMX_ENABLE_POW_CHECKS
+#define CVMX_ENABLE_POW_CHECKS 1
+#endif
+
+/*
+ * Special type for CN78XX style SSO groups (0..255),
+ * for distinction from legacy-style groups (0..15)
+ */
+typedef union {
+	u8 xgrp;
+	/* Fields that map XGRP for backwards compatibility */
+	struct __attribute__((__packed__)) {
+		u8 group : 5;
+		u8 qus : 3;
+	};
+} cvmx_xgrp_t;
+
+/*
+ * Software-only structure to convey a return value
+ * containing multiple information fields about a work queue entry
+ */
+typedef struct {
+	u32 tag;
+	u16 index;
+	u8 grp; /* Legacy group # (0..15) */
+	u8 tag_type;
+} cvmx_pow_tag_info_t;
+
+/**
+ * Wait flag values for pow functions.
+ */
+typedef enum {
+	CVMX_POW_WAIT = 1,
+	CVMX_POW_NO_WAIT = 0,
+} cvmx_pow_wait_t;
+
+/**
+ *  POW tag operations.  These are used in the data stored to the POW.
+ */
+typedef enum {
+	CVMX_POW_TAG_OP_SWTAG = 0L,
+	CVMX_POW_TAG_OP_SWTAG_FULL = 1L,
+	CVMX_POW_TAG_OP_SWTAG_DESCH = 2L,
+	CVMX_POW_TAG_OP_DESCH = 3L,
+	CVMX_POW_TAG_OP_ADDWQ = 4L,
+	CVMX_POW_TAG_OP_UPDATE_WQP_GRP = 5L,
+	CVMX_POW_TAG_OP_SET_NSCHED = 6L,
+	CVMX_POW_TAG_OP_CLR_NSCHED = 7L,
+	CVMX_POW_TAG_OP_NOP = 15L
+} cvmx_pow_tag_op_t;
+
+/**
+ * This structure defines the store data on a store to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 no_sched : 1;
+		u64 unused : 2;
+		u64 index : 13;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused2 : 2;
+		u64 qos : 3;
+		u64 grp : 4;
+		cvmx_pow_tag_type_t type : 3;
+		u64 tag : 32;
+	} s_cn38xx;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 4;
+		u64 index : 11;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_clr;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 12;
+		u64 qos : 3;
+		u64 unused2 : 1;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_add;
+	struct {
+		u64 no_sched : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 unused1 : 16;
+		u64 grp : 6;
+		u64 unused3 : 3;
+		cvmx_pow_tag_type_t type : 2;
+		u64 tag : 32;
+	} s_cn68xx_other;
+	struct {
+		u64 rsvd_62_63 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t type : 2;
+		u64 no_sched : 1;
+		u64 rsvd_48 : 1;
+		cvmx_pow_tag_op_t op : 4;
+		u64 rsvd_42_43 : 2;
+		u64 wqp : 42;
+	} s_cn78xx_other;
+
+} cvmx_pow_tag_req_t;
+
+union cvmx_pow_tag_req_addr {
+	u64 u64;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} s;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 tag : 32;
+		u64 reserved_0_3 : 4;
+	} s_cn78xx;
+};
+
+/**
+ * This structure describes the address to load stuff from POW
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Address for new work request loads (did<2:0> == 0)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_4_39 : 36;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork;
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 node : 4;
+		u64 reserved_32_35 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 reserved_16_28 : 13;
+		u64 index : 12;
+		u64 wait : 1;
+		u64 reserved_0_2 : 3;
+	} swork_78xx;
+	/**
+	 * Address for loads to get POW internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_10_39 : 30;
+		u64 coreid : 4;
+		u64 get_rev : 1;
+		u64 get_cur : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} sstatus;
+	/**
+	 * Address for loads to get 68XX SS0 internal status
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_14_39 : 26;
+		u64 coreid : 5;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sstatus_cn68xx;
+	/**
+	 * Address for memory loads to get POW internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_16_39 : 24;
+		u64 index : 11;
+		u64 get_des : 1;
+		u64 get_wqp : 1;
+		u64 reserved_0_2 : 3;
+	} smemload;
+	/**
+	 * Address for memory loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_20_39 : 20;
+		u64 index : 11;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} smemload_cn68xx;
+	/**
+	 * Address for index/pointer loads
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_9_39 : 31;
+		u64 qosgrp : 4;
+		u64 get_des_get_tail : 1;
+		u64 get_rmt : 1;
+		u64 reserved_0_2 : 3;
+	} sindexload;
+	/**
+	 * Address for a Index/Pointer loads to get SSO internal state
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_15_39 : 25;
+		u64 qos_grp : 6;
+		u64 reserved_6_8 : 3;
+		u64 opcode : 3;
+		u64 reserved_0_2 : 3;
+	} sindexload_cn68xx;
+	/**
+	 * Address for NULL_RD request (did<2:0> == 4)
+	 * when this is read, HW attempts to change the state to NULL if it is NULL_NULL
+	 * (the hardware cannot switch from NULL_NULL to NULL if a POW entry is not available -
+	 * software may need to recover by finishing another piece of work before a POW
+	 * entry can ever become available.)
+	 */
+	struct {
+		u64 mem_region : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 reserved_0_39 : 40;
+	} snull_rd;
+} cvmx_pow_load_addr_t;
+
+/**
+ * This structure defines the response to a load/SENDSINGLE to POW (except CSR reads)
+ */
+typedef union {
+	u64 u64;
+	/**
+	 * Response to new work request loads
+	 */
+	struct {
+		u64 no_work : 1;
+		u64 pend_switch : 1;
+		u64 tt : 2;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 addr : 42;
+	} s_work;
+
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 reserved_34_35 : 2;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDTAG)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_48_56 : 9;
+		u64 pend_index : 11;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_sstatus0_cn68xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==0 and get_wqp==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 pend_switch : 1;
+		u64 pend_switch_full : 1;
+		u64 pend_switch_null : 1;
+		u64 pend_desched : 1;
+		u64 pend_desched_switch : 1;
+		u64 pend_nosched : 1;
+		u64 pend_new_work : 1;
+		u64 pend_new_work_wait : 1;
+		u64 pend_null_rd : 1;
+		u64 pend_nosched_clr : 1;
+		u64 reserved_51 : 1;
+		u64 pend_index : 11;
+		u64 pend_grp : 4;
+		u64 pend_wqp : 36;
+	} s_sstatus1;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_PENDWQP)
+	 */
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_51_56 : 6;
+		u64 pend_index : 11;
+		u64 reserved_38_39 : 2;
+		u64 pend_wqp : 38;
+	} s_sstatus1_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_56 : 1;
+		u64 prep_index : 12;
+		u64 reserved_42_43 : 2;
+		u64 pend_tag : 42;
+	} s_sso_ppx_pendwqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_TAG)
+	 */
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus2_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_46_47 : 2;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ppx_tag_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sstatus3;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_WQP)
+	 */
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 index : 11;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_sstatus3_cn68xx;
+
+	struct {
+		u64 reserved_58_63 : 6;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 tag : 42;
+	} s_sso_ppx_wqp_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==0)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 link_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus4;
+	/**
+	 * Result for a SSO Status Load (when opcode is SL_LINKS)
+	 */
+	struct {
+		u64 reserved_46_63 : 18;
+		u64 index : 11;
+		u64 reserved_34 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_24_25 : 2;
+		u64 revlink_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 link_index : 11;
+	} s_sstatus4_cn68xx;
+
+	struct {
+		u64 tailc : 1;
+		u64 reserved_60_62 : 3;
+		u64 index : 12;
+		u64 reserved_38_47 : 10;
+		u64 grp : 10;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_25 : 1;
+		u64 revlink_index : 12;
+		u64 link_index_vld : 1;
+		u64 link_index : 12;
+	} s_sso_ppx_links_cn78xx;
+	/**
+	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==1)
+	 */
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 revlink_index : 11;
+		u64 index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_sstatus5;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 0)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 reserved_35 : 1;
+		u64 tail : 1;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_TAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_smemload0_cn68xx;
+
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s_sso_iaq_ppx_tag_cn78xx;
+	/**
+	 * Result For POW Memory Load (get_des == 0 and get_wqp == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 next_index : 11;
+		u64 grp : 4;
+		u64 wqp : 36;
+	} s_smemload1;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_WQPGRP)
+	 */
+	struct {
+		u64 reserved_48_63 : 16;
+		u64 nosched : 1;
+		u64 reserved_46 : 1;
+		u64 grp : 6;
+		u64 reserved_38_39 : 2;
+		u64 wqp : 38;
+	} s_smemload1_cn68xx;
+
+	/**
+	 * Entry structures for the CN7XXX chips.
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 tailc : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tt : 2;
+		u64 tag : 32;
+	} s_sso_ientx_tag_cn78xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_56_59 : 4;
+		u64 grp : 8;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn73xx;
+
+	struct {
+		u64 reserved_62_63 : 2;
+		u64 head : 1;
+		u64 nosched : 1;
+		u64 reserved_58_59 : 2;
+		u64 grp : 10;
+		u64 reserved_42_47 : 6;
+		u64 wqp : 42;
+	} s_sso_ientx_wqpgrp_cn78xx;
+
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ientx_pendtag_cn78xx;
+
+	struct {
+		u64 reserved_26_63 : 38;
+		u64 prev_index : 10;
+		u64 reserved_11_15 : 5;
+		u64 next_index_vld : 1;
+		u64 next_index : 10;
+	} s_sso_ientx_links_cn73xx;
+
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 prev_index : 12;
+		u64 reserved_13_15 : 3;
+		u64 next_index_vld : 1;
+		u64 next_index : 12;
+	} s_sso_ientx_links_cn78xx;
+
+	/**
+	 * Result For POW Memory Load (get_des == 1)
+	 */
+	struct {
+		u64 reserved_51_63 : 13;
+		u64 fwd_index : 11;
+		u64 grp : 4;
+		u64 nosched : 1;
+		u64 pend_switch : 1;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_PENTAG)
+	 */
+	struct {
+		u64 reserved_38_63 : 26;
+		u64 pend_switch : 1;
+		u64 reserved_34_36 : 3;
+		u64 pend_type : 2;
+		u64 pend_tag : 32;
+	} s_smemload2_cn68xx;
+
+	struct {
+		u64 pend_switch : 1;
+		u64 pend_get_work : 1;
+		u64 pend_get_work_wait : 1;
+		u64 pend_nosched : 1;
+		u64 pend_nosched_clr : 1;
+		u64 pend_desched : 1;
+		u64 pend_alloc_we : 1;
+		u64 reserved_34_56 : 23;
+		u64 pend_tt : 2;
+		u64 pend_tag : 32;
+	} s_sso_ppx_pendtag_cn78xx;
+	/**
+	 * Result For SSO Memory Load (opcode is ML_LINKS)
+	 */
+	struct {
+		u64 reserved_24_63 : 40;
+		u64 fwd_index : 11;
+		u64 reserved_11_12 : 2;
+		u64 next_index : 11;
+	} s_smemload3_cn68xx;
+
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 free_val : 1;
+		u64 free_one : 1;
+		u64 reserved_49 : 1;
+		u64 free_head : 11;
+		u64 reserved_37 : 1;
+		u64 free_tail : 11;
+		u64 loc_val : 1;
+		u64 loc_one : 1;
+		u64 reserved_23 : 1;
+		u64 loc_head : 11;
+		u64 reserved_11 : 1;
+		u64 loc_tail : 11;
+	} sindexload0;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode ==
+	 * IPL_IQ/IPL_DESCHED/IPL_NOSCHED)
+	 */
+	struct {
+		u64 reserved_28_63 : 36;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload0_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_52_63 : 12;
+		u64 nosched_val : 1;
+		u64 nosched_one : 1;
+		u64 reserved_49 : 1;
+		u64 nosched_head : 11;
+		u64 reserved_37 : 1;
+		u64 nosched_tail : 11;
+		u64 des_val : 1;
+		u64 des_one : 1;
+		u64 reserved_23 : 1;
+		u64 des_head : 11;
+		u64 reserved_11 : 1;
+		u64 des_tail : 11;
+	} sindexload1;
+	/**
+	 * Result for SSO Index/Pointer Load(opcode == IPL_FREE0/IPL_FREE1/IPL_FREE2)
+	 */
+	struct {
+		u64 reserved_60_63 : 4;
+		u64 qnum_head : 2;
+		u64 qnum_tail : 2;
+		u64 reserved_28_55 : 28;
+		u64 queue_val : 1;
+		u64 queue_one : 1;
+		u64 reserved_24_25 : 2;
+		u64 queue_head : 11;
+		u64 reserved_11_12 : 2;
+		u64 queue_tail : 11;
+	} sindexload1_cn68xx;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 0)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_head : 36;
+	} sindexload2;
+	/**
+	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 1)
+	 */
+	struct {
+		u64 reserved_39_63 : 25;
+		u64 rmt_is_head : 1;
+		u64 rmt_val : 1;
+		u64 rmt_one : 1;
+		u64 rmt_tail : 36;
+	} sindexload3;
+	/**
+	 * Response to NULL_RD request loads
+	 */
+	struct {
+		u64 unused : 62;
+		u64 state : 2;
+	} s_null_rd;
+
+} cvmx_pow_tag_load_resp_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 reserved_57_63 : 7;
+		u64 index : 11;
+		u64 reserved_45 : 1;
+		u64 grp : 6;
+		u64 head : 1;
+		u64 tail : 1;
+		u64 reserved_34_36 : 3;
+		u64 tag_type : 2;
+		u64 tag : 32;
+	} s;
+} cvmx_pow_sl_tag_resp_t;
+
+/**
+ * This structure describes the address used for stores to the POW.
+ *  The store address is meaningful on stores to the POW.  The hardware assumes that an aligned
+ *  64-bit store was used for all these stores.
+ *  Note the assumption that the work queue entry is aligned on an 8-byte
+ *  boundary (since the low-order 3 address bits must be zero).
+ *  Note that not all fields are used by all operations.
+ *
+ *  NOTE: The following is the behavior of the pending switch bit at the PP
+ *       for POW stores (i.e. when did<7:3> == 0xc)
+ *     - did<2:0> == 0      => pending switch bit is set
+ *     - did<2:0> == 1      => no effect on the pending switch bit
+ *     - did<2:0> == 3      => pending switch bit is cleared
+ *     - did<2:0> == 7      => no effect on the pending switch bit
+ *     - did<2:0> == others => must not be used
+ *     - No other loads/stores have an effect on the pending switch bit
+ *     - The switch bus from POW can clear the pending switch bit
+ *
+ *  NOTE: did<2:0> == 2 is used by the HW for a special single-cycle ADDWQ command
+ *  that only contains the pointer. SW must never use did<2:0> == 2.
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 mem_reg : 2;
+		u64 reserved_49_61 : 13;
+		u64 is_io : 1;
+		u64 did : 8;
+		u64 addr : 40;
+	} stag;
+} cvmx_pow_tag_store_addr_t; /* FIXME- this type is unused */
+
+/**
+ * Decode of the store data when an IOBDMA SENDSINGLE is sent to POW
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 unused : 36;
+		u64 wait : 1;
+		u64 unused2 : 3;
+	} s;
+	struct {
+		u64 scraddr : 8;
+		u64 len : 8;
+		u64 did : 8;
+		u64 node : 4;
+		u64 unused1 : 4;
+		u64 indexed : 1;
+		u64 grouped : 1;
+		u64 rtngrp : 1;
+		u64 unused2 : 13;
+		u64 index_grp_mask : 12;
+		u64 wait : 1;
+		u64 unused3 : 3;
+	} s_cn78xx;
+} cvmx_pow_iobdma_store_t;
+
+/* CSR typedefs have been moved to cvmx-pow-defs.h */
+
+/* Enum for group priority parameters which need modification */
+enum cvmx_sso_group_modify_mask {
+	CVMX_SSO_MODIFY_GROUP_PRIORITY = 0x01,
+	CVMX_SSO_MODIFY_GROUP_WEIGHT = 0x02,
+	CVMX_SSO_MODIFY_GROUP_AFFINITY = 0x04
+};
+
+/**
+ * @INTERNAL
+ * Return the number of SSO groups for a given SoC model
+ */
+static inline unsigned int cvmx_sso_num_xgrp(void)
+{
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
+		return 256;
+	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
+		return 64;
+	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
+		return 64;
+	printf("ERROR: %s: Unknown model\n", __func__);
+	return 0;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of POW groups on current model.
+ * In case of CN78XX/CN73XX this is the number of equivalent
+ * "legacy groups" on the chip when it is used in backward
+ * compatible mode.
+ */
+static inline unsigned int cvmx_pow_num_groups(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return cvmx_sso_num_xgrp() >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		return 64;
+	else
+		return 16;
+}
+
+/**
+ * @INTERNAL
+ * Return the number of mask-set registers.
+ */
+static inline unsigned int cvmx_sso_num_maskset(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return 2;
+	else
+		return 1;
+}
+
+/**
+ * Get the POW tag for this core. This returns the current
+ * tag type, tag, group, and POW entry index associated with
+ * this core. Index is only valid if the tag type isn't NULL_NULL.
+ * If a tag switch is pending this routine returns the tag before
+ * the tag switch, not after.
+ *
+ * @return Current tag
+ */
+static inline cvmx_pow_tag_info_t cvmx_pow_get_current_tag(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_info_t result;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_tag_t sl_ppx_tag;
+		cvmx_xgrp_t xgrp;
+		int node, core;
+
+		CVMX_SYNCS;
+		node = cvmx_get_node_num();
+		core = cvmx_get_local_core_num();
+		sl_ppx_tag.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_TAG(core));
+		result.index = sl_ppx_tag.s.index;
+		result.tag_type = sl_ppx_tag.s.tt;
+		result.tag = sl_ppx_tag.s.tag;
+
+		/* Get native XGRP value */
+		xgrp.xgrp = sl_ppx_tag.s.grp;
+
+		/* Return legacy style group 0..15 */
+		result.grp = xgrp.group;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_pow_sl_tag_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 3;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s.grp;
+		result.index = load_resp.s.index;
+		result.tag_type = load_resp.s.tag_type;
+		result.tag = load_resp.s.tag;
+	} else {
+		cvmx_pow_tag_load_resp_t load_resp;
+
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		result.grp = load_resp.s_sstatus2.grp;
+		result.index = load_resp.s_sstatus2.index;
+		result.tag_type = load_resp.s_sstatus2.tag_type;
+		result.tag = load_resp.s_sstatus2.tag;
+	}
+	return result;
+}
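+
+/*
+ * Example (illustrative sketch, not taken from the original sources):
+ * switch to an ATOMIC tag only when the core does not already hold
+ * one.  cvmx_pow_tag_sw() is defined further below in this file.
+ *
+ *	cvmx_pow_tag_info_t info = cvmx_pow_get_current_tag();
+ *
+ *	if (info.tag_type != CVMX_POW_TAG_TYPE_ATOMIC)
+ *		cvmx_pow_tag_sw(info.tag, CVMX_POW_TAG_TYPE_ATOMIC);
+ */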
+
+/**
+ * Get the POW WQE for this core. This returns the work queue
+ * entry currently associated with this core.
+ *
+ * @return WQE pointer
+ */
+static inline cvmx_wqe_t *cvmx_pow_get_current_wqp(void)
+{
+	cvmx_pow_load_addr_t load_addr;
+	cvmx_pow_tag_load_resp_t load_resp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_sl_ppx_wqp_t sso_wqp;
+		int node = cvmx_get_node_num();
+		int core = cvmx_get_local_core_num();
+
+		sso_wqp.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_WQP(core));
+		if (sso_wqp.s.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(sso_wqp.s.wqp);
+		return (cvmx_wqe_t *)0;
+	}
+	if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		load_addr.u64 = 0;
+		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus_cn68xx.is_io = 1;
+		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
+		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
+		load_addr.sstatus_cn68xx.opcode = 4;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		if (load_resp.s_sstatus3_cn68xx.wqp)
+			return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus3_cn68xx.wqp);
+		else
+			return (cvmx_wqe_t *)0;
+	} else {
+		load_addr.u64 = 0;
+		load_addr.sstatus.mem_region = CVMX_IO_SEG;
+		load_addr.sstatus.is_io = 1;
+		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
+		load_addr.sstatus.coreid = cvmx_get_core_num();
+		load_addr.sstatus.get_cur = 1;
+		load_addr.sstatus.get_wqp = 1;
+		load_resp.u64 = csr_rd(load_addr.u64);
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus4.wqp);
+	}
+}
+
+/**
+ * @INTERNAL
+ * Print a warning if a tag switch is pending for this core
+ *
+ * @param function Function name checking for a pending tag switch
+ */
+static inline void __cvmx_pow_warn_if_pending_switch(const char *function)
+{
+	u64 switch_complete;
+
+	CVMX_MF_CHORD(switch_complete);
+	cvmx_warn_if(!switch_complete, "%s called with tag switch in progress\n", function);
+}
+
+/**
+ * Waits for a tag switch to complete by polling the completion bit.
+ * Note that switches to NULL complete immediately and do not need
+ * to be waited for.
+ */
+static inline void cvmx_pow_tag_sw_wait(void)
+{
+	const u64 TIMEOUT_MS = 10; /* 10ms timeout */
+	u64 switch_complete;
+	u64 start_cycle;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		start_cycle = get_timer(0);
+
+	while (1) {
+		CVMX_MF_CHORD(switch_complete);
+		if (cvmx_likely(switch_complete))
+			break;
+
+		if (CVMX_ENABLE_POW_CHECKS) {
+			if (cvmx_unlikely(get_timer(start_cycle) > TIMEOUT_MS)) {
+				debug("WARNING: %s: Tag switch is taking a long time, possible deadlock\n",
+				      __func__);
+			}
+		}
+	}
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync_nocheck(cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.node = cvmx_get_node_num();
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork_78xx.wait = wait;
+	} else {
+		ptr.swork.mem_region = CVMX_IO_SEG;
+		ptr.swork.is_io = 1;
+		ptr.swork.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.swork.wait = wait;
+	}
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request.  Requests work from the POW.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param wait   When set, call stalls until work becomes available, or
+ *               times out. If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from POW. Returns NULL if no work was
+ * available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_request_sync(cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	return cvmx_pow_work_request_sync_nocheck(wait);
+}
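+
+/*
+ * Example (illustrative sketch): a minimal synchronous work loop.
+ * process_packet() is a hypothetical application handler, and
+ * CVMX_POW_WAIT is assumed to be the "wait" value of cvmx_pow_wait_t.
+ *
+ *	for (;;) {
+ *		cvmx_wqe_t *wqe = cvmx_pow_work_request_sync(CVMX_POW_WAIT);
+ *
+ *		if (!wqe)
+ *			continue;	// timed out, request again
+ *		process_packet(wqe);
+ *	}
+ */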
+
+/**
+ * Synchronous null_rd request.  Requests a switch out of NULL_NULL POW state.
+ * This function waits for any previous tag switch to complete before
+ * requesting the null_rd.
+ *
+ * @return Returns the POW state of type cvmx_pow_tag_type_t.
+ */
+static inline cvmx_pow_tag_type_t cvmx_pow_work_request_null_rd(void)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+		ptr.swork_78xx.is_io = 1;
+		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_NULL_RD;
+		ptr.swork_78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.snull_rd.mem_region = CVMX_IO_SEG;
+		ptr.snull_rd.is_io = 1;
+		ptr.snull_rd.did = CVMX_OCT_DID_TAG_NULL_RD;
+	}
+	result.u64 = csr_rd(ptr.u64);
+	return (cvmx_pow_tag_type_t)result.s_null_rd.state;
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async_nocheck(int scr_addr, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		__cvmx_pow_warn_if_pending_switch(__func__);
+
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		data.s_cn78xx.node = cvmx_get_node_num();
+		data.s_cn78xx.scraddr = scr_addr >> 3;
+		data.s_cn78xx.len = 1;
+		data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s_cn78xx.wait = wait;
+	} else {
+		data.s.scraddr = scr_addr >> 3;
+		data.s.len = 1;
+		data.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		data.s.wait = wait;
+	}
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Asynchronous work request.
+ * Work is requested from the POW unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param wait 1 to cause response to wait for work to become available
+ *               (or timeout)
+ *             0 to cause response to return immediately
+ */
+static inline void cvmx_pow_work_request_async(int scr_addr, cvmx_pow_wait_t wait)
+{
+	/* Must not have a switch pending when requesting work */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_work_request_async_nocheck(scr_addr, wait);
+}
+
+/**
+ * Gets result of asynchronous work request.  Performs a IOBDMA sync
+ * to wait for the response.
+ *
+ * @param scr_addr Scratch memory address to get result from
+ *                  Byte address, must be 8 byte aligned.
+ * @return Returns the WQE from the scratch register, or NULL if no work was
+ *         available.
+ */
+static inline cvmx_wqe_t *cvmx_pow_work_response_async(int scr_addr)
+{
+	cvmx_pow_tag_load_resp_t result;
+
+	CVMX_SYNCIOBDMA;
+	result.u64 = cvmx_scratch_read64(scr_addr);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
+}
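+
+/*
+ * Example (illustrative sketch): overlap the work prefetch with packet
+ * processing.  SCR_WORK is a hypothetical 8-byte-aligned scratchpad
+ * offset reserved by the application; process_packet() is likewise
+ * hypothetical.  The response is always consumed before the same
+ * scratch slot is reused for the next request.
+ *
+ *	#define SCR_WORK 0
+ *
+ *	cvmx_pow_work_request_async(SCR_WORK, CVMX_POW_WAIT);
+ *	for (;;) {
+ *		cvmx_wqe_t *wqe = cvmx_pow_work_response_async(SCR_WORK);
+ *
+ *		// ask for the next WQE before working on this one
+ *		cvmx_pow_work_request_async(SCR_WORK, CVMX_POW_WAIT);
+ *		if (wqe)
+ *			process_packet(wqe);
+ *	}
+ */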
+
+/**
+ * Checks if a work queue entry pointer returned by a work
+ * request is valid.  It may be invalid due to no work
+ * being available or due to a timeout.
+ *
+ * @param wqe_ptr pointer to a work queue entry returned by the POW
+ *
+ * @return 0 if pointer is valid
+ *         1 if invalid (no work was returned)
+ */
+static inline u64 cvmx_pow_work_invalid(cvmx_wqe_t *wqe_ptr)
+{
+	return (!wqe_ptr); /* FIXME: improve */
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details.
+	 * It is the application's responsibility to keep track of the
+	 * current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = tag_type;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+	}
+	/* Once this store arrives at the POW, it will attempt the switch.
+	 * Software must wait for the switch to complete separately.
+	 */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * NOTE: This should not be used when switching from a NULL tag.  Use
+ * cvmx_pow_tag_sw_full() instead.
+ *
+ * This function waits for any previous tag switch to complete, and also
+ * displays an error on tag switches to NULL.
+ *
+ * @param tag      new tag value
+ * @param tag_type new tag type (ordered or atomic)
+ */
+static inline void cvmx_pow_tag_sw(u32 tag, cvmx_pow_tag_type_t tag_type)
+{
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not read
+	 * from DRAM once the WQE is in flight.  See hardware manual for
+	 * complete details. It is the application's responsibility to keep
+	 * track of the current tag value if that is important.
+	 */
+
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_nocheck(tag, tag_type);
+}
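+
+/*
+ * Example (illustrative sketch, assuming the legacy cvmx_wqe_t word1
+ * layout; on CN78XX the tag lives in word1.cn78xx.tag): serialize a
+ * per-flow critical section by switching the current non-NULL tag to
+ * ATOMIC.  update_flow_state() is a hypothetical application function.
+ *
+ *	cvmx_pow_tag_sw(wqe->word1.tag, CVMX_POW_TAG_TYPE_ATOMIC);
+ *	cvmx_pow_tag_sw_wait();		// must own the tag before ...
+ *	update_flow_state(wqe);		// ... touching shared flow state
+ *	cvmx_pow_tag_sw(wqe->word1.tag, CVMX_POW_TAG_TYPE_ORDERED);
+ */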
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.  Completion for
+ * the tag switch must be checked for separately.
+ * This function does NOT update the
+ * work queue entry in dram to match tag value and type, so the application must
+ * keep track of these if they are important to the application.
+ * This tag switch command must not be used for switches to NULL, as the tag
+ * switch pending bit will be set by the switch request, but never cleared by
+ * the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function does no checks, so the caller must ensure that any previous tag
+ * switch has completed.
+ *
+ * @param wqp      pointer to work queue entry to submit.  This entry is
+ *                 updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param group    group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full_nocheck(cvmx_wqe_t *wqp, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u64 group)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	unsigned int node = cvmx_get_node_num();
+	u64 wqp_phys = cvmx_ptr_to_phys(wqp);
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+
+	/*
+	 * Note that WQE in DRAM is not updated here, as the POW does not
+	 * read from DRAM once the WQE is in flight.  See hardware manual
+	 * for complete details. It is the application's responsibility to
+	 * keep track of the current tag value if that is important.
+	 */
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int xgrp;
+
+		if (wqp_phys != 0x80) {
+			/* If WQE is valid, use its XGRP:
+			 * WQE GRP is 10 bits, and is mapped
+			 * to legacy GRP + QoS, includes node number.
+			 */
+			xgrp = wqp->word1.cn78xx.grp;
+			/* Use XGRP[node] too */
+			node = xgrp >> 8;
+			/* Modify XGRP with legacy group # from arg */
+			xgrp &= ~0xf8;
+			xgrp |= 0xf8 & (group << 3);
+
+		} else {
+			/* If no WQE, build XGRP with QoS=0 and current node */
+			xgrp = group << 3;
+			xgrp |= node << 8;
+		}
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = xgrp;
+		tag_req.s_cn78xx_other.wqp = wqp_phys;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
+		ptr.s.addr = wqp_phys;
+	}
+	/* Once this store arrives at the POW, it will attempt the switch.
+	 * Software must wait for the switch to complete separately.
+	 */
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Starts a tag switch to the provided tag value and tag type.
+ * Completion for the tag switch must be checked for separately.
+ * This function does NOT update the work queue entry in dram to match tag value
+ * and type, so the application must keep track of these if they are important
+ * to the application. This tag switch command must not be used for switches
+ * to NULL, as the tag switch pending bit will be set by the switch request,
+ * but never cleared by the hardware.
+ *
+ * This function must be used for tag switches from NULL.
+ *
+ * This function waits for any pending tag switches to complete
+ * before requesting the tag switch.
+ *
+ * @param wqp      Pointer to work queue entry to submit.
+ *     This entry is updated to match the other parameters
+ * @param tag      Tag value to be assigned to work queue entry
+ * @param tag_type Type of tag
+ * @param group    Group value for the work queue entry.
+ */
+static inline void cvmx_pow_tag_sw_full(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 group)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_full_nocheck(wqp, tag, tag_type, group);
+}
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that any previous tag switches have completed.
+ */
+static inline void cvmx_pow_tag_sw_null_nocheck(void)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called when we already have a NULL tag\n", __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn78xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn68xx_other.type = CVMX_POW_TAG_TYPE_NULL;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
+		tag_req.s_cn38xx.type = CVMX_POW_TAG_TYPE_NULL;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Switch to a NULL tag, which ends any ordering or
+ * synchronization provided by the POW for the current
+ * work queue entry.  This operation completes immediately,
+ * so completion should not be waited for.
+ * This function waits for any pending tag switches to complete
+ * before requesting the switch to NULL.
+ */
+static inline void cvmx_pow_tag_sw_null(void)
+{
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_null_nocheck();
+}
+
+/**
+ * Submits work to an input queue.
+ * This function updates the work queue entry in DRAM to match the arguments given.
+ * Note that the tag provided is for the work queue entry submitted, and
+ * is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp      pointer to work queue entry to submit.
+ *                 This entry is updated to match the other parameters
+ * @param tag      tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param qos      Input queue to add to.
+ * @param grp      group value for the work queue entry.
+ */
+static inline void cvmx_pow_work_submit(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					u64 qos, u64 grp)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	tag_req.u64 = 0;
+	ptr.u64 = 0;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		unsigned int xgrp;
+
+		xgrp = (grp & 0x1f) << 3;
+		xgrp |= (qos & 7);
+		xgrp |= 0x300 & (node << 8);
+
+		wqp->word1.cn78xx.rsvd_0 = 0;
+		wqp->word1.cn78xx.rsvd_1 = 0;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = xgrp;
+		CVMX_SYNCWS;
+
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+		tag_req.s_cn78xx_other.grp = xgrp;
+
+		ptr.s_cn78xx.did = 0x66; /* CVMX_OCT_DID_TAG_TAG6 */
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.node = node;
+		ptr.s_cn78xx.tag = tag;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		/* Reset all reserved bits */
+		wqp->word1.cn68xx.zero_0 = 0;
+		wqp->word1.cn68xx.zero_1 = 0;
+		wqp->word1.cn68xx.zero_2 = 0;
+		wqp->word1.cn68xx.qos = qos;
+		wqp->word1.cn68xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn68xx_add.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn68xx_add.type = tag_type;
+		tag_req.s_cn68xx_add.tag = tag;
+		tag_req.s_cn68xx_add.qos = qos;
+		tag_req.s_cn68xx_add.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	} else {
+		/* Reset all reserved bits */
+		wqp->word1.cn38xx.zero_2 = 0;
+		wqp->word1.cn38xx.qos = qos;
+		wqp->word1.cn38xx.grp = grp;
+
+		wqp->word1.tag = tag;
+		wqp->word1.tag_type = tag_type;
+
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_ADDWQ;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.qos = qos;
+		tag_req.s_cn38xx.grp = grp;
+
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
+		ptr.s.addr = cvmx_ptr_to_phys(wqp);
+	}
+	/* SYNC write to memory before the work submit.
+	 * This is necessary as POW may read values from DRAM at this time */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
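+
+/*
+ * Example (illustrative sketch): inject a software-generated WQE into
+ * input queue 0, group 0.  alloc_wqe() and fill_wqe_data() are
+ * hypothetical application helpers, and the tag value 0x1234 is
+ * arbitrary.
+ *
+ *	cvmx_wqe_t *wqe = alloc_wqe();
+ *
+ *	fill_wqe_data(wqe);
+ *	cvmx_pow_work_submit(wqe, 0x1234, CVMX_POW_TAG_TYPE_ORDERED, 0, 0);
+ */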
+
+/**
+ * This function sets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from. The number
+ * of groups available depends on the SoC model (see below).
+ *
+ * @param core_num   core to apply mask to
+ * @param mask   Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards compatibility mode allows up to 32 groups,
+ * so the 'mask' argument has one bit for each of the legacy
+ * groups, and a '1' in the mask causes a total of 8 native groups,
+ * which share the legacy group number across the 8 QoS levels,
+ * to be enabled for the calling processor core.
+ * A '0' in the mask will disable the current core
+ * from receiving work from the associated group.
+ */
+static inline void cvmx_pow_set_group_mask(u64 core_num, u64 mask)
+{
+	u64 valid_mask;
+	int num_groups = cvmx_pow_num_groups();
+
+	if (num_groups >= 64)
+		valid_mask = ~0ull;
+	else
+		valid_mask = (1ull << num_groups) - 1;
+
+	if ((mask & valid_mask) == 0) {
+		printf("ERROR: %s empty group mask disables work on core# %llu, ignored.\n",
+		       __func__, (unsigned long long)core_num);
+		return;
+	}
+	cvmx_warn_if(mask & (~valid_mask), "%s group number range exceeded: %#llx\n", __func__,
+		     (unsigned long long)mask);
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int mask_set;
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node;
+		unsigned int rix;  /* Register index */
+		unsigned int grp;  /* Legacy group # */
+		unsigned int bit;  /* bit index */
+		unsigned int xgrp; /* native group # */
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = 0; rix < (cvmx_sso_num_xgrp() >> 6); rix++) {
+			grp_msk.u64 = 0;
+			for (bit = 0; bit < 64; bit++) {
+				/* 8-bit native XGRP number */
+				xgrp = (rix << 6) | bit;
+				/* Legacy 5-bit group number */
+				grp = (xgrp >> 3) & 0x1f;
+				/* Inspect legacy mask by legacy group */
+				if (mask & (1ull << grp))
+					grp_msk.s.grp_msk |= 1ull << bit;
+				/* Pre-set to all 0's */
+			}
+			for (mask_set = 0; mask_set < cvmx_sso_num_maskset(); mask_set++) {
+				csr_wr_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, mask_set, rix),
+					    grp_msk.u64);
+			}
+		}
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.s.grp_msk = mask;
+		csr_wr(CVMX_SSO_PPX_GRP_MSK(core_num), grp_msk.u64);
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.grp_msk = mask & 0xffff;
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
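+
+/*
+ * Example (illustrative sketch): restrict the calling core to work
+ * from legacy groups 0 and 1 only.
+ *
+ *	u64 mask = (1ull << 0) | (1ull << 1);
+ *
+ *	cvmx_pow_set_group_mask(cvmx_get_core_num(), mask);
+ */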
+
+/**
+ * This function gets the group mask for a core.  The group mask
+ * indicates which groups each core will accept work from.
+ *
+ * @param core_num   core to get the group mask for
+ * @return	Group mask, one bit for up to 64 groups.
+ *               Each 1 bit in the mask enables the core to accept work from
+ *               the corresponding group.
+ *               The CN68XX supports 64 groups, earlier models only support
+ *               16 groups.
+ *
+ * The CN78XX in backwards compatibility mode allows up to 32 groups,
+ * so the returned mask has one bit for each of the legacy
+ * groups, and a '1' in the mask indicates that a total of 8 native
+ * groups, which share the legacy group number across the 8 QoS levels,
+ * are enabled for the calling processor core.
+ * A '0' in the mask means that the core does not receive work
+ * from the associated group.
+ */
+static inline u64 cvmx_pow_get_group_mask(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read only mask_set=0 (both sets are written the same) */
+			grp_msk.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			/* ASSUME: (this is how mask bits got written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* DO: mask[7:0] = grp_mask.u64[56,48,40,32,24,16,8,0] */
+			for (i = 0; i < 8; i++)
+				mask |= (grp_msk.u64 & ((u64)1 << (i * 8))) >> (7 * i);
+			/* we collected 8 MSBs in mask[7:0], <<=8 and continue */
+			if (cvmx_likely(rix != 0))
+				mask <<= 8;
+		}
+		return mask & 0xFFFFFFFF;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_grp_msk_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_SSO_PPX_GRP_MSK(core_num));
+		return grp_msk.u64;
+	} else {
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		return grp_msk.u64 & 0xffff;
+	}
+}
+
+/*
+ * Returns 0 if 78xx(73xx,75xx) is not programmed in legacy compatible mode
+ * Returns 1 if 78xx(73xx,75xx) is programmed in legacy compatible mode
+ * Returns 1 if octeon model is not 78xx(73xx,75xx)
+ */
+static inline u64 cvmx_pow_is_legacy78mode(u64 core_num)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_sso_ppx_sx_grpmskx_t grp_msk0, grp_msk1;
+		unsigned int core, node, i;
+		int rix; /* Register index */
+		u64 mask = 0;
+
+		node = cvmx_coremask_core_to_node(core_num);
+		core = cvmx_coremask_core_on_node(core_num);
+
+		/* 78xx: 256 groups divided into 4 X 64 bit registers */
+		/* 73xx: 64 groups are in one register */
+		/* 1) For the 78_SSO to be in legacy compatible mode,
+		 * both mask_sets must be programmed the same.
+		 */
+		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
+			/* read both mask sets and compare them */
+			grp_msk0.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
+			grp_msk1.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 1, rix));
+			if (grp_msk0.u64 != grp_msk1.u64)
+				return 0;
+			/* (this is how mask bits should be written) */
+			/* grp_mask[7:0]: all bits 0..7 are same */
+			/* grp_mask[15:8]: all bits 8..15 are same, etc */
+			/* 2) For the 78_SSO to be in legacy compatible mode,
+			 * the above must hold (test only mask_set=0).
+			 */
+			for (i = 0; i < 8; i++) {
+				mask = (grp_msk0.u64 >> (i << 3)) & 0xFF;
+				if (!(mask == 0 || mask == 0xFF))
+					return 0;
+			}
+		}
+		/* if we come here, the 78_SSO is in legacy compatible mode */
+	}
+	return 1; /* the SSO/POW is in legacy (or compatible) mode */
+}
+
+/**
+ * This function sets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param core_num   core to apply priorities to
+ * @param priority   Vector of 8 priorities, one per POW Input Queue (0-7).
+ *                   Highest priority is 0 and lowest is 7. A priority value
+ *                   of 0xF instructs POW to skip the Input Queue when
+ *                   scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_set_priority(u64 core_num, const u8 priority[])
+{
+	/* Detect gaps between priorities and flag error */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR: POW static priorities should be contiguous (0x%llx)\n",
+			      (unsigned long long)prio_mask);
+			return;
+		}
+	}
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/* Presets such as grp_pri.s.weight = 0x3f; and
+		 * grp_pri.s.affinity = 0xf; are not needed here, as both
+		 * fields are overwritten by the csr_rd_node() below.
+		 */
+
+		for (group = 0; group < cvmx_sso_num_xgrp(); group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			grp_pri.s.pri = priority[group & 0x7];
+			csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grp_pri.u64);
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		qos_pri.s.qos0_pri = priority[0];
+		qos_pri.s.qos1_pri = priority[1];
+		qos_pri.s.qos2_pri = priority[2];
+		qos_pri.s.qos3_pri = priority[3];
+		qos_pri.s.qos4_pri = priority[4];
+		qos_pri.s.qos5_pri = priority[5];
+		qos_pri.s.qos6_pri = priority[6];
+		qos_pri.s.qos7_pri = priority[7];
+		csr_wr(CVMX_SSO_PPX_QOS_PRI(core_num), qos_pri.u64);
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		grp_msk.s.qos0_pri = priority[0];
+		grp_msk.s.qos1_pri = priority[1];
+		grp_msk.s.qos2_pri = priority[2];
+		grp_msk.s.qos3_pri = priority[3];
+		grp_msk.s.qos4_pri = priority[4];
+		grp_msk.s.qos5_pri = priority[5];
+		grp_msk.s.qos6_pri = priority[6];
+		grp_msk.s.qos7_pri = priority[7];
+
+		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
+	}
+}
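+
+/*
+ * Example (illustrative sketch): give input queue 0 the highest
+ * priority, skip queue 7 entirely, and keep the remaining queues at
+ * one contiguous lower level (no gaps, as required above).
+ *
+ *	static const u8 prio[8] = { 0, 1, 1, 1, 1, 1, 1, 0xF };
+ *
+ *	cvmx_pow_set_priority(cvmx_get_core_num(), prio);
+ */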
+
+/**
+ * This function gets POW static priorities for a core. Each input queue has
+ * an associated priority value.
+ *
+ * @param[in]  core_num core to get priorities for
+ * @param[out] priority Pointer to u8[] where to return priorities
+ *			Vector of 8 priorities, one per POW Input Queue (0-7).
+ *			Highest priority is 0 and lowest is 7. A priority value
+ *			of 0xF instructs POW to skip the Input Queue when
+ *			scheduling to this specific core.
+ *                   NOTE: priorities should not have gaps in values, meaning
+ *                         {0,1,1,1,1,1,1,1} is a valid configuration while
+ *                         {0,2,2,2,2,2,2,2} is not.
+ */
+static inline void cvmx_pow_get_priority(u64 core_num, u8 priority[])
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int group;
+		unsigned int node = cvmx_get_node_num();
+		cvmx_sso_grpx_pri_t grp_pri;
+
+		/* Read the priority only from the first 8 groups;
+		 * the remaining groups are programmed the same (periodically).
+		 */
+		for (group = 0; group < 8 /*cvmx_sso_num_xgrp() */; group++) {
+			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
+			priority[group /* & 0x7 */] = grp_pri.s.pri;
+		}
+
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		cvmx_sso_ppx_qos_pri_t qos_pri;
+
+		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
+		priority[0] = qos_pri.s.qos0_pri;
+		priority[1] = qos_pri.s.qos1_pri;
+		priority[2] = qos_pri.s.qos2_pri;
+		priority[3] = qos_pri.s.qos3_pri;
+		priority[4] = qos_pri.s.qos4_pri;
+		priority[5] = qos_pri.s.qos5_pri;
+		priority[6] = qos_pri.s.qos6_pri;
+		priority[7] = qos_pri.s.qos7_pri;
+	} else {
+		/* POW priorities on CN5xxx .. CN66XX */
+		cvmx_pow_pp_grp_mskx_t grp_msk;
+
+		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
+		priority[0] = grp_msk.s.qos0_pri;
+		priority[1] = grp_msk.s.qos1_pri;
+		priority[2] = grp_msk.s.qos2_pri;
+		priority[3] = grp_msk.s.qos3_pri;
+		priority[4] = grp_msk.s.qos4_pri;
+		priority[5] = grp_msk.s.qos5_pri;
+		priority[6] = grp_msk.s.qos6_pri;
+		priority[7] = grp_msk.s.qos7_pri;
+	}
+
+	/* Detect gaps between priorities and flag error - (optional) */
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int i;
+		u32 prio_mask = 0;
+
+		for (i = 0; i < 8; i++)
+			if (priority[i] != 0xF)
+				prio_mask |= 1 << priority[i];
+
+		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
+			debug("ERROR:%s: POW static priorities should be contiguous (0x%llx)\n",
+			      __func__, (unsigned long long)prio_mask);
+			return;
+		}
+	}
+}
+
+static inline void cvmx_sso_get_group_priority(int node, cvmx_xgrp_t xgrp, int *priority,
+					       int *weight, int *affinity)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	*affinity = grp_pri.s.affinity;
+	*priority = grp_pri.s.pri;
+	*weight = grp_pri.s.weight;
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function does NOT wait for any prior tag switches to complete, so the
+ * calling code must do this.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+						   u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_t *wqp = cvmx_pow_get_current_wqp();
+
+		if (!wqp) {
+			debug("ERROR: Failed to get WQE, %s\n", __func__);
+			return;
+		}
+		group &= 0x1f;
+		wqp->word1.cn78xx.tag = tag;
+		wqp->word1.cn78xx.tag_type = tag_type;
+		wqp->word1.cn78xx.grp = group << 3;
+		CVMX_SYNCWS;
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn78xx_other.type = tag_type;
+		tag_req.s_cn78xx_other.grp = group << 3;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		group &= 0x3f;
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn68xx_other.tag = tag;
+		tag_req.s_cn68xx_other.type = tag_type;
+		tag_req.s_cn68xx_other.grp = group;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		group &= 0x0f;
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+		tag_req.s_cn38xx.tag = tag;
+		tag_req.s_cn38xx.type = tag_type;
+		tag_req.s_cn38xx.grp = group;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+		ptr.s_cn78xx.tag = tag;
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Performs a tag switch and then an immediate deschedule. This completes
+ * immediately, so completion must not be waited for.  This function does NOT
+ * update the wqe in DRAM to match arguments.
+ *
+ * This function waits for any prior tag switches to complete, so the
+ * calling code may call this function with a pending tag switch.
+ *
+ * Note the following CAVEAT of the Octeon HW behavior when
+ * re-scheduling DE-SCHEDULEd items whose (next) state is
+ * ORDERED:
+ *   - If there are no switches pending at the time that the
+ *     HW executes the de-schedule, the HW will only re-schedule
+ *     the head of the FIFO associated with the given tag. This
+ *     means that in many respects, the HW treats this ORDERED
+ *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
+ *     case (to an ORDERED tag), the HW will do the switch
+ *     before the deschedule whenever it is possible to do
+ *     the switch immediately, so it may often look like
+ *     this case.
+ *   - If there is a pending switch to ORDERED at the time
+ *     the HW executes the de-schedule, the HW will perform
+ *     the switch at the time it re-schedules, and will be
+ *     able to reschedule any/all of the entries with the
+ *     same tag.
+ * Due to this behavior, the RECOMMENDATION to software is
+ * that they have a (next) state of ATOMIC when they
+ * DE-SCHEDULE. If an ORDERED tag is what was really desired,
+ * SW can choose to immediately switch to an ORDERED tag
+ * after the work (that has an ATOMIC tag) is re-scheduled.
+ * Note that since there are never any tag switches pending
+ * when the HW re-schedules, this switch can be IMMEDIATE upon
+ * the reception of the pointer during the re-schedule.
+ *
+ * @param tag      New tag value
+ * @param tag_type New tag type
+ * @param group    New group value
+ * @param no_sched Control whether this work queue entry will be rescheduled.
+ *                 - 1 : don't schedule this work
+ *                 - 0 : allow this work to be scheduled.
+ */
+static inline void cvmx_pow_tag_sw_desched(u32 tag, cvmx_pow_tag_type_t tag_type, u64 group,
+					   u64 no_sched)
+{
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/* Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+	cvmx_pow_tag_sw_desched_nocheck(tag, tag_type, group, no_sched);
+}
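+
+/*
+ * Example (illustrative sketch, following the ATOMIC recommendation
+ * above, and assuming the legacy cvmx_wqe_t word1 layout): park the
+ * current WQE with an ATOMIC tag for group 2 and allow it to be
+ * rescheduled later.
+ *
+ *	cvmx_pow_tag_sw_desched(wqe->word1.tag, CVMX_POW_TAG_TYPE_ATOMIC,
+ *				2, 0);
+ */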
+
+/**
+ * Deschedules the current work queue entry.
+ *
+ * @param no_sched no schedule flag value to be set on the work queue entry.
+ *     If this is set the entry will not be rescheduled.
+ */
+static inline void cvmx_pow_desched(u64 no_sched)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not expected from NULL state\n",
+			     __func__);
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn78xx_other.no_sched = no_sched;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn68xx_other.no_sched = no_sched;
+	} else {
+		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_DESCH;
+		tag_req.s_cn38xx.no_sched = no_sched;
+	}
+	ptr.u64 = 0;
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+		ptr.s_cn78xx.is_io = 1;
+		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG3;
+		ptr.s_cn78xx.node = cvmx_get_node_num();
+	} else {
+		ptr.s.mem_region = CVMX_IO_SEG;
+		ptr.s.is_io = 1;
+		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	}
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/******************************************************************************/
+/* OCTEON3-specific functions.                                                */
+/******************************************************************************/
+/**
+ * This function sets the affinity of a group to the cores in 78xx.
+ * It sets up all the cores in core_mask to accept work from the specified group.
+ *
+ * @param xgrp	Group to accept work from, 0 - 255.
+ * @param core_mask	Mask of all the cores which will accept work from this group
+ * @param mask_set	Every core has a set of 2 masks which can be set to accept work
+ *     from 256 groups. At the time of get_work, cores can choose which mask_set
+ *     to get work from. 'mask_set' values range from 0 to 3, where each of the
+ *     two bits represents a mask set. Cores will be added to the mask set with
+ *     corresponding bit set, and removed from the mask set with corresponding
+ *     bit clear.
+ * Note: cores can only accept work from SSO groups on the same node,
+ * so the node number for the group is derived from the core number.
+ */
+static inline void cvmx_sso_set_group_core_affinity(cvmx_xgrp_t xgrp,
+						    const struct cvmx_coremask *core_mask,
+						    u8 mask_set)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	int core;
+	int grp_index = xgrp.xgrp >> 6;
+	int bit_pos = xgrp.xgrp % 64;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	cvmx_coremask_for_each_core(core, core_mask)
+	{
+		unsigned int node, ncore;
+		u64 reg_addr;
+
+		node = cvmx_coremask_core_to_node(core);
+		ncore = cvmx_coremask_core_on_node(core);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 0, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 1)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+
+		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 1, grp_index);
+		grp_msk.u64 = csr_rd_node(node, reg_addr);
+
+		if (mask_set & 2)
+			grp_msk.s.grp_msk |= (1ull << bit_pos);
+		else
+			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
+
+		csr_wr_node(node, reg_addr, grp_msk.u64);
+	}
+}
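+
+/*
+ * Example (illustrative sketch, OCTEON III only): let every core in a
+ * previously built coremask accept work from extended group 42 via
+ * mask set 0 only.  'cmask' is a hypothetical, already initialized
+ * struct cvmx_coremask.
+ *
+ *	cvmx_xgrp_t xgrp;
+ *
+ *	xgrp.xgrp = 42;
+ *	cvmx_sso_set_group_core_affinity(xgrp, &cmask, 1);
+ */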
+
+/**
+ * This function sets the priority and group affinity arbitration for each group.
+ *
+ * @param node		Node number
+ * @param xgrp	Group 0 - 255 to apply mask parameters to
+ * @param priority	Priority of the group relative to other groups
+ *     0x0 - highest priority
+ *     0x7 - lowest priority
+ * @param weight	Cross-group arbitration weight to apply to this group.
+ *     valid values are 1-63
+ *     h/w default is 0x3f
+ * @param affinity	Processor affinity arbitration weight to apply to this group.
+ *     If zero, affinity is disabled.
+ *     valid values are 0-15
+ *     h/w default is 0xf.
+ * @param modify_mask   Mask of the parameters which need to be modified.
+ *     enum cvmx_sso_group_modify_mask
+ *     to modify only priority -- set bit0
+ *     to modify only weight   -- set bit1
+ *     to modify only affinity -- set bit2
+ */
+static inline void cvmx_sso_set_group_priority(int node, cvmx_xgrp_t xgrp, int priority, int weight,
+					       int affinity,
+					       enum cvmx_sso_group_modify_mask modify_mask)
+{
+	cvmx_sso_grpx_pri_t grp_pri;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	if (weight <= 0)
+		weight = 0x3f; /* Force HW default when out of range */
+
+	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
+	if (grp_pri.s.weight == 0)
+		grp_pri.s.weight = 0x3f;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_PRIORITY)
+		grp_pri.s.pri = priority;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_WEIGHT)
+		grp_pri.s.weight = weight;
+	if (modify_mask & CVMX_SSO_MODIFY_GROUP_AFFINITY)
+		grp_pri.s.affinity = affinity;
+	csr_wr_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp), grp_pri.u64);
+}
+
+/**
+ * Asynchronous work request.
+ * Only works on CN78XX style SSO.
+ *
+ * Work is requested from the SSO unit, and should later be checked with
+ * function cvmx_pow_work_response_async.
+ * This function does NOT wait for previous tag switches to complete,
+ * so the caller must ensure that there is not a pending tag switch.
+ *
+ * @param scr_addr Scratch memory address that response will be returned to,
+ *     which is either a valid WQE, or a response with the invalid bit set.
+ *     Byte address, must be 8 byte aligned.
+ * @param xgrp  Group to receive work for (0-255).
+ * @param wait
+ *     1 to cause response to wait for work to become available (or timeout)
+ *     0 to cause response to return immediately
+ */
+static inline void cvmx_sso_work_request_grp_async_nocheck(int scr_addr, cvmx_xgrp_t xgrp,
+							   cvmx_pow_wait_t wait)
+{
+	cvmx_pow_iobdma_store_t data;
+	unsigned int node = cvmx_get_node_num();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	/* scr_addr must be 8 byte aligned */
+	data.u64 = 0;
+	data.s_cn78xx.scraddr = scr_addr >> 3;
+	data.s_cn78xx.len = 1;
+	data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	data.s_cn78xx.grouped = 1;
+	data.s_cn78xx.index_grp_mask = (node << 8) | xgrp.xgrp;
+	data.s_cn78xx.wait = wait;
+	data.s_cn78xx.node = node;
+
+	cvmx_send_single(data.u64);
+}
+
+/**
+ * Synchronous work request from the node-local SSO without verifying
+ * pending tag switch. It requests work from a specific SSO group.
+ *
+ * @param lgrp The local group number (within the SSO of the node of the caller)
+ *     from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return Returns the WQE pointer from SSO.
+ *     Returns NULL if no work was available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync_nocheck(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_load_addr_t ptr;
+	cvmx_pow_tag_load_resp_t result;
+	unsigned int node = cvmx_get_node_num() & 3;
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
+	}
+	ptr.u64 = 0;
+	ptr.swork_78xx.mem_region = CVMX_IO_SEG;
+	ptr.swork_78xx.is_io = 1;
+	ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.swork_78xx.node = node;
+	ptr.swork_78xx.grouped = 1;
+	ptr.swork_78xx.index = (lgrp & 0xff) | node << 8;
+	ptr.swork_78xx.wait = wait;
+
+	result.u64 = csr_rd(ptr.u64);
+	if (result.s_work.no_work)
+		return NULL;
+	else
+		return cvmx_phys_to_ptr(result.s_work.addr);
+}
+
+/**
+ * Synchronous work request from the node-local SSO.
+ * It requests work from a specific SSO group.
+ * This function waits for any previous tag switch to complete before
+ * requesting the new work.
+ *
+ * @param lgrp The node-local group number from which to get the work.
+ * @param wait When set, call stalls until work becomes available, or times out.
+ *     If not set, returns immediately.
+ *
+ * @return The WQE pointer or NULL, if work is not available.
+ */
+static inline void *cvmx_sso_work_request_grp_sync(unsigned int lgrp, cvmx_pow_wait_t wait)
+{
+	cvmx_pow_tag_sw_wait();
+	return cvmx_sso_work_request_grp_sync_nocheck(lgrp, wait);
+}
+
+/**
+ * This function sets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to apply mask to.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Group mask array.
+ *     Total number of groups is divided into a number of
+ *     64-bits mask sets. Each bit in the mask, if set, enables
+ *     the core to accept work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ *
+ * If the 'mask_set' argument is 3, both mask sets are configured
+ * with the same value (which is not typically the intention),
+ * so keep in mind the function needs to be called twice
+ * to set a different value into each of the mask sets,
+ * once with 'mask_set=1' and a second time with 'mask_set=2'.
+ */
+static inline void cvmx_pow_set_xgrp_mask(u64 core_num, u8 mask_set, const u64 xgrp_mask[])
+{
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS)
+		cvmx_warn_if(mask_set < 1 || mask_set > 3, "Invalid mask set");
+
+	if ((mask_set < 1) || (mask_set > 3))
+		mask_set = 3;
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < (cvmx_sso_num_xgrp() >> 6); grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
+		}
+	}
+}
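+
+/*
+ * Example (illustrative sketch): enable all extended groups for the
+ * calling core in the first mask set of a CN78XX-style SSO (four
+ * 64-bit registers cover the 256 groups; CN73XX uses only the first).
+ *
+ *	u64 all_grps[4] = { ~0ull, ~0ull, ~0ull, ~0ull };
+ *
+ *	cvmx_pow_set_xgrp_mask(cvmx_get_core_num(), 1, all_grps);
+ */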
+
+/**
+ * This function gets the group mask for a core.  The group mask bits
+ * indicate which groups each core will accept work from.
+ *
+ * @param core_num	Processor core to apply mask to.
+ * @param mask_set	7XXX has 2 sets of masks per core.
+ *     Bit 0 represents the first mask set, bit 1 -- the second.
+ * @param xgrp_mask	Provide pointer to u64 mask[8] output array.
+ *     Total number of groups is divided into a number of
+ *     64-bits mask sets. Each bit in the mask represents
+ *     the core accepts work from the corresponding group.
+ *
+ * NOTE: Each core can be configured to accept work in accordance to both
+ * mask sets, with the first having higher precedence over the second,
+ * or to accept work in accordance to just one of the two mask sets.
+ * The 'core_num' argument represents a processor core on any node
+ * in a coherent multi-chip system.
+ */
+static inline void cvmx_pow_get_xgrp_mask(u64 core_num, u8 mask_set, u64 *xgrp_mask)
+{
+	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
+	unsigned int grp, node, core;
+	u64 reg_addr;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_warn_if(mask_set != 1 && mask_set != 2, "Invalid mask set");
+	}
+
+	node = cvmx_coremask_core_to_node(core_num);
+	core = cvmx_coremask_core_on_node(core_num);
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp() >> 6; grp++) {
+		if (mask_set & 1) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+		if (mask_set & 2) {
+			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
+			grp_msk.u64 = csr_rd_node(node, reg_addr);
+			xgrp_mask[grp] = grp_msk.s.grp_msk;
+		}
+	}
+}
+
+/**
+ * Executes the SSO SWTAG command.
+ * This is similar to the cvmx_pow_tag_sw() function, but uses a linear
+ * (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+	}
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
+	tag_req.s_cn78xx_other.type = tag_type;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes the SSO SWTAG_FULL command.
+ * This is similar to the cvmx_pow_tag_sw_full() function, but
+ * uses a linear (vs. integrated group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_full_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, int node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 gxgrp;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch
+	 * cannot be started if a previous switch is still pending.
+	 */
+	CVMX_SYNCWS;
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
+			     "%s called to perform a tag switch to the same tag\n", __func__);
+		cvmx_warn_if(
+			tag_type == CVMX_POW_TAG_TYPE_NULL,
+			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
+			__func__);
+		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
+			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
+				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
+				     __func__, wqp, cvmx_pow_get_current_wqp());
+	}
+	gxgrp = node;
+	gxgrp = gxgrp << 8 | xgrp;
+	wqp->word1.cn78xx.grp = gxgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = gxgrp;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Submits work to an SSO group on any OCI node.
+ * This function updates the work queue entry in DRAM to match
+ * the arguments given.
+ * Note that the tag provided is for the work queue entry submitted,
+ * and is unrelated to the tag that the core currently holds.
+ *
+ * @param wqp pointer to work queue entry to submit.
+ * This entry is updated to match the other parameters
+ * @param tag tag value to be assigned to work queue entry
+ * @param tag_type type of tag
+ * @param xgrp native CN78XX group in the range 0..255
+ * @param node The OCI node number for the target group
+ *
+ * When this function is called on a model prior to CN78XX, which does
+ * not support OCI nodes, the 'node' argument is ignored, and the 'xgrp'
+ * parameter is converted into 'qos' (the lower 3 bits) and 'grp' (the higher
+ * 5 bits), following the backward-compatibility scheme of translating
+ * between new and old style group numbers.
+ */
+static inline void cvmx_pow_work_submit_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
+					     u8 xgrp, u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqp->word1.cn78xx.tag = tag;
+	wqp->word1.cn78xx.tag_type = tag_type;
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+	tag_req.s_cn78xx_other.grp = group;
+
+	ptr.u64 = 0;
+	ptr.s_cn78xx.did = 0x66; // CVMX_OCT_DID_TAG_TAG6;
+	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	ptr.s_cn78xx.is_io = 1;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+
+	/* SYNC write to memory before the work submit.  This is necessary
+	 * as POW may read values from DRAM at this time.
+	 */
+	CVMX_SYNCWS;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
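+
+/*
+ * Usage sketch (editorial illustration, not part of the original SDK API):
+ * tag and submit a prepared work queue entry to group 5 on the local node
+ * with an ORDERED tag. Tag and group values are arbitrary examples.
+ */
+static inline void cvmx_pow_example_submit(cvmx_wqe_t *wqp)
+{
+	cvmx_pow_work_submit_node(wqp, 0x1234, CVMX_POW_TAG_TYPE_ORDERED, 5,
+				  cvmx_get_node_num());
+}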
+
+/**
+ * Executes the SSO SWTAG_DESCHED operation.
+ * This is similar to the cvmx_pow_tag_sw_desched() function, but
+ * uses a linear (vs. unified group-qos) group index.
+ */
+static inline void cvmx_pow_tag_sw_desched_node(cvmx_wqe_t *wqe, u32 tag,
+						cvmx_pow_tag_type_t tag_type, u8 xgrp, u64 no_sched,
+						u8 node)
+{
+	union cvmx_pow_tag_req_addr ptr;
+	cvmx_pow_tag_req_t tag_req;
+	u16 group;
+
+	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
+		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
+		return;
+	}
+	/* Need to make sure any writes to the work queue entry are complete */
+	CVMX_SYNCWS;
+	/*
+	 * Ensure that there is not a pending tag switch, as a tag switch cannot
+	 * be started if a previous switch is still pending.
+	 */
+	cvmx_pow_tag_sw_wait();
+
+	if (CVMX_ENABLE_POW_CHECKS) {
+		cvmx_pow_tag_info_t current_tag;
+
+		__cvmx_pow_warn_if_pending_switch(__func__);
+		current_tag = cvmx_pow_get_current_tag();
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
+			     "%s called with NULL_NULL tag\n", __func__);
+		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
+			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
+			     __func__);
+		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
+			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
+			     "%s called where neither the before or after tag is ATOMIC\n",
+			     __func__);
+	}
+	group = node;
+	group = group << 8 | xgrp;
+	wqe->word1.cn78xx.tag = tag;
+	wqe->word1.cn78xx.tag_type = tag_type;
+	wqe->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	tag_req.u64 = 0;
+	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
+	tag_req.s_cn78xx_other.type = tag_type;
+	tag_req.s_cn78xx_other.grp = group;
+	tag_req.s_cn78xx_other.no_sched = no_sched;
+
+	ptr.u64 = 0;
+	ptr.s.mem_region = CVMX_IO_SEG;
+	ptr.s.is_io = 1;
+	ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
+	ptr.s_cn78xx.node = node;
+	ptr.s_cn78xx.tag = tag;
+	cvmx_write_io(ptr.u64, tag_req.u64);
+}
+
+/**
+ * Executes the UPD_WQP_GRP SSO operation.
+ *
+ * @param wqp  Pointer to the new work queue entry to switch to.
+ * @param xgrp SSO group in the range 0..255
+ *
+ * NOTE: The operation can be performed only on the local node.
+ */
+static inline void cvmx_sso_update_wqp_group(cvmx_wqe_t *wqp, u8 xgrp)
+{
+	union cvmx_pow_tag_req_addr addr;
+	cvmx_pow_tag_req_t data;
+	int node = cvmx_get_node_num();
+	int group = node << 8 | xgrp;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		debug("ERROR: %s is not supported on this chip)\n", __func__);
+		return;
+	}
+	wqp->word1.cn78xx.grp = group;
+	CVMX_SYNCWS;
+
+	data.u64 = 0;
+	data.s_cn78xx_other.op = CVMX_POW_TAG_OP_UPDATE_WQP_GRP;
+	data.s_cn78xx_other.grp = group;
+	data.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
+
+	addr.u64 = 0;
+	addr.s_cn78xx.mem_region = CVMX_IO_SEG;
+	addr.s_cn78xx.is_io = 1;
+	addr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
+	addr.s_cn78xx.node = node;
+	cvmx_write_io(addr.u64, data.u64);
+}
+
+/******************************************************************************/
+/* Define usage of bits within the 32 bit tag values.                         */
+/******************************************************************************/
+/*
+ * Number of bits of the tag used by software.  The SW bits
+ * are always a contiguous block of the high bits, starting at bit 31.
+ * The hardware bits are always the low bits.  By default, the top 8 bits
+ * of the tag are reserved for software, and the low 24 are set by the IPD unit.
+ */
+#define CVMX_TAG_SW_BITS  (8)
+#define CVMX_TAG_SW_SHIFT (32 - CVMX_TAG_SW_BITS)
+
+/* Below is the list of values for the top 8 bits of the tag. */
+/*
+ * Tag values with a top byte of this value are reserved for internal
+ * executive use
+ */
+#define CVMX_TAG_SW_BITS_INTERNAL 0x1
+
+/*
+ * The executive divides the remaining 24 bits as follows:
+ * the upper 8 bits (bits 23 - 16 of the tag) define a subgroup;
+ * the lower 16 bits (bits 15 - 0 of the tag) are the value within
+ * the subgroup. Note that this section describes the format of tags generated
+ * by software - refer to the hardware documentation for a description of the
+ * tag values generated by the packet input hardware.
+ * Subgroups are defined here:
+ */
+
+/* Mask for the value portion of the tag */
+#define CVMX_TAG_SUBGROUP_MASK	0xFFFF
+#define CVMX_TAG_SUBGROUP_SHIFT 16
+#define CVMX_TAG_SUBGROUP_PKO	0x1
+
+/* End of executive tag subgroup definitions */
+
+/* The remaining software bit values 0x2 - 0xff are available
+ * for application use */
+
+/**
+ * This function creates a 32 bit tag value from the two values provided.
+ *
+ * @param sw_bits The upper bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the hw_bits parameter.
+ * @param hw_bits The lower bits (number depends on configuration) are set
+ *     to this value.  The remainder of bits are set by the sw_bits parameter.
+ *
+ * @return 32 bit value of the combined hw and sw bits.
+ */
+static inline u32 cvmx_pow_tag_compose(u64 sw_bits, u64 hw_bits)
+{
+	return (((sw_bits & cvmx_build_mask(CVMX_TAG_SW_BITS)) << CVMX_TAG_SW_SHIFT) |
+		(hw_bits & cvmx_build_mask(32 - CVMX_TAG_SW_BITS)));
+}
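+
+/*
+ * Worked example (editorial illustration, not part of the original SDK API):
+ * build a tag in the internal-executive PKO subgroup with an arbitrary
+ * 16-bit value, following the layout described above: top 8 bits software
+ * (0x1), next 8 bits subgroup, low 16 bits value within the subgroup.
+ */
+static inline u32 cvmx_pow_example_pko_tag(u16 value)
+{
+	u64 hw_bits = ((u64)CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT) |
+		      (value & CVMX_TAG_SUBGROUP_MASK);
+
+	return cvmx_pow_tag_compose(CVMX_TAG_SW_BITS_INTERNAL, hw_bits);
+}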
+
+/**
+ * Extracts the bits allocated for software use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return N bit software tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_sw_bits(u64 tag)
+{
+	return ((tag >> (32 - CVMX_TAG_SW_BITS)) & cvmx_build_mask(CVMX_TAG_SW_BITS));
+}
+
+/**
+ *
+ * Extracts the bits allocated for hardware use from the tag
+ *
+ * @param tag    32 bit tag value
+ *
+ * @return (32 - N) bit software tag value, where N is configurable with
+ *     the CVMX_TAG_SW_BITS define
+ */
+static inline u32 cvmx_pow_tag_get_hw_bits(u64 tag)
+{
+	return (tag & cvmx_build_mask(32 - CVMX_TAG_SW_BITS));
+}
+
+static inline u64 cvmx_sso3_get_wqe_count(int node)
+{
+	cvmx_sso_grpx_aq_cnt_t aq_cnt;
+	unsigned int grp = 0;
+	u64 cnt = 0;
+
+	for (grp = 0; grp < cvmx_sso_num_xgrp(); grp++) {
+		aq_cnt.u64 = csr_rd_node(node, CVMX_SSO_GRPX_AQ_CNT(grp));
+		cnt += aq_cnt.s.aq_cnt;
+	}
+	return cnt;
+}
+
+static inline u64 cvmx_sso_get_total_wqe_count(void)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		int node = cvmx_get_node_num();
+
+		return cvmx_sso3_get_wqe_count(node);
+	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
+		cvmx_sso_iq_com_cnt_t sso_iq_com_cnt;
+
+		sso_iq_com_cnt.u64 = csr_rd(CVMX_SSO_IQ_COM_CNT);
+		return (sso_iq_com_cnt.s.iq_cnt);
+	} else {
+		cvmx_pow_iq_com_cnt_t pow_iq_com_cnt;
+
+		pow_iq_com_cnt.u64 = csr_rd(CVMX_POW_IQ_COM_CNT);
+		return (pow_iq_com_cnt.s.iq_cnt);
+	}
+}
+
+/**
+ * Store the current POW internal state into the supplied
+ * buffer. It is recommended that you pass a buffer of at least
+ * 128KB. The format of the capture may change based on SDK
+ * version and Octeon chip.
+ *
+ * @param buffer Buffer to store capture into
+ * @param buffer_size The size of the supplied buffer
+ *
+ * @return Zero on success, negative on failure
+ */
+int cvmx_pow_capture(void *buffer, int buffer_size);
+
+/**
+ * Dump a POW capture to the console in a human readable format.
+ *
+ * @param buffer POW capture from cvmx_pow_capture()
+ * @param buffer_size Size of the buffer
+ */
+void cvmx_pow_display(void *buffer, int buffer_size);
+
+/**
+ * Return the number of POW entries supported by this chip
+ *
+ * @return Number of POW entries
+ */
+int cvmx_pow_get_num_entries(void);
+int cvmx_pow_get_dump_size(void);
+
+/**
+ * This will allocate 'count' SSO groups on the specified node to the
+ * calling application. These groups will be for the exclusive use of the
+ * application until they are freed.
+ * @param node The numa node for the allocation.
+ * @param base_group Pointer to the initial group, -1 to allocate anywhere.
+ * @param count  The number of consecutive groups to allocate.
+ * @return 0 on success and -1 on failure.
+ */
+int cvmx_sso_reserve_group_range(int node, int *base_group, int count);
+#define cvmx_sso_allocate_group_range cvmx_sso_reserve_group_range
+int cvmx_sso_reserve_group(int node);
+#define cvmx_sso_allocate_group cvmx_sso_reserve_group
+int cvmx_sso_release_group_range(int node, int base_group, int count);
+int cvmx_sso_release_group(int node, int group);
+
+/**
+ * Show integrated SSO configuration.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_config_dump(unsigned int node);
+
+/**
+ * Show integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_dump(unsigned int node);
+
+/**
+ * Clear integrated SSO statistics.
+ *
+ * @param node	   node number
+ */
+int cvmx_sso_stats_clear(unsigned int node);
+
+/**
+ * Show SSO core-group affinity and priority per node (multi-node systems)
+ */
+void cvmx_pow_mask_priority_dump_node(unsigned int node, struct cvmx_coremask *avail_coremask);
+
+/**
+ * Show POW/SSO core-group affinity and priority (legacy, single-node systems)
+ */
+static inline void cvmx_pow_mask_priority_dump(struct cvmx_coremask *avail_coremask)
+{
+	cvmx_pow_mask_priority_dump_node(0 /*node */, avail_coremask);
+}
+
+/**
+ * Show SSO performance counters (multi-node systems)
+ */
+void cvmx_pow_show_perf_counters_node(unsigned int node);
+
+/**
+ * Show POW/SSO performance counters (legacy, single-node systems)
+ */
+static inline void cvmx_pow_show_perf_counters(void)
+{
+	cvmx_pow_show_perf_counters_node(0 /*node */);
+}
+
+#endif /* __CVMX_POW_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-qlm.h b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
new file mode 100644
index 000000000000..19915eb82c51
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __CVMX_QLM_H__
+#define __CVMX_QLM_H__
+
+/*
+ * Interface 0 on the 78xx can be connected to qlm 0 or qlm 2. When interface
+ * 0 is connected to qlm 0, this macro must be set to 0. When interface 0 is
+ * connected to qlm 2, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE0 0
+
+/*
+ * Interface 1 on the 78xx can be connected to qlm 1 or qlm 3. When interface
+ * 1 is connected to qlm 1, this macro must be set to 0. When interface 1 is
+ * connected to qlm 3, this macro must be set to 1.
+ */
+#define MUX_78XX_IFACE1 0
+
+/* Uncomment this line to print QLM JTAG state */
+/* #define CVMX_QLM_DUMP_STATE 1 */
+
+typedef struct {
+	const char *name;
+	int stop_bit;
+	int start_bit;
+} __cvmx_qlm_jtag_field_t;
+
+/**
+ * Return the number of QLMs supported by the chip
+ *
+ * @return  Number of QLMs
+ */
+int cvmx_qlm_get_num(void);
+
+/**
+ * Return the qlm number based on the interface
+ *
+ * @param xiface  Interface to look
+ */
+int cvmx_qlm_interface(int xiface);
+
+/**
+ * Return the qlm number for a port in the interface
+ *
+ * @param xiface  interface to look up
+ * @param index  index in an interface
+ *
+ * @return the qlm number based on the xiface
+ */
+int cvmx_qlm_lmac(int xiface, int index);
+
+/**
+ * Return whether DLM5, DLM6, or both DLM5+DLM6 are used by the BGX
+ *
+ * @param bgx  BGX to search for.
+ *
+ * @return muxes used: 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
+ */
+int cvmx_qlm_mux_interface(int bgx);
+
+/**
+ * Return number of lanes for a given qlm
+ *
+ * @param qlm QLM block to query
+ *
+ * @return  Number of lanes
+ */
+int cvmx_qlm_get_lanes(int qlm);
+
+/**
+ * Get the QLM JTAG fields for the Octeon model, on the supported chips.
+ *
+ * @return  qlm_jtag_field_t structure
+ */
+const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void);
+
+/**
+ * Get the QLM JTAG length by going through qlm_jtag_field for each
+ * Octeon model that is supported
+ *
+ * @return return the length.
+ */
+int cvmx_qlm_jtag_get_length(void);
+
+/**
+ * Initialize the QLM layer
+ */
+void cvmx_qlm_init(void);
+
+/**
+ * Get a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to get
+ * @param lane   Lane in QLM to get
+ * @param name   String name of field
+ *
+ * @return JTAG field value
+ */
+u64 cvmx_qlm_jtag_get(int qlm, int lane, const char *name);
+
+/**
+ * Set a field in a QLM JTAG chain
+ *
+ * @param qlm    QLM to set
+ * @param lane   Lane in QLM to set, or -1 for all lanes
+ * @param name   String name of field
+ * @param value  Value of the field
+ */
+void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, u64 value);
+
+/**
+ * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
+ * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
+ * JTAG settings for the QLMs to run better at 5 and 6.25 GHz.
+ */
+void __cvmx_qlm_speed_tweak(void);
+
+/**
+ * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
+ * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
+ * This function tweaks the JTAG settings for the QLMs for PCIe to run better.
+ */
+void __cvmx_qlm_pcie_idle_dac_tweak(void);
+
+void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane);
+
+/**
+ * Get the speed (Gbaud) of the QLM in MHz.
+ *
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz(int qlm);
+
+/**
+ * Get the speed (Gbaud) of the QLM in MHz on a specific node.
+ *
+ * @param node   Target QLM node
+ * @param qlm    QLM to examine
+ *
+ * @return Speed in MHz
+ */
+int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm);
+
+enum cvmx_qlm_mode {
+	CVMX_QLM_MODE_DISABLED = -1,
+	CVMX_QLM_MODE_SGMII = 1,
+	CVMX_QLM_MODE_XAUI,
+	CVMX_QLM_MODE_RXAUI,
+	CVMX_QLM_MODE_PCIE,	/* gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X2, /* 1x2 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_2X1, /* 2x1 gen2 / gen1 */
+	CVMX_QLM_MODE_PCIE_1X1, /* 1x1 gen2 / gen1 */
+	CVMX_QLM_MODE_SRIO_1X4, /* 1x4 short / long */
+	CVMX_QLM_MODE_SRIO_2X2, /* 2x2 short / long */
+	CVMX_QLM_MODE_SRIO_4X1, /* 4x1 short / long */
+	CVMX_QLM_MODE_ILK,
+	CVMX_QLM_MODE_QSGMII,
+	CVMX_QLM_MODE_SGMII_SGMII,
+	CVMX_QLM_MODE_SGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_SGMII,
+	CVMX_QLM_MODE_SGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_QSGMII,
+	CVMX_QLM_MODE_QSGMII_DISABLED,
+	CVMX_QLM_MODE_DISABLED_QSGMII,
+	CVMX_QLM_MODE_QSGMII_SGMII,
+	CVMX_QLM_MODE_RXAUI_1X2,
+	CVMX_QLM_MODE_SATA_2X1,
+	CVMX_QLM_MODE_XLAUI,
+	CVMX_QLM_MODE_XFI,
+	CVMX_QLM_MODE_10G_KR,
+	CVMX_QLM_MODE_40G_KR4,
+	CVMX_QLM_MODE_PCIE_1X8, /* 1x8 gen3 / gen2 / gen1 */
+	CVMX_QLM_MODE_RGMII_SGMII,
+	CVMX_QLM_MODE_RGMII_XFI,
+	CVMX_QLM_MODE_RGMII_10G_KR,
+	CVMX_QLM_MODE_RGMII_RXAUI,
+	CVMX_QLM_MODE_RGMII_XAUI,
+	CVMX_QLM_MODE_RGMII_XLAUI,
+	CVMX_QLM_MODE_RGMII_40G_KR4,
+	CVMX_QLM_MODE_MIXED,		/* BGX2 is mixed mode, DLM5(SGMII) & DLM6(XFI) */
+	CVMX_QLM_MODE_SGMII_2X1,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_10G_KR_1X2,	/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_XFI_1X2,		/* Configure BGX2 separate for DLM5 & DLM6 */
+	CVMX_QLM_MODE_RGMII_SGMII_1X1,	/* Configure BGX2, applies to DLM5 */
+	CVMX_QLM_MODE_RGMII_SGMII_2X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_10G_KR_1X1, /* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_RGMII_XFI_1X1,	/* Configure BGX2, applies to DLM6 */
+	CVMX_QLM_MODE_SDL,		/* RMAC Pipe */
+	CVMX_QLM_MODE_CPRI,		/* RMAC */
+	CVMX_QLM_MODE_OCI
+};
+
+enum cvmx_gmx_inf_mode {
+	CVMX_GMX_INF_MODE_DISABLED = 0,
+	CVMX_GMX_INF_MODE_SGMII = 1,  /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_QSGMII = 2, /* Other interface can be SGMII or QSGMII */
+	CVMX_GMX_INF_MODE_RXAUI = 3,  /* Only interface 0, interface 1 must be DISABLED */
+};
+
+/**
+ * Eye diagram captures are stored in the following structure
+ */
+typedef struct {
+	int width;	   /* Width in the x direction (time) */
+	int height;	   /* Height in the y direction (voltage) */
+	u32 data[64][128]; /* Error count at location, saturates at max */
+} cvmx_qlm_eye_t;
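+
+/*
+ * Usage sketch (editorial illustration, not part of the original SDK API):
+ * scan a captured eye diagram for its worst (highest) error count.
+ */
+static inline u32 cvmx_qlm_example_eye_max_errors(const cvmx_qlm_eye_t *eye)
+{
+	u32 max = 0;
+	int x, y;
+
+	for (y = 0; y < eye->height && y < 64; y++)
+		for (x = 0; x < eye->width && x < 128; x++)
+			if (eye->data[y][x] > max)
+				max = eye->data[y][x];
+
+	return max;
+}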
+
+/**
+ * These apply to DLM1 and DLM2 if they are not in SATA mode.
+ * The manual refers to lanes as follows:
+ *  DLM 0 lane 0 == GSER0 lane 0
+ *  DLM 0 lane 1 == GSER0 lane 1
+ *  DLM 1 lane 2 == GSER1 lane 0
+ *  DLM 1 lane 3 == GSER1 lane 1
+ *  DLM 2 lane 4 == GSER2 lane 0
+ *  DLM 2 lane 5 == GSER2 lane 1
+ */
+enum cvmx_pemx_cfg_mode {
+	CVMX_PEM_MD_GEN2_2LANE = 0, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN2_1LANE = 1, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN2_4LANE = 2, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+	CVMX_PEM_MD_GEN1_2LANE = 4, /* Valid for PEM0(DLM1), PEM1(DLM2) */
+	CVMX_PEM_MD_GEN1_1LANE = 5, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
+	CVMX_PEM_MD_GEN1_4LANE = 6, /* Valid for PEM0(DLM1-2) */
+	/* Reserved */
+};
+
+/*
+ * Read QLM and return mode.
+ */
+enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm);
+enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int dlm_mode, int interface);
+void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier);
+
+void cvmx_qlm_display_registers(int qlm);
+
+int cvmx_qlm_measure_clock(int qlm);
+
+/**
+ * Measure the reference clock of a QLM on a multi-node setup
+ *
+ * @param node   node to measure
+ * @param qlm    QLM to measure
+ *
+ * @return Clock rate in Hz
+ */
+int cvmx_qlm_measure_clock_node(int node, int qlm);
+
+/**
+ * Perform RX equalization on a QLM
+ *
+ * @param node	Node the QLM is on
+ * @param qlm	QLM to perform RX equalization on
+ * @param lane	Lane to use, or -1 for all lanes
+ *
+ * @return Zero on success, negative if any lane failed RX equalization
+ */
+int __cvmx_qlm_rx_equalization(int node, int qlm, int lane);
+
+/**
+ * Errata GSER-27882 - GSER 10GBASE-KR Transmit Equalizer
+ * Training may not update PHY Tx Taps. This function is not static
+ * so we can share it with BGX KR.
+ *
+ * @param node	Node to apply errata workaround
+ * @param qlm	QLM to apply errata workaround
+ * @param lane	Lane to apply the errata
+ */
+int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane);
+
+void cvmx_qlm_gser_errata_25992(int node, int qlm);
+
+#ifdef CVMX_DUMP_GSER
+/**
+ * Dump GSER configuration for node 0
+ */
+int cvmx_dump_gser_config(unsigned int gser);
+/**
+ * Dump GSER status for node 0
+ */
+int cvmx_dump_gser_status(unsigned int gser);
+/**
+ * Dump GSER configuration
+ */
+int cvmx_dump_gser_config_node(unsigned int node, unsigned int gser);
+/**
+ * Dump GSER status
+ */
+int cvmx_dump_gser_status_node(unsigned int node, unsigned int gser);
+#endif
+
+int cvmx_qlm_eye_display(int node, int qlm, int qlm_lane, int format, const cvmx_qlm_eye_t *eye);
+
+void cvmx_prbs_process_cmd(int node, int qlm, int mode);
+
+#endif /* __CVMX_QLM_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-scratch.h b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
new file mode 100644
index 000000000000..d567a8453b7a
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This file provides support for the processor local scratch memory.
+ * Scratch memory is byte addressable - all addresses are byte addresses.
+ */
+
+#ifndef __CVMX_SCRATCH_H__
+#define __CVMX_SCRATCH_H__
+
+/*
+ * Note: This define must be a long, not a long long, in order to compile
+ * without warnings for both 32-bit and 64-bit.
+ */
+#define CVMX_SCRATCH_BASE (-32768l) /* 0xffffffffffff8000 */
+
+/* Scratch line for LMTST/LMTDMA on Octeon3 models */
+#ifdef CVMX_CAVIUM_OCTEON3
+#define CVMX_PKO_LMTLINE 2ull
+#endif
+
+/**
+ * Reads an 8 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u8 cvmx_scratch_read8(u64 address)
+{
+	return *CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 16 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u16 cvmx_scratch_read16(u64 address)
+{
+	return *CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 32 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u32 cvmx_scratch_read32(u64 address)
+{
+	return *CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Reads a 64 bit value from the processor local scratchpad memory.
+ *
+ * @param address byte address to read from
+ *
+ * @return value read
+ */
+static inline u64 cvmx_scratch_read64(u64 address)
+{
+	return *CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address);
+}
+
+/**
+ * Writes an 8 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write8(u64 address, u64 value)
+{
+	*CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address) = (u8)value;
+}
+
+/**
+ * Writes a 16 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write16(u64 address, u64 value)
+{
+	*CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address) = (u16)value;
+}
+
+/**
+ * Writes a 32 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write32(u64 address, u64 value)
+{
+	*CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address) = (u32)value;
+}
+
+/**
+ * Writes a 64 bit value to the processor local scratchpad memory.
+ *
+ * @param address byte address to write to
+ * @param value   value to write
+ */
+static inline void cvmx_scratch_write64(u64 address, u64 value)
+{
+	*CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address) = value;
+}
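+
+/*
+ * Usage sketch (editorial illustration, not part of the original SDK API):
+ * scratch addresses are plain byte offsets, so a 64-bit value written at
+ * an offset reads back from the same offset. The offset is an arbitrary
+ * example and must stay within the chip's scratchpad size.
+ */
+static inline u64 cvmx_scratch_example_roundtrip(u64 value)
+{
+	cvmx_scratch_write64(8, value);
+	return cvmx_scratch_read64(8);
+}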
+
+#endif /* __CVMX_SCRATCH_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/cvmx-wqe.h b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
new file mode 100644
index 000000000000..c9e3c8312a65
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
@@ -0,0 +1,1462 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This header file defines the work queue entry (wqe) data structure.
+ * Since this is a commonly used structure that depends on structures
+ * from several hardware blocks, those definitions have been placed
+ * in this file to create a single point of definition of the wqe
+ * format.
+ * Data structures are still named according to the block that they
+ * relate to.
+ */
+
+#ifndef __CVMX_WQE_H__
+#define __CVMX_WQE_H__
+
+#include "cvmx-packet.h"
+#include "cvmx-csr-enums.h"
+#include "cvmx-pki-defs.h"
+#include "cvmx-pip-defs.h"
+#include "octeon-feature.h"
+
+#define OCT_TAG_TYPE_STRING(x)						\
+	(((x) == CVMX_POW_TAG_TYPE_ORDERED) ?				\
+	 "ORDERED" :							\
+	 (((x) == CVMX_POW_TAG_TYPE_ATOMIC) ?				\
+	  "ATOMIC" :							\
+	  (((x) == CVMX_POW_TAG_TYPE_NULL) ? "NULL" : "NULL_NULL")))
+
+/* Error levels in WQE WORD2 (ERRLEV).*/
+#define PKI_ERRLEV_E__RE_M 0x0
+#define PKI_ERRLEV_E__LA_M 0x1
+#define PKI_ERRLEV_E__LB_M 0x2
+#define PKI_ERRLEV_E__LC_M 0x3
+#define PKI_ERRLEV_E__LD_M 0x4
+#define PKI_ERRLEV_E__LE_M 0x5
+#define PKI_ERRLEV_E__LF_M 0x6
+#define PKI_ERRLEV_E__LG_M 0x7
+
+enum cvmx_pki_errlevel {
+	CVMX_PKI_ERRLEV_E_RE = PKI_ERRLEV_E__RE_M,
+	CVMX_PKI_ERRLEV_E_LA = PKI_ERRLEV_E__LA_M,
+	CVMX_PKI_ERRLEV_E_LB = PKI_ERRLEV_E__LB_M,
+	CVMX_PKI_ERRLEV_E_LC = PKI_ERRLEV_E__LC_M,
+	CVMX_PKI_ERRLEV_E_LD = PKI_ERRLEV_E__LD_M,
+	CVMX_PKI_ERRLEV_E_LE = PKI_ERRLEV_E__LE_M,
+	CVMX_PKI_ERRLEV_E_LF = PKI_ERRLEV_E__LF_M,
+	CVMX_PKI_ERRLEV_E_LG = PKI_ERRLEV_E__LG_M
+};
+
+#define CVMX_PKI_ERRLEV_MAX BIT(3) /* The size of WORD2:ERRLEV field.*/
+
+/* Error code in WQE WORD2 (OPCODE).*/
+#define CVMX_PKI_OPCODE_RE_NONE	      0x0
+#define CVMX_PKI_OPCODE_RE_PARTIAL    0x1
+#define CVMX_PKI_OPCODE_RE_JABBER     0x2
+#define CVMX_PKI_OPCODE_RE_FCS	      0x7
+#define CVMX_PKI_OPCODE_RE_FCS_RCV    0x8
+#define CVMX_PKI_OPCODE_RE_TERMINATE  0x9
+#define CVMX_PKI_OPCODE_RE_RX_CTL     0xb
+#define CVMX_PKI_OPCODE_RE_SKIP	      0xc
+#define CVMX_PKI_OPCODE_RE_DMAPKT     0xf
+#define CVMX_PKI_OPCODE_RE_PKIPAR     0x13
+#define CVMX_PKI_OPCODE_RE_PKIPCAM    0x14
+#define CVMX_PKI_OPCODE_RE_MEMOUT     0x15
+#define CVMX_PKI_OPCODE_RE_BUFS_OFLOW 0x16
+#define CVMX_PKI_OPCODE_L2_FRAGMENT   0x20
+#define CVMX_PKI_OPCODE_L2_OVERRUN    0x21
+#define CVMX_PKI_OPCODE_L2_PFCS	      0x22
+#define CVMX_PKI_OPCODE_L2_PUNY	      0x23
+#define CVMX_PKI_OPCODE_L2_MAL	      0x24
+#define CVMX_PKI_OPCODE_L2_OVERSIZE   0x25
+#define CVMX_PKI_OPCODE_L2_UNDERSIZE  0x26
+#define CVMX_PKI_OPCODE_L2_LENMISM    0x27
+#define CVMX_PKI_OPCODE_IP_NOT	      0x41
+#define CVMX_PKI_OPCODE_IP_CHK	      0x42
+#define CVMX_PKI_OPCODE_IP_MAL	      0x43
+#define CVMX_PKI_OPCODE_IP_MALD	      0x44
+#define CVMX_PKI_OPCODE_IP_HOP	      0x45
+#define CVMX_PKI_OPCODE_L4_MAL	      0x61
+#define CVMX_PKI_OPCODE_L4_CHK	      0x62
+#define CVMX_PKI_OPCODE_L4_LEN	      0x63
+#define CVMX_PKI_OPCODE_L4_PORT	      0x64
+#define CVMX_PKI_OPCODE_TCP_FLAG      0x65
+
+#define CVMX_PKI_OPCODE_MAX BIT(8) /* The size of WORD2:OPCODE field.*/
+
+/* Layer types in pki */
+#define CVMX_PKI_LTYPE_E_NONE_M	      0x0
+#define CVMX_PKI_LTYPE_E_ENET_M	      0x1
+#define CVMX_PKI_LTYPE_E_VLAN_M	      0x2
+#define CVMX_PKI_LTYPE_E_SNAP_PAYLD_M 0x5
+#define CVMX_PKI_LTYPE_E_ARP_M	      0x6
+#define CVMX_PKI_LTYPE_E_RARP_M	      0x7
+#define CVMX_PKI_LTYPE_E_IP4_M	      0x8
+#define CVMX_PKI_LTYPE_E_IP4_OPT_M    0x9
+#define CVMX_PKI_LTYPE_E_IP6_M	      0xA
+#define CVMX_PKI_LTYPE_E_IP6_OPT_M    0xB
+#define CVMX_PKI_LTYPE_E_IPSEC_ESP_M  0xC
+#define CVMX_PKI_LTYPE_E_IPFRAG_M     0xD
+#define CVMX_PKI_LTYPE_E_IPCOMP_M     0xE
+#define CVMX_PKI_LTYPE_E_TCP_M	      0x10
+#define CVMX_PKI_LTYPE_E_UDP_M	      0x11
+#define CVMX_PKI_LTYPE_E_SCTP_M	      0x12
+#define CVMX_PKI_LTYPE_E_UDP_VXLAN_M  0x13
+#define CVMX_PKI_LTYPE_E_GRE_M	      0x14
+#define CVMX_PKI_LTYPE_E_NVGRE_M      0x15
+#define CVMX_PKI_LTYPE_E_GTP_M	      0x16
+#define CVMX_PKI_LTYPE_E_SW28_M	      0x1C
+#define CVMX_PKI_LTYPE_E_SW29_M	      0x1D
+#define CVMX_PKI_LTYPE_E_SW30_M	      0x1E
+#define CVMX_PKI_LTYPE_E_SW31_M	      0x1F
+
+enum cvmx_pki_layer_type {
+	CVMX_PKI_LTYPE_E_NONE = CVMX_PKI_LTYPE_E_NONE_M,
+	CVMX_PKI_LTYPE_E_ENET = CVMX_PKI_LTYPE_E_ENET_M,
+	CVMX_PKI_LTYPE_E_VLAN = CVMX_PKI_LTYPE_E_VLAN_M,
+	CVMX_PKI_LTYPE_E_SNAP_PAYLD = CVMX_PKI_LTYPE_E_SNAP_PAYLD_M,
+	CVMX_PKI_LTYPE_E_ARP = CVMX_PKI_LTYPE_E_ARP_M,
+	CVMX_PKI_LTYPE_E_RARP = CVMX_PKI_LTYPE_E_RARP_M,
+	CVMX_PKI_LTYPE_E_IP4 = CVMX_PKI_LTYPE_E_IP4_M,
+	CVMX_PKI_LTYPE_E_IP4_OPT = CVMX_PKI_LTYPE_E_IP4_OPT_M,
+	CVMX_PKI_LTYPE_E_IP6 = CVMX_PKI_LTYPE_E_IP6_M,
+	CVMX_PKI_LTYPE_E_IP6_OPT = CVMX_PKI_LTYPE_E_IP6_OPT_M,
+	CVMX_PKI_LTYPE_E_IPSEC_ESP = CVMX_PKI_LTYPE_E_IPSEC_ESP_M,
+	CVMX_PKI_LTYPE_E_IPFRAG = CVMX_PKI_LTYPE_E_IPFRAG_M,
+	CVMX_PKI_LTYPE_E_IPCOMP = CVMX_PKI_LTYPE_E_IPCOMP_M,
+	CVMX_PKI_LTYPE_E_TCP = CVMX_PKI_LTYPE_E_TCP_M,
+	CVMX_PKI_LTYPE_E_UDP = CVMX_PKI_LTYPE_E_UDP_M,
+	CVMX_PKI_LTYPE_E_SCTP = CVMX_PKI_LTYPE_E_SCTP_M,
+	CVMX_PKI_LTYPE_E_UDP_VXLAN = CVMX_PKI_LTYPE_E_UDP_VXLAN_M,
+	CVMX_PKI_LTYPE_E_GRE = CVMX_PKI_LTYPE_E_GRE_M,
+	CVMX_PKI_LTYPE_E_NVGRE = CVMX_PKI_LTYPE_E_NVGRE_M,
+	CVMX_PKI_LTYPE_E_GTP = CVMX_PKI_LTYPE_E_GTP_M,
+	CVMX_PKI_LTYPE_E_SW28 = CVMX_PKI_LTYPE_E_SW28_M,
+	CVMX_PKI_LTYPE_E_SW29 = CVMX_PKI_LTYPE_E_SW29_M,
+	CVMX_PKI_LTYPE_E_SW30 = CVMX_PKI_LTYPE_E_SW30_M,
+	CVMX_PKI_LTYPE_E_SW31 = CVMX_PKI_LTYPE_E_SW31_M,
+	CVMX_PKI_LTYPE_E_MAX = CVMX_PKI_LTYPE_E_SW31
+};
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 ptr_vlan : 8;
+		u64 ptr_layer_g : 8;
+		u64 ptr_layer_f : 8;
+		u64 ptr_layer_e : 8;
+		u64 ptr_layer_d : 8;
+		u64 ptr_layer_c : 8;
+		u64 ptr_layer_b : 8;
+		u64 ptr_layer_a : 8;
+	};
+} cvmx_pki_wqe_word4_t;
+
+/**
+ * HW decode / err_code in work queue entry
+ */
+typedef union {
+	u64 u64;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 ip_offset : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 4;
+		u64 unassigned2 : 4;
+		u64 dec_ipcomp : 1;
+		u64 tcp_or_udp : 1;
+		u64 dec_ipsec : 1;
+		u64 is_v6 : 1;
+		u64 software : 1;
+		u64 L4_error : 1;
+		u64 is_frag : 1;
+		u64 IP_exc : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} s_cn38xx;
+	struct {
+		u64 unused1 : 16;
+		u64 vlan : 16;
+		u64 unused2 : 32;
+	} svlan;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 varies : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 port : 12;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn68xx;
+	struct {
+		u64 bufs : 8;
+		u64 unused : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 unassigned : 1;
+		u64 vlan_cfi : 1;
+		u64 vlan_id : 12;
+		u64 pr : 4;
+		u64 unassigned2a : 8;
+		u64 unassigned2 : 4;
+		u64 software : 1;
+		u64 unassigned3 : 1;
+		u64 is_rarp : 1;
+		u64 is_arp : 1;
+		u64 is_bcast : 1;
+		u64 is_mcast : 1;
+		u64 not_IP : 1;
+		u64 rcv_error : 1;
+		u64 err_code : 8;
+	} snoip_cn38xx;
+} cvmx_pip_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 software : 1;
+		u64 lg_hdr_type : 5;
+		u64 lf_hdr_type : 5;
+		u64 le_hdr_type : 5;
+		u64 ld_hdr_type : 5;
+		u64 lc_hdr_type : 5;
+		u64 lb_hdr_type : 5;
+		u64 is_la_ether : 1;
+		u64 rsvd_0 : 8;
+		u64 vlan_valid : 1;
+		u64 vlan_stacked : 1;
+		u64 stat_inc : 1;
+		u64 pcam_flag4 : 1;
+		u64 pcam_flag3 : 1;
+		u64 pcam_flag2 : 1;
+		u64 pcam_flag1 : 1;
+		u64 is_frag : 1;
+		u64 is_l3_bcast : 1;
+		u64 is_l3_mcast : 1;
+		u64 is_l2_bcast : 1;
+		u64 is_l2_mcast : 1;
+		u64 is_raw : 1;
+		u64 err_level : 3;
+		u64 err_code : 8;
+	};
+} cvmx_pki_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	cvmx_pki_wqe_word2_t pki;
+	cvmx_pip_wqe_word2_t pip;
+} cvmx_wqe_word2_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u16 hw_chksum;
+		u8 unused;
+		u64 next_ptr : 40;
+	} cn38xx;
+	struct {
+		u64 l4ptr : 8;	  /* 56..63 */
+		u64 unused0 : 8;  /* 48..55 */
+		u64 l3ptr : 8;	  /* 40..47 */
+		u64 l2ptr : 8;	  /* 32..39 */
+		u64 unused1 : 18; /* 14..31 */
+		u64 bpid : 6;	  /* 8..13 */
+		u64 unused2 : 2;  /* 6..7 */
+		u64 pknd : 6;	  /* 0..5 */
+	} cn68xx;
+} cvmx_pip_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 4;
+		u64 aura : 12;
+		u64 rsvd_1 : 1;
+		u64 apad : 3;
+		u64 channel : 12;
+		u64 bufs : 8;
+		u64 style : 8;
+		u64 rsvd_2 : 10;
+		u64 pknd : 6;
+	};
+} cvmx_pki_wqe_word0_t;
+
+/* Use reserved bit, set by HW to 0, to indicate buf_ptr legacy translation */
+#define pki_wqe_translated word0.rsvd_1
+
+typedef union {
+	u64 u64;
+	cvmx_pip_wqe_word0_t pip;
+	cvmx_pki_wqe_word0_t pki;
+	struct {
+		u64 unused : 24;
+		u64 next_ptr : 40; /* On cn68xx this is unused as well */
+	} raw;
+} cvmx_wqe_word0_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 rsvd_0 : 2;
+		u64 rsvd_1 : 2;
+		u64 grp : 10;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+} cvmx_pki_wqe_word1_t;
+
+#define pki_errata20776 word1.rsvd_0
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 len : 16;
+		u64 varies : 14;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	};
+	cvmx_pki_wqe_word1_t cn78xx;
+	struct {
+		u64 len : 16;
+		u64 zero_0 : 1;
+		u64 qos : 3;
+		u64 zero_1 : 1;
+		u64 grp : 6;
+		u64 zero_2 : 3;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn68xx;
+	struct {
+		u64 len : 16;
+		u64 ipprt : 6;
+		u64 qos : 3;
+		u64 grp : 4;
+		u64 zero_2 : 1;
+		cvmx_pow_tag_type_t tag_type : 2;
+		u64 tag : 32;
+	} cn38xx;
+} cvmx_wqe_word1_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 8;
+		u64 hwerr : 8;
+		u64 rsvd_1 : 24;
+		u64 sqid : 8;
+		u64 rsvd_2 : 4;
+		u64 vfnum : 12;
+	};
+} cvmx_wqe_word3_t;
+
+typedef union {
+	u64 u64;
+	struct {
+		u64 rsvd_0 : 21;
+		u64 sqfc : 11;
+		u64 rsvd_1 : 5;
+		u64 sqtail : 11;
+		u64 rsvd_2 : 3;
+		u64 sqhead : 13;
+	};
+} cvmx_wqe_word4_t;
+
+/**
+ * Work queue entry format.
+ * Must be 8-byte aligned.
+ */
+typedef struct cvmx_wqe_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 *conditions.
+	 */
+	cvmx_pip_wqe_word2_t word2;
+
+	/* Pointer to the first segment of the packet. */
+	cvmx_buf_ptr_t packet_ptr;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts writing here at the point where the IP header starts
+	 * (except that the IPv4 header is padded for appropriate alignment).
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[96];
+
+	/* If desired, SW can make the work Q entry any length. For the purposes
+	 * of discussion here, assume 128B always, as this is all that the hardware
+	 * deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_t;
+
+/**
+ * Work queue entry format for NQM
+ * Must be 8-byte aligned
+ */
+typedef struct cvmx_wqe_nqm_s {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Reserved */
+	u64 word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word3_t word3;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* NVMe specific information.*/
+	cvmx_wqe_word4_t word4;
+
+	/* HW WRITE: OCTEON will fill in a programmable amount from the packet,
+	 * up to (at most, but perhaps less) the amount needed to fill the work
+	 * queue entry to 128 bytes. If the packet is recognized to be IP, the
+	 * hardware starts writing here at the point where the IP header starts
+	 * (except that the IPv4 header is padded for appropriate alignment).
+	 * If the packet is not recognized to be IP, the hardware starts
+	 * writing the beginning of the packet here.
+	 */
+	u8 packet_data[88];
+
+	/* If desired, SW can make the work Q entry any length.
+	 * For the purposes of discussion here, assume 128B always, as this is
+	 * all that the hardware deals with.
+	 */
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_nqm_t;
+
+/**
+ * Work queue entry format for 78XX.
+ * In 78XX packet data always resides in WQE buffer unless option
+ * DIS_WQ_DAT=1 in PKI_STYLE_BUF, which causes packet data to use separate buffer.
+ *
+ * Must be 8-byte aligned.
+ */
+typedef struct {
+	/*-------------------------------------------------------------------*/
+	/* WORD 0                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word0_t word0;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 1                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled by HW when a packet
+	 * arrives.
+	 */
+	cvmx_pki_wqe_word1_t word1;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 2                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64-bits are filled in by hardware when a
+	 * packet arrives. This indicates a variety of status and error
+	 * conditions.
+	 */
+	cvmx_pki_wqe_word2_t word2;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 3                                                            */
+	/*-------------------------------------------------------------------*/
+	/* Pointer to the first segment of the packet.*/
+	cvmx_buf_ptr_pki_t packet_ptr;
+
+	/*-------------------------------------------------------------------*/
+	/* WORD 4                                                            */
+	/*-------------------------------------------------------------------*/
+	/* HW WRITE: the following 64 bits are filled in by hardware when a
+	 * packet arrives. They contain a byte pointer to the start of Layers
+	 * A/B/C/D/E/F/G relative to the start of the packet.
+	 */
+	cvmx_pki_wqe_word4_t word4;
+
+	/*-------------------------------------------------------------------*/
+	/* WORDs 5/6/7 may be extended here, if WQE_HSZ is set.              */
+	/*-------------------------------------------------------------------*/
+	u64 wqe_data[11];
+
+} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_78xx_t;
+
+/* Node LS-bit position in the WQE[grp] or PKI_QPG_TBL[grp_ok].*/
+#define CVMX_WQE_GRP_NODE_SHIFT 8
+
+/*
+ * This is an accessor function into the WQE that retrieves the
+ * ingress port number, which can also be used as a destination
+ * port number for the same port.
+ *
+ * @param work - Work Queue Entry pointer
+ * @return the normalized port number, also known as the "ipd" port
+ */
+static inline int cvmx_wqe_get_port(cvmx_wqe_t *work)
+{
+	int port;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* In 78xx wqe entry has channel number not port*/
+		port = work->word0.pki.channel;
+		/* For BGX interfaces (0x800 - 0xdff) the 4 LSBs indicate
+		 * the PFC channel and must be cleared to normalize to "ipd"
+		 */
+		if (port & 0x800)
+			port &= 0xff0;
+		/* Node number is in AURA field, make it part of port # */
+		port |= (work->word0.pki.aura >> 10) << 12;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		port = work->word2.s_cn68xx.port;
+	} else {
+		port = work->word1.cn38xx.ipprt;
+	}
+
+	return port;
+}
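+
+/*
+ * Worked example (editorial illustration): with the CN78XX-style WQE, a
+ * packet that arrived on BGX channel 0x805 on node 1 (node encoded in the
+ * top two AURA bits) normalizes to ipd port 0x1800: the 4 PFC bits are
+ * cleared (0x805 -> 0x800) and the node number lands in bits 12 and up.
+ */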
+
+static inline void cvmx_wqe_set_port(cvmx_wqe_t *work, int port)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = port;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word2.s_cn68xx.port = port;
+	else
+		work->word1.cn38xx.ipprt = port;
+}
+
+static inline int cvmx_wqe_get_grp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		/* legacy: GRP[0..2] :=QOS */
+		grp = (0xff & work->word1.cn78xx.grp) >> 3;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_xgrp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.grp = grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.grp = grp;
+	else
+		work->word1.cn38xx.grp = grp;
+}
+
+static inline int cvmx_wqe_get_xgrp(cvmx_wqe_t *work)
+{
+	int grp;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		grp = work->word1.cn78xx.grp;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		grp = work->word1.cn68xx.grp;
+	else
+		grp = work->word1.cn38xx.grp;
+
+	return grp;
+}
+
+static inline void cvmx_wqe_set_grp(cvmx_wqe_t *work, int grp)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		unsigned int node = cvmx_get_node_num();
+		/* Legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= 0x7;
+		work->word1.cn78xx.grp |= 0xff & (grp << 3);
+		work->word1.cn78xx.grp |= (node << 8);
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.grp = grp;
+	} else {
+		work->word1.cn38xx.grp = grp;
+	}
+}
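+
+/*
+ * Note (editorial addition): in the legacy view used above, the CN78XX
+ * group field packs the node number (bits 8 and up), the legacy group
+ * (bits 3..7) and QOS (bits 0..2), so cvmx_wqe_set_grp() preserves the
+ * QOS bits while cvmx_wqe_set_qos() preserves the group bits.
+ */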
+
+static inline int cvmx_wqe_get_qos(cvmx_wqe_t *work)
+{
+	int qos;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* Legacy: GRP[0..2] :=QOS */
+		qos = work->word1.cn78xx.grp & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		qos = work->word1.cn68xx.qos;
+	} else {
+		qos = work->word1.cn38xx.qos;
+	}
+
+	return qos;
+}
+
+static inline void cvmx_wqe_set_qos(cvmx_wqe_t *work, int qos)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		/* legacy: GRP[0..2] :=QOS */
+		work->word1.cn78xx.grp &= ~0x7;
+		work->word1.cn78xx.grp |= qos & 0x7;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.qos = qos;
+	} else {
+		work->word1.cn38xx.qos = qos;
+	}
+}
+
+static inline int cvmx_wqe_get_len(cvmx_wqe_t *work)
+{
+	int len;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		len = work->word1.cn78xx.len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		len = work->word1.cn68xx.len;
+	else
+		len = work->word1.cn38xx.len;
+
+	return len;
+}
+
+static inline void cvmx_wqe_set_len(cvmx_wqe_t *work, int len)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word1.cn78xx.len = len;
+	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
+		work->word1.cn68xx.len = len;
+	else
+		work->word1.cn38xx.len = len;
+}
+
+/**
+ * This function returns whether L1/L2 errors were detected in the packet.
+ *
+ * @param work	pointer to work queue entry
+ *
+ * @return	0 if the packet had no error, non-zero to indicate the error code.
+ *
+ * Please refer to the HRM for the specific model for a full enumeration of
+ * error codes.
+ * With Octeon1/Octeon2 models, the returned code indicates L1/L2 errors.
+ * On CN73XX/CN78XX, the return code is the value of PKI_OPCODE_E,
+ * if it is non-zero, otherwise the returned code will be derived from
+ * PKI_ERRLEV_E such that an error indicated in LayerA will return 0x20,
+ * LayerB - 0x30, LayerC - 0x40 and so forth.
+ */
+static inline int cvmx_wqe_get_rcv_err(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_RE || wqe->word2.err_code != 0)
+			return wqe->word2.err_code;
+		else
+			return (wqe->word2.err_level << 4) + 0x10;
+	} else if (work->word2.snoip.rcv_error) {
+		return work->word2.snoip.err_code;
+	}
+
+	return 0;
+}
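+
+/*
+ * Worked example (editorial illustration): on CN78XX, a packet flagged
+ * with PKI_ERRLEV_E = LC (0x3) and a zero OPCODE yields
+ * (0x3 << 4) + 0x10 = 0x40, matching the "LayerC - 0x40" convention
+ * described above.
+ */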
+
+static inline u32 cvmx_wqe_get_tag(cvmx_wqe_t *work)
+{
+	return work->word1.tag;
+}
+
+static inline void cvmx_wqe_set_tag(cvmx_wqe_t *work, u32 tag)
+{
+	work->word1.tag = tag;
+}
+
+static inline int cvmx_wqe_get_tt(cvmx_wqe_t *work)
+{
+	return work->word1.tag_type;
+}
+
+static inline void cvmx_wqe_set_tt(cvmx_wqe_t *work, int tt)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word1.cn78xx.tag_type = (cvmx_pow_tag_type_t)tt;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word1.cn68xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn68xx.zero_2 = 0;
+	} else {
+		work->word1.cn38xx.tag_type = (cvmx_pow_tag_type_t)tt;
+		work->word1.cn38xx.zero_2 = 0;
+	}
+}
+
+static inline u8 cvmx_wqe_get_unused8(cvmx_wqe_t *work)
+{
+	u8 bits;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		bits = wqe->word2.rsvd_0;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		bits = work->word0.pip.cn68xx.unused1;
+	} else {
+		bits = work->word0.pip.cn38xx.unused;
+	}
+
+	return bits;
+}
+
+static inline void cvmx_wqe_set_unused8(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.rsvd_0 = v;
+	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
+		work->word0.pip.cn68xx.unused1 = v;
+	} else {
+		work->word0.pip.cn38xx.unused = v;
+	}
+}
+
+static inline u8 cvmx_wqe_get_user_flags(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return work->word0.pki.rsvd_2;
+	else
+		return 0;
+}
+
+static inline void cvmx_wqe_set_user_flags(cvmx_wqe_t *work, u8 v)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.rsvd_2 = v;
+}
+
+static inline int cvmx_wqe_get_channel(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.channel);
+	else
+		return cvmx_wqe_get_port(work);
+}
+
+static inline void cvmx_wqe_set_channel(cvmx_wqe_t *work, int channel)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.channel = channel;
+	else
+		debug("%s: ERROR: not supported for model\n", __func__);
+}
+
+static inline int cvmx_wqe_get_aura(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.aura);
+	else
+		return (work->packet_ptr.s.pool);
+}
+
+static inline void cvmx_wqe_set_aura(cvmx_wqe_t *work, int aura)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.aura = aura;
+	else
+		work->packet_ptr.s.pool = aura;
+}
+
+static inline int cvmx_wqe_get_style(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		return (work->word0.pki.style);
+	return 0;
+}
+
+static inline void cvmx_wqe_set_style(cvmx_wqe_t *work, int style)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
+		work->word0.pki.style = style;
+}
+
+static inline int cvmx_wqe_is_l3_ip(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match all 4 values for v4/v6 with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return !work->word2.s_cn38xx.not_IP;
+	}
+}
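+
+/*
+ * Note (editorial addition): the 0x1c mask above works because the LTYPE
+ * codes for IPv4 (0x8), IPv4 options (0x9), IPv6 (0xA) and IPv6 options
+ * (0xB) differ only in their two low bits, so masking those off matches
+ * all four values in a single compare against CVMX_PKI_LTYPE_E_IP4.
+ */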
+
+static inline int cvmx_wqe_is_l3_ipv4(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			!work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline int cvmx_wqe_is_l3_ipv6(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 2 values - with/without options */
+		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
+			return 1;
+		return 0;
+	} else {
+		return (!work->word2.s_cn38xx.not_IP &&
+			work->word2.s_cn38xx.is_v6);
+	}
+}
+
+static inline bool cvmx_wqe_is_l4_udp_or_tcp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_TCP)
+			return true;
+		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_UDP)
+			return true;
+		return false;
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return false;
+
+	return (work->word2.s_cn38xx.tcp_or_udp != 0);
+}
+
+static inline int cvmx_wqe_is_l2_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_bcast;
+	} else {
+		return work->word2.s_cn38xx.is_bcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l2_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l2_mcast;
+	} else {
+		return work->word2.s_cn38xx.is_mcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_bcast(cvmx_wqe_t *work, bool bcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_bcast = bcast;
+	} else {
+		work->word2.s_cn38xx.is_bcast = bcast;
+	}
+}
+
+static inline void cvmx_wqe_set_l2_mcast(cvmx_wqe_t *work, bool mcast)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_l2_mcast = mcast;
+	} else {
+		work->word2.s_cn38xx.is_mcast = mcast;
+	}
+}
+
+static inline int cvmx_wqe_is_l3_bcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_bcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+static inline int cvmx_wqe_is_l3_mcast(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.is_l3_mcast;
+	}
+	debug("%s: ERROR: not supported for model\n", __func__);
+	return 0;
+}
+
+/**
+ * This function returns whether an IP error was detected in the packet.
+ * For 78XX it does not flag IPv4 options and IPv6 extensions.
+ * For older chips, if PIP_GBL_CTL was provisioned to flag IPv4 options and
+ * IPv6 extensions, those will be flagged as well.
+ * @param work	pointer to work queue entry
+ * @return	1 -- If an IP error was found in the packet
+ *		0 -- If no IP error was found in the packet.
+ */
+static inline int cvmx_wqe_is_ip_exception(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LC)
+			return 1;
+		else
+			return 0;
+	}
+
+	return work->word2.s.IP_exc;
+}
+
+static inline int cvmx_wqe_is_l4_error(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LF)
+			return 1;
+		else
+			return 0;
+	} else {
+		return work->word2.s.L4_error;
+	}
+}
+
+static inline void cvmx_wqe_set_vlan(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.vlan_valid = set;
+	} else {
+		work->word2.s.vlan_valid = set;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_valid;
+	} else {
+		return work->word2.s.vlan_valid;
+	}
+}
+
+static inline int cvmx_wqe_is_vlan_stacked(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.vlan_stacked;
+	} else {
+		return work->word2.s.vlan_stacked;
+	}
+}
+
+/**
+ * Extract packet data buffer pointer from work queue entry.
+ *
+ * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
+ * for the linked buffer list.
+ * On CN78XX, the native buffer pointer structure is converted into
+ * the legacy format.
+ * The legacy buf_ptr is then stored in the WQE, and word0 reserved
+ * field is set to indicate that the buffer pointers were translated.
+ * If the packet data is only found inside the work queue entry,
+ * a standard buffer pointer structure is created for it.
+ */
+cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work);
+
+static inline int cvmx_wqe_get_bufs(cvmx_wqe_t *work)
+{
+	int bufs;
+
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		bufs = work->word0.pki.bufs;
+	} else {
+		/* Adjust for packet-in-WQE cases */
+		if (cvmx_unlikely(work->word2.s_cn38xx.bufs == 0 && !work->word2.s.software))
+			(void)cvmx_wqe_get_packet_ptr(work);
+		bufs = work->word2.s_cn38xx.bufs;
+	}
+	return bufs;
+}
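+
+/*
+ * Illustrative usage sketch (not part of the original header): a caller
+ * that wants to walk the linked buffer list typically pairs the two
+ * helpers above:
+ *
+ *	cvmx_buf_ptr_t ptr = cvmx_wqe_get_packet_ptr(work);
+ *	int nbufs = cvmx_wqe_get_bufs(work);
+ */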
+
+/**
+ * Free Work Queue Entry memory
+ *
+ * Will return the WQE buffer to its pool, unless the WQE contains
+ * non-redundant packet data.
+ * This function is intended to be called AFTER the packet data
+ * has been passed along to PKO for transmission and release.
+ * It can also follow a call to cvmx_helper_free_packet_data()
+ * to release the WQE after associated data was released.
+ */
+void cvmx_wqe_free(cvmx_wqe_t *work);
+
+/**
+ * Check if a work entry has been initiated by software
+ *
+ */
+static inline bool cvmx_wqe_is_soft(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return wqe->word2.software;
+	} else {
+		return work->word2.s.software;
+	}
+}
+
+/**
+ * Allocate a work-queue entry for delivering software-initiated
+ * event notifications.
+ * The application data is copied into the work-queue entry,
+ * if the space is sufficient.
+ */
+cvmx_wqe_t *cvmx_wqe_soft_create(void *data_p, unsigned int data_sz);
+
+/* Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped
+ * CN78XX pass 1.x has a bug where the packet pointer in each segment is
+ * written in the opposite endianness of the configured mode. Fix these here.
+ */
+static inline void cvmx_wqe_pki_errata_20776(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) && !wqe->pki_errata20776) {
+		u64 bufs;
+		cvmx_buf_ptr_pki_t buffer_next;
+
+		bufs = wqe->word0.bufs;
+		buffer_next = wqe->packet_ptr;
+		while (bufs > 1) {
+			cvmx_buf_ptr_pki_t next;
+			void *nextaddr = cvmx_phys_to_ptr(buffer_next.addr - 8);
+
+			memcpy(&next, nextaddr, sizeof(next));
+			next.u64 = __builtin_bswap64(next.u64);
+			memcpy(nextaddr, &next, sizeof(next));
+			buffer_next = next;
+			bufs--;
+		}
+		wqe->pki_errata20776 = 1;
+	}
+}
+
+/**
+ * @INTERNAL
+ *
+ * Extract the native PKI-specific buffer pointer from WQE.
+ *
+ * NOTE: Provisional, may be superseded.
+ */
+static inline cvmx_buf_ptr_pki_t cvmx_wqe_get_pki_pkt_ptr(cvmx_wqe_t *work)
+{
+	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_buf_ptr_pki_t x = { 0 };
+		return x;
+	}
+
+	cvmx_wqe_pki_errata_20776(work);
+	return wqe->packet_ptr;
+}
+
+/**
+ * Set the buffer segment count for a packet.
+ *
+ * @return Returns the actual resulting value in the WQE field
+ *
+ */
+static inline unsigned int cvmx_wqe_set_bufs(cvmx_wqe_t *work, unsigned int bufs)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		work->word0.pki.bufs = bufs;
+		return work->word0.pki.bufs;
+	}
+
+	work->word2.s.bufs = bufs;
+	return work->word2.s.bufs;
+}
+
+/**
+ * Get the offset of the Layer-3 header;
+ * only supported when the Layer-3 protocol is IPv4 or IPv6.
+ *
+ * @return Returns the offset, or 0 if the offset is not known or unsupported.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_get_l3_offset(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			return wqe->word4.ptr_layer_c;
+	} else {
+		return work->word2.s.ip_offset;
+	}
+
+	return 0;
+}
+
+/**
+ * Set the offset of Layer-3 header in a packet.
+ * Typically used when an IP packet is generated by software
+ * or when the Layer-2 header length is modified, and
+ * a subsequent recalculation of checksums is anticipated.
+ *
+ * @return Returns the actual value of the work entry offset field.
+ *
+ * FIXME: Assuming word4 is present.
+ */
+static inline unsigned int cvmx_wqe_set_l3_offset(cvmx_wqe_t *work, unsigned int ip_off)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+		/* Match 4 values: IPv4/v6 w/wo options */
+		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
+			wqe->word4.ptr_layer_c = ip_off;
+	} else {
+		work->word2.s.ip_offset = ip_off;
+	}
+
+	return cvmx_wqe_get_l3_offset(work);
+}
+
+/**
+ * Set the indication that the packet contains an IPv4 Layer-3 header.
+ * Use 'cvmx_wqe_set_l3_ipv6()' if the protocol is IPv6.
+ * When 'set' is false, the call will result in an indication
+ * that the Layer-3 protocol is neither IPv4 nor IPv6.
+ *
+ * FIXME: Add IPV4_OPT handling based on L3 header length.
+ */
+static inline void cvmx_wqe_set_l3_ipv4(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP4;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 0;
+	}
+}
+
+/**
+ * Set packet Layer-3 protocol to IPv6.
+ *
+ * FIXME: Add IPV6_OPT handling based on presence of extended headers.
+ */
+static inline void cvmx_wqe_set_l3_ipv6(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP6;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.s_cn38xx.not_IP = !set;
+		if (set)
+			work->word2.s_cn38xx.is_v6 = 1;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to UDP.
+ */
+static inline void cvmx_wqe_set_l4_udp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_UDP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set a packet Layer-4 protocol type to TCP.
+ */
+static inline void cvmx_wqe_set_l4_tcp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_TCP;
+		else
+			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s_cn38xx.tcp_or_udp = set;
+	}
+}
+
+/**
+ * Set the "software" flag in a work entry.
+ */
+static inline void cvmx_wqe_set_soft(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.software = set;
+	} else {
+		work->word2.s.software = set;
+	}
+}
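+
+/*
+ * Illustrative sketch (an assumption about typical use, not taken from
+ * the original sources): software injecting a hand-built UDP/IPv4 frame
+ * would mark the work entry before passing it on. The L3 offset of 14
+ * is just an example (untagged Ethernet header):
+ *
+ *	cvmx_wqe_set_soft(work, true);
+ *	cvmx_wqe_set_l3_ipv4(work, true);
+ *	cvmx_wqe_set_l3_offset(work, 14);
+ *	cvmx_wqe_set_l4_udp(work, true);
+ */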
+
+/**
+ * Return true if the packet is an IP fragment.
+ */
+static inline bool cvmx_wqe_is_l3_frag(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.is_frag != 0);
+	}
+
+	if (!work->word2.s_cn38xx.not_IP)
+		return (work->word2.s.is_frag != 0);
+
+	return false;
+}
+
+/**
+ * Set the indicator that the packet is a fragmented IP packet.
+ */
+static inline void cvmx_wqe_set_l3_frag(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		wqe->word2.is_frag = set;
+	} else {
+		if (!work->word2.s_cn38xx.not_IP)
+			work->word2.s.is_frag = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to RARP.
+ */
+static inline void cvmx_wqe_set_l3_rarp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_RARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_rarp = set;
+	}
+}
+
+/**
+ * Set the packet Layer-3 protocol to ARP.
+ */
+static inline void cvmx_wqe_set_l3_arp(cvmx_wqe_t *work, bool set)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		if (set)
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_ARP;
+		else
+			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
+	} else {
+		work->word2.snoip.is_arp = set;
+	}
+}
+
+/**
+ * Return true if the packet Layer-3 protocol is ARP.
+ */
+static inline bool cvmx_wqe_is_l3_arp(cvmx_wqe_t *work)
+{
+	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
+		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
+
+		return (wqe->word2.lc_hdr_type == CVMX_PKI_LTYPE_E_ARP);
+	}
+
+	if (work->word2.s_cn38xx.not_IP)
+		return (work->word2.snoip.is_arp != 0);
+
+	return false;
+}
+
+#endif /* __CVMX_WQE_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon_eth.h b/arch/mips/mach-octeon/include/mach/octeon_eth.h
new file mode 100644
index 000000000000..bfef0a6e9f13
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/octeon_eth.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __OCTEON_ETH_H__
+#define __OCTEON_ETH_H__
+
+#include <phy.h>
+#include <miiphy.h>
+
+#include <mach/cvmx-helper.h>
+#include <mach/cvmx-helper-board.h>
+#include <mach/octeon_fdt.h>
+
+struct eth_device;
+
+/** Ethernet device private data structure for octeon ethernet */
+struct octeon_eth_info {
+	u64 link_state;
+	u32 port;		   /** ipd port */
+	u32 interface;		   /** Port interface */
+	u32 index;		   /** port index on interface */
+	int node;		   /** OCX node number */
+	u32 initted_flag;	   /** 0 if port not initialized */
+	struct mii_dev *mii_bus;   /** MII bus for PHY */
+	struct phy_device *phydev; /** PHY device */
+	struct eth_device *ethdev; /** Eth device this priv is part of */
+	int mii_addr;
+	int phy_fdt_offset;		    /** Offset of PHY info in device tree */
+	int fdt_offset;			    /** Offset of Eth interface in DT */
+	int phy_offset;			    /** Offset of PHY dev in device tree */
+	enum cvmx_phy_type phy_device_type; /** Type of PHY */
+	/* current link status, used to reconfigure on status changes */
+	u64 packets_sent;
+	u64 packets_received;
+	u32 link_speed : 2;
+	u32 link_duplex : 1;
+	u32 link_status : 1;
+	u32 loopback : 1;
+	u32 enabled : 1;
+	u32 is_c45 : 1;		    /** Set if we need to use clause 45 */
+	u32 vitesse_sfp_config : 1; /** Need Vitesse SFP config */
+	u32 ti_gpio_config : 1;	    /** Need TI GPIO config */
+	u32 bgx_mac_set : 1;	    /** Has the BGX MAC been set already */
+	u64 last_bgx_mac;	    /** Last BGX MAC address set */
+	u64 gmx_base;		    /** Base address to access GMX CSRs */
+	bool mod_abs;		    /** True if module is absent */
+
+	/**
+	 * User defined function to check if a SFP+ module is absent or not.
+	 *
+	 * @param	dev	Ethernet device
+	 * @param	data	User supplied data
+	 */
+	int (*check_mod_abs)(struct eth_device *dev, void *data);
+
+	/** User supplied data for check_mod_abs */
+	void *mod_abs_data;
+	/**
+	 * Called to check the status of a port.  This is used for some
+	 * Vitesse and Inphi phys to probe the sFP adapter.
+	 */
+	int (*phy_port_check)(struct phy_device *dev);
+	/**
+	 * Called whenever mod_abs changes state
+	 *
+	 * @param	dev	Ethernet device
+	 * @param	mod_abs	True if module is absent
+	 *
+	 * @return	0 for success, otherwise error
+	 */
+	int (*mod_abs_changed)(struct eth_device *dev, bool mod_abs);
+	/** SDK phy information data structure */
+	cvmx_phy_info_t phy_info;
+#ifdef CONFIG_OCTEON_SFP
+	/** Information about connected SFP/SFP+/SFP28/QSFP+/QSFP28 module */
+	struct octeon_sfp_info sfp;
+#endif
+};
+
+/**
+ * Searches for an ethernet device based on interface and index.
+ *
+ * @param interface - interface number to search for
+ * @param index - index to search for
+ *
+ * @returns pointer to ethernet device or NULL if not found.
+ */
+struct eth_device *octeon_find_eth_by_interface_index(int interface, int index);
+
+/**
+ * User-defined function called when the link state changes
+ *
+ * @param[in]	dev		Ethernet device
+ * @param	link_state	new link state
+ *
+ * NOTE: This is defined as a weak function.
+ */
+void board_net_set_link(struct eth_device *dev, cvmx_helper_link_info_t link_state);
+
+/**
+ * Registers a function to be called when the link goes down.  The function is
+ * often used for things like reading the SFP+ EEPROM.
+ *
+ * @param	dev		Ethernet device
+ * @param	phy_port_check	Function to call
+ */
+void octeon_eth_register_phy_port_check(struct eth_device *dev,
+					int (*phy_port_check)(struct phy_device *dev));
+
+/**
+ * This weak function is called after the phy driver is connected but before
+ * it is initialized.
+ *
+ * @param	dev	Ethernet device for phy
+ *
+ * @return	0 to continue, or -1 for error to stop setting up the phy
+ */
+int octeon_eth_board_post_setup_phy(struct eth_device *dev);
+
+/**
+ * Registers a function to be called whenever a mod_abs change is detected.
+ *
+ * @param	dev		Ethernet device
+ * @param	mod_abs_changed	Function to be called
+ */
+void octeon_eth_register_mod_abs_changed(struct eth_device *dev,
+					 int (*mod_abs_changed)(struct eth_device *dev,
+								bool mod_abs));
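+
+/*
+ * Illustrative board-code sketch (hypothetical handler name, not part of
+ * this API): a board with an SFP+ cage can react to module insertion or
+ * removal like this:
+ *
+ *	static int board_mod_abs_changed(struct eth_device *dev, bool mod_abs)
+ *	{
+ *		if (!mod_abs) {
+ *			// module inserted, e.g. re-read the SFP+ EEPROM
+ *		}
+ *		return 0;
+ *	}
+ *
+ *	octeon_eth_register_mod_abs_changed(dev, board_mod_abs_changed);
+ */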
+
+/**
+ * Checks for changes in the link state or module state
+ *
+ * @param	dev	Ethernet device to check
+ *
+ * NOTE: If the module state is changed then the module callback is called.
+ */
+void octeon_phy_port_check(struct eth_device *dev);
+
+#endif /* __OCTEON_ETH_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon_fdt.h b/arch/mips/mach-octeon/include/mach/octeon_fdt.h
new file mode 100644
index 000000000000..31878cb233d8
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/octeon_fdt.h
@@ -0,0 +1,268 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __OCTEON_FDT_H__
+#define __OCTEON_FDT_H__
+
+struct phy_device;
+
+/** Type of GPIO pin */
+enum octeon_gpio_type {
+	GPIO_TYPE_OCTEON,  /** Native Octeon */
+	GPIO_TYPE_PCA953X, /** PCA953X i2c GPIO expander */
+	GPIO_TYPE_PCA9554, /** PCA9554 i2c GPIO expander */
+	GPIO_TYPE_PCA9555, /** PCA9555 i2c GPIO expander */
+	GPIO_TYPE_PCA9698, /** PCA9698 i2c GPIO expander */
+#ifdef CONFIG_PHY_VITESSE
+	GPIO_TYPE_VSC8488, /** Vitesse VSC8488 or related PHY GPIO */
+#endif
+	GPIO_TYPE_UNKNOWN /** Unknown GPIO type */
+};
+
+/**
+ * Trims nodes from the flat device tree.
+ *
+ * @param fdt - pointer to working FDT, usually in gd->fdt_blob
+ * @param fdt_key - key to preserve.  All non-matching keys are removed
+ * @param trim_name - name of property to look for.  If NULL use
+ *		      'cavium,qlm-trim'
+ * @param rename - set to TRUE to rename interfaces.
+ * @param callback - function to call on matched nodes.
+ * @param cbarg - passed to callback.
+ *
+ * The key should look something like device #, type where device # is a
+ * number from 0-9 and type is a string describing the type.  For QLM
+ * operations this would typically contain the QLM number followed by
+ * the type in the device tree, like "0,xaui", "0,sgmii", etc.  This function
+ * will trim all items in the device tree which match the device number but
+ * have a type which does not match.  For example, if a QLM has a xaui module
+ * installed on QLM 0 and "0,xaui" is passed as a key, then all FDT nodes that
+ * have "0,xaui" will be preserved but all others, i.e. "0,sgmii" will be
+ * removed.
+ *
+ * Note that the trim_name must also match.  If trim_name is NULL then it
+ * looks for the property "cavium,qlm-trim".
+ *
+ * Also, when the trim_name is "cavium,qlm-trim" or NULL, the interfaces
+ * will also be renamed based on their register values.
+ *
+ * For example, if a PIP interface is named "interface@W" and has the property
+ * reg = <0>, then the interface will be renamed after this function to
+ * interface@0.
+ *
+ * @return 0 for success.
+ */
+int octeon_fdt_patch_rename(void *fdt, const char *fdt_key, const char *trim_name, bool rename,
+			    void (*callback)(void *fdt, int offset, void *arg), void *cbarg);
+
+/**
+ * Trims nodes from the flat device tree.
+ *
+ * @param fdt - pointer to working FDT, usually in gd->fdt_blob
+ * @param fdt_key - key to preserve.  All non-matching keys are removed
+ * @param trim_name - name of property to look for.  If NULL use
+ *		      'cavium,qlm-trim'
+ *
+ * The key should look something like device #, type where device # is a
+ * number from 0-9 and type is a string describing the type.  For QLM
+ * operations this would typically contain the QLM number followed by
+ * the type in the device tree, like "0,xaui", "0,sgmii", etc.  This function
+ * will trim all items in the device tree which match the device number but
+ * have a type which does not match.  For example, if a QLM has a xaui module
+ * installed on QLM 0 and "0,xaui" is passed as a key, then all FDT nodes that
+ * have "0,xaui" will be preserved but all others, i.e. "0,sgmii" will be
+ * removed.
+ *
+ * Note that the trim_name must also match.  If trim_name is NULL then it
+ * looks for the property "cavium,qlm-trim".
+ *
+ * Also, when the trim_name is "cavium,qlm-trim" or NULL, the interfaces
+ * will also be renamed based on their register values.
+ *
+ * For example, if a PIP interface is named "interface@W" and has the property
+ * reg = <0>, then the interface will be renamed after this function to
+ * interface@0.
+ *
+ * @return 0 for success.
+ */
+int octeon_fdt_patch(void *fdt, const char *fdt_key, const char *trim_name);
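+
+/*
+ * Illustrative call (values are examples only): after QLM 0 has been
+ * configured for SGMII, board code could prune all non-matching QLM 0
+ * nodes from a writable FDT copy with:
+ *
+ *	octeon_fdt_patch(working_fdt, "0,sgmii", NULL);
+ */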
+
+/**
+ * Fix up the MAC address in the flat device tree based on the MAC address
+ * stored in ethaddr or in the board descriptor.
+ *
+ * NOTE: This function is weak and an alias for __octeon_fixup_fdt_mac_addr.
+ */
+void octeon_fixup_fdt_mac_addr(void);
+
+/**
+ * This function fixes the clock-frequency in the flat device tree for the UART.
+ *
+ * NOTE: This function is weak and an alias for __octeon_fixup_fdt_uart.
+ */
+void octeon_fixup_fdt_uart(void);
+
+/**
+ * This function fills in the /memory portion of the flat device tree.
+ *
+ * NOTE: This function is weak and aliased to __octeon_fixup_fdt_memory.
+ */
+void octeon_fixup_fdt_memory(void);
+
+int board_fixup_fdt(void);
+
+void octeon_fixup_fdt(void);
+
+/**
+ * This is a helper function to find the offset of a PHY device given
+ * an Ethernet device.
+ *
+ * @param[in] eth - Ethernet device to search for PHY offset
+ *
+ * @returns offset of phy info in device tree or -1 if not found
+ */
+int octeon_fdt_find_phy(const struct udevice *eth);
+
+/**
+ * This helper function returns if a node contains the specified vendor name.
+ *
+ * @param[in]	fdt		pointer to device tree blob
+ * @param	nodeoffset	offset of the tree node
+ * @param[in]	vendor		name of vendor to check
+ *
+ * returns:
+ *	0, if the node has a compatible vendor string property
+ *	1, if the node does not contain the vendor string property
+ *	-FDT_ERR_NOTFOUND, if the given node has no 'compatible' property
+ *	-FDT_ERR_BADOFFSET, if nodeoffset does not refer to a BEGIN_NODE tag
+ *	-FDT_ERR_BADMAGIC,
+ *	-FDT_ERR_BADVERSION,
+ *	-FDT_ERR_BADSTATE,
+ *	-FDT_ERR_BADSTRUCTURE, standard meanings
+ */
+int octeon_fdt_compat_vendor(const void *fdt, int nodeoffset, const char *vendor);
+
+/**
+ * Given a node in the device tree get the OCTEON OCX node number
+ *
+ * @param fdt		pointer to flat device tree
+ * @param nodeoffset	node offset to get OCX node for
+ *
+ * @return the Octeon OCX node number
+ */
+int octeon_fdt_get_soc_node(const void *fdt, int nodeoffset);
+
+/**
+ * Given a FDT node, check if it is compatible with a list of devices
+ *
+ * @param[in]	fdt		Flat device tree pointer
+ * @param	node_offset	Node offset in device tree
+ * @param[in]	strlist		Array of FDT devices to check, end must be NULL
+ *
+ * @return	0 if at least one device is compatible, 1 if not compatible.
+ */
+int octeon_fdt_node_check_compatible(const void *fdt, int node_offset, const char *const *strlist);
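+
+/*
+ * Illustrative sketch (the compatible strings and helper are examples
+ * only): the list must be NULL-terminated and a return value of 0 means
+ * a match:
+ *
+ *	static const char * const phy_compat[] = {
+ *		"marvell,88e1240",
+ *		"marvell,88e1680",
+ *		NULL
+ *	};
+ *
+ *	if (!octeon_fdt_node_check_compatible(fdt, node, phy_compat))
+ *		handle_phy_node(fdt, node);
+ *
+ * handle_phy_node() is a hypothetical helper.
+ */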
+/**
+ * Given a node offset, find the i2c bus number for that node
+ *
+ * @param[in]	fdt	Pointer to flat device tree
+ * @param	node_offset	Node offset in device tree
+ *
+ * @return	i2c bus number or -1 if error
+ */
+int octeon_fdt_i2c_get_bus(const void *fdt, int node_offset);
+
+/**
+ * Given an offset into the fdt, output the i2c bus and address of the device
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	node	offset in FDT of device
+ * @param[out]	bus	i2c bus number of device
+ * @param[out]	addr	address of device on i2c bus
+ *
+ * @return	0 for success, -1 on error
+ */
+int octeon_fdt_get_i2c_bus_addr(const void *fdt, int node, int *bus, int *addr);
+
+/**
+ * Reads a GPIO pin given the node of the GPIO device in the device tree and
+ * the pin number.
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	phandle	phandle of GPIO node
+ * @param	pin	pin number to read
+ *
+ * @return	0 = pin is low, 1 = pin is high, -1 = error
+ */
+int octeon_fdt_read_gpio(const void *fdt, int phandle, int pin);
+
+/**
+ * Sets a GPIO pin given the node of the GPIO device in the device tree and
+ * the pin number.
+ *
+ * @param[in]	fdt	fdt blob pointer
+ * @param	phandle	phandle of GPIO node
+ * @param	pin	pin number to set
+ * @param	val	value to write (1 = high, 0 = low)
+ *
+ * @return	0 = success, -1 = error
+ */
+int octeon_fdt_set_gpio(const void *fdt, int phandle, int pin, int val);
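+
+/*
+ * Illustrative sketch (the property name and pin number are examples):
+ * given a GPIO phandle from some node, a reset line could be pulsed with:
+ *
+ *	int phandle = fdtdec_get_int(fdt, node, "reset-gpio", -1);
+ *
+ *	if (phandle > 0) {
+ *		octeon_fdt_set_gpio(fdt, phandle, 4, 0);
+ *		mdelay(10);
+ *		octeon_fdt_set_gpio(fdt, phandle, 4, 1);
+ *	}
+ */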
+
+/**
+ * Given the node of a MAC entry in the device tree, output the i2c bus,
+ * address and whether the module is absent.
+ *
+ * @param[in]	fdt		flat device tree pointer
+ * @param	mac_node	node of Ethernet port in the FDT
+ * @param[out]	bus		i2c bus address of SFP EEPROM
+ * @param[out]	addr		i2c address of SFP EEPROM
+ * @param[out]	mod_abs		Set true if module is absent, false if present
+ *
+ * @return	0 for success, -1 if there are problems with the device tree
+ */
+int octeon_fdt_get_sfp_eeprom(const void *fdt, int mac_node, int *bus, int *addr, bool *mod_abs);
+
+/**
+ * Given the node of a QSFP MAC entry in the device tree, output the i2c bus,
+ * address and whether the module is absent
+ *
+ * @param[in]	fdt		flat device tree pointer
+ * @param	mac_node	node of QSFP Ethernet port in FDT
+ * @param[out]	bus		i2c bus address of SFP EEPROM
+ * @param[out]	addr		i2c address of SFP eeprom
+ * @param[out]	mod_abs		Set true if module is absent, false if present
+ *
+ * @return	0 for success, -1 if there are problems with the device tree
+ */
+int octeon_fdt_get_qsfp_eeprom(const void *fdt, int mac_node, int *bus, int *addr, bool *mod_abs);
+
+/**
+ * Given the node of a GPIO entry output the GPIO type, i2c bus and i2c
+ * address.
+ *
+ * @param	fdt_node	node of GPIO in device tree, generally
+ *				derived from a phandle.
+ * @param[out]	type		Type of GPIO detected
+ * @param[out]	i2c_bus		For i2c GPIO expanders, the i2c bus number
+ * @param[out]	i2c_addr	For i2c GPIO expanders, the i2c address
+ *
+ * @return	0 for success, -1 for errors
+ *
+ * NOTE: It is up to the caller to determine the pin number.
+ */
+int octeon_fdt_get_gpio_info(int fdt_node, enum octeon_gpio_type *type, int *i2c_bus,
+			     int *i2c_addr);
+
+/**
+ * Get the PHY data structure for the specified FDT node and output the type
+ *
+ * @param	fdt_node	FDT node of phy
+ * @param[out]	type		Type of GPIO
+ *
+ * @return	pointer to phy device or NULL if no match found.
+ */
+struct phy_device *octeon_fdt_get_phy_gpio_info(int fdt_node, enum octeon_gpio_type *type);
+#endif /* __OCTEON_FDT_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon_pci.h b/arch/mips/mach-octeon/include/mach/octeon_pci.h
new file mode 100644
index 000000000000..3034f23dc658
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/octeon_pci.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __OCTEON_PCI_H__
+#define __OCTEON_PCI_H__
+
+/**
+ * EEPROM entry struct
+ */
+union octeon_pcie_eeprom {
+	u64 u64;
+	struct octeon_data_s {
+		/**
+		 * 0x9DA1 valid entry, 0x6A5D end of table, 0xffff invalid
+		 * access
+		 */
+		u64 preamble : 16;
+		u64 : 1; /** Reserved */
+		/** Physical function number accessed by the write operation. */
+		u64 pf : 2;
+		/**
+		 * Specifies bit<31> of the address written by hardware.
+		 * 1 = configuration mask register, 0 = configuration register
+		 */
+		u64 cs2 : 1;
+		/**
+		 * Specifies bits<11:0> of the address written by hardware.
+		 * Bits<30:12> of this address are all 0s.
+		 */
+		u64 address : 12;
+		u64 data : 32;
+	} s;
+};
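+
+/*
+ * Illustrative sketch (read_next_entry() and apply_override() are
+ * hypothetical helpers; the preamble constants come from the field
+ * description above): a consumer of this table would typically loop
+ * until the end-of-table marker 0x6A5D:
+ *
+ *	union octeon_pcie_eeprom e;
+ *
+ *	e.u64 = read_next_entry();
+ *	while (e.s.preamble == 0x9DA1) {
+ *		apply_override(e.s.pf, e.s.address, e.s.data);
+ *		e.u64 = read_next_entry();
+ *	}
+ */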
+
+void pci_dev_post_init(void);
+
+int octeon_pci_io_readb(unsigned int reg);
+void octeon_pci_io_writeb(int value, unsigned int reg);
+int octeon_pci_io_readw(unsigned int reg);
+void octeon_pci_io_writew(int value, unsigned int reg);
+int octeon_pci_io_readl(unsigned int reg);
+void octeon_pci_io_writel(int value, unsigned int reg);
+int octeon_pci_mem1_readb(unsigned int reg);
+void octeon_pci_mem1_writeb(int value, unsigned int reg);
+int octeon_pci_mem1_readw(unsigned int reg);
+void octeon_pci_mem1_writew(int value, unsigned int reg);
+int octeon_pci_mem1_readl(unsigned int reg);
+void octeon_pci_mem1_writel(int value, unsigned int reg);
+
+/*
+ * In the TLB mapped case, these also work with virtual addresses,
+ * and do the required virt<->phys translations as well.
+ */
+u32 octeon_pci_phys_to_bus(u32 phys);
+u32 octeon_pci_bus_to_phys(u32 bus);
+
+/**
+ * Searches PCIe EEPROM for override data specified by address and pf.
+ *
+ * @param	address - PCIe config space address
+ * @param	pf	- PCIe config space pf num
+ * @param[out]	id	- override device and vendor ID
+ *
+ * @return	0 if override found, 1 if not found.
+ */
+int octeon_find_pcie_id_override(unsigned int address, unsigned int pf, u32 *id);
+
+#endif /* __OCTEON_PCI_H__ */
diff --git a/arch/mips/mach-octeon/include/mach/octeon_qlm.h b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
new file mode 100644
index 000000000000..219625b25688
--- /dev/null
+++ b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __OCTEON_QLM_H__
+#define __OCTEON_QLM_H__
+
+/* Reference clock selector values for ref_clk_sel */
+#define OCTEON_QLM_REF_CLK_100MHZ 0 /** 100 MHz */
+#define OCTEON_QLM_REF_CLK_125MHZ 1 /** 125 MHz */
+#define OCTEON_QLM_REF_CLK_156MHZ 2 /** 156.25 MHz */
+#define OCTEON_QLM_REF_CLK_161MHZ 3 /** 161.1328125 MHz */
+
+/**
+ * Configure qlm/dlm speed and mode.
+ * @param qlm     The QLM or DLM to configure
+ * @param speed   The speed, in MHz, that the QLM is to be configured for.
+ * @param mode    The mode the QLM is to be configured in (SGMII/XAUI/PCIe).
+ * @param rc      Only used for PCIe, rc = 1 for root complex mode, 0 for EP
+ *		  mode.
+ * @param pcie_mode Only used when qlm/dlm are in pcie mode.
+ * @param ref_clk_sel Reference clock to use for 70XX where:
+ *			0: 100MHz
+ *			1: 125MHz
+ *			2: 156.25MHz
+ *			3: 161.1328125MHz (CN73XX and CN78XX only)
+ * @param ref_clk_input	This selects which reference clock input to use.  For
+ *			cn70xx:
+ *				0: DLMC_REF_CLK0
+ *				1: DLMC_REF_CLK1
+ *				2: DLM0_REF_CLK
+ *			cn61xx: (not used)
+ *			cn78xx/cn76xx/cn73xx:
+ *				0: Internal clock (QLM[0-7]_REF_CLK)
+ *				1: QLMC_REF_CLK0
+ *				2: QLMC_REF_CLK1
+ *
+ * @return	Return 0 on success or -1.
+ *
+ * @note	The 161MHz reference clock can only be used for XLAUI mode
+ *		with a speed of 6316 or XFI mode with a speed of 103125.
+ *		This rate is also only supported on CN73XX and CN78XX.
+ */
+int octeon_configure_qlm(int qlm, int speed, int mode, int rc, int pcie_mode, int ref_clk_sel,
+			 int ref_clk_input);
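+
+/*
+ * Illustrative call (the mode value is symbolic; the real mode constants
+ * live in the QLM mode enumeration and are not repeated here): bringing
+ * up QLM 2 as a 5000 MHz PCIe root complex from the 100 MHz reference
+ * could look like:
+ *
+ *	octeon_configure_qlm(2, 5000, pcie_rc_mode, 1, 1,
+ *			     OCTEON_QLM_REF_CLK_100MHZ, 0);
+ */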
+
+int octeon_configure_qlm_cn78xx(int node, int qlm, int speed, int mode, int rc, int pcie_mode,
+				int ref_clk_sel, int ref_clk_input);
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param lane     Lane to apply the tuning parameters to
+ * @param tx_swing Voltage swing.  The higher the value, the lower the
+ *		   voltage; the default value is 7.
+ * @param tx_pre   pre-cursor pre-emphasis
+ * @param tx_post  post-cursor pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ */
+void octeon_qlm_tune_per_lane_v3(int node, int qlm, int baud_mhz, int lane, int tx_swing,
+				 int tx_pre, int tx_post, int tx_gain, int tx_vboost);
+
+/**
+ * Some QLM speeds need to override the default tuning parameters
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param baud_mhz Desired speed in MHz
+ * @param tx_swing Voltage swing.  The higher the value, the lower the
+ *		   voltage; the default value is 7.
+ * @param tx_premptap bits [0:3] pre-cursor pre-emphasis, bits [4:8] post-cursor
+ *		      pre-emphasis.
+ * @param tx_gain   Transmit gain. Range 0-7
+ * @param tx_vboost Transmit voltage boost. Range 0-1
+ */
+void octeon_qlm_tune_v3(int node, int qlm, int baud_mhz, int tx_swing, int tx_premptap, int tx_gain,
+			int tx_vboost);
+
+/**
+ * Disables DFE for the specified QLM lane(s).
+ * This function should only be called for low-loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param lane     Lane to configure, or -1 all lanes
+ * @param baud_mhz The speed, in MHz, that the QLM is to be configured for.
+ * @param mode     The mode the QLM is to be configured in (SGMII/XAUI/PCIe).
+ */
+void octeon_qlm_dfe_disable(int node, int qlm, int lane, int baud_mhz, int mode);
+
+/**
+ * Some QLMs need to override the default pre-ctle for low loss channels.
+ *
+ * @param node     Node to configure
+ * @param qlm      QLM to configure
+ * @param pre_ctle pre-ctle settings for low loss channels
+ */
+void octeon_qlm_set_channel_v3(int node, int qlm, int pre_ctle);
+
+void octeon_init_qlm(int node);
+
+int octeon_mcu_probe(int node);
+
+#endif /* __OCTEON_QLM_H__ */
-- 
2.31.1

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 33/50] mips: octeon: Add misc remaining header files
  2021-04-23 16:38   ` Daniel Schwierzeck
@ 2021-04-23 17:57     ` Stefan Roese
  0 siblings, 0 replies; 57+ messages in thread
From: Stefan Roese @ 2021-04-23 17:57 UTC (permalink / raw)
  To: u-boot

Hi Daniel,

On 23.04.21 18:38, Daniel Schwierzeck wrote:
> Hi Stefan,
> 
> Am Freitag, den 23.04.2021, 05:56 +0200 schrieb Stefan Roese:
>> From: Aaron Williams <awilliams@marvell.com>
>>
>> Import misc remaining header files from 2013 U-Boot. These will be used
>> by the later added drivers to support PCIe and networking on the MIPS
>> Octeon II / III platforms.
>>
>> Signed-off-by: Aaron Williams <awilliams@marvell.com>
>> Signed-off-by: Stefan Roese <sr@denx.de>
>> Cc: Aaron Williams <awilliams@marvell.com>
>> Cc: Chandrakala Chavva <cchavva@marvell.com>
>> Cc: Daniel Schwierzeck <daniel.schwierzeck@gmail.com>
>> ---
>> v2:
>> - Add missing mach/octeon_qlm.h file (forgot to commit it in v1)
>>
> 
> the patch didn't show up in patchwork. But when manually applying,
> there is still a build error due to missing mach/octeon_fdt.h

Ah, sorry. I still missed 2 additional headers to commit. v3
will follow very soon.

Thanks,
Stefan

>>   .../mach-octeon/include/mach/cvmx-address.h   |  209 ++
>>   .../mach-octeon/include/mach/cvmx-cmd-queue.h |  441 +++
>>   .../mach-octeon/include/mach/cvmx-csr-enums.h |   87 +
>>   arch/mips/mach-octeon/include/mach/cvmx-csr.h |   78 +
>>   .../mach-octeon/include/mach/cvmx-error.h     |  456 +++
>>   arch/mips/mach-octeon/include/mach/cvmx-fpa.h |  217 ++
>>   .../mips/mach-octeon/include/mach/cvmx-fpa1.h |  196 ++
>>   .../mips/mach-octeon/include/mach/cvmx-fpa3.h |  566 ++++
>>   .../include/mach/cvmx-global-resources.h      |  213 ++
>>   arch/mips/mach-octeon/include/mach/cvmx-gmx.h |   16 +
>>   .../mach-octeon/include/mach/cvmx-hwfau.h     |  606 ++++
>>   .../mach-octeon/include/mach/cvmx-hwpko.h     |  570 ++++
>>   arch/mips/mach-octeon/include/mach/cvmx-ilk.h |  154 +
>>   arch/mips/mach-octeon/include/mach/cvmx-ipd.h |  233 ++
>>   .../mach-octeon/include/mach/cvmx-packet.h    |   40 +
>>   .../mips/mach-octeon/include/mach/cvmx-pcie.h |  279 ++
>>   arch/mips/mach-octeon/include/mach/cvmx-pip.h | 1080 ++++++
>>   .../include/mach/cvmx-pki-resources.h         |  157 +
>>   arch/mips/mach-octeon/include/mach/cvmx-pki.h |  970 ++++++
>>   .../mach/cvmx-pko-internal-ports-range.h      |   43 +
>>   .../include/mach/cvmx-pko3-queue.h            |  175 +
>>   arch/mips/mach-octeon/include/mach/cvmx-pow.h | 2991 +++++++++++++++++
>>   arch/mips/mach-octeon/include/mach/cvmx-qlm.h |  304 ++
>>   .../mach-octeon/include/mach/cvmx-scratch.h   |  113 +
>>   arch/mips/mach-octeon/include/mach/cvmx-wqe.h | 1462 ++++++++
>>   .../mach-octeon/include/mach/octeon_qlm.h     |  109 +
>>   26 files changed, 11765 insertions(+)
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-address.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-csr.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-error.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-gmx.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ilk.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-ipd.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-packet.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pcie.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pip.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pki.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-pow.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-qlm.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-scratch.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/cvmx-wqe.h
>>   create mode 100644 arch/mips/mach-octeon/include/mach/octeon_qlm.h
>>
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-address.h b/arch/mips/mach-octeon/include/mach/cvmx-address.h
>> new file mode 100644
>> index 000000000000..984f574a75bb
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-address.h
>> @@ -0,0 +1,209 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Typedefs and defines for working with Octeon physical addresses.
>> + */
>> +
>> +#ifndef __CVMX_ADDRESS_H__
>> +#define __CVMX_ADDRESS_H__
>> +
>> +typedef enum {
>> +	CVMX_MIPS_SPACE_XKSEG = 3LL,
>> +	CVMX_MIPS_SPACE_XKPHYS = 2LL,
>> +	CVMX_MIPS_SPACE_XSSEG = 1LL,
>> +	CVMX_MIPS_SPACE_XUSEG = 0LL
>> +} cvmx_mips_space_t;
>> +
>> +typedef enum {
>> +	CVMX_MIPS_XKSEG_SPACE_KSEG0 = 0LL,
>> +	CVMX_MIPS_XKSEG_SPACE_KSEG1 = 1LL,
>> +	CVMX_MIPS_XKSEG_SPACE_SSEG = 2LL,
>> +	CVMX_MIPS_XKSEG_SPACE_KSEG3 = 3LL
>> +} cvmx_mips_xkseg_space_t;
>> +
>> +/* decodes <14:13> of a kseg3 window address */
>> +typedef enum {
>> +	CVMX_ADD_WIN_SCR = 0L,
>> +	CVMX_ADD_WIN_DMA = 1L,
>> +	CVMX_ADD_WIN_UNUSED = 2L,
>> +	CVMX_ADD_WIN_UNUSED2 = 3L
>> +} cvmx_add_win_dec_t;
>> +
>> +/* decode within DMA space */
>> +typedef enum {
>> +	CVMX_ADD_WIN_DMA_ADD = 0L,
>> +	CVMX_ADD_WIN_DMA_SENDMEM = 1L,
>> +	/* store data must be normal DRAM memory space address in this case */
>> +	CVMX_ADD_WIN_DMA_SENDDMA = 2L,
>> +	/* see CVMX_ADD_WIN_DMA_SEND_DEC for data contents */
>> +	CVMX_ADD_WIN_DMA_SENDIO = 3L,
>> +	/* store data must be normal IO space address in this case */
>> +	CVMX_ADD_WIN_DMA_SENDSINGLE = 4L,
>> +	/* no write buffer data needed/used */
>> +} cvmx_add_win_dma_dec_t;
>> +
>> +/**
>> + *   Physical Address Decode
>> + *
>> + * Octeon-I HW never interprets this X (<39:36> reserved
>> + * for future expansion), software should set to 0.
>> + *
>> + *  - 0x0 XXX0 0000 0000 to      DRAM         Cached
>> + *  - 0x0 XXX0 0FFF FFFF
>> + *
>> + *  - 0x0 XXX0 1000 0000 to      Boot Bus     Uncached  (Converted to 0x1 00X0 1000 0000
>> + *  - 0x0 XXX0 1FFF FFFF         + EJTAG                           to 0x1 00X0 1FFF FFFF)
>> + *
>> + *  - 0x0 XXX0 2000 0000 to      DRAM         Cached
>> + *  - 0x0 XXXF FFFF FFFF
>> + *
>> + *  - 0x1 00X0 0000 0000 to      Boot Bus     Uncached
>> + *  - 0x1 00XF FFFF FFFF
>> + *
>> + *  - 0x1 01X0 0000 0000 to      Other NCB    Uncached
>> + *  - 0x1 FFXF FFFF FFFF         devices
>> + *
>> + * Decode of all Octeon addresses
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		cvmx_mips_space_t R : 2;
>> +		u64 offset : 62;
>> +	} sva;
>> +
>> +	struct {
>> +		u64 zeroes : 33;
>> +		u64 offset : 31;
>> +	} suseg;
>> +
>> +	struct {
>> +		u64 ones : 33;
>> +		cvmx_mips_xkseg_space_t sp : 2;
>> +		u64 offset : 29;
>> +	} sxkseg;
>> +
>> +	struct {
>> +		cvmx_mips_space_t R : 2;
>> +		u64 cca : 3;
>> +		u64 mbz : 10;
>> +		u64 pa : 49;
>> +	} sxkphys;
>> +
>> +	struct {
>> +		u64 mbz : 15;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 unaddr : 4;
>> +		u64 offset : 36;
>> +	} sphys;
>> +
>> +	struct {
>> +		u64 zeroes : 24;
>> +		u64 unaddr : 4;
>> +		u64 offset : 36;
>> +	} smem;
>> +
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 mbz : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 unaddr : 4;
>> +		u64 offset : 36;
>> +	} sio;
>> +
>> +	struct {
>> +		u64 ones : 49;
>> +		cvmx_add_win_dec_t csrdec : 2;
>> +		u64 addr : 13;
>> +	} sscr;
>> +
>> +	/* there should only be stores to IOBDMA space, no loads */
>> +	struct {
>> +		u64 ones : 49;
>> +		cvmx_add_win_dec_t csrdec : 2;
>> +		u64 unused2 : 3;
>> +		cvmx_add_win_dma_dec_t type : 3;
>> +		u64 addr : 7;
>> +	} sdma;
>> +
>> +	struct {
>> +		u64 didspace : 24;
>> +		u64 unused : 40;
>> +	} sfilldidspace;
>> +} cvmx_addr_t;
>> +
>> +/* These macros for used by 32 bit applications */
>> +
>> +#define CVMX_MIPS32_SPACE_KSEG0	     1l
>> +#define CVMX_ADD_SEG32(segment, add) (((s32)segment << 31) | (s32)(add))
>> +
>> +/*
>> + * Currently all IOs are performed using XKPHYS addressing. Linux
>> uses the
>> + * CvmMemCtl register to enable XKPHYS addressing to IO space from
>> user mode.
>> + * Future OSes may need to change the upper bits of IO addresses.
>> The
>> + * following define controls the upper two bits for all IO addresses
>> generated
>> + * by the simple executive library
>> + */
>> +#define CVMX_IO_SEG CVMX_MIPS_SPACE_XKPHYS
>> +
>> +/* These macros simplify the process of creating common IO addresses */
>> +#define CVMX_ADD_SEG(segment, add) ((((u64)segment) << 62) | (add))
>> +
>> +#define CVMX_ADD_IO_SEG(add) (add)
>> +
>> +#define CVMX_ADDR_DIDSPACE(did)	   (((CVMX_IO_SEG) << 22) | ((1ULL) << 8) | (did))
>> +#define CVMX_ADDR_DID(did)	   (CVMX_ADDR_DIDSPACE(did) << 40)
>> +#define CVMX_FULL_DID(did, subdid) (((did) << 3) | (subdid))
>> +
>> +/* from include/ncb_rsl_id.v */
>> +#define CVMX_OCT_DID_MIS  0ULL /* misc stuff */
>> +#define CVMX_OCT_DID_GMX0 1ULL
>> +#define CVMX_OCT_DID_GMX1 2ULL
>> +#define CVMX_OCT_DID_PCI  3ULL
>> +#define CVMX_OCT_DID_KEY  4ULL
>> +#define CVMX_OCT_DID_FPA  5ULL
>> +#define CVMX_OCT_DID_DFA  6ULL
>> +#define CVMX_OCT_DID_ZIP  7ULL
>> +#define CVMX_OCT_DID_RNG  8ULL
>> +#define CVMX_OCT_DID_IPD  9ULL
>> +#define CVMX_OCT_DID_PKT  10ULL
>> +#define CVMX_OCT_DID_TIM  11ULL
>> +#define CVMX_OCT_DID_TAG  12ULL
>> +/* the rest are not on the IO bus */
>> +#define CVMX_OCT_DID_L2C  16ULL
>> +#define CVMX_OCT_DID_LMC  17ULL
>> +#define CVMX_OCT_DID_SPX0 18ULL
>> +#define CVMX_OCT_DID_SPX1 19ULL
>> +#define CVMX_OCT_DID_PIP  20ULL
>> +#define CVMX_OCT_DID_ASX0 22ULL
>> +#define CVMX_OCT_DID_ASX1 23ULL
>> +#define CVMX_OCT_DID_IOB  30ULL
>> +
>> +#define CVMX_OCT_DID_PKT_SEND	 CVMX_FULL_DID(CVMX_OCT_DID_PKT, 2ULL)
>> +#define CVMX_OCT_DID_TAG_SWTAG	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 0ULL)
>> +#define CVMX_OCT_DID_TAG_TAG1	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 1ULL)
>> +#define CVMX_OCT_DID_TAG_TAG2	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 2ULL)
>> +#define CVMX_OCT_DID_TAG_TAG3	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 3ULL)
>> +#define CVMX_OCT_DID_TAG_NULL_RD CVMX_FULL_DID(CVMX_OCT_DID_TAG, 4ULL)
>> +#define CVMX_OCT_DID_TAG_TAG5	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 5ULL)
>> +#define CVMX_OCT_DID_TAG_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TAG, 7ULL)
>> +#define CVMX_OCT_DID_FAU_FAI	 CVMX_FULL_DID(CVMX_OCT_DID_IOB, 0ULL)
>> +#define CVMX_OCT_DID_TIM_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_TIM, 0ULL)
>> +#define CVMX_OCT_DID_KEY_RW	 CVMX_FULL_DID(CVMX_OCT_DID_KEY, 0ULL)
>> +#define CVMX_OCT_DID_PCI_6	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 6ULL)
>> +#define CVMX_OCT_DID_MIS_BOO	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 0ULL)
>> +#define CVMX_OCT_DID_PCI_RML	 CVMX_FULL_DID(CVMX_OCT_DID_PCI, 0ULL)
>> +#define CVMX_OCT_DID_IPD_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_IPD, 7ULL)
>> +#define CVMX_OCT_DID_DFA_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_DFA, 7ULL)
>> +#define CVMX_OCT_DID_MIS_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_MIS, 7ULL)
>> +#define CVMX_OCT_DID_ZIP_CSR	 CVMX_FULL_DID(CVMX_OCT_DID_ZIP, 0ULL)
>> +
>> +/* Cast to unsigned long long, mainly for use in printfs. */
>> +#define CAST_ULL(v) ((unsigned long long)(v))
>> +
>> +#define UNMAPPED_PTR(x) ((1ULL << 63) | (x))
>> +
>> +#endif /* __CVMX_ADDRESS_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
>> new file mode 100644
>> index 000000000000..ddc294348cb4
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-cmd-queue.h
>> @@ -0,0 +1,441 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Support functions for managing command queues used for
>> + * various hardware blocks.
>> + *
>> + * The common command queue infrastructure abstracts out the
>> + * software necessary for adding to Octeon's chained queue
>> + * structures. These structures are used for commands to the
>> + * PKO, ZIP, DFA, RAID, HNA, and DMA engine blocks. Although each
>> + * hardware unit takes commands and CSRs of different types,
>> + * they all use basic linked command buffers to store the
>> + * pending request. In general, users of the CVMX API don't
>> + * call cvmx-cmd-queue functions directly. Instead the hardware
>> + * unit specific wrapper should be used. The wrappers perform
>> + * unit specific validation and CSR writes to submit the
>> + * commands.
>> + *
>> + * Even though most software will never directly interact with
>> + * cvmx-cmd-queue, knowledge of its internal workings can help
>> + * in diagnosing performance problems and help with debugging.
>> + *
>> + * Command queue pointers are stored in a global named block
>> + * called "cvmx_cmd_queues". Except for the PKO queues, each
>> + * hardware queue is stored in its own cache line to reduce SMP
>> + * contention on spin locks. The PKO queues are stored such that
>> + * every 16th queue is next to each other in memory. This scheme
>> + * allows for queues being in separate cache lines when there
>> + * are low number of queues per port. With 16 queues per port,
>> + * the first queue for each port is in the same cache area. The
>> + * second queues for each port are in another area, etc. This
>> + * allows software to implement very efficient lockless PKO with
>> + * 16 queues per port using a minimum of cache lines per core.
>> + * All queues for a given core will be isolated in the same
>> + * cache area.
>> + *
>> + * In addition to the memory pointer layout, cvmx-cmd-queue
>> + * provides an optimized fair ll/sc locking mechanism for the
>> + * queues. The lock uses a "ticket / now serving" model to
>> + * maintain fair order on contended locks. In addition, it uses
>> + * predicted locking time to limit cache contention. When a core
>> + * know it must wait in line for a lock, it spins on the
>> + * internal cycle counter to completely eliminate any causes of
>> + * bus traffic.
>> + */
>> +
>> +#ifndef __CVMX_CMD_QUEUE_H__
>> +#define __CVMX_CMD_QUEUE_H__
>> +
>> +/**
>> + * By default we disable the max depth support. Most programs
>> + * don't use it and it slows down the command queue processing
>> + * significantly.
>> + */
>> +#ifndef CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH
>> +#define CVMX_CMD_QUEUE_ENABLE_MAX_DEPTH 0
>> +#endif
>> +
>> +/**
>> + * Enumeration representing all hardware blocks that use command
>> + * queues. Each hardware block has up to 65536 sub identifiers for
>> + * multiple command queues. Not all chips support all hardware
>> + * units.
>> + */
>> +typedef enum {
>> +	CVMX_CMD_QUEUE_PKO_BASE = 0x00000,
>> +#define CVMX_CMD_QUEUE_PKO(queue)                                    \
>> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_PKO_BASE + (0xffff & (queue))))
>> +	CVMX_CMD_QUEUE_ZIP = 0x10000,
>> +#define CVMX_CMD_QUEUE_ZIP_QUE(queue)                                \
>> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_ZIP + (0xffff & (queue))))
>> +	CVMX_CMD_QUEUE_DFA = 0x20000,
>> +	CVMX_CMD_QUEUE_RAID = 0x30000,
>> +	CVMX_CMD_QUEUE_DMA_BASE = 0x40000,
>> +#define CVMX_CMD_QUEUE_DMA(queue)                                    \
>> +	((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_DMA_BASE + (0xffff & (queue))))
>> +	CVMX_CMD_QUEUE_BCH = 0x50000,
>> +#define CVMX_CMD_QUEUE_BCH(queue) ((cvmx_cmd_queue_id_t)(CVMX_CMD_QUEUE_BCH + (0xffff & (queue))))
>> +	CVMX_CMD_QUEUE_HNA = 0x60000,
>> +	CVMX_CMD_QUEUE_END = 0x70000,
>> +} cvmx_cmd_queue_id_t;
>> +
>> +#define CVMX_CMD_QUEUE_ZIP3_QUE(node, queue)                         \
>> +	((cvmx_cmd_queue_id_t)((node) << 24 | CVMX_CMD_QUEUE_ZIP | (0xffff & (queue))))
>> +
>> +/**
>> + * Command write operations can fail if the command queue needs
>> + * a new buffer and the associated FPA pool is empty. It can also
>> + * fail if the number of queued command words reaches the maximum
>> + * set at initialization.
>> + */
>> +typedef enum {
>> +	CVMX_CMD_QUEUE_SUCCESS = 0,
>> +	CVMX_CMD_QUEUE_NO_MEMORY = -1,
>> +	CVMX_CMD_QUEUE_FULL = -2,
>> +	CVMX_CMD_QUEUE_INVALID_PARAM = -3,
>> +	CVMX_CMD_QUEUE_ALREADY_SETUP = -4,
>> +} cvmx_cmd_queue_result_t;
>> +
>> +typedef struct {
>> +	/* First 64-bit word: */
>> +	u64 fpa_pool : 16;
>> +	u64 base_paddr : 48;
>> +	s32 index;
>> +	u16 max_depth;
>> +	u16 pool_size_m1;
>> +} __cvmx_cmd_queue_state_t;
>> +
>> +/**
>> + * command-queue locking uses a fair ticket spinlock algo,
>> + * with 64-bit tickets for endianness-neutrality and
>> + * counter overflow protection.
>> + * Lock is free when both counters are of equal value.
>> + */
>> +typedef struct {
>> +	u64 ticket;
>> +	u64 now_serving;
>> +} __cvmx_cmd_queue_lock_t;
>> +
>> +/**
>> + * @INTERNAL
>> + * This structure contains the global state of all command queues.
>> + * It is stored in a bootmem named block and shared by all
>> + * applications running on Octeon. Tickets are stored in a different
>> + * cache line that queue information to reduce the contention on the
>> + * ll/sc used to get a ticket. If this is not the case, the update
>> + * of queue state causes the ll/sc to fail quite often.
>> + */
>> +typedef struct {
>> +	__cvmx_cmd_queue_lock_t lock[(CVMX_CMD_QUEUE_END >> 16) * 256];
>> +	__cvmx_cmd_queue_state_t state[(CVMX_CMD_QUEUE_END >> 16) * 256];
>> +} __cvmx_cmd_queue_all_state_t;
>> +
>> +extern __cvmx_cmd_queue_all_state_t *__cvmx_cmd_queue_state_ptrs[CVMX_MAX_NODES];
>> +
>> +/**
>> + * @INTERNAL
>> + * Internal function to handle the corner cases
>> + * of adding command words to a queue when the current
>> + * block is getting full.
>> + */
>> +cvmx_cmd_queue_result_t __cvmx_cmd_queue_write_raw(cvmx_cmd_queue_id_t queue_id,
>> +						    __cvmx_cmd_queue_state_t *qptr, int cmd_count,
>> +						    const u64 *cmds);
>> +
>> +/**
>> + * Initialize a command queue for use. The initial FPA buffer is
>> + * allocated and the hardware unit is configured to point to the
>> + * new command queue.
>> + *
>> + * @param queue_id  Hardware command queue to initialize.
>> + * @param max_depth Maximum outstanding commands that can be queued.
>> + * @param fpa_pool  FPA pool the command queues should come from.
>> + * @param pool_size Size of each buffer in the FPA pool (bytes)
>> + *
>> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
>> + */
>> +cvmx_cmd_queue_result_t cvmx_cmd_queue_initialize(cvmx_cmd_queue_id_t queue_id, int max_depth,
>> +						  int fpa_pool, int pool_size);
>> +
>> +/**
>> + * Shutdown a queue a free it's command buffers to the FPA. The
>> + * hardware connected to the queue must be stopped before this
>> + * function is called.
>> + *
>> + * @param queue_id Queue to shutdown
>> + *
>> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
>> + */
>> +cvmx_cmd_queue_result_t cvmx_cmd_queue_shutdown(cvmx_cmd_queue_id_t queue_id);
>> +
>> +/**
>> + * Return the number of command words pending in the queue. This
>> + * function may be relatively slow for some hardware units.
>> + *
>> + * @param queue_id Hardware command queue to query
>> + *
>> + * @return Number of outstanding commands
>> + */
>> +int cvmx_cmd_queue_length(cvmx_cmd_queue_id_t queue_id);
>> +
>> +/**
>> + * Return the command buffer to be written to. The purpose of this
>> + * function is to allow CVMX routine access to the low level buffer
>> + * for initial hardware setup. User applications should not call this
>> + * function directly.
>> + *
>> + * @param queue_id Command queue to query
>> + *
>> + * @return Command buffer or NULL on failure
>> + */
>> +void *cvmx_cmd_queue_buffer(cvmx_cmd_queue_id_t queue_id);
>> +
>> +/**
>> + * @INTERNAL
>> + * Retrieve or allocate command queue state named block
>> + */
>> +cvmx_cmd_queue_result_t __cvmx_cmd_queue_init_state_ptr(unsigned int node);
>> +
>> +/**
>> + * @INTERNAL
>> + * Get the index into the state arrays for the supplied queue id.
>> + *
>> + * @param queue_id Queue ID to get an index for
>> + *
>> + * @return Index into the state arrays
>> + */
>> +static inline unsigned int __cvmx_cmd_queue_get_index(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +	/* Warning: This code currently only works with devices that have 256
>> +	 * queues or less.  Devices with more than 16 queues are laid out in
>> +	 * memory to allow cores quick access to every 16th queue. This reduces
>> +	 * cache thrashing when you are running 16 queues per port to support
>> +	 * lockless operation
>> +	 */
>> +	unsigned int unit = (queue_id >> 16) & 0xff;
>> +	unsigned int q = (queue_id >> 4) & 0xf;
>> +	unsigned int core = queue_id & 0xf;
>> +
>> +	return (unit << 8) | (core << 4) | q;
>> +}
>> +
>> +static inline int __cvmx_cmd_queue_get_node(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +	unsigned int node = queue_id >> 24;
>> +	return node;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Lock the supplied queue so nobody else is updating it at the same
>> + * time as us.
>> + *
>> + * @param queue_id Queue ID to lock
>> + *
>> + */
>> +static inline void __cvmx_cmd_queue_lock(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Unlock the queue, flushing all writes.
>> + *
>> + * @param queue_id Queue ID to lock
>> + *
>> + */
>> +static inline void __cvmx_cmd_queue_unlock(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +	CVMX_SYNCWS; /* nudge out the unlock. */
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Initialize a command-queue lock to "unlocked" state.
>> + */
>> +static inline void __cvmx_cmd_queue_lock_init(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +	unsigned int index = __cvmx_cmd_queue_get_index(queue_id);
>> +	unsigned int node = __cvmx_cmd_queue_get_node(queue_id);
>> +
>> +	__cvmx_cmd_queue_state_ptrs[node]->lock[index] = (__cvmx_cmd_queue_lock_t){ 0, 0 };
>> +	CVMX_SYNCWS;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Get the queue state structure for the given queue id
>> + *
>> + * @param queue_id Queue id to get
>> + *
>> + * @return Queue structure or NULL on failure
>> + */
>> +static inline __cvmx_cmd_queue_state_t *__cvmx_cmd_queue_get_state(cvmx_cmd_queue_id_t queue_id)
>> +{
>> +	unsigned int index;
>> +	unsigned int node;
>> +	__cvmx_cmd_queue_state_t *qptr;
>> +
>> +	node = __cvmx_cmd_queue_get_node(queue_id);
>> +	index = __cvmx_cmd_queue_get_index(queue_id);
>> +
>> +	if (cvmx_unlikely(!__cvmx_cmd_queue_state_ptrs[node]))
>> +		__cvmx_cmd_queue_init_state_ptr(node);
>> +
>> +	qptr = &__cvmx_cmd_queue_state_ptrs[node]->state[index];
>> +	return qptr;
>> +}
>> +
>> +/**
>> + * Write an arbitrary number of command words to a command queue.
>> + * This is a generic function; the fixed number of command word
>> + * functions yield higher performance.
>> + *
>> + * @param queue_id  Hardware command queue to write to
>> + * @param use_locking
>> + *                  Use internal locking to ensure exclusive access for queue
>> + *                  updates. If you don't use this locking you must ensure
>> + *                  exclusivity some other way. Locking is strongly recommended.
>> + * @param cmd_count Number of command words to write
>> + * @param cmds      Array of commands to write
>> + *
>> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
>> + */
>> +static inline cvmx_cmd_queue_result_t
>> +cvmx_cmd_queue_write(cvmx_cmd_queue_id_t queue_id, bool use_locking, int cmd_count, const u64 *cmds)
>> +{
>> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
>> +	u64 *cmd_ptr;
>> +
>> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
>> +
>> +	/* Make sure nobody else is updating the same queue */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_lock(queue_id);
>> +
>> +	/* Most of the time there are lots of free words in the current block */
>> +	if (cvmx_unlikely((qptr->index + cmd_count) >= qptr->pool_size_m1)) {
>> +		/* The rare case when nearing end of block */
>> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, cmd_count, cmds);
>> +	} else {
>> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
>> +		/* Loop easy for the compiler to unroll for the likely case */
>> +		while (cmd_count > 0) {
>> +			cmd_ptr[qptr->index++] = *cmds++;
>> +			cmd_count--;
>> +		}
>> +	}
>> +
>> +	/* All updates are complete. Release the lock and return */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_unlock(queue_id);
>> +	else
>> +		CVMX_SYNCWS;
>> +
>> +	return ret;
>> +}
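
A minimal usage sketch for the generic writer (mine, not from the
patch; CVMX_CMD_QUEUE_PKO() is assumed to be one of the queue-id
macros defined earlier in this header):

	u64 cmds[3] = { 0x1, 0x2, 0x3 };	/* placeholder command words */
	cvmx_cmd_queue_result_t ret;

	ret = cvmx_cmd_queue_write(CVMX_CMD_QUEUE_PKO(0), true, 3, cmds);
	if (ret != CVMX_CMD_QUEUE_SUCCESS)
		return -1;
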
>> +
>> +/**
>> + * Simple function to write two command words to a command queue.
>> + *
>> + * @param queue_id Hardware command queue to write to
>> + * @param use_locking
>> + *                 Use internal locking to ensure exclusive access for queue
>> + *                 updates. If you don't use this locking you must ensure
>> + *                 exclusivity some other way. Locking is strongly recommended.
>> + * @param cmd1     Command
>> + * @param cmd2     Command
>> + *
>> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
>> + */
>> +static inline cvmx_cmd_queue_result_t
>> +cvmx_cmd_queue_write2(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2)
>> +{
>> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
>> +	u64 *cmd_ptr;
>> +
>> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
>> +
>> +	/* Make sure nobody else is updating the same queue */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_lock(queue_id);
>> +
>> +	if (cvmx_unlikely((qptr->index + 2) >= qptr->pool_size_m1)) {
>> +		/* The rare case when nearing end of block */
>> +		u64 cmds[2];
>> +
>> +		cmds[0] = cmd1;
>> +		cmds[1] = cmd2;
>> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 2, cmds);
>> +	} else {
>> +		/* Likely case to work fast */
>> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
>> +		cmd_ptr += qptr->index;
>> +		qptr->index += 2;
>> +		cmd_ptr[0] = cmd1;
>> +		cmd_ptr[1] = cmd2;
>> +	}
>> +
>> +	/* All updates are complete. Release the lock and return */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_unlock(queue_id);
>> +	else
>> +		CVMX_SYNCWS;
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>> + * Simple function to write three command words to a command queue.
>> + *
>> + * @param queue_id Hardware command queue to write to
>> + * @param use_locking
>> + *                 Use internal locking to ensure exclusive access for queue
>> + *                 updates. If you don't use this locking you must ensure
>> + *                 exclusivity some other way. Locking is strongly recommended.
>> + * @param cmd1     Command
>> + * @param cmd2     Command
>> + * @param cmd3     Command
>> + *
>> + * @return CVMX_CMD_QUEUE_SUCCESS or a failure code
>> + */
>> +static inline cvmx_cmd_queue_result_t
>> +cvmx_cmd_queue_write3(cvmx_cmd_queue_id_t queue_id, bool use_locking, u64 cmd1, u64 cmd2, u64 cmd3)
>> +{
>> +	cvmx_cmd_queue_result_t ret = CVMX_CMD_QUEUE_SUCCESS;
>> +	__cvmx_cmd_queue_state_t *qptr = __cvmx_cmd_queue_get_state(queue_id);
>> +	u64 *cmd_ptr;
>> +
>> +	/* Make sure nobody else is updating the same queue */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_lock(queue_id);
>> +
>> +	if (cvmx_unlikely((qptr->index + 3) >= qptr->pool_size_m1)) {
>> +		/* The rare case when nearing end of block */
>> +		u64 cmds[3];
>> +
>> +		cmds[0] = cmd1;
>> +		cmds[1] = cmd2;
>> +		cmds[2] = cmd3;
>> +		ret = __cvmx_cmd_queue_write_raw(queue_id, qptr, 3, cmds);
>> +	} else {
>> +		cmd_ptr = (u64 *)cvmx_phys_to_ptr((u64)qptr->base_paddr);
>> +		cmd_ptr += qptr->index;
>> +		qptr->index += 3;
>> +		cmd_ptr[0] = cmd1;
>> +		cmd_ptr[1] = cmd2;
>> +		cmd_ptr[2] = cmd3;
>> +	}
>> +
>> +	/* All updates are complete. Release the lock and return */
>> +	if (cvmx_likely(use_locking))
>> +		__cvmx_cmd_queue_unlock(queue_id);
>> +	else
>> +		CVMX_SYNCWS;
>> +
>> +	return ret;
>> +}
>> +
>> +#endif /* __CVMX_CMD_QUEUE_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
>> new file mode 100644
>> index 000000000000..a8625b4228ac
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-csr-enums.h
>> @@ -0,0 +1,87 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Definitions for enumerations used with Octeon CSRs.
>> + */
>> +
>> +#ifndef __CVMX_CSR_ENUMS_H__
>> +#define __CVMX_CSR_ENUMS_H__
>> +
>> +typedef enum {
>> +	CVMX_IPD_OPC_MODE_STT = 0LL,
>> +	CVMX_IPD_OPC_MODE_STF = 1LL,
>> +	CVMX_IPD_OPC_MODE_STF1_STT = 2LL,
>> +	CVMX_IPD_OPC_MODE_STF2_STT = 3LL
>> +} cvmx_ipd_mode_t;
>> +
>> +/**
>> + * Enumeration representing the amount of packet processing
>> + * and validation performed by the input hardware.
>> + */
>> +typedef enum {
>> +	CVMX_PIP_PORT_CFG_MODE_NONE = 0ull,
>> +	CVMX_PIP_PORT_CFG_MODE_SKIPL2 = 1ull,
>> +	CVMX_PIP_PORT_CFG_MODE_SKIPIP = 2ull
>> +} cvmx_pip_port_parse_mode_t;
>> +
>> +/**
>> + * This enumeration controls how a QoS watcher matches a packet.
>> + *
>> + * @deprecated  This enumeration was used with cvmx_pip_config_watcher,
>> + *              which has been deprecated.
>> + */
>> +typedef enum {
>> +	CVMX_PIP_QOS_WATCH_DISABLE = 0ull,
>> +	CVMX_PIP_QOS_WATCH_PROTNH = 1ull,
>> +	CVMX_PIP_QOS_WATCH_TCP = 2ull,
>> +	CVMX_PIP_QOS_WATCH_UDP = 3ull
>> +} cvmx_pip_qos_watch_types;
>> +
>> +/**
>> + * This enumeration is used in PIP tag config to control how
>> + * POW tags are generated by the hardware.
>> + */
>> +typedef enum {
>> +	CVMX_PIP_TAG_MODE_TUPLE = 0ull,
>> +	CVMX_PIP_TAG_MODE_MASK = 1ull,
>> +	CVMX_PIP_TAG_MODE_IP_OR_MASK = 2ull,
>> +	CVMX_PIP_TAG_MODE_TUPLE_XOR_MASK = 3ull
>> +} cvmx_pip_tag_mode_t;
>> +
>> +/**
>> + * Tag type definitions
>> + */
>> +typedef enum {
>> +	CVMX_POW_TAG_TYPE_ORDERED = 0L,
>> +	CVMX_POW_TAG_TYPE_ATOMIC = 1L,
>> +	CVMX_POW_TAG_TYPE_NULL = 2L,
>> +	CVMX_POW_TAG_TYPE_NULL_NULL = 3L
>> +} cvmx_pow_tag_type_t;
>> +
>> +/**
>> + * LCR bits 0 and 1 control the number of bits per character. See the
>> + * following table for encodings:
>> + *
>> + * - 00 = 5 bits (bits 0-4 sent)
>> + * - 01 = 6 bits (bits 0-5 sent)
>> + * - 10 = 7 bits (bits 0-6 sent)
>> + * - 11 = 8 bits (all bits sent)
>> + */
>> +typedef enum {
>> +	CVMX_UART_BITS5 = 0,
>> +	CVMX_UART_BITS6 = 1,
>> +	CVMX_UART_BITS7 = 2,
>> +	CVMX_UART_BITS8 = 3
>> +} cvmx_uart_bits_t;
>> +
>> +typedef enum {
>> +	CVMX_UART_IID_NONE = 1,
>> +	CVMX_UART_IID_RX_ERROR = 6,
>> +	CVMX_UART_IID_RX_DATA = 4,
>> +	CVMX_UART_IID_RX_TIMEOUT = 12,
>> +	CVMX_UART_IID_TX_EMPTY = 2,
>> +	CVMX_UART_IID_MODEM = 0,
>> +	CVMX_UART_IID_BUSY = 7
>> +} cvmx_uart_iid_t;
>> +
>> +#endif /* __CVMX_CSR_ENUMS_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-csr.h b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
>> new file mode 100644
>> index 000000000000..730d54bb9278
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-csr.h
>> @@ -0,0 +1,78 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Configuration and status register (CSR) address and type definitions
>> + * for Octeon.
>> + */
>> +
>> +#ifndef __CVMX_CSR_H__
>> +#define __CVMX_CSR_H__
>> +
>> +#include "cvmx-csr-enums.h"
>> +#include "cvmx-pip-defs.h"
>> +
>> +typedef cvmx_pip_prt_cfgx_t cvmx_pip_port_cfg_t;
>> +
>> +/* The CSRs for bootbus region zero used to be independent of the
>> +    other 1-7. As of SDK 1.7.0 these were combined. These macros
>> +    are for backwards compatibility */
>> +#define CVMX_MIO_BOOT_REG_CFG0 CVMX_MIO_BOOT_REG_CFGX(0)
>> +#define CVMX_MIO_BOOT_REG_TIM0 CVMX_MIO_BOOT_REG_TIMX(0)
>> +
>> +/* The CN3XXX and CN58XX chips used to not have an LMC number
>> +    passed to the address macros. These are here to supply backwards
>> +    compatibility with old code. Code should really use the new
>> +    addresses with bus arguments for support on other chips */
>> +#define CVMX_LMC_BIST_CTL	  CVMX_LMCX_BIST_CTL(0)
>> +#define CVMX_LMC_BIST_RESULT	  CVMX_LMCX_BIST_RESULT(0)
>> +#define CVMX_LMC_COMP_CTL	  CVMX_LMCX_COMP_CTL(0)
>> +#define CVMX_LMC_CTL		  CVMX_LMCX_CTL(0)
>> +#define CVMX_LMC_CTL1		  CVMX_LMCX_CTL1(0)
>> +#define CVMX_LMC_DCLK_CNT_HI	  CVMX_LMCX_DCLK_CNT_HI(0)
>> +#define CVMX_LMC_DCLK_CNT_LO	  CVMX_LMCX_DCLK_CNT_LO(0)
>> +#define CVMX_LMC_DCLK_CTL	  CVMX_LMCX_DCLK_CTL(0)
>> +#define CVMX_LMC_DDR2_CTL	  CVMX_LMCX_DDR2_CTL(0)
>> +#define CVMX_LMC_DELAY_CFG	  CVMX_LMCX_DELAY_CFG(0)
>> +#define CVMX_LMC_DLL_CTL	  CVMX_LMCX_DLL_CTL(0)
>> +#define CVMX_LMC_DUAL_MEMCFG	  CVMX_LMCX_DUAL_MEMCFG(0)
>> +#define CVMX_LMC_ECC_SYND	  CVMX_LMCX_ECC_SYND(0)
>> +#define CVMX_LMC_FADR		  CVMX_LMCX_FADR(0)
>> +#define CVMX_LMC_IFB_CNT_HI	  CVMX_LMCX_IFB_CNT_HI(0)
>> +#define CVMX_LMC_IFB_CNT_LO	  CVMX_LMCX_IFB_CNT_LO(0)
>> +#define CVMX_LMC_MEM_CFG0	  CVMX_LMCX_MEM_CFG0(0)
>> +#define CVMX_LMC_MEM_CFG1	  CVMX_LMCX_MEM_CFG1(0)
>> +#define CVMX_LMC_OPS_CNT_HI	  CVMX_LMCX_OPS_CNT_HI(0)
>> +#define CVMX_LMC_OPS_CNT_LO	  CVMX_LMCX_OPS_CNT_LO(0)
>> +#define CVMX_LMC_PLL_BWCTL	  CVMX_LMCX_PLL_BWCTL(0)
>> +#define CVMX_LMC_PLL_CTL	  CVMX_LMCX_PLL_CTL(0)
>> +#define CVMX_LMC_PLL_STATUS	  CVMX_LMCX_PLL_STATUS(0)
>> +#define CVMX_LMC_READ_LEVEL_CTL	  CVMX_LMCX_READ_LEVEL_CTL(0)
>> +#define CVMX_LMC_READ_LEVEL_DBG	  CVMX_LMCX_READ_LEVEL_DBG(0)
>> +#define CVMX_LMC_READ_LEVEL_RANKX CVMX_LMCX_READ_LEVEL_RANKX(0)
>> +#define CVMX_LMC_RODT_COMP_CTL	  CVMX_LMCX_RODT_COMP_CTL(0)
>> +#define CVMX_LMC_RODT_CTL	  CVMX_LMCX_RODT_CTL(0)
>> +#define CVMX_LMC_WODT_CTL	  CVMX_LMCX_WODT_CTL0(0)
>> +#define CVMX_LMC_WODT_CTL0	  CVMX_LMCX_WODT_CTL0(0)
>> +#define CVMX_LMC_WODT_CTL1	  CVMX_LMCX_WODT_CTL1(0)
>> +
>> +/* The CN3XXX and CN58XX chips used to not have a TWSI bus number
>> +    passed to the address macros. These are here to supply backwards
>> +    compatibility with old code. Code should really use the new
>> +    addresses with bus arguments for support on other chips */
>> +#define CVMX_MIO_TWS_INT	 CVMX_MIO_TWSX_INT(0)
>> +#define CVMX_MIO_TWS_SW_TWSI	 CVMX_MIO_TWSX_SW_TWSI(0)
>> +#define CVMX_MIO_TWS_SW_TWSI_EXT CVMX_MIO_TWSX_SW_TWSI_EXT(0)
>> +#define CVMX_MIO_TWS_TWSI_SW	 CVMX_MIO_TWSX_TWSI_SW(0)
>> +
>> +/* The CN3XXX and CN58XX chips used to not have an SMI/MDIO bus number
>> +    passed to the address macros. These are here to supply backwards
>> +    compatibility with old code. Code should really use the new
>> +    addresses with bus arguments for support on other chips */
>> +#define CVMX_SMI_CLK	CVMX_SMIX_CLK(0)
>> +#define CVMX_SMI_CMD	CVMX_SMIX_CMD(0)
>> +#define CVMX_SMI_EN	CVMX_SMIX_EN(0)
>> +#define CVMX_SMI_RD_DAT CVMX_SMIX_RD_DAT(0)
>> +#define CVMX_SMI_WR_DAT CVMX_SMIX_WR_DAT(0)
>> +
>> +#endif /* __CVMX_CSR_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-error.h b/arch/mips/mach-octeon/include/mach/cvmx-error.h
>> new file mode 100644
>> index 000000000000..9a13ed422484
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-error.h
>> @@ -0,0 +1,456 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the Octeon extended error status.
>> + */
>> +
>> +#ifndef __CVMX_ERROR_H__
>> +#define __CVMX_ERROR_H__
>> +
>> +/**
>> + * There are generally many error status bits associated with a
>> + * single logical group. The enumeration below is used to
>> + * communicate high level groups to the error infrastructure so
>> + * error status bits can be enabled or disabled in large groups.
>> + */
>> +typedef enum {
>> +	CVMX_ERROR_GROUP_INTERNAL,
>> +	CVMX_ERROR_GROUP_L2C,
>> +	CVMX_ERROR_GROUP_ETHERNET,
>> +	CVMX_ERROR_GROUP_MGMT_PORT,
>> +	CVMX_ERROR_GROUP_PCI,
>> +	CVMX_ERROR_GROUP_SRIO,
>> +	CVMX_ERROR_GROUP_USB,
>> +	CVMX_ERROR_GROUP_LMC,
>> +	CVMX_ERROR_GROUP_ILK,
>> +	CVMX_ERROR_GROUP_DFM,
>> +	CVMX_ERROR_GROUP_ILA,
>> +} cvmx_error_group_t;
>> +
>> +/**
>> + * Flags representing special handling for some error registers.
>> + * These flags are passed to cvmx_error_initialize() to control
>> + * the handling of bits where the same flags were passed to the
>> + * added cvmx_error_info_t.
>> + */
>> +typedef enum {
>> +	CVMX_ERROR_TYPE_NONE = 0,
>> +	CVMX_ERROR_TYPE_SBE = 1 << 0,
>> +	CVMX_ERROR_TYPE_DBE = 1 << 1,
>> +} cvmx_error_type_t;
>> +
>> +/**
>> + * When registering for interest in an error status register, the
>> + * type of the register needs to be known by cvmx-error. Most
>> + * registers are either IO64 or IO32, but some blocks contain
>> + * registers that can't be directly accessed. A good example
>> + * would be PCIe extended error state stored in config space.
>> + */
>> +typedef enum {
>> +	__CVMX_ERROR_REGISTER_NONE,
>> +	CVMX_ERROR_REGISTER_IO64,
>> +	CVMX_ERROR_REGISTER_IO32,
>> +	CVMX_ERROR_REGISTER_PCICONFIG,
>> +	CVMX_ERROR_REGISTER_SRIOMAINT,
>> +} cvmx_error_register_t;
>> +
>> +struct cvmx_error_info;
>> +/**
>> + * Error handling functions must have the following prototype.
>> + */
>> +typedef int (*cvmx_error_func_t)(const struct cvmx_error_info *info);
>> +
>> +/**
>> + * This structure is passed to all error handling functions.
>> + */
>> +typedef struct cvmx_error_info {
>> +	cvmx_error_register_t reg_type;
>> +	u64 status_addr;
>> +	u64 status_mask;
>> +	u64 enable_addr;
>> +	u64 enable_mask;
>> +	cvmx_error_type_t flags;
>> +	cvmx_error_group_t group;
>> +	int group_index;
>> +	cvmx_error_func_t func;
>> +	u64 user_info;
>> +	struct {
>> +		cvmx_error_register_t reg_type;
>> +		u64 status_addr;
>> +		u64 status_mask;
>> +	} parent;
>> +} cvmx_error_info_t;
>> +
>> +/**
>> + * Initialize the error status system. This should be called once
>> + * before any other functions are called. This function adds default
>> + * handlers for almost all error events but does not enable them. Later
>> + * calls to cvmx_error_enable() are needed.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_initialize(void);
>> +
>> +/**
>> + * Poll the error status registers and call the appropriate error
>> + * handlers. This should be called in the RSL interrupt handler
>> + * for your application or operating system.
>> + *
>> + * @return Number of error handlers called. Zero means this call
>> + *         found no errors and was spurious.
>> + */
>> +int cvmx_error_poll(void);
>> +
>> +/**
>> + * Register to be called when an error status bit is set. Most users
>> + * will not need to call this function as cvmx_error_initialize()
>> + * registers default handlers for most error conditions. This function
>> + * is normally used to add more handlers without changing the existing
>> + * handlers.
>> + *
>> + * @param new_info Information about the handler for an error register. The
>> + *                 structure passed is copied and can be destroyed after the
>> + *                 call. All members of the structure must be populated, even
>> + *                 the parent information.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_add(const cvmx_error_info_t *new_info);
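
To make the registration flow concrete, a sketch of adding one handler
(my example, not from the patch; the CSR addresses and masks are
placeholders only):

	static int my_handler(const struct cvmx_error_info *info)
	{
		printf("error bit set at CSR 0x%llx\n",
		       (unsigned long long)info->status_addr);
		return 1;	/* one error handled */
	}

	cvmx_error_info_t info = {
		.reg_type = CVMX_ERROR_REGISTER_IO64,
		.status_addr = 0x1234,			/* placeholder */
		.status_mask = 1ull << 0,
		.enable_addr = 0x1238,			/* placeholder */
		.enable_mask = 1ull << 0,
		.flags = CVMX_ERROR_TYPE_NONE,
		.group = CVMX_ERROR_GROUP_INTERNAL,
		.group_index = 0,
		.func = my_handler,
		.user_info = 0,
		.parent = { .reg_type = __CVMX_ERROR_REGISTER_NONE },
	};

	if (cvmx_error_add(&info))
		return -1;
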
>> +
>> +/**
>> + * Remove all handlers for a status register and mask. Normally
>> + * this function should not be called. Instead a new handler should be
>> + * installed to replace the existing handler. In the event that all
>> + * reporting of an error bit should be removed, then use this
>> + * function.
>> + *
>> + * @param reg_type Type of the status register to remove
>> + * @param status_addr
>> + *                 Status register to remove.
>> + * @param status_mask
>> + *                 All handlers for this status register with this mask
>> + *                 will be removed.
>> + * @param old_info If not NULL, this is filled with information about the
>> + *                 handler that was removed.
>> + *
>> + * @return Zero on success, negative on failure (not found).
>> + */
>> +int cvmx_error_remove(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
>> +		      cvmx_error_info_t *old_info);
>> +
>> +/**
>> + * Change the function and user_info for an existing error status
>> + * register. This function should be used to replace the default
>> + * handler with an application specific version as needed.
>> + *
>> + * @param reg_type Type of the status register to change
>> + * @param status_addr
>> + *                 Status register to change.
>> + * @param status_mask
>> + *                 All handlers for this status register with this mask
>> + *                 will be changed.
>> + * @param new_func New function to use to handle the error status
>> + * @param new_user_info
>> + *                 New user info parameter for the function
>> + * @param old_func If not NULL, the old function is returned. Useful for
>> + *                 restoring the old handler.
>> + * @param old_user_info
>> + *                 If not NULL, the old user info parameter.
>> + *
>> + * @return Zero on success, negative on failure
>> + */
>> +int cvmx_error_change_handler(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask,
>> +			      cvmx_error_func_t new_func, u64 new_user_info,
>> +			      cvmx_error_func_t *old_func, u64 *old_user_info);
>> +
>> +/**
>> + * Enable all error registers for a logical group. This should be
>> + * called whenever a logical group is brought online.
>> + *
>> + * @param group  Logical group to enable
>> + * @param group_index
>> + *               Index for the group as defined in the cvmx_error_group_t
>> + *               comments.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +/*
>> + * Rather than conditionalize the calls throughout the executive to not
>> + * enable interrupts in U-Boot, simply make the enable function do nothing
>> + */
>> +static inline int cvmx_error_enable_group(cvmx_error_group_t group, int group_index)
>> +{
>> +	return 0;
>> +}
>> +
>> +/**
>> + * Disable all error registers for a logical group. This should be
>> + * called whenever a logical group is brought offline. Many blocks
>> + * will report spurious errors when offline unless this function
>> + * is called.
>> + *
>> + * @param group  Logical group to disable
>> + * @param group_index
>> + *               Index for the group as defined in the cvmx_error_group_t
>> + *               comments.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +/*
>> + * Rather than conditionalize the calls throughout the executive to not
>> + * disable interrupts in U-Boot, simply make the disable function do nothing
>> + */
>> +static inline int cvmx_error_disable_group(cvmx_error_group_t group, int group_index)
>> +{
>> +	return 0;
>> +}
>> +
>> +/**
>> + * Enable all handlers for a specific status register mask.
>> + *
>> + * @param reg_type Type of the status register
>> + * @param status_addr
>> + *                 Status register address
>> + * @param status_mask
>> + *                 All handlers for this status register with this mask
>> + *                 will be enabled.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_enable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
>> +
>> +/**
>> + * Disable all handlers for a specific status register and mask.
>> + *
>> + * @param reg_type Type of the status register
>> + * @param status_addr
>> + *                 Status register address
>> + * @param status_mask
>> + *                 All handlers for this status register with this mask
>> + *                 will be disabled.
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_disable(cvmx_error_register_t reg_type, u64 status_addr, u64 status_mask);
>> +
>> +/**
>> + * @INTERNAL
>> + * Function for processing non-leaf error status registers. This function
>> + * calls all handlers for the passed register and all children linked
>> + * to it.
>> + *
>> + * @param info   Error register to check
>> + *
>> + * @return Number of error status bits found or zero if no bits were set.
>> + */
>> +int __cvmx_error_decode(const cvmx_error_info_t *info);
>> +
>> +/**
>> + * @INTERNAL
>> + * This error bit handler simply prints a message and clears the status bit.
>> + *
>> + * @param info   Error register to check
>> + *
>> + * @return
>> + */
>> +int __cvmx_error_display(const cvmx_error_info_t *info);
>> +
>> +/**
>> + * Find the handler for a specific status register and mask
>> + *
>> + * @param status_addr
>> + *                Status register address
>> + *
>> + * @return  Return the handler on success or null on failure.
>> + */
>> +cvmx_error_info_t *cvmx_error_get_index(u64 status_addr);
>> +
>> +void __cvmx_install_gmx_error_handler_for_xaui(void);
>> +
>> +/**
>> + * 78xx related
>> + */
>> +/**
>> + * Compare two INTSN values.
>> + *
>> + * @param key INTSN value to search for
>> + * @param data current entry from the searched array
>> + *
>> + * @return Negative, 0 or positive when key is respectively less than,
>> + *		equal to or greater than data.
>> + */
>> +int cvmx_error_intsn_cmp(const void *key, const void *data);
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * @param intsn   Interrupt source number to display
>> + *
>> + * @param node Node number
>> + *
>> + * @return Zero on success, -1 on error
>> + */
>> +int cvmx_error_intsn_display_v3(int node, u32 intsn);
>> +
>> +/**
>> + * Initialize the error status system for cn78xx. This should be called
>> + * once before any other functions are called. This function enables the
>> + * interrupts described in the array.
>> + *
>> + * @param node Node number
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_initialize_cn78xx(int node);
>> +
>> +/**
>> + * Enable interrupt for a specific INTSN.
>> + *
>> + * @param node Node number
>> + * @param intsn Interrupt source number
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_intsn_enable_v3(int node, u32 intsn);
>> +
>> +/**
>> + * Disable interrupt for a specific INTSN.
>> + *
>> + * @param node Node number
>> + * @param intsn Interrupt source number
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_intsn_disable_v3(int node, u32 intsn);
>> +
>> +/**
>> + * Clear interrupt for a specific INTSN.
>> + *
>> + * @param intsn Interrupt source number
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_intsn_clear_v3(int node, u32 intsn);
>> +
>> +/**
>> + * Enable interrupts for a specific CSR (all the bits/intsn in the csr).
>> + *
>> + * @param node Node number
>> + * @param csr_address CSR address
>> + *
>> + * @return Zero on success, negative on failure.
>> + */
>> +int cvmx_error_csr_enable_v3(int node, u64 csr_address);
>> +
>> +/**
>> + * Disable interrupts for a specific CSR (all the bits/intsn in the csr).
>> + *
>> + * @param node Node number
>> + * @param csr_address CSR address
>> + *
>> + * @return Zero
>> + */
>> +int cvmx_error_csr_disable_v3(int node, u64 csr_address);
>> +
>> +/**
>> + * Enable all error registers for a logical group. This should be
>> + * called whenever a logical group is brought online.
>> + *
>> + * @param group  Logical group to enable
>> + * @param xipd_port  The IPD port value
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_enable_group_v3(cvmx_error_group_t group, int xipd_port);
>> +
>> +/**
>> + * Disable all error registers for a logical group.
>> + *
>> + * @param group  Logical group to enable
>> + * @param xipd_port  The IPD port value
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_disable_group_v3(cvmx_error_group_t group, int xipd_port);
>> +
>> +/**
>> + * Enable all error registers for a specific category in a logical group.
>> + * This should be called whenever a logical group is brought online.
>> + *
>> + * @param group  Logical group to enable
>> + * @param type   Category in a logical group to enable
>> + * @param xipd_port  The IPD port value
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_enable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
>> +				    int xipd_port);
>> +
>> +/**
>> + * Disable all error registers for a specific category in a logical group.
>> + * This should be called whenever a logical group is brought offline.
>> + *
>> + * @param group  Logical group to disable
>> + * @param type   Category in a logical group to disable
>> + * @param xipd_port  The IPD port value
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_disable_group_type_v3(cvmx_error_group_t group, cvmx_error_type_t type,
>> +				     int xipd_port);
>> +
>> +/**
>> + * Clear all error registers for a logical group.
>> + *
>> + * @param group  Logical group to disable
>> + * @param xipd_port  The IPD port value
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_clear_group_v3(cvmx_error_group_t group, int xipd_port);
>> +
>> +/**
>> + * Enable all error registers for a particular category.
>> + *
>> + * @param node  CCPI node
>> + * @param type  category to enable
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_enable_type_v3(int node, cvmx_error_type_t type);
>> +
>> +/**
>> + * Disable all error registers for a particular category.
>> + *
>> + * @param node  CCPI node
>> + * @param type  category to disable
>> + *
>> + * @return Zero.
>> + */
>> +int cvmx_error_disable_type_v3(int node, cvmx_error_type_t type);
>> +
>> +void cvmx_octeon_hang(void) __attribute__((__noreturn__));
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Process L2C single and multi-bit ECC errors
>> + *
>> + */
>> +int __cvmx_cn7xxx_l2c_l2d_ecc_error_display(int node, int intsn);
>> +
>> +/**
>> + * Handle L2 cache TAG ECC errors and noway errors
>> + *
>> + * @param	node	CCPI node
>> + * @param	intsn	intsn from error array.
>> + * @param	remote	true for remote node (cn78xx only)
>> + *
>> + * @return	1 if handled, 0 if not handled
>> + */
>> +int __cvmx_cn7xxx_l2c_tag_error_display(int node, int intsn, bool remote);
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
>> new file mode 100644
>> index 000000000000..297fb3f4a28c
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa.h
>> @@ -0,0 +1,217 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Free Pool Allocator.
>> + */
>> +
>> +#ifndef __CVMX_FPA_H__
>> +#define __CVMX_FPA_H__
>> +
>> +#include "cvmx-scratch.h"
>> +#include "cvmx-fpa-defs.h"
>> +#include "cvmx-fpa1.h"
>> +#include "cvmx-fpa3.h"
>> +
>> +#define CVMX_FPA_MIN_BLOCK_SIZE 128
>> +#define CVMX_FPA_ALIGNMENT	128
>> +#define CVMX_FPA_POOL_NAME_LEN	16
>> +
>> +/* On CN78XX in backward-compatible mode, pool is mapped to AURA */
>> +#define CVMX_FPA_NUM_POOLS                                                                         \
>> +	(octeon_has_feature(OCTEON_FEATURE_FPA3) ? cvmx_fpa3_num_auras() : CVMX_FPA1_NUM_POOLS)
>> +
>> +/**
>> + * Structure to store FPA pool configuration parameters.
>> + */
>> +struct cvmx_fpa_pool_config {
>> +	s64 pool_num;
>> +	u64 buffer_size;
>> +	u64 buffer_count;
>> +};
>> +
>> +typedef struct cvmx_fpa_pool_config cvmx_fpa_pool_config_t;
>> +
>> +/**
>> + * Return the name of the pool
>> + *
>> + * @param pool_num   Pool to get the name of
>> + * @return The name
>> + */
>> +const char *cvmx_fpa_get_name(int pool_num);
>> +
>> +/**
>> + * Initialize FPA per node
>> + */
>> +int cvmx_fpa_global_init_node(int node);
>> +
>> +/**
>> + * Enable the FPA
>> + */
>> +static inline void cvmx_fpa_enable(void)
>> +{
>> +	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
>> +		cvmx_fpa1_enable();
>> +	else
>> +		cvmx_fpa_global_init_node(cvmx_get_node_num());
>> +}
>> +
>> +/**
>> + * Disable the FPA
>> + */
>> +static inline void cvmx_fpa_disable(void)
>> +{
>> +	if (!octeon_has_feature(OCTEON_FEATURE_FPA3))
>> +		cvmx_fpa1_disable();
>> +	/* FPA3 does not have a disable function */
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * @deprecated OBSOLETE
>> + *
>> + * Kept for transition assistance only
>> + */
>> +static inline void cvmx_fpa_global_initialize(void)
>> +{
>> +	cvmx_fpa_global_init_node(cvmx_get_node_num());
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Convert FPA1 style POOL into FPA3 AURA in
>> + * backward compatibility mode.
>> + */
>> +static inline cvmx_fpa3_gaura_t cvmx_fpa1_pool_to_fpa3_aura(cvmx_fpa1_pool_t pool)
>> +{
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
>> +		unsigned int node = cvmx_get_node_num();
>> +		cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(node, pool);
>> +		return aura;
>> +	}
>> +	return CVMX_FPA3_INVALID_GAURA;
>> +}
>> +
>> +/**
>> + * Get a new block from the FPA
>> + *
>> + * @param pool   Pool to get the block from
>> + * @return Pointer to the block or NULL on failure
>> + */
>> +static inline void *cvmx_fpa_alloc(u64 pool)
>> +{
>> +	/* FPA3 is handled differently */
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
>> +		return cvmx_fpa3_alloc(cvmx_fpa1_pool_to_fpa3_aura(pool));
>> +	} else
>> +		return cvmx_fpa1_alloc(pool);
>> +}
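
Usage sketch (mine; the pool number and error handling are illustrative,
and pool 0 is assumed to have been set up via cvmx_fpa_setup_pool()
below):

	void *buf = cvmx_fpa_alloc(0);

	if (buf) {
		/* ... use the buffer ... */
		cvmx_fpa_free(buf, 0, 0);	/* 0 = no cache lines to invalidate */
	}
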
>> +
>> +/**
>> + * Asynchronously get a new block from the FPA
>> + *
>> + * The result of cvmx_fpa_async_alloc() may be retrieved using
>> + * cvmx_fpa_async_alloc_finish().
>> + *
>> + * @param scr_addr Local scratch address to put response in.  This is a
>> + *		   byte address but must be 8 byte aligned.
>> + * @param pool      Pool to get the block from
>> + */
>> +static inline void cvmx_fpa_async_alloc(u64 scr_addr, u64 pool)
>> +{
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3))) {
>> +		return cvmx_fpa3_async_alloc(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
>> +	} else
>> +		return cvmx_fpa1_async_alloc(scr_addr, pool);
>> +}
>> +
>> +/**
>> + * Retrieve the result of cvmx_fpa_async_alloc
>> + *
>> + * @param scr_addr The local scratch address.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc().
>> + *
>> + * @param pool Pool the block came from.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc.
>> + *
>> + * @return Pointer to the block or NULL on failure
>> + */
>> +static inline void *cvmx_fpa_async_alloc_finish(u64 scr_addr, u64 pool)
>> +{
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
>> +		return cvmx_fpa3_async_alloc_finish(scr_addr, cvmx_fpa1_pool_to_fpa3_aura(pool));
>> +	else
>> +		return cvmx_fpa1_async_alloc_finish(scr_addr, pool);
>> +}
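
The intended async pattern, as I read it (sketch only; CVMX_SCR_SCRATCH
is assumed to be a valid 8-byte-aligned scratchpad offset from
cvmx-scratch.h):

	cvmx_fpa_async_alloc(CVMX_SCR_SCRATCH, 0);
	/* ... unrelated work overlaps the allocation latency ... */
	void *buf = cvmx_fpa_async_alloc_finish(CVMX_SCR_SCRATCH, 0);
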
>> +
>> +/**
>> + * Free a block allocated with an FPA pool.
>> + * Does NOT provide memory ordering in cases where the memory block was
>> + * modified by the core.
>> + *
>> + * @param ptr    Block to free
>> + * @param pool   Pool to put it in
>> + * @param num_cache_lines
>> + *               Cache lines to invalidate
>> + */
>> +static inline void cvmx_fpa_free_nosync(void *ptr, u64 pool, u64 num_cache_lines)
>> +{
>> +	/* FPA3 is handled differently */
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
>> +		cvmx_fpa3_free_nosync(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
>> +	else
>> +		cvmx_fpa1_free_nosync(ptr, pool, num_cache_lines);
>> +}
>> +
>> +/**
>> + * Free a block allocated with an FPA pool.  Provides required memory
>> + * ordering in cases where memory block was modified by core.
>> + *
>> + * @param ptr    Block to free
>> + * @param pool   Pool to put it in
>> + * @param num_cache_lines
>> + *               Cache lines to invalidate
>> + */
>> +static inline void cvmx_fpa_free(void *ptr, u64 pool, u64 num_cache_lines)
>> +{
>> +	if ((octeon_has_feature(OCTEON_FEATURE_FPA3)))
>> +		cvmx_fpa3_free(ptr, cvmx_fpa1_pool_to_fpa3_aura(pool), num_cache_lines);
>> +	else
>> +		cvmx_fpa1_free(ptr, pool, num_cache_lines);
>> +}
>> +
>> +/**
>> + * Setup an FPA pool to control a new block of memory.
>> + * This can only be called once per pool. Make sure proper
>> + * locking enforces this.
>> + *
>> + * @param pool       Pool to initialize
>> + * @param name       Constant character string to name this pool.
>> + *                   String is not copied.
>> + * @param buffer     Pointer to the block of memory to use. This must be
>> + *                   accessible by all processors and external hardware.
>> + * @param block_size Size for each block controlled by the FPA
>> + * @param num_blocks Number of blocks
>> + *
>> + * @return the pool number on Success,
>> + *         -1 on failure
>> + */
>> +int cvmx_fpa_setup_pool(int pool, const char *name, void *buffer, u64 block_size, u64 num_blocks);
>> +
>> +int cvmx_fpa_shutdown_pool(int pool);
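
A bring-up/teardown sketch for this pair (mine; the buffer, sizes and
name are made up, and __aligned() is assumed from U-Boot's headers):

	#define BLK_SZ	2048
	#define NUM_BLK	64
	static u8 pool_mem[BLK_SZ * NUM_BLK] __aligned(CVMX_FPA_ALIGNMENT);

	int pool = cvmx_fpa_setup_pool(0, "example-pool", pool_mem, BLK_SZ, NUM_BLK);

	if (pool < 0)
		return -1;
	/* ... cvmx_fpa_alloc(pool) / cvmx_fpa_free() ... */
	cvmx_fpa_shutdown_pool(pool);
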
>> +
>> +/**
>> + * Gets the block size of buffers in the specified pool
>> + * @param pool	 Pool to get the block size from
>> + * @return       Size of buffer in specified pool
>> + */
>> +unsigned int cvmx_fpa_get_block_size(int pool);
>> +
>> +int cvmx_fpa_is_pool_available(int pool_num);
>> +u64 cvmx_fpa_get_pool_owner(int pool_num);
>> +int cvmx_fpa_get_max_pools(void);
>> +int cvmx_fpa_get_current_count(int pool_num);
>> +int cvmx_fpa_validate_pool(int pool);
>> +
>> +#endif /* __CVMX_FPA_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
>> new file mode 100644
>> index 000000000000..6985083a5d66
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa1.h
>> @@ -0,0 +1,196 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Free Pool Allocator on Octeon chips.
>> + * These are the legacy models, i.e. prior to CN78XX/CN76XX.
>> + */
>> +
>> +#ifndef __CVMX_FPA1_HW_H__
>> +#define __CVMX_FPA1_HW_H__
>> +
>> +#include "cvmx-scratch.h"
>> +#include "cvmx-fpa-defs.h"
>> +#include "cvmx-fpa3.h"
>> +
>> +/* Legacy pool range is 0..7 and 8 on CN68XX */
>> +typedef int cvmx_fpa1_pool_t;
>> +
>> +#define CVMX_FPA1_NUM_POOLS    8
>> +#define CVMX_FPA1_INVALID_POOL ((cvmx_fpa1_pool_t)-1)
>> +#define CVMX_FPA1_NAME_SIZE    16
>> +
>> +/**
>> + * Structure describing the data format used for stores to the FPA.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 scraddr : 8;
>> +		u64 len : 8;
>> +		u64 did : 8;
>> +		u64 addr : 40;
>> +	} s;
>> +} cvmx_fpa1_iobdma_data_t;
>> +
>> +/*
>> + * Allocate or reserve the specified fpa pool.
>> + *
>> + * @param pool	  FPA pool to allocate/reserve. If -1 it
>> + *                finds an empty pool to allocate.
>> + * @return        Allocated pool number or CVMX_FPA1_INVALID_POOL
>> + *                if it fails to allocate the pool
>> + */
>> +cvmx_fpa1_pool_t cvmx_fpa1_reserve_pool(cvmx_fpa1_pool_t pool);
>> +
>> +/**
>> + * Free the specified fpa pool.
>> + * @param pool	   Pool to free
>> + * @return         0 for success -1 failure
>> + */
>> +int cvmx_fpa1_release_pool(cvmx_fpa1_pool_t pool);
>> +
>> +static inline void cvmx_fpa1_free(void *ptr, cvmx_fpa1_pool_t pool, u64 num_cache_lines)
>> +{
>> +	cvmx_addr_t newptr;
>> +
>> +	newptr.u64 = cvmx_ptr_to_phys(ptr);
>> +	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
>> +	/* Make sure that any previous writes to memory go out before we free
>> +	 * this buffer.  This also serves as a barrier to prevent GCC from
>> +	 * reordering operations to after the free.
>> +	 */
>> +	CVMX_SYNCWS;
>> +	/* value written is number of cache lines not written back */
>> +	cvmx_write_io(newptr.u64, num_cache_lines);
>> +}
>> +
>> +static inline void cvmx_fpa1_free_nosync(void *ptr, cvmx_fpa1_pool_t pool,
>> +					 unsigned int num_cache_lines)
>> +{
>> +	cvmx_addr_t newptr;
>> +
>> +	newptr.u64 = cvmx_ptr_to_phys(ptr);
>> +	newptr.sfilldidspace.didspace = CVMX_ADDR_DIDSPACE(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool));
>> +	/* Prevent GCC from reordering around free */
>> +	asm volatile("" : : : "memory");
>> +	/* value written is number of cache lines not written back */
>> +	cvmx_write_io(newptr.u64, num_cache_lines);
>> +}
>> +
>> +/**
>> + * Enable the FPA for use. Must be performed after any CSR
>> + * configuration but before any other FPA functions.
>> + */
>> +static inline void cvmx_fpa1_enable(void)
>> +{
>> +	cvmx_fpa_ctl_status_t status;
>> +
>> +	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
>> +	if (status.s.enb) {
>> +		/*
>> +		 * CN68XXP1 should not reset the FPA (doing so may break
>> +		 * the SSO), so we may end up enabling it more than once.
>> +		 * Just return and don't spew messages.
>> +		 */
>> +		return;
>> +	}
>> +
>> +	status.u64 = 0;
>> +	status.s.enb = 1;
>> +	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
>> +}
>> +
>> +/**
>> + * Reset FPA to disable. Make sure buffers from all FPA pools are freed
>> + * before disabling FPA.
>> + */
>> +static inline void cvmx_fpa1_disable(void)
>> +{
>> +	cvmx_fpa_ctl_status_t status;
>> +
>> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX_PASS1))
>> +		return;
>> +
>> +	status.u64 = csr_rd(CVMX_FPA_CTL_STATUS);
>> +	status.s.reset = 1;
>> +	csr_wr(CVMX_FPA_CTL_STATUS, status.u64);
>> +}
>> +
>> +static inline void *cvmx_fpa1_alloc(cvmx_fpa1_pool_t pool)
>> +{
>> +	u64 address;
>> +
>> +	for (;;) {
>> +		address = csr_rd(CVMX_ADDR_DID(CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool)));
>> +		if (cvmx_likely(address)) {
>> +			return cvmx_phys_to_ptr(address);
>> +		} else {
>> +			if (csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool)) > 0)
>> +				udelay(50);
>> +			else
>> +				return NULL;
>> +		}
>> +	}
>> +}
>> +
>> +/**
>> + * Asynchronously get a new block from the FPA
>> + * @INTERNAL
>> + *
>> + * The result of cvmx_fpa_async_alloc() may be retrieved using
>> + * cvmx_fpa_async_alloc_finish().
>> + *
>> + * @param scr_addr Local scratch address to put response in.  This is a
>> + *		   byte address but must be 8 byte aligned.
>> + * @param pool      Pool to get the block from
>> + */
>> +static inline void cvmx_fpa1_async_alloc(u64 scr_addr, cvmx_fpa1_pool_t pool)
>> +{
>> +	cvmx_fpa1_iobdma_data_t data;
>> +
>> +	/* Hardware only uses 64 bit aligned locations, so convert from byte
>> +	 * address to 64-bit index
>> +	 */
>> +	data.u64 = 0ull;
>> +	data.s.scraddr = scr_addr >> 3;
>> +	data.s.len = 1;
>> +	data.s.did = CVMX_FULL_DID(CVMX_OCT_DID_FPA, pool);
>> +	data.s.addr = 0;
>> +
>> +	cvmx_scratch_write64(scr_addr, 0ull);
>> +	CVMX_SYNCW;
>> +	cvmx_send_single(data.u64);
>> +}
>> +
>> +/**
>> + * Retrieve the result of cvmx_fpa_async_alloc
>> + * @INTERNAL
>> + *
>> + * @param scr_addr The local scratch address.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc().
>> + *
>> + * @param pool Pool the block came from.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc.
>> + *
>> + * @return Pointer to the block or NULL on failure
>> + */
>> +static inline void *cvmx_fpa1_async_alloc_finish(u64 scr_addr, cvmx_fpa1_pool_t pool)
>> +{
>> +	u64 address;
>> +
>> +	CVMX_SYNCIOBDMA;
>> +
>> +	address = cvmx_scratch_read64(scr_addr);
>> +	if (cvmx_likely(address))
>> +		return cvmx_phys_to_ptr(address);
>> +	else
>> +		return cvmx_fpa1_alloc(pool);
>> +}
>> +
>> +static inline u64 cvmx_fpa1_get_available(cvmx_fpa1_pool_t pool)
>> +{
>> +	return csr_rd(CVMX_FPA_QUEX_AVAILABLE(pool));
>> +}
>> +
>> +#endif /* __CVMX_FPA1_HW_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
>> new file mode 100644
>> index 000000000000..229982b83163
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-fpa3.h
>> @@ -0,0 +1,566 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the CN78XX Free Pool Allocator, a.k.a. FPA3
>> + */
>> +
>> +#include "cvmx-address.h"
>> +#include "cvmx-fpa-defs.h"
>> +#include "cvmx-scratch.h"
>> +
>> +#ifndef __CVMX_FPA3_H__
>> +#define __CVMX_FPA3_H__
>> +
>> +typedef struct {
>> +	unsigned res0 : 6;
>> +	unsigned node : 2;
>> +	unsigned res1 : 2;
>> +	unsigned lpool : 6;
>> +	unsigned valid_magic : 16;
>> +} cvmx_fpa3_pool_t;
>> +
>> +typedef struct {
>> +	unsigned res0 : 6;
>> +	unsigned node : 2;
>> +	unsigned res1 : 6;
>> +	unsigned laura : 10;
>> +	unsigned valid_magic : 16;
>> +} cvmx_fpa3_gaura_t;
>> +
>> +#define CVMX_FPA3_VALID_MAGIC	0xf9a3
>> +#define CVMX_FPA3_INVALID_GAURA ((cvmx_fpa3_gaura_t){ 0, 0, 0, 0, 0 })
>> +#define CVMX_FPA3_INVALID_POOL	((cvmx_fpa3_pool_t){ 0, 0, 0, 0, 0 })
>> +
>> +static inline bool __cvmx_fpa3_aura_valid(cvmx_fpa3_gaura_t aura)
>> +{
>> +	if (aura.valid_magic != CVMX_FPA3_VALID_MAGIC)
>> +		return false;
>> +	return true;
>> +}
>> +
>> +static inline bool __cvmx_fpa3_pool_valid(cvmx_fpa3_pool_t pool)
>> +{
>> +	if (pool.valid_magic != CVMX_FPA3_VALID_MAGIC)
>> +		return false;
>> +	return true;
>> +}
>> +
>> +static inline cvmx_fpa3_gaura_t __cvmx_fpa3_gaura(int node, int laura)
>> +{
>> +	cvmx_fpa3_gaura_t aura;
>> +
>> +	if (node < 0)
>> +		node = cvmx_get_node_num();
>> +	if (laura < 0)
>> +		return CVMX_FPA3_INVALID_GAURA;
>> +
>> +	aura.node = node;
>> +	aura.laura = laura;
>> +	aura.valid_magic = CVMX_FPA3_VALID_MAGIC;
>> +	return aura;
>> +}
>> +
>> +static inline cvmx_fpa3_pool_t __cvmx_fpa3_pool(int node, int lpool)
>> +{
>> +	cvmx_fpa3_pool_t pool;
>> +
>> +	if (node < 0)
>> +		node = cvmx_get_node_num();
>> +	if (lpool < 0)
>> +		return CVMX_FPA3_INVALID_POOL;
>> +
>> +	pool.node = node;
>> +	pool.lpool = lpool;
>> +	pool.valid_magic = CVMX_FPA3_VALID_MAGIC;
>> +	return pool;
>> +}
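
Constructing and validating a handle then looks like this (sketch; the
aura number is arbitrary):

	/* Aura 5 on the local node (-1 selects cvmx_get_node_num()) */
	cvmx_fpa3_gaura_t aura = __cvmx_fpa3_gaura(-1, 5);

	if (!__cvmx_fpa3_aura_valid(aura))
		return -1;	/* no valid magic, handle is unusable */
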
>> +
>> +#undef CVMX_FPA3_VALID_MAGIC
>> +
>> +/**
>> + * Structure describing the data format used for stores to the FPA.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 scraddr : 8;
>> +		u64 len : 8;
>> +		u64 did : 8;
>> +		u64 addr : 40;
>> +	} s;
>> +	struct {
>> +		u64 scraddr : 8;
>> +		u64 len : 8;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 red : 1;
>> +		u64 reserved2 : 9;
>> +		u64 aura : 10;
>> +		u64 reserved3 : 16;
>> +	} cn78xx;
>> +} cvmx_fpa3_iobdma_data_t;
>> +
>> +/**
>> + * Struct describing load allocate operation addresses for FPA pool.
>> + */
>> +union cvmx_fpa3_load_data {
>> +	u64 u64;
>> +	struct {
>> +		u64 seg : 2;
>> +		u64 reserved1 : 13;
>> +		u64 io : 1;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 red : 1;
>> +		u64 reserved2 : 9;
>> +		u64 aura : 10;
>> +		u64 reserved3 : 16;
>> +	};
>> +};
>> +
>> +typedef union cvmx_fpa3_load_data cvmx_fpa3_load_data_t;
>> +
>> +/**
>> + * Struct describing store free operation addresses from FPA pool.
>> + */
>> +union cvmx_fpa3_store_addr {
>> +	u64 u64;
>> +	struct {
>> +		u64 seg : 2;
>> +		u64 reserved1 : 13;
>> +		u64 io : 1;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 reserved2 : 10;
>> +		u64 aura : 10;
>> +		u64 fabs : 1;
>> +		u64 reserved3 : 3;
>> +		u64 dwb_count : 9;
>> +		u64 reserved4 : 3;
>> +	};
>> +};
>> +
>> +typedef union cvmx_fpa3_store_addr cvmx_fpa3_store_addr_t;
>> +
>> +enum cvmx_fpa3_pool_alignment_e {
>> +	FPA_NATURAL_ALIGNMENT,
>> +	FPA_OFFSET_ALIGNMENT,
>> +	FPA_OPAQUE_ALIGNMENT
>> +};
>> +
>> +#define CVMX_FPA3_AURAX_LIMIT_MAX ((1ull << 40) - 1)
>> +
>> +/**
>> + * @INTERNAL
>> + * Accessor functions to return number of POOLS in an FPA3
>> + * depending on SoC model.
>> + * The number is per-node for models supporting multi-node configurations.
>> + */
>> +static inline int cvmx_fpa3_num_pools(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 64;
>> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
>> +		return 32;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
>> +		return 32;
>> +	printf("ERROR: %s: Unknowm model\n", __func__);
>> +	return -1;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Accessor functions to return number of AURAS in an FPA3
>> + * depending on SoC model.
>> + * The number is per-node for models supporting multi-node configurations.
>> + */
>> +static inline int cvmx_fpa3_num_auras(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 1024;
>> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
>> +		return 512;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
>> +		return 512;
>> +	printf("ERROR: %s: Unknowm model\n", __func__);
>> +	return -1;
>> +}
>> +
>> +/**
>> + * Get the FPA3 POOL underneath an FPA3 AURA, containing all its buffers.
>> + *
>> + */
>> +static inline cvmx_fpa3_pool_t cvmx_fpa3_aura_to_pool(cvmx_fpa3_gaura_t aura)
>> +{
>> +	cvmx_fpa3_pool_t pool;
>> +	cvmx_fpa_aurax_pool_t aurax_pool;
>> +
>> +	aurax_pool.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_POOL(aura.laura));
>> +
>> +	pool = __cvmx_fpa3_pool(aura.node, aurax_pool.s.pool);
>> +	return pool;
>> +}
>> +
>> +/**
>> + * Get a new block from the FPA pool
>> + *
>> + * @param aura  - aura number
>> + * @return pointer to the block or NULL on failure
>> + */
>> +static inline void *cvmx_fpa3_alloc(cvmx_fpa3_gaura_t aura)
>> +{
>> +	u64 address;
>> +	cvmx_fpa3_load_data_t load_addr;
>> +
>> +	load_addr.u64 = 0;
>> +	load_addr.seg = CVMX_MIPS_SPACE_XKPHYS;
>> +	load_addr.io = 1;
>> +	load_addr.did = 0x29; /* Device ID. Indicates FPA. */
>> +	load_addr.node = aura.node;
>> +	load_addr.red = 0; /* Perform RED on allocation.
>> +				  * FIXME to use config option
>> +				  */
>> +	load_addr.aura = aura.laura;
>> +
>> +	address = cvmx_read64_uint64(load_addr.u64);
>> +	if (!address)
>> +		return NULL;
>> +	return cvmx_phys_to_ptr(address);
>> +}
>> +
>> +/**
>> + * Asynchronously get a new block from the FPA
>> + *
>> + * The result of cvmx_fpa_async_alloc() may be retrieved using
>> + * cvmx_fpa_async_alloc_finish().
>> + *
>> + * @param scr_addr Local scratch address to put response in.  This is a
>> + *		   byte address but must be 8 byte aligned.
>> + * @param aura     Global aura to get the block from
>> + */
>> +static inline void cvmx_fpa3_async_alloc(u64 scr_addr, cvmx_fpa3_gaura_t aura)
>> +{
>> +	cvmx_fpa3_iobdma_data_t data;
>> +
>> +	/* Hardware only uses 64 bit aligned locations, so convert from byte
>> +	 * address to 64-bit index
>> +	 */
>> +	data.u64 = 0ull;
>> +	data.cn78xx.scraddr = scr_addr >> 3;
>> +	data.cn78xx.len = 1;
>> +	data.cn78xx.did = 0x29;
>> +	data.cn78xx.node = aura.node;
>> +	data.cn78xx.aura = aura.laura;
>> +	cvmx_scratch_write64(scr_addr, 0ull);
>> +
>> +	CVMX_SYNCW;
>> +	cvmx_send_single(data.u64);
>> +}
>> +
>> +/**
>> + * Retrieve the result of cvmx_fpa3_async_alloc
>> + *
>> + * @param scr_addr The local scratch address.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc().
>> + *
>> + * @param aura Global aura the block came from.  Must be the same value
>> + * passed to cvmx_fpa_async_alloc.
>> + *
>> + * @return Pointer to the block or NULL on failure
>> + */
>> +static inline void *cvmx_fpa3_async_alloc_finish(u64 scr_addr, cvmx_fpa3_gaura_t aura)
>> +{
>> +	u64 address;
>> +
>> +	CVMX_SYNCIOBDMA;
>> +
>> +	address = cvmx_scratch_read64(scr_addr);
>> +	if (cvmx_likely(address))
>> +		return cvmx_phys_to_ptr(address);
>> +	else
>> +		/* Try regular alloc if async failed */
>> +		return cvmx_fpa3_alloc(aura);
>> +}
>> +
>> +/**
>> + * Free a pointer back to the pool.
>> + *
>> + * @param aura   global aura number
>> + * @param ptr    physical address of block to free.
>> + * @param num_cache_lines Cache lines to invalidate
>> + */
>> +static inline void cvmx_fpa3_free(void *ptr, cvmx_fpa3_gaura_t aura, unsigned int num_cache_lines)
>> +{
>> +	cvmx_fpa3_store_addr_t newptr;
>> +	cvmx_addr_t newdata;
>> +
>> +	newdata.u64 = cvmx_ptr_to_phys(ptr);
>> +
>> +	/* Make sure that any previous writes to memory go out before we free
>> +	   this buffer. This also serves as a barrier to prevent GCC from
>> +	   reordering operations to after the free. */
>> +	CVMX_SYNCWS;
>> +
>> +	newptr.u64 = 0;
>> +	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
>> +	newptr.io = 1;
>> +	newptr.did = 0x29; /* Device id, indicates FPA */
>> +	newptr.node = aura.node;
>> +	newptr.aura = aura.laura;
>> +	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
>> +	newptr.dwb_count = num_cache_lines;
>> +
>> +	cvmx_write_io(newptr.u64, newdata.u64);
>> +}
>> +
>> +/**
>> + * Free a pointer back to the pool without flushing the write buffer.
>> + *
>> + * @param aura   global aura number
>> + * @param ptr    physical address of block to free.
>> + * @param num_cache_lines Cache lines to invalidate
>> + */
>> +static inline void cvmx_fpa3_free_nosync(void *ptr, cvmx_fpa3_gaura_t aura,
>> +					 unsigned int num_cache_lines)
>> +{
>> +	cvmx_fpa3_store_addr_t newptr;
>> +	cvmx_addr_t newdata;
>> +
>> +	newdata.u64 = cvmx_ptr_to_phys(ptr);
>> +
>> +	/* Prevent GCC from reordering writes to (*ptr) */
>> +	asm volatile("" : : : "memory");
>> +
>> +	newptr.u64 = 0;
>> +	newptr.seg = CVMX_MIPS_SPACE_XKPHYS;
>> +	newptr.io = 1;
>> +	newptr.did = 0x29; /* Device id, indicates FPA */
>> +	newptr.node = aura.node;
>> +	newptr.aura = aura.laura;
>> +	newptr.fabs = 0; /* Free absolute. FIXME to use config option */
>> +	newptr.dwb_count = num_cache_lines;
>> +
>> +	cvmx_write_io(newptr.u64, newdata.u64);
>> +}
>> +
>> +static inline int cvmx_fpa3_pool_is_enabled(cvmx_fpa3_pool_t pool)
>> +{
>> +	cvmx_fpa_poolx_cfg_t pool_cfg;
>> +
>> +	if (!__cvmx_fpa3_pool_valid(pool))
>> +		return -1;
>> +
>> +	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
>> +	return pool_cfg.cn78xx.ena;
>> +}
>> +
>> +static inline int cvmx_fpa3_config_red_params(unsigned int node, int qos_avg_en, int red_lvl_dly,
>> +					      int avg_dly)
>> +{
>> +	cvmx_fpa_gen_cfg_t fpa_cfg;
>> +	cvmx_fpa_red_delay_t red_delay;
>> +
>> +	fpa_cfg.u64 = cvmx_read_csr_node(node, CVMX_FPA_GEN_CFG);
>> +	fpa_cfg.s.avg_en = qos_avg_en;
>> +	fpa_cfg.s.lvl_dly = red_lvl_dly;
>> +	cvmx_write_csr_node(node, CVMX_FPA_GEN_CFG, fpa_cfg.u64);
>> +
>> +	red_delay.u64 = cvmx_read_csr_node(node, CVMX_FPA_RED_DELAY);
>> +	red_delay.s.avg_dly = avg_dly;
>> +	cvmx_write_csr_node(node, CVMX_FPA_RED_DELAY, red_delay.u64);
>> +	return 0;
>> +}
>> +
>> +/**
>> + * Gets the buffer size of the specified pool.
>> + *
>> + * @param aura Global aura number
>> + * @return Returns size of the buffers in the specified pool.
>> + */
>> +static inline int cvmx_fpa3_get_aura_buf_size(cvmx_fpa3_gaura_t aura)
>> +{
>> +	cvmx_fpa3_pool_t pool;
>> +	cvmx_fpa_poolx_cfg_t pool_cfg;
>> +	int block_size;
>> +
>> +	pool = cvmx_fpa3_aura_to_pool(aura);
>> +
>> +	pool_cfg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_CFG(pool.lpool));
>> +	block_size = pool_cfg.cn78xx.buf_size << 7;
>> +	return block_size;
>> +}
>> +
>> +/**
>> + * Return the number of available buffers in an AURA
>> + *
>> + * @param aura to receive count for
>> + * @return available buffer count
>> + */
>> +static inline long long cvmx_fpa3_get_available(cvmx_fpa3_gaura_t aura)
>> +{
>> +	cvmx_fpa3_pool_t pool;
>> +	cvmx_fpa_poolx_available_t avail_reg;
>> +	cvmx_fpa_aurax_cnt_t cnt_reg;
>> +	cvmx_fpa_aurax_cnt_limit_t limit_reg;
>> +	long long ret;
>> +
>> +	pool = cvmx_fpa3_aura_to_pool(aura);
>> +
>> +	/* Get POOL available buffer count */
>> +	avail_reg.u64 = cvmx_read_csr_node(pool.node, CVMX_FPA_POOLX_AVAILABLE(pool.lpool));
>> +
>> +	/* Get AURA current available count */
>> +	cnt_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT(aura.laura));
>> +	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
>> +
>> +	if (limit_reg.cn78xx.limit < cnt_reg.cn78xx.cnt)
>> +		return 0;
>> +
>> +	/* Calculate AURA-based buffer allowance */
>> +	ret = limit_reg.cn78xx.limit - cnt_reg.cn78xx.cnt;
>> +
>> +	/* Use POOL real buffer availability when less than allowance */
>> +	if (ret > (long long)avail_reg.cn78xx.count)
>> +		ret = avail_reg.cn78xx.count;
>> +
>> +	return ret;
>> +}
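
Worked example with made-up numbers: with CNT = 100, CNT_LIMIT = 400
and a POOL available count of 250, the AURA allowance is 400 - 100 =
300, which is then capped to the pool's real availability, so the
function returns 250.
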
>> +
>> +/**
>> + * Configure the QoS parameters of an FPA3 AURA
>> + *
>> + * @param aura is the FPA3 AURA handle
>> + * @param ena_bp enables backpressure when outstanding count exceeds 'bp_thresh'
>> + * @param ena_red enables random early discard when outstanding count
>> + *	exceeds 'pass_thresh'
>> + * @param pass_thresh is the maximum count to invoke flow control
>> + * @param drop_thresh is the count threshold to begin dropping packets
>> + * @param bp_thresh is the back-pressure threshold
>> + *
>> + */
>> +static inline void cvmx_fpa3_setup_aura_qos(cvmx_fpa3_gaura_t aura, bool ena_red, u64 pass_thresh,
>> +					    u64 drop_thresh, bool ena_bp, u64 bp_thresh)
>> +{
>> +	unsigned int shift = 0;
>> +	u64 shift_thresh;
>> +	cvmx_fpa_aurax_cnt_limit_t limit_reg;
>> +	cvmx_fpa_aurax_cnt_levels_t aura_level;
>> +
>> +	if (!__cvmx_fpa3_aura_valid(aura))
>> +		return;
>> +
>> +	/* Get AURAX count limit for validation */
>> +	limit_reg.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LIMIT(aura.laura));
>> +
>> +	if (pass_thresh < 256)
>> +		pass_thresh = 255;
>> +
>> +	if (drop_thresh <= pass_thresh || drop_thresh > limit_reg.cn78xx.limit)
>> +		drop_thresh = limit_reg.cn78xx.limit;
>> +
>> +	if (bp_thresh < 256 || bp_thresh > limit_reg.cn78xx.limit)
>> +		bp_thresh = limit_reg.cn78xx.limit >> 1;
>> +
>> +	shift_thresh = (bp_thresh > drop_thresh) ? bp_thresh : drop_thresh;
>> +
>> +	/* Calculate shift so that the largest threshold fits in 8 bits */
>> +	for (shift = 0; shift < (1 << 6); shift++) {
>> +		if (0 == ((shift_thresh >> shift) & ~0xffull))
>> +			break;
>> +	}
>> +
>> +	aura_level.u64 = cvmx_read_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura));
>> +	aura_level.s.pass = pass_thresh >> shift;
>> +	aura_level.s.drop = drop_thresh >> shift;
>> +	aura_level.s.bp = bp_thresh >> shift;
>> +	aura_level.s.shift = shift;
>> +	aura_level.s.red_ena = ena_red;
>> +	aura_level.s.bp_ena = ena_bp;
>> +	cvmx_write_csr_node(aura.node, CVMX_FPA_AURAX_CNT_LEVELS(aura.laura), aura_level.u64);
>> +}
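
A worked configuration may help here (sketch; the thresholds are
illustrative and 'aura' is a previously validated handle):

	u64 limit = 1024;	/* assumed FPA_AURAX_CNT_LIMIT value */

	/* RED between 50% and 75% of the limit, no backpressure */
	cvmx_fpa3_setup_aura_qos(aura, true, limit / 2, (limit * 3) / 4,
				 false, limit);

With these numbers shift_thresh is 1024, so shift becomes 3 and the
pass/drop levels are stored as 64 and 96 in the 8-bit CNT_LEVELS
fields.
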
>> +
>> +cvmx_fpa3_gaura_t cvmx_fpa3_reserve_aura(int node, int desired_aura_num);
>> +int cvmx_fpa3_release_aura(cvmx_fpa3_gaura_t aura);
>> +cvmx_fpa3_pool_t cvmx_fpa3_reserve_pool(int node, int desired_pool_num);
>> +int cvmx_fpa3_release_pool(cvmx_fpa3_pool_t pool);
>> +int cvmx_fpa3_is_aura_available(int node, int aura_num);
>> +int cvmx_fpa3_is_pool_available(int node, int pool_num);
>> +
>> +cvmx_fpa3_pool_t cvmx_fpa3_setup_fill_pool(int node, int
>> desired_pool, const char *name,
>> +					   unsigned int block_size,
>> unsigned int num_blocks,
>> +					   void *buffer);
>> +
>> +/**
>> + * Function to attach an aura to an existing pool
>> + *
>> + * @param pool - configured pool to attach the aura to
>> + * @param desired_aura - aura number to use, or -1 to allocate one dynamically
>> + * @param name - name to register
>> + * @param block_size - size of buffers to use
>> + * @param num_blocks - number of blocks to allocate
>> + *
>> + * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on failure
>> + */
>> +cvmx_fpa3_gaura_t cvmx_fpa3_set_aura_for_pool(cvmx_fpa3_pool_t pool, int desired_aura,
>> +					      const char *name, unsigned int block_size,
>> +					      unsigned int num_blocks);
>> +
>> +/**
>> + * Function to set up and initialize an aura and its backing pool.
>> + *
>> + * @param node - configure fpa on this node
>> + * @param desired_aura - aura number to use, or -1 for dynamic allocation
>> + * @param name - name to register
>> + * @param buffer - memory for the pool's buffers
>> + * @param block_size - size of buffers in pool
>> + * @param num_blocks - max number of buffers allowed
>> + *
>> + * @return configured gaura on success, CVMX_FPA3_INVALID_GAURA on failure
>> + */
>> +cvmx_fpa3_gaura_t cvmx_fpa3_setup_aura_and_pool(int node, int desired_aura, const char *name,
>> +						void *buffer, unsigned int block_size,
>> +						unsigned int num_blocks);
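
Reviewer note: since the allocation API is spread over several prototypes, a
minimal usage sketch may help (node, name and sizes are hypothetical; error
handling mostly elided):

	cvmx_fpa3_gaura_t aura;

	aura = cvmx_fpa3_setup_aura_and_pool(0 /* node */, -1 /* any aura */,
					     "pkt-pool", NULL, 2048, 1024);
	if (!__cvmx_fpa3_aura_valid(aura))
		return -1;
	/* RED above ~75% of the limit, drop at the limit, no backpressure */
	cvmx_fpa3_setup_aura_qos(aura, true, 768, 1024, false, 0);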
>> +
>> +int cvmx_fpa3_shutdown_aura_and_pool(cvmx_fpa3_gaura_t aura);
>> +int cvmx_fpa3_shutdown_aura(cvmx_fpa3_gaura_t aura);
>> +int cvmx_fpa3_shutdown_pool(cvmx_fpa3_pool_t pool);
>> +const char *cvmx_fpa3_get_pool_name(cvmx_fpa3_pool_t pool);
>> +int cvmx_fpa3_get_pool_buf_size(cvmx_fpa3_pool_t pool);
>> +const char *cvmx_fpa3_get_aura_name(cvmx_fpa3_gaura_t aura);
>> +
>> +/* FIXME: Need a different macro for stage2 of u-boot */
>> +
>> +static inline void cvmx_fpa3_stage2_init(int aura, int pool, u64 stack_paddr, int stacklen,
>> +					 int buffer_sz, int buf_cnt)
>> +{
>> +	cvmx_fpa_poolx_cfg_t pool_cfg;
>> +
>> +	/* Configure pool stack */
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), stack_paddr);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), stack_paddr);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), stack_paddr + stacklen);
>> +
>> +	/* Configure pool with buffer size, keeping it disabled to reset it */
>> +	pool_cfg.u64 = 0;
>> +	pool_cfg.cn78xx.nat_align = 1;
>> +	pool_cfg.cn78xx.buf_size = buffer_sz >> 7;
>> +	pool_cfg.cn78xx.l_type = 0x2;
>> +	pool_cfg.cn78xx.ena = 0;
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
>> +	/* Now enable the pool */
>> +	pool_cfg.cn78xx.ena = 1;
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), pool_cfg.u64);
>> +
>> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CFG(aura), 0);
>> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_CNT_ADD(aura), buf_cnt);
>> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), (u64)pool);
>> +}
>> +
>> +static inline void cvmx_fpa3_stage2_disable(int aura, int pool)
>> +{
>> +	cvmx_write_csr_node(0, CVMX_FPA_AURAX_POOL(aura), 0);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_CFG(pool), 0);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_BASE(pool), 0);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_ADDR(pool), 0);
>> +	cvmx_write_csr_node(0, CVMX_FPA_POOLX_STACK_END(pool), 0);
>> +}
>> +
>> +#endif /* __CVMX_FPA3_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
>> new file mode 100644
>> index 000000000000..28c32ddbe17a
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-global-resources.h
>> @@ -0,0 +1,213 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef _CVMX_GLOBAL_RESOURCES_T_
>> +#define _CVMX_GLOBAL_RESOURCES_T_
>> +
>> +#define CVMX_GLOBAL_RESOURCES_DATA_NAME "cvmx-global-resources"
>> +
>> +/* In the macros below, the abbreviation GR stands for global resources. */
>> +#define CVMX_GR_TAG_INVALID                                                                        \
>> +	cvmx_get_gr_tag('i', 'n', 'v', 'a', 'l', 'i', 'd', '.', '.', '.', '.', '.', '.', '.', '.', \
>> +			'.')
>> +/* Tag for the PKO queue table range. */
>> +#define CVMX_GR_TAG_PKO_QUEUES                                                                     \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'q', 'u', 'e', 'u', 's', '.', '.', \
>> +			'.')
>> +/* Tag for a PKO internal ports range */
>> +#define CVMX_GR_TAG_PKO_IPORTS                                                                     \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'k', 'o', '_', 'i', 'p', 'o', 'r', 't', '.', '.', \
>> +			'.')
>> +#define CVMX_GR_TAG_FPA                                                                            \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'p', 'a', '.', '.', '.', '.', '.', '.', '.', '.', \
>> +			'.')
>> +#define CVMX_GR_TAG_FAU                                                                            \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'f', 'a', 'u', '.', '.', '.', '.', '.', '.', '.', '.', \
>> +			'.')
>> +#define CVMX_GR_TAG_SSO_GRP(n)                                                                     \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 's', 'o', '_', '0', (n) + '0', '.', '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_TIM(n)                                                                         \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 't', 'i', 'm', '_', (n) + '0', '.', '.', '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_CLUSTERS(x)                                                                    \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'u', 's', 't', 'e', 'r', '_', (x + '0'),     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_CLUSTER_GRP(x)                                                                 \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'l', 'g', 'r', 'p', '_', (x + '0'), '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_STYLE(x)                                                                       \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 's', 't', 'y', 'l', 'e', '_', (x + '0'), '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_QPG_ENTRY(x)                                                                   \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'q', 'p', 'g', 'e', 't', '_', (x + '0'), '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_BPID(x)                                                                        \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'b', 'p', 'i', 'd', 's', '_', (x + '0'), '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_MTAG_IDX(x)                                                                    \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'm', 't', 'a', 'g', 'x', '_', (x + '0'), '.', '.',     \
>> +			'.', '.', '.')
>> +#define CVMX_GR_TAG_PCAM(x, y, z)                                                                  \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'p', 'c', 'a', 'm', '_', (x + '0'), (y + '0'),         \
>> +			(z + '0'), '.', '.', '.', '.')
>> +
>> +#define CVMX_GR_TAG_CIU3_IDT(_n)                                                                   \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 'i', 'd',  \
>> +			't', '.', '.')
>> +
>> +/* Allocation of the 512 SW INTSNs (in the 12-bit SW INTSN space) */
>> +#define CVMX_GR_TAG_CIU3_SWINTSN(_n)                                                               \
>> +	cvmx_get_gr_tag('c', 'v', 'm', '_', 'c', 'i', 'u', '3', '_', ((_n) + '0'), '_', 's', 'w',  \
>> +			'i', 's', 'n')
>> +
>> +#define TAG_INIT_PART(A, B, C, D, E, F, G, H)                                                      \
>> +	((((u64)(A) & 0xff) << 56) | (((u64)(B) & 0xff) << 48) | (((u64)(C) & 0xff) << 40) |       \
>> +	 (((u64)(D) & 0xff) << 32) | (((u64)(E) & 0xff) << 24) | (((u64)(F) & 0xff) << 16) |       \
>> +	 (((u64)(G) & 0xff) << 8) | (((u64)(H) & 0xff)))
>> +
>> +struct global_resource_tag {
>> +	u64 lo;
>> +	u64 hi;
>> +};
>> +
>> +enum cvmx_resource_err { CVMX_RESOURCE_ALLOC_FAILED = -1, CVMX_RESOURCE_ALREADY_RESERVED = -2 };
>> +
>> +/*
>> + * @INTERNAL
>> + * Creates a tag from the specified characters.
>> + */
>> +static inline struct global_resource_tag cvmx_get_gr_tag(char a, char b, char c, char d, char e,
>> +							 char f, char g, char h, char i, char j,
>> +							 char k, char l, char m, char n, char o,
>> +							 char p)
>> +{
>> +	struct global_resource_tag tag;
>> +
>> +	tag.lo = TAG_INIT_PART(a, b, c, d, e, f, g, h);
>> +	tag.hi = TAG_INIT_PART(i, j, k, l, m, n, o, p);
>> +	return tag;
>> +}
>> +
>> +static inline int cvmx_gr_same_tag(struct global_resource_tag gr1, struct global_resource_tag gr2)
>> +{
>> +	return (gr1.hi == gr2.hi) && (gr1.lo == gr2.lo);
>> +}
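
Reviewer note: a tag is simply the 16 characters packed big-endian into the
two u64 words, so equality is a plain two-word compare. A small sketch
(illustrative only; assert() is used just for clarity):

	struct global_resource_tag t = CVMX_GR_TAG_FPA;

	/* "cvm_fpa." packs into t.lo and "........" into t.hi */
	assert(cvmx_gr_same_tag(t, CVMX_GR_TAG_FPA));
	assert(!cvmx_gr_same_tag(t, CVMX_GR_TAG_FAU));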
>> +
>> +/*
>> + * @INTERNAL
>> + * Creates a global resource range that can hold the specified number of
>> + * elements.
>> + * @param tag is the tag of the range. The tag is created using the
>> + * cvmx_get_gr_tag() function.
>> + * @param nelements is the number of elements to be held in the resource range.
>> + */
>> +int cvmx_create_global_resource_range(struct global_resource_tag tag, int nelements);
>> +
>> +/*
>> + * @INTERNAL
>> + * Allocate nelements in the global resource range with the specified tag. It
>> + * is assumed that prior to calling this the global resource range has already
>> + * been created using cvmx_create_global_resource_range().
>> + * @param tag is the tag of the global resource range.
>> + * @param nelements is the number of elements to be allocated.
>> + * @param owner is a 64 bit number that identifies the owner of this range.
>> + * @param alignment specifies the required alignment of the returned base number.
>> + * @return returns the base of the allocated range. A -1 return value indicates
>> + * failure.
>> + */
>> +int cvmx_allocate_global_resource_range(struct global_resource_tag tag, u64 owner, int nelements,
>> +					int alignment);
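
Reviewer note: a minimal create-then-allocate sketch of the two calls above
(tag choice, owner id and counts are made up):

	struct global_resource_tag tag = CVMX_GR_TAG_FPA;
	int base;

	if (cvmx_create_global_resource_range(tag, 64) != 0)
		return -1;
	/* 8 contiguous elements, base aligned to 4, owned by app id 1 */
	base = cvmx_allocate_global_resource_range(tag, 1, 8, 4);
	if (base < 0)
		return -1;
	/* use elements base .. base + 7, then: */
	cvmx_free_global_resource_range_with_base(tag, base, 8);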
>> +
>> +/*
>> + * @INTERNAL
>> + * Allocate nelements in the global resource range with the specified tag.
>> + * The elements allocated need not be contiguous. It is assumed that prior to
>> + * calling this the global resource range has already been created using
>> + * cvmx_create_global_resource_range().
>> + * @param tag is the tag of the global resource range.
>> + * @param nelements is the number of elements to be allocated.
>> + * @param owner is a 64 bit number that identifies the owner of the allocated
>> + * elements.
>> + * @param allocated_elements returns the indexes of the allocated entries.
>> + * @return returns 0 on success and -1 on failure.
>> + */
>> +int cvmx_resource_alloc_many(struct global_resource_tag tag, u64 owner, int nelements,
>> +			     int allocated_elements[]);
>> +int cvmx_resource_alloc_reverse(struct global_resource_tag tag, u64 owner);
>> +/*
>> + * @INTERNAL
>> + * Reserve nelements starting from base in the global resource range with the
>> + * specified tag.
>> + * It is assumed that prior to calling this the global resource range has
>> + * already been created using cvmx_create_global_resource_range().
>> + * @param tag is the tag of the global resource range.
>> + * @param nelements is the number of elements to be allocated.
>> + * @param owner is a 64 bit number that identifies the owner of this range.
>> + * @param base specifies the base start of nelements.
>> + * @return returns the base of the allocated range. A -1 return value indicates
>> + * failure.
>> + */
>> +int cvmx_reserve_global_resource_range(struct global_resource_tag tag, u64 owner, int base,
>> +				       int nelements);
>> +/*
>> + * @INTERNAL
>> + * Free nelements starting at base in the global resource range with the
>> + * specified tag.
>> + * @param tag is the tag of the global resource range.
>> + * @param base is the base number
>> + * @param nelements is the number of elements that are to be freed.
>> + * @return returns 0 if successful and -1 on failure.
>> + */
>> +int cvmx_free_global_resource_range_with_base(struct global_resource_tag tag, int base,
>> +					      int nelements);
>> +
>> +/*
>> + * @INTERNAL
>> + * Free nelements with the bases specified in bases[] with the
>> + * specified tag.
>> + * @param tag is the tag of the global resource range.
>> + * @param bases is an array containing the bases to be freed.
>> + * @param nelements is the number of elements that are to be freed.
>> + * @return returns 0 if successful and -1 on failure.
>> + */
>> +int cvmx_free_global_resource_range_multiple(struct global_resource_tag tag, int bases[],
>> +					     int nelements);
>> +/*
>> + * @INTERNAL
>> + * Free elements from the specified owner in the global resource range with the
>> + * specified tag.
>> + * @param tag is the tag of the global resource range.
>> + * @param owner is the owner of resources that are to be freed.
>> + * @return returns 0 if successful and -1 on failure.
>> + */
>> +int cvmx_free_global_resource_range_with_owner(struct global_resource_tag tag, int owner);
>> +
>> +/*
>> + * @INTERNAL
>> + * Frees all the global resources that have been created.
>> + * For use only from the bootloader, when it shuts down and boots the
>> + * application or kernel.
>> + */
>> +int free_global_resources(void);
>> +
>> +u64 cvmx_get_global_resource_owner(struct global_resource_tag tag, int base);
>> +/*
>> + * @INTERNAL
>> + * Shows the global resource range with the specified tag. Used mainly for
>> + * debugging.
>> + */
>> +void cvmx_show_global_resource_range(struct global_resource_tag tag);
>> +
>> +/*
>> + * @INTERNAL
>> + * Shows all the global resources. Used mainly for debug.
>> + */
>> +void cvmx_global_resources_show(void);
>> +
>> +u64 cvmx_allocate_app_id(void);
>> +u64 cvmx_get_app_id(void);
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-gmx.h b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
>> new file mode 100644
>> index 000000000000..2df7da102a0f
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-gmx.h
>> @@ -0,0 +1,16 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the GMX hardware.
>> + */
>> +
>> +#ifndef __CVMX_GMX_H__
>> +#define __CVMX_GMX_H__
>> +
>> +/* CSR typedefs have been moved to cvmx-gmx-defs.h */
>> +
>> +int cvmx_gmx_set_backpressure_override(u32 interface, u32 port_mask);
>> +int cvmx_agl_set_backpressure_override(u32 interface, u32 port_mask);
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
>> new file mode 100644
>> index 000000000000..59772190aa3b
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-hwfau.h
>> @@ -0,0 +1,606 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Fetch and Add Unit.
>> + */
>> +
>> +/**
>> + * @file
>> + *
>> + * Interface to the hardware Fetch and Add Unit.
>> + *
>> + */
>> +
>> +#ifndef __CVMX_HWFAU_H__
>> +#define __CVMX_HWFAU_H__
>> +
>> +typedef int cvmx_fau_reg64_t;
>> +typedef int cvmx_fau_reg32_t;
>> +typedef int cvmx_fau_reg16_t;
>> +typedef int cvmx_fau_reg8_t;
>> +
>> +#define CVMX_FAU_REG_ANY -1
>> +
>> +/*
>> + * Octeon Fetch and Add Unit (FAU)
>> + */
>> +
>> +#define CVMX_FAU_LOAD_IO_ADDRESS cvmx_build_io_address(0x1e, 0)
>> +#define CVMX_FAU_BITS_SCRADDR	 63, 56
>> +#define CVMX_FAU_BITS_LEN	 55, 48
>> +#define CVMX_FAU_BITS_INEVAL	 35, 14
>> +#define CVMX_FAU_BITS_TAGWAIT	 13, 13
>> +#define CVMX_FAU_BITS_NOADD	 13, 13
>> +#define CVMX_FAU_BITS_SIZE	 12, 11
>> +#define CVMX_FAU_BITS_REGISTER	 10, 0
>> +
>> +#define CVMX_FAU_MAX_REGISTERS_8 (2048)
>> +
>> +typedef enum {
>> +	CVMX_FAU_OP_SIZE_8 = 0,
>> +	CVMX_FAU_OP_SIZE_16 = 1,
>> +	CVMX_FAU_OP_SIZE_32 = 2,
>> +	CVMX_FAU_OP_SIZE_64 = 3
>> +} cvmx_fau_op_size_t;
>> +
>> +/**
>> + * Tagwait return definition. If a timeout occurs, the error
>> + * bit will be set. Otherwise the value of the register before
>> + * the update will be returned.
>> + */
>> +typedef struct {
>> +	u64 error : 1;
>> +	s64 value : 63;
>> +} cvmx_fau_tagwait64_t;
>> +
>> +/**
>> + * Tagwait return definition. If a timeout occurs, the error
>> + * bit will be set. Otherwise the value of the register before
>> + * the update will be returned.
>> + */
>> +typedef struct {
>> +	u64 error : 1;
>> +	s32 value : 31;
>> +} cvmx_fau_tagwait32_t;
>> +
>> +/**
>> + * Tagwait return definition. If a timeout occurs, the error
>> + * bit will be set. Otherwise the value of the register before
>> + * the update will be returned.
>> + */
>> +typedef struct {
>> +	u64 error : 1;
>> +	s16 value : 15;
>> +} cvmx_fau_tagwait16_t;
>> +
>> +/**
>> + * Tagwait return definition. If a timeout occurs, the error
>> + * bit will be set. Otherwise the value of the register before
>> + * the update will be returned.
>> + */
>> +typedef struct {
>> +	u64 error : 1;
>> +	int8_t value : 7;
>> +} cvmx_fau_tagwait8_t;
>> +
>> +/**
>> + * Asynchronous tagwait return definition. If a timeout occurs,
>> + * the error bit will be set. Otherwise the value of the
>> + * register before the update will be returned.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 invalid : 1;
>> +		u64 data : 63; /* unpredictable if invalid is set */
>> +	} s;
>> +} cvmx_fau_async_tagwait_result_t;
>> +
>> +#define SWIZZLE_8  0
>> +#define SWIZZLE_16 0
>> +#define SWIZZLE_32 0
>> +
>> +/**
>> + * @INTERNAL
>> + * Builds a store I/O address for writing to the FAU
>> + *
>> + * @param noadd  0 = Store value is atomically added to the current value
>> + *               1 = Store value is atomically written over the current value
>> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
>> + *               - Step by 2 for 16 bit access.
>> + *               - Step by 4 for 32 bit access.
>> + *               - Step by 8 for 64 bit access.
>> + * @return Address to store for atomic update
>> + */
>> +static inline u64 __cvmx_hwfau_store_address(u64 noadd, u64 reg)
>> +{
>> +	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_NOADD, noadd) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Builds an I/O address for accessing the FAU
>> + *
>> + * @param tagwait Should the atomic add wait for the current tag switch
>> + *                operation to complete.
>> + *                - 0 = Don't wait
>> + *                - 1 = Wait for tag switch to complete
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + *                - Step by 4 for 32 bit access.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: When performing 32 and 64 bit access, only the low
>> + *                22 bits are available.
>> + * @return Address to read from for atomic update
>> + */
>> +static inline u64 __cvmx_hwfau_atomic_address(u64 tagwait, u64 reg, s64 value)
>> +{
>> +	return (CVMX_ADD_IO_SEG(CVMX_FAU_LOAD_IO_ADDRESS) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
>> +}
>> +
>> +/**
>> + * Perform an atomic 64 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Value of the register before the update
>> + */
>> +static inline s64 cvmx_hwfau_fetch_and_add64(cvmx_fau_reg64_t reg, s64 value)
>> +{
>> +	return cvmx_read64_int64(__cvmx_hwfau_atomic_address(0, reg, value));
>> +}
>> +
>> +/**
>> + * Perform an atomic 32 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 4 for 32 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Value of the register before the update
>> + */
>> +static inline s32 cvmx_hwfau_fetch_and_add32(cvmx_fau_reg32_t reg, s32 value)
>> +{
>> +	reg ^= SWIZZLE_32;
>> +	return cvmx_read64_int32(__cvmx_hwfau_atomic_address(0, reg, value));
>> +}
>> +
>> +/**
>> + * Perform an atomic 16 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + * @param value   Signed value to add.
>> + * @return Value of the register before the update
>> + */
>> +static inline s16 cvmx_hwfau_fetch_and_add16(cvmx_fau_reg16_t reg, s16 value)
>> +{
>> +	reg ^= SWIZZLE_16;
>> +	return cvmx_read64_int16(__cvmx_hwfau_atomic_address(0, reg, value));
>> +}
>> +
>> +/**
>> + * Perform an atomic 8 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value   Signed value to add.
>> + * @return Value of the register before the update
>> + */
>> +static inline int8_t cvmx_hwfau_fetch_and_add8(cvmx_fau_reg8_t reg, int8_t value)
>> +{
>> +	reg ^= SWIZZLE_8;
>> +	return cvmx_read64_int8(__cvmx_hwfau_atomic_address(0, reg, value));
>> +}
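
Reviewer note: the typical use of these wrappers is a shared statistics
counter; a minimal sketch (register number is made up, real code should
obtain it from cvmx_fau64_alloc()):

	cvmx_fau_reg64_t cnt = 0;	/* hypothetical 64-bit FAU register */
	s64 old;

	old = cvmx_hwfau_fetch_and_add64(cnt, 1);	/* atomic increment */
	cvmx_hwfau_atomic_write64(cnt, 0);		/* reset the counter */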
>> +
>> +/**
>> + * Perform an atomic 64 bit add after the current tag switch
>> + * completes
>> + *
>> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
>> + *               - Step by 8 for 64 bit access.
>> + * @param value  Signed value to add.
>> + *               Note: Only the low 22 bits are available.
>> + * @return If a timeout occurs, the error bit will be set. Otherwise
>> + *         the value of the register before the update will be
>> + *         returned
>> + */
>> +static inline cvmx_fau_tagwait64_t cvmx_hwfau_tagwait_fetch_and_add64(cvmx_fau_reg64_t reg,
>> +								       s64 value)
>> +{
>> +	union {
>> +		u64 i64;
>> +		cvmx_fau_tagwait64_t t;
>> +	} result;
>> +	result.i64 = cvmx_read64_int64(__cvmx_hwfau_atomic_address(1, reg, value));
>> +	return result.t;
>> +}
>> +
>> +/**
>> + * Perform an atomic 32 bit add after the current tag switch
>> + * completes
>> + *
>> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
>> + *               - Step by 4 for 32 bit access.
>> + * @param value  Signed value to add.
>> + *               Note: Only the low 22 bits are available.
>> + * @return If a timeout occurs, the error bit will be set. Otherwise
>> + *         the value of the register before the update will be
>> + *         returned
>> + */
>> +static inline cvmx_fau_tagwait32_t cvmx_hwfau_tagwait_fetch_and_add32(cvmx_fau_reg32_t reg,
>> +								       s32 value)
>> +{
>> +	union {
>> +		u64 i32;
>> +		cvmx_fau_tagwait32_t t;
>> +	} result;
>> +	reg ^= SWIZZLE_32;
>> +	result.i32 = cvmx_read64_int32(__cvmx_hwfau_atomic_address(1, reg, value));
>> +	return result.t;
>> +}
>> +
>> +/**
>> + * Perform an atomic 16 bit add after the current tag switch
>> + * completes
>> + *
>> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
>> + *               - Step by 2 for 16 bit access.
>> + * @param value  Signed value to add.
>> + * @return If a timeout occurs, the error bit will be set. Otherwise
>> + *         the value of the register before the update will be
>> + *         returned
>> + */
>> +static inline cvmx_fau_tagwait16_t cvmx_hwfau_tagwait_fetch_and_add16(cvmx_fau_reg16_t reg,
>> +								       s16 value)
>> +{
>> +	union {
>> +		u64 i16;
>> +		cvmx_fau_tagwait16_t t;
>> +	} result;
>> +	reg ^= SWIZZLE_16;
>> +	result.i16 = cvmx_read64_int16(__cvmx_hwfau_atomic_address(1, reg, value));
>> +	return result.t;
>> +}
>> +
>> +/**
>> + * Perform an atomic 8 bit add after the current tag switch
>> + * completes
>> + *
>> + * @param reg    FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value  Signed value to add.
>> + * @return If a timeout occurs, the error bit will be set. Otherwise
>> + *         the value of the register before the update will be
>> + *         returned
>> + */
>> +static inline cvmx_fau_tagwait8_t cvmx_hwfau_tagwait_fetch_and_add8(cvmx_fau_reg8_t reg,
>> +								     int8_t value)
>> +{
>> +	union {
>> +		u64 i8;
>> +		cvmx_fau_tagwait8_t t;
>> +	} result;
>> +	reg ^= SWIZZLE_8;
>> +	result.i8 = cvmx_read64_int8(__cvmx_hwfau_atomic_address(1, reg, value));
>> +	return result.t;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Builds I/O data for async operations
>> + *
>> + * @param scraddr Scratch pad byte address to write to. Must be 8 byte aligned
>> + * @param value   Signed value to add.
>> + *                Note: When performing 32 and 64 bit access, only the low
>> + *                22 bits are available.
>> + * @param tagwait Should the atomic add wait for the current tag switch
>> + *                operation to complete.
>> + *                - 0 = Don't wait
>> + *                - 1 = Wait for tag switch to complete
>> + * @param size    The size of the operation:
>> + *                - CVMX_FAU_OP_SIZE_8  (0) = 8 bits
>> + *                - CVMX_FAU_OP_SIZE_16 (1) = 16 bits
>> + *                - CVMX_FAU_OP_SIZE_32 (2) = 32 bits
>> + *                - CVMX_FAU_OP_SIZE_64 (3) = 64 bits
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + *                - Step by 4 for 32 bit access.
>> + *                - Step by 8 for 64 bit access.
>> + * @return Data to write using cvmx_send_single
>> + */
>> +static inline u64 __cvmx_fau_iobdma_data(u64 scraddr, s64 value, u64 tagwait,
>> +					 cvmx_fau_op_size_t size, u64 reg)
>> +{
>> +	return (CVMX_FAU_LOAD_IO_ADDRESS | cvmx_build_bits(CVMX_FAU_BITS_SCRADDR, scraddr >> 3) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_LEN, 1) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_INEVAL, value) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_TAGWAIT, tagwait) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_SIZE, size) |
>> +		cvmx_build_bits(CVMX_FAU_BITS_REGISTER, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 64 bit add. The old value is
>> + * placed in the scratch memory at byte address scraddr.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg, s64 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_64, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 32 bit add. The old value is
>> + * placed in the scratch memory at byte address scraddr.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 4 for 32 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg, s32 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_32, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 16 bit add. The old value is
>> + * placed in the scratch memory at byte address scraddr.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + * @param value   Signed value to add.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg, s16 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_16, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 8 bit add. The old value is
>> + * placed in the scratch memory at byte address scraddr.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value   Signed value to add.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg, int8_t value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 0, CVMX_FAU_OP_SIZE_8, reg));
>> +}
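
Reviewer note: the async variants only pay off together with the scratchpad
readback; a minimal sketch (scratch offset 0 is assumed free, and
cvmx_scratch_read64()/CVMX_SYNCIOBDMA are assumed to be the usual companion
helpers from this codebase):

	cvmx_fau_reg64_t reg = 0;	/* hypothetical 64-bit FAU register */
	s64 old;

	cvmx_hwfau_async_fetch_and_add64(0, reg, 1);	/* result lands at scratch offset 0 */
	CVMX_SYNCIOBDMA;				/* wait for the IOBDMA to complete */
	old = (s64)cvmx_scratch_read64(0);		/* value before the add */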
>> +
>> +/**
>> + * Perform an async atomic 64 bit add after the current tag
>> + * switch completes.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + *                If a timeout occurs, the error bit (63) will be set.
>> + *                Otherwise the value of the register before the update
>> + *                will be returned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add64(u64 scraddr, cvmx_fau_reg64_t reg,
>> +							    s64 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_64, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 32 bit add after the current tag
>> + * switch completes.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + *                If a timeout occurs, the error bit (63) will be set.
>> + *                Otherwise the value of the register before the update
>> + *                will be returned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 4 for 32 bit access.
>> + * @param value   Signed value to add.
>> + *                Note: Only the low 22 bits are available.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add32(u64 scraddr, cvmx_fau_reg32_t reg,
>> +							    s32 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_32, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 16 bit add after the current tag
>> + * switch completes.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + *                If a timeout occurs, the error bit (63) will be set.
>> + *                Otherwise the value of the register before the update
>> + *                will be returned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + * @param value   Signed value to add.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add16(u64 scraddr, cvmx_fau_reg16_t reg,
>> +							    s16 value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_16, reg));
>> +}
>> +
>> +/**
>> + * Perform an async atomic 8 bit add after the current tag
>> + * switch completes.
>> + *
>> + * @param scraddr Scratch memory byte address to put response in.
>> + *                Must be 8 byte aligned.
>> + *                If a timeout occurs, the error bit (63) will be set.
>> + *                Otherwise the value of the register before the update
>> + *                will be returned.
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value   Signed value to add.
>> + * @return Placed in the scratch pad register
>> + */
>> +static inline void cvmx_hwfau_async_tagwait_fetch_and_add8(u64 scraddr, cvmx_fau_reg8_t reg,
>> +							   int8_t value)
>> +{
>> +	cvmx_send_single(__cvmx_fau_iobdma_data(scraddr, value, 1, CVMX_FAU_OP_SIZE_8, reg));
>> +}
>> +
>> +/**
>> + * Perform an atomic 64 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to add.
>> + */
>> +static inline void cvmx_hwfau_atomic_add64(cvmx_fau_reg64_t reg, s64 value)
>> +{
>> +	cvmx_write64_int64(__cvmx_hwfau_store_address(0, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 32 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 4 for 32 bit access.
>> + * @param value   Signed value to add.
>> + */
>> +static inline void cvmx_hwfau_atomic_add32(cvmx_fau_reg32_t reg, s32 value)
>> +{
>> +	reg ^= SWIZZLE_32;
>> +	cvmx_write64_int32(__cvmx_hwfau_store_address(0, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 16 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + * @param value   Signed value to add.
>> + */
>> +static inline void cvmx_hwfau_atomic_add16(cvmx_fau_reg16_t reg, s16 value)
>> +{
>> +	reg ^= SWIZZLE_16;
>> +	cvmx_write64_int16(__cvmx_hwfau_store_address(0, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 8 bit add
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value   Signed value to add.
>> + */
>> +static inline void cvmx_hwfau_atomic_add8(cvmx_fau_reg8_t reg, int8_t value)
>> +{
>> +	reg ^= SWIZZLE_8;
>> +	cvmx_write64_int8(__cvmx_hwfau_store_address(0, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 64 bit write
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 8 for 64 bit access.
>> + * @param value   Signed value to write.
>> + */
>> +static inline void cvmx_hwfau_atomic_write64(cvmx_fau_reg64_t reg, s64 value)
>> +{
>> +	cvmx_write64_int64(__cvmx_hwfau_store_address(1, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 32 bit write
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 4 for 32 bit access.
>> + * @param value   Signed value to write.
>> + */
>> +static inline void cvmx_hwfau_atomic_write32(cvmx_fau_reg32_t reg, s32 value)
>> +{
>> +	reg ^= SWIZZLE_32;
>> +	cvmx_write64_int32(__cvmx_hwfau_store_address(1, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 16 bit write
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + *                - Step by 2 for 16 bit access.
>> + * @param value   Signed value to write.
>> + */
>> +static inline void cvmx_hwfau_atomic_write16(cvmx_fau_reg16_t reg, s16 value)
>> +{
>> +	reg ^= SWIZZLE_16;
>> +	cvmx_write64_int16(__cvmx_hwfau_store_address(1, reg), value);
>> +}
>> +
>> +/**
>> + * Perform an atomic 8 bit write
>> + *
>> + * @param reg     FAU atomic register to access. 0 <= reg < 2048.
>> + * @param value   Signed value to write.
>> + */
>> +static inline void cvmx_hwfau_atomic_write8(cvmx_fau_reg8_t reg, int8_t value)
>> +{
>> +	reg ^= SWIZZLE_8;
>> +	cvmx_write64_int8(__cvmx_hwfau_store_address(1, reg), value);
>> +}
>> +
>> +/** Allocates a 64-bit FAU register.
>> + *  @return the base address of the allocated FAU register
>> + */
>> +int cvmx_fau64_alloc(int reserve);
>> +
>> +/** Allocates a 32-bit FAU register.
>> + *  @return the base address of the allocated FAU register
>> + */
>> +int cvmx_fau32_alloc(int reserve);
>> +
>> +/** Allocates a 16-bit FAU register.
>> + *  @return the base address of the allocated FAU register
>> + */
>> +int cvmx_fau16_alloc(int reserve);
>> +
>> +/** Allocates an 8-bit FAU register.
>> + *  @return the base address of the allocated FAU register
>> + */
>> +int cvmx_fau8_alloc(int reserve);
>> +
>> +/** Frees the specified FAU register.
>> + *  @param address Base address of the register to release.
>> + *  @return 0 on success; -1 on failure
>> + */
>> +int cvmx_fau_free(int address);
>> +
>> +/** Display the FAU registers array
>> + */
>> +void cvmx_fau_show(void);
>> +
>> +#endif /* __CVMX_HWFAU_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
>> new file mode 100644
>> index 000000000000..459c19bbc0f1
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-hwpko.h
>> @@ -0,0 +1,570 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Packet Output unit.
>> + *
>> + * Starting with SDK 1.7.0, the PKO output functions now support
>> + * two types of locking. CVMX_PKO_LOCK_ATOMIC_TAG continues to
>> + * function similarly to previous SDKs by using POW atomic tags
>> + * to preserve ordering and exclusivity. As a new option, you
>> + * can now pass CVMX_PKO_LOCK_CMD_QUEUE which uses ll/sc
>> + * memory based locking instead. This locking has the advantage
>> + * of not affecting the tag state but doesn't preserve packet
>> + * ordering. CVMX_PKO_LOCK_CMD_QUEUE is appropriate in most
>> + * generic code while CVMX_PKO_LOCK_ATOMIC_TAG should be used
>> + * with hand tuned fast path code.
>> + *
>> + * Some other SDK differences visible to command queuing:
>> + * - PKO indexes are no longer stored in the FAU. A large
>> + *   percentage of the FAU register block used to be tied up
>> + *   maintaining PKO queue pointers. These are now stored in a
>> + *   global named block.
>> + * - The PKO <b>use_locking</b> parameter can now have a global
>> + *   effect. Since all applications use the same named block,
>> + *   queue locking correctly applies across all operating
>> + *   systems when using CVMX_PKO_LOCK_CMD_QUEUE.
>> + * - PKO 3 word commands are now supported. Use
>> + *   cvmx_pko_send_packet_finish3().
>> + */
>> +
>> +#ifndef __CVMX_HWPKO_H__
>> +#define __CVMX_HWPKO_H__
>> +
>> +#include "cvmx-hwfau.h"
>> +#include "cvmx-fpa.h"
>> +#include "cvmx-pow.h"
>> +#include "cvmx-cmd-queue.h"
>> +#include "cvmx-helper.h"
>> +#include "cvmx-helper-util.h"
>> +#include "cvmx-helper-cfg.h"
>> +
>> +/* Adjust the command buffer size by 1 word so that in the case of using only
>> +** two word PKO commands no command words straddle buffers.  The useful values
>> +** for this are 0 and 1. */
>> +#define CVMX_PKO_COMMAND_BUFFER_SIZE_ADJUST (1)
>> +
>> +#define CVMX_PKO_MAX_OUTPUT_QUEUES_STATIC 256
>> +#define CVMX_PKO_MAX_OUTPUT_QUEUES                                                                 \
>> +	((OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) ? 256 : 128)
>> +#define CVMX_PKO_NUM_OUTPUT_PORTS                                                                  \
>> +	((OCTEON_IS_MODEL(OCTEON_CN63XX)) ? 44 : (OCTEON_IS_MODEL(OCTEON_CN66XX) ? 48 : 40))
>> +#define CVMX_PKO_MEM_QUEUE_PTRS_ILLEGAL_PID 63
>> +#define CVMX_PKO_QUEUE_STATIC_PRIORITY	    9
>> +#define CVMX_PKO_ILLEGAL_QUEUE		    0xFFFF
>> +#define CVMX_PKO_MAX_QUEUE_DEPTH	    0
>> +
>> +typedef enum {
>> +	CVMX_PKO_SUCCESS,
>> +	CVMX_PKO_INVALID_PORT,
>> +	CVMX_PKO_INVALID_QUEUE,
>> +	CVMX_PKO_INVALID_PRIORITY,
>> +	CVMX_PKO_NO_MEMORY,
>> +	CVMX_PKO_PORT_ALREADY_SETUP,
>> +	CVMX_PKO_CMD_QUEUE_INIT_ERROR
>> +} cvmx_pko_return_value_t;
>> +
>> +/**
>> + * This enumeration represents the different locking modes supported by PKO.
>> + */
>> +typedef enum {
>> +	CVMX_PKO_LOCK_NONE = 0,
>> +	CVMX_PKO_LOCK_ATOMIC_TAG = 1,
>> +	CVMX_PKO_LOCK_CMD_QUEUE = 2,
>> +} cvmx_pko_lock_t;
>> +
>> +typedef struct cvmx_pko_port_status {
>> +	u32 packets;
>> +	u64 octets;
>> +	u64 doorbell;
>> +} cvmx_pko_port_status_t;
>> +
>> +/**
>> + * This structure defines the address to use on a packet enqueue
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		cvmx_mips_space_t mem_space : 2;
>> +		u64 reserved : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved2 : 4;
>> +		u64 reserved3 : 15;
>> +		u64 port : 9;
>> +		u64 queue : 9;
>> +		u64 reserved4 : 3;
>> +	} s;
>> +} cvmx_pko_doorbell_address_t;
>> +
>> +/**
>> + * Structure of the first packet output command word.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		cvmx_fau_op_size_t size1 : 2;
>> +		cvmx_fau_op_size_t size0 : 2;
>> +		u64 subone1 : 1;
>> +		u64 reg1 : 11;
>> +		u64 subone0 : 1;
>> +		u64 reg0 : 11;
>> +		u64 le : 1;
>> +		u64 n2 : 1;
>> +		u64 wqp : 1;
>> +		u64 rsp : 1;
>> +		u64 gather : 1;
>> +		u64 ipoffp1 : 7;
>> +		u64 ignore_i : 1;
>> +		u64 dontfree : 1;
>> +		u64 segs : 6;
>> +		u64 total_bytes : 16;
>> +	} s;
>> +} cvmx_pko_command_word0_t;
>> +
>> +/**
>> + * Call before any other calls to initialize the packet
>> + * output system.
>> + */
>> +
>> +void cvmx_pko_hw_init(u8 pool, unsigned int bufsize);
>> +
>> +/**
>> + * Enables the packet output hardware. It must already be
>> + * configured.
>> + */
>> +void cvmx_pko_enable(void);
>> +
>> +/**
>> + * Disables the packet output. Does not affect any configuration.
>> + */
>> +void cvmx_pko_disable(void);
>> +
>> +/**
>> + * Shutdown and free resources required by packet output.
>> + */
>> +
>> +void cvmx_pko_shutdown(void);
>> +
>> +/**
>> + * Configure an output port and the associated queues for use.
>> + *
>> + * @param port       Port to configure.
>> + * @param base_queue First queue number to associate with this port.
>> + * @param num_queues Number of queues to associate with this port
>> + * @param priority   Array of priority levels for each queue. Values are
>> + *                   allowed to be 1-8. A value of 8 gets 8 times the traffic
>> + *                   of a value of 1. There must be num_queues elements in the
>> + *                   array.
>> + */
>> +cvmx_pko_return_value_t cvmx_pko_config_port(int port, int base_queue, int num_queues,
>> +					     const u8 priority[]);
>> +/**
>> + * Ring the packet output doorbell. This tells the packet
>> + * output hardware that "len" command words have been added
>> + * to its pending list.  This command includes the required
>> + * CVMX_SYNCWS before the doorbell ring.
>> + *
>> + * WARNING: This function may have to look up the proper PKO port in
>> + * the IPD port to PKO port map, and is thus slower than calling
>> + * cvmx_pko_doorbell_pkoid() directly if the PKO port identifier is
>> + * known.
>> + *
>> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
>> + * @param queue  Queue the packet is for
>> + * @param len    Length of the command in 64 bit words
>> + */
>> +static inline void cvmx_pko_doorbell(u64 ipd_port, u64 queue, u64 len)
>> +{
>> +	cvmx_pko_doorbell_address_t ptr;
>> +	u64 pko_port;
>> +
>> +	pko_port = ipd_port;
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKND))
>> +		pko_port = cvmx_helper_cfg_ipd2pko_port_base(ipd_port);
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s.mem_space = CVMX_IO_SEG;
>> +	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
>> +	ptr.s.is_io = 1;
>> +	ptr.s.port = pko_port;
>> +	ptr.s.queue = queue;
>> +	/* Need to make sure output queue data is in DRAM before doorbell write */
>> +	CVMX_SYNCWS;
>> +	cvmx_write_io(ptr.u64, len);
>> +}
>> +
>> +/**
>> + * Prepare to send a packet.  This may initiate a tag switch to
>> + * get exclusive access to the output queue structure, and
>> + * performs other prep work for the packet send operation.
>> + *
>> + * cvmx_pko_send_packet_finish() MUST be called after this function is called,
>> + * and must be called with the same port/queue/use_locking arguments.
>> + *
>> + * The use_locking parameter allows the caller to use three
>> + * possible locking modes.
>> + * - CVMX_PKO_LOCK_NONE
>> + *      - PKO doesn't do any locking. It is the responsibility
>> + *          of the application to make sure that no other core
>> + *          is accessing the same queue at the same time.
>> + * - CVMX_PKO_LOCK_ATOMIC_TAG
>> + *      - PKO performs an atomic tagswitch to ensure exclusive
>> + *          access to the output queue. This will maintain
>> + *          packet ordering on output.
>> + * - CVMX_PKO_LOCK_CMD_QUEUE
>> + *      - PKO uses the common command queue locks to ensure
>> + *          exclusive access to the output queue. This is a
>> + *          memory based ll/sc. This is the most portable
>> + *          locking mechanism.
>> + *
>> + * NOTE: If atomic locking is used, the POW entry CANNOT be
>> + * descheduled, as it does not contain a valid WQE pointer.
>> + *
>> + * @param port   Port to send it on, this can be either an IPD port or a PKO
>> + *		 port.
>> + * @param queue  Queue to use
>> + * @param use_locking
>> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
>> + */
>> +static inline void cvmx_pko_send_packet_prepare(u64 port __attribute__((unused)), u64 queue,
>> +						cvmx_pko_lock_t use_locking)
>> +{
>> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG) {
>> +		/*
>> +		 * Must do a full switch here to handle all cases.  We use a
>> +		 * fake WQE pointer, as the POW does not access this memory.
>> +		 * The WQE pointer and group are only used if this work is
>> +		 * descheduled, which is not supported by the
>> +		 * cvmx_pko_send_packet_prepare/cvmx_pko_send_packet_finish
>> +		 * combination. Note that this is a special case in which these
>> +		 * fake values can be used - this is not a general technique.
>> +		 */
>> +		u32 tag = CVMX_TAG_SW_BITS_INTERNAL << CVMX_TAG_SW_SHIFT |
>> +			  CVMX_TAG_SUBGROUP_PKO << CVMX_TAG_SUBGROUP_SHIFT |
>> +			  (CVMX_TAG_SUBGROUP_MASK & queue);
>> +		cvmx_pow_tag_sw_full((cvmx_wqe_t *)cvmx_phys_to_ptr(0x80), tag,
>> +				     CVMX_POW_TAG_TYPE_ATOMIC, 0);
>> +	}
>> +}
>> +
>> +#define cvmx_pko_send_packet_prepare_pkoid cvmx_pko_send_packet_prepare
>> +
>> +/**
>> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
>> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
>> + * cvmx_pko_send_packet_finish().
>> + *
>> + * WARNING: This function may have to look up the proper PKO port in
>> + * the IPD port to PKO port map, and is thus slower than calling
>> + * cvmx_pko_send_packet_finish_pkoid() directly if the PKO port
>> + * identifier is known.
>> + *
>> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
>> + * @param queue  Queue to use
>> + * @param pko_command
>> + *               PKO HW command word
>> + * @param packet Packet to send
>> + * @param use_locking
>> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
>> + *
>> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
>> + */
>> +static inline cvmx_pko_return_value_t
>> +cvmx_hwpko_send_packet_finish(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
>> +			      cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
>> +{
>> +	cvmx_cmd_queue_result_t result;
>> +
>> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
>> +		cvmx_pow_tag_sw_wait();
>> +
>> +	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
>> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
>> +				       packet.u64);
>> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
>> +		cvmx_pko_doorbell(ipd_port, queue, 2);
>> +		return CVMX_PKO_SUCCESS;
>> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
>> +		return CVMX_PKO_NO_MEMORY;
>> +	} else {
>> +		return CVMX_PKO_INVALID_QUEUE;
>> +	}
>> +}
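
Reviewer note: since the prepare/finish pairing is easy to get wrong, a
minimal single-segment send sketch (port, queue, sizes and the pre-filled
buffer pointer are hypothetical):

	cvmx_pko_command_word0_t cmd;
	cvmx_buf_ptr_t pkt;	/* assumed to point at a filled packet buffer */
	u64 port = 0;
	u64 queue = cvmx_pko_get_base_queue(port);

	cvmx_pko_send_packet_prepare(port, queue, CVMX_PKO_LOCK_CMD_QUEUE);
	cmd.u64 = 0;
	cmd.s.segs = 1;
	cmd.s.total_bytes = 64;
	if (cvmx_hwpko_send_packet_finish(port, queue, cmd, pkt,
					  CVMX_PKO_LOCK_CMD_QUEUE) != CVMX_PKO_SUCCESS)
		; /* drop or retry */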
>> +
>> +/**
>> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
>> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
>> + * cvmx_pko_send_packet_finish().
>> + *
>> + * WARNING: This function may have to look up the proper PKO port in
>> + * the IPD port to PKO port map, and is thus slower than calling
>> + * cvmx_pko_send_packet_finish3_pkoid() directly if the PKO port
>> + * identifier is known.
>> + *
>> + * @param ipd_port   The IPD port corresponding to the PKO port the packet is for
>> + * @param queue  Queue to use
>> + * @param pko_command
>> + *               PKO HW command word
>> + * @param packet Packet to send
>> + * @param addr   Physical address of a work queue entry or physical address to zero on complete.
>> + * @param use_locking
>> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
>> + *
>> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
>> + */
>> +static inline cvmx_pko_return_value_t
>> +cvmx_hwpko_send_packet_finish3(u64 ipd_port, u64 queue, cvmx_pko_command_word0_t pko_command,
>> +			       cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
>> +{
>> +	cvmx_cmd_queue_result_t result;
>> +
>> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
>> +		cvmx_pow_tag_sw_wait();
>> +
>> +	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
>> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
>> +				       packet.u64, addr);
>> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
>> +		cvmx_pko_doorbell(ipd_port, queue, 3);
>> +		return CVMX_PKO_SUCCESS;
>> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
>> +		return CVMX_PKO_NO_MEMORY;
>> +	} else {
>> +		return CVMX_PKO_INVALID_QUEUE;
>> +	}
>> +}
>> +
>> +/**
>> + * Get the first pko_port for the (interface, index)
>> + *
>> + * @param interface
>> + * @param index
>> + */
>> +int cvmx_pko_get_base_pko_port(int interface, int index);
>> +
>> +/**
>> + * Get the number of pko_ports for the (interface, index)
>> + *
>> + * @param interface
>> + * @param index
>> + */
>> +int cvmx_pko_get_num_pko_ports(int interface, int index);
>> +
>> +/**
>> + * For a given port number, return the base pko output queue
>> + * for the port.
>> + *
>> + * @param port   IPD port number
>> + * @return Base output queue
>> + */
>> +int cvmx_pko_get_base_queue(int port);
>> +
>> +/**
>> + * For a given port number, return the number of pko output queues.
>> + *
>> + * @param port   IPD port number
>> + * @return Number of output queues
>> + */
>> +int cvmx_pko_get_num_queues(int port);
>> +
>> +/**
>> + * Sets the internal FPA pool data structure for the PKO command queue.
>> + * @param pool	fpa pool number to use
>> + * @param buffer_size	buffer size of pool
>> + * @param buffer_count	number of buffers to allocate to pool
>> + *
>> + * @note the caller is responsible for setting up the pool with
>> + * an appropriate buffer size and sufficient buffer count.
>> + */
>> +void cvmx_pko_set_cmd_que_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
>> +
>> +/**
>> + * Get the status counters for a port.
>> + *
>> + * @param ipd_port Port number (ipd_port) to get statistics for.
>> + * @param clear    Set to 1 to clear the counters after they are read
>> + * @param status   Where to put the results.
>> + *
>> + * Note:
>> + *     - Only the doorbell for the base queue of the ipd_port is
>> + *       collected.
>> + *     - Retrieving the stats involves writing the index through
>> + *       CVMX_PKO_REG_READ_IDX and reading the stat CSRs, in that
>> + *       order. It is not MP-safe and caller should guarantee
>> + *       atomicity.
>> + */
>> +void cvmx_pko_get_port_status(u64 ipd_port, u64 clear, cvmx_pko_port_status_t *status);
>> +
>> +/**
>> + * Rate limit a PKO port to a max packets/sec. This function is only
>> + * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
>> + *
>> + * @param port      Port to rate limit
>> + * @param packets_s Maximum packets/sec
>> + * @param burst     Maximum number of packets to burst in a row before rate
>> + *                  limiting cuts in.
>> + *
>> + * @return Zero on success, negative on failure
>> + */
>> +int cvmx_pko_rate_limit_packets(int port, int packets_s, int burst);
>> +
>> +/**
>> + * Rate limit a PKO port to a max bits/sec. This function is only
>> + * supported on CN57XX, CN56XX, CN55XX, and CN54XX.
>> + *
>> + * @param port   Port to rate limit
>> + * @param bits_s PKO rate limit in bits/sec
>> + * @param burst  Maximum number of bits to burst before rate
>> + *               limiting cuts in.
>> + *
>> + * @return Zero on success, negative on failure
>> + */
>> +int cvmx_pko_rate_limit_bits(int port, u64 bits_s, int burst);
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Retrieve the PKO pipe number for a port
>> + *
>> + * @param interface
>> + * @param index
>> + *
>> + * @return negative on error.
>> + *
>> + * This applies only to the non-loopback interfaces.
>> + *
>> + */
>> +int __cvmx_pko_get_pipe(int interface, int index);
>> +
>> +/**
>> + * For a given PKO port number, return the base output queue
>> + * for the port.
>> + *
>> + * @param pko_port   PKO port number
>> + * @return           Base output queue
>> + */
>> +int cvmx_pko_get_base_queue_pkoid(int pko_port);
>> +
>> +/**
>> + * For a given PKO port number, return the number of output queues
>> + * for the port.
>> + *
>> + * @param pko_port	PKO port number
>> + * @return		the number of output queues
>> + */
>> +int cvmx_pko_get_num_queues_pkoid(int pko_port);
>> +
>> +/**
>> + * Ring the packet output doorbell. This tells the packet
>> + * output hardware that "len" command words have been added
>> + * to its pending list.  This command includes the required
>> + * CVMX_SYNCWS before the doorbell ring.
>> + *
>> + * @param pko_port   Port the packet is for
>> + * @param queue  Queue the packet is for
>> + * @param len    Length of the command in 64 bit words
>> + */
>> +static inline void cvmx_pko_doorbell_pkoid(u64 pko_port, u64 queue, u64 len)
>> +{
>> +	cvmx_pko_doorbell_address_t ptr;
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s.mem_space = CVMX_IO_SEG;
>> +	ptr.s.did = CVMX_OCT_DID_PKT_SEND;
>> +	ptr.s.is_io = 1;
>> +	ptr.s.port = pko_port;
>> +	ptr.s.queue = queue;
>> +	/* Need to make sure output queue data is in DRAM before doorbell write */
>> +	CVMX_SYNCWS;
>> +	cvmx_write_io(ptr.u64, len);
>> +}
>> +
>> +/**
>> + * Complete packet output. cvmx_pko_send_packet_prepare() must be called exactly once before this,
>> + * and the same parameters must be passed to both cvmx_pko_send_packet_prepare() and
>> + * cvmx_pko_send_packet_finish_pkoid().
>> + *
>> + * @param pko_port   Port to send it on
>> + * @param queue  Queue to use
>> + * @param pko_command
>> + *               PKO HW command word
>> + * @param packet Packet to send
>> + * @param use_locking
>> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or CVMX_PKO_LOCK_CMD_QUEUE
>> + *
>> + * @return returns CVMX_PKO_SUCCESS on success, or error code on failure of output
>> + */
>> +static inline cvmx_pko_return_value_t
>> +cvmx_hwpko_send_packet_finish_pkoid(int pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
>> +				    cvmx_buf_ptr_t packet, cvmx_pko_lock_t use_locking)
>> +{
>> +	cvmx_cmd_queue_result_t result;
>> +
>> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
>> +		cvmx_pow_tag_sw_wait();
>> +
>> +	result = cvmx_cmd_queue_write2(CVMX_CMD_QUEUE_PKO(queue),
>> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
>> +				       packet.u64);
>> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
>> +		cvmx_pko_doorbell_pkoid(pko_port, queue, 2);
>> +		return CVMX_PKO_SUCCESS;
>> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
>> +		return CVMX_PKO_NO_MEMORY;
>> +	} else {
>> +		return CVMX_PKO_INVALID_QUEUE;
>> +	}
>> +}
>> +
>> +/**
>> + * Complete packet output. cvmx_pko_send_packet_prepare() must be
>> called exactly once before this,
>> + * and the same parameters must be passed to both
>> cvmx_pko_send_packet_prepare() and
>> + * cvmx_pko_send_packet_finish_pkoid().
>> + *
>> + * @param pko_port   The PKO port the packet is for
>> + * @param queue  Queue to use
>> + * @param pko_command
>> + *               PKO HW command word
>> + * @param packet Packet to send
>> + * @param addr   Plysical address of a work queue entry or physical
>> address to zero on complete.
>> + * @param use_locking
>> + *               CVMX_PKO_LOCK_NONE, CVMX_PKO_LOCK_ATOMIC_TAG, or
>> CVMX_PKO_LOCK_CMD_QUEUE
>> + *
>> + * @return returns CVMX_PKO_SUCCESS on success, or error code on
>> failure of output
>> + */
>> +static inline cvmx_pko_return_value_t
>> +cvmx_hwpko_send_packet_finish3_pkoid(u64 pko_port, u64 queue, cvmx_pko_command_word0_t pko_command,
>> +				     cvmx_buf_ptr_t packet, u64 addr, cvmx_pko_lock_t use_locking)
>> +{
>> +	cvmx_cmd_queue_result_t result;
>> +
>> +	if (use_locking == CVMX_PKO_LOCK_ATOMIC_TAG)
>> +		cvmx_pow_tag_sw_wait();
>> +
>> +	result = cvmx_cmd_queue_write3(CVMX_CMD_QUEUE_PKO(queue),
>> +				       (use_locking == CVMX_PKO_LOCK_CMD_QUEUE), pko_command.u64,
>> +				       packet.u64, addr);
>> +	if (cvmx_likely(result == CVMX_CMD_QUEUE_SUCCESS)) {
>> +		cvmx_pko_doorbell_pkoid(pko_port, queue, 3);
>> +		return CVMX_PKO_SUCCESS;
>> +	} else if ((result == CVMX_CMD_QUEUE_NO_MEMORY) || (result == CVMX_CMD_QUEUE_FULL)) {
>> +		return CVMX_PKO_NO_MEMORY;
>> +	} else {
>> +		return CVMX_PKO_INVALID_QUEUE;
>> +	}
>> +}
>> +
>> +/*
>> + * Obtain the number of PKO commands pending in a queue
>> + *
>> + * @param queue is the queue identifier to be queried
>> + * @return the number of commands pending transmission or -1 on error
>> + */
>> +int cvmx_pko_queue_pend_count(cvmx_cmd_queue_id_t queue);
>> +
>> +void cvmx_pko_set_cmd_queue_pool_buffer_count(u64 buffer_count);
>> +
>> +#endif /* __CVMX_HWPKO_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ilk.h b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
>> new file mode 100644
>> index 000000000000..727298352c28
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-ilk.h
>> @@ -0,0 +1,154 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * This file contains defines for the ILK interface
>> + */
>> +
>> +#ifndef __CVMX_ILK_H__
>> +#define __CVMX_ILK_H__
>> +
>> +/* CSR typedefs have been moved to cvmx-ilk-defs.h */
>> +
>> +/*
>> + * Note: this macro must match the first ilk port in the ipd_port_map_68xx[]
>> + * and ipd_port_map_78xx[] arrays.
>> + */
>> +static inline int CVMX_ILK_GBL_BASE(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
>> +		return 5;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 6;
>> +	return -1;
>> +}
>> +
>> +static inline int CVMX_ILK_QLM_BASE(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
>> +		return 1;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 4;
>> +	return -1;
>> +}
>> +
>> +typedef struct {
>> +	int intf_en : 1;
>> +	int la_mode : 1;
>> +	int reserved : 14; /* unused */
>> +	int lane_speed : 16;
>> +	/* add more here */
>> +} cvmx_ilk_intf_t;
>> +
>> +#define CVMX_NUM_ILK_INTF 2
>> +static inline int CVMX_ILK_MAX_LANES(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN68XX))
>> +		return 8;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 16;
>> +	return -1;
>> +}
>> +
>> +extern unsigned short cvmx_ilk_lane_mask[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
>> +
>> +typedef struct {
>> +	unsigned int pipe;
>> +	unsigned int chan;
>> +} cvmx_ilk_pipe_chan_t;
>> +
>> +#define CVMX_ILK_MAX_PIPES 45
>> +/* Max number of channels allowed */
>> +#define CVMX_ILK_MAX_CHANS 256
>> +
>> +extern int cvmx_ilk_chans[CVMX_MAX_NODES][CVMX_NUM_ILK_INTF];
>> +
>> +typedef struct {
>> +	unsigned int chan;
>> +	unsigned int pknd;
>> +} cvmx_ilk_chan_pknd_t;
>> +
>> +#define CVMX_ILK_MAX_PKNDS 16 /* must be <45 */
>> +
>> +typedef struct {
>> +	int *chan_list; /* for discrete channels; otherwise must be NULL */
>> +	unsigned int num_chans;
>> +
>> +	unsigned int chan_start; /* for continuous channels */
>> +	unsigned int chan_end;
>> +	unsigned int chan_step;
>> +
>> +	unsigned int clr_on_rd;
>> +} cvmx_ilk_stats_ctrl_t;
>> +
>> +#define CVMX_ILK_MAX_CAL      288
>> +#define CVMX_ILK_MAX_CAL_IDX  (CVMX_ILK_MAX_CAL / 8)
>> +#define CVMX_ILK_TX_MIN_CAL   1
>> +#define CVMX_ILK_RX_MIN_CAL   1
>> +#define CVMX_ILK_CAL_GRP_SZ   8
>> +#define CVMX_ILK_PIPE_BPID_SZ 7
>> +#define CVMX_ILK_ENT_CTRL_SZ  2
>> +#define CVMX_ILK_RX_FIFO_WM   0x200
>> +
>> +typedef enum { PIPE_BPID = 0, LINK, XOFF, XON } cvmx_ilk_cal_ent_ctrl_t;
>> +
>> +typedef struct {
>> +	unsigned char pipe_bpid;
>> +	cvmx_ilk_cal_ent_ctrl_t ent_ctrl;
>> +} cvmx_ilk_cal_entry_t;
>> +
>> +typedef enum { CVMX_ILK_LPBK_DISA = 0, CVMX_ILK_LPBK_ENA } cvmx_ilk_lpbk_ena_t;
>> +
>> +typedef enum { CVMX_ILK_LPBK_INT = 0, CVMX_ILK_LPBK_EXT } cvmx_ilk_lpbk_mode_t;
>> +
>> +/**
>> + * This header is placed in front of all received ILK look-aside mode packets
>> + */
>> +typedef union {
>> +	u64 u64;
>> +
>> +	struct {
>> +		u32 reserved_63_57 : 7;	  /* bits 63...57 */
>> +		u32 nsp_cmd : 5;	  /* bits 56...52 */
>> +		u32 nsp_flags : 4;	  /* bits 51...48 */
>> +		u32 nsp_grp_id_upper : 6; /* bits 47...42 */
>> +		u32 reserved_41_40 : 2;	  /* bits 41...40 */
>> +		/* Protocol type, 1 for LA mode packet */
>> +		u32 la_mode : 1;	  /* bit  39      */
>> +		u32 nsp_grp_id_lower : 2; /* bits 38...37 */
>> +		u32 nsp_xid_upper : 4;	  /* bits 36...33 */
>> +		/* ILK channel number, 0 or 1 */
>> +		u32 ilk_channel : 1;   /* bit  32      */
>> +		u32 nsp_xid_lower : 8; /* bits 31...24 */
>> +		/* Unpredictable, may be any value */
>> +		u32 reserved_23_0 : 24; /* bits 23...0  */
>> +	} s;
>> +} cvmx_ilk_la_nsp_compact_hdr_t;
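
Decoding the look-aside header is then just a bit-field overlay on the first
64-bit word of the received packet. A small sketch, with hdr_word standing in
for those first 8 bytes (a hypothetical caller-supplied value):

	cvmx_ilk_la_nsp_compact_hdr_t hdr;
	int xid;

	hdr.u64 = hdr_word;
	if (hdr.s.la_mode) {	/* 1 for LA mode packets */
		xid = (hdr.s.nsp_xid_upper << 8) | hdr.s.nsp_xid_lower;
		debug("LA pkt on ILK channel %d, xid %d, cmd 0x%x\n",
		      hdr.s.ilk_channel, xid, hdr.s.nsp_cmd);
	}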
>> +
>> +typedef struct cvmx_ilk_LA_mode_struct {
>> +	int ilk_LA_mode;
>> +	int ilk_LA_mode_cal_ena;
>> +} cvmx_ilk_LA_mode_t;
>> +
>> +extern cvmx_ilk_LA_mode_t cvmx_ilk_LA_mode[CVMX_NUM_ILK_INTF];
>> +
>> +int cvmx_ilk_use_la_mode(int interface, int channel);
>> +int cvmx_ilk_start_interface(int interface, unsigned short num_lanes);
>> +int cvmx_ilk_start_interface_la(int interface, unsigned char num_lanes);
>> +int cvmx_ilk_set_pipe(int interface, int pipe_base, unsigned int pipe_len);
>> +int cvmx_ilk_tx_set_channel(int interface, cvmx_ilk_pipe_chan_t *pch, unsigned int num_chs);
>> +int cvmx_ilk_rx_set_pknd(int interface, cvmx_ilk_chan_pknd_t *chpknd, unsigned int num_pknd);
>> +int cvmx_ilk_enable(int interface);
>> +int cvmx_ilk_disable(int interface);
>> +int cvmx_ilk_get_intf_ena(int interface);
>> +int cvmx_ilk_get_chan_info(int interface, unsigned char **chans, unsigned char *num_chan);
>> +cvmx_ilk_la_nsp_compact_hdr_t cvmx_ilk_enable_la_header(int ipd_port, int mode);
>> +void cvmx_ilk_show_stats(int interface, cvmx_ilk_stats_ctrl_t *pstats);
>> +int cvmx_ilk_cal_setup_rx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent, int hi_wm,
>> +			  unsigned char cal_ena);
>> +int cvmx_ilk_cal_setup_tx(int interface, int cal_depth, cvmx_ilk_cal_entry_t *pent,
>> +			  unsigned char cal_ena);
>> +int cvmx_ilk_lpbk(int interface, cvmx_ilk_lpbk_ena_t enable, cvmx_ilk_lpbk_mode_t mode);
>> +int cvmx_ilk_la_mode_enable_rx_calendar(int interface);
>> +
>> +#endif /* __CVMX_ILK_H__ */
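
The prototypes suggest the usual bring-up order: start the interface with a
lane count, map pipes/channels/pknds, then enable. A rough sketch under
assumed values (interface 0, 8 lanes, one channel on pipe 0 with pknd 0 --
none of these are requirements of the API):

	cvmx_ilk_pipe_chan_t pch = { .pipe = 0, .chan = 0 };
	cvmx_ilk_chan_pknd_t chpknd = { .chan = 0, .pknd = 0 };

	if (cvmx_ilk_start_interface(0, 8) == 0) {
		cvmx_ilk_set_pipe(0, 0, 1);
		cvmx_ilk_tx_set_channel(0, &pch, 1);
		cvmx_ilk_rx_set_pknd(0, &chpknd, 1);
		cvmx_ilk_enable(0);
	}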
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-ipd.h b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
>> new file mode 100644
>> index 000000000000..cdff36fffb56
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-ipd.h
>> @@ -0,0 +1,233 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Input Packet Data unit.
>> + */
>> +
>> +#ifndef __CVMX_IPD_H__
>> +#define __CVMX_IPD_H__
>> +
>> +#include "cvmx-pki.h"
>> +
>> +/* CSR typedefs have been moved to cvmx-ipd-defs.h */
>> +
>> +typedef cvmx_ipd_1st_mbuff_skip_t cvmx_ipd_mbuff_not_first_skip_t;
>> +typedef cvmx_ipd_1st_next_ptr_back_t cvmx_ipd_second_next_ptr_back_t;
>> +
>> +typedef struct cvmx_ipd_tag_fields {
>> +	u64 ipv6_src_ip : 1;
>> +	u64 ipv6_dst_ip : 1;
>> +	u64 ipv6_src_port : 1;
>> +	u64 ipv6_dst_port : 1;
>> +	u64 ipv6_next_header : 1;
>> +	u64 ipv4_src_ip : 1;
>> +	u64 ipv4_dst_ip : 1;
>> +	u64 ipv4_src_port : 1;
>> +	u64 ipv4_dst_port : 1;
>> +	u64 ipv4_protocol : 1;
>> +	u64 input_port : 1;
>> +} cvmx_ipd_tag_fields_t;
>> +
>> +typedef struct cvmx_pip_port_config {
>> +	u64 parse_mode;
>> +	u64 tag_type;
>> +	u64 tag_mode;
>> +	cvmx_ipd_tag_fields_t tag_fields;
>> +} cvmx_pip_port_config_t;
>> +
>> +typedef struct cvmx_ipd_config_struct {
>> +	u64 first_mbuf_skip;
>> +	u64 not_first_mbuf_skip;
>> +	u64 ipd_enable;
>> +	u64 enable_len_M8_fix;
>> +	u64 cache_mode;
>> +	cvmx_fpa_pool_config_t packet_pool;
>> +	cvmx_fpa_pool_config_t wqe_pool;
>> +	cvmx_pip_port_config_t port_config;
>> +} cvmx_ipd_config_t;
>> +
>> +extern cvmx_ipd_config_t cvmx_ipd_cfg;
>> +
>> +/**
>> + * Gets the fpa pool number of packet pool
>> + */
>> +static inline s64 cvmx_fpa_get_packet_pool(void)
>> +{
>> +	return (cvmx_ipd_cfg.packet_pool.pool_num);
>> +}
>> +
>> +/**
>> + * Gets the buffer size of packet pool buffer
>> + */
>> +static inline u64 cvmx_fpa_get_packet_pool_block_size(void)
>> +{
>> +	return (cvmx_ipd_cfg.packet_pool.buffer_size);
>> +}
>> +
>> +/**
>> + * Gets the buffer count of packet pool
>> + */
>> +static inline u64 cvmx_fpa_get_packet_pool_buffer_count(void)
>> +{
>> +	return (cvmx_ipd_cfg.packet_pool.buffer_count);
>> +}
>> +
>> +/**
>> + * Gets the fpa pool number of wqe pool
>> + */
>> +static inline s64 cvmx_fpa_get_wqe_pool(void)
>> +{
>> +	return (cvmx_ipd_cfg.wqe_pool.pool_num);
>> +}
>> +
>> +/**
>> + * Gets the buffer size of wqe pool buffer
>> + */
>> +static inline u64 cvmx_fpa_get_wqe_pool_block_size(void)
>> +{
>> +	return (cvmx_ipd_cfg.wqe_pool.buffer_size);
>> +}
>> +
>> +/**
>> + * Gets the buffer count of wqe pool
>> + */
>> +static inline u64 cvmx_fpa_get_wqe_pool_buffer_count(void)
>> +{
>> +	return (cvmx_ipd_cfg.wqe_pool.buffer_count);
>> +}
>> +
>> +/**
>> + * Sets the ipd related configuration in an internal structure which is then used
>> + * for setting up the IPD hardware block
>> + */
>> +int cvmx_ipd_set_config(cvmx_ipd_config_t ipd_config);
>> +
>> +/**
>> + * Gets the ipd related configuration from internal structure.
>> + */
>> +void cvmx_ipd_get_config(cvmx_ipd_config_t *ipd_config);
>> +
>> +/**
>> + * Sets the internal FPA pool data structure for packet buffer pool.
>> + * @param pool	fpa pool number to use
>> + * @param buffer_size	buffer size of pool
>> + * @param buffer_count	number of buffers to allocate to pool
>> + */
>> +void cvmx_ipd_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
>> +
>> +/**
>> + * Sets the internal FPA pool data structure for wqe pool.
>> + * @param pool	fpa pool number to use
>> + * @param buffer_size	buffer size of pool
>> + * @param buffer_count	number of buffers to allocate to pool
>> + */
>> +void cvmx_ipd_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count);
>> +
>> +/**
>> + * Gets the FPA packet buffer pool parameters.
>> + */
>> +static inline void cvmx_fpa_get_packet_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
>> +{
>> +	if (pool)
>> +		*pool = cvmx_ipd_cfg.packet_pool.pool_num;
>> +	if (buffer_size)
>> +		*buffer_size = cvmx_ipd_cfg.packet_pool.buffer_size;
>> +	if (buffer_count)
>> +		*buffer_count = cvmx_ipd_cfg.packet_pool.buffer_count;
>> +}
>> +
>> +/**
>> + * Sets the FPA packet buffer pool parameters.
>> + */
>> +static inline void cvmx_fpa_set_packet_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
>> +{
>> +	cvmx_ipd_set_packet_pool_config(pool, buffer_size, buffer_count);
>> +}
>> +
>> +/**
>> + * Gets the FPA WQE pool parameters.
>> + */
>> +static inline void cvmx_fpa_get_wqe_pool_config(s64 *pool, u64 *buffer_size, u64 *buffer_count)
>> +{
>> +	if (pool)
>> +		*pool = cvmx_ipd_cfg.wqe_pool.pool_num;
>> +	if (buffer_size)
>> +		*buffer_size = cvmx_ipd_cfg.wqe_pool.buffer_size;
>> +	if (buffer_count)
>> +		*buffer_count = cvmx_ipd_cfg.wqe_pool.buffer_count;
>> +}
>> +
>> +/**
>> + * Sets the FPA WQE pool parameters.
>> + */
>> +static inline void cvmx_fpa_set_wqe_pool_config(s64 pool, u64 buffer_size, u64 buffer_count)
>> +{
>> +	cvmx_ipd_set_wqe_pool_config(pool, buffer_size, buffer_count);
>> +}
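
Putting the setters and getters together: a board would typically describe
both pools before IPD is brought up. A sketch with made-up pool numbers,
sizes, and counts (all board-specific assumptions):

	/* 2048-byte packet buffers in pool 0, 128-byte WQEs in pool 1 */
	cvmx_fpa_set_packet_pool_config(0, 2048, 1024);
	cvmx_fpa_set_wqe_pool_config(1, 128, 1024);
	debug("packet pool %d, block size %d\n",
	      (int)cvmx_fpa_get_packet_pool(),
	      (int)cvmx_fpa_get_packet_pool_block_size());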
>> +
>> +/**
>> + * Configure IPD
>> + *
>> + * @param mbuff_size Packet buffer size in 8 byte words
>> + * @param first_mbuff_skip
>> + *                   Number of 8 byte words to skip in the first buffer
>> + * @param not_first_mbuff_skip
>> + *                   Number of 8 byte words to skip in each following buffer
>> + * @param first_back Must be same as first_mbuff_skip / 128
>> + * @param second_back
>> + *                   Must be same as not_first_mbuff_skip / 128
>> + * @param wqe_fpa_pool
>> + *                   FPA pool to get work entries from
>> + * @param cache_mode
>> + * @param back_pres_enable_flag
>> + *                   Enable or disable port back pressure at a global level.
>> + *                   This should always be 1 as more accurate control can be
>> + *                   found in IPD_PORTX_BP_PAGE_CNT[BP_ENB].
>> + */
>> +void cvmx_ipd_config(u64 mbuff_size, u64 first_mbuff_skip, u64 not_first_mbuff_skip, u64 first_back,
>> +		     u64 second_back, u64 wqe_fpa_pool, cvmx_ipd_mode_t cache_mode,
>> +		     u64 back_pres_enable_flag);
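
To make the first_back/second_back constraint above concrete: the skip
parameters are in 8-byte words, so a 32-byte skip is 4 words and
first_back = 4 / 128 = 0. A hedged example call (the pool number and the
CVMX_IPD_OPC_MODE_STT cache mode are assumptions, not requirements of this
API):

	cvmx_ipd_config(2048 / 8,		/* mbuff_size in 8-byte words */
			4,			/* first_mbuff_skip: 32 bytes */
			4,			/* not_first_mbuff_skip */
			4 / 128,		/* first_back */
			4 / 128,		/* second_back */
			1,			/* wqe_fpa_pool */
			CVMX_IPD_OPC_MODE_STT,	/* cache_mode */
			1);			/* back_pres_enable_flag */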
>> +/**
>> + * Enable IPD
>> + */
>> +void cvmx_ipd_enable(void);
>> +
>> +/**
>> + * Disable IPD
>> + */
>> +void cvmx_ipd_disable(void);
>> +
>> +void __cvmx_ipd_free_ptr(void);
>> +
>> +void cvmx_ipd_set_packet_pool_buffer_count(u64 buffer_count);
>> +void cvmx_ipd_set_wqe_pool_buffer_count(u64 buffer_count);
>> +
>> +/**
>> + * Setup Random Early Drop on a specific input queue
>> + *
>> + * @param queue  Input queue to setup RED on (0-7)
>> + * @param pass_thresh
>> + *               Packets will begin slowly dropping when there are less than
>> + *               this many packet buffers free in FPA 0.
>> + * @param drop_thresh
>> + *               All incoming packets will be dropped when there are less
>> + *               than this many free packet buffers in FPA 0.
>> + * @return Zero on success. Negative on failure
>> + */
>> +int cvmx_ipd_setup_red_queue(int queue, int pass_thresh, int drop_thresh);
>> +
>> +/**
>> + * Setup Random Early Drop to automatically begin dropping packets.
>> + *
>> + * @param pass_thresh
>> + *               Packets will begin slowly dropping when there are less than
>> + *               this many packet buffers free in FPA 0.
>> + * @param drop_thresh
>> + *               All incoming packets will be dropped when there are less
>> + *               than this many free packet buffers in FPA 0.
>> + * @return Zero on success. Negative on failure
>> + */
>> +int cvmx_ipd_setup_red(int pass_thresh, int drop_thresh);
>> +
>> +#endif /*  __CVMX_IPD_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-packet.h b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
>> new file mode 100644
>> index 000000000000..f3cfe9c64f43
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-packet.h
>> @@ -0,0 +1,40 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Packet buffer defines.
>> + */
>> +
>> +#ifndef __CVMX_PACKET_H__
>> +#define __CVMX_PACKET_H__
>> +
>> +union cvmx_buf_ptr_pki {
>> +	u64 u64;
>> +	struct {
>> +		u64 size : 16;
>> +		u64 packet_outside_wqe : 1;
>> +		u64 rsvd0 : 5;
>> +		u64 addr : 42;
>> +	};
>> +};
>> +
>> +typedef union cvmx_buf_ptr_pki cvmx_buf_ptr_pki_t;
>> +
>> +/**
>> + * This structure defines a buffer pointer on Octeon
>> + */
>> +union cvmx_buf_ptr {
>> +	void *ptr;
>> +	u64 u64;
>> +	struct {
>> +		u64 i : 1;
>> +		u64 back : 4;
>> +		u64 pool : 3;
>> +		u64 size : 16;
>> +		u64 addr : 40;
>> +	} s;
>> +};
>> +
>> +typedef union cvmx_buf_ptr cvmx_buf_ptr_t;
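
One subtlety worth noting here: addr points at the packet data, while back
gives the distance (in 128-byte cache lines) from the data back to the start
of the underlying FPA buffer, which is what eventually gets freed. The usual
decode looks roughly like this (cvmx_phys_to_ptr() assumed from the address
helpers):

	cvmx_buf_ptr_t buf;	/* filled in from a work queue entry */
	void *data = cvmx_phys_to_ptr(buf.s.addr);
	/* start of buffer: round down to a cache line, then step back */
	u64 buf_start = ((buf.s.addr >> 7) - buf.s.back) << 7;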
>> +
>> +#endif /*  __CVMX_PACKET_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pcie.h b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
>> new file mode 100644
>> index 000000000000..a819196c021c
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pcie.h
>> @@ -0,0 +1,279 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __CVMX_PCIE_H__
>> +#define __CVMX_PCIE_H__
>> +
>> +#define CVMX_PCIE_MAX_PORTS 4
>> +#define CVMX_PCIE_PORTS                                                                            \
>> +	((OCTEON_IS_MODEL(OCTEON_CN78XX) || OCTEON_IS_MODEL(OCTEON_CN73XX)) ?                      \
>> +		       CVMX_PCIE_MAX_PORTS :                                                       \
>> +		       (OCTEON_IS_MODEL(OCTEON_CN70XX) ? 3 : 2))
>> +
>> +/*
>> + * The physical memory base mapped by BAR1.  256MB at the end of the
>> + * first 4GB.
>> + */
>> +#define CVMX_PCIE_BAR1_PHYS_BASE ((1ull << 32) - (1ull << 28))
>> +#define CVMX_PCIE_BAR1_PHYS_SIZE BIT_ULL(28)
>> +
>> +/*
>> + * The RC base of BAR1.  gen1 has a 39-bit BAR2, gen2 has 41-bit BAR2,
>> + * place BAR1 so it is the same for both.
>> + */
>> +#define CVMX_PCIE_BAR1_RC_BASE BIT_ULL(41)
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
>> +		u64 reserved_49_61 : 13; /* Must be zero */
>> +		u64 io : 1;		 /* 1 for IO space access */
>> +		u64 did : 5;		 /* PCIe DID = 3 */
>> +		u64 subdid : 3;		 /* PCIe SubDID = 1 */
>> +		u64 reserved_38_39 : 2;	 /* Must be zero */
>> +		u64 node : 2;		 /* Numa node number */
>> +		u64 es : 2;		 /* Endian swap = 1 */
>> +		u64 port : 2;		 /* PCIe port 0,1 */
>> +		u64 reserved_29_31 : 3;	 /* Must be zero */
>> +		u64 ty : 1;
>> +		u64 bus : 8;
>> +		u64 dev : 5;
>> +		u64 func : 3;
>> +		u64 reg : 12;
>> +	} config;
>> +	struct {
>> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
>> +		u64 reserved_49_61 : 13; /* Must be zero */
>> +		u64 io : 1;		 /* 1 for IO space access */
>> +		u64 did : 5;		 /* PCIe DID = 3 */
>> +		u64 subdid : 3;		 /* PCIe SubDID = 2 */
>> +		u64 reserved_38_39 : 2;	 /* Must be zero */
>> +		u64 node : 2;		 /* Numa node number */
>> +		u64 es : 2;		 /* Endian swap = 1 */
>> +		u64 port : 2;		 /* PCIe port 0,1 */
>> +		u64 address : 32;	 /* PCIe IO address */
>> +	} io;
>> +	struct {
>> +		u64 upper : 2;		 /* Normally 2 for XKPHYS */
>> +		u64 reserved_49_61 : 13; /* Must be zero */
>> +		u64 io : 1;		 /* 1 for IO space access */
>> +		u64 did : 5;		 /* PCIe DID = 3 */
>> +		u64 subdid : 3;		 /* PCIe SubDID = 3-6 */
>> +		u64 reserved_38_39 : 2;	 /* Must be zero */
>> +		u64 node : 2;		 /* Numa node number */
>> +		u64 address : 36;	 /* PCIe Mem address */
>> +	} mem;
>> +} cvmx_pcie_address_t;
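
The union packs a complete config/IO/mem request into a single XKPHYS
address. A sketch of how a config-space access is presumably composed
(constant values taken from the inline comments above; pcie_port, bus, dev,
fn and reg are hypothetical caller-supplied values):

	cvmx_pcie_address_t a;

	a.u64 = 0;
	a.config.upper = 2;		/* XKPHYS */
	a.config.io = 1;
	a.config.did = 3;		/* PCIe DID */
	a.config.subdid = 1;		/* config space */
	a.config.es = 1;		/* endian swap */
	a.config.port = pcie_port;
	a.config.bus = bus;
	a.config.dev = dev;
	a.config.func = fn;
	a.config.reg = reg;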
>> +
>> +/**
>> + * Return the Core virtual base address for PCIe IO access. IOs are
>> + * read/written as an offset from this address.
>> + *
>> + * @param pcie_port PCIe port the IO is for
>> + *
>> + * @return 64bit Octeon IO base address for read/write
>> + */
>> +u64 cvmx_pcie_get_io_base_address(int pcie_port);
>> +
>> +/**
>> + * Size of the IO address region returned at address
>> + * cvmx_pcie_get_io_base_address()
>> + *
>> + * @param pcie_port PCIe port the IO is for
>> + *
>> + * @return Size of the IO window
>> + */
>> +u64 cvmx_pcie_get_io_size(int pcie_port);
>> +
>> +/**
>> + * Return the Core virtual base address for PCIe MEM access. Memory is
>> + * read/written as an offset from this address.
>> + *
>> + * @param pcie_port PCIe port the IO is for
>> + *
>> + * @return 64bit Octeon IO base address for read/write
>> + */
>> +u64 cvmx_pcie_get_mem_base_address(int pcie_port);
>> +
>> +/**
>> + * Size of the Mem address region returned at address
>> + * cvmx_pcie_get_mem_base_address()
>> + *
>> + * @param pcie_port PCIe port the IO is for
>> + *
>> + * @return Size of the Mem window
>> + */
>> +u64 cvmx_pcie_get_mem_size(int pcie_port);
>> +
>> +/**
>> + * Initialize a PCIe port for use in host (RC) mode. It doesn't enumerate the bus.
>> + *
>> + * @param pcie_port PCIe port to initialize
>> + *
>> + * @return Zero on success
>> + */
>> +int cvmx_pcie_rc_initialize(int pcie_port);
>> +
>> +/**
>> + * Shutdown a PCIe port and put it in reset
>> + *
>> + * @param pcie_port PCIe port to shutdown
>> + *
>> + * @return Zero on success
>> + */
>> +int cvmx_pcie_rc_shutdown(int pcie_port);
>> +
>> +/**
>> + * Read 8bits from a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + *
>> + * @return Result of the read
>> + */
>> +u8 cvmx_pcie_config_read8(int pcie_port, int bus, int dev, int fn, int reg);
>> +
>> +/**
>> + * Read 16bits from a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + *
>> + * @return Result of the read
>> + */
>> +u16 cvmx_pcie_config_read16(int pcie_port, int bus, int dev, int fn, int reg);
>> +
>> +/**
>> + * Read 32bits from a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + *
>> + * @return Result of the read
>> + */
>> +u32 cvmx_pcie_config_read32(int pcie_port, int bus, int dev, int fn, int reg);
>> +
>> +/**
>> + * Write 8bits to a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + * @param val       Value to write
>> + */
>> +void cvmx_pcie_config_write8(int pcie_port, int bus, int dev, int fn, int reg, u8 val);
>> +
>> +/**
>> + * Write 16bits to a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + * @param val       Value to write
>> + */
>> +void cvmx_pcie_config_write16(int pcie_port, int bus, int dev, int fn, int reg, u16 val);
>> +
>> +/**
>> + * Write 32bits to a Device's config space
>> + *
>> + * @param pcie_port PCIe port the device is on
>> + * @param bus       Sub bus
>> + * @param dev       Device ID
>> + * @param fn        Device sub function
>> + * @param reg       Register to access
>> + * @param val       Value to write
>> + */
>> +void cvmx_pcie_config_write32(int pcie_port, int bus, int dev, int fn, int reg, u32 val);
>> +
>> +/**
>> + * Read a PCIe config space register indirectly. This is used for
>> + * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
>> + *
>> + * @param pcie_port  PCIe port to read from
>> + * @param cfg_offset Address to read
>> + *
>> + * @return Value read
>> + */
>> +u32 cvmx_pcie_cfgx_read(int pcie_port, u32 cfg_offset);
>> +u32 cvmx_pcie_cfgx_read_node(int node, int pcie_port, u32 cfg_offset);
>> +
>> +/**
>> + * Write a PCIe config space register indirectly. This is used for
>> + * registers of the form PCIEEP_CFG??? and PCIERC?_CFG???.
>> + *
>> + * @param pcie_port  PCIe port to write to
>> + * @param cfg_offset Address to write
>> + * @param val        Value to write
>> + */
>> +void cvmx_pcie_cfgx_write(int pcie_port, u32 cfg_offset, u32 val);
>> +void cvmx_pcie_cfgx_write_node(int node, int pcie_port, u32 cfg_offset, u32 val);
>> +
>> +/**
>> + * Write a 32bit value to the Octeon NPEI register space
>> + *
>> + * @param address Address to write to
>> + * @param val     Value to write
>> + */
>> +static inline void cvmx_pcie_npei_write32(u64 address, u32 val)
>> +{
>> +	cvmx_write64_uint32(address ^ 4, val);
>> +	cvmx_read64_uint32(address ^ 4);
>> +}
>> +
>> +/**
>> + * Read a 32bit value from the Octeon NPEI register space
>> + *
>> + * @param address Address to read
>> + * @return The result
>> + */
>> +static inline u32 cvmx_pcie_npei_read32(u64 address)
>> +{
>> +	return cvmx_read64_uint32(address ^ 4);
>> +}
>> +
>> +/**
>> + * Initialize a PCIe port for use in target (EP) mode.
>> + *
>> + * @param pcie_port PCIe port to initialize
>> + *
>> + * @return Zero on success
>> + */
>> +int cvmx_pcie_ep_initialize(int pcie_port);
>> +
>> +/**
>> + * Wait for posted PCIe read/writes to reach the other side of
>> + * the internal PCIe switch. This will ensure that core
>> + * read/writes are posted before anything after this function
>> + * is called. This may be necessary when writing to memory that
>> + * will later be read using the DMA/PKT engines.
>> + *
>> + * @param pcie_port PCIe port to wait for
>> + */
>> +void cvmx_pcie_wait_for_pending(int pcie_port);
>> +
>> +/**
>> + * Returns if a PCIe port is in host or target mode.
>> + *
>> + * @param pcie_port PCIe port number (PEM number)
>> + *
>> + * @return 0 if PCIe port is in target mode, !0 if in host mode.
>> + */
>> +int cvmx_pcie_is_host_mode(int pcie_port);
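
Typical RC-mode usage, roughly what the DM PCIe driver in this series does
underneath: initialize the port, then probe config space. A minimal sketch:

	if (cvmx_pcie_rc_initialize(0) == 0 && cvmx_pcie_is_host_mode(0)) {
		/* vendor/device ID of the device at bus 0, dev 0, func 0 */
		u32 id = cvmx_pcie_config_read32(0, 0, 0, 0, 0);

		debug("PCIe0: %04x:%04x\n", id & 0xffff, id >> 16);
	}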
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pip.h b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
>> new file mode 100644
>> index 000000000000..013f533fb7bb
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pip.h
>> @@ -0,0 +1,1080 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Packet Input Processing unit.
>> + */
>> +
>> +#ifndef __CVMX_PIP_H__
>> +#define __CVMX_PIP_H__
>> +
>> +#include "cvmx-wqe.h"
>> +#include "cvmx-pki.h"
>> +#include "cvmx-helper-pki.h"
>> +
>> +#include "cvmx-helper.h"
>> +#include "cvmx-helper-util.h"
>> +#include "cvmx-pki-resources.h"
>> +
>> +#define CVMX_PIP_NUM_INPUT_PORTS 46
>> +#define CVMX_PIP_NUM_WATCHERS	 8
>> +
>> +/*
>> + * Encodes the different error and exception codes
>> + */
>> +typedef enum {
>> +	CVMX_PIP_L4_NO_ERR = 0ull,
>> +	/*        1  = TCP (UDP) packet not long enough to cover TCP (UDP) header */
>> +	CVMX_PIP_L4_MAL_ERR = 1ull,
>> +	/*        2  = TCP/UDP checksum failure */
>> +	CVMX_PIP_CHK_ERR = 2ull,
>> +	/*        3  = TCP/UDP length check (TCP/UDP length does not match IP length) */
>> +	CVMX_PIP_L4_LENGTH_ERR = 3ull,
>> +	/*        4  = illegal TCP/UDP port (either source or dest port is zero) */
>> +	CVMX_PIP_BAD_PRT_ERR = 4ull,
>> +	/*        8  = TCP flags = FIN only */
>> +	CVMX_PIP_TCP_FLG8_ERR = 8ull,
>> +	/*        9  = TCP flags = 0 */
>> +	CVMX_PIP_TCP_FLG9_ERR = 9ull,
>> +	/*        10 = TCP flags = FIN+RST+* */
>> +	CVMX_PIP_TCP_FLG10_ERR = 10ull,
>> +	/*        11 = TCP flags = SYN+URG+* */
>> +	CVMX_PIP_TCP_FLG11_ERR = 11ull,
>> +	/*        12 = TCP flags = SYN+RST+* */
>> +	CVMX_PIP_TCP_FLG12_ERR = 12ull,
>> +	/*        13 = TCP flags = SYN+FIN+* */
>> +	CVMX_PIP_TCP_FLG13_ERR = 13ull
>> +} cvmx_pip_l4_err_t;
>> +
>> +typedef enum {
>> +	CVMX_PIP_IP_NO_ERR = 0ull,
>> +	/*        1 = not IPv4 or IPv6 */
>> +	CVMX_PIP_NOT_IP = 1ull,
>> +	/*        2 = IPv4 header checksum violation */
>> +	CVMX_PIP_IPV4_HDR_CHK = 2ull,
>> +	/*        3 = malformed (packet not long enough to cover IP hdr) */
>> +	CVMX_PIP_IP_MAL_HDR = 3ull,
>> +	/*        4 = malformed (packet not long enough to cover len in IP hdr) */
>> +	CVMX_PIP_IP_MAL_PKT = 4ull,
>> +	/*        5 = TTL / hop count equal zero */
>> +	CVMX_PIP_TTL_HOP = 5ull,
>> +	/*        6 = IPv4 options / IPv6 early extension headers */
>> +	CVMX_PIP_OPTS = 6ull
>> +} cvmx_pip_ip_exc_t;
>> +
>> +/**
>> + * NOTES
>> + *       late collision (data received before collision)
>> + *            late collisions cannot be detected by the receiver
>> + *            they would appear as JAM bits which would appear as bad FCS
>> + *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
>> + */
>> +typedef enum {
>> +	/**
>> +	 * No error
>> +	 */
>> +	CVMX_PIP_RX_NO_ERR = 0ull,
>> +
>> +	CVMX_PIP_PARTIAL_ERR =
>> +		1ull, /* RGM+SPI            1 = partially received packet (buffering/bandwidth not adequate) */
>> +	CVMX_PIP_JABBER_ERR =
>> +		2ull, /* RGM+SPI            2 = receive packet too large and truncated */
>> +	CVMX_PIP_OVER_FCS_ERR =
>> +		3ull, /* RGM                3 = max frame error (pkt len > max frame len) (with FCS error) */
>> +	CVMX_PIP_OVER_ERR =
>> +		4ull, /* RGM+SPI            4 = max frame error (pkt len > max frame len) */
>> +	CVMX_PIP_ALIGN_ERR =
>> +		5ull, /* RGM                5 = nibble error (data not byte multiple - 100M and 10M only) */
>> +	CVMX_PIP_UNDER_FCS_ERR =
>> +		6ull, /* RGM                6 = min frame error (pkt len < min frame len) (with FCS error) */
>> +	CVMX_PIP_GMX_FCS_ERR = 7ull, /* RGM                7 = FCS error */
>> +	CVMX_PIP_UNDER_ERR =
>> +		8ull, /* RGM+SPI            8 = min frame error (pkt len < min frame len) */
>> +	CVMX_PIP_EXTEND_ERR = 9ull, /* RGM                9 = Frame carrier extend error */
>> +	CVMX_PIP_TERMINATE_ERR =
>> +		9ull, /* XAUI               9 = Packet was terminated with an idle cycle */
>> +	CVMX_PIP_LENGTH_ERR =
>> +		10ull, /* RGM               10 = length mismatch (len did not match len in L2 length/type) */
>> +	CVMX_PIP_DAT_ERR =
>> +		11ull, /* RGM               11 = Frame error (some or all data bits marked err) */
>> +	CVMX_PIP_DIP_ERR = 11ull, /*     SPI           11 = DIP4 error */
>> +	CVMX_PIP_SKIP_ERR =
>> +		12ull, /* RGM               12 = packet was not large enough to pass the skipper - no inspection could occur */
>> +	CVMX_PIP_NIBBLE_ERR =
>> +		13ull, /* RGM               13 = studder error (data not repeated - 100M and 10M only) */
>> +	CVMX_PIP_PIP_FCS = 16L, /* RGM+SPI           16 = FCS error */
>> +	CVMX_PIP_PIP_SKIP_ERR =
>> +		17L, /* RGM+SPI+PCI       17 = packet was not large enough to pass the skipper - no inspection could occur */
>> +	CVMX_PIP_PIP_L2_MAL_HDR =
>> +		18L, /* RGM+SPI+PCI       18 = malformed l2 (packet not long enough to cover L2 hdr) */
>> +	CVMX_PIP_PUNY_ERR =
>> +		47L /* SGMII             47 = PUNY error (packet was 4B or less when FCS stripping is enabled) */
>> +	/* NOTES
>> +	 *       xx = late collision (data received before collision)
>> +	 *            late collisions cannot be detected by the receiver
>> +	 *            they would appear as JAM bits which would appear as bad FCS
>> +	 *            or carrier extend error which is CVMX_PIP_EXTEND_ERR
>> +	 */
>> +} cvmx_pip_rcv_err_t;
>> +
>> +/**
>> + * This defines the err_code field errors in the work Q entry
>> + */
>> +typedef union {
>> +	cvmx_pip_l4_err_t l4_err;
>> +	cvmx_pip_ip_exc_t ip_exc;
>> +	cvmx_pip_rcv_err_t rcv_err;
>> +} cvmx_pip_err_t;
>> +
>> +/**
>> + * Status statistics for a port
>> + */
>> +typedef struct {
>> +	u64 dropped_octets;
>> +	u64 dropped_packets;
>> +	u64 pci_raw_packets;
>> +	u64 octets;
>> +	u64 packets;
>> +	u64 multicast_packets;
>> +	u64 broadcast_packets;
>> +	u64 len_64_packets;
>> +	u64 len_65_127_packets;
>> +	u64 len_128_255_packets;
>> +	u64 len_256_511_packets;
>> +	u64 len_512_1023_packets;
>> +	u64 len_1024_1518_packets;
>> +	u64 len_1519_max_packets;
>> +	u64 fcs_align_err_packets;
>> +	u64 runt_packets;
>> +	u64 runt_crc_packets;
>> +	u64 oversize_packets;
>> +	u64 oversize_crc_packets;
>> +	u64 inb_packets;
>> +	u64 inb_octets;
>> +	u64 inb_errors;
>> +	u64 mcast_l2_red_packets;
>> +	u64 bcast_l2_red_packets;
>> +	u64 mcast_l3_red_packets;
>> +	u64 bcast_l3_red_packets;
>> +} cvmx_pip_port_status_t;
>> +
>> +/**
>> + * Definition of the PIP custom header that can be prepended
>> + * to a packet by external hardware.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 rawfull : 1;
>> +		u64 reserved0 : 5;
>> +		cvmx_pip_port_parse_mode_t parse_mode : 2;
>> +		u64 reserved1 : 1;
>> +		u64 skip_len : 7;
>> +		u64 grpext : 2;
>> +		u64 nqos : 1;
>> +		u64 ngrp : 1;
>> +		u64 ntt : 1;
>> +		u64 ntag : 1;
>> +		u64 qos : 3;
>> +		u64 grp : 4;
>> +		u64 rs : 1;
>> +		cvmx_pow_tag_type_t tag_type : 2;
>> +		u64 tag : 32;
>> +	} s;
>> +} cvmx_pip_pkt_inst_hdr_t;
>> +
>> +enum cvmx_pki_pcam_match {
>> +	CVMX_PKI_PCAM_MATCH_IP,
>> +	CVMX_PKI_PCAM_MATCH_IPV4,
>> +	CVMX_PKI_PCAM_MATCH_IPV6,
>> +	CVMX_PKI_PCAM_MATCH_TCP
>> +};
>> +
>> +/* CSR typedefs have been moved to cvmx-pip-defs.h */
>> +static inline int cvmx_pip_config_watcher(int index, int type, u16 match, u16 mask, int grp,
>> +					  int qos)
>> +{
>> +	if (index >= CVMX_PIP_NUM_WATCHERS) {
>> +		debug("ERROR: pip watcher %d is > than supported\n", index);
>> +		return -1;
>> +	}
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
>> +		/* store in software for now, only when the watcher is enabled program the entry*/
>> +		if (type == CVMX_PIP_QOS_WATCH_PROTNH) {
>> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L3_FLAGS;
>> +			qos_watcher[index].data = (u32)(match << 16);
>> +			qos_watcher[index].data_mask = (u32)(mask << 16);
>> +			qos_watcher[index].advance = 0;
>> +		} else if (type == CVMX_PIP_QOS_WATCH_TCP) {
>> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
>> +			qos_watcher[index].data = 0x060000;
>> +			qos_watcher[index].data |= (u32)match;
>> +			qos_watcher[index].data_mask = (u32)(mask);
>> +			qos_watcher[index].advance = 0;
>> +		} else if (type == CVMX_PIP_QOS_WATCH_UDP) {
>> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_L4_PORT;
>> +			qos_watcher[index].data = 0x110000;
>> +			qos_watcher[index].data |= (u32)match;
>> +			qos_watcher[index].data_mask = (u32)(mask);
>> +			qos_watcher[index].advance = 0;
>> +		} else if (type == 0x4 /*CVMX_PIP_QOS_WATCH_ETHERTYPE*/) {
>> +			qos_watcher[index].field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
>> +			if (match == 0x8100) {
>> +				debug("ERROR: default vlan entry already exists, can't set watcher\n");
>> +				return -1;
>> +			}
>> +			qos_watcher[index].data = (u32)(match << 16);
>> +			qos_watcher[index].data_mask = (u32)(mask << 16);
>> +			qos_watcher[index].advance = 4;
>> +		} else {
>> +			debug("ERROR: Unsupported watcher type %d\n", type);
>> +			return -1;
>> +		}
>> +		if (grp >= 32) {
>> +			debug("ERROR: grp %d out of range for backward compat 78xx\n", grp);
>> +			return -1;
>> +		}
>> +		qos_watcher[index].sso_grp = (u8)(grp << 3 | qos);
>> +		qos_watcher[index].configured = 1;
>> +	} else {
>> +		/* Implement it later */
>> +	}
>> +	return 0;
>> +}
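
Note the configure-then-enable split on PKI chips: this function only records
the match in qos_watcher[]; the PCAM entry is written later by
__cvmx_pip_enable_watcher_78xx() when a port is configured with the
corresponding grp_wat/qos_wat bits. A one-line usage sketch, steering TCP
port 80 to SSO group 2 (watcher 0, qos 0 are arbitrary example values):

	cvmx_pip_config_watcher(0, CVMX_PIP_QOS_WATCH_TCP, 80, 0xffff, 2, 0);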
>> +
>> +static inline int __cvmx_pip_set_tag_type(int node, int style, int tag_type, int field)
>> +{
>> +	struct cvmx_pki_style_config style_cfg;
>> +	int style_num;
>> +	int pcam_offset;
>> +	int bank;
>> +	struct cvmx_pki_pcam_input pcam_input;
>> +	struct cvmx_pki_pcam_action pcam_action;
>> +
>> +	/* All other style parameters remain same except tag type */
>> +	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
>> +	style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tag_type;
>> +	style_num = cvmx_pki_style_alloc(node, -1);
>> +	if (style_num < 0) {
>> +		debug("ERROR: style not available to set tag type\n");
>> +		return -1;
>> +	}
>> +	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
>> +	memset(&pcam_input, 0, sizeof(pcam_input));
>> +	memset(&pcam_action, 0, sizeof(pcam_action));
>> +	pcam_input.style = style;
>> +	pcam_input.style_mask = 0xff;
>> +	if (field == CVMX_PKI_PCAM_MATCH_IP) {
>> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
>> +		pcam_input.field_mask = 0xff;
>> +		pcam_input.data = 0x08000000;
>> +		pcam_input.data_mask = 0xffff0000;
>> +		pcam_action.pointer_advance = 4;
>> +		/* legacy will write to all clusters*/
>> +		bank = 0;
>> +		pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
>> +							CVMX_PKI_CLUSTER_ALL);
>> +		if (pcam_offset < 0) {
>> +			debug("ERROR: pcam entry not available to enable qos watcher\n");
>> +			cvmx_pki_style_free(node, style_num);
>> +			return -1;
>> +		}
>> +		pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
>> +		pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
>> +		pcam_action.style_add = (u8)(style_num - style);
>> +		cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input,
>> +					  pcam_action);
>> +		field = CVMX_PKI_PCAM_MATCH_IPV6;
>> +	}
>> +	if (field == CVMX_PKI_PCAM_MATCH_IPV4) {
>> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
>> +		pcam_input.field_mask = 0xff;
>> +		pcam_input.data = 0x08000000;
>> +		pcam_input.data_mask = 0xffff0000;
>> +		pcam_action.pointer_advance = 4;
>> +	} else if (field == CVMX_PKI_PCAM_MATCH_IPV6) {
>> +		pcam_input.field = CVMX_PKI_PCAM_TERM_ETHTYPE0;
>> +		pcam_input.field_mask = 0xff;
>> +		pcam_input.data = 0x86dd0000;
>> +		pcam_input.data_mask = 0xffff0000;
>> +		pcam_action.pointer_advance = 4;
>> +	} else if (field == CVMX_PKI_PCAM_MATCH_TCP) {
>> +		pcam_input.field = CVMX_PKI_PCAM_TERM_L4_PORT;
>> +		pcam_input.field_mask = 0xff;
>> +		pcam_input.data = 0x60000;
>> +		pcam_input.data_mask = 0xff0000;
>> +		pcam_action.pointer_advance = 0;
>> +	}
>> +	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
>> +	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
>> +	pcam_action.style_add = (u8)(style_num - style);
>> +	bank = pcam_input.field & 0x01;
>> +	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
>> +						CVMX_PKI_CLUSTER_ALL);
>> +	if (pcam_offset < 0) {
>> +		debug("ERROR: pcam entry not available to enable qos watcher\n");
>> +		cvmx_pki_style_free(node, style_num);
>> +		return -1;
>> +	}
>> +	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
>> +	return style_num;
>> +}
>> +
>> +/* Only for legacy internal use */
>> +static inline int __cvmx_pip_enable_watcher_78xx(int node, int index, int style)
>> +{
>> +	struct cvmx_pki_style_config style_cfg;
>> +	struct cvmx_pki_qpg_config qpg_cfg;
>> +	struct cvmx_pki_pcam_input pcam_input;
>> +	struct cvmx_pki_pcam_action pcam_action;
>> +	int style_num;
>> +	int qpg_offset;
>> +	int pcam_offset;
>> +	int bank;
>> +
>> +	if (!qos_watcher[index].configured) {
>> +		debug("ERROR: qos watcher %d should be configured before enable\n", index);
>> +		return -1;
>> +	}
>> +	/* All other style parameters remain same except grp and qos and qpg base */
>> +	cvmx_pki_read_style_config(node, style, CVMX_PKI_CLUSTER_ALL, &style_cfg);
>> +	cvmx_pki_read_qpg_entry(node, style_cfg.parm_cfg.qpg_base, &qpg_cfg);
>> +	qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
>> +	qpg_cfg.grp_ok = qos_watcher[index].sso_grp;
>> +	qpg_cfg.grp_bad = qos_watcher[index].sso_grp;
>> +	qpg_offset = cvmx_helper_pki_set_qpg_entry(node, &qpg_cfg);
>> +	if (qpg_offset == -1) {
>> +		debug("Warning: no new qpg entry available to enable watcher\n");
>> +		return -1;
>> +	}
>> +	/* try to reserve the style, if it is not configured already, reserve
>> +	   and configure it */
>> +	style_cfg.parm_cfg.qpg_base = qpg_offset;
>> +	style_num = cvmx_pki_style_alloc(node, -1);
>> +	if (style_num < 0) {
>> +		debug("ERROR: style not available to enable qos watcher\n");
>> +		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
>> +		return -1;
>> +	}
>> +	cvmx_pki_write_style_config(node, style_num, CVMX_PKI_CLUSTER_ALL, &style_cfg);
>> +	/* legacy will write to all clusters*/
>> +	bank = qos_watcher[index].field & 0x01;
>> +	pcam_offset = cvmx_pki_pcam_entry_alloc(node, CVMX_PKI_FIND_AVAL_ENTRY, bank,
>> +						CVMX_PKI_CLUSTER_ALL);
>> +	if (pcam_offset < 0) {
>> +		debug("ERROR: pcam entry not available to enable qos watcher\n");
>> +		cvmx_pki_style_free(node, style_num);
>> +		cvmx_pki_qpg_entry_free(node, qpg_offset, 1);
>> +		return -1;
>> +	}
>> +	memset(&pcam_input, 0, sizeof(pcam_input));
>> +	memset(&pcam_action, 0, sizeof(pcam_action));
>> +	pcam_input.style = style;
>> +	pcam_input.style_mask = 0xff;
>> +	pcam_input.field = qos_watcher[index].field;
>> +	pcam_input.field_mask = 0xff;
>> +	pcam_input.data = qos_watcher[index].data;
>> +	pcam_input.data_mask = qos_watcher[index].data_mask;
>> +	pcam_action.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG;
>> +	pcam_action.layer_type_set = CVMX_PKI_LTYPE_E_NONE;
>> +	pcam_action.style_add = (u8)(style_num - style);
>> +	pcam_action.pointer_advance = qos_watcher[index].advance;
>> +	cvmx_pki_pcam_write_entry(node, pcam_offset, CVMX_PKI_CLUSTER_ALL, pcam_input, pcam_action);
>> +	return 0;
>> +}
>> +
>> +/**
>> + * Configure an ethernet input port
>> + *
>> + * @param ipd_port Port number to configure
>> + * @param port_cfg Port hardware configuration
>> + * @param port_tag_cfg Port POW tagging configuration
>> + */
>> +static inline void cvmx_pip_config_port(u64 ipd_port, cvmx_pip_prt_cfgx_t port_cfg,
>> +					cvmx_pip_prt_tagx_t port_tag_cfg)
>> +{
>> +	struct cvmx_pki_qpg_config qpg_cfg;
>> +	int qpg_offset;
>> +	u8 tcp_tag = 0xff;
>> +	u8 ip_tag = 0xaa;
>> +	int style, nstyle, n4style, n6style;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
>> +		struct cvmx_pki_port_config pki_prt_cfg;
>> +		struct cvmx_xport xp = cvmx_helper_ipd_port_to_xport(ipd_port);
>> +
>> +		cvmx_pki_get_port_config(ipd_port, &pki_prt_cfg);
>> +		style = pki_prt_cfg.pkind_cfg.initial_style;
>> +		if (port_cfg.s.ih_pri || port_cfg.s.vlan_len || port_cfg.s.pad_len)
>> +			debug("Warning: 78xx: use different config for this option\n");
>> +		pki_prt_cfg.style_cfg.parm_cfg.minmax_sel = port_cfg.s.len_chk_sel;
>> +		pki_prt_cfg.style_cfg.parm_cfg.lenerr_en = port_cfg.s.lenerr_en;
>> +		pki_prt_cfg.style_cfg.parm_cfg.maxerr_en = port_cfg.s.maxerr_en;
>> +		pki_prt_cfg.style_cfg.parm_cfg.minerr_en = port_cfg.s.minerr_en;
>> +		pki_prt_cfg.style_cfg.parm_cfg.fcs_chk = port_cfg.s.crc_en;
>> +		if (port_cfg.s.grp_wat || port_cfg.s.qos_wat || port_cfg.s.grp_wat_47 ||
>> +		    port_cfg.s.qos_wat_47) {
>> +			u8 group_mask = (u8)(port_cfg.s.grp_wat | (u8)(port_cfg.s.grp_wat_47 << 4));
>> +			u8 qos_mask = (u8)(port_cfg.s.qos_wat | (u8)(port_cfg.s.qos_wat_47 << 4));
>> +			int i;
>> +
>> +			for (i = 0; i < CVMX_PIP_NUM_WATCHERS; i++) {
>> +				if ((group_mask & (1 << i)) || (qos_mask & (1 << i)))
>> +					__cvmx_pip_enable_watcher_78xx(xp.node, i, style);
>> +			}
>> +		}
>> +		if (port_tag_cfg.s.tag_mode) {
>> +			if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X))
>> +				cvmx_printf("Warning: mask tag is not supported in 78xx pass1\n");
>> +			else {
>> +			}
>> +			/* need to implement for 78xx*/
>> +		}
>> +		if (port_cfg.s.tag_inc)
>> +			debug("Warning: 78xx uses different method for tag generation\n");
>> +		pki_prt_cfg.style_cfg.parm_cfg.rawdrp = port_cfg.s.rawdrp;
>> +		pki_prt_cfg.pkind_cfg.parse_en.inst_hdr = port_cfg.s.inst_hdr;
>> +		if (port_cfg.s.hg_qos)
>> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_HIGIG;
>> +		else if (port_cfg.s.qos_vlan)
>> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_VLAN;
>> +		else if (port_cfg.s.qos_diff)
>> +			pki_prt_cfg.style_cfg.parm_cfg.qpg_qos = CVMX_PKI_QPG_QOS_DIFFSERV;
>> +		if (port_cfg.s.qos_vod)
>> +			debug("Warning: 78xx needs pcam entries installed to achieve qos_vod\n");
>> +		if (port_cfg.s.qos) {
>> +			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
>> +						&qpg_cfg);
>> +			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
>> +			qpg_cfg.grp_ok |= port_cfg.s.qos;
>> +			qpg_cfg.grp_bad |= port_cfg.s.qos;
>> +			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
>> +			if (qpg_offset == -1)
>> +				debug("Warning: no new qpg entry available, will not modify qos\n");
>> +			else
>> +				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
>> +		}
>> +		if (port_tag_cfg.s.grp != pki_dflt_sso_grp[xp.node].group) {
>> +			cvmx_pki_read_qpg_entry(xp.node, pki_prt_cfg.style_cfg.parm_cfg.qpg_base,
>> +						&qpg_cfg);
>> +			qpg_cfg.qpg_base = CVMX_PKI_FIND_AVAL_ENTRY;
>> +			qpg_cfg.grp_ok |= (u8)(port_tag_cfg.s.grp << 3);
>> +			qpg_cfg.grp_bad |= (u8)(port_tag_cfg.s.grp << 3);
>> +			qpg_offset = cvmx_helper_pki_set_qpg_entry(xp.node, &qpg_cfg);
>> +			if (qpg_offset == -1)
>> +				debug("Warning: no new qpg entry available, will not modify group\n");
>> +			else
>> +				pki_prt_cfg.style_cfg.parm_cfg.qpg_base = qpg_offset;
>> +		}
>> +		pki_prt_cfg.pkind_cfg.parse_en.dsa_en = port_cfg.s.dsa_en;
>> +		pki_prt_cfg.pkind_cfg.parse_en.hg_en = port_cfg.s.higig_en;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_src =
>> +			port_tag_cfg.s.ip6_src_flag | port_tag_cfg.s.ip4_src_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_c_dst =
>> +			port_tag_cfg.s.ip6_dst_flag | port_tag_cfg.s.ip4_dst_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.ip_prot_nexthdr =
>> +			port_tag_cfg.s.ip6_nxth_flag | port_tag_cfg.s.ip4_pctl_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_src =
>> +			port_tag_cfg.s.ip6_sprt_flag | port_tag_cfg.s.ip4_sprt_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.layer_d_dst =
>> +			port_tag_cfg.s.ip6_dprt_flag | port_tag_cfg.s.ip4_dprt_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.input_port = port_tag_cfg.s.inc_prt_flag;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.first_vlan = port_tag_cfg.s.inc_vlan;
>> +		pki_prt_cfg.style_cfg.tag_cfg.tag_fields.second_vlan = port_tag_cfg.s.inc_vs;
>> +
>> +		if (port_tag_cfg.s.tcp6_tag_type == port_tag_cfg.s.tcp4_tag_type)
>> +			tcp_tag = port_tag_cfg.s.tcp6_tag_type;
>> +		if (port_tag_cfg.s.ip6_tag_type == port_tag_cfg.s.ip4_tag_type)
>> +			ip_tag = port_tag_cfg.s.ip6_tag_type;
>> +		pki_prt_cfg.style_cfg.parm_cfg.tag_type =
>> +			(enum cvmx_sso_tag_type)port_tag_cfg.s.non_tag_type;
>> +		if (tcp_tag == ip_tag && tcp_tag == port_tag_cfg.s.non_tag_type)
>> +			pki_prt_cfg.style_cfg.parm_cfg.tag_type = (enum cvmx_sso_tag_type)tcp_tag;
>> +		else if (tcp_tag == ip_tag) {
>> +			/* allocate and copy style */
>> +			/* modify tag type */
>> +			/*pcam entry for ip6 && ip4 match*/
>> +			/* default is non tag type */
>> +			__cvmx_pip_set_tag_type(xp.node, style, ip_tag, CVMX_PKI_PCAM_MATCH_IP);
>> +		} else if (ip_tag == port_tag_cfg.s.non_tag_type) {
>> +			/* allocate and copy style */
>> +			/* modify tag type */
>> +			/*pcam entry for tcp6 & tcp4 match*/
>> +			/* default is non tag type */
>> +			__cvmx_pip_set_tag_type(xp.node, style, tcp_tag, CVMX_PKI_PCAM_MATCH_TCP);
>> +		} else {
>> +			if (ip_tag != 0xaa) {
>> +				nstyle = __cvmx_pip_set_tag_type(xp.node, style, ip_tag,
>> +								 CVMX_PKI_PCAM_MATCH_IP);
>> +				if (tcp_tag != 0xff)
>> +					__cvmx_pip_set_tag_type(xp.node, nstyle, tcp_tag,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +				else {
>> +					n4style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
>> +									  CVMX_PKI_PCAM_MATCH_IPV4);
>> +					__cvmx_pip_set_tag_type(xp.node, n4style,
>> +								port_tag_cfg.s.tcp4_tag_type,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +					n6style = __cvmx_pip_set_tag_type(xp.node, nstyle, ip_tag,
>> +									  CVMX_PKI_PCAM_MATCH_IPV6);
>> +					__cvmx_pip_set_tag_type(xp.node, n6style,
>> +								port_tag_cfg.s.tcp6_tag_type,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +				}
>> +			} else {
>> +				n4style = __cvmx_pip_set_tag_type(xp.node, style,
>> +								  port_tag_cfg.s.ip4_tag_type,
>> +								  CVMX_PKI_PCAM_MATCH_IPV4);
>> +				n6style = __cvmx_pip_set_tag_type(xp.node, style,
>> +								  port_tag_cfg.s.ip6_tag_type,
>> +								  CVMX_PKI_PCAM_MATCH_IPV6);
>> +				if (tcp_tag != 0xff) {
>> +					__cvmx_pip_set_tag_type(xp.node, n4style, tcp_tag,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +					__cvmx_pip_set_tag_type(xp.node, n6style, tcp_tag,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +				} else {
>> +					__cvmx_pip_set_tag_type(xp.node, n4style,
>> +								port_tag_cfg.s.tcp4_tag_type,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +					__cvmx_pip_set_tag_type(xp.node, n6style,
>> +								port_tag_cfg.s.tcp6_tag_type,
>> +								CVMX_PKI_PCAM_MATCH_TCP);
>> +				}
>> +			}
>> +		}
>> +		pki_prt_cfg.style_cfg.parm_cfg.qpg_dis_padd = !port_tag_cfg.s.portadd_en;
>> +
>> +		if (port_cfg.s.mode == 0x1)
>> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LA_TO_LG;
>> +		else if (port_cfg.s.mode == 0x2)
>> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_LC_TO_LG;
>> +		else
>> +			pki_prt_cfg.pkind_cfg.initial_parse_mode = CVMX_PKI_PARSE_NOTHING;
>> +		/* This is only for backward compatibility, not all the parameters are supported in 78xx */
>> +		cvmx_pki_set_port_config(ipd_port, &pki_prt_cfg);
>> +	} else {
>> +		if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +			int interface, index, pknd;
>> +
>> +			interface = cvmx_helper_get_interface_num(ipd_port);
>> +			index = cvmx_helper_get_interface_index_num(ipd_port);
>> +			pknd = cvmx_helper_get_pknd(interface, index);
>> +
>> +			ipd_port = pknd; /* overload port_num with pknd */
>> +		}
>> +		csr_wr(CVMX_PIP_PRT_CFGX(ipd_port), port_cfg.u64);
>> +		csr_wr(CVMX_PIP_PRT_TAGX(ipd_port), port_tag_cfg.u64);
>> +	}
>> +}
>> +
>> +/**
>> + * Configure the VLAN priority to QoS queue mapping.
>> + *
>> + * @param vlan_priority
>> + *               VLAN priority (0-7)
>> + * @param qos    QoS queue for packets matching this watcher
>> + */
>> +static inline void cvmx_pip_config_vlan_qos(u64 vlan_priority, u64 qos)
>> +{
>> +	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +		cvmx_pip_qos_vlanx_t pip_qos_vlanx;
>> +
>> +		pip_qos_vlanx.u64 = 0;
>> +		pip_qos_vlanx.s.qos = qos;
>> +		csr_wr(CVMX_PIP_QOS_VLANX(vlan_priority), pip_qos_vlanx.u64);
>> +	}
>> +}
>> +
>> +/**
>> + * Configure the Diffserv to QoS queue mapping.
>> + *
>> + * @param diffserv Diffserv field value (0-63)
>> + * @param qos      QoS queue for packets matching this watcher
>> + */
>> +static inline void cvmx_pip_config_diffserv_qos(u64 diffserv, u64 qos)
>> +{
>> +	if (!octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +		cvmx_pip_qos_diffx_t pip_qos_diffx;
>> +
>> +		pip_qos_diffx.u64 = 0;
>> +		pip_qos_diffx.s.qos = qos;
>> +		csr_wr(CVMX_PIP_QOS_DIFFX(diffserv), pip_qos_diffx.u64);
>> +	}
>> +}
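
On the pre-PKND chips these two helpers simply fill in the VLAN-priority and
Diffserv lookup tables. A sketch giving VLAN priorities 6-7 and DSCP 46 (EF)
higher queues (the queue assignments are arbitrary example values):

	u64 prio;

	for (prio = 6; prio < 8; prio++)
		cvmx_pip_config_vlan_qos(prio, 7);
	cvmx_pip_config_diffserv_qos(46, 6);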
>> +
>> +/**
>> + * Get the status counters for a port for older non PKI chips.
>> + *
>> + * @param port_num Port number (ipd_port) to get statistics for.
>> + * @param clear    Set to 1 to clear the counters after they are read
>> + * @param status   Where to put the results.
>> + */
>> +static inline void cvmx_pip_get_port_stats(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
>> +{
>> +	cvmx_pip_stat_ctl_t pip_stat_ctl;
>> +	cvmx_pip_stat0_prtx_t stat0;
>> +	cvmx_pip_stat1_prtx_t stat1;
>> +	cvmx_pip_stat2_prtx_t stat2;
>> +	cvmx_pip_stat3_prtx_t stat3;
>> +	cvmx_pip_stat4_prtx_t stat4;
>> +	cvmx_pip_stat5_prtx_t stat5;
>> +	cvmx_pip_stat6_prtx_t stat6;
>> +	cvmx_pip_stat7_prtx_t stat7;
>> +	cvmx_pip_stat8_prtx_t stat8;
>> +	cvmx_pip_stat9_prtx_t stat9;
>> +	cvmx_pip_stat10_x_t stat10;
>> +	cvmx_pip_stat11_x_t stat11;
>> +	cvmx_pip_stat_inb_pktsx_t pip_stat_inb_pktsx;
>> +	cvmx_pip_stat_inb_octsx_t pip_stat_inb_octsx;
>> +	cvmx_pip_stat_inb_errsx_t pip_stat_inb_errsx;
>> +	int interface = cvmx_helper_get_interface_num(port_num);
>> +	int index = cvmx_helper_get_interface_index_num(port_num);
>> +
>> +	pip_stat_ctl.u64 = 0;
>> +	pip_stat_ctl.s.rdclr = clear;
>> +	csr_wr(CVMX_PIP_STAT_CTL, pip_stat_ctl.u64);
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +		int pknd = cvmx_helper_get_pknd(interface, index);
>> +		/*
>> +		 * PIP_STAT_CTL[MODE] 0 means pkind.
>> +		 */
>> +		stat0.u64 = csr_rd(CVMX_PIP_STAT0_X(pknd));
>> +		stat1.u64 = csr_rd(CVMX_PIP_STAT1_X(pknd));
>> +		stat2.u64 = csr_rd(CVMX_PIP_STAT2_X(pknd));
>> +		stat3.u64 = csr_rd(CVMX_PIP_STAT3_X(pknd));
>> +		stat4.u64 = csr_rd(CVMX_PIP_STAT4_X(pknd));
>> +		stat5.u64 = csr_rd(CVMX_PIP_STAT5_X(pknd));
>> +		stat6.u64 = csr_rd(CVMX_PIP_STAT6_X(pknd));
>> +		stat7.u64 = csr_rd(CVMX_PIP_STAT7_X(pknd));
>> +		stat8.u64 = csr_rd(CVMX_PIP_STAT8_X(pknd));
>> +		stat9.u64 = csr_rd(CVMX_PIP_STAT9_X(pknd));
>> +		stat10.u64 = csr_rd(CVMX_PIP_STAT10_X(pknd));
>> +		stat11.u64 = csr_rd(CVMX_PIP_STAT11_X(pknd));
>> +	} else {
>> +		if (port_num >= 40) {
>> +			stat0.u64 = csr_rd(CVMX_PIP_XSTAT0_PRTX(port_num));
>> +			stat1.u64 = csr_rd(CVMX_PIP_XSTAT1_PRTX(port_num));
>> +			stat2.u64 = csr_rd(CVMX_PIP_XSTAT2_PRTX(port_num));
>> +			stat3.u64 = csr_rd(CVMX_PIP_XSTAT3_PRTX(port_num));
>> +			stat4.u64 = csr_rd(CVMX_PIP_XSTAT4_PRTX(port_num));
>> +			stat5.u64 = csr_rd(CVMX_PIP_XSTAT5_PRTX(port_num));
>> +			stat6.u64 = csr_rd(CVMX_PIP_XSTAT6_PRTX(port_num));
>> +			stat7.u64 = csr_rd(CVMX_PIP_XSTAT7_PRTX(port_num));
>> +			stat8.u64 = csr_rd(CVMX_PIP_XSTAT8_PRTX(port_num));
>> +			stat9.u64 = csr_rd(CVMX_PIP_XSTAT9_PRTX(port_num));
>> +			if (OCTEON_IS_MODEL(OCTEON_CN6XXX)) {
>> +				stat10.u64 = csr_rd(CVMX_PIP_XSTAT10_PRTX(port_num));
>> +				stat11.u64 = csr_rd(CVMX_PIP_XSTAT11_PRTX(port_num));
>> +			}
>> +		} else {
>> +			stat0.u64 = csr_rd(CVMX_PIP_STAT0_PRTX(port_num));
>> +			stat1.u64 = csr_rd(CVMX_PIP_STAT1_PRTX(port_num));
>> +			stat2.u64 = csr_rd(CVMX_PIP_STAT2_PRTX(port_num));
>> +			stat3.u64 = csr_rd(CVMX_PIP_STAT3_PRTX(port_num));
>> +			stat4.u64 = csr_rd(CVMX_PIP_STAT4_PRTX(port_num));
>> +			stat5.u64 = csr_rd(CVMX_PIP_STAT5_PRTX(port_num));
>> +			stat6.u64 = csr_rd(CVMX_PIP_STAT6_PRTX(port_num));
>> +			stat7.u64 = csr_rd(CVMX_PIP_STAT7_PRTX(port_num));
>> +			stat8.u64 = csr_rd(CVMX_PIP_STAT8_PRTX(port_num));
>> +			stat9.u64 = csr_rd(CVMX_PIP_STAT9_PRTX(port_num));
>> +			if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
>> +				stat10.u64 = csr_rd(CVMX_PIP_STAT10_PRTX(port_num));
>> +				stat11.u64 = csr_rd(CVMX_PIP_STAT11_PRTX(port_num));
>> +			}
>> +		}
>> +	}
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +		int pknd = cvmx_helper_get_pknd(interface, index);
>> +
>> +		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTS_PKNDX(pknd));
>> +		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTS_PKNDX(pknd));
>> +		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRS_PKNDX(pknd));
>> +	} else {
>> +		pip_stat_inb_pktsx.u64 = csr_rd(CVMX_PIP_STAT_INB_PKTSX(port_num));
>> +		pip_stat_inb_octsx.u64 = csr_rd(CVMX_PIP_STAT_INB_OCTSX(port_num));
>> +		pip_stat_inb_errsx.u64 = csr_rd(CVMX_PIP_STAT_INB_ERRSX(port_num));
>> +	}
>> +
>> +	status->dropped_octets = stat0.s.drp_octs;
>> +	status->dropped_packets = stat0.s.drp_pkts;
>> +	status->octets = stat1.s.octs;
>> +	status->pci_raw_packets = stat2.s.raw;
>> +	status->packets = stat2.s.pkts;
>> +	status->multicast_packets = stat3.s.mcst;
>> +	status->broadcast_packets = stat3.s.bcst;
>> +	status->len_64_packets = stat4.s.h64;
>> +	status->len_65_127_packets = stat4.s.h65to127;
>> +	status->len_128_255_packets = stat5.s.h128to255;
>> +	status->len_256_511_packets = stat5.s.h256to511;
>> +	status->len_512_1023_packets = stat6.s.h512to1023;
>> +	status->len_1024_1518_packets = stat6.s.h1024to1518;
>> +	status->len_1519_max_packets = stat7.s.h1519;
>> +	status->fcs_align_err_packets = stat7.s.fcs;
>> +	status->runt_packets = stat8.s.undersz;
>> +	status->runt_crc_packets = stat8.s.frag;
>> +	status->oversize_packets = stat9.s.oversz;
>> +	status->oversize_crc_packets = stat9.s.jabber;
>> +	if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
>> +		status->mcast_l2_red_packets = stat10.s.mcast;
>> +		status->bcast_l2_red_packets = stat10.s.bcast;
>> +		status->mcast_l3_red_packets = stat11.s.mcast;
>> +		status->bcast_l3_red_packets = stat11.s.bcast;
>> +	}
>> +	status->inb_packets = pip_stat_inb_pktsx.s.pkts;
>> +	status->inb_octets = pip_stat_inb_octsx.s.octs;
>> +	status->inb_errors = pip_stat_inb_errsx.s.errs;
>> +}
>> +
>> +/**
>> + * Get the status counters for a port.
>> + *
>> + * @param port_num Port number (ipd_port) to get statistics for.
>> + * @param clear    Set to 1 to clear the counters after they are read
>> + * @param status   Where to put the results.
>> + */
>> +static inline void cvmx_pip_get_port_status(u64 port_num, u64 clear, cvmx_pip_port_status_t *status)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
>> +		unsigned int node = cvmx_get_node_num();
>> +
>> +		cvmx_pki_get_port_stats(node, port_num, (struct cvmx_pki_port_stats *)status);
>> +	} else {
>> +		cvmx_pip_get_port_stats(port_num, clear, status);
>> +	}
>> +}
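
Callers can stay chip-agnostic by always going through this wrapper. A usage
sketch (ipd_port assumed valid for the board):

	cvmx_pip_port_status_t st;

	cvmx_pip_get_port_status(ipd_port, 1 /* clear on read */, &st);
	debug("port stats: %llu pkts, %llu octets, %llu errors\n",
	      (unsigned long long)st.packets,
	      (unsigned long long)st.inb_octets,
	      (unsigned long long)st.inb_errors);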
>> +
>> +/**
>> + * Configure the hardware CRC engine
>> + *
>> + * @param interface Interface to configure (0 or 1)
>> + * @param invert_result
>> + *                 Invert the result of the CRC
>> + * @param reflect  Reflect
>> + * @param initialization_vector
>> + *                 CRC initialization vector
>> + */
>> +static inline void cvmx_pip_config_crc(u64 interface, u64 invert_result, u64 reflect,
>> +				       u32 initialization_vector)
>> +{
>> +	/* Only CN38XX & CN58XX */
>> +}
>> +
>> +/**
>> + * Clear all bits in a tag mask. This should be called on
>> + * startup before any calls to cvmx_pip_tag_mask_set. Each bit
>> + * set in the final mask represents a byte used in the packet for
>> + * tag generation.
>> + *
>> + * @param mask_index Which tag mask to clear (0..3)
>> + */
>> +static inline void cvmx_pip_tag_mask_clear(u64 mask_index)
>> +{
>> +	u64 index;
>> +	cvmx_pip_tag_incx_t pip_tag_incx;
>> +
>> +	pip_tag_incx.u64 = 0;
>> +	pip_tag_incx.s.en = 0;
>> +	for (index = mask_index * 16; index < (mask_index + 1) * 16; index++)
>> +		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
>> +}
>> +
>> +/**
>> + * Sets a range of bits in the tag mask. The tag mask is used
>> + * when the cvmx_pip_port_tag_cfg_t tag_mode is non zero.
>> + * There are four separate masks that can be configured.
>> + *
>> + * @param mask_index Which tag mask to modify (0..3)
>> + * @param offset     Offset into the bitmask to set bits at. Use the GCC macro
>> + *                   offsetof() to determine the offsets into packet headers.
>> + *                   For example, offsetof(ethhdr, protocol) returns the offset
>> + *                   of the ethernet protocol field.  The bitmask selects which
>> + *                   bytes to include in the tag, with bit offset X selecting the
>> + *                   byte at offset X from the beginning of the packet data.
>> + * @param len        Number of bytes to include. Usually this is the sizeof()
>> + *                   the field.
>> + */
>> +static inline void cvmx_pip_tag_mask_set(u64 mask_index, u64 offset, u64 len)
>> +{
>> +	while (len--) {
>> +		cvmx_pip_tag_incx_t pip_tag_incx;
>> +		u64 index = mask_index * 16 + offset / 8;
>> +
>> +		pip_tag_incx.u64 = csr_rd(CVMX_PIP_TAG_INCX(index));
>> +		pip_tag_incx.s.en |= 0x80 >> (offset & 0x7);
>> +		csr_wr(CVMX_PIP_TAG_INCX(index), pip_tag_incx.u64);
>> +		offset++;
>> +	}
>> +}
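A short usage sketch of the two tag-mask helpers; the byte offsets below
(Ethernet header plus the IPv4 saddr field) are illustrative assumptions:

	/* generate the tag from the IPv4 source address only: clear mask 0,
	 * then mark the 4 bytes at byte offset 14 + 12 */
	cvmx_pip_tag_mask_clear(0);
	cvmx_pip_tag_mask_set(0, 14 + 12, 4);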
>> +
>> +/**
>> + * Set byte count for Max-sized and Min-sized frame check.
>> + *
>> + * @param interface   Which interface to set the limit on
>> + * @param max_size    Byte count for Max-Size frame check
>> + */
>> +static inline void cvmx_pip_set_frame_check(int interface, u32 max_size)
>> +{
>> +	cvmx_pip_frm_len_chkx_t frm_len;
>> +
>> +	/* max_size (and min_size) passed as 0 means: reset to the default values. */
>> +	if (max_size < 1536)
>> +		max_size = 1536;
>> +
>> +	/* On CN68XX frame check is enabled for a pkind n and
>> +	   PIP_PRT_CFG[len_chk_sel] selects which set of
>> +	   MAXLEN/MINLEN to use. */
>> +	if (octeon_has_feature(OCTEON_FEATURE_PKND)) {
>> +		int port;
>> +		int num_ports = cvmx_helper_ports_on_interface(interface);
>> +
>> +		for (port = 0; port < num_ports; port++) {
>> +			if (octeon_has_feature(OCTEON_FEATURE_PKI)) {
>> +				int ipd_port;
>> +
>> +				ipd_port = cvmx_helper_get_ipd_port(interface, port);
>> +				cvmx_pki_set_max_frm_len(ipd_port, max_size);
>> +			} else {
>> +				int pknd;
>> +				int sel;
>> +				cvmx_pip_prt_cfgx_t config;
>> +
>> +				pknd = cvmx_helper_get_pknd(interface, port);
>> +				config.u64 = csr_rd(CVMX_PIP_PRT_CFGX(pknd));
>> +				sel = config.s.len_chk_sel;
>> +				frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(sel));
>> +				frm_len.s.maxlen = max_size;
>> +				csr_wr(CVMX_PIP_FRM_LEN_CHKX(sel), frm_len.u64);
>> +			}
>> +		}
>> +	}
>> +	/* on cn6xxx and cn7xxx models, PIP_FRM_LEN_CHK0 applies to
>> +	 *     all incoming traffic */
>> +	else if (OCTEON_IS_OCTEON2() || OCTEON_IS_MODEL(OCTEON_CN70XX)) {
>> +		frm_len.u64 = csr_rd(CVMX_PIP_FRM_LEN_CHKX(0));
>> +		frm_len.s.maxlen = max_size;
>> +		csr_wr(CVMX_PIP_FRM_LEN_CHKX(0), frm_len.u64);
>> +	}
>> +}
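As a usage sketch (interface number and frame size are assumptions), enabling
jumbo frames would look like:

	/* accept frames up to 9212 bytes on interface 0;
	 * any value below 1536 falls back to the default */
	cvmx_pip_set_frame_check(0, 9212);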
>> +
>> +/**
>> + * Initialize Bit Select Extractor config. There are 8 bit positions and valids
>> + * to be used when using the corresponding extractor.
>> + *
>> + * @param bit     Bit Select Extractor to use
>> + * @param pos     Which position to update
>> + * @param val     The value to update the position with
>> + */
>> +static inline void cvmx_pip_set_bsel_pos(int bit, int pos, int val)
>> +{
>> +	cvmx_pip_bsel_ext_posx_t bsel_pos;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return;
>> +
>> +	if (bit < 0 || bit > 3) {
>> +		debug("ERROR: cvmx_pip_set_bsel_pos: Invalid Bit-Select Extractor (%d) passed\n",
>> +		      bit);
>> +		return;
>> +	}
>> +
>> +	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
>> +	switch (pos) {
>> +	case 0:
>> +		bsel_pos.s.pos0_val = 1;
>> +		bsel_pos.s.pos0 = val & 0x7f;
>> +		break;
>> +	case 1:
>> +		bsel_pos.s.pos1_val = 1;
>> +		bsel_pos.s.pos1 = val & 0x7f;
>> +		break;
>> +	case 2:
>> +		bsel_pos.s.pos2_val = 1;
>> +		bsel_pos.s.pos2 = val & 0x7f;
>> +		break;
>> +	case 3:
>> +		bsel_pos.s.pos3_val = 1;
>> +		bsel_pos.s.pos3 = val & 0x7f;
>> +		break;
>> +	case 4:
>> +		bsel_pos.s.pos4_val = 1;
>> +		bsel_pos.s.pos4 = val & 0x7f;
>> +		break;
>> +	case 5:
>> +		bsel_pos.s.pos5_val = 1;
>> +		bsel_pos.s.pos5 = val & 0x7f;
>> +		break;
>> +	case 6:
>> +		bsel_pos.s.pos6_val = 1;
>> +		bsel_pos.s.pos6 = val & 0x7f;
>> +		break;
>> +	case 7:
>> +		bsel_pos.s.pos7_val = 1;
>> +		bsel_pos.s.pos7 = val & 0x7f;
>> +		break;
>> +	default:
>> +		debug("Warning: cvmx_pip_set_bsel_pos: Invalid pos(%d)\n", pos);
>> +		break;
>> +	}
>> +	csr_wr(CVMX_PIP_BSEL_EXT_POSX(bit), bsel_pos.u64);
>> +}
>> +
>> +/**
>> + * Initialize offset and skip values used by the bit select extractor.
>> + *
>> + * @param bit	Bit Select Extractor to use
>> + * @param offset	Offset to add to extractor mem addr to get the final
>> + *			address into the lookup table.
>> + * @param skip		Number of bytes to skip from start of packet 0-64
>> + */
>> +static inline void cvmx_pip_bsel_config(int bit, int offset, int skip)
>> +{
>> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return;
>> +
>> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
>> +	bsel_cfg.s.offset = offset;
>> +	bsel_cfg.s.skip = skip;
>> +	csr_wr(CVMX_PIP_BSEL_EXT_CFGX(bit), bsel_cfg.u64);
>> +}
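To tie the two setters together, a configuration sketch; the extractor number,
skip and bit positions are illustrative assumptions:

	int pos;

	/* extractor 0: skip the 14-byte Ethernet header, no table offset,
	 * then classify on the first 8 bits of the payload */
	cvmx_pip_bsel_config(0, 0, 14);
	for (pos = 0; pos < 8; pos++)
		cvmx_pip_set_bsel_pos(0, pos, pos);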
>> +
>> +/**
>> + * Get the entry for the Bit Select Extractor Table.
>> + * @param work   pointer to work queue entry
>> + * @return       Index of the Bit Select Extractor Table
>> + */
>> +static inline int cvmx_pip_get_bsel_table_index(cvmx_wqe_t *work)
>> +{
>> +	int bit = cvmx_wqe_get_port(work) & 0x3;
>> +	/* Get the Bit select table index. */
>> +	int index = 0;	/* built up bit-by-bit below */
>> +	int y;
>> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
>> +	cvmx_pip_bsel_ext_posx_t bsel_pos;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return -1;
>> +
>> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
>> +	bsel_pos.u64 = csr_rd(CVMX_PIP_BSEL_EXT_POSX(bit));
>> +
>> +	for (y = 0; y < 8; y++) {
>> +		char *ptr = (char *)cvmx_phys_to_ptr(work->packet_ptr.s.addr);
>> +		int bit_loc = 0;
>> +		int bit;
>> +
>> +		ptr += bsel_cfg.s.skip;
>> +		/* byte offset = pos >> 3, bit within the byte = pos & 0x7 */
>> +		switch (y) {
>> +		case 0:
>> +			ptr += (bsel_pos.s.pos0 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos0 & 0x7);
>> +			break;
>> +		case 1:
>> +			ptr += (bsel_pos.s.pos1 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos1 & 0x7);
>> +			break;
>> +		case 2:
>> +			ptr += (bsel_pos.s.pos2 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos2 & 0x7);
>> +			break;
>> +		case 3:
>> +			ptr += (bsel_pos.s.pos3 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos3 & 0x7);
>> +			break;
>> +		case 4:
>> +			ptr += (bsel_pos.s.pos4 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos4 & 0x7);
>> +			break;
>> +		case 5:
>> +			ptr += (bsel_pos.s.pos5 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos5 & 0x7);
>> +			break;
>> +		case 6:
>> +			ptr += (bsel_pos.s.pos6 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos6 & 0x7);
>> +			break;
>> +		case 7:
>> +			ptr += (bsel_pos.s.pos7 >> 3);
>> +			bit_loc = 7 - (bsel_pos.s.pos7 & 0x7);
>> +			break;
>> +		}
>> +		bit = (*ptr >> bit_loc) & 1;
>> +		index |= bit << y;
>> +	}
>> +	index += bsel_cfg.s.offset;
>> +	index &= 0x1ff;
>> +	return index;
>> +}
>> +
>> +static inline int cvmx_pip_get_bsel_qos(cvmx_wqe_t *work)
>> +{
>> +	int index = cvmx_pip_get_bsel_table_index(work);
>> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return -1;
>> +
>> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
>> +
>> +	return bsel_tbl.s.qos;
>> +}
>> +
>> +static inline int cvmx_pip_get_bsel_grp(cvmx_wqe_t *work)
>> +{
>> +	int index = cvmx_pip_get_bsel_table_index(work);
>> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return -1;
>> +
>> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
>> +
>> +	return bsel_tbl.s.grp;
>> +}
>> +
>> +static inline int cvmx_pip_get_bsel_tt(cvmx_wqe_t *work)
>> +{
>> +	int index = cvmx_pip_get_bsel_table_index(work);
>> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return -1;
>> +
>> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
>> +
>> +	return bsel_tbl.s.tt;
>> +}
>> +
>> +static inline int cvmx_pip_get_bsel_tag(cvmx_wqe_t *work)
>> +{
>> +	int index = cvmx_pip_get_bsel_table_index(work);
>> +	int port = cvmx_wqe_get_port(work);
>> +	int bit = port & 0x3;
>> +	int upper_tag = 0;
>> +	cvmx_pip_bsel_tbl_entx_t bsel_tbl;
>> +	cvmx_pip_bsel_ext_cfgx_t bsel_cfg;
>> +	cvmx_pip_prt_tagx_t prt_tag;
>> +
>> +	/* The bit select extractor is available in CN61XX and CN68XX pass2.0 onwards. */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_BIT_EXTRACTOR))
>> +		return -1;
>> +
>> +	bsel_tbl.u64 = csr_rd(CVMX_PIP_BSEL_TBL_ENTX(index));
>> +	bsel_cfg.u64 = csr_rd(CVMX_PIP_BSEL_EXT_CFGX(bit));
>> +
>> +	prt_tag.u64 = csr_rd(CVMX_PIP_PRT_TAGX(port));
>> +	if (prt_tag.s.inc_prt_flag == 0)
>> +		upper_tag = bsel_cfg.s.upper_tag;
>> +	return bsel_tbl.s.tag | ((bsel_cfg.s.tag << 8) & 0xff00) | ((upper_tag << 16) & 0xffff0000);
>> +}
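For orientation, a sketch of how a receive path might consume these getters;
the surrounding work-fetch loop is assumed and not part of the patch:

	static void classify_work(cvmx_wqe_t *work)
	{
		int qos = cvmx_pip_get_bsel_qos(work);
		int grp = cvmx_pip_get_bsel_grp(work);
		int tag = cvmx_pip_get_bsel_tag(work);

		if (qos < 0 || grp < 0 || tag < 0)
			return;	/* extractor not available on this model */
		/* steer the packet using qos/grp/tag here */
	}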
>> +
>> +#endif /*  __CVMX_PIP_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
>> new file mode 100644
>> index 000000000000..79b99b0bd7c2
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pki-resources.h
>> @@ -0,0 +1,157 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Resource management for PKI resources.
>> + */
>> +
>> +#ifndef __CVMX_PKI_RESOURCES_H__
>> +#define __CVMX_PKI_RESOURCES_H__
>> +
>> +/**
>> + * This function allocates/reserves a style from the pool of global styles per node.
>> + * @param node	 node to allocate style from.
>> + * @param style	 style to allocate; if -1, the first available style from
>> + *		 the style resource will be allocated. If style is a positive
>> + *		 number and in range, it will try to allocate the specified style.
>> + * @return	 style number on success, -1 on failure.
>> + */
>> +int cvmx_pki_style_alloc(int node, int style);
>> +
>> +/**
>> + * This function allocates/reserves a cluster group from the per node
>> + * cluster group resources.
>> + * @param node		node to allocate cluster group from.
>> + * @param cl_grp	cluster group to allocate/reserve; if -1,
>> + *			allocate any available cluster group.
>> + * @return		cluster group number or -1 on failure
>> + */
>> +int cvmx_pki_cluster_grp_alloc(int node, int cl_grp);
>> +
>> +/**
>> + * This function allocates/reserves clusters from the per node
>> + * cluster resources.
>> + * @param node		node to allocate clusters from.
>> + * @param cluster_mask	mask of clusters to allocate/reserve; if -1,
>> + *			allocate any available clusters.
>> + * @param num_clusters	number of clusters that will be allocated
>> + */
>> +int cvmx_pki_cluster_alloc(int node, int num_clusters, u64 *cluster_mask);
>> +
>> +/**
>> + * This function allocates/reserves a pcam entry from a node.
>> + * @param node		node to allocate the pcam entry from.
>> + * @param index		index of pcam entry (0-191); if -1,
>> + *			allocate any available pcam entry.
>> + * @param bank		pcam bank to allocate/reserve the pcam entry from
>> + * @param cluster_mask	mask of clusters from which the pcam entry is needed.
>> + * @return		pcam entry or -1 on failure
>> + */
>> +int cvmx_pki_pcam_entry_alloc(int node, int index, int bank, u64 cluster_mask);
>> +
>> +/**
>> + * This function allocates/reserves QPG table entries per node.
>> + * @param node		node number.
>> + * @param base_offset	base_offset in qpg table. If -1, the first available
>> + *			qpg base_offset will be allocated. If base_offset is a
>> + *			positive number and in range, it will try to allocate
>> + *			the specified base_offset.
>> + * @param count		number of consecutive qpg entries to allocate. They
>> + *			will be consecutive from the base offset.
>> + * @return		qpg table base offset number on success, -1 on failure.
>> + */
>> +int cvmx_pki_qpg_entry_alloc(int node, int base_offset, int count);
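A minimal allocate/program/release sketch built from these helpers (node
number and entry count are assumptions for illustration):

	/* reserve any free style and 8 consecutive QPG entries on node 0 */
	int style = cvmx_pki_style_alloc(0, -1);
	int qpg_base = cvmx_pki_qpg_entry_alloc(0, -1, 8);

	if (style < 0 || qpg_base < 0)
		return -1;	/* out of resources */

	/* ... program the style and QPG entries here ... */

	cvmx_pki_qpg_entry_free(0, qpg_base, 8);
	cvmx_pki_style_free(0, style);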
>> +
>> +/**
>> + * This function frees a style from the pool of global styles per node.
>> + * @param node	 node to free style from.
>> + * @param style	 style to free
>> + * @return	 0 on success, -1 on failure.
>> + */
>> +int cvmx_pki_style_free(int node, int style);
>> +
>> +/**
>> + * This function frees a cluster group from the per node
>> + * cluster group resources.
>> + * @param node		node to free the cluster group from.
>> + * @param cl_grp	cluster group to free
>> + * @return		0 on success or -1 on failure
>> + */
>> +int cvmx_pki_cluster_grp_free(int node, int cl_grp);
>> +
>> +/**
>> + * This function frees QPG table entries per node.
>> + * @param node		node number.
>> + * @param base_offset	base_offset in qpg table from which to free the
>> + *			entries.
>> + * @param count		number of consecutive qpg entries to free,
>> + *			starting from the base offset.
>> + * @return		0 on success, -1 on failure.
>> + */
>> +int cvmx_pki_qpg_entry_free(int node, int base_offset, int count);
>> +
>> +/**
>> + * This function frees clusters from the per node
>> + * cluster resources.
>> + * @param node		node to free clusters from.
>> + * @param cluster_mask	mask of clusters to free
>> + * @return		0 on success or -1 on failure
>> + */
>> +int cvmx_pki_cluster_free(int node, u64 cluster_mask);
>> +
>> +/**
>> + * This function frees a pcam entry from a node.
>> + * @param node		node to free the pcam entry from.
>> + * @param index		index of the pcam entry (0-191) to be freed.
>> + * @param bank		pcam bank to free the pcam entry from
>> + * @param cluster_mask	mask of clusters from which the pcam entry is freed.
>> + * @return		0 on success or -1 on failure
>> + */
>> +int cvmx_pki_pcam_entry_free(int node, int index, int bank, u64 cluster_mask);
>> +
>> +/**
>> + * This function allocates/reserves a bpid from the pool of global bpids per node.
>> + * @param node	node to allocate bpid from.
>> + * @param bpid	bpid to allocate; if -1, the first available bpid from the
>> + *		bpid resource will be allocated. If bpid is a positive
>> + *		number and in range, it will try to allocate the specified bpid.
>> + * @return	bpid number on success,
>> + *		-1 on alloc failure.
>> + *		-2 on resource already reserved.
>> + */
>> +int cvmx_pki_bpid_alloc(int node, int bpid);
>> +
>> +/**
>> + * This function frees a bpid from the pool of global bpids per node.
>> + * @param node	 node to free the bpid from.
>> + * @param bpid	 bpid to free
>> + * @return	 0 on success, -1 on failure.
>> + */
>> +int cvmx_pki_bpid_free(int node, int bpid);
>> +
>> +/**
>> + * This function frees all the PKI software resources
>> + * (clusters, styles, qpg_entry, pcam_entry etc) for the specified node
>> + */
>> +
>> +/**
>> + * This function allocates/reserves an index from the pool of global MTAG-IDX per node.
>> + * @param node	node to allocate the index from.
>> + * @param idx	index to allocate; if -1, the first available index will be
>> + *		allocated.
>> + * @return	MTAG index number on success,
>> + *		-1 on alloc failure.
>> + *		-2 on resource already reserved.
>> + */
>> +int cvmx_pki_mtag_idx_alloc(int node, int idx);
>> +
>> +/**
>> + * This function frees an index from the pool of global MTAG-IDX per node.
>> + * @param node	 node to free the index from.
>> + * @param idx	 index to free
>> + * @return	 0 on success, -1 on failure.
>> + */
>> +int cvmx_pki_mtag_idx_free(int node, int idx);
>> +
>> +void __cvmx_pki_global_rsrc_free(int node);
>> +
>> +#endif /*  __CVMX_PKI_RESOURCES_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pki.h b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
>> new file mode 100644
>> index 000000000000..c1feb55a1f01
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pki.h
>> @@ -0,0 +1,970 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Packet Input Data unit.
>> + */
>> +
>> +#ifndef __CVMX_PKI_H__
>> +#define __CVMX_PKI_H__
>> +
>> +#include "cvmx-fpa3.h"
>> +#include "cvmx-helper-util.h"
>> +#include "cvmx-helper-cfg.h"
>> +#include "cvmx-error.h"
>> +
>> +/* PKI AURA and BPID count are equal to FPA AURA count */
>> +#define CVMX_PKI_NUM_AURA	       (cvmx_fpa3_num_auras())
>> +#define CVMX_PKI_NUM_BPID	       (cvmx_fpa3_num_auras())
>> +#define CVMX_PKI_NUM_SSO_GROUP	       (cvmx_sso_num_xgrp())
>> +#define CVMX_PKI_NUM_CLUSTER_GROUP_MAX 1
>> +#define CVMX_PKI_NUM_CLUSTER_GROUP     (cvmx_pki_num_cl_grp())
>> +#define CVMX_PKI_NUM_CLUSTER	       (cvmx_pki_num_clusters())
>> +
>> +/* FIXME: Reduce some of these values, convert to routines XXX */
>> +#define CVMX_PKI_NUM_CHANNEL	    4096
>> +#define CVMX_PKI_NUM_PKIND	    64
>> +#define CVMX_PKI_NUM_INTERNAL_STYLE 256
>> +#define CVMX_PKI_NUM_FINAL_STYLE    64
>> +#define CVMX_PKI_NUM_QPG_ENTRY	    2048
>> +#define CVMX_PKI_NUM_MTAG_IDX	    (32 / 4) /* 32 registers grouped by 4 */
>> +#define CVMX_PKI_NUM_LTYPE	    32
>> +#define CVMX_PKI_NUM_PCAM_BANK	    2
>> +#define CVMX_PKI_NUM_PCAM_ENTRY	    192
>> +#define CVMX_PKI_NUM_FRAME_CHECK    2
>> +#define CVMX_PKI_NUM_BELTYPE	    32
>> +#define CVMX_PKI_MAX_FRAME_SIZE	    65535
>> +#define CVMX_PKI_FIND_AVAL_ENTRY    (-1)
>> +#define CVMX_PKI_CLUSTER_ALL	    0xf
>> +
>> +#ifdef CVMX_SUPPORT_SEPARATE_CLUSTER_CONFIG
>> +#define CVMX_PKI_TOTAL_PCAM_ENTRY                                                                  \
>> +	((CVMX_PKI_NUM_CLUSTER) * (CVMX_PKI_NUM_PCAM_BANK) * (CVMX_PKI_NUM_PCAM_ENTRY))
>> +#else
>> +#define CVMX_PKI_TOTAL_PCAM_ENTRY (CVMX_PKI_NUM_PCAM_BANK * CVMX_PKI_NUM_PCAM_ENTRY)
>> +#endif
>> +
>> +static inline unsigned int cvmx_pki_num_clusters(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX))
>> +		return 2;
>> +	return 4;
>> +}
>> +
>> +static inline unsigned int cvmx_pki_num_cl_grp(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX) || OCTEON_IS_MODEL(OCTEON_CNF75XX) ||
>> +	    OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 1;
>> +	return 0;
>> +}
>> +
>> +enum cvmx_pki_pkind_parse_mode {
>> +	CVMX_PKI_PARSE_LA_TO_LG = 0,  /* Parse LA(L2) to LG */
>> +	CVMX_PKI_PARSE_LB_TO_LG = 1,  /* Parse LB(custom) to LG */
>> +	CVMX_PKI_PARSE_LC_TO_LG = 3,  /* Parse LC(L3) to LG */
>> +	CVMX_PKI_PARSE_LG = 0x3f,     /* Parse LG */
>> +	CVMX_PKI_PARSE_NOTHING = 0x7f /* Parse nothing */
>> +};
>> +
>> +enum cvmx_pki_parse_mode_chg {
>> +	CVMX_PKI_PARSE_NO_CHG = 0x0,
>> +	CVMX_PKI_PARSE_SKIP_TO_LB = 0x1,
>> +	CVMX_PKI_PARSE_SKIP_TO_LC = 0x3,
>> +	CVMX_PKI_PARSE_SKIP_TO_LD = 0x7,
>> +	CVMX_PKI_PARSE_SKIP_TO_LG = 0x3f,
>> +	CVMX_PKI_PARSE_SKIP_ALL = 0x7f,
>> +};
>> +
>> +enum cvmx_pki_l2_len_mode { PKI_L2_LENCHK_EQUAL_GREATER = 0, PKI_L2_LENCHK_EQUAL_ONLY };
>> +
>> +enum cvmx_pki_cache_mode {
>> +	CVMX_PKI_OPC_MODE_STT = 0LL,	  /* All blocks write through DRAM */
>> +	CVMX_PKI_OPC_MODE_STF = 1LL,	  /* All blocks into L2 */
>> +	CVMX_PKI_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
>> +	CVMX_PKI_OPC_MODE_STF2_STT = 3LL  /* 1st, 2nd blocks L2, rest
>> DRAM */
>> +};
>> +
>> +/**
>> + * Tag type definitions
>> + */
>> +enum cvmx_sso_tag_type {
>> +	CVMX_SSO_TAG_TYPE_ORDERED = 0L,
>> +	CVMX_SSO_TAG_TYPE_ATOMIC = 1L,
>> +	CVMX_SSO_TAG_TYPE_UNTAGGED = 2L,
>> +	CVMX_SSO_TAG_TYPE_EMPTY = 3L
>> +};
>> +
>> +enum cvmx_pki_qpg_qos {
>> +	CVMX_PKI_QPG_QOS_NONE = 0,
>> +	CVMX_PKI_QPG_QOS_VLAN,
>> +	CVMX_PKI_QPG_QOS_MPLS,
>> +	CVMX_PKI_QPG_QOS_DSA_SRC,
>> +	CVMX_PKI_QPG_QOS_DIFFSERV,
>> +	CVMX_PKI_QPG_QOS_HIGIG,
>> +};
>> +
>> +enum cvmx_pki_wqe_vlan { CVMX_PKI_USE_FIRST_VLAN = 0, CVMX_PKI_USE_SECOND_VLAN };
>> +
>> +/**
>> + * Controls how the PKI statistics counters are handled.
>> + * The PKI_STAT*_X registers can be indexed either by port kind (pkind),
>> + * or final style. (Does not apply to the PKI_STAT_INB* registers.)
>> + *    0 = X represents the packet's pkind
>> + *    1 = X represents the low 6-bits of the packet's final style
>> + */
>> +enum cvmx_pki_stats_mode { CVMX_PKI_STAT_MODE_PKIND, CVMX_PKI_STAT_MODE_STYLE };
>> +
>> +enum cvmx_pki_fpa_wait { CVMX_PKI_DROP_PKT, CVMX_PKI_WAIT_PKT };
>> +
>> +#define PKI_BELTYPE_E__NONE_M 0x0
>> +#define PKI_BELTYPE_E__MISC_M 0x1
>> +#define PKI_BELTYPE_E__IP4_M  0x2
>> +#define PKI_BELTYPE_E__IP6_M  0x3
>> +#define PKI_BELTYPE_E__TCP_M  0x4
>> +#define PKI_BELTYPE_E__UDP_M  0x5
>> +#define PKI_BELTYPE_E__SCTP_M 0x6
>> +#define PKI_BELTYPE_E__SNAP_M 0x7
>> +
>> +/* PKI_BELTYPE_E_t */
>> +enum cvmx_pki_beltype {
>> +	CVMX_PKI_BELTYPE_NONE = PKI_BELTYPE_E__NONE_M,
>> +	CVMX_PKI_BELTYPE_MISC = PKI_BELTYPE_E__MISC_M,
>> +	CVMX_PKI_BELTYPE_IP4 = PKI_BELTYPE_E__IP4_M,
>> +	CVMX_PKI_BELTYPE_IP6 = PKI_BELTYPE_E__IP6_M,
>> +	CVMX_PKI_BELTYPE_TCP = PKI_BELTYPE_E__TCP_M,
>> +	CVMX_PKI_BELTYPE_UDP = PKI_BELTYPE_E__UDP_M,
>> +	CVMX_PKI_BELTYPE_SCTP = PKI_BELTYPE_E__SCTP_M,
>> +	CVMX_PKI_BELTYPE_SNAP = PKI_BELTYPE_E__SNAP_M,
>> +	CVMX_PKI_BELTYPE_MAX = CVMX_PKI_BELTYPE_SNAP
>> +};
>> +
>> +struct cvmx_pki_frame_len {
>> +	u16 maxlen;
>> +	u16 minlen;
>> +};
>> +
>> +struct cvmx_pki_tag_fields {
>> +	u64 layer_g_src : 1;
>> +	u64 layer_f_src : 1;
>> +	u64 layer_e_src : 1;
>> +	u64 layer_d_src : 1;
>> +	u64 layer_c_src : 1;
>> +	u64 layer_b_src : 1;
>> +	u64 layer_g_dst : 1;
>> +	u64 layer_f_dst : 1;
>> +	u64 layer_e_dst : 1;
>> +	u64 layer_d_dst : 1;
>> +	u64 layer_c_dst : 1;
>> +	u64 layer_b_dst : 1;
>> +	u64 input_port : 1;
>> +	u64 mpls_label : 1;
>> +	u64 first_vlan : 1;
>> +	u64 second_vlan : 1;
>> +	u64 ip_prot_nexthdr : 1;
>> +	u64 tag_sync : 1;
>> +	u64 tag_spi : 1;
>> +	u64 tag_gtp : 1;
>> +	u64 tag_vni : 1;
>> +};
>> +
>> +struct cvmx_pki_pkind_parse {
>> +	u64 mpls_en : 1;
>> +	u64 inst_hdr : 1;
>> +	u64 lg_custom : 1;
>> +	u64 fulc_en : 1;
>> +	u64 dsa_en : 1;
>> +	u64 hg2_en : 1;
>> +	u64 hg_en : 1;
>> +};
>> +
>> +struct cvmx_pki_pool_config {
>> +	int pool_num;
>> +	cvmx_fpa3_pool_t pool;
>> +	u64 buffer_size;
>> +	u64 buffer_count;
>> +};
>> +
>> +struct cvmx_pki_qpg_config {
>> +	int qpg_base;
>> +	int port_add;
>> +	int aura_num;
>> +	int grp_ok;
>> +	int grp_bad;
>> +	int grptag_ok;
>> +	int grptag_bad;
>> +};
>> +
>> +struct cvmx_pki_aura_config {
>> +	int aura_num;
>> +	int pool_num;
>> +	cvmx_fpa3_pool_t pool;
>> +	cvmx_fpa3_gaura_t aura;
>> +	int buffer_count;
>> +};
>> +
>> +struct cvmx_pki_cluster_grp_config {
>> +	int grp_num;
>> +	u64 cluster_mask; /* Bit mask of clusters assigned to this cluster group */
>> +};
>> +
>> +struct cvmx_pki_sso_grp_config {
>> +	int group;
>> +	int priority;
>> +	int weight;
>> +	int affinity;
>> +	u64 core_mask;
>> +	u8 core_mask_set;
>> +};
>> +
>> +/* This is a per style structure for configuring port parameters.
>> + * It is a kind of profile which can be assigned to any port.
>> + * If multiple ports are assigned the same style, be aware that modifying
>> + * that style will modify the respective parameters for all the ports
>> + * which are using this style.
>> + */
>> +struct cvmx_pki_style_parm {
>> +	bool ip6_udp_opt;
>> +	bool lenerr_en;
>> +	bool maxerr_en;
>> +	bool minerr_en;
>> +	u8 lenerr_eqpad;
>> +	u8 minmax_sel;
>> +	bool qpg_dis_grptag;
>> +	bool fcs_strip;
>> +	bool fcs_chk;
>> +	bool rawdrp;
>> +	bool force_drop;
>> +	bool nodrop;
>> +	bool qpg_dis_padd;
>> +	bool qpg_dis_grp;
>> +	bool qpg_dis_aura;
>> +	u16 qpg_base;
>> +	enum cvmx_pki_qpg_qos qpg_qos;
>> +	u8 qpg_port_sh;
>> +	u8 qpg_port_msb;
>> +	u8 apad_nip;
>> +	u8 wqe_vs;
>> +	enum cvmx_sso_tag_type tag_type;
>> +	bool pkt_lend;
>> +	u8 wqe_hsz;
>> +	u16 wqe_skip;
>> +	u16 first_skip;
>> +	u16 later_skip;
>> +	enum cvmx_pki_cache_mode cache_mode;
>> +	u8 dis_wq_dat;
>> +	u64 mbuff_size;
>> +	bool len_lg;
>> +	bool len_lf;
>> +	bool len_le;
>> +	bool len_ld;
>> +	bool len_lc;
>> +	bool len_lb;
>> +	bool csum_lg;
>> +	bool csum_lf;
>> +	bool csum_le;
>> +	bool csum_ld;
>> +	bool csum_lc;
>> +	bool csum_lb;
>> +};
>> +
>> +/* This is a per style structure for configuring a port's tag configuration.
>> + * It is a kind of profile which can be assigned to any port.
>> + * If multiple ports are assigned the same style, be aware that modifying
>> + * that style will modify the respective parameters for all the ports
>> + * which are using this style. */
>> +enum cvmx_pki_mtag_ptrsel {
>> +	CVMX_PKI_MTAG_PTRSEL_SOP = 0,
>> +	CVMX_PKI_MTAG_PTRSEL_LA = 8,
>> +	CVMX_PKI_MTAG_PTRSEL_LB = 9,
>> +	CVMX_PKI_MTAG_PTRSEL_LC = 10,
>> +	CVMX_PKI_MTAG_PTRSEL_LD = 11,
>> +	CVMX_PKI_MTAG_PTRSEL_LE = 12,
>> +	CVMX_PKI_MTAG_PTRSEL_LF = 13,
>> +	CVMX_PKI_MTAG_PTRSEL_LG = 14,
>> +	CVMX_PKI_MTAG_PTRSEL_VL = 15,
>> +};
>> +
>> +struct cvmx_pki_mask_tag {
>> +	bool enable;
>> +	int base;   /* CVMX_PKI_MTAG_PTRSEL_XXX */
>> +	int offset; /* Offset from base. */
>> +	u64 val;    /* Bitmask:
>> +		1 = enable, 0 = disabled for each byte in the 64-byte array. */
>> +};
>> +
>> +struct cvmx_pki_style_tag_cfg {
>> +	struct cvmx_pki_tag_fields tag_fields;
>> +	struct cvmx_pki_mask_tag mask_tag[4];
>> +};
>> +
>> +struct cvmx_pki_style_config {
>> +	struct cvmx_pki_style_parm parm_cfg;
>> +	struct cvmx_pki_style_tag_cfg tag_cfg;
>> +};
>> +
>> +struct cvmx_pki_pkind_config {
>> +	u8 cluster_grp;
>> +	bool fcs_pres;
>> +	struct cvmx_pki_pkind_parse parse_en;
>> +	enum cvmx_pki_pkind_parse_mode initial_parse_mode;
>> +	u8 fcs_skip;
>> +	u8 inst_skip;
>> +	int initial_style;
>> +	bool custom_l2_hdr;
>> +	u8 l2_scan_offset;
>> +	u64 lg_scan_offset;
>> +};
>> +
>> +struct cvmx_pki_port_config {
>> +	struct cvmx_pki_pkind_config pkind_cfg;
>> +	struct cvmx_pki_style_config style_cfg;
>> +};
>> +
>> +struct cvmx_pki_global_parse {
>> +	u64 virt_pen : 1;
>> +	u64 clg_pen : 1;
>> +	u64 cl2_pen : 1;
>> +	u64 l4_pen : 1;
>> +	u64 il3_pen : 1;
>> +	u64 l3_pen : 1;
>> +	u64 mpls_pen : 1;
>> +	u64 fulc_pen : 1;
>> +	u64 dsa_pen : 1;
>> +	u64 hg_pen : 1;
>> +};
>> +
>> +struct cvmx_pki_tag_sec {
>> +	u16 dst6;
>> +	u16 src6;
>> +	u16 dst;
>> +	u16 src;
>> +};
>> +
>> +struct cvmx_pki_global_config {
>> +	u64 cluster_mask[CVMX_PKI_NUM_CLUSTER_GROUP_MAX];
>> +	enum cvmx_pki_stats_mode stat_mode;
>> +	enum cvmx_pki_fpa_wait fpa_wait;
>> +	struct cvmx_pki_global_parse gbl_pen;
>> +	struct cvmx_pki_tag_sec tag_secret;
>> +	struct cvmx_pki_frame_len frm_len[CVMX_PKI_NUM_FRAME_CHECK];
>> +	enum cvmx_pki_beltype ltype_map[CVMX_PKI_NUM_BELTYPE];
>> +	int pki_enable;
>> +};
>> +
>> +#define CVMX_PKI_PCAM_TERM_E_NONE_M	 0x0
>> +#define CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M 0x2
>> +#define CVMX_PKI_PCAM_TERM_E_HIGIGD_M	 0x4
>> +#define CVMX_PKI_PCAM_TERM_E_HIGIG_M	 0x5
>> +#define CVMX_PKI_PCAM_TERM_E_SMACH_M	 0x8
>> +#define CVMX_PKI_PCAM_TERM_E_SMACL_M	 0x9
>> +#define CVMX_PKI_PCAM_TERM_E_DMACH_M	 0xA
>> +#define CVMX_PKI_PCAM_TERM_E_DMACL_M	 0xB
>> +#define CVMX_PKI_PCAM_TERM_E_GLORT_M	 0x12
>> +#define CVMX_PKI_PCAM_TERM_E_DSA_M	 0x13
>> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M	 0x18
>> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M	 0x19
>> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M	 0x1A
>> +#define CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M	 0x1B
>> +#define CVMX_PKI_PCAM_TERM_E_MPLS0_M	 0x1E
>> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M	 0x1F
>> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M	 0x20
>> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPML_M	 0x21
>> +#define CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M	 0x22
>> +#define CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M	 0x23
>> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M	 0x24
>> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M	 0x25
>> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPML_M	 0x26
>> +#define CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M	 0x27
>> +#define CVMX_PKI_PCAM_TERM_E_LD_VNI_M	 0x28
>> +#define CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M 0x2B
>> +#define CVMX_PKI_PCAM_TERM_E_LF_SPI_M	 0x2E
>> +#define CVMX_PKI_PCAM_TERM_E_L4_SPORT_M	 0x2f
>> +#define CVMX_PKI_PCAM_TERM_E_L4_PORT_M	 0x30
>> +#define CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M 0x39
>> +
>> +enum cvmx_pki_term {
>> +	CVMX_PKI_PCAM_TERM_NONE = CVMX_PKI_PCAM_TERM_E_NONE_M,
>> +	CVMX_PKI_PCAM_TERM_L2_CUSTOM = CVMX_PKI_PCAM_TERM_E_L2_CUSTOM_M,
>> +	CVMX_PKI_PCAM_TERM_HIGIGD = CVMX_PKI_PCAM_TERM_E_HIGIGD_M,
>> +	CVMX_PKI_PCAM_TERM_HIGIG = CVMX_PKI_PCAM_TERM_E_HIGIG_M,
>> +	CVMX_PKI_PCAM_TERM_SMACH = CVMX_PKI_PCAM_TERM_E_SMACH_M,
>> +	CVMX_PKI_PCAM_TERM_SMACL = CVMX_PKI_PCAM_TERM_E_SMACL_M,
>> +	CVMX_PKI_PCAM_TERM_DMACH = CVMX_PKI_PCAM_TERM_E_DMACH_M,
>> +	CVMX_PKI_PCAM_TERM_DMACL = CVMX_PKI_PCAM_TERM_E_DMACL_M,
>> +	CVMX_PKI_PCAM_TERM_GLORT = CVMX_PKI_PCAM_TERM_E_GLORT_M,
>> +	CVMX_PKI_PCAM_TERM_DSA = CVMX_PKI_PCAM_TERM_E_DSA_M,
>> +	CVMX_PKI_PCAM_TERM_ETHTYPE0 = CVMX_PKI_PCAM_TERM_E_ETHTYPE0_M,
>> +	CVMX_PKI_PCAM_TERM_ETHTYPE1 = CVMX_PKI_PCAM_TERM_E_ETHTYPE1_M,
>> +	CVMX_PKI_PCAM_TERM_ETHTYPE2 = CVMX_PKI_PCAM_TERM_E_ETHTYPE2_M,
>> +	CVMX_PKI_PCAM_TERM_ETHTYPE3 = CVMX_PKI_PCAM_TERM_E_ETHTYPE3_M,
>> +	CVMX_PKI_PCAM_TERM_MPLS0 = CVMX_PKI_PCAM_TERM_E_MPLS0_M,
>> +	CVMX_PKI_PCAM_TERM_L3_SIPHH = CVMX_PKI_PCAM_TERM_E_L3_SIPHH_M,
>> +	CVMX_PKI_PCAM_TERM_L3_SIPMH = CVMX_PKI_PCAM_TERM_E_L3_SIPMH_M,
>> +	CVMX_PKI_PCAM_TERM_L3_SIPML = CVMX_PKI_PCAM_TERM_E_L3_SIPML_M,
>> +	CVMX_PKI_PCAM_TERM_L3_SIPLL = CVMX_PKI_PCAM_TERM_E_L3_SIPLL_M,
>> +	CVMX_PKI_PCAM_TERM_L3_FLAGS = CVMX_PKI_PCAM_TERM_E_L3_FLAGS_M,
>> +	CVMX_PKI_PCAM_TERM_L3_DIPHH = CVMX_PKI_PCAM_TERM_E_L3_DIPHH_M,
>> +	CVMX_PKI_PCAM_TERM_L3_DIPMH = CVMX_PKI_PCAM_TERM_E_L3_DIPMH_M,
>> +	CVMX_PKI_PCAM_TERM_L3_DIPML = CVMX_PKI_PCAM_TERM_E_L3_DIPML_M,
>> +	CVMX_PKI_PCAM_TERM_L3_DIPLL = CVMX_PKI_PCAM_TERM_E_L3_DIPLL_M,
>> +	CVMX_PKI_PCAM_TERM_LD_VNI = CVMX_PKI_PCAM_TERM_E_LD_VNI_M,
>> +	CVMX_PKI_PCAM_TERM_IL3_FLAGS = CVMX_PKI_PCAM_TERM_E_IL3_FLAGS_M,
>> +	CVMX_PKI_PCAM_TERM_LF_SPI = CVMX_PKI_PCAM_TERM_E_LF_SPI_M,
>> +	CVMX_PKI_PCAM_TERM_L4_PORT = CVMX_PKI_PCAM_TERM_E_L4_PORT_M,
>> +	CVMX_PKI_PCAM_TERM_L4_SPORT = CVMX_PKI_PCAM_TERM_E_L4_SPORT_M,
>> +	CVMX_PKI_PCAM_TERM_LG_CUSTOM = CVMX_PKI_PCAM_TERM_E_LG_CUSTOM_M
>> +};
>> +
>> +#define CVMX_PKI_DMACH_SHIFT	  32
>> +#define CVMX_PKI_DMACH_MASK	  cvmx_build_mask(16)
>> +#define CVMX_PKI_DMACL_MASK	  CVMX_PKI_DATA_MASK_32
>> +#define CVMX_PKI_DATA_MASK_32	  cvmx_build_mask(32)
>> +#define CVMX_PKI_DATA_MASK_16	  cvmx_build_mask(16)
>> +#define CVMX_PKI_DMAC_MATCH_EXACT cvmx_build_mask(48)
>> +
>> +struct cvmx_pki_pcam_input {
>> +	u64 style;
>> +	u64 style_mask; /* bits: 1-match, 0-don't care */
>> +	enum cvmx_pki_term field;
>> +	u32 field_mask; /* bits: 1-match, 0-don't care */
>> +	u64 data;
>> +	u64 data_mask; /* bits: 1-match, 0-don't care */
>> +};
>> +
>> +struct cvmx_pki_pcam_action {
>> +	enum cvmx_pki_parse_mode_chg parse_mode_chg;
>> +	enum cvmx_pki_layer_type layer_type_set;
>> +	int style_add;
>> +	int parse_flag_set;
>> +	int pointer_advance;
>> +};
>> +
>> +struct cvmx_pki_pcam_config {
>> +	int in_use;
>> +	int entry_num;
>> +	u64 cluster_mask;
>> +	struct cvmx_pki_pcam_input pcam_input;
>> +	struct cvmx_pki_pcam_action pcam_action;
>> +};
>> +
>> +/**
>> + * Status statistics for a port
>> + */
>> +struct cvmx_pki_port_stats {
>> +	u64 dropped_octets;
>> +	u64 dropped_packets;
>> +	u64 pci_raw_packets;
>> +	u64 octets;
>> +	u64 packets;
>> +	u64 multicast_packets;
>> +	u64 broadcast_packets;
>> +	u64 len_64_packets;
>> +	u64 len_65_127_packets;
>> +	u64 len_128_255_packets;
>> +	u64 len_256_511_packets;
>> +	u64 len_512_1023_packets;
>> +	u64 len_1024_1518_packets;
>> +	u64 len_1519_max_packets;
>> +	u64 fcs_align_err_packets;
>> +	u64 runt_packets;
>> +	u64 runt_crc_packets;
>> +	u64 oversize_packets;
>> +	u64 oversize_crc_packets;
>> +	u64 inb_packets;
>> +	u64 inb_octets;
>> +	u64 inb_errors;
>> +	u64 mcast_l2_red_packets;
>> +	u64 bcast_l2_red_packets;
>> +	u64 mcast_l3_red_packets;
>> +	u64 bcast_l3_red_packets;
>> +};
>> +
>> +/**
>> + * PKI Packet Instruction Header Structure (PKI_INST_HDR_S)
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 w : 1;    /* INST_HDR size: 0 = 2 bytes, 1 = 4 or 8 bytes */
>> +		u64 raw : 1;  /* RAW packet indicator in WQE[RAW]: 1 = enable */
>> +		u64 utag : 1; /* Use INST_HDR[TAG] to compute WQE[TAG]: 1 = enable */
>> +		u64 uqpg : 1; /* Use INST_HDR[QPG] to compute QPG: 1 = enable */
>> +		u64 rsvd1 : 1;
>> +		u64 pm : 3; /* Packet parsing mode. Legal values = 0x0..0x7 */
>> +		u64 sl : 8; /* Number of bytes in INST_HDR. */
>> +		/* The following fields are not present, if INST_HDR[W] = 0: */
>> +		u64 utt : 1; /* Use INST_HDR[TT] to compute WQE[TT]: 1 = enable */
>> +		u64 tt : 2;  /* INST_HDR[TT] => WQE[TT], if INST_HDR[UTT] = 1 */
>> +		u64 rsvd2 : 2;
>> +		u64 qpg : 11; /* INST_HDR[QPG] => QPG, if INST_HDR[UQPG] = 1 */
>> +		u64 tag : 32; /* INST_HDR[TAG] => WQE[TAG], if INST_HDR[UTAG] = 1 */
>> +	} s;
>> +} cvmx_pki_inst_hdr_t;
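As a quick illustration of the bitfield layout (the field values are
hypothetical, not from the patch):

	cvmx_pki_inst_hdr_t ihdr;

	ihdr.u64 = 0;
	ihdr.s.w = 0;	/* short 2-byte INST_HDR form */
	ihdr.s.raw = 1;	/* mark the packet RAW in WQE[RAW] */
	ihdr.s.sl = 2;	/* INST_HDR occupies 2 bytes in the packet */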
>> +
>> +/**
>> + * This function assigns the clusters to a group; later a pkind can be
>> + * configured to use that group depending on the number of clusters the
>> + * pkind would use. A given cluster can only be enabled in a single cluster group.
>> + * The number of clusters assigned to that group determines how many engines
>> + * can work in parallel to process the packet. Each cluster can process x MPPS.
>> + *
>> + * @param node	Node
>> + * @param cluster_group Group to attach clusters to.
>> + * @param cluster_mask The mask of clusters which needs to be assigned to the group.
>> + */
>> +static inline int cvmx_pki_attach_cluster_to_group(int node, u64 cluster_group, u64 cluster_mask)
>> +{
>> +	cvmx_pki_icgx_cfg_t pki_cl_grp;
>> +
>> +	if (cluster_group >= CVMX_PKI_NUM_CLUSTER_GROUP) {
>> +		debug("ERROR: config cluster group %d", (int)cluster_group);
>> +		return -1;
>> +	}
>> +	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group));
>> +	pki_cl_grp.s.clusters = cluster_mask;
>> +	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cluster_group), pki_cl_grp.u64);
>> +	return 0;
>> +}
>> +
>> +static inline void cvmx_pki_write_global_parse(int node, struct cvmx_pki_global_parse gbl_pen)
>> +{
>> +	cvmx_pki_gbl_pen_t gbl_pen_reg;
>> +
>> +	gbl_pen_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_GBL_PEN);
>> +	gbl_pen_reg.s.virt_pen = gbl_pen.virt_pen;
>> +	gbl_pen_reg.s.clg_pen = gbl_pen.clg_pen;
>> +	gbl_pen_reg.s.cl2_pen = gbl_pen.cl2_pen;
>> +	gbl_pen_reg.s.l4_pen = gbl_pen.l4_pen;
>> +	gbl_pen_reg.s.il3_pen = gbl_pen.il3_pen;
>> +	gbl_pen_reg.s.l3_pen = gbl_pen.l3_pen;
>> +	gbl_pen_reg.s.mpls_pen = gbl_pen.mpls_pen;
>> +	gbl_pen_reg.s.fulc_pen = gbl_pen.fulc_pen;
>> +	gbl_pen_reg.s.dsa_pen = gbl_pen.dsa_pen;
>> +	gbl_pen_reg.s.hg_pen = gbl_pen.hg_pen;
>> +	cvmx_write_csr_node(node, CVMX_PKI_GBL_PEN, gbl_pen_reg.u64);
>> +}
>> +
>> +static inline void cvmx_pki_write_tag_secret(int node, struct cvmx_pki_tag_sec tag_secret)
>> +{
>> +	cvmx_pki_tag_secret_t tag_secret_reg;
>> +
>> +	tag_secret_reg.u64 = cvmx_read_csr_node(node, CVMX_PKI_TAG_SECRET);
>> +	tag_secret_reg.s.dst6 = tag_secret.dst6;
>> +	tag_secret_reg.s.src6 = tag_secret.src6;
>> +	tag_secret_reg.s.dst = tag_secret.dst;
>> +	tag_secret_reg.s.src = tag_secret.src;
>> +	cvmx_write_csr_node(node, CVMX_PKI_TAG_SECRET, tag_secret_reg.u64);
>> +}
>> +
>> +static inline void cvmx_pki_write_ltype_map(int node, enum cvmx_pki_layer_type layer,
>> +					    enum cvmx_pki_beltype backend)
>> +{
>> +	cvmx_pki_ltypex_map_t ltype_map;
>> +
>> +	if (layer > CVMX_PKI_LTYPE_E_MAX || backend > CVMX_PKI_BELTYPE_MAX) {
>> +		debug("ERROR: invalid ltype beltype mapping\n");
>> +		return;
>> +	}
>> +	ltype_map.u64 = cvmx_read_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer));
>> +	ltype_map.s.beltype = backend;
>> +	cvmx_write_csr_node(node, CVMX_PKI_LTYPEX_MAP(layer), ltype_map.u64);
>> +}
>> +
>> +/**
>> + * This function enables the cluster group to start parsing.
>> + *
>> + * @param node    Node number.
>> + * @param cl_grp  Cluster group to enable parsing.
>> + */
>> +static inline int cvmx_pki_parse_enable(int node, unsigned int cl_grp)
>> +{
>> +	cvmx_pki_icgx_cfg_t pki_cl_grp;
>> +
>> +	if (cl_grp >= CVMX_PKI_NUM_CLUSTER_GROUP) {
>> +		debug("ERROR: pki parse en group %d", (int)cl_grp);
>> +		return -1;
>> +	}
>> +	pki_cl_grp.u64 = cvmx_read_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp));
>> +	pki_cl_grp.s.pena = 1;
>> +	cvmx_write_csr_node(node, CVMX_PKI_ICGX_CFG(cl_grp), pki_cl_grp.u64);
>> +	return 0;
>> +}
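A minimal bring-up sketch combining the two helpers above (node and group
numbers are illustrative):

	/* attach every cluster to cluster group 0 on node 0, then
	 * let the group start parsing */
	cvmx_pki_attach_cluster_to_group(0, 0, CVMX_PKI_CLUSTER_ALL);
	cvmx_pki_parse_enable(0, 0);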
>> +
>> +/**
>> + * This function enables the PKI to send bpid level backpressure to CN78XX inputs.
>> + *
>> + * @param node Node number.
>> + */
>> +static inline void cvmx_pki_enable_backpressure(int node)
>> +{
>> +	cvmx_pki_buf_ctl_t pki_buf_ctl;
>> +
>> +	pki_buf_ctl.u64 = cvmx_read_csr_node(node, CVMX_PKI_BUF_CTL);
>> +	pki_buf_ctl.s.pbp_en = 1;
>> +	cvmx_write_csr_node(node, CVMX_PKI_BUF_CTL, pki_buf_ctl.u64);
>> +}
>> +
>> +/**
>> + * Clear the statistics counters for a port.
>> + *
>> + * @param node Node number.
>> + * @param port Port number (ipd_port) to get statistics for.
>> + *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
>> + */
>> +void cvmx_pki_clear_port_stats(int node, u64 port);
>> +
>> +/**
>> + * Get the status counters for index from PKI.
>> + *
>> + * @param node	  Node number.
>> + * @param index   PKIND number, if PKI_STATS_CTL:mode = 0 or
>> + *     style(flow) number, if PKI_STATS_CTL:mode = 1
>> + * @param status  Where to put the results.
>> + */
>> +void cvmx_pki_get_stats(int node, int index, struct cvmx_pki_port_stats *status);
>> +
>> +/**
>> + * Get the statistics counters for a port.
>> + *
>> + * @param node	 Node number
>> + * @param port   Port number (ipd_port) to get statistics for.
>> + *    Make sure PKI_STATS_CTL:mode is set to 0 for collecting per port/pkind stats.
>> + * @param status Where to put the results.
>> + */
>> +static inline void cvmx_pki_get_port_stats(int node, u64 port, struct cvmx_pki_port_stats *status)
>> +{
>> +	int xipd = cvmx_helper_node_to_ipd_port(node, port);
>> +	int xiface = cvmx_helper_get_interface_num(xipd);
>> +	int index = cvmx_helper_get_interface_index_num(port);
>> +	int pknd = cvmx_helper_get_pknd(xiface, index);
>> +
>> +	cvmx_pki_get_stats(node, pknd, status);
>> +}
>> +
>> +/**
>> + * Get the statistics counters for a flow represented by style in
>> PKI.
>> + *
>> + * @param node Node number.
>> + * @param style_num Style number to get statistics for.
>> + *    Make sure PKI_STATS_CTL:mode is set to 1 for collecting per style/flow stats.
>> + * @param status Where to put the results.
>> + */
>> +static inline void cvmx_pki_get_flow_stats(int node, u64 style_num,
>> +					   struct cvmx_pki_port_stats *status)
>> +{
>> +	cvmx_pki_get_stats(node, style_num, status);
>> +}
>> +
>> +/**
>> + * Show integrated PKI configuration.
>> + *
>> + * @param node	   node number
>> + */
>> +int cvmx_pki_config_dump(unsigned int node);
>> +
>> +/**
>> + * Show integrated PKI statistics.
>> + *
>> + * @param node	   node number
>> + */
>> +int cvmx_pki_stats_dump(unsigned int node);
>> +
>> +/**
>> + * Clear PKI statistics.
>> + *
>> + * @param node	   node number
>> + */
>> +void cvmx_pki_stats_clear(unsigned int node);
>> +
>> +/**
>> + * This function enables PKI.
>> + *
>> + * @param node	 node to enable pki in.
>> + */
>> +void cvmx_pki_enable(int node);
>> +
>> +/**
>> + * This function disables PKI.
>> + *
>> + * @param node	node to disable pki in.
>> + */
>> +void cvmx_pki_disable(int node);
>> +
>> +/**
>> + * This function soft resets PKI.
>> + *
>> + * @param node	node to reset pki in.
>> + */
>> +void cvmx_pki_reset(int node);
>> +
>> +/**
>> + * This function sets the clusters in PKI.
>> + *
>> + * @param node	node to set clusters in.
>> + */
>> +int cvmx_pki_setup_clusters(int node);
>> +
>> +/**
>> + * This function reads global configuration of PKI block.
>> + *
>> + * @param node    Node number.
>> + * @param gbl_cfg Pointer to struct to read global configuration
>> + */
>> +void cvmx_pki_read_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
>> +
>> +/**
>> + * This function writes global configuration of PKI into hw.
>> + *
>> + * @param node    Node number.
>> + * @param gbl_cfg Pointer to struct to global configuration
>> + */
>> +void cvmx_pki_write_global_config(int node, struct cvmx_pki_global_config *gbl_cfg);
>> +
>> +/**
>> + * This function reads the per pkind parameters in hardware which define how
>> + * the incoming packet is processed.
>> + *
>> + * @param node   Node number.
>> + * @param pkind  PKI supports a large number of incoming interfaces, and packets
>> + *     arriving on different interfaces or channels may want to be processed
>> + *     differently. PKI uses the pkind to determine how the incoming packet
>> + *     is processed.
>> + * @param pkind_cfg	Pointer to struct containing the pkind configuration read
>> + *     from hardware.
>> + */
>> +int cvmx_pki_read_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
>> +
>> +/**
>> + * This function writes the per pkind parameters in hardware which define how
>> + * the incoming packet is processed.
>> + *
>> + * @param node   Node number.
>> + * @param pkind  PKI supports a large number of incoming interfaces, and packets
>> + *     arriving on different interfaces or channels may want to be processed
>> + *     differently. PKI uses the pkind to determine how the incoming packet
>> + *     is processed.
>> + * @param pkind_cfg	Pointer to struct containing the pkind configuration that
>> + *     needs to be written to hardware.
>> + */
>> +int cvmx_pki_write_pkind_config(int node, int pkind, struct cvmx_pki_pkind_config *pkind_cfg);
>> +
>> +/**
>> + * This function reads parameters associated with tag configuration in hardware.
>> + *
>> + * @param node	 Node number.
>> + * @param style  Style to configure tag for.
>> + * @param cluster_mask  Mask of clusters to configure the style for.
>> + * @param tag_cfg  Pointer to tag configuration struct.
>> + */
>> +void cvmx_pki_read_tag_config(int node, int style, u64 cluster_mask,
>> +			      struct cvmx_pki_style_tag_cfg *tag_cfg);
>> +
>> +/**
>> + * This function writes/configures parameters associated with tag
>> + * configuration in hardware.
>> + *
>> + * @param node  Node number.
>> + * @param style  Style to configure tag for.
>> + * @param cluster_mask  Mask of clusters to configure the style for.
>> + * @param tag_cfg  Pointer to tag configuration struct.
>> + */
>> +void cvmx_pki_write_tag_config(int node, int style, u64 cluster_mask,
>> +			       struct cvmx_pki_style_tag_cfg *tag_cfg);
>> +
>> +/**
>> + * This function reads parameters associated with style in hardware.
>> + *
>> + * @param node	Node number.
>> + * @param style  Style to read from.
>> + * @param cluster_mask  Mask of clusters style belongs to.
>> + * @param style_cfg  Pointer to style config struct.
>> + */
>> +void cvmx_pki_read_style_config(int node, int style, u64 cluster_mask,
>> +				struct cvmx_pki_style_config *style_cfg);
>> +
>> +/**
>> + * This function writes/configures parameters associated with style in hardware.
>> + *
>> + * @param node  Node number.
>> + * @param style  Style to configure.
>> + * @param cluster_mask  Mask of clusters to configure the style for.
>> + * @param style_cfg  Pointer to style config struct.
>> + */
>> +void cvmx_pki_write_style_config(int node, u64 style, u64 cluster_mask,
>> +				 struct cvmx_pki_style_config *style_cfg);
>> +/**
>> + * This function reads qpg entry at specified offset from qpg table
>> + *
>> + * @param node  Node number.
>> + * @param offset  Offset in qpg table to read from.
>> + * @param qpg_cfg  Pointer to structure containing qpg values
>> + */
>> +int cvmx_pki_read_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
>> +
>> +/**
>> + * This function writes qpg entry at specified offset in qpg table
>> + *
>> + * @param node  Node number.
>> + * @param offset  Offset in qpg table to write to.
>> + * @param qpg_cfg  Pointer to structure containing qpg values.
>> + */
>> +void cvmx_pki_write_qpg_entry(int node, int offset, struct cvmx_pki_qpg_config *qpg_cfg);
>> +
>> +/**
>> + * This function writes a pcam entry at the given offset in the pcam table in hardware
>> + *
>> + * @param node  Node number.
>> + * @param index	 Offset in pcam table.
>> + * @param cluster_mask  Mask of clusters in which to write the pcam entry.
>> + * @param input  Input keys to pcam match passed as struct.
>> + * @param action  PCAM match action passed as struct
>> + */
>> +int cvmx_pki_pcam_write_entry(int node, int index, u64 cluster_mask,
>> +			      struct cvmx_pki_pcam_input input, struct cvmx_pki_pcam_action action);
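To show how the input/action pair fits together, a hedged sketch; the entry
allocation, the EtherType data/mask encoding and the style arithmetic are
assumptions and should be checked against the PKI hardware manual:

	struct cvmx_pki_pcam_input in = {
		.style = 0,
		.style_mask = 0,	/* match any style */
		.field = CVMX_PKI_PCAM_TERM_ETHTYPE0,
		.field_mask = 0xff,
		.data = 0x08000000,	/* EtherType 0x0800 (IPv4), assumed left-aligned */
		.data_mask = 0xffff0000,
	};
	struct cvmx_pki_pcam_action act = {
		.parse_mode_chg = CVMX_PKI_PARSE_NO_CHG,
		.style_add = 1,		/* bump matching packets to the next style */
	};
	int entry = cvmx_pki_pcam_entry_alloc(0, -1, 0, CVMX_PKI_CLUSTER_ALL);

	if (entry >= 0)
		cvmx_pki_pcam_write_entry(0, entry, CVMX_PKI_CLUSTER_ALL, in, act);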
>> +/**
>> + * Configures the channel which will receive backpressure from the specified bpid.
>> + * Each channel listens for backpressure on a specific bpid.
>> + * Each bpid can backpressure multiple channels.
>> + * @param node  Node number.
>> + * @param bpid  BPID from which channel will receive backpressure.
>> + * @param channel  Channel number to receive backpressure.
>> + */
>> +int cvmx_pki_write_channel_bpid(int node, int channel, int bpid);
>> +
>> +/**
>> + * Configures the bpid on which the specified channel will
>> + * assert backpressure.
>> + * Each bpid receives backpressure from auras.
>> + * Multiple auras can backpressure single bpid.
>> + * @param node  Node number.
>> + * @param aura  Number which will assert backpressure on that bpid.
>> + * @param bpid  To assert backpressure on.
>> + */
>> +int cvmx_pki_write_aura_bpid(int node, int aura, int bpid);
>> +
>> +/**
>> + * Enables/Disables QoS (RED Drop, Tail Drop & backpressure) for the PKI aura.
>> + *
>> + * @param node  Node number
>> + * @param aura  To enable/disable QoS on.
>> + * @param ena_red  Enable/Disable RED drop between pass and drop level
>> + *    1-enable 0-disable
>> + * @param ena_drop  Enable/disable tail drop when max drop level exceeds
>> + *    1-enable 0-disable
>> + * @param ena_bp  Enable/Disable asserting backpressure on bpid when
>> + *    max DROP level exceeds.
>> + *    1-enable 0-disable
>> + */
>> +int cvmx_pki_enable_aura_qos(int node, int aura, bool ena_red, bool ena_drop, bool ena_bp);
>> +
>> +/**
>> + * This function gives the initial style used by that pkind.
>> + *
>> + * @param node  Node number.
>> + * @param pkind  PKIND number.
>> + */
>> +int cvmx_pki_get_pkind_style(int node, int pkind);
>> +
>> +/**
>> + * This function sets the wqe buffer mode. The first packet data buffer can
>> + * reside either in the same buffer as the wqe OR it can go in a separate
>> + * buffer. If the latter mode is used, make sure software allocates enough
>> + * buffers to have the wqe separate from the packet data.
>> + *
>> + * @param node  Node number.
>> + * @param style  Style to configure.
>> + * @param pkt_outside_wqe
>> + *    0 = The packet link pointer will be at word [FIRST_SKIP] immediately
>> + *    followed by packet data, in the same buffer as the work queue entry.
>> + *    1 = The packet link pointer will be at word [FIRST_SKIP] in a new
>> + *    buffer separate from the work queue entry. Words following the
>> + *    WQE in the same cache line will be zeroed, other lines in the
>> + *    buffer will not be modified and will retain stale data (from the
>> + *    buffer's previous use). This setting may decrease the peak PKI
>> + *    performance by up to half on small packets.
>> + */
>> +void cvmx_pki_set_wqe_mode(int node, u64 style, bool pkt_outside_wqe);
>> +
>> +/**
>> + * This function sets the Packet mode of all ports and styles to little-endian.
>> + * It changes write operations of packet data to L2C to be in little-endian.
>> + * Does not change the WQE header format, which is properly endian neutral.
>> + *
>> + * @param node  Node number.
>> + * @param style  Style to configure.
>> + */
>> +void cvmx_pki_set_little_endian(int node, u64 style);
>> +
>> +/**
>> + * Enables/Disables L2 length error check and max & min frame length checks.
>> + *
>> + * @param node  Node number.
>> + * @param pknd  PKIND to enable/disable the error checks for.
>> + * @param l2len_err	 L2 length error check enable.
>> + * @param maxframe_err	Max frame error check enable.
>> + * @param minframe_err	Min frame error check enable.
>> + *    1 -- Enable error checks
>> + *    0 -- Disable error checks
>> + */
>> +void cvmx_pki_endis_l2_errs(int node, int pknd, bool l2len_err, bool maxframe_err,
>> +			    bool minframe_err);
>> +
>> +/**
>> + * Enables/Disables fcs check and fcs stripping on the pkind.
>> + *
>> + * @param node  Node number.
>> + * @param pknd  PKIND to apply settings on.
>> + * @param fcs_chk  Enable/disable fcs check.
>> + *    1 -- enable fcs error check.
>> + *    0 -- disable fcs error check.
>> + * @param fcs_strip	 Strip L2 FCS bytes from packet, decrease WQE[LEN] by 4 bytes
>> + *    1 -- strip L2 FCS.
>> + *    0 -- Do not strip L2 FCS.
>> + */
>> +void cvmx_pki_endis_fcs_check(int node, int pknd, bool fcs_chk, bool fcs_strip);
>> +
>> +/**
>> + * This function shows the qpg table entries, read directly from hardware.
>> + *
>> + * @param node  Node number.
>> + * @param num_entry  Number of entries to print.
>> + */
>> +void cvmx_pki_show_qpg_entries(int node, u16 num_entry);
>> +
>> +/**
>> + * This function shows the pcam table in raw format read directly from hardware.
>> + *
>> + * @param node  Node number.
>> + */
>> +void cvmx_pki_show_pcam_entries(int node);
>> +
>> +/**
>> + * This function shows the valid entries in readable format,
>> + * read directly from hardware.
>> + *
>> + * @param node  Node number.
>> + */
>> +void cvmx_pki_show_valid_pcam_entries(int node);
>> +
>> +/**
>> + * This function shows the pkind attributes in readable format,
>> + * read directly from hardware.
>> + * @param node  Node number.
>> + * @param pkind  PKIND number to print.
>> + */
>> +void cvmx_pki_show_pkind_attributes(int node, int pkind);
>> +
>> +/**
>> + * @INTERNAL
>> + * This function is called by cvmx_helper_shutdown() to extract all FPA buffers
>> + * out of the PKI. After this function completes, all FPA buffers that were
>> + * prefetched by PKI will be in the appropriate FPA pool.
>> + * This function does not reset the PKI.
>> + * WARNING: It is very important that PKI be reset soon after a call to this function.
>> + *
>> + * @param node  Node number.
>> + */
>> +void __cvmx_pki_free_ptr(int node);
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
>> new file mode 100644
>> index 000000000000..1fb49b3fb6de
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pko-internal-ports-range.h
>> @@ -0,0 +1,43 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __CVMX_INTERNAL_PORTS_RANGE__
>> +#define __CVMX_INTERNAL_PORTS_RANGE__
>> +
>> +/*
>> + * Allocates a block of internal ports for the specified interface/port
>> + *
>> + * @param  interface  the interface for which the internal ports are requested
>> + * @param  port       the index of the port within the interface for which the
>> + *                    internal ports are requested.
>> + * @param  count      the number of internal ports requested
>> + *
>> + * @return  0 on success
>> + *         -1 on failure
>> + */
>> +int cvmx_pko_internal_ports_alloc(int interface, int port, u64 count);
>> +
>> +/*
>> + * Free the internal ports associated with the specified interface/port
>> + *
>> + * @param  interface  the interface for which the internal ports are requested
>> + * @param  port       the index of the port within the interface for which the
>> + *                    internal ports are requested.
>> + *
>> + * @return  0 on success
>> + *         -1 on failure
>> + */
>> +int cvmx_pko_internal_ports_free(int interface, int port);
>> +
>> +/*
>> + * Frees up all the allocated internal ports.
>> + */
>> +void cvmx_pko_internal_ports_range_free_all(void);
>> +
>> +void cvmx_pko_internal_ports_range_show(void);
>> +
>> +int __cvmx_pko_internal_ports_range_init(void);
>> +
>> +#endif
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
>> new file mode 100644
>> index 000000000000..5f8398904953
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pko3-queue.h
>> @@ -0,0 +1,175 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __CVMX_PKO3_QUEUE_H__
>> +#define __CVMX_PKO3_QUEUE_H__
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Find or allocate global port/dq map table
>> + * which is a named table containing entries for
>> + * all possible OCI nodes.
>> + *
>> + * The table global pointer is stored in core-local variable
>> + * so that every core will call this function once, on first use.
>> + */
>> +int __cvmx_pko3_dq_table_setup(void);
>> +
>> +/*
>> + * Get the base Descriptor Queue number for an IPD port on the local node
>> + */
>> +int cvmx_pko3_get_queue_base(int ipd_port);
>> +
>> +/*
>> + * Get the number of Descriptor Queues assigned for an IPD port
>> + */
>> +int cvmx_pko3_get_queue_num(int ipd_port);
>> +
>> +/**
>> + * Get L1/Port Queue number assigned to interface port.
>> + *
>> + * @param xiface is interface number.
>> + * @param index is port index.
>> + */
>> +int cvmx_pko3_get_port_queue(int xiface, int index);
>> +
>> +/*
>> + * Configure L3 through L5 Scheduler Queues and Descriptor Queues
>> + *
>> + * The Scheduler Queues in Levels 3 to 5 and Descriptor Queues are
>> + * configured one-to-one or many-to-one to a single parent Scheduler
>> + * Queue. The level of the parent SQ is specified in an argument,
>> + * as well as the number of children to attach to the specific parent.
>> + * The children can have fair round-robin or priority-based scheduling
>> + * when multiple children are assigned to a single parent.
>> + *
>> + * @param node is the OCI node location for the queues to be configured
>> + * @param parent_level is the level of the parent queue, 2 to 5.
>> + * @param parent_queue is the number of the parent Scheduler Queue
>> + * @param child_base is the number of the first child SQ or DQ to assign
>> + *        to the parent
>> + * @param child_count is the number of consecutive children to assign
>> + * @param stat_prio_count is the priority setting for the children L2 SQs
>> + *
>> + * If <stat_prio_count> is -1, the Ln children will have an equal Round-Robin
>> + * relationship with each other. If <stat_prio_count> is 0, all Ln children
>> + * will be arranged in Weighted-Round-Robin, with the first having the most
>> + * precedence. If <stat_prio_count> is between 1 and 8, it indicates how
>> + * many children will have static priority settings (with the first having
>> + * the most precedence), with the remaining Ln children having WRR scheduling.
>> + *
>> + * @returns 0 on success, -1 on failure.
>> + *
>> + * Note: this function supports the configuration of the node-local unit only.
>> + */
>> +int cvmx_pko3_sq_config_children(unsigned int node, unsigned int parent_level,
>> +				 unsigned int parent_queue, unsigned int child_base,
>> +				 unsigned int child_count, int stat_prio_count);
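
To make the child/parent plumbing concrete, here is a minimal,
purely illustrative call sequence (node, queue and child numbers are
invented, not taken from any board code):

	/* Attach 8 children (SQs 32..39) to L2 SQ #4 on node 0, all in
	 * Weighted-Round-Robin (stat_prio_count == 0).
	 */
	int rc = cvmx_pko3_sq_config_children(0, 2, 4, 32, 8, 0);

	/* Same topology, but give the first two children static priority
	 * and leave the rest in WRR (stat_prio_count == 2).
	 */
	if (rc == 0)
		rc = cvmx_pko3_sq_config_children(0, 2, 4, 32, 8, 2);
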
>> +
>> +/*
>> + * @INTERNAL
>> + * Register a range of Descriptor Queues with an interface port
>> + *
>> + * This function populates the DQ-to-IPD translation table
>> + * used by the application to retrieve the DQ range (typically ordered
>> + * by priority) for a given IPD-port, which is either a physical port,
>> + * or a channel on a channelized interface (i.e. ILK).
>> + *
>> + * @param xiface is the physical interface number
>> + * @param index is either a physical port on an interface
>> + *        or a channel of an ILK interface
>> + * @param dq_base is the first Descriptor Queue number in a consecutive range
>> + * @param dq_count is the number of consecutive Descriptor Queues leading
>> + *        to the same channel or port.
>> + *
>> + * Only a consecutive range of Descriptor Queues can be associated with any
>> + * given channel/port, and usually they are ordered from most to least
>> + * in terms of scheduling priority.
>> + *
>> + * Note: this function only populates the node-local translation table.
>> + *
>> + * @returns 0 on success, -1 on failure.
>> + */
>> +int __cvmx_pko3_ipd_dq_register(int xiface, int index, unsigned int dq_base,
>> +				unsigned int dq_count);
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Unregister DQs associated with CHAN_E (IPD port)
>> + */
>> +int __cvmx_pko3_ipd_dq_unregister(int xiface, int index);
>> +
>> +/*
>> + * Map channel number in PKO
>> + *
>> + * @param node is to specify the node to which this configuration is applied.
>> + * @param pq_num specifies the Port Queue (i.e. L1) queue number.
>> + * @param l2_l3_q_num specifies the L2/L3 queue number.
>> + * @param channel specifies the channel number to map to the queue.
>> + *
>> + * The channel assignment applies to L2 or L3 Shaper Queues depending
>> + * on the setting of channel credit level.
>> + *
>> + * @return none.
>> + */
>> +void cvmx_pko3_map_channel(unsigned int node, unsigned int pq_num,
>> +			   unsigned int l2_l3_q_num, u16 channel);
>> +
>> +int cvmx_pko3_pq_config(unsigned int node, unsigned int mac_num,
>> +			unsigned int pq_num);
>> +
>> +int cvmx_pko3_port_cir_set(unsigned int node, unsigned int pq_num,
>> +			   unsigned long rate_kbips,
>> +			   unsigned int burst_bytes, int adj_bytes);
>> +int cvmx_pko3_dq_cir_set(unsigned int node, unsigned int pq_num,
>> +			 unsigned long rate_kbips, unsigned int burst_bytes);
>> +int cvmx_pko3_dq_pir_set(unsigned int node, unsigned int pq_num,
>> +			 unsigned long rate_kbips, unsigned int burst_bytes);
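
A quick usage sketch for the shaper setters (queue number, rates and
burst size invented for illustration; I am reading the rate unit as
kbit/s going by the rate_kbips parameter name):

	/* Cap DQ #100 on node 0 at ~1 Gbit/s committed rate with a
	 * 64 KiB burst, and allow peaks up to ~2 Gbit/s.
	 */
	cvmx_pko3_dq_cir_set(0, 100, 1000000UL, 65536);
	cvmx_pko3_dq_pir_set(0, 100, 2000000UL, 65536);
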
>> +typedef enum {
>> +	CVMX_PKO3_SHAPE_RED_STALL,
>> +	CVMX_PKO3_SHAPE_RED_DISCARD,
>> +	CVMX_PKO3_SHAPE_RED_PASS
>> +} red_action_t;
>> +
>> +void cvmx_pko3_dq_red(unsigned int node, unsigned int dq_num,
>> +		      red_action_t red_act, int8_t len_adjust);
>> +
>> +/**
>> + * Macros to deal with short floating point numbers,
>> + * where an unsigned exponent and an unsigned normalized
>> + * mantissa are each represented with a defined field width.
>> + */
>> +#define CVMX_SHOFT_MANT_BITS 8
>> +#define CVMX_SHOFT_EXP_BITS  4
>> +
>> +/**
>> + * Convert short-float to an unsigned integer
>> + * Note that it will lose precision.
>> + */
>> +#define CVMX_SHOFT_TO_U64(m, e)                                                \
>> +	((((1ull << CVMX_SHOFT_MANT_BITS) | (m)) << (e)) >> CVMX_SHOFT_MANT_BITS)
>> +
>> +/**
>> + * Convert to short-float from an unsigned integer
>> + */
>> +#define CVMX_SHOFT_FROM_U64(ui, m, e)                                          \
>> +	do {                                                                   \
>> +		unsigned long long u;                                          \
>> +		unsigned int k;                                                \
>> +		k = (1ull << (CVMX_SHOFT_MANT_BITS + 1)) - 1;                  \
>> +		(e) = 0;                                                       \
>> +		u = (ui) << CVMX_SHOFT_MANT_BITS;                              \
>> +		while ((u) > k) {                                              \
>> +			u >>= 1;                                               \
>> +			(e)++;                                                 \
>> +		}                                                              \
>> +		(m) = u & (k >> 1);                                            \
>> +	} while (0);
>> +
>> +#define CVMX_SHOFT_MAX()                                                       \
>> +	CVMX_SHOFT_TO_U64((1 << CVMX_SHOFT_MANT_BITS) - 1, (1 << CVMX_SHOFT_EXP_BITS) - 1)
>> +#define CVMX_SHOFT_MIN() CVMX_SHOFT_TO_U64(0, 0)
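
A self-contained round-trip through these macros may help readers;
the value is arbitrary and the printf is only for illustration:

	unsigned int m, e;
	unsigned long long v = 100000ULL;

	/* Encode v into (mantissa, exponent), then decode it back.
	 * Bits shifted out during normalization are lost, so the
	 * decoded value is a close underestimate of the original.
	 */
	CVMX_SHOFT_FROM_U64(v, m, e);
	printf("m=%u e=%u -> %llu\n", m, e, CVMX_SHOFT_TO_U64(m, e));
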
>> +
>> +#endif /* __CVMX_PKO3_QUEUE_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-pow.h b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
>> new file mode 100644
>> index 000000000000..0680ca258f12
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-pow.h
>> @@ -0,0 +1,2991 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * Interface to the hardware Scheduling unit.
>> + *
>> + * New, starting with SDK 1.7.0, cvmx-pow supports a number of
>> + * extended consistency checks. The define
>> + * CVMX_ENABLE_POW_CHECKS controls the runtime insertion of POW
>> + * internal state checks to find common programming errors. If
>> + * CVMX_ENABLE_POW_CHECKS is not defined, checks are by default
>> + * enabled. For example, cvmx-pow will check for the following
>> + * programming errors or POW state inconsistencies:
>> + * - Requesting a POW operation with an active tag switch in
>> + *   progress.
>> + * - Waiting for a tag switch to complete for an excessively
>> + *   long period. This is normally a sign of an error in locking
>> + *   causing deadlock.
>> + * - Illegal tag switches from NULL_NULL.
>> + * - Illegal tag switches from NULL.
>> + * - Illegal deschedule request.
>> + * - WQE pointer not matching the one attached to the core by
>> + *   the POW.
>> + */
>> +
>> +#ifndef __CVMX_POW_H__
>> +#define __CVMX_POW_H__
>> +
>> +#include "cvmx-wqe.h"
>> +#include "cvmx-pow-defs.h"
>> +#include "cvmx-sso-defs.h"
>> +#include "cvmx-address.h"
>> +#include "cvmx-coremask.h"
>> +
>> +/* Default to having all POW consistency checks turned on */
>> +#ifndef CVMX_ENABLE_POW_CHECKS
>> +#define CVMX_ENABLE_POW_CHECKS 1
>> +#endif
>> +
>> +/*
>> + * Special type for CN78XX style SSO groups (0..255),
>> + * for distinction from legacy-style groups (0..15)
>> + */
>> +typedef union {
>> +	u8 xgrp;
>> +	/* Fields that map XGRP for backwards compatibility */
>> +	struct __attribute__((__packed__)) {
>> +		u8 group : 5;
>> +		u8 qus : 3;
>> +	};
>> +} cvmx_xgrp_t;
>> +
>> +/*
>> + * Software-only structure to convey a return value
>> + * containing multiple information fields about a work queue entry
>> + */
>> +typedef struct {
>> +	u32 tag;
>> +	u16 index;
>> +	u8 grp; /* Legacy group # (0..15) */
>> +	u8 tag_type;
>> +} cvmx_pow_tag_info_t;
>> +
>> +/**
>> + * Wait flag values for pow functions.
>> + */
>> +typedef enum {
>> +	CVMX_POW_WAIT = 1,
>> +	CVMX_POW_NO_WAIT = 0,
>> +} cvmx_pow_wait_t;
>> +
>> +/**
>> + *  POW tag operations.  These are used in the data stored to the POW.
>> + */
>> +typedef enum {
>> +	CVMX_POW_TAG_OP_SWTAG = 0L,
>> +	CVMX_POW_TAG_OP_SWTAG_FULL = 1L,
>> +	CVMX_POW_TAG_OP_SWTAG_DESCH = 2L,
>> +	CVMX_POW_TAG_OP_DESCH = 3L,
>> +	CVMX_POW_TAG_OP_ADDWQ = 4L,
>> +	CVMX_POW_TAG_OP_UPDATE_WQP_GRP = 5L,
>> +	CVMX_POW_TAG_OP_SET_NSCHED = 6L,
>> +	CVMX_POW_TAG_OP_CLR_NSCHED = 7L,
>> +	CVMX_POW_TAG_OP_NOP = 15L
>> +} cvmx_pow_tag_op_t;
>> +
>> +/**
>> + * This structure defines the store data on a store to POW
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 no_sched : 1;
>> +		u64 unused : 2;
>> +		u64 index : 13;
>> +		cvmx_pow_tag_op_t op : 4;
>> +		u64 unused2 : 2;
>> +		u64 qos : 3;
>> +		u64 grp : 4;
>> +		cvmx_pow_tag_type_t type : 3;
>> +		u64 tag : 32;
>> +	} s_cn38xx;
>> +	struct {
>> +		u64 no_sched : 1;
>> +		cvmx_pow_tag_op_t op : 4;
>> +		u64 unused1 : 4;
>> +		u64 index : 11;
>> +		u64 unused2 : 1;
>> +		u64 grp : 6;
>> +		u64 unused3 : 3;
>> +		cvmx_pow_tag_type_t type : 2;
>> +		u64 tag : 32;
>> +	} s_cn68xx_clr;
>> +	struct {
>> +		u64 no_sched : 1;
>> +		cvmx_pow_tag_op_t op : 4;
>> +		u64 unused1 : 12;
>> +		u64 qos : 3;
>> +		u64 unused2 : 1;
>> +		u64 grp : 6;
>> +		u64 unused3 : 3;
>> +		cvmx_pow_tag_type_t type : 2;
>> +		u64 tag : 32;
>> +	} s_cn68xx_add;
>> +	struct {
>> +		u64 no_sched : 1;
>> +		cvmx_pow_tag_op_t op : 4;
>> +		u64 unused1 : 16;
>> +		u64 grp : 6;
>> +		u64 unused3 : 3;
>> +		cvmx_pow_tag_type_t type : 2;
>> +		u64 tag : 32;
>> +	} s_cn68xx_other;
>> +	struct {
>> +		u64 rsvd_62_63 : 2;
>> +		u64 grp : 10;
>> +		cvmx_pow_tag_type_t type : 2;
>> +		u64 no_sched : 1;
>> +		u64 rsvd_48 : 1;
>> +		cvmx_pow_tag_op_t op : 4;
>> +		u64 rsvd_42_43 : 2;
>> +		u64 wqp : 42;
>> +	} s_cn78xx_other;
>> +
>> +} cvmx_pow_tag_req_t;
>> +
>> +union cvmx_pow_tag_req_addr {
>> +	u64 u64;
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 addr : 40;
>> +	} s;
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 tag : 32;
>> +		u64 reserved_0_3 : 4;
>> +	} s_cn78xx;
>> +};
>> +
>> +/**
>> + * This structure describes the address to load stuff from POW
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	/**
>> +	 * Address for new work request loads (did<2:0> == 0)
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_4_39 : 36;
>> +		u64 wait : 1;
>> +		u64 reserved_0_2 : 3;
>> +	} swork;
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 reserved_32_35 : 4;
>> +		u64 indexed : 1;
>> +		u64 grouped : 1;
>> +		u64 rtngrp : 1;
>> +		u64 reserved_16_28 : 13;
>> +		u64 index : 12;
>> +		u64 wait : 1;
>> +		u64 reserved_0_2 : 3;
>> +	} swork_78xx;
>> +	/**
>> +	 * Address for loads to get POW internal status
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_10_39 : 30;
>> +		u64 coreid : 4;
>> +		u64 get_rev : 1;
>> +		u64 get_cur : 1;
>> +		u64 get_wqp : 1;
>> +		u64 reserved_0_2 : 3;
>> +	} sstatus;
>> +	/**
>> +	 * Address for loads to get 68XX SS0 internal status
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_14_39 : 26;
>> +		u64 coreid : 5;
>> +		u64 reserved_6_8 : 3;
>> +		u64 opcode : 3;
>> +		u64 reserved_0_2 : 3;
>> +	} sstatus_cn68xx;
>> +	/**
>> +	 * Address for memory loads to get POW internal state
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_16_39 : 24;
>> +		u64 index : 11;
>> +		u64 get_des : 1;
>> +		u64 get_wqp : 1;
>> +		u64 reserved_0_2 : 3;
>> +	} smemload;
>> +	/**
>> +	 * Address for memory loads to get SSO internal state
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_20_39 : 20;
>> +		u64 index : 11;
>> +		u64 reserved_6_8 : 3;
>> +		u64 opcode : 3;
>> +		u64 reserved_0_2 : 3;
>> +	} smemload_cn68xx;
>> +	/**
>> +	 * Address for index/pointer loads
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_9_39 : 31;
>> +		u64 qosgrp : 4;
>> +		u64 get_des_get_tail : 1;
>> +		u64 get_rmt : 1;
>> +		u64 reserved_0_2 : 3;
>> +	} sindexload;
>> +	/**
>> +	 * Address for a Index/Pointer loads to get SSO internal state
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_15_39 : 25;
>> +		u64 qos_grp : 6;
>> +		u64 reserved_6_8 : 3;
>> +		u64 opcode : 3;
>> +		u64 reserved_0_2 : 3;
>> +	} sindexload_cn68xx;
>> +	/**
>> +	 * Address for NULL_RD request (did<2:0> == 4)
>> +	 * when this is read, HW attempts to change the state to NULL if it is
>> +	 * NULL_NULL (the hardware cannot switch from NULL_NULL to NULL if a
>> +	 * POW entry is not available - software may need to recover by
>> +	 * finishing another piece of work before a POW entry can ever become
>> +	 * available.)
>> +	 */
>> +	struct {
>> +		u64 mem_region : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 reserved_0_39 : 40;
>> +	} snull_rd;
>> +} cvmx_pow_load_addr_t;
>> +
>> +/**
>> + * This structure defines the response to a load/SENDSINGLE to POW
>> + * (except CSR reads)
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	/**
>> +	 * Response to new work request loads
>> +	 */
>> +	struct {
>> +		u64 no_work : 1;
>> +		u64 pend_switch : 1;
>> +		u64 tt : 2;
>> +		u64 reserved_58_59 : 2;
>> +		u64 grp : 10;
>> +		u64 reserved_42_47 : 6;
>> +		u64 addr : 42;
>> +	} s_work;
>> +
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==0 and get_wqp==0)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 pend_switch : 1;
>> +		u64 pend_switch_full : 1;
>> +		u64 pend_switch_null : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_desched_switch : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_new_work : 1;
>> +		u64 pend_new_work_wait : 1;
>> +		u64 pend_null_rd : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 reserved_51 : 1;
>> +		u64 pend_index : 11;
>> +		u64 pend_grp : 4;
>> +		u64 reserved_34_35 : 2;
>> +		u64 pend_type : 2;
>> +		u64 pend_tag : 32;
>> +	} s_sstatus0;
>> +	/**
>> +	 * Result for a SSO Status Load (when opcode is SL_PENDTAG)
>> +	 */
>> +	struct {
>> +		u64 pend_switch : 1;
>> +		u64 pend_get_work : 1;
>> +		u64 pend_get_work_wait : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_alloc_we : 1;
>> +		u64 reserved_48_56 : 9;
>> +		u64 pend_index : 11;
>> +		u64 reserved_34_36 : 3;
>> +		u64 pend_type : 2;
>> +		u64 pend_tag : 32;
>> +	} s_sstatus0_cn68xx;
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==0 and get_wqp==1)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 pend_switch : 1;
>> +		u64 pend_switch_full : 1;
>> +		u64 pend_switch_null : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_desched_switch : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_new_work : 1;
>> +		u64 pend_new_work_wait : 1;
>> +		u64 pend_null_rd : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 reserved_51 : 1;
>> +		u64 pend_index : 11;
>> +		u64 pend_grp : 4;
>> +		u64 pend_wqp : 36;
>> +	} s_sstatus1;
>> +	/**
>> +	 * Result for a SSO Status Load (when opcode is SL_PENDWQP)
>> +	 */
>> +	struct {
>> +		u64 pend_switch : 1;
>> +		u64 pend_get_work : 1;
>> +		u64 pend_get_work_wait : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_alloc_we : 1;
>> +		u64 reserved_51_56 : 6;
>> +		u64 pend_index : 11;
>> +		u64 reserved_38_39 : 2;
>> +		u64 pend_wqp : 38;
>> +	} s_sstatus1_cn68xx;
>> +
>> +	struct {
>> +		u64 pend_switch : 1;
>> +		u64 pend_get_work : 1;
>> +		u64 pend_get_work_wait : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_alloc_we : 1;
>> +		u64 reserved_56 : 1;
>> +		u64 prep_index : 12;
>> +		u64 reserved_42_43 : 2;
>> +		u64 pend_tag : 42;
>> +	} s_sso_ppx_pendwqp_cn78xx;
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==0)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 link_index : 11;
>> +		u64 index : 11;
>> +		u64 grp : 4;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_sstatus2;
>> +	/**
>> +	 * Result for a SSO Status Load (when opcode is SL_TAG)
>> +	 */
>> +	struct {
>> +		u64 reserved_57_63 : 7;
>> +		u64 index : 11;
>> +		u64 reserved_45 : 1;
>> +		u64 grp : 6;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_sstatus2_cn68xx;
>> +
>> +	struct {
>> +		u64 tailc : 1;
>> +		u64 reserved_60_62 : 3;
>> +		u64 index : 12;
>> +		u64 reserved_46_47 : 2;
>> +		u64 grp : 10;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 tt : 2;
>> +		u64 tag : 32;
>> +	} s_sso_ppx_tag_cn78xx;
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==0, and get_rev==1)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 revlink_index : 11;
>> +		u64 index : 11;
>> +		u64 grp : 4;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_sstatus3;
>> +	/**
>> +	 * Result for a SSO Status Load (when opcode is SL_WQP)
>> +	 */
>> +	struct {
>> +		u64 reserved_58_63 : 6;
>> +		u64 index : 11;
>> +		u64 reserved_46 : 1;
>> +		u64 grp : 6;
>> +		u64 reserved_38_39 : 2;
>> +		u64 wqp : 38;
>> +	} s_sstatus3_cn68xx;
>> +
>> +	struct {
>> +		u64 reserved_58_63 : 6;
>> +		u64 grp : 10;
>> +		u64 reserved_42_47 : 6;
>> +		u64 tag : 42;
>> +	} s_sso_ppx_wqp_cn78xx;
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==0)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 link_index : 11;
>> +		u64 index : 11;
>> +		u64 grp : 4;
>> +		u64 wqp : 36;
>> +	} s_sstatus4;
>> +	/**
>> +	 * Result for a SSO Status Load (when opcode is SL_LINKS)
>> +	 */
>> +	struct {
>> +		u64 reserved_46_63 : 18;
>> +		u64 index : 11;
>> +		u64 reserved_34 : 1;
>> +		u64 grp : 6;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 reserved_24_25 : 2;
>> +		u64 revlink_index : 11;
>> +		u64 reserved_11_12 : 2;
>> +		u64 link_index : 11;
>> +	} s_sstatus4_cn68xx;
>> +
>> +	struct {
>> +		u64 tailc : 1;
>> +		u64 reserved_60_62 : 3;
>> +		u64 index : 12;
>> +		u64 reserved_38_47 : 10;
>> +		u64 grp : 10;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 reserved_25 : 1;
>> +		u64 revlink_index : 12;
>> +		u64 link_index_vld : 1;
>> +		u64 link_index : 12;
>> +	} s_sso_ppx_links_cn78xx;
>> +	/**
>> +	 * Result for a POW Status Load (when get_cur==1, get_wqp==1, and get_rev==1)
>> +	 */
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 revlink_index : 11;
>> +		u64 index : 11;
>> +		u64 grp : 4;
>> +		u64 wqp : 36;
>> +	} s_sstatus5;
>> +	/**
>> +	 * Result For POW Memory Load (get_des == 0 and get_wqp == 0)
>> +	 */
>> +	struct {
>> +		u64 reserved_51_63 : 13;
>> +		u64 next_index : 11;
>> +		u64 grp : 4;
>> +		u64 reserved_35 : 1;
>> +		u64 tail : 1;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_smemload0;
>> +	/**
>> +	 * Result For SSO Memory Load (opcode is ML_TAG)
>> +	 */
>> +	struct {
>> +		u64 reserved_38_63 : 26;
>> +		u64 tail : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_smemload0_cn68xx;
>> +
>> +	struct {
>> +		u64 reserved_39_63 : 25;
>> +		u64 tail : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s_sso_iaq_ppx_tag_cn78xx;
>> +	/**
>> +	 * Result For POW Memory Load (get_des == 0 and get_wqp == 1)
>> +	 */
>> +	struct {
>> +		u64 reserved_51_63 : 13;
>> +		u64 next_index : 11;
>> +		u64 grp : 4;
>> +		u64 wqp : 36;
>> +	} s_smemload1;
>> +	/**
>> +	 * Result For SSO Memory Load (opcode is ML_WQPGRP)
>> +	 */
>> +	struct {
>> +		u64 reserved_48_63 : 16;
>> +		u64 nosched : 1;
>> +		u64 reserved_46 : 1;
>> +		u64 grp : 6;
>> +		u64 reserved_38_39 : 2;
>> +		u64 wqp : 38;
>> +	} s_smemload1_cn68xx;
>> +
>> +	/**
>> +	 * Entry structures for the CN7XXX chips.
>> +	 */
>> +	struct {
>> +		u64 reserved_39_63 : 25;
>> +		u64 tailc : 1;
>> +		u64 tail : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 tt : 2;
>> +		u64 tag : 32;
>> +	} s_sso_ientx_tag_cn78xx;
>> +
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 head : 1;
>> +		u64 nosched : 1;
>> +		u64 reserved_56_59 : 4;
>> +		u64 grp : 8;
>> +		u64 reserved_42_47 : 6;
>> +		u64 wqp : 42;
>> +	} s_sso_ientx_wqpgrp_cn73xx;
>> +
>> +	struct {
>> +		u64 reserved_62_63 : 2;
>> +		u64 head : 1;
>> +		u64 nosched : 1;
>> +		u64 reserved_58_59 : 2;
>> +		u64 grp : 10;
>> +		u64 reserved_42_47 : 6;
>> +		u64 wqp : 42;
>> +	} s_sso_ientx_wqpgrp_cn78xx;
>> +
>> +	struct {
>> +		u64 reserved_38_63 : 26;
>> +		u64 pend_switch : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 pend_tt : 2;
>> +		u64 pend_tag : 32;
>> +	} s_sso_ientx_pendtag_cn78xx;
>> +
>> +	struct {
>> +		u64 reserved_26_63 : 38;
>> +		u64 prev_index : 10;
>> +		u64 reserved_11_15 : 5;
>> +		u64 next_index_vld : 1;
>> +		u64 next_index : 10;
>> +	} s_sso_ientx_links_cn73xx;
>> +
>> +	struct {
>> +		u64 reserved_28_63 : 36;
>> +		u64 prev_index : 12;
>> +		u64 reserved_13_15 : 3;
>> +		u64 next_index_vld : 1;
>> +		u64 next_index : 12;
>> +	} s_sso_ientx_links_cn78xx;
>> +
>> +	/**
>> +	 * Result For POW Memory Load (get_des == 1)
>> +	 */
>> +	struct {
>> +		u64 reserved_51_63 : 13;
>> +		u64 fwd_index : 11;
>> +		u64 grp : 4;
>> +		u64 nosched : 1;
>> +		u64 pend_switch : 1;
>> +		u64 pend_type : 2;
>> +		u64 pend_tag : 32;
>> +	} s_smemload2;
>> +	/**
>> +	 * Result For SSO Memory Load (opcode is ML_PENTAG)
>> +	 */
>> +	struct {
>> +		u64 reserved_38_63 : 26;
>> +		u64 pend_switch : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 pend_type : 2;
>> +		u64 pend_tag : 32;
>> +	} s_smemload2_cn68xx;
>> +
>> +	struct {
>> +		u64 pend_switch : 1;
>> +		u64 pend_get_work : 1;
>> +		u64 pend_get_work_wait : 1;
>> +		u64 pend_nosched : 1;
>> +		u64 pend_nosched_clr : 1;
>> +		u64 pend_desched : 1;
>> +		u64 pend_alloc_we : 1;
>> +		u64 reserved_34_56 : 23;
>> +		u64 pend_tt : 2;
>> +		u64 pend_tag : 32;
>> +	} s_sso_ppx_pendtag_cn78xx;
>> +	/**
>> +	 * Result For SSO Memory Load (opcode is ML_LINKS)
>> +	 */
>> +	struct {
>> +		u64 reserved_24_63 : 40;
>> +		u64 fwd_index : 11;
>> +		u64 reserved_11_12 : 2;
>> +		u64 next_index : 11;
>> +	} s_smemload3_cn68xx;
>> +
>> +	/**
>> +	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 0)
>> +	 */
>> +	struct {
>> +		u64 reserved_52_63 : 12;
>> +		u64 free_val : 1;
>> +		u64 free_one : 1;
>> +		u64 reserved_49 : 1;
>> +		u64 free_head : 11;
>> +		u64 reserved_37 : 1;
>> +		u64 free_tail : 11;
>> +		u64 loc_val : 1;
>> +		u64 loc_one : 1;
>> +		u64 reserved_23 : 1;
>> +		u64 loc_head : 11;
>> +		u64 reserved_11 : 1;
>> +		u64 loc_tail : 11;
>> +	} sindexload0;
>> +	/**
>> +	 * Result for SSO Index/Pointer Load(opcode ==
>> +	 * IPL_IQ/IPL_DESCHED/IPL_NOSCHED)
>> +	 */
>> +	struct {
>> +		u64 reserved_28_63 : 36;
>> +		u64 queue_val : 1;
>> +		u64 queue_one : 1;
>> +		u64 reserved_24_25 : 2;
>> +		u64 queue_head : 11;
>> +		u64 reserved_11_12 : 2;
>> +		u64 queue_tail : 11;
>> +	} sindexload0_cn68xx;
>> +	/**
>> +	 * Result For POW Index/Pointer Load (get_rmt == 0/get_des_get_tail == 1)
>> +	 */
>> +	struct {
>> +		u64 reserved_52_63 : 12;
>> +		u64 nosched_val : 1;
>> +		u64 nosched_one : 1;
>> +		u64 reserved_49 : 1;
>> +		u64 nosched_head : 11;
>> +		u64 reserved_37 : 1;
>> +		u64 nosched_tail : 11;
>> +		u64 des_val : 1;
>> +		u64 des_one : 1;
>> +		u64 reserved_23 : 1;
>> +		u64 des_head : 11;
>> +		u64 reserved_11 : 1;
>> +		u64 des_tail : 11;
>> +	} sindexload1;
>> +	/**
>> +	 * Result for SSO Index/Pointer Load(opcode == IPL_FREE0/IPL_FREE1/IPL_FREE2)
>> +	 */
>> +	struct {
>> +		u64 reserved_60_63 : 4;
>> +		u64 qnum_head : 2;
>> +		u64 qnum_tail : 2;
>> +		u64 reserved_28_55 : 28;
>> +		u64 queue_val : 1;
>> +		u64 queue_one : 1;
>> +		u64 reserved_24_25 : 2;
>> +		u64 queue_head : 11;
>> +		u64 reserved_11_12 : 2;
>> +		u64 queue_tail : 11;
>> +	} sindexload1_cn68xx;
>> +	/**
>> +	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 0)
>> +	 */
>> +	struct {
>> +		u64 reserved_39_63 : 25;
>> +		u64 rmt_is_head : 1;
>> +		u64 rmt_val : 1;
>> +		u64 rmt_one : 1;
>> +		u64 rmt_head : 36;
>> +	} sindexload2;
>> +	/**
>> +	 * Result For POW Index/Pointer Load (get_rmt == 1/get_des_get_tail == 1)
>> +	 */
>> +	struct {
>> +		u64 reserved_39_63 : 25;
>> +		u64 rmt_is_head : 1;
>> +		u64 rmt_val : 1;
>> +		u64 rmt_one : 1;
>> +		u64 rmt_tail : 36;
>> +	} sindexload3;
>> +	/**
>> +	 * Response to NULL_RD request loads
>> +	 */
>> +	struct {
>> +		u64 unused : 62;
>> +		u64 state : 2;
>> +	} s_null_rd;
>> +
>> +} cvmx_pow_tag_load_resp_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 reserved_57_63 : 7;
>> +		u64 index : 11;
>> +		u64 reserved_45 : 1;
>> +		u64 grp : 6;
>> +		u64 head : 1;
>> +		u64 tail : 1;
>> +		u64 reserved_34_36 : 3;
>> +		u64 tag_type : 2;
>> +		u64 tag : 32;
>> +	} s;
>> +} cvmx_pow_sl_tag_resp_t;
>> +
>> +/**
>> + * This structure describes the address used for stores to the POW.
>> + *  The store address is meaningful on stores to the POW.  The hardware
>> + *  assumes that an aligned 64-bit store was used for all these stores.
>> + *  Note the assumption that the work queue entry is aligned on an 8-byte
>> + *  boundary (since the low-order 3 address bits must be zero).
>> + *  Note that not all fields are used by all operations.
>> + *
>> + *  NOTE: The following is the behavior of the pending switch bit at the PP
>> + *       for POW stores (i.e. when did<7:3> == 0xc)
>> + *     - did<2:0> == 0      => pending switch bit is set
>> + *     - did<2:0> == 1      => no effect on the pending switch bit
>> + *     - did<2:0> == 3      => pending switch bit is cleared
>> + *     - did<2:0> == 7      => no effect on the pending switch bit
>> + *     - did<2:0> == others => must not be used
>> + *     - No other loads/stores have an effect on the pending switch bit
>> + *     - The switch bus from POW can clear the pending switch bit
>> + *
>> + *  NOTE: did<2:0> == 2 is used by the HW for a special single-cycle ADDWQ
>> + *  command (that only contains the pointer). SW must never use
>> + *  did<2:0> == 2.
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 mem_reg : 2;
>> +		u64 reserved_49_61 : 13;
>> +		u64 is_io : 1;
>> +		u64 did : 8;
>> +		u64 addr : 40;
>> +	} stag;
>> +} cvmx_pow_tag_store_addr_t; /* FIXME- this type is unused */
>> +
>> +/**
>> + * Decode of the store data when an IOBDMA SENDSINGLE is sent to POW
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 scraddr : 8;
>> +		u64 len : 8;
>> +		u64 did : 8;
>> +		u64 unused : 36;
>> +		u64 wait : 1;
>> +		u64 unused2 : 3;
>> +	} s;
>> +	struct {
>> +		u64 scraddr : 8;
>> +		u64 len : 8;
>> +		u64 did : 8;
>> +		u64 node : 4;
>> +		u64 unused1 : 4;
>> +		u64 indexed : 1;
>> +		u64 grouped : 1;
>> +		u64 rtngrp : 1;
>> +		u64 unused2 : 13;
>> +		u64 index_grp_mask : 12;
>> +		u64 wait : 1;
>> +		u64 unused3 : 3;
>> +	} s_cn78xx;
>> +} cvmx_pow_iobdma_store_t;
>> +
>> +/* CSR typedefs have been moved to cvmx-pow-defs.h */
>> +
>> +/* Enum for group priority parameters which need modification */
>> +enum cvmx_sso_group_modify_mask {
>> +	CVMX_SSO_MODIFY_GROUP_PRIORITY = 0x01,
>> +	CVMX_SSO_MODIFY_GROUP_WEIGHT = 0x02,
>> +	CVMX_SSO_MODIFY_GROUP_AFFINITY = 0x04
>> +};
>> +
>> +/**
>> + * @INTERNAL
>> + * Return the number of SSO groups for a given SoC model
>> + */
>> +static inline unsigned int cvmx_sso_num_xgrp(void)
>> +{
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX))
>> +		return 256;
>> +	if (OCTEON_IS_MODEL(OCTEON_CNF75XX))
>> +		return 64;
>> +	if (OCTEON_IS_MODEL(OCTEON_CN73XX))
>> +		return 64;
>> +	printf("ERROR: %s: Unknown model\n", __func__);
>> +	return 0;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Return the number of POW groups on current model.
>> + * In case of CN78XX/CN73XX this is the number of equivalent
>> + * "legacy groups" on the chip when it is used in backward
>> + * compatible mode.
>> + */
>> +static inline unsigned int cvmx_pow_num_groups(void)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return cvmx_sso_num_xgrp() >> 3;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		return 64;
>> +	else
>> +		return 16;
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Return the number of mask-set registers.
>> + */
>> +static inline unsigned int cvmx_sso_num_maskset(void)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return 2;
>> +	else
>> +		return 1;
>> +}
>> +
>> +/**
>> + * Get the POW tag for this core. This returns the current
>> + * tag type, tag, group, and POW entry index associated with
>> + * this core. Index is only valid if the tag type isn't NULL_NULL.
>> + * If a tag switch is pending this routine returns the tag before
>> + * the tag switch, not after.
>> + *
>> + * @return Current tag
>> + */
>> +static inline cvmx_pow_tag_info_t cvmx_pow_get_current_tag(void)
>> +{
>> +	cvmx_pow_load_addr_t load_addr;
>> +	cvmx_pow_tag_info_t result;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_sso_sl_ppx_tag_t sl_ppx_tag;
>> +		cvmx_xgrp_t xgrp;
>> +		int node, core;
>> +
>> +		CVMX_SYNCS;
>> +		node = cvmx_get_node_num();
>> +		core = cvmx_get_local_core_num();
>> +		sl_ppx_tag.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_TAG(core));
>> +		result.index = sl_ppx_tag.s.index;
>> +		result.tag_type = sl_ppx_tag.s.tt;
>> +		result.tag = sl_ppx_tag.s.tag;
>> +
>> +		/* Get native XGRP value */
>> +		xgrp.xgrp = sl_ppx_tag.s.grp;
>> +
>> +		/* Return legacy style group 0..15 */
>> +		result.grp = xgrp.group;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		cvmx_pow_sl_tag_resp_t load_resp;
>> +
>> +		load_addr.u64 = 0;
>> +		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
>> +		load_addr.sstatus_cn68xx.is_io = 1;
>> +		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
>> +		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
>> +		load_addr.sstatus_cn68xx.opcode = 3;
>> +		load_resp.u64 = csr_rd(load_addr.u64);
>> +		result.grp = load_resp.s.grp;
>> +		result.index = load_resp.s.index;
>> +		result.tag_type = load_resp.s.tag_type;
>> +		result.tag = load_resp.s.tag;
>> +	} else {
>> +		cvmx_pow_tag_load_resp_t load_resp;
>> +
>> +		load_addr.u64 = 0;
>> +		load_addr.sstatus.mem_region = CVMX_IO_SEG;
>> +		load_addr.sstatus.is_io = 1;
>> +		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
>> +		load_addr.sstatus.coreid = cvmx_get_core_num();
>> +		load_addr.sstatus.get_cur = 1;
>> +		load_resp.u64 = csr_rd(load_addr.u64);
>> +		result.grp = load_resp.s_sstatus2.grp;
>> +		result.index = load_resp.s_sstatus2.index;
>> +		result.tag_type = load_resp.s_sstatus2.tag_type;
>> +		result.tag = load_resp.s_sstatus2.tag;
>> +	}
>> +	return result;
>> +}
>> +
>> +/**
>> + * Get the POW WQE for this core. This returns the work queue
>> + * entry currently associated with this core.
>> + *
>> + * @return WQE pointer
>> + */
>> +static inline cvmx_wqe_t *cvmx_pow_get_current_wqp(void)
>> +{
>> +	cvmx_pow_load_addr_t load_addr;
>> +	cvmx_pow_tag_load_resp_t load_resp;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_sso_sl_ppx_wqp_t sso_wqp;
>> +		int node = cvmx_get_node_num();
>> +		int core = cvmx_get_local_core_num();
>> +
>> +		sso_wqp.u64 = csr_rd_node(node, CVMX_SSO_SL_PPX_WQP(core));
>> +		if (sso_wqp.s.wqp)
>> +			return (cvmx_wqe_t *)cvmx_phys_to_ptr(sso_wqp.s.wqp);
>> +		return (cvmx_wqe_t *)0;
>> +	}
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		load_addr.u64 = 0;
>> +		load_addr.sstatus_cn68xx.mem_region = CVMX_IO_SEG;
>> +		load_addr.sstatus_cn68xx.is_io = 1;
>> +		load_addr.sstatus_cn68xx.did = CVMX_OCT_DID_TAG_TAG5;
>> +		load_addr.sstatus_cn68xx.coreid = cvmx_get_core_num();
>> +		load_addr.sstatus_cn68xx.opcode = 4;
>> +		load_resp.u64 = csr_rd(load_addr.u64);
>> +		if (load_resp.s_sstatus3_cn68xx.wqp)
>> +			return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus3_cn68xx.wqp);
>> +		else
>> +			return (cvmx_wqe_t *)0;
>> +	} else {
>> +		load_addr.u64 = 0;
>> +		load_addr.sstatus.mem_region = CVMX_IO_SEG;
>> +		load_addr.sstatus.is_io = 1;
>> +		load_addr.sstatus.did = CVMX_OCT_DID_TAG_TAG1;
>> +		load_addr.sstatus.coreid = cvmx_get_core_num();
>> +		load_addr.sstatus.get_cur = 1;
>> +		load_addr.sstatus.get_wqp = 1;
>> +		load_resp.u64 = csr_rd(load_addr.u64);
>> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(load_resp.s_sstatus4.wqp);
>> +	}
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + * Print a warning if a tag switch is pending for this core
>> + *
>> + * @param function Function name checking for a pending tag switch
>> + */
>> +static inline void __cvmx_pow_warn_if_pending_switch(const char *function)
>> +{
>> +	u64 switch_complete;
>> +
>> +	CVMX_MF_CHORD(switch_complete);
>> +	cvmx_warn_if(!switch_complete, "%s called with tag switch in progress\n", function);
>> +}
>> +
>> +/**
>> + * Waits for a tag switch to complete by polling the completion bit.
>> + * Note that switches to NULL complete immediately and do not need
>> + * to be waited for.
>> + */
>> +static inline void cvmx_pow_tag_sw_wait(void)
>> +{
>> +	const u64 TIMEOUT_MS = 10; /* 10ms timeout */
>> +	u64 switch_complete;
>> +	u64 start_cycle;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS)
>> +		start_cycle = get_timer(0);
>> +
>> +	while (1) {
>> +		CVMX_MF_CHORD(switch_complete);
>> +		if (cvmx_likely(switch_complete))
>> +			break;
>> +
>> +		if (CVMX_ENABLE_POW_CHECKS) {
>> +			if (cvmx_unlikely(get_timer(start_cycle) > TIMEOUT_MS)) {
>> +				debug("WARNING: %s: Tag switch is taking a long time, possible deadlock\n",
>> +				      __func__);
>> +			}
>> +		}
>> +	}
>> +}
>> +
>> +/**
>> + * Synchronous work request.  Requests work from the POW.
>> + * This function does NOT wait for previous tag switches to complete,
>> + * so the caller must ensure that there is not a pending tag switch.
>> + *
>> + * @param wait   When set, call stalls until work becomes available, or
>> + *               times out. If not set, returns immediately.
>> + *
>> + * @return Returns the WQE pointer from POW. Returns NULL if no work
>> + *         was available.
>> + */
>> +static inline cvmx_wqe_t *cvmx_pow_work_request_sync_nocheck(cvmx_pow_wait_t wait)
>> +{
>> +	cvmx_pow_load_addr_t ptr;
>> +	cvmx_pow_tag_load_resp_t result;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS)
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.swork_78xx.node = cvmx_get_node_num();
>> +		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.swork_78xx.is_io = 1;
>> +		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		ptr.swork_78xx.wait = wait;
>> +	} else {
>> +		ptr.swork.mem_region = CVMX_IO_SEG;
>> +		ptr.swork.is_io = 1;
>> +		ptr.swork.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		ptr.swork.wait = wait;
>> +	}
>> +
>> +	result.u64 = csr_rd(ptr.u64);
>> +	if (result.s_work.no_work)
>> +		return NULL;
>> +	else
>> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
>> +}
>> +
>> +/**
>> + * Synchronous work request.  Requests work from the POW.
>> + * This function waits for any previous tag switch to complete before
>> + * requesting the new work.
>> + *
>> + * @param wait   When set, call stalls until work becomes available, or
>> + *               times out. If not set, returns immediately.
>> + *
>> + * @return Returns the WQE pointer from POW. Returns NULL if no work
>> + *         was available.
>> + */
>> +static inline cvmx_wqe_t *cvmx_pow_work_request_sync(cvmx_pow_wait_t wait)
>> +{
>> +	/* Must not have a switch pending when requesting work */
>> +	cvmx_pow_tag_sw_wait();
>> +	return (cvmx_pow_work_request_sync_nocheck(wait));
>> +}
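
For readers following along, the canonical consumer loop built from
these helpers would look roughly like this (process_packet() is a
made-up application hook, not part of this patch):

	cvmx_wqe_t *wqe;

	for (;;) {
		/* Stall (with HW timeout) until the POW hands us work */
		wqe = cvmx_pow_work_request_sync(CVMX_POW_WAIT);
		if (cvmx_pow_work_invalid(wqe))
			continue;	/* timed out, no work */
		process_packet(wqe);
	}
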
>> +
>> +/**
>> + * Synchronous null_rd request.  Requests a switch out of NULL_NULL POW state.
>> + * This function waits for any previous tag switch to complete before
>> + * requesting the null_rd.
>> + *
>> + * @return Returns the POW state of type cvmx_pow_tag_type_t.
>> + */
>> +static inline cvmx_pow_tag_type_t cvmx_pow_work_request_null_rd(void)
>> +{
>> +	cvmx_pow_load_addr_t ptr;
>> +	cvmx_pow_tag_load_resp_t result;
>> +
>> +	/* Must not have a switch pending when requesting work */
>> +	cvmx_pow_tag_sw_wait();
>> +
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.swork_78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.swork_78xx.is_io = 1;
>> +		ptr.swork_78xx.did = CVMX_OCT_DID_TAG_NULL_RD;
>> +		ptr.swork_78xx.node = cvmx_get_node_num();
>> +	} else {
>> +		ptr.snull_rd.mem_region = CVMX_IO_SEG;
>> +		ptr.snull_rd.is_io = 1;
>> +		ptr.snull_rd.did = CVMX_OCT_DID_TAG_NULL_RD;
>> +	}
>> +	result.u64 = csr_rd(ptr.u64);
>> +	return (cvmx_pow_tag_type_t)result.s_null_rd.state;
>> +}
>> +
>> +/**
>> + * Asynchronous work request.
>> + * Work is requested from the POW unit, and should later be checked with
>> + * function cvmx_pow_work_response_async.
>> + * This function does NOT wait for previous tag switches to complete,
>> + * so the caller must ensure that there is not a pending tag switch.
>> + *
>> + * @param scr_addr Scratch memory address that response will be returned to,
>> + *     which is either a valid WQE, or a response with the invalid bit set.
>> + *     Byte address, must be 8 byte aligned.
>> + * @param wait 1 to cause response to wait for work to become available
>> + *               (or timeout)
>> + *             0 to cause response to return immediately
>> + */
>> +static inline void cvmx_pow_work_request_async_nocheck(int scr_addr, cvmx_pow_wait_t wait)
>> +{
>> +	cvmx_pow_iobdma_store_t data;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS)
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +
>> +	/* scr_addr must be 8 byte aligned */
>> +	data.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		data.s_cn78xx.node = cvmx_get_node_num();
>> +		data.s_cn78xx.scraddr = scr_addr >> 3;
>> +		data.s_cn78xx.len = 1;
>> +		data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		data.s_cn78xx.wait = wait;
>> +	} else {
>> +		data.s.scraddr = scr_addr >> 3;
>> +		data.s.len = 1;
>> +		data.s.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		data.s.wait = wait;
>> +	}
>> +	cvmx_send_single(data.u64);
>> +}
>> +
>> +/**
>> + * Asynchronous work request.
>> + * Work is requested from the POW unit, and should later be checked with
>> + * function cvmx_pow_work_response_async.
>> + * This function waits for any previous tag switch to complete before
>> + * requesting the new work.
>> + *
>> + * @param scr_addr Scratch memory address that response will be returned to,
>> + *     which is either a valid WQE, or a response with the invalid bit set.
>> + *     Byte address, must be 8 byte aligned.
>> + * @param wait 1 to cause response to wait for work to become available
>> + *               (or timeout)
>> + *             0 to cause response to return immediately
>> + */
>> +static inline void cvmx_pow_work_request_async(int scr_addr, cvmx_pow_wait_t wait)
>> +{
>> +	/* Must not have a switch pending when requesting work */
>> +	cvmx_pow_tag_sw_wait();
>> +	cvmx_pow_work_request_async_nocheck(scr_addr, wait);
>> +}
>> +
>> +/**
>> + * Gets result of asynchronous work request.  Performs an IOBDMA sync
>> + * to wait for the response.
>> + *
>> + * @param scr_addr Scratch memory address to get result from
>> + *                  Byte address, must be 8 byte aligned.
>> + * @return Returns the WQE from the scratch register, or NULL if no
>> + *         work was available.
>> + */
>> +static inline cvmx_wqe_t *cvmx_pow_work_response_async(int scr_addr)
>> +{
>> +	cvmx_pow_tag_load_resp_t result;
>> +
>> +	CVMX_SYNCIOBDMA;
>> +	result.u64 = cvmx_scratch_read64(scr_addr);
>> +	if (result.s_work.no_work)
>> +		return NULL;
>> +	else
>> +		return (cvmx_wqe_t *)cvmx_phys_to_ptr(result.s_work.addr);
>> +}
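
The intended pairing of the async request/response helpers, as a
sketch (scratch offset 0 is an arbitrary 8-byte-aligned choice;
process_packet() is again a made-up hook):

	cvmx_wqe_t *wqe;

	/* Issue the request early so the IOBDMA overlaps other work */
	cvmx_pow_work_request_async(0, CVMX_POW_WAIT);

	/* ... unrelated processing here to hide the latency ... */

	wqe = cvmx_pow_work_response_async(0);
	if (!cvmx_pow_work_invalid(wqe))
		process_packet(wqe);
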
>> +
>> +/**
>> + * Checks if a work queue entry pointer returned by a work
>> + * request is valid.  It may be invalid due to no work
>> + * being available or due to a timeout.
>> + *
>> + * @param wqe_ptr pointer to a work queue entry returned by the POW
>> + *
>> + * @return 0 if pointer is valid
>> + *         1 if invalid (no work was returned)
>> + */
>> +static inline u64 cvmx_pow_work_invalid(cvmx_wqe_t *wqe_ptr)
>> +{
>> +	return (!wqe_ptr); /* FIXME: improve */
>> +}
>> +
>> +/**
>> + * Starts a tag switch to the provided tag value and tag type.  Completion
>> + * for the tag switch must be checked for separately.
>> + * This function does NOT update the work queue entry in DRAM to match
>> + * tag value and type, so the application must keep track of these if
>> + * they are important to the application.
>> + * This tag switch command must not be used for switches to NULL, as the
>> + * tag switch pending bit will be set by the switch request, but never
>> + * cleared by the hardware.
>> + *
>> + * NOTE: This should not be used when switching from a NULL tag.  Use
>> + * cvmx_pow_tag_sw_full() instead.
>> + *
>> + * This function does no checks, so the caller must ensure that any
>> + * previous tag switch has completed.
>> + *
>> + * @param tag      new tag value
>> + * @param tag_type new tag type (ordered or atomic)
>> + */
>> +static inline void cvmx_pow_tag_sw_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called with NULL tag\n", __func__);
>> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
>> +			     "%s called to perform a tag switch to the same tag\n", __func__);
>> +		cvmx_warn_if(
>> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
>> +			__func__);
>> +	}
>> +
>> +	/*
>> +	 * Note that WQE in DRAM is not updated here, as the POW does not read
>> +	 * from DRAM once the WQE is in flight.  See hardware manual for
>> +	 * complete details.
>> +	 * It is the application's responsibility to keep track of the
>> +	 * current tag value if that is important.
>> +	 */
>> +	tag_req.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn78xx_other.type = tag_type;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn68xx_other.tag = tag;
>> +		tag_req.s_cn68xx_other.type = tag_type;
>> +	} else {
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn38xx.tag = tag;
>> +		tag_req.s_cn38xx.type = tag_type;
>> +	}
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.s_cn78xx.is_io = 1;
>> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		ptr.s_cn78xx.node = cvmx_get_node_num();
>> +		ptr.s_cn78xx.tag = tag;
>> +	} else {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
>> +	}
>> +	/* Once this store arrives at POW, it will attempt the switch;
>> +	   software must wait for the switch to complete separately */
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Starts a tag switch to the provided tag value and tag type.  Completion
>> + * for the tag switch must be checked for separately.
>> + * This function does NOT update the work queue entry in DRAM to match
>> + * tag value and type, so the application must keep track of these if
>> + * they are important to the application.
>> + * This tag switch command must not be used for switches to NULL, as the
>> + * tag switch pending bit will be set by the switch request, but never
>> + * cleared by the hardware.
>> + *
>> + * NOTE: This should not be used when switching from a NULL tag.  Use
>> + * cvmx_pow_tag_sw_full() instead.
>> + *
>> + * This function waits for any previous tag switch to complete, and also
>> + * displays an error on tag switches to NULL.
>> + *
>> + * @param tag      new tag value
>> + * @param tag_type new tag type (ordered or atomic)
>> + */
>> +static inline void cvmx_pow_tag_sw(u32 tag, cvmx_pow_tag_type_t tag_type)
>> +{
>> +	/*
>> +	 * Note that WQE in DRAM is not updated here, as the POW does not read
>> +	 * from DRAM once the WQE is in flight.  See hardware manual for
>> +	 * complete details. It is the application's responsibility to keep
>> +	 * track of the current tag value if that is important.
>> +	 */
>> +
>> +	/*
>> +	 * Ensure that there is not a pending tag switch, as a tag switch
>> +	 * cannot be started if a previous switch is still pending.
>> +	 */
>> +	cvmx_pow_tag_sw_wait();
>> +	cvmx_pow_tag_sw_nocheck(tag, tag_type);
>> +}
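
A common pattern this function enables, with an invented tag value:
switch the current (ordered) work to an atomic tag before touching
shared per-flow state, so only one core holds the tag at a time:

	/* Serialize against other cores working on the same flow */
	cvmx_pow_tag_sw(0x1234, CVMX_POW_TAG_TYPE_ATOMIC);
	cvmx_pow_tag_sw_wait();	/* switch must complete first */
	/* ... critical section: update shared per-flow state ... */
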
>> +
>> +/**
>> + * Starts a tag switch to the provided tag value and tag type.  Completion
>> + * for the tag switch must be checked for separately.
>> + * This function does NOT update the work queue entry in DRAM to match
>> + * tag value and type, so the application must keep track of these if
>> + * they are important to the application.
>> + * This tag switch command must not be used for switches to NULL, as the
>> + * tag switch pending bit will be set by the switch request, but never
>> + * cleared by the hardware.
>> + *
>> + * This function must be used for tag switches from NULL.
>> + *
>> + * This function does no checks, so the caller must ensure that any
>> + * previous tag switch has completed.
>> + *
>> + * @param wqp      pointer to work queue entry to submit.  This entry is
>> + *                 updated to match the other parameters
>> + * @param tag      tag value to be assigned to work queue entry
>> + * @param tag_type type of tag
>> + * @param group    group value for the work queue entry.
>> + */
>> +static inline void cvmx_pow_tag_sw_full_nocheck(cvmx_wqe_t *wqp, u32 tag,
>> +						cvmx_pow_tag_type_t tag_type, u64 group)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +	unsigned int node = cvmx_get_node_num();
>> +	u64 wqp_phys = cvmx_ptr_to_phys(wqp);
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
>> +			     "%s called to perform a tag switch to the same tag\n", __func__);
>> +		cvmx_warn_if(
>> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
>> +			__func__);
>> +		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
>> +			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
>> +				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
>> +				     __func__, wqp, cvmx_pow_get_current_wqp());
>> +	}
>> +
>> +	/*
>> +	 * Note that WQE in DRAM is not updated here, as the POW does not
>> +	 * read from DRAM once the WQE is in flight.  See hardware manual
>> +	 * for complete details. It is the application's responsibility to
>> +	 * keep track of the current tag value if that is important.
>> +	 */
>> +	tag_req.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int xgrp;
>> +
>> +		if (wqp_phys != 0x80) {
>> +			/* If WQE is valid, use its XGRP:
>> +			 * WQE GRP is 10 bits, and is mapped
>> +			 * to legacy GRP + QoS, includes node number.
>> +			 */
>> +			xgrp = wqp->word1.cn78xx.grp;
>> +			/* Use XGRP[node] too */
>> +			node = xgrp >> 8;
>> +			/* Modify XGRP with legacy group # from arg */
>> +			xgrp &= ~0xf8;
>> +			xgrp |= 0xf8 & (group << 3);
>> +
>> +		} else {
>> +			/* If no WQE, build XGRP with QoS=0 and current node */
>> +			xgrp = group << 3;
>> +			xgrp |= node << 8;
>> +		}
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
>> +		tag_req.s_cn78xx_other.type = tag_type;
>> +		tag_req.s_cn78xx_other.grp = xgrp;
>> +		tag_req.s_cn78xx_other.wqp = wqp_phys;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
>> +		tag_req.s_cn68xx_other.tag = tag;
>> +		tag_req.s_cn68xx_other.type = tag_type;
>> +		tag_req.s_cn68xx_other.grp = group;
>> +	} else {
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_FULL;
>> +		tag_req.s_cn38xx.tag = tag;
>> +		tag_req.s_cn38xx.type = tag_type;
>> +		tag_req.s_cn38xx.grp = group;
>> +	}
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.s_cn78xx.is_io = 1;
>> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		ptr.s_cn78xx.node = node;
>> +		ptr.s_cn78xx.tag = tag;
>> +	} else {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_SWTAG;
>> +		ptr.s.addr = wqp_phys;
>> +	}
>> +	/* Once this store arrives at POW, it will attempt the switch;
>> +	   software must wait for the switch to complete separately */
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Starts a tag switch to the provided tag value and tag type.
>> + * Completion for the tag switch must be checked for separately.
>> + * This function does NOT update the work queue entry in DRAM to match
>> + * tag value and type, so the application must keep track of these if
>> + * they are important to the application. This tag switch command must
>> + * not be used for switches to NULL, as the tag switch pending bit will
>> + * be set by the switch request, but never cleared by the hardware.
>> + *
>> + * This function must be used for tag switches from NULL.
>> + *
>> + * This function waits for any pending tag switches to complete
>> + * before requesting the tag switch.
>> + *
>> + * @param wqp      Pointer to work queue entry to submit.
>> + *     This entry is updated to match the other parameters
>> + * @param tag      Tag value to be assigned to work queue entry
>> + * @param tag_type Type of tag
>> + * @param group    Group value for the work queue entry.
>> + */
>> +static inline void cvmx_pow_tag_sw_full(cvmx_wqe_t *wqp, u32 tag,
>> +					cvmx_pow_tag_type_t tag_type, u64 group)
>> +{
>> +	/*
>> +	 * Ensure that there is not a pending tag switch, as a tag switch
>> +	 * cannot be started if a previous switch is still pending.
>> +	 */
>> +	cvmx_pow_tag_sw_wait();
>> +	cvmx_pow_tag_sw_full_nocheck(wqp, tag, tag_type, group);
>> +}
>> +
>> +/**
>> + * Switch to a NULL tag, which ends any ordering or
>> + * synchronization provided by the POW for the current
>> + * work queue entry.  This operation completes immediately,
>> + * so completion should not be waited for.
>> + * This function does NOT wait for previous tag switches to complete,
>> + * so the caller must ensure that any previous tag switches have completed.
>> + */
>> +static inline void cvmx_pow_tag_sw_null_nocheck(void)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called when we already have a NULL tag\n", __func__);
>> +	}
>> +	tag_req.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn78xx_other.type = CVMX_POW_TAG_TYPE_NULL;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn68xx_other.type = CVMX_POW_TAG_TYPE_NULL;
>> +	} else {
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG;
>> +		tag_req.s_cn38xx.type = CVMX_POW_TAG_TYPE_NULL;
>> +	}
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.s_cn78xx.is_io = 1;
>> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
>> +		ptr.s_cn78xx.node = cvmx_get_node_num();
>> +	} else {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
>> +	}
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Switch to a NULL tag, which ends any ordering or
>> + * synchronization provided by the POW for the current
>> + * work queue entry.  This operation completes immediately,
>> + * so completion should not be waited for.
>> + * This function waits for any pending tag switches to complete
>> + * before requesting the switch to NULL.
>> + */
>> +static inline void cvmx_pow_tag_sw_null(void)
>> +{
>> +	/*
>> +	 * Ensure that there is not a pending tag switch, as a tag switch
>> +	 * cannot be started if a previous switch is still pending.
>> +	 */
>> +	cvmx_pow_tag_sw_wait();
>> +	cvmx_pow_tag_sw_null_nocheck();
>> +}
>> +
>> +/**
>> + * Submits work to an input queue.
>> + * This function updates the work queue entry in DRAM to match the
>> + * arguments given.
>> + * Note that the tag provided is for the work queue entry submitted, and
>> + * is unrelated to the tag that the core currently holds.
>> + *
>> + * @param wqp      pointer to work queue entry to submit.
>> + *                 This entry is updated to match the other parameters
>> + * @param tag      tag value to be assigned to work queue entry
>> + * @param tag_type type of tag
>> + * @param qos      Input queue to add to.
>> + * @param grp      group value for the work queue entry.
>> + */
>> +static inline void cvmx_pow_work_submit(cvmx_wqe_t *wqp, u32 tag,
>> +					cvmx_pow_tag_type_t tag_type, u64 qos, u64 grp)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	tag_req.u64 = 0;
>> +	ptr.u64 = 0;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int node = cvmx_get_node_num();
>> +		unsigned int xgrp;
>> +
>> +		xgrp = (grp & 0x1f) << 3;
>> +		xgrp |= (qos & 7);
>> +		xgrp |= 0x300 & (node << 8);
>> +
>> +		wqp->word1.cn78xx.rsvd_0 = 0;
>> +		wqp->word1.cn78xx.rsvd_1 = 0;
>> +		wqp->word1.cn78xx.tag = tag;
>> +		wqp->word1.cn78xx.tag_type = tag_type;
>> +		wqp->word1.cn78xx.grp = xgrp;
>> +		CVMX_SYNCWS;
>> +
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
>> +		tag_req.s_cn78xx_other.type = tag_type;
>> +		tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
>> +		tag_req.s_cn78xx_other.grp = xgrp;
>> +
>> +		ptr.s_cn78xx.did = 0x66; // CVMX_OCT_DID_TAG_TAG6;
>> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.s_cn78xx.is_io = 1;
>> +		ptr.s_cn78xx.node = node;
>> +		ptr.s_cn78xx.tag = tag;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		/* Reset all reserved bits */
>> +		wqp->word1.cn68xx.zero_0 = 0;
>> +		wqp->word1.cn68xx.zero_1 = 0;
>> +		wqp->word1.cn68xx.zero_2 = 0;
>> +		wqp->word1.cn68xx.qos = qos;
>> +		wqp->word1.cn68xx.grp = grp;
>> +
>> +		wqp->word1.tag = tag;
>> +		wqp->word1.tag_type = tag_type;
>> +
>> +		tag_req.s_cn68xx_add.op = CVMX_POW_TAG_OP_ADDWQ;
>> +		tag_req.s_cn68xx_add.type = tag_type;
>> +		tag_req.s_cn68xx_add.tag = tag;
>> +		tag_req.s_cn68xx_add.qos = qos;
>> +		tag_req.s_cn68xx_add.grp = grp;
>> +
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
>> +		ptr.s.addr = cvmx_ptr_to_phys(wqp);
>> +	} else {
>> +		/* Reset all reserved bits */
>> +		wqp->word1.cn38xx.zero_2 = 0;
>> +		wqp->word1.cn38xx.qos = qos;
>> +		wqp->word1.cn38xx.grp = grp;
>> +
>> +		wqp->word1.tag = tag;
>> +		wqp->word1.tag_type = tag_type;
>> +
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_ADDWQ;
>> +		tag_req.s_cn38xx.type = tag_type;
>> +		tag_req.s_cn38xx.tag = tag;
>> +		tag_req.s_cn38xx.qos = qos;
>> +		tag_req.s_cn38xx.grp = grp;
>> +
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG1;
>> +		ptr.s.addr = cvmx_ptr_to_phys(wqp);
>> +	}
>> +	/* SYNC write to memory before the work submit.
>> +	 * This is necessary as POW may read values from DRAM at this time */
>> +	CVMX_SYNCWS;
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
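
A minimal submission sketch (tag, qos and group values are invented;
get_free_wqe() stands in for whatever WQE allocator the application
uses):

	cvmx_wqe_t *wqe = get_free_wqe();

	/* Hand the entry to the scheduler: tag 0x100, ordered type,
	 * input queue (qos) 0, group 1.
	 */
	cvmx_pow_work_submit(wqe, 0x100, CVMX_POW_TAG_TYPE_ORDERED, 0, 1);
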
>> +
>> +/**
>> + * This function sets the group mask for a core.  The group mask
>> + * indicates which groups each core will accept work from. There are
>> + * 16 groups.
>> + *
>> + * @param core_num   core to apply mask to
>> + * @param mask   Group mask, one bit for up to 64 groups.
>> + *               Each 1 bit in the mask enables the core to accept work
>> + *               from the corresponding group.
>> + *               The CN68XX supports 64 groups, earlier models only
>> + *               support 16 groups.
>> + *
>> + * The CN78XX in backwards compatibility mode allows up to 32 groups,
>> + * so the 'mask' argument has one bit for each of the legacy
>> + * groups, and a '1' in the mask causes a total of 8 groups,
>> + * which share the legacy group number and 8 QoS levels,
>> + * to be enabled for the calling processor core.
>> + * A '0' in the mask will disable the current core
>> + * from receiving work from the associated group.
>> + */
>> +static inline void cvmx_pow_set_group_mask(u64 core_num, u64 mask)
>> +{
>> +	u64 valid_mask;
>> +	int num_groups = cvmx_pow_num_groups();
>> +
>> +	if (num_groups >= 64)
>> +		valid_mask = ~0ull;
>> +	else
>> +		valid_mask = (1ull << num_groups) - 1;
>> +
>> +	if ((mask & valid_mask) == 0) {
>> +		printf("ERROR: %s empty group mask disables work on core# %llu, ignored.\n",
>> +		       __func__, (unsigned long long)core_num);
>> +		return;
>> +	}
>> +	cvmx_warn_if(mask & (~valid_mask), "%s group number range exceeded: %#llx\n",
>> +		     __func__, (unsigned long long)mask);
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int mask_set;
>> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
>> +		unsigned int core, node;
>> +		unsigned int rix;  /* Register index */
>> +		unsigned int grp;  /* Legacy group # */
>> +		unsigned int bit;  /* bit index */
>> +		unsigned int xgrp; /* native group # */
>> +
>> +		node = cvmx_coremask_core_to_node(core_num);
>> +		core = cvmx_coremask_core_on_node(core_num);
>> +
>> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
>> +		/* 73xx: 64 groups are in one register */
>> +		for (rix = 0; rix < (cvmx_sso_num_xgrp() >> 6); rix++) {
>> +			grp_msk.u64 = 0;
>> +			for (bit = 0; bit < 64; bit++) {
>> +				/* 8-bit native XGRP number */
>> +				xgrp = (rix << 6) | bit;
>> +				/* Legacy 5-bit group number */
>> +				grp = (xgrp >> 3) & 0x1f;
>> +				/* Inspect legacy mask by legacy group */
>> +				if (mask & (1ull << grp))
>> +					grp_msk.s.grp_msk |= 1ull << bit;
>> +				/* Pre-set to all 0's */
>> +			}
>> +			for (mask_set = 0; mask_set < cvmx_sso_num_maskset(); mask_set++) {
>> +				csr_wr_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, mask_set, rix),
>> +					    grp_msk.u64);
>> +			}
>> +		}
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		cvmx_sso_ppx_grp_msk_t grp_msk;
>> +
>> +		grp_msk.s.grp_msk = mask;
>> +		csr_wr(CVMX_SSO_PPX_GRP_MSK(core_num), grp_msk.u64);
>> +	} else {
>> +		cvmx_pow_pp_grp_mskx_t grp_msk;
>> +
>> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
>> +		grp_msk.s.grp_msk = mask & 0xffff;
>> +		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
>> +	}
>> +}
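
As an example, restricting the calling core to legacy groups 0 and 1 would
be roughly (untested sketch):

	/* Accept work only from groups 0 and 1 on this core */
	cvmx_pow_set_group_mask(cvmx_get_core_num(),
				(1ull << 0) | (1ull << 1));
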
>> +
>> +/**
>> + * This function gets the group mask for a core.  The group mask
>> + * indicates which groups each core will accept work from.
>> + *
>> + * @param core_num   core to get mask for
>> + * @return	Group mask, one bit for up to 64 groups.
>> + *               Each 1 bit in the mask enables the core to accept work from
>> + *               the corresponding group.
>> + *               The CN68XX supports 64 groups, earlier models only support
>> + *               16 groups.
>> + *
>> + * The CN78XX in backwards compatibility mode allows up to 32 groups,
>> + * so the returned mask has one bit for each of the legacy
>> + * groups, and a '1' in the mask means that a total of 8 groups,
>> + * which share the legacy group number and 8 qos levels,
>> + * are enabled for the calling processor core.
>> + * A '0' in the mask means the current core is disabled
>> + * from receiving work from the associated group.
>> + */
>> +static inline u64 cvmx_pow_get_group_mask(u64 core_num)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk;
>> +		unsigned int core, node, i;
>> +		int rix; /* Register index */
>> +		u64 mask = 0;
>> +
>> +		node = cvmx_coremask_core_to_node(core_num);
>> +		core = cvmx_coremask_core_on_node(core_num);
>> +
>> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
>> +		/* 73xx: 64 groups are in one register */
>> +		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
>> +			/* read only mask_set=0 (both sets were written the same) */
>> +			grp_msk.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
>> +			/* ASSUME: (this is how mask bits got written) */
>> +			/* grp_mask[7:0]: all bits 0..7 are same */
>> +			/* grp_mask[15:8]: all bits 8..15 are same, etc */
>> +			/* DO: mask[7:0] = grp_mask.u64[56,48,40,32,24,16,8,0] */
>> +			for (i = 0; i < 8; i++)
>> +				mask |= (grp_msk.u64 & ((u64)1 << (i * 8))) >> (7 * i);
>> +			/* we collected 8 MSBs in mask[7:0], <<=8 and continue */
>> +			if (cvmx_likely(rix != 0))
>> +				mask <<= 8;
>> +		}
>> +		return mask & 0xFFFFFFFF;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		cvmx_sso_ppx_grp_msk_t grp_msk;
>> +
>> +		grp_msk.u64 = csr_rd(CVMX_SSO_PPX_GRP_MSK(core_num));
>> +		return grp_msk.u64;
>> +	} else {
>> +		cvmx_pow_pp_grp_mskx_t grp_msk;
>> +
>> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
>> +		return grp_msk.u64 & 0xffff;
>> +	}
>> +}
>> +
>> +/*
>> + * Returns 0 if 78xx(73xx,75xx) is not programmed in legacy compatible mode
>> + * Returns 1 if 78xx(73xx,75xx) is programmed in legacy compatible mode
>> + * Returns 1 if octeon model is not 78xx(73xx,75xx)
>> + */
>> +static inline u64 cvmx_pow_is_legacy78mode(u64 core_num)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_sso_ppx_sx_grpmskx_t grp_msk0, grp_msk1;
>> +		unsigned int core, node, i;
>> +		int rix; /* Register index */
>> +		u64 mask = 0;
>> +
>> +		node = cvmx_coremask_core_to_node(core_num);
>> +		core = cvmx_coremask_core_on_node(core_num);
>> +
>> +		/* 78xx: 256 groups divided into 4 X 64 bit registers */
>> +		/* 73xx: 64 groups are in one register */
>> +		/* 1) in order for the 78_SSO to be in legacy compatible mode,
>> +		 * both mask_sets should be programmed the same */
>> +		for (rix = (cvmx_sso_num_xgrp() >> 6) - 1; rix >= 0; rix--) {
>> +			/* read mask_set=0 (both sets were written the same) */
>> +			grp_msk0.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 0, rix));
>> +			grp_msk1.u64 = csr_rd_node(node, CVMX_SSO_PPX_SX_GRPMSKX(core, 1, rix));
>> +			if (grp_msk0.u64 != grp_msk1.u64) {
>> +				return 0;
>> +			}
>> +			/* (this is how mask bits should be written) */
>> +			/* grp_mask[7:0]: all bits 0..7 are same */
>> +			/* grp_mask[15:8]: all bits 8..15 are same, etc */
>> +			/* 2) in order for the 78_SSO to be in legacy compatible
>> +			 * mode, the above should be true (test only mask_set=0) */
>> +			for (i = 0; i < 8; i++) {
>> +				mask = (grp_msk0.u64 >> (i << 3)) & 0xFF;
>> +				if (!(mask == 0 || mask == 0xFF)) {
>> +					return 0;
>> +				}
>> +			}
>> +		}
>> +		/* if we come here, the 78_SSO is in legacy compatible mode */
>> +	}
>> +	return 1; /* the SSO/POW is in legacy (or compatible) mode */
>> +}
>> +
>> +/**
>> + * This function sets POW static priorities for a core. Each input queue has
>> + * an associated priority value.
>> + *
>> + * @param core_num   core to apply priorities to
>> + * @param priority   Vector of 8 priorities, one per POW Input Queue (0-7).
>> + *                   Highest priority is 0 and lowest is 7. A priority value
>> + *                   of 0xF instructs POW to skip the Input Queue when
>> + *                   scheduling to this specific core.
>> + *                   NOTE: priorities should not have gaps in values, meaning
>> + *                         {0,1,1,1,1,1,1,1} is a valid configuration while
>> + *                         {0,2,2,2,2,2,2,2} is not.
>> + */
>> +static inline void cvmx_pow_set_priority(u64 core_num, const u8 priority[])
>> +{
>> +	/* Detect gaps between priorities and flag error */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		int i;
>> +		u32 prio_mask = 0;
>> +
>> +		for (i = 0; i < 8; i++)
>> +			if (priority[i] != 0xF)
>> +				prio_mask |= 1 << priority[i];
>> +
>> +		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
>> +			debug("ERROR: POW static priorities should be contiguous (0x%llx)\n",
>> +			      (unsigned long long)prio_mask);
>> +			return;
>> +		}
>> +	}
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int group;
>> +		unsigned int node = cvmx_get_node_num();
>> +		cvmx_sso_grpx_pri_t grp_pri;
>> +
>> +		/* grp_pri.s.weight = 0x3f; these will be overwritten anyway */
>> +		/* grp_pri.s.affinity = 0xf; by the next csr_rd_node(..) */
>> +
>> +		for (group = 0; group < cvmx_sso_num_xgrp(); group++) {
>> +			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
>> +			grp_pri.s.pri = priority[group & 0x7];
>> +			csr_wr_node(node, CVMX_SSO_GRPX_PRI(group), grp_pri.u64);
>> +		}
>> +		}
>> +
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		cvmx_sso_ppx_qos_pri_t qos_pri;
>> +
>> +		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
>> +		qos_pri.s.qos0_pri = priority[0];
>> +		qos_pri.s.qos1_pri = priority[1];
>> +		qos_pri.s.qos2_pri = priority[2];
>> +		qos_pri.s.qos3_pri = priority[3];
>> +		qos_pri.s.qos4_pri = priority[4];
>> +		qos_pri.s.qos5_pri = priority[5];
>> +		qos_pri.s.qos6_pri = priority[6];
>> +		qos_pri.s.qos7_pri = priority[7];
>> +		csr_wr(CVMX_SSO_PPX_QOS_PRI(core_num), qos_pri.u64);
>> +	} else {
>> +		/* POW priorities on CN5xxx .. CN66XX */
>> +		cvmx_pow_pp_grp_mskx_t grp_msk;
>> +
>> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
>> +		grp_msk.s.qos0_pri = priority[0];
>> +		grp_msk.s.qos1_pri = priority[1];
>> +		grp_msk.s.qos2_pri = priority[2];
>> +		grp_msk.s.qos3_pri = priority[3];
>> +		grp_msk.s.qos4_pri = priority[4];
>> +		grp_msk.s.qos5_pri = priority[5];
>> +		grp_msk.s.qos6_pri = priority[6];
>> +		grp_msk.s.qos7_pri = priority[7];
>> +
>> +		csr_wr(CVMX_POW_PP_GRP_MSKX(core_num), grp_msk.u64);
>> +	}
>> +}
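
A valid priority vector per the contiguous-values rule above, giving input
queue 0 precedence over the rest, might be (untested sketch):

	const u8 prio[8] = { 0, 1, 1, 1, 1, 1, 1, 1 };

	cvmx_pow_set_priority(cvmx_get_core_num(), prio);
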
>> +
>> +/**
>> + * This function gets POW static priorities for a core. Each input queue has
>> + * an associated priority value.
>> + *
>> + * @param[in]  core_num core to get priorities for
>> + * @param[out] priority Pointer to u8[] where to return priorities
>> + *			Vector of 8 priorities, one per POW Input Queue (0-7).
>> + *			Highest priority is 0 and lowest is 7. A priority value
>> + *			of 0xF instructs POW to skip the Input Queue when
>> + *			scheduling to this specific core.
>> + *			NOTE: priorities should not have gaps in values, meaning
>> + *			      {0,1,1,1,1,1,1,1} is a valid configuration while
>> + *			      {0,2,2,2,2,2,2,2} is not.
>> + */
>> +static inline void cvmx_pow_get_priority(u64 core_num, u8 priority[])
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int group;
>> +		unsigned int node = cvmx_get_node_num();
>> +		cvmx_sso_grpx_pri_t grp_pri;
>> +
>> +		/* read priority only from the first 8 groups */
>> +		/* the next groups are programmed the same (periodically) */
>> +		for (group = 0; group < 8 /* cvmx_sso_num_xgrp() */; group++) {
>> +			grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(group));
>> +			priority[group /* & 0x7 */] = grp_pri.s.pri;
>> +		}
>> +
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		cvmx_sso_ppx_qos_pri_t qos_pri;
>> +
>> +		qos_pri.u64 = csr_rd(CVMX_SSO_PPX_QOS_PRI(core_num));
>> +		priority[0] = qos_pri.s.qos0_pri;
>> +		priority[1] = qos_pri.s.qos1_pri;
>> +		priority[2] = qos_pri.s.qos2_pri;
>> +		priority[3] = qos_pri.s.qos3_pri;
>> +		priority[4] = qos_pri.s.qos4_pri;
>> +		priority[5] = qos_pri.s.qos5_pri;
>> +		priority[6] = qos_pri.s.qos6_pri;
>> +		priority[7] = qos_pri.s.qos7_pri;
>> +	} else {
>> +		/* POW priorities on CN5xxx .. CN66XX */
>> +		cvmx_pow_pp_grp_mskx_t grp_msk;
>> +
>> +		grp_msk.u64 = csr_rd(CVMX_POW_PP_GRP_MSKX(core_num));
>> +		priority[0] = grp_msk.s.qos0_pri;
>> +		priority[1] = grp_msk.s.qos1_pri;
>> +		priority[2] = grp_msk.s.qos2_pri;
>> +		priority[3] = grp_msk.s.qos3_pri;
>> +		priority[4] = grp_msk.s.qos4_pri;
>> +		priority[5] = grp_msk.s.qos5_pri;
>> +		priority[6] = grp_msk.s.qos6_pri;
>> +		priority[7] = grp_msk.s.qos7_pri;
>> +	}
>> +
>> +	/* Detect gaps between priorities and flag error - (optional) */
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		int i;
>> +		u32 prio_mask = 0;
>> +
>> +		for (i = 0; i < 8; i++)
>> +			if (priority[i] != 0xF)
>> +				prio_mask |= 1 << priority[i];
>> +
>> +		if (prio_mask ^ ((1 << cvmx_pop(prio_mask)) - 1)) {
>> +			debug("ERROR:%s: POW static priorities should be contiguous (0x%llx)\n",
>> +			      __func__, (unsigned long long)prio_mask);
>> +			return;
>> +		}
>> +	}
>> +}
>> +
>> +static inline void cvmx_sso_get_group_priority(int node, cvmx_xgrp_t xgrp, int *priority,
>> +					       int *weight, int *affinity)
>> +{
>> +	cvmx_sso_grpx_pri_t grp_pri;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +
>> +	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
>> +	*affinity = grp_pri.s.affinity;
>> +	*priority = grp_pri.s.pri;
>> +	*weight = grp_pri.s.weight;
>> +}
>> +
>> +/**
>> + * Performs a tag switch and then an immediate deschedule. This completes
>> + * immediately, so completion must not be waited for.  This function does NOT
>> + * update the wqe in DRAM to match arguments.
>> + *
>> + * This function does NOT wait for any prior tag switches to complete, so the
>> + * calling code must do this.
>> + *
>> + * Note the following CAVEAT of the Octeon HW behavior when
>> + * re-scheduling DE-SCHEDULEd items whose (next) state is
>> + * ORDERED:
>> + *   - If there are no switches pending at the time that the
>> + *     HW executes the de-schedule, the HW will only re-schedule
>> + *     the head of the FIFO associated with the given tag. This
>> + *     means that in many respects, the HW treats this ORDERED
>> + *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
>> + *     case (to an ORDERED tag), the HW will do the switch
>> + *     before the deschedule whenever it is possible to do
>> + *     the switch immediately, so it may often look like
>> + *     this case.
>> + *   - If there is a pending switch to ORDERED at the time
>> + *     the HW executes the de-schedule, the HW will perform
>> + *     the switch at the time it re-schedules, and will be
>> + *     able to reschedule any/all of the entries with the
>> + *     same tag.
>> + * Due to this behavior, the RECOMMENDATION to software is
>> + * that they have a (next) state of ATOMIC when they
>> + * DE-SCHEDULE. If an ORDERED tag is what was really desired,
>> + * SW can choose to immediately switch to an ORDERED tag
>> + * after the work (that has an ATOMIC tag) is re-scheduled.
>> + * Note that since there are never any tag switches pending
>> + * when the HW re-schedules, this switch can be IMMEDIATE upon
>> + * the reception of the pointer during the re-schedule.
>> + *
>> + * @param tag      New tag value
>> + * @param tag_type New tag type
>> + * @param group    New group value
>> + * @param no_sched Control whether this work queue entry will be rescheduled.
>> + *                 - 1 : don't schedule this work
>> + *                 - 0 : allow this work to be scheduled.
>> + */
>> +static inline void cvmx_pow_tag_sw_desched_nocheck(u32 tag, cvmx_pow_tag_type_t tag_type,
>> +						   u64 group, u64 no_sched)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
>> +			     __func__);
>> +		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
>> +			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
>> +			     "%s called where neither the before nor after tag is ATOMIC\n",
>> +			     __func__);
>> +	}
>> +	tag_req.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_t *wqp = cvmx_pow_get_current_wqp();
>> +
>> +		if (!wqp) {
>> +			debug("ERROR: Failed to get WQE, %s\n", __func__);
>> +			return;
>> +		}
>> +		group &= 0x1f;
>> +		wqp->word1.cn78xx.tag = tag;
>> +		wqp->word1.cn78xx.tag_type = tag_type;
>> +		wqp->word1.cn78xx.grp = group << 3;
>> +		CVMX_SYNCWS;
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
>> +		tag_req.s_cn78xx_other.type = tag_type;
>> +		tag_req.s_cn78xx_other.grp = group << 3;
>> +		tag_req.s_cn78xx_other.no_sched = no_sched;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		group &= 0x3f;
>> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
>> +		tag_req.s_cn68xx_other.tag = tag;
>> +		tag_req.s_cn68xx_other.type = tag_type;
>> +		tag_req.s_cn68xx_other.grp = group;
>> +		tag_req.s_cn68xx_other.no_sched = no_sched;
>> +	} else {
>> +		group &= 0x0f;
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
>> +		tag_req.s_cn38xx.tag = tag;
>> +		tag_req.s_cn38xx.type = tag_type;
>> +		tag_req.s_cn38xx.grp = group;
>> +		tag_req.s_cn38xx.no_sched = no_sched;
>> +	}
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
>> +		ptr.s_cn78xx.node = cvmx_get_node_num();
>> +		ptr.s_cn78xx.tag = tag;
>> +	} else {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
>> +	}
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Performs a tag switch and then an immediate deschedule. This completes
>> + * immediately, so completion must not be waited for.  This function does NOT
>> + * update the wqe in DRAM to match arguments.
>> + *
>> + * This function waits for any prior tag switches to complete, so the
>> + * calling code may call this function with a pending tag switch.
>> + *
>> + * Note the following CAVEAT of the Octeon HW behavior when
>> + * re-scheduling DE-SCHEDULEd items whose (next) state is
>> + * ORDERED:
>> + *   - If there are no switches pending at the time that the
>> + *     HW executes the de-schedule, the HW will only re-schedule
>> + *     the head of the FIFO associated with the given tag. This
>> + *     means that in many respects, the HW treats this ORDERED
>> + *     tag as an ATOMIC tag. Note that in the SWTAG_DESCH
>> + *     case (to an ORDERED tag), the HW will do the switch
>> + *     before the deschedule whenever it is possible to do
>> + *     the switch immediately, so it may often look like
>> + *     this case.
>> + *   - If there is a pending switch to ORDERED at the time
>> + *     the HW executes the de-schedule, the HW will perform
>> + *     the switch at the time it re-schedules, and will be
>> + *     able to reschedule any/all of the entries with the
>> + *     same tag.
>> + * Due to this behavior, the RECOMMENDATION to software is
>> + * that they have a (next) state of ATOMIC when they
>> + * DE-SCHEDULE. If an ORDERED tag is what was really desired,
>> + * SW can choose to immediately switch to an ORDERED tag
>> + * after the work (that has an ATOMIC tag) is re-scheduled.
>> + * Note that since there are never any tag switches pending
>> + * when the HW re-schedules, this switch can be IMMEDIATE upon
>> + * the reception of the pointer during the re-schedule.
>> + *
>> + * @param tag      New tag value
>> + * @param tag_type New tag type
>> + * @param group    New group value
>> + * @param no_sched Control whether this work queue entry will be rescheduled.
>> + *                 - 1 : don't schedule this work
>> + *                 - 0 : allow this work to be scheduled.
>> + */
>> +static inline void cvmx_pow_tag_sw_desched(u32 tag, cvmx_pow_tag_type_t tag_type,
>> +					   u64 group, u64 no_sched)
>> +{
>> +	/* Need to make sure any writes to the work queue entry are complete */
>> +	CVMX_SYNCWS;
>> +	/* Ensure that there is not a pending tag switch, as a tag switch cannot be
>> +	 * started if a previous switch is still pending. */
>> +	cvmx_pow_tag_sw_wait();
>> +	cvmx_pow_tag_sw_desched_nocheck(tag, tag_type, group,
>> no_sched);
>> +}
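
Following the CAVEAT above, a deschedule would typically use a (next) state
of ATOMIC, e.g. (untested sketch; 'tag' and 'group' are whatever the caller
is currently working with):

	/* Deschedule to ATOMIC per the recommendation above; allow the
	 * work to be rescheduled later (no_sched = 0).
	 */
	cvmx_pow_tag_sw_desched(tag, CVMX_POW_TAG_TYPE_ATOMIC, group, 0);
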
>> +
>> +/**
>> + * Deschedules the current work queue entry.
>> + *
>> + * @param no_sched no schedule flag value to be set on the work queue entry.
>> + *     If this is set the entry will not be rescheduled.
>> + */
>> +static inline void cvmx_pow_desched(u64 no_sched)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called with NULL tag. Deschedule not expected from NULL state\n",
>> +			     __func__);
>> +	}
>> +	/* Need to make sure any writes to the work queue entry are complete */
>> +	CVMX_SYNCWS;
>> +
>> +	tag_req.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_DESCH;
>> +		tag_req.s_cn78xx_other.no_sched = no_sched;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		tag_req.s_cn68xx_other.op = CVMX_POW_TAG_OP_DESCH;
>> +		tag_req.s_cn68xx_other.no_sched = no_sched;
>> +	} else {
>> +		tag_req.s_cn38xx.op = CVMX_POW_TAG_OP_DESCH;
>> +		tag_req.s_cn38xx.no_sched = no_sched;
>> +	}
>> +	ptr.u64 = 0;
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +		ptr.s_cn78xx.is_io = 1;
>> +		ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG3;
>> +		ptr.s_cn78xx.node = cvmx_get_node_num();
>> +	} else {
>> +		ptr.s.mem_region = CVMX_IO_SEG;
>> +		ptr.s.is_io = 1;
>> +		ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
>> +	}
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/******************************************************************************/
>> +/* OCTEON3-specific functions.                                                */
>> +/******************************************************************************/
>> +/**
>> + * This function sets the affinity of a group to the cores in 78xx.
>> + * It sets up all the cores in core_mask to accept work from the specified group.
>> + *
>> + * @param xgrp	Group to accept work from, 0 - 255.
>> + * @param core_mask	Mask of all the cores which will accept work from this group
>> + * @param mask_set	Every core has a set of 2 masks which can be set to accept work
>> + *     from 256 groups. At the time of get_work, cores can choose which mask_set
>> + *     to get work from. 'mask_set' values range from 0 to 3, where each of the
>> + *     two bits represents a mask set. Cores will be added to the mask set with
>> + *     corresponding bit set, and removed from the mask set with corresponding
>> + *     bit clear.
>> + * Note: cores can only accept work from SSO groups on the same node,
>> + * so the node number for the group is derived from the core number.
>> + */
>> +static inline void cvmx_sso_set_group_core_affinity(cvmx_xgrp_t xgrp,
>> +						    const struct cvmx_coremask *core_mask,
>> +						    u8 mask_set)
>> +{
>> +	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
>> +	int core;
>> +	int grp_index = xgrp.xgrp >> 6;
>> +	int bit_pos = xgrp.xgrp % 64;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +	cvmx_coremask_for_each_core(core, core_mask)
>> +	{
>> +		unsigned int node, ncore;
>> +		u64 reg_addr;
>> +
>> +		node = cvmx_coremask_core_to_node(core);
>> +		ncore = cvmx_coremask_core_on_node(core);
>> +
>> +		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 0, grp_index);
>> +		grp_msk.u64 = csr_rd_node(node, reg_addr);
>> +
>> +		if (mask_set & 1)
>> +			grp_msk.s.grp_msk |= (1ull << bit_pos);
>> +		else
>> +			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
>> +
>> +		csr_wr_node(node, reg_addr, grp_msk.u64);
>> +
>> +		reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(ncore, 1, grp_index);
>> +		grp_msk.u64 = csr_rd_node(node, reg_addr);
>> +
>> +		if (mask_set & 2)
>> +			grp_msk.s.grp_msk |= (1ull << bit_pos);
>> +		else
>> +			grp_msk.s.grp_msk &= ~(1ull << bit_pos);
>> +
>> +		csr_wr_node(node, reg_addr, grp_msk.u64);
>> +	}
>> +}
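
Usage might look like this untested sketch ('cmask' is a hypothetical
struct cvmx_coremask already prepared by the caller):

	cvmx_xgrp_t xgrp = { .xgrp = 16 };

	/* Let every core in cmask take work from group 16 via mask set 0 only */
	cvmx_sso_set_group_core_affinity(xgrp, &cmask, 1);
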
>> +
>> +/**
>> + * This function sets the priority and group affinity arbitration for each group.
>> + *
>> + * @param node		Node number
>> + * @param xgrp	Group 0 - 255 to apply mask parameters to
>> + * @param priority	Priority of the group relative to other groups
>> + *     0x0 - highest priority
>> + *     0x7 - lowest priority
>> + * @param weight	Cross-group arbitration weight to apply to this group.
>> + *     valid values are 1-63
>> + *     h/w default is 0x3f
>> + * @param affinity	Processor affinity arbitration weight to apply to this group.
>> + *     If zero, affinity is disabled.
>> + *     valid values are 0-15
>> + *     h/w default is 0xf.
>> + * @param modify_mask   mask of the parameters which need to be modified.
>> + *     enum cvmx_sso_group_modify_mask
>> + *     to modify only priority -- set bit0
>> + *     to modify only weight   -- set bit1
>> + *     to modify only affinity -- set bit2
>> + */
>> +static inline void cvmx_sso_set_group_priority(int node, cvmx_xgrp_t xgrp, int priority,
>> +					       int weight, int affinity,
>> +					       enum cvmx_sso_group_modify_mask modify_mask)
>> +{
>> +	cvmx_sso_grpx_pri_t grp_pri;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +	if (weight <= 0)
>> +		weight = 0x3f; /* Force HW default when out of range */
>> +
>> +	grp_pri.u64 = csr_rd_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp));
>> +	if (grp_pri.s.weight == 0)
>> +		grp_pri.s.weight = 0x3f;
>> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_PRIORITY)
>> +		grp_pri.s.pri = priority;
>> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_WEIGHT)
>> +		grp_pri.s.weight = weight;
>> +	if (modify_mask & CVMX_SSO_MODIFY_GROUP_AFFINITY)
>> +		grp_pri.s.affinity = affinity;
>> +	csr_wr_node(node, CVMX_SSO_GRPX_PRI(xgrp.xgrp), grp_pri.u64);
>> +}
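
For instance, changing only the arbitration weight of a group while leaving
priority and affinity untouched could look like (untested sketch; 'node'
and 'xgrp' come from the caller's context):

	cvmx_sso_set_group_priority(node, xgrp, 0, 32, 0,
				    CVMX_SSO_MODIFY_GROUP_WEIGHT);
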
>> +
>> +/**
>> + * Asynchronous work request.
>> + * Only works on CN78XX style SSO.
>> + *
>> + * Work is requested from the SSO unit, and should later be checked with
>> + * function cvmx_pow_work_response_async.
>> + * This function does NOT wait for previous tag switches to complete,
>> + * so the caller must ensure that there is not a pending tag switch.
>> + *
>> + * @param scr_addr Scratch memory address that response will be returned to,
>> + *     which is either a valid WQE, or a response with the invalid bit set.
>> + *     Byte address, must be 8 byte aligned.
>> + * @param xgrp  Group to receive work for (0-255).
>> + * @param wait
>> + *     1 to cause response to wait for work to become available (or timeout)
>> + *     0 to cause response to return immediately
>> + */
>> +static inline void cvmx_sso_work_request_grp_async_nocheck(int scr_addr, cvmx_xgrp_t xgrp,
>> +							   cvmx_pow_wait_t wait)
>> +{
>> +	cvmx_pow_iobdma_store_t data;
>> +	unsigned int node = cvmx_get_node_num();
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
>> +	}
>> +	/* scr_addr must be 8 byte aligned */
>> +	data.u64 = 0;
>> +	data.s_cn78xx.scraddr = scr_addr >> 3;
>> +	data.s_cn78xx.len = 1;
>> +	data.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +	data.s_cn78xx.grouped = 1;
>> +	data.s_cn78xx.index_grp_mask = (node << 8) | xgrp.xgrp;
>> +	data.s_cn78xx.wait = wait;
>> +	data.s_cn78xx.node = node;
>> +
>> +	cvmx_send_single(data.u64);
>> +}
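
The matching response would then be picked up from the same scratch line,
roughly as in this untested sketch (assuming the usual CVMX_SCR_SCRATCH
scratch offset from cvmx-scratch.h):

	cvmx_xgrp_t xgrp = { .xgrp = 5 };
	cvmx_wqe_t *wqe;

	cvmx_sso_work_request_grp_async_nocheck(CVMX_SCR_SCRATCH, xgrp,
						CVMX_POW_NO_WAIT);
	/* ... overlap other processing here ... */
	wqe = cvmx_pow_work_response_async(CVMX_SCR_SCRATCH);
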
>> +
>> +/**
>> + * Synchronous work request from the node-local SSO without verifying
>> + * pending tag switch. It requests work from a specific SSO group.
>> + *
>> + * @param lgrp The local group number (within the SSO of the node of the caller)
>> + *     from which to get the work.
>> + * @param wait When set, call stalls until work becomes available, or times out.
>> + *     If not set, returns immediately.
>> + *
>> + * @return Returns the WQE pointer from SSO.
>> + *     Returns NULL if no work was available.
>> + */
>> +static inline void *cvmx_sso_work_request_grp_sync_nocheck(unsigned int lgrp, cvmx_pow_wait_t wait)
>> +{
>> +	cvmx_pow_load_addr_t ptr;
>> +	cvmx_pow_tag_load_resp_t result;
>> +	unsigned int node = cvmx_get_node_num() & 3;
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		cvmx_warn_if(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE), "Not CN78XX");
>> +	}
>> +	ptr.u64 = 0;
>> +	ptr.swork_78xx.mem_region = CVMX_IO_SEG;
>> +	ptr.swork_78xx.is_io = 1;
>> +	ptr.swork_78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +	ptr.swork_78xx.node = node;
>> +	ptr.swork_78xx.grouped = 1;
>> +	ptr.swork_78xx.index = (lgrp & 0xff) | node << 8;
>> +	ptr.swork_78xx.wait = wait;
>> +
>> +	result.u64 = csr_rd(ptr.u64);
>> +	if (result.s_work.no_work)
>> +		return NULL;
>> +	else
>> +		return cvmx_phys_to_ptr(result.s_work.addr);
>> +}
>> +
>> +/**
>> + * Synchronous work request from the node-local SSO.
>> + * It requests work from a specific SSO group.
>> + * This function waits for any previous tag switch to complete before
>> + * requesting the new work.
>> + *
>> + * @param lgrp The node-local group number from which to get the work.
>> + * @param wait When set, call stalls until work becomes available, or times out.
>> + *     If not set, returns immediately.
>> + *
>> + * @return The WQE pointer or NULL, if work is not available.
>> + */
>> +static inline void *cvmx_sso_work_request_grp_sync(unsigned int lgrp, cvmx_pow_wait_t wait)
>> +{
>> +	cvmx_pow_tag_sw_wait();
>> +	return cvmx_sso_work_request_grp_sync_nocheck(lgrp, wait);
>> +}
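
A simple polling consumer built on this could be (untested sketch;
process_work() is a hypothetical handler):

	cvmx_wqe_t *wqe;

	/* Stall (with HW timeout) until group 5 on the local node has work */
	wqe = cvmx_sso_work_request_grp_sync(5, CVMX_POW_WAIT);
	if (wqe)
		process_work(wqe);
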
>> +
>> +/**
>> + * This function sets the group mask for a core.  The group mask bits
>> + * indicate which groups each core will accept work from.
>> + *
>> + * @param core_num	Processor core to apply mask to.
>> + * @param mask_set	7XXX has 2 sets of masks per core.
>> + *     Bit 0 represents the first mask set, bit 1 -- the second.
>> + * @param xgrp_mask	Group mask array.
>> + *     Total number of groups is divided into a number of
>> + *     64-bit mask sets. Each bit in the mask, if set, enables
>> + *     the core to accept work from the corresponding group.
>> + *
>> + * NOTE: Each core can be configured to accept work in accordance to both
>> + * mask sets, with the first having higher precedence over the second,
>> + * or to accept work in accordance to just one of the two mask sets.
>> + * The 'core_num' argument represents a processor core on any node
>> + * in a coherent multi-chip system.
>> + *
>> + * If the 'mask_set' argument is 3, both mask sets are configured
>> + * with the same value (which is not typically the intention),
>> + * so keep in mind the function needs to be called twice
>> + * to set a different value into each of the mask sets,
>> + * once with 'mask_set=1' and a second time with 'mask_set=2'.
>> + */
>> +static inline void cvmx_pow_set_xgrp_mask(u64 core_num, u8 mask_set, const u64 xgrp_mask[])
>> +{
>> +	unsigned int grp, node, core;
>> +	u64 reg_addr;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_warn_if(((mask_set < 1) || (mask_set > 3)), "Invalid mask set");
>> +	}
>> +
>> +	if ((mask_set < 1) || (mask_set > 3))
>> +		mask_set = 3;
>> +
>> +	node = cvmx_coremask_core_to_node(core_num);
>> +	core = cvmx_coremask_core_on_node(core_num);
>> +
>> +	for (grp = 0; grp < (cvmx_sso_num_xgrp() >> 6); grp++) {
>> +		if (mask_set & 1) {
>> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
>> +			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
>> +		}
>> +		if (mask_set & 2) {
>> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
>> +			csr_wr_node(node, reg_addr, xgrp_mask[grp]);
>> +		}
>> +	}
>> +}
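
So programming the two sets differently takes two calls, e.g. (untested
sketch; xgrp_mask0/xgrp_mask1 are hypothetical u64 arrays sized for
cvmx_sso_num_xgrp() / 64 entries):

	cvmx_pow_set_xgrp_mask(core, 1, xgrp_mask0);	/* mask set 0 */
	cvmx_pow_set_xgrp_mask(core, 2, xgrp_mask1);	/* mask set 1 */
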
>> +
>> +/**
>> + * This function gets the group mask for a core.  The group mask bits
>> + * indicate which groups each core will accept work from.
>> + *
>> + * @param core_num	Processor core to get the mask for.
>> + * @param mask_set	7XXX has 2 sets of masks per core.
>> + *     Bit 0 represents the first mask set, bit 1 -- the second.
>> + * @param xgrp_mask	Provide pointer to u64 mask[8] output array.
>> + *     Total number of groups is divided into a number of
>> + *     64-bit mask sets. Each bit in the mask indicates whether
>> + *     the core accepts work from the corresponding group.
>> + *
>> + * NOTE: Each core can be configured to accept work in accordance to both
>> + * mask sets, with the first having higher precedence over the second,
>> + * or to accept work in accordance to just one of the two mask sets.
>> + * The 'core_num' argument represents a processor core on any node
>> + * in a coherent multi-chip system.
>> + */
>> +static inline void cvmx_pow_get_xgrp_mask(u64 core_num, u8 mask_set, u64 *xgrp_mask)
>> +{
>> +	cvmx_sso_ppx_sx_grpmskx_t grp_msk;
>> +	unsigned int grp, node, core;
>> +	u64 reg_addr;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_warn_if(mask_set != 1 && mask_set != 2, "Invalid mask set");
>> +	}
>> +
>> +	node = cvmx_coremask_core_to_node(core_num);
>> +	core = cvmx_coremask_core_on_node(core_num);
>> +
>> +	for (grp = 0; grp < cvmx_sso_num_xgrp() >> 6; grp++) {
>> +		if (mask_set & 1) {
>> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 0, grp);
>> +			grp_msk.u64 = csr_rd_node(node, reg_addr);
>> +			xgrp_mask[grp] = grp_msk.s.grp_msk;
>> +		}
>> +		if (mask_set & 2) {
>> +			reg_addr = CVMX_SSO_PPX_SX_GRPMSKX(core, 1, grp);
>> +			grp_msk.u64 = csr_rd_node(node, reg_addr);
>> +			xgrp_mask[grp] = grp_msk.s.grp_msk;
>> +		}
>> +	}
>> +}
>> +
>> +/**
>> + * Executes SSO SWTAG command.
>> + * This is similar to the cvmx_pow_tag_sw() function, but uses linear
>> + * (vs. integrated group-qos) group index.
>> + */
>> +static inline void cvmx_pow_tag_sw_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
>> +					int node)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +
>> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
>> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
>> +		return;
>> +	}
>> +	CVMX_SYNCWS;
>> +	cvmx_pow_tag_sw_wait();
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called with NULL tag\n", __func__);
>> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
>> +			     "%s called to perform a tag switch to the same tag\n", __func__);
>> +		cvmx_warn_if(
>> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
>> +			__func__);
>> +	}
>> +	wqp->word1.cn78xx.tag = tag;
>> +	wqp->word1.cn78xx.tag_type = tag_type;
>> +	CVMX_SYNCWS;
>> +
>> +	tag_req.u64 = 0;
>> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG;
>> +	tag_req.s_cn78xx_other.type = tag_type;
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +	ptr.s_cn78xx.is_io = 1;
>> +	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +	ptr.s_cn78xx.node = node;
>> +	ptr.s_cn78xx.tag = tag;
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Executes SSO SWTAG_FULL command.
>> + * This is similar to the cvmx_pow_tag_sw_full() function, but
>> + * uses linear (vs. integrated group-qos) group index.
>> + */
>> +static inline void cvmx_pow_tag_sw_full_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
>> +					     u8 xgrp, int node)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +	u16 gxgrp;
>> +
>> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
>> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
>> +		return;
>> +	}
>> +	/* Ensure that there is not a pending tag switch, as a tag switch cannot be
>> +	 * started if a previous switch is still pending. */
>> +	CVMX_SYNCWS;
>> +	cvmx_pow_tag_sw_wait();
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if((current_tag.tag_type == tag_type) && (current_tag.tag == tag),
>> +			     "%s called to perform a tag switch to the same tag\n", __func__);
>> +		cvmx_warn_if(
>> +			tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			"%s called to perform a tag switch to NULL. Use cvmx_pow_tag_sw_null() instead\n",
>> +			__func__);
>> +		if ((wqp != cvmx_phys_to_ptr(0x80)) && cvmx_pow_get_current_wqp())
>> +			cvmx_warn_if(wqp != cvmx_pow_get_current_wqp(),
>> +				     "%s passed WQE(%p) doesn't match the address in the POW(%p)\n",
>> +				     __func__, wqp, cvmx_pow_get_current_wqp());
>> +	}
>> +	gxgrp = node;
>> +	gxgrp = gxgrp << 8 | xgrp;
>> +	wqp->word1.cn78xx.grp = gxgrp;
>> +	wqp->word1.cn78xx.tag = tag;
>> +	wqp->word1.cn78xx.tag_type = tag_type;
>> +	CVMX_SYNCWS;
>> +
>> +	tag_req.u64 = 0;
>> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_FULL;
>> +	tag_req.s_cn78xx_other.type = tag_type;
>> +	tag_req.s_cn78xx_other.grp = gxgrp;
>> +	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +	ptr.s_cn78xx.is_io = 1;
>> +	ptr.s_cn78xx.did = CVMX_OCT_DID_TAG_SWTAG;
>> +	ptr.s_cn78xx.node = node;
>> +	ptr.s_cn78xx.tag = tag;
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/**
>> + * Submits work to an SSO group on any OCI node.
>> + * This function updates the work queue entry in DRAM to match
>> + * the arguments given.
>> + * Note that the tag provided is for the work queue entry submitted,
>> + * and is unrelated to the tag that the core currently holds.
>> + *
>> + * @param wqp pointer to work queue entry to submit.
>> + * This entry is updated to match the other parameters
>> + * @param tag tag value to be assigned to work queue entry
>> + * @param tag_type type of tag
>> + * @param xgrp native CN78XX group in the range 0..255
>> + * @param node The OCI node number for the target group
>> + *
>> + * When this function is called on a model prior to CN78XX, which does
>> + * not support OCI nodes, the 'node' argument is ignored, and the 'xgrp'
>> + * parameter is converted into 'qos' (the lower 3 bits) and 'grp' (the higher
>> + * 5 bits), following the backward-compatibility scheme of translating
>> + * between new and old style group numbers.
>> + */
>> +static inline void cvmx_pow_work_submit_node(cvmx_wqe_t *wqp, u32 tag, cvmx_pow_tag_type_t tag_type,
>> +					     u8 xgrp, u8 node)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +	u16 group;
>> +
>> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
>> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
>> +		return;
>> +	}
>> +	group = node;
>> +	group = group << 8 | xgrp;
>> +	wqp->word1.cn78xx.tag = tag;
>> +	wqp->word1.cn78xx.tag_type = tag_type;
>> +	wqp->word1.cn78xx.grp = group;
>> +	CVMX_SYNCWS;
>> +
>> +	tag_req.u64 = 0;
>> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_ADDWQ;
>> +	tag_req.s_cn78xx_other.type = tag_type;
>> +	tag_req.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
>> +	tag_req.s_cn78xx_other.grp = group;
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s_cn78xx.did = 0x66; // CVMX_OCT_DID_TAG_TAG6;
>> +	ptr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +	ptr.s_cn78xx.is_io = 1;
>> +	ptr.s_cn78xx.node = node;
>> +	ptr.s_cn78xx.tag = tag;
>> +
>> +	/* SYNC write to memory before the work submit.  This is necessary
>> +	 * as POW may read values from DRAM at this time */
>> +	CVMX_SYNCWS;
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
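
E.g., submitting a WQE to native group 40 on OCI node 1 (untested sketch;
'my_wqe' is again a hypothetical WQE owned by the caller):

	cvmx_pow_work_submit_node(my_wqe, 0x1234, CVMX_POW_TAG_TYPE_ORDERED,
				  40, 1);
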
>> +
>> +/**
>> + * Executes the SSO SWTAG_DESCHED operation.
>> + * This is similar to the cvmx_pow_tag_sw_desched() function, but
>> + * uses linear (vs. unified group-qos) group index.
>> + */
>> +static inline void cvmx_pow_tag_sw_desched_node(cvmx_wqe_t *wqe, u32 tag,
>> +						cvmx_pow_tag_type_t tag_type, u8 xgrp, u64 no_sched,
>> +						u8 node)
>> +{
>> +	union cvmx_pow_tag_req_addr ptr;
>> +	cvmx_pow_tag_req_t tag_req;
>> +	u16 group;
>> +
>> +	if (cvmx_unlikely(!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))) {
>> +		debug("ERROR: %s is supported on OCTEON3 only\n", __func__);
>> +		return;
>> +	}
>> +	/* Need to make sure any writes to the work queue entry are complete */
>> +	CVMX_SYNCWS;
>> +	/*
>> +	 * Ensure that there is not a pending tag switch, as a tag switch cannot
>> +	 * be started if a previous switch is still pending.
>> +	 */
>> +	cvmx_pow_tag_sw_wait();
>> +
>> +	if (CVMX_ENABLE_POW_CHECKS) {
>> +		cvmx_pow_tag_info_t current_tag;
>> +
>> +		__cvmx_pow_warn_if_pending_switch(__func__);
>> +		current_tag = cvmx_pow_get_current_tag();
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL_NULL,
>> +			     "%s called with NULL_NULL tag\n", __func__);
>> +		cvmx_warn_if(current_tag.tag_type == CVMX_POW_TAG_TYPE_NULL,
>> +			     "%s called with NULL tag. Deschedule not allowed from NULL state\n",
>> +			     __func__);
>> +		cvmx_warn_if((current_tag.tag_type != CVMX_POW_TAG_TYPE_ATOMIC) &&
>> +			     (tag_type != CVMX_POW_TAG_TYPE_ATOMIC),
>> +			     "%s called where neither the before nor after tag is ATOMIC\n",
>> +			     __func__);
>> +	}
>> +	group = node;
>> +	group = group << 8 | xgrp;
>> +	wqe->word1.cn78xx.tag = tag;
>> +	wqe->word1.cn78xx.tag_type = tag_type;
>> +	wqe->word1.cn78xx.grp = group;
>> +	CVMX_SYNCWS;
>> +
>> +	tag_req.u64 = 0;
>> +	tag_req.s_cn78xx_other.op = CVMX_POW_TAG_OP_SWTAG_DESCH;
>> +	tag_req.s_cn78xx_other.type = tag_type;
>> +	tag_req.s_cn78xx_other.grp = group;
>> +	tag_req.s_cn78xx_other.no_sched = no_sched;
>> +
>> +	ptr.u64 = 0;
>> +	ptr.s.mem_region = CVMX_IO_SEG;
>> +	ptr.s.is_io = 1;
>> +	ptr.s.did = CVMX_OCT_DID_TAG_TAG3;
>> +	ptr.s_cn78xx.node = node;
>> +	ptr.s_cn78xx.tag = tag;
>> +	cvmx_write_io(ptr.u64, tag_req.u64);
>> +}
>> +
>> +/* Executes the UPD_WQP_GRP SSO operation.
>> + *
>> + * @param wqp  Pointer to the new work queue entry to switch to.
>> + * @param xgrp SSO group in the range 0..255
>> + *
>> + * NOTE: The operation can be performed only on the local node.
>> + */
>> +static inline void cvmx_sso_update_wqp_group(cvmx_wqe_t *wqp, u8 xgrp)
>> +{
>> +	union cvmx_pow_tag_req_addr addr;
>> +	cvmx_pow_tag_req_t data;
>> +	int node = cvmx_get_node_num();
>> +	int group = node << 8 | xgrp;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		debug("ERROR: %s is not supported on this chip\n", __func__);
>> +		return;
>> +	}
>> +	wqp->word1.cn78xx.grp = group;
>> +	CVMX_SYNCWS;
>> +
>> +	data.u64 = 0;
>> +	data.s_cn78xx_other.op = CVMX_POW_TAG_OP_UPDATE_WQP_GRP;
>> +	data.s_cn78xx_other.grp = group;
>> +	data.s_cn78xx_other.wqp = cvmx_ptr_to_phys(wqp);
>> +
>> +	addr.u64 = 0;
>> +	addr.s_cn78xx.mem_region = CVMX_IO_SEG;
>> +	addr.s_cn78xx.is_io = 1;
>> +	addr.s_cn78xx.did = CVMX_OCT_DID_TAG_TAG1;
>> +	addr.s_cn78xx.node = node;
>> +	cvmx_write_io(addr.u64, data.u64);
>> +}
>> +
>> +/******************************************************************************/
>> +/* Define usage of bits within the 32 bit tag values.                         */
>> +/******************************************************************************/
>> +/*
>> + * Number of bits of the tag used by software.  The SW bits
>> + * are always a contiguous block of the high bits, starting at bit 31.
>> + * The hardware bits are always the low bits.  By default, the top 8 bits
>> + * of the tag are reserved for software, and the low 24 are set by the IPD unit.
>> + */
>> +#define CVMX_TAG_SW_BITS  (8)
>> +#define CVMX_TAG_SW_SHIFT (32 - CVMX_TAG_SW_BITS)
>> +
>> +/* Below is the list of values for the top 8 bits of the tag. */
>> +/*
>> + * Tag values with top byte of this value are reserved for internal
>> + * executive uses
>> + */
>> +#define CVMX_TAG_SW_BITS_INTERNAL 0x1
>> +
>> +/*
>> + * The executive divides the remaining 24 bits as follows:
>> + * the upper 8 bits (bits 23 - 16 of the tag) define a subgroup
>> + * the lower 16 bits (bits 15 - 0 of the tag) define the value within
>> + * the subgroup. Note that this section describes the format of tags generated
>> + * by software - refer to the hardware documentation for a description of the
>> + * tag values generated by the packet input hardware.
>> + * Subgroups are defined here
>> + */
>> +
>> +/* Mask for the value portion of the tag */
>> +#define CVMX_TAG_SUBGROUP_MASK	0xFFFF
>> +#define CVMX_TAG_SUBGROUP_SHIFT 16
>> +#define CVMX_TAG_SUBGROUP_PKO	0x1
>> +
>> +/* End of executive tag subgroup definitions */
>> +
>> +/* The remaining software bit values 0x2 - 0xff are available
>> + * for application use */
>> +
>> +/**
>> + * This function creates a 32 bit tag value from the two values provided.
>> + *
>> + * @param sw_bits The upper bits (number depends on configuration) are set
>> + *     to this value.  The remainder of bits are set by the hw_bits parameter.
>> + * @param hw_bits The lower bits (number depends on configuration) are set
>> + *     to this value.  The remainder of bits are set by the sw_bits parameter.
>> + *
>> + * @return 32 bit value of the combined hw and sw bits.
>> + */
>> +static inline u32 cvmx_pow_tag_compose(u64 sw_bits, u64 hw_bits)
>> +{
>> +	return (((sw_bits & cvmx_build_mask(CVMX_TAG_SW_BITS)) << CVMX_TAG_SW_SHIFT) |
>> +		(hw_bits & cvmx_build_mask(32 - CVMX_TAG_SW_BITS)));
>> +}
>> +
>> +/**
>> + * Extracts the bits allocated for software use from the tag
>> + *
>> + * @param tag    32 bit tag value
>> + *
>> + * @return N bit software tag value, where N is configurable with
>> + *     the CVMX_TAG_SW_BITS define
>> + */
>> +static inline u32 cvmx_pow_tag_get_sw_bits(u64 tag)
>> +{
>> +	return ((tag >> (32 - CVMX_TAG_SW_BITS)) & cvmx_build_mask(CVMX_TAG_SW_BITS));
>> +}
>> +
>> +/**
>> + *
>> + * Extracts the bits allocated for hardware use from the tag
>> + *
>> + * @param tag    32 bit tag value
>> + *
>> + * @return (32 - N) bit hardware tag value, where N is configurable with
>> + *     the CVMX_TAG_SW_BITS define
>> + */
>> +static inline u32 cvmx_pow_tag_get_hw_bits(u64 tag)
>> +{
>> +	return (tag & cvmx_build_mask(32 - CVMX_TAG_SW_BITS));
>> +}
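
With the default CVMX_TAG_SW_BITS of 8, these helpers just split the tag at
bit 24; a quick worked example (untested sketch):

	u32 tag = cvmx_pow_tag_compose(CVMX_TAG_SW_BITS_INTERNAL, 0x123456);

	/* tag == 0x01123456, and the extractors undo the composition:
	 * cvmx_pow_tag_get_sw_bits(tag) == 0x01
	 * cvmx_pow_tag_get_hw_bits(tag) == 0x123456
	 */
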
>> +
>> +static inline u64 cvmx_sso3_get_wqe_count(int node)
>> +{
>> +	cvmx_sso_grpx_aq_cnt_t aq_cnt;
>> +	unsigned int grp = 0;
>> +	u64 cnt = 0;
>> +
>> +	for (grp = 0; grp < cvmx_sso_num_xgrp(); grp++) {
>> +		aq_cnt.u64 = csr_rd_node(node, CVMX_SSO_GRPX_AQ_CNT(grp));
>> +		cnt += aq_cnt.s.aq_cnt;
>> +	}
>> +	return cnt;
>> +}
>> +
>> +static inline u64 cvmx_sso_get_total_wqe_count(void)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		int node = cvmx_get_node_num();
>> +
>> +		return cvmx_sso3_get_wqe_count(node);
>> +	} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
>> +		cvmx_sso_iq_com_cnt_t sso_iq_com_cnt;
>> +
>> +		sso_iq_com_cnt.u64 = csr_rd(CVMX_SSO_IQ_COM_CNT);
>> +		return (sso_iq_com_cnt.s.iq_cnt);
>> +	} else {
>> +		cvmx_pow_iq_com_cnt_t pow_iq_com_cnt;
>> +
>> +		pow_iq_com_cnt.u64 = csr_rd(CVMX_POW_IQ_COM_CNT);
>> +		return (pow_iq_com_cnt.s.iq_cnt);
>> +	}
>> +}
>> +
>> +/**
>> + * Store the current POW internal state into the supplied
>> + * buffer. It is recommended that you pass a buffer of at least
>> + * 128KB. The format of the capture may change based on SDK
>> + * version and Octeon chip.
>> + *
>> + * @param buffer Buffer to store capture into
>> + * @param buffer_size The size of the supplied buffer
>> + *
>> + * @return Zero on success, negative on failure
>> + */
>> +int cvmx_pow_capture(void *buffer, int buffer_size);
>> +
>> +/**
>> + * Dump a POW capture to the console in a human readable format.
>> + *
>> + * @param buffer POW capture from cvmx_pow_capture()
>> + * @param buffer_size Size of the buffer
>> + */
>> +void cvmx_pow_display(void *buffer, int buffer_size);
>> +
>> +/**
>> + * Return the number of POW entries supported by this chip
>> + *
>> + * @return Number of POW entries
>> + */
>> +int cvmx_pow_get_num_entries(void);
>> +int cvmx_pow_get_dump_size(void);
>> +
>> +/**
>> + * This will allocate count number of SSO groups on the specified node to the
>> + * calling application. These groups will be for exclusive use of the
>> + * application until they are freed.
>> + * @param node The numa node for the allocation.
>> + * @param base_group Pointer to the initial group, -1 to allocate anywhere.
>> + * @param count  The number of consecutive groups to allocate.
>> + * @return 0 on success and -1 on failure.
>> + */
>> +int cvmx_sso_reserve_group_range(int node, int *base_group, int count);
>> +#define cvmx_sso_allocate_group_range cvmx_sso_reserve_group_range
>> +int cvmx_sso_reserve_group(int node);
>> +#define cvmx_sso_allocate_group cvmx_sso_reserve_group
>> +int cvmx_sso_release_group_range(int node, int base_group, int count);
>> +int cvmx_sso_release_group(int node, int group);
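
Typical use of the reservation API might be (untested sketch):

	int base = -1;

	/* Reserve 4 consecutive SSO groups anywhere on node 0 */
	if (cvmx_sso_reserve_group_range(0, &base, 4) == 0) {
		/* ... use groups base .. base + 3 ... */
		cvmx_sso_release_group_range(0, base, 4);
	}
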
>> +
>> +/**
>> + * Show integrated SSO configuration.
>> + *
>> + * @param node	   node number
>> + */
>> +int cvmx_sso_config_dump(unsigned int node);
>> +
>> +/**
>> + * Show integrated SSO statistics.
>> + *
>> + * @param node	   node number
>> + */
>> +int cvmx_sso_stats_dump(unsigned int node);
>> +
>> +/**
>> + * Clear integrated SSO statistics.
>> + *
>> + * @param node	   node number
>> + */
>> +int cvmx_sso_stats_clear(unsigned int node);
>> +
>> +/**
>> + * Show SSO core-group affinity and priority per node (multi-node systems)
>> + */
>> +void cvmx_pow_mask_priority_dump_node(unsigned int node, struct cvmx_coremask *avail_coremask);
>> +
>> +/**
>> + * Show POW/SSO core-group affinity and priority (legacy, single-node systems)
>> + */
>> +static inline void cvmx_pow_mask_priority_dump(struct cvmx_coremask *avail_coremask)
>> +{
>> +	cvmx_pow_mask_priority_dump_node(0 /*node */, avail_coremask);
>> +}
>> +
>> +/**
>> + * Show SSO performance counters (multi-node systems)
>> + */
>> +void cvmx_pow_show_perf_counters_node(unsigned int node);
>> +
>> +/**
>> + * Show POW/SSO performance counters (legacy, single-node systems)
>> + */
>> +static inline void cvmx_pow_show_perf_counters(void)
>> +{
>> +	cvmx_pow_show_perf_counters_node(0 /*node */);
>> +}
>> +
>> +#endif /* __CVMX_POW_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-qlm.h b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
>> new file mode 100644
>> index 000000000000..19915eb82c51
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-qlm.h
>> @@ -0,0 +1,304 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __CVMX_QLM_H__
>> +#define __CVMX_QLM_H__
>> +
>> +/*
>> + * Interface 0 on the 78xx can be connected to qlm 0 or qlm 2. When interface
>> + * 0 is connected to qlm 0, this macro must be set to 0. When interface 0 is
>> + * connected to qlm 2, this macro must be set to 1.
>> + */
>> +#define MUX_78XX_IFACE0 0
>> +
>> +/*
>> + * Interface 1 on the 78xx can be connected to qlm 1 or qlm 3. When interface
>> + * 1 is connected to qlm 1, this macro must be set to 0. When interface 1 is
>> + * connected to qlm 3, this macro must be set to 1.
>> + */
>> +#define MUX_78XX_IFACE1 0
>> +
>> +/* Uncomment this line to print QLM JTAG state */
>> +/* #define CVMX_QLM_DUMP_STATE 1 */
>> +
>> +typedef struct {
>> +	const char *name;
>> +	int stop_bit;
>> +	int start_bit;
>> +} __cvmx_qlm_jtag_field_t;
>> +
>> +/**
>> + * Return the number of QLMs supported by the chip
>> + *
>> + * @return  Number of QLMs
>> + */
>> +int cvmx_qlm_get_num(void);
>> +
>> +/**
>> + * Return the qlm number based on the interface
>> + *
>> + * @param xiface  Interface to look up
>> + */
>> +int cvmx_qlm_interface(int xiface);
>> +
>> +/**
>> + * Return the qlm number for a port in the interface
>> + *
>> + * @param xiface  interface to look up
>> + * @param index  index in an interface
>> + *
>> + * @return the qlm number based on the xiface
>> + */
>> +int cvmx_qlm_lmac(int xiface, int index);
>> +
>> +/**
>> + * Return if only DLM5/DLM6/DLM5+DLM6 is used by BGX
>> + *
>> + * @param bgx  BGX to search for.
>> + *
>> + * @return muxes used 0 = DLM5+DLM6, 1 = DLM5, 2 = DLM6.
>> + */
>> +int cvmx_qlm_mux_interface(int bgx);
>> +
>> +/**
>> + * Return number of lanes for a given qlm
>> + *
>> + * @param qlm QLM block to query
>> + *
>> + * @return  Number of lanes
>> + */
>> +int cvmx_qlm_get_lanes(int qlm);
>> +
>> +/**
>> + * Get the QLM JTAG fields based on Octeon model on the supported chips.
>> + *
>> + * @return  qlm_jtag_field_t structure
>> + */
>> +const __cvmx_qlm_jtag_field_t *cvmx_qlm_jtag_get_field(void);
>> +
>> +/**
>> + * Get the QLM JTAG length by going through qlm_jtag_field for each
>> + * Octeon model that is supported
>> + *
>> + * @return return the length.
>> + */
>> +int cvmx_qlm_jtag_get_length(void);
>> +
>> +/**
>> + * Initialize the QLM layer
>> + */
>> +void cvmx_qlm_init(void);
>> +
>> +/**
>> + * Get a field in a QLM JTAG chain
>> + *
>> + * @param qlm    QLM to get
>> + * @param lane   Lane in QLM to get
>> + * @param name   String name of field
>> + *
>> + * @return JTAG field value
>> + */
>> +u64 cvmx_qlm_jtag_get(int qlm, int lane, const char *name);
>> +
>> +/**
>> + * Set a field in a QLM JTAG chain
>> + *
>> + * @param qlm    QLM to set
>> + * @param lane   Lane in QLM to set, or -1 for all lanes
>> + * @param name   String name of field
>> + * @param value  Value of the field
>> + */
>> +void cvmx_qlm_jtag_set(int qlm, int lane, const char *name, u64 value);
>> +
>> +/**
>> + * Errata G-16094: QLM Gen2 Equalizer Default Setting Change.
>> + * CN68XX pass 1.x and CN66XX pass 1.x QLM tweak. This function tweaks the
>> + * JTAG setting for the QLMs to run better at 5 and 6.25 GHz.
>> + */
>> +void __cvmx_qlm_speed_tweak(void);
>> +
>> +/**
>> + * Errata G-16174: QLM Gen2 PCIe IDLE DAC change.
>> + * CN68XX pass 1.x, CN66XX pass 1.x and CN63XX pass 1.0-2.2 QLM tweak.
>> + * This function tweaks the JTAG setting for the QLMs for PCIe to run better.
>> + */
>> +void __cvmx_qlm_pcie_idle_dac_tweak(void);
>> +
>> +void __cvmx_qlm_pcie_cfg_rxd_set_tweak(int qlm, int lane);
>> +
>> +/**
>> + * Get the speed (Gbaud) of the QLM in MHz.
>> + *
>> + * @param qlm    QLM to examine
>> + *
>> + * @return Speed in MHz
>> + */
>> +int cvmx_qlm_get_gbaud_mhz(int qlm);
>> +/**
>> + * Get the speed (Gbaud) of the QLM in MHz on a specific node.
>> + *
>> + * @param node   Target QLM node
>> + * @param qlm    QLM to examine
>> + *
>> + * @return Speed in MHz
>> + */
>> +int cvmx_qlm_get_gbaud_mhz_node(int node, int qlm);
>> +
>> +enum cvmx_qlm_mode {
>> +	CVMX_QLM_MODE_DISABLED = -1,
>> +	CVMX_QLM_MODE_SGMII = 1,
>> +	CVMX_QLM_MODE_XAUI,
>> +	CVMX_QLM_MODE_RXAUI,
>> +	CVMX_QLM_MODE_PCIE,	/* gen3 / gen2 / gen1 */
>> +	CVMX_QLM_MODE_PCIE_1X2, /* 1x2 gen2 / gen1 */
>> +	CVMX_QLM_MODE_PCIE_2X1, /* 2x1 gen2 / gen1 */
>> +	CVMX_QLM_MODE_PCIE_1X1, /* 1x1 gen2 / gen1 */
>> +	CVMX_QLM_MODE_SRIO_1X4, /* 1x4 short / long */
>> +	CVMX_QLM_MODE_SRIO_2X2, /* 2x2 short / long */
>> +	CVMX_QLM_MODE_SRIO_4X1, /* 4x1 short / long */
>> +	CVMX_QLM_MODE_ILK,
>> +	CVMX_QLM_MODE_QSGMII,
>> +	CVMX_QLM_MODE_SGMII_SGMII,
>> +	CVMX_QLM_MODE_SGMII_DISABLED,
>> +	CVMX_QLM_MODE_DISABLED_SGMII,
>> +	CVMX_QLM_MODE_SGMII_QSGMII,
>> +	CVMX_QLM_MODE_QSGMII_QSGMII,
>> +	CVMX_QLM_MODE_QSGMII_DISABLED,
>> +	CVMX_QLM_MODE_DISABLED_QSGMII,
>> +	CVMX_QLM_MODE_QSGMII_SGMII,
>> +	CVMX_QLM_MODE_RXAUI_1X2,
>> +	CVMX_QLM_MODE_SATA_2X1,
>> +	CVMX_QLM_MODE_XLAUI,
>> +	CVMX_QLM_MODE_XFI,
>> +	CVMX_QLM_MODE_10G_KR,
>> +	CVMX_QLM_MODE_40G_KR4,
>> +	CVMX_QLM_MODE_PCIE_1X8, /* 1x8 gen3 / gen2 / gen1 */
>> +	CVMX_QLM_MODE_RGMII_SGMII,
>> +	CVMX_QLM_MODE_RGMII_XFI,
>> +	CVMX_QLM_MODE_RGMII_10G_KR,
>> +	CVMX_QLM_MODE_RGMII_RXAUI,
>> +	CVMX_QLM_MODE_RGMII_XAUI,
>> +	CVMX_QLM_MODE_RGMII_XLAUI,
>> +	CVMX_QLM_MODE_RGMII_40G_KR4,
>> +	CVMX_QLM_MODE_MIXED,		/* BGX2 is mixed mode, DLM5(SGMII) & DLM6(XFI) */
>> +	CVMX_QLM_MODE_SGMII_2X1,	/* Configure BGX2 separate for DLM5 & DLM6 */
>> +	CVMX_QLM_MODE_10G_KR_1X2,	/* Configure BGX2 separate for DLM5 & DLM6 */
>> +	CVMX_QLM_MODE_XFI_1X2,		/* Configure BGX2 separate for DLM5 & DLM6 */
>> +	CVMX_QLM_MODE_RGMII_SGMII_1X1,	/* Configure BGX2, applies to DLM5 */
>> +	CVMX_QLM_MODE_RGMII_SGMII_2X1,	/* Configure BGX2, applies to DLM6 */
>> +	CVMX_QLM_MODE_RGMII_10G_KR_1X1, /* Configure BGX2, applies to DLM6 */
>> +	CVMX_QLM_MODE_RGMII_XFI_1X1,	/* Configure BGX2, applies to DLM6 */
>> +	CVMX_QLM_MODE_SDL,		/* RMAC Pipe */
>> +	CVMX_QLM_MODE_CPRI,		/* RMAC */
>> +	CVMX_QLM_MODE_OCI
>> +};
>> +
>> +enum cvmx_gmx_inf_mode {
>> +	CVMX_GMX_INF_MODE_DISABLED = 0,
>> +	CVMX_GMX_INF_MODE_SGMII = 1,  /* Other interface can be SGMII or QSGMII */
>> +	CVMX_GMX_INF_MODE_QSGMII = 2, /* Other interface can be SGMII or QSGMII */
>> +	CVMX_GMX_INF_MODE_RXAUI = 3,  /* Only interface 0, interface 1 must be DISABLED */
>> +};
>> +
>> +/**
>> + * Eye diagram captures are stored in the following structure
>> + */
>> +typedef struct {
>> +	int width;	   /* Width in the x direction (time) */
>> +	int height;	   /* Height in the y direction (voltage) */
>> +	u32 data[64][128]; /* Error count at location, saturates at max */
>> +} cvmx_qlm_eye_t;
>> +
>> +/**
>> + * These apply to DLM1 and DLM2 if it is not in SATA mode
>> + * Manual refers to lanes as follows:
>> + *  DLM 0 lane 0 == GSER0 lane 0
>> + *  DLM 0 lane 1 == GSER0 lane 1
>> + *  DLM 1 lane 2 == GSER1 lane 0
>> + *  DLM 1 lane 3 == GSER1 lane 1
>> + *  DLM 2 lane 4 == GSER2 lane 0
>> + *  DLM 2 lane 5 == GSER2 lane 1
>> + */
>> +enum cvmx_pemx_cfg_mode {
>> +	CVMX_PEM_MD_GEN2_2LANE = 0, /* Valid for PEM0(DLM1), PEM1(DLM2) */
>> +	CVMX_PEM_MD_GEN2_1LANE = 1, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
>> +	CVMX_PEM_MD_GEN2_4LANE = 2, /* Valid for PEM0(DLM1-2) */
>> +	/* Reserved */
>> +	CVMX_PEM_MD_GEN1_2LANE = 4, /* Valid for PEM0(DLM1), PEM1(DLM2) */
>> +	CVMX_PEM_MD_GEN1_1LANE = 5, /* Valid for PEM0(DLM1.0), PEM1(DLM1.1,DLM2.0), PEM2(DLM2.1) */
>> +	CVMX_PEM_MD_GEN1_4LANE = 6, /* Valid for PEM0(DLM1-2) */
>> +	/* Reserved */
>> +};
>> +
>> +/*
>> + * Read QLM and return mode.
>> + */
>> +enum cvmx_qlm_mode cvmx_qlm_get_mode(int qlm);
>> +enum cvmx_qlm_mode cvmx_qlm_get_mode_cn78xx(int node, int qlm);
>> +enum cvmx_qlm_mode cvmx_qlm_get_dlm_mode(int dlm_mode, int interface);
>> +void __cvmx_qlm_set_mult(int qlm, int baud_mhz, int old_multiplier);
>> +
>> +void cvmx_qlm_display_registers(int qlm);
>> +
>> +int cvmx_qlm_measure_clock(int qlm);
>> +
>> +/**
>> + * Measure the reference clock of a QLM on a multi-node setup
>> + *
>> + * @param node   node to measure
>> + * @param qlm    QLM to measure
>> + *
>> + * @return Clock rate in Hz
>> + */
>> +int cvmx_qlm_measure_clock_node(int node, int qlm);
>> +
>> +/*
>> + * Perform RX equalization on a QLM
>> + *
>> + * @param node	Node the QLM is on
>> + * @param qlm	QLM to perform RX equalization on
>> + * @param lane	Lane to use, or -1 for all lanes
>> + *
>> + * @return Zero on success, negative if any lane failed RX equalization
>> + */
>> +int __cvmx_qlm_rx_equalization(int node, int qlm, int lane);
>> +
>> +/**
>> + * Errata GSER-27882 - GSER 10GBASE-KR Transmit Equalizer
>> + * Training may not update PHY Tx Taps. This function is not static
>> + * so we can share it with BGX KR
>> + *
>> + * @param node	Node to apply errata workaround
>> + * @param qlm	QLM to apply errata workaround
>> + * @param lane	Lane to apply the errata
>> + */
>> +int cvmx_qlm_gser_errata_27882(int node, int qlm, int lane);
>> +
>> +void cvmx_qlm_gser_errata_25992(int node, int qlm);
>> +
>> +#ifdef CVMX_DUMP_GSER
>> +/**
>> + * Dump GSER configuration for node 0
>> + */
>> +int cvmx_dump_gser_config(unsigned int gser);
>> +/**
>> + * Dump GSER status for node 0
>> + */
>> +int cvmx_dump_gser_status(unsigned int gser);
>> +/**
>> + * Dump GSER configuration
>> + */
>> +int cvmx_dump_gser_config_node(unsigned int node, unsigned int gser);
>> +/**
>> + * Dump GSER status
>> + */
>> +int cvmx_dump_gser_status_node(unsigned int node, unsigned int gser);
>> +#endif
>> +
>> +int cvmx_qlm_eye_display(int node, int qlm, int qlm_lane, int format,
>> +			 const cvmx_qlm_eye_t *eye);
>> +
>> +void cvmx_prbs_process_cmd(int node, int qlm, int mode);
>> +
>> +#endif /* __CVMX_QLM_H__ */
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-scratch.h b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
>> new file mode 100644
>> index 000000000000..d567a8453b7a
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-scratch.h
>> @@ -0,0 +1,113 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * This file provides support for the processor local scratch memory.
>> + * Scratch memory is byte addressable - all addresses are byte
>> + * addresses.
>> + */
>> +
>> +#ifndef __CVMX_SCRATCH_H__
>> +#define __CVMX_SCRATCH_H__
>> +
>> +/* Note: This define must be a long, not a long long, in order to
>> + * compile without warnings for both 32-bit and 64-bit.
>> + */
>> +#define CVMX_SCRATCH_BASE (-32768l) /* 0xffffffffffff8000 */
>> +
>> +/* Scratch line for LMTST/LMTDMA on Octeon3 models */
>> +#ifdef CVMX_CAVIUM_OCTEON3
>> +#define CVMX_PKO_LMTLINE 2ull
>> +#endif
>> +
>> +/**
>> + * Reads an 8 bit value from the processor local scratchpad memory.
>> + *
>> + * @param address byte address to read from
>> + *
>> + * @return value read
>> + */
>> +static inline u8 cvmx_scratch_read8(u64 address)
>> +{
>> +	return *CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address);
>> +}
>> +
>> +/**
>> + * Reads a 16 bit value from the processor local scratchpad memory.
>> + *
>> + * @param address byte address to read from
>> + *
>> + * @return value read
>> + */
>> +static inline u16 cvmx_scratch_read16(u64 address)
>> +{
>> +	return *CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address);
>> +}
>> +
>> +/**
>> + * Reads a 32 bit value from the processor local scratchpad memory.
>> + *
>> + * @param address byte address to read from
>> + *
>> + * @return value read
>> + */
>> +static inline u32 cvmx_scratch_read32(u64 address)
>> +{
>> +	return *CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address);
>> +}
>> +
>> +/**
>> + * Reads a 64 bit value from the processor local scratchpad memory.
>> + *
>> + * @param address byte address to read from
>> + *
>> + * @return value read
>> + */
>> +static inline u64 cvmx_scratch_read64(u64 address)
>> +{
>> +	return *CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address);
>> +}
>> +
>> +/**
>> + * Writes an 8 bit value to the processor local scratchpad memory.
>> + *
>> + * @param address byte address to write to
>> + * @param value   value to write
>> + */
>> +static inline void cvmx_scratch_write8(u64 address, u64 value)
>> +{
>> +	*CASTPTR(volatile u8, CVMX_SCRATCH_BASE + address) = (u8)value;
>> +}
>> +
>> +/**
>> + * Writes a 16 bit value to the processor local scratchpad memory.
>> + *
>> + * @param address byte address to write to
>> + * @param value   value to write
>> + */
>> +static inline void cvmx_scratch_write16(u64 address, u64 value)
>> +{
>> +	*CASTPTR(volatile u16, CVMX_SCRATCH_BASE + address) = (u16)value;
>> +}
>> +
>> +/**
>> + * Writes a 32 bit value to the processor local scratchpad memory.
>> + *
>> + * @param address byte address to write to
>> + * @param value   value to write
>> + */
>> +static inline void cvmx_scratch_write32(u64 address, u64 value)
>> +{
>> +	*CASTPTR(volatile u32, CVMX_SCRATCH_BASE + address) = (u32)value;
>> +}
>> +
>> +/**
>> + * Writes a 64 bit value to the processor local scratchpad memory.
>> + *
>> + * @param address byte address to write to
>> + * @param value   value to write
>> + */
>> +static inline void cvmx_scratch_write64(u64 address, u64 value)
>> +{
>> +	*CASTPTR(volatile u64, CVMX_SCRATCH_BASE + address) = value;
>> +}
>> +
>> +#endif /* __CVMX_SCRATCH_H__ */
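Since all scratch addresses are byte offsets relative to
CVMX_SCRATCH_BASE, a minimal round-trip sketch of these accessors
(illustration only; offset 8 is an arbitrary free scratch location, and
the include path assumes the mach-octeon header layout added by this
series):

	#include <mach/cvmx-scratch.h>

	void example_scratch_roundtrip(void)
	{
		u64 off = 8;	/* byte offset into the local scratchpad */
		u64 v;

		cvmx_scratch_write64(off, 0x1122334455667788ull);
		v = cvmx_scratch_read64(off);	/* v == 0x1122334455667788 */
		(void)v;
	}
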
>> diff --git a/arch/mips/mach-octeon/include/mach/cvmx-wqe.h b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
>> new file mode 100644
>> index 000000000000..c9e3c8312a65
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/cvmx-wqe.h
>> @@ -0,0 +1,1462 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + *
>> + * This header file defines the work queue entry (wqe) data structure.
>> + * Since this is a commonly used structure that depends on structures
>> + * from several hardware blocks, those definitions have been placed
>> + * in this file to create a single point of definition of the wqe
>> + * format.
>> + * Data structures are still named according to the block that they
>> + * relate to.
>> + */
>> +
>> +#ifndef __CVMX_WQE_H__
>> +#define __CVMX_WQE_H__
>> +
>> +#include "cvmx-packet.h"
>> +#include "cvmx-csr-enums.h"
>> +#include "cvmx-pki-defs.h"
>> +#include "cvmx-pip-defs.h"
>> +#include "octeon-feature.h"
>> +
>> +#define OCT_TAG_TYPE_STRING(x)						\
>> +	(((x) == CVMX_POW_TAG_TYPE_ORDERED) ?				\
>> +	 "ORDERED" :							\
>> +	 (((x) == CVMX_POW_TAG_TYPE_ATOMIC) ?				\
>> +	  "ATOMIC" :							\
>> +	  (((x) == CVMX_POW_TAG_TYPE_NULL) ? "NULL" : "NULL_NULL")))
>> +
>> +/* Error levels in WQE WORD2 (ERRLEV).*/
>> +#define PKI_ERRLEV_E__RE_M 0x0
>> +#define PKI_ERRLEV_E__LA_M 0x1
>> +#define PKI_ERRLEV_E__LB_M 0x2
>> +#define PKI_ERRLEV_E__LC_M 0x3
>> +#define PKI_ERRLEV_E__LD_M 0x4
>> +#define PKI_ERRLEV_E__LE_M 0x5
>> +#define PKI_ERRLEV_E__LF_M 0x6
>> +#define PKI_ERRLEV_E__LG_M 0x7
>> +
>> +enum cvmx_pki_errlevel {
>> +	CVMX_PKI_ERRLEV_E_RE = PKI_ERRLEV_E__RE_M,
>> +	CVMX_PKI_ERRLEV_E_LA = PKI_ERRLEV_E__LA_M,
>> +	CVMX_PKI_ERRLEV_E_LB = PKI_ERRLEV_E__LB_M,
>> +	CVMX_PKI_ERRLEV_E_LC = PKI_ERRLEV_E__LC_M,
>> +	CVMX_PKI_ERRLEV_E_LD = PKI_ERRLEV_E__LD_M,
>> +	CVMX_PKI_ERRLEV_E_LE = PKI_ERRLEV_E__LE_M,
>> +	CVMX_PKI_ERRLEV_E_LF = PKI_ERRLEV_E__LF_M,
>> +	CVMX_PKI_ERRLEV_E_LG = PKI_ERRLEV_E__LG_M
>> +};
>> +
>> +#define CVMX_PKI_ERRLEV_MAX BIT(3) /* The size of WORD2:ERRLEV field. */
>> +
>> +/* Error code in WQE WORD2 (OPCODE).*/
>> +#define CVMX_PKI_OPCODE_RE_NONE	      0x0
>> +#define CVMX_PKI_OPCODE_RE_PARTIAL    0x1
>> +#define CVMX_PKI_OPCODE_RE_JABBER     0x2
>> +#define CVMX_PKI_OPCODE_RE_FCS	      0x7
>> +#define CVMX_PKI_OPCODE_RE_FCS_RCV    0x8
>> +#define CVMX_PKI_OPCODE_RE_TERMINATE  0x9
>> +#define CVMX_PKI_OPCODE_RE_RX_CTL     0xb
>> +#define CVMX_PKI_OPCODE_RE_SKIP	      0xc
>> +#define CVMX_PKI_OPCODE_RE_DMAPKT     0xf
>> +#define CVMX_PKI_OPCODE_RE_PKIPAR     0x13
>> +#define CVMX_PKI_OPCODE_RE_PKIPCAM    0x14
>> +#define CVMX_PKI_OPCODE_RE_MEMOUT     0x15
>> +#define CVMX_PKI_OPCODE_RE_BUFS_OFLOW 0x16
>> +#define CVMX_PKI_OPCODE_L2_FRAGMENT   0x20
>> +#define CVMX_PKI_OPCODE_L2_OVERRUN    0x21
>> +#define CVMX_PKI_OPCODE_L2_PFCS	      0x22
>> +#define CVMX_PKI_OPCODE_L2_PUNY	      0x23
>> +#define CVMX_PKI_OPCODE_L2_MAL	      0x24
>> +#define CVMX_PKI_OPCODE_L2_OVERSIZE   0x25
>> +#define CVMX_PKI_OPCODE_L2_UNDERSIZE  0x26
>> +#define CVMX_PKI_OPCODE_L2_LENMISM    0x27
>> +#define CVMX_PKI_OPCODE_IP_NOT	      0x41
>> +#define CVMX_PKI_OPCODE_IP_CHK	      0x42
>> +#define CVMX_PKI_OPCODE_IP_MAL	      0x43
>> +#define CVMX_PKI_OPCODE_IP_MALD	      0x44
>> +#define CVMX_PKI_OPCODE_IP_HOP	      0x45
>> +#define CVMX_PKI_OPCODE_L4_MAL	      0x61
>> +#define CVMX_PKI_OPCODE_L4_CHK	      0x62
>> +#define CVMX_PKI_OPCODE_L4_LEN	      0x63
>> +#define CVMX_PKI_OPCODE_L4_PORT	      0x64
>> +#define CVMX_PKI_OPCODE_TCP_FLAG      0x65
>> +
>> +#define CVMX_PKI_OPCODE_MAX BIT(8) /* The size of WORD2:OPCODE field. */
>> +
>> +/* Layer types in pki */
>> +#define CVMX_PKI_LTYPE_E_NONE_M	      0x0
>> +#define CVMX_PKI_LTYPE_E_ENET_M	      0x1
>> +#define CVMX_PKI_LTYPE_E_VLAN_M	      0x2
>> +#define CVMX_PKI_LTYPE_E_SNAP_PAYLD_M 0x5
>> +#define CVMX_PKI_LTYPE_E_ARP_M	      0x6
>> +#define CVMX_PKI_LTYPE_E_RARP_M	      0x7
>> +#define CVMX_PKI_LTYPE_E_IP4_M	      0x8
>> +#define CVMX_PKI_LTYPE_E_IP4_OPT_M    0x9
>> +#define CVMX_PKI_LTYPE_E_IP6_M	      0xA
>> +#define CVMX_PKI_LTYPE_E_IP6_OPT_M    0xB
>> +#define CVMX_PKI_LTYPE_E_IPSEC_ESP_M  0xC
>> +#define CVMX_PKI_LTYPE_E_IPFRAG_M     0xD
>> +#define CVMX_PKI_LTYPE_E_IPCOMP_M     0xE
>> +#define CVMX_PKI_LTYPE_E_TCP_M	      0x10
>> +#define CVMX_PKI_LTYPE_E_UDP_M	      0x11
>> +#define CVMX_PKI_LTYPE_E_SCTP_M	      0x12
>> +#define CVMX_PKI_LTYPE_E_UDP_VXLAN_M  0x13
>> +#define CVMX_PKI_LTYPE_E_GRE_M	      0x14
>> +#define CVMX_PKI_LTYPE_E_NVGRE_M      0x15
>> +#define CVMX_PKI_LTYPE_E_GTP_M	      0x16
>> +#define CVMX_PKI_LTYPE_E_SW28_M	      0x1C
>> +#define CVMX_PKI_LTYPE_E_SW29_M	      0x1D
>> +#define CVMX_PKI_LTYPE_E_SW30_M	      0x1E
>> +#define CVMX_PKI_LTYPE_E_SW31_M	      0x1F
>> +
>> +enum cvmx_pki_layer_type {
>> +	CVMX_PKI_LTYPE_E_NONE = CVMX_PKI_LTYPE_E_NONE_M,
>> +	CVMX_PKI_LTYPE_E_ENET = CVMX_PKI_LTYPE_E_ENET_M,
>> +	CVMX_PKI_LTYPE_E_VLAN = CVMX_PKI_LTYPE_E_VLAN_M,
>> +	CVMX_PKI_LTYPE_E_SNAP_PAYLD = CVMX_PKI_LTYPE_E_SNAP_PAYLD_M,
>> +	CVMX_PKI_LTYPE_E_ARP = CVMX_PKI_LTYPE_E_ARP_M,
>> +	CVMX_PKI_LTYPE_E_RARP = CVMX_PKI_LTYPE_E_RARP_M,
>> +	CVMX_PKI_LTYPE_E_IP4 = CVMX_PKI_LTYPE_E_IP4_M,
>> +	CVMX_PKI_LTYPE_E_IP4_OPT = CVMX_PKI_LTYPE_E_IP4_OPT_M,
>> +	CVMX_PKI_LTYPE_E_IP6 = CVMX_PKI_LTYPE_E_IP6_M,
>> +	CVMX_PKI_LTYPE_E_IP6_OPT = CVMX_PKI_LTYPE_E_IP6_OPT_M,
>> +	CVMX_PKI_LTYPE_E_IPSEC_ESP = CVMX_PKI_LTYPE_E_IPSEC_ESP_M,
>> +	CVMX_PKI_LTYPE_E_IPFRAG = CVMX_PKI_LTYPE_E_IPFRAG_M,
>> +	CVMX_PKI_LTYPE_E_IPCOMP = CVMX_PKI_LTYPE_E_IPCOMP_M,
>> +	CVMX_PKI_LTYPE_E_TCP = CVMX_PKI_LTYPE_E_TCP_M,
>> +	CVMX_PKI_LTYPE_E_UDP = CVMX_PKI_LTYPE_E_UDP_M,
>> +	CVMX_PKI_LTYPE_E_SCTP = CVMX_PKI_LTYPE_E_SCTP_M,
>> +	CVMX_PKI_LTYPE_E_UDP_VXLAN = CVMX_PKI_LTYPE_E_UDP_VXLAN_M,
>> +	CVMX_PKI_LTYPE_E_GRE = CVMX_PKI_LTYPE_E_GRE_M,
>> +	CVMX_PKI_LTYPE_E_NVGRE = CVMX_PKI_LTYPE_E_NVGRE_M,
>> +	CVMX_PKI_LTYPE_E_GTP = CVMX_PKI_LTYPE_E_GTP_M,
>> +	CVMX_PKI_LTYPE_E_SW28 = CVMX_PKI_LTYPE_E_SW28_M,
>> +	CVMX_PKI_LTYPE_E_SW29 = CVMX_PKI_LTYPE_E_SW29_M,
>> +	CVMX_PKI_LTYPE_E_SW30 = CVMX_PKI_LTYPE_E_SW30_M,
>> +	CVMX_PKI_LTYPE_E_SW31 = CVMX_PKI_LTYPE_E_SW31_M,
>> +	CVMX_PKI_LTYPE_E_MAX = CVMX_PKI_LTYPE_E_SW31
>> +};
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 ptr_vlan : 8;
>> +		u64 ptr_layer_g : 8;
>> +		u64 ptr_layer_f : 8;
>> +		u64 ptr_layer_e : 8;
>> +		u64 ptr_layer_d : 8;
>> +		u64 ptr_layer_c : 8;
>> +		u64 ptr_layer_b : 8;
>> +		u64 ptr_layer_a : 8;
>> +	};
>> +} cvmx_pki_wqe_word4_t;
>> +
>> +/**
>> + * HW decode / err_code in work queue entry
>> + */
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 ip_offset : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 varies : 12;
>> +		u64 dec_ipcomp : 1;
>> +		u64 tcp_or_udp : 1;
>> +		u64 dec_ipsec : 1;
>> +		u64 is_v6 : 1;
>> +		u64 software : 1;
>> +		u64 L4_error : 1;
>> +		u64 is_frag : 1;
>> +		u64 IP_exc : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} s;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 ip_offset : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 port : 12;
>> +		u64 dec_ipcomp : 1;
>> +		u64 tcp_or_udp : 1;
>> +		u64 dec_ipsec : 1;
>> +		u64 is_v6 : 1;
>> +		u64 software : 1;
>> +		u64 L4_error : 1;
>> +		u64 is_frag : 1;
>> +		u64 IP_exc : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} s_cn68xx;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 ip_offset : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 pr : 4;
>> +		u64 unassigned2a : 4;
>> +		u64 unassigned2 : 4;
>> +		u64 dec_ipcomp : 1;
>> +		u64 tcp_or_udp : 1;
>> +		u64 dec_ipsec : 1;
>> +		u64 is_v6 : 1;
>> +		u64 software : 1;
>> +		u64 L4_error : 1;
>> +		u64 is_frag : 1;
>> +		u64 IP_exc : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} s_cn38xx;
>> +	struct {
>> +		u64 unused1 : 16;
>> +		u64 vlan : 16;
>> +		u64 unused2 : 32;
>> +	} svlan;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 unused : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 varies : 12;
>> +		u64 unassigned2 : 4;
>> +		u64 software : 1;
>> +		u64 unassigned3 : 1;
>> +		u64 is_rarp : 1;
>> +		u64 is_arp : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} snoip;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 unused : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 port : 12;
>> +		u64 unassigned2 : 4;
>> +		u64 software : 1;
>> +		u64 unassigned3 : 1;
>> +		u64 is_rarp : 1;
>> +		u64 is_arp : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} snoip_cn68xx;
>> +	struct {
>> +		u64 bufs : 8;
>> +		u64 unused : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 unassigned : 1;
>> +		u64 vlan_cfi : 1;
>> +		u64 vlan_id : 12;
>> +		u64 pr : 4;
>> +		u64 unassigned2a : 8;
>> +		u64 unassigned2 : 4;
>> +		u64 software : 1;
>> +		u64 unassigned3 : 1;
>> +		u64 is_rarp : 1;
>> +		u64 is_arp : 1;
>> +		u64 is_bcast : 1;
>> +		u64 is_mcast : 1;
>> +		u64 not_IP : 1;
>> +		u64 rcv_error : 1;
>> +		u64 err_code : 8;
>> +	} snoip_cn38xx;
>> +} cvmx_pip_wqe_word2_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 software : 1;
>> +		u64 lg_hdr_type : 5;
>> +		u64 lf_hdr_type : 5;
>> +		u64 le_hdr_type : 5;
>> +		u64 ld_hdr_type : 5;
>> +		u64 lc_hdr_type : 5;
>> +		u64 lb_hdr_type : 5;
>> +		u64 is_la_ether : 1;
>> +		u64 rsvd_0 : 8;
>> +		u64 vlan_valid : 1;
>> +		u64 vlan_stacked : 1;
>> +		u64 stat_inc : 1;
>> +		u64 pcam_flag4 : 1;
>> +		u64 pcam_flag3 : 1;
>> +		u64 pcam_flag2 : 1;
>> +		u64 pcam_flag1 : 1;
>> +		u64 is_frag : 1;
>> +		u64 is_l3_bcast : 1;
>> +		u64 is_l3_mcast : 1;
>> +		u64 is_l2_bcast : 1;
>> +		u64 is_l2_mcast : 1;
>> +		u64 is_raw : 1;
>> +		u64 err_level : 3;
>> +		u64 err_code : 8;
>> +	};
>> +} cvmx_pki_wqe_word2_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	cvmx_pki_wqe_word2_t pki;
>> +	cvmx_pip_wqe_word2_t pip;
>> +} cvmx_wqe_word2_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u16 hw_chksum;
>> +		u8 unused;
>> +		u64 next_ptr : 40;
>> +	} cn38xx;
>> +	struct {
>> +		u64 l4ptr : 8;	  /* 56..63 */
>> +		u64 unused0 : 8;  /* 48..55 */
>> +		u64 l3ptr : 8;	  /* 40..47 */
>> +		u64 l2ptr : 8;	  /* 32..39 */
>> +		u64 unused1 : 18; /* 14..31 */
>> +		u64 bpid : 6;	  /* 8..13 */
>> +		u64 unused2 : 2;  /* 6..7 */
>> +		u64 pknd : 6;	  /* 0..5 */
>> +	} cn68xx;
>> +} cvmx_pip_wqe_word0_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 rsvd_0 : 4;
>> +		u64 aura : 12;
>> +		u64 rsvd_1 : 1;
>> +		u64 apad : 3;
>> +		u64 channel : 12;
>> +		u64 bufs : 8;
>> +		u64 style : 8;
>> +		u64 rsvd_2 : 10;
>> +		u64 pknd : 6;
>> +	};
>> +} cvmx_pki_wqe_word0_t;
>> +
>> +/* Use reserved bit, set by HW to 0, to indicate buf_ptr legacy translation */
>> +#define pki_wqe_translated word0.rsvd_1
>> +
>> +typedef union {
>> +	u64 u64;
>> +	cvmx_pip_wqe_word0_t pip;
>> +	cvmx_pki_wqe_word0_t pki;
>> +	struct {
>> +		u64 unused : 24;
>> +		u64 next_ptr : 40; /* On cn68xx this is unused as well */
>> +	} raw;
>> +} cvmx_wqe_word0_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 len : 16;
>> +		u64 rsvd_0 : 2;
>> +		u64 rsvd_1 : 2;
>> +		u64 grp : 10;
>> +		cvmx_pow_tag_type_t tag_type : 2;
>> +		u64 tag : 32;
>> +	};
>> +} cvmx_pki_wqe_word1_t;
>> +
>> +#define pki_errata20776 word1.rsvd_0
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 len : 16;
>> +		u64 varies : 14;
>> +		cvmx_pow_tag_type_t tag_type : 2;
>> +		u64 tag : 32;
>> +	};
>> +	cvmx_pki_wqe_word1_t cn78xx;
>> +	struct {
>> +		u64 len : 16;
>> +		u64 zero_0 : 1;
>> +		u64 qos : 3;
>> +		u64 zero_1 : 1;
>> +		u64 grp : 6;
>> +		u64 zero_2 : 3;
>> +		cvmx_pow_tag_type_t tag_type : 2;
>> +		u64 tag : 32;
>> +	} cn68xx;
>> +	struct {
>> +		u64 len : 16;
>> +		u64 ipprt : 6;
>> +		u64 qos : 3;
>> +		u64 grp : 4;
>> +		u64 zero_2 : 1;
>> +		cvmx_pow_tag_type_t tag_type : 2;
>> +		u64 tag : 32;
>> +	} cn38xx;
>> +} cvmx_wqe_word1_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 rsvd_0 : 8;
>> +		u64 hwerr : 8;
>> +		u64 rsvd_1 : 24;
>> +		u64 sqid : 8;
>> +		u64 rsvd_2 : 4;
>> +		u64 vfnum : 12;
>> +	};
>> +} cvmx_wqe_word3_t;
>> +
>> +typedef union {
>> +	u64 u64;
>> +	struct {
>> +		u64 rsvd_0 : 21;
>> +		u64 sqfc : 11;
>> +		u64 rsvd_1 : 5;
>> +		u64 sqtail : 11;
>> +		u64 rsvd_2 : 3;
>> +		u64 sqhead : 13;
>> +	};
>> +} cvmx_wqe_word4_t;
>> +
>> +/**
>> + * Work queue entry format.
>> + * Must be 8-byte aligned.
>> + */
>> +typedef struct cvmx_wqe_s {
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 0                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_wqe_word0_t word0;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 1                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_wqe_word1_t word1;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 2                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled in by hardware when a
>> +	 * packet arrives. This indicates a variety of status and error
>> +	 * conditions.
>> +	 */
>> +	cvmx_pip_wqe_word2_t word2;
>> +
>> +	/* Pointer to the first segment of the packet. */
>> +	cvmx_buf_ptr_t packet_ptr;
>> +
>> +	/* HW WRITE: OCTEON will fill in a programmable amount from the
>> +	 * packet, up to (at most, but perhaps less) the amount needed to
>> +	 * fill the work queue entry to 128 bytes. If the packet is
>> +	 * recognized to be IP, the hardware starts (except that the IPv4
>> +	 * header is padded for appropriate alignment) writing here where
>> +	 * the IP header starts. If the packet is not recognized to be IP,
>> +	 * the hardware starts writing the beginning of the packet here.
>> +	 */
>> +	u8 packet_data[96];
>> +
>> +	/* If desired, SW can make the work Q entry any length. For the
>> +	 * purposes of discussion here, assume 128B always, as this is all
>> +	 * that the hardware deals with.
>> +	 */
>> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_t;
>> +
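Given word0/word1/word2/packet_ptr at 8 bytes each plus 96 bytes of
packet_data, the struct works out to exactly 128 bytes. A compile-time
check sketch (not in the patch; U-Boot code might prefer BUILD_BUG_ON
over C11 _Static_assert):

	/* The WQE must span exactly one 128-byte cache line */
	_Static_assert(sizeof(cvmx_wqe_t) == 128, "cvmx_wqe_t must be 128 bytes");
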
>> +/**
>> + * Work queue entry format for NQM
>> + * Must be 8-byte aligned
>> + */
>> +typedef struct cvmx_wqe_nqm_s {
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 0                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_wqe_word0_t word0;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 1                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_wqe_word1_t word1;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 2                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* Reserved */
>> +	u64 word2;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 3                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* NVMe specific information. */
>> +	cvmx_wqe_word3_t word3;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 4                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* NVMe specific information. */
>> +	cvmx_wqe_word4_t word4;
>> +
>> +	/* HW WRITE: OCTEON will fill in a programmable amount from the
>> +	 * packet, up to (at most, but perhaps less) the amount needed to
>> +	 * fill the work queue entry to 128 bytes. If the packet is
>> +	 * recognized to be IP, the hardware starts (except that the IPv4
>> +	 * header is padded for appropriate alignment) writing here where
>> +	 * the IP header starts. If the packet is not recognized to be IP,
>> +	 * the hardware starts writing the beginning of the packet here.
>> +	 */
>> +	u8 packet_data[88];
>> +
>> +	/* If desired, SW can make the work Q entry any length.
>> +	 * For the purposes of discussion here, assume 128B always, as
>> +	 * this is all that the hardware deals with.
>> +	 */
>> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_nqm_t;
>> +
>> +/**
>> + * Work queue entry format for 78XX.
>> + * In 78XX packet data always resides in WQE buffer unless option
>> + * DIS_WQ_DAT=1 in PKI_STYLE_BUF, which causes packet data to use
>> separate buffer.
>> + *
>> + * Must be 8-byte aligned.
>> + */
>> +typedef struct {
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 0                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_pki_wqe_word0_t word0;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 1                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled by HW when a packet
>> +	 * arrives.
>> +	 */
>> +	cvmx_pki_wqe_word1_t word1;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 2                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled in by hardware when a
>> +	 * packet arrives. This indicates a variety of status and error
>> +	 * conditions.
>> +	 */
>> +	cvmx_pki_wqe_word2_t word2;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 3                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* Pointer to the first segment of the packet. */
>> +	cvmx_buf_ptr_pki_t packet_ptr;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORD 4                                                            */
>> +	/*-------------------------------------------------------------------*/
>> +	/* HW WRITE: the following 64 bits are filled in by hardware when a
>> +	 * packet arrives; they contain byte pointers to the start of
>> +	 * Layers A/B/C/D/E/F/G, relative to the start of the packet.
>> +	 */
>> +	cvmx_pki_wqe_word4_t word4;
>> +
>> +	/*-------------------------------------------------------------------*/
>> +	/* WORDs 5/6/7 may be extended here, if WQE_HSZ is set.              */
>> +	/*-------------------------------------------------------------------*/
>> +	u64 wqe_data[11];
>> +
>> +} CVMX_CACHE_LINE_ALIGNED cvmx_wqe_78xx_t;
>> +
>> +/* Node LS-bit position in the WQE[grp] or PKI_QPG_TBL[grp_ok].*/
>> +#define CVMX_WQE_GRP_NODE_SHIFT 8
>> +
>> +/*
>> + * This is an accessor function into the WQE that retrieves the
>> + * ingress port number, which can also be used as a destination
>> + * port number for the same port.
>> + *
>> + * @param work - Work Queue Entry pointer
>> + * @return Returns the normalized port number, also known as the "ipd"
>> + *	   port
>> + */
>> +static inline int cvmx_wqe_get_port(cvmx_wqe_t *work)
>> +{
>> +	int port;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		/* In 78xx the wqe entry has a channel number, not a port */
>> +		port = work->word0.pki.channel;
>> +		/* For BGX interfaces (0x800 - 0xdff) the 4 LSBs indicate
>> +		 * the PFC channel; these must be cleared to normalize to
>> +		 * "ipd"
>> +		 */
>> +		if (port & 0x800)
>> +			port &= 0xff0;
>> +		/* Node number is in the AURA field, make it part of the
>> +		 * port number
>> +		 */
>> +		port |= (work->word0.pki.aura >> 10) << 12;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		port = work->word2.s_cn68xx.port;
>> +	} else {
>> +		port = work->word1.cn38xx.ipprt;
>> +	}
>> +
>> +	return port;
>> +}
>> +
>> +static inline void cvmx_wqe_set_port(cvmx_wqe_t *work, int port)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word0.pki.channel = port;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		work->word2.s_cn68xx.port = port;
>> +	else
>> +		work->word1.cn38xx.ipprt = port;
>> +}
>> +
>> +static inline int cvmx_wqe_get_grp(cvmx_wqe_t *work)
>> +{
>> +	int grp;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		/* legacy: GRP[0..2] :=QOS */
>> +		grp = (0xff & work->word1.cn78xx.grp) >> 3;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		grp = work->word1.cn68xx.grp;
>> +	else
>> +		grp = work->word1.cn38xx.grp;
>> +
>> +	return grp;
>> +}
>> +
>> +static inline void cvmx_wqe_set_xgrp(cvmx_wqe_t *work, int grp)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word1.cn78xx.grp = grp;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		work->word1.cn68xx.grp = grp;
>> +	else
>> +		work->word1.cn38xx.grp = grp;
>> +}
>> +
>> +static inline int cvmx_wqe_get_xgrp(cvmx_wqe_t *work)
>> +{
>> +	int grp;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		grp = work->word1.cn78xx.grp;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		grp = work->word1.cn68xx.grp;
>> +	else
>> +		grp = work->word1.cn38xx.grp;
>> +
>> +	return grp;
>> +}
>> +
>> +static inline void cvmx_wqe_set_grp(cvmx_wqe_t *work, int grp)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		unsigned int node = cvmx_get_node_num();
>> +		/* Legacy: GRP[0..2] :=QOS */
>> +		work->word1.cn78xx.grp &= 0x7;
>> +		work->word1.cn78xx.grp |= 0xff & (grp << 3);
>> +		work->word1.cn78xx.grp |= (node << 8);
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		work->word1.cn68xx.grp = grp;
>> +	} else {
>> +		work->word1.cn38xx.grp = grp;
>> +	}
>> +}
>> +
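To summarize the CN78XX encoding that these grp/qos helpers maintain
(a sketch of my reading of the code above, not authoritative
documentation):

	/*
	 * WORD1[grp] on CN78XX, as maintained by cvmx_wqe_set_grp() and
	 * cvmx_wqe_set_qos():
	 *
	 *   bits [2:0] - legacy QOS value
	 *   bits [7:3] - legacy group number
	 *   bits [9:8] - node number (CVMX_WQE_GRP_NODE_SHIFT == 8)
	 */
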
>> +static inline int cvmx_wqe_get_qos(cvmx_wqe_t *work)
>> +{
>> +	int qos;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		/* Legacy: GRP[0..2] :=QOS */
>> +		qos = work->word1.cn78xx.grp & 0x7;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		qos = work->word1.cn68xx.qos;
>> +	} else {
>> +		qos = work->word1.cn38xx.qos;
>> +	}
>> +
>> +	return qos;
>> +}
>> +
>> +static inline void cvmx_wqe_set_qos(cvmx_wqe_t *work, int qos)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		/* legacy: GRP[0..2] :=QOS */
>> +		work->word1.cn78xx.grp &= ~0x7;
>> +		work->word1.cn78xx.grp |= qos & 0x7;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		work->word1.cn68xx.qos = qos;
>> +	} else {
>> +		work->word1.cn38xx.qos = qos;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_get_len(cvmx_wqe_t *work)
>> +{
>> +	int len;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		len = work->word1.cn78xx.len;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		len = work->word1.cn68xx.len;
>> +	else
>> +		len = work->word1.cn38xx.len;
>> +
>> +	return len;
>> +}
>> +
>> +static inline void cvmx_wqe_set_len(cvmx_wqe_t *work, int len)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word1.cn78xx.len = len;
>> +	else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE))
>> +		work->word1.cn68xx.len = len;
>> +	else
>> +		work->word1.cn38xx.len = len;
>> +}
>> +
>> +/**
>> + * This function returns whether there were L1/L2 errors detected in
>> + * the packet.
>> + *
>> + * @param work	pointer to work queue entry
>> + *
>> + * @return	0 if the packet had no error, non-zero to indicate an
>> + *		error code.
>> + *
>> + * Please refer to the HRM for the specific model for a full
>> + * enumeration of error codes.
>> + * With Octeon1/Octeon2 models, the returned code indicates L1/L2
>> + * errors.
>> + * On CN73XX/CN78XX, the return code is the value of PKI_OPCODE_E,
>> + * if it is non-zero, otherwise the returned code will be derived from
>> + * PKI_ERRLEV_E such that an error indicated in LayerA will return
>> + * 0x20, LayerB - 0x30, LayerC - 0x40 and so forth.
>> + */
>> +static inline int cvmx_wqe_get_rcv_err(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_RE ||
>> +		    wqe->word2.err_code != 0)
>> +			return wqe->word2.err_code;
>> +		else
>> +			return (wqe->word2.err_level << 4) + 0x10;
>> +	} else if (work->word2.snoip.rcv_error) {
>> +		return work->word2.snoip.err_code;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static inline u32 cvmx_wqe_get_tag(cvmx_wqe_t *work)
>> +{
>> +	return work->word1.tag;
>> +}
>> +
>> +static inline void cvmx_wqe_set_tag(cvmx_wqe_t *work, u32 tag)
>> +{
>> +	work->word1.tag = tag;
>> +}
>> +
>> +static inline int cvmx_wqe_get_tt(cvmx_wqe_t *work)
>> +{
>> +	return work->word1.tag_type;
>> +}
>> +
>> +static inline void cvmx_wqe_set_tt(cvmx_wqe_t *work, int tt)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		work->word1.cn78xx.tag_type = (cvmx_pow_tag_type_t)tt;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		work->word1.cn68xx.tag_type = (cvmx_pow_tag_type_t)tt;
>> +		work->word1.cn68xx.zero_2 = 0;
>> +	} else {
>> +		work->word1.cn38xx.tag_type = (cvmx_pow_tag_type_t)tt;
>> +		work->word1.cn38xx.zero_2 = 0;
>> +	}
>> +}
>> +
>> +static inline u8 cvmx_wqe_get_unused8(cvmx_wqe_t *work)
>> +{
>> +	u8 bits;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		bits = wqe->word2.rsvd_0;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		bits = work->word0.pip.cn68xx.unused1;
>> +	} else {
>> +		bits = work->word0.pip.cn38xx.unused;
>> +	}
>> +
>> +	return bits;
>> +}
>> +
>> +static inline void cvmx_wqe_set_unused8(cvmx_wqe_t *work, u8 v)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.rsvd_0 = v;
>> +	} else if (octeon_has_feature(OCTEON_FEATURE_CN68XX_WQE)) {
>> +		work->word0.pip.cn68xx.unused1 = v;
>> +	} else {
>> +		work->word0.pip.cn38xx.unused = v;
>> +	}
>> +}
>> +
>> +static inline u8 cvmx_wqe_get_user_flags(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return work->word0.pki.rsvd_2;
>> +	else
>> +		return 0;
>> +}
>> +
>> +static inline void cvmx_wqe_set_user_flags(cvmx_wqe_t *work, u8 v)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word0.pki.rsvd_2 = v;
>> +}
>> +
>> +static inline int cvmx_wqe_get_channel(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return (work->word0.pki.channel);
>> +	else
>> +		return cvmx_wqe_get_port(work);
>> +}
>> +
>> +static inline void cvmx_wqe_set_channel(cvmx_wqe_t *work, int channel)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word0.pki.channel = channel;
>> +	else
>> +		debug("%s: ERROR: not supported for model\n", __func__);
>> +}
>> +}
>> +
>> +static inline int cvmx_wqe_get_aura(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return (work->word0.pki.aura);
>> +	else
>> +		return (work->packet_ptr.s.pool);
>> +}
>> +
>> +static inline void cvmx_wqe_set_aura(cvmx_wqe_t *work, int aura)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word0.pki.aura = aura;
>> +	else
>> +		work->packet_ptr.s.pool = aura;
>> +}
>> +
>> +static inline int cvmx_wqe_get_style(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		return (work->word0.pki.style);
>> +	return 0;
>> +}
>> +
>> +static inline void cvmx_wqe_set_style(cvmx_wqe_t *work, int style)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE))
>> +		work->word0.pki.style = style;
>> +}
>> +
>> +static inline int cvmx_wqe_is_l3_ip(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +		/* Match all 4 values for v4/v6 with/without options */
>> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
>> +			return 1;
>> +		if ((wqe->word2.le_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
>> +			return 1;
>> +		return 0;
>> +	} else {
>> +		return !work->word2.s_cn38xx.not_IP;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_l3_ipv4(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +		/* Match 2 values - with/without options */
>> +		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
>> +			return 1;
>> +		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP4)
>> +			return 1;
>> +		return 0;
>> +	} else {
>> +		return (!work->word2.s_cn38xx.not_IP &&
>> +			!work->word2.s_cn38xx.is_v6);
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_l3_ipv6(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +		/* Match 2 values - with/without options */
>> +		if ((wqe->word2.lc_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
>> +			return 1;
>> +		if ((wqe->word2.le_hdr_type & 0x1e) == CVMX_PKI_LTYPE_E_IP6)
>> +			return 1;
>> +		return 0;
>> +	} else {
>> +		return (!work->word2.s_cn38xx.not_IP &&
>> +			work->word2.s_cn38xx.is_v6);
>> +	}
>> +}
>> +
>> +static inline bool cvmx_wqe_is_l4_udp_or_tcp(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_TCP)
>> +			return true;
>> +		if (wqe->word2.lf_hdr_type == CVMX_PKI_LTYPE_E_UDP)
>> +			return true;
>> +		return false;
>> +	}
>> +
>> +	if (work->word2.s_cn38xx.not_IP)
>> +		return false;
>> +
>> +	return (work->word2.s_cn38xx.tcp_or_udp != 0);
>> +}
>> +
>> +static inline int cvmx_wqe_is_l2_bcast(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.is_l2_bcast;
>> +	} else {
>> +		return work->word2.s_cn38xx.is_bcast;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_l2_mcast(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.is_l2_mcast;
>> +	} else {
>> +		return work->word2.s_cn38xx.is_mcast;
>> +	}
>> +}
>> +
>> +static inline void cvmx_wqe_set_l2_bcast(cvmx_wqe_t *work, bool bcast)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.is_l2_bcast = bcast;
>> +	} else {
>> +		work->word2.s_cn38xx.is_bcast = bcast;
>> +	}
>> +}
>> +
>> +static inline void cvmx_wqe_set_l2_mcast(cvmx_wqe_t *work, bool mcast)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.is_l2_mcast = mcast;
>> +	} else {
>> +		work->word2.s_cn38xx.is_mcast = mcast;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_l3_bcast(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.is_l3_bcast;
>> +	}
>> +	debug("%s: ERROR: not supported for model\n", __func__);
>> +	return 0;
>> +}
>> +
>> +static inline int cvmx_wqe_is_l3_mcast(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.is_l3_mcast;
>> +	}
>> +	debug("%s: ERROR: not supported for model\n", __func__);
>> +	return 0;
>> +}
>> +
>> +/**
>> + * This function returns whether an IP error was detected in the
>> + * packet.
>> + * For 78XX it does not flag IPv4 options and IPv6 extensions.
>> + * For older chips, if PIP_GBL_CTL was provisioned to flag IPv4
>> + * options and IPv6 extensions, they will be flagged.
>> + * @param work	pointer to work queue entry
>> + * @return	1 -- If an IP error was found in the packet
>> + *          0 -- If no IP error was found in the packet.
>> + */
>> +static inline int cvmx_wqe_is_ip_exception(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LC)
>> +			return 1;
>> +		else
>> +			return 0;
>> +	}
>> +
>> +	return work->word2.s.IP_exc;
>> +}
>> +
>> +static inline int cvmx_wqe_is_l4_error(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (wqe->word2.err_level == CVMX_PKI_ERRLEV_E_LF)
>> +			return 1;
>> +		else
>> +			return 0;
>> +	} else {
>> +		return work->word2.s.L4_error;
>> +	}
>> +}
>> +
>> +static inline void cvmx_wqe_set_vlan(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.vlan_valid = set;
>> +	} else {
>> +		work->word2.s.vlan_valid = set;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_vlan(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.vlan_valid;
>> +	} else {
>> +		return work->word2.s.vlan_valid;
>> +	}
>> +}
>> +
>> +static inline int cvmx_wqe_is_vlan_stacked(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.vlan_stacked;
>> +	} else {
>> +		return work->word2.s.vlan_stacked;
>> +	}
>> +}
>> +
>> +/**
>> + * Extract packet data buffer pointer from work queue entry.
>> + *
>> + * Returns the legacy (Octeon1/Octeon2) buffer pointer structure
>> + * for the linked buffer list.
>> + * On CN78XX, the native buffer pointer structure is converted into
>> + * the legacy format.
>> + * The legacy buf_ptr is then stored in the WQE, and word0 reserved
>> + * field is set to indicate that the buffer pointers were translated.
>> + * If the packet data is only found inside the work queue entry,
>> + * a standard buffer pointer structure is created for it.
>> + */
>> +cvmx_buf_ptr_t cvmx_wqe_get_packet_ptr(cvmx_wqe_t *work);
>> +
>> +static inline int cvmx_wqe_get_bufs(cvmx_wqe_t *work)
>> +{
>> +	int bufs;
>> +
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		bufs = work->word0.pki.bufs;
>> +	} else {
>> +		/* Adjust for packet-in-WQE cases */
>> +		if (cvmx_unlikely(work->word2.s_cn38xx.bufs == 0 &&
>> +				  !work->word2.s.software))
>> +			(void)cvmx_wqe_get_packet_ptr(work);
>> +		bufs = work->word2.s_cn38xx.bufs;
>> +	}
>> +	return bufs;
>> +}
>> +
>> +/**
>> + * Free Work Queue Entry memory
>> + *
>> + * Will return the WQE buffer to its pool, unless the WQE contains
>> + * non-redundant packet data.
>> + * This function is intended to be called AFTER the packet data
>> + * has been passed along to PKO for transmission and release.
>> + * It can also follow a call to cvmx_helper_free_packet_data()
>> + * to release the WQE after associated data was released.
>> + */
>> +void cvmx_wqe_free(cvmx_wqe_t *work);
>> +
>> +/**
>> + * Check if a work entry has been initiated by software
>> + */
>> +static inline bool cvmx_wqe_is_soft(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return wqe->word2.software;
>> +	} else {
>> +		return work->word2.s.software;
>> +	}
>> +}
>> +
>> +/**
>> + * Allocate a work-queue entry for delivering software-initiated
>> + * event notifications.
>> + * The application data is copied into the work-queue entry,
>> + * if the space is sufficient.
>> + */
>> +cvmx_wqe_t *cvmx_wqe_soft_create(void *data_p, unsigned int data_sz);
>> +
>> +/* Errata (PKI-20776) PKI_BUFLINK_S's are endian-swapped
>> + * CN78XX pass 1.x has a bug where the packet pointer in each segment
>> + * is written in the opposite endianness of the configured mode. Fix
>> + * these here.
>> + */
>> +static inline void cvmx_wqe_pki_errata_20776(cvmx_wqe_t *work)
>> +{
>> +	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +	if (OCTEON_IS_MODEL(OCTEON_CN78XX_PASS1_X) &&
>> +	    !wqe->pki_errata20776) {
>> +		u64 bufs;
>> +		cvmx_buf_ptr_pki_t buffer_next;
>> +
>> +		bufs = wqe->word0.bufs;
>> +		buffer_next = wqe->packet_ptr;
>> +		while (bufs > 1) {
>> +			cvmx_buf_ptr_pki_t next;
>> +			void *nextaddr = cvmx_phys_to_ptr(buffer_next.addr - 8);
>> +
>> +			memcpy(&next, nextaddr, sizeof(next));
>> +			next.u64 = __builtin_bswap64(next.u64);
>> +			memcpy(nextaddr, &next, sizeof(next));
>> +			buffer_next = next;
>> +			bufs--;
>> +		}
>> +		wqe->pki_errata20776 = 1;
>> +	}
>> +}
>> +
>> +/**
>> + * @INTERNAL
>> + *
>> + * Extract the native PKI-specific buffer pointer from WQE.
>> + *
>> + * NOTE: Provisional, may be superseded.
>> + */
>> +static inline cvmx_buf_ptr_pki_t cvmx_wqe_get_pki_pkt_ptr(cvmx_wqe_t *work)
>> +{
>> +	cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +	if (!octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_buf_ptr_pki_t x = { 0 };
>> +		return x;
>> +	}
>> +
>> +	cvmx_wqe_pki_errata_20776(work);
>> +	return wqe->packet_ptr;
>> +}
>> +
>> +/**
>> + * Set the buffer segment count for a packet.
>> + *
>> + * @return Returns the actual resulting value in the WQE field
>> + *
>> + */
>> +static inline unsigned int cvmx_wqe_set_bufs(cvmx_wqe_t *work,
>> +					     unsigned int bufs)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		work->word0.pki.bufs = bufs;
>> +		return work->word0.pki.bufs;
>> +	}
>> +
>> +	work->word2.s.bufs = bufs;
>> +	return work->word2.s.bufs;
>> +}
>> +
>> +/**
>> + * Get the offset of Layer-3 header,
>> + * only supported when Layer-3 protocol is IPv4 or IPv6.
>> + *
>> + * @return Returns the offset, or 0 if the offset is not known or
>> + *	   unsupported.
>> + *
>> + * FIXME: Assuming word4 is present.
>> + */
>> +static inline unsigned int cvmx_wqe_get_l3_offset(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +		/* Match 4 values: IPv4/v6 w/wo options */
>> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
>> +			return wqe->word4.ptr_layer_c;
>> +	} else {
>> +		return work->word2.s.ip_offset;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +/**
>> + * Set the offset of Layer-3 header in a packet.
>> + * Typically used when an IP packet is generated by software
>> + * or when the Layer-2 header length is modified, and
>> + * a subsequent recalculation of checksums is anticipated.
>> + *
>> + * @return Returns the actual value of the work entry offset field.
>> + *
>> + * FIXME: Assuming word4 is present.
>> + */
>> +static inline unsigned int cvmx_wqe_set_l3_offset(cvmx_wqe_t *work,
>> +						   unsigned int ip_off)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +		/* Match 4 values: IPv4/v6 w/wo options */
>> +		if ((wqe->word2.lc_hdr_type & 0x1c) == CVMX_PKI_LTYPE_E_IP4)
>> +			wqe->word4.ptr_layer_c = ip_off;
>> +	} else {
>> +		work->word2.s.ip_offset = ip_off;
>> +	}
>> +
>> +	return cvmx_wqe_get_l3_offset(work);
>> +}
>> +
>> +/**
>> + * Set the indication that the packet contains an IPv4 Layer-3
>> + * header.
>> + * Use 'cvmx_wqe_set_l3_ipv6()' if the protocol is IPv6.
>> + * When 'set' is false, the call will result in an indication
>> + * that the Layer-3 protocol is neither IPv4 nor IPv6.
>> + *
>> + * FIXME: Add IPV4_OPT handling based on L3 header length.
>> + */
>> +static inline void cvmx_wqe_set_l3_ipv4(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP4;
>> +		else
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		work->word2.s.not_IP = !set;
>> +		if (set)
>> +			work->word2.s_cn38xx.is_v6 = 0;
>> +	}
>> +}
>> +
>> +/**
>> + * Set packet Layer-3 protocol to IPv6.
>> + *
>> + * FIXME: Add IPV6_OPT handling based on presence of extended headers.
>> + */
>> +static inline void cvmx_wqe_set_l3_ipv6(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_IP6;
>> +		else
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		work->word2.s_cn38xx.not_IP = !set;
>> +		if (set)
>> +			work->word2.s_cn38xx.is_v6 = 1;
>> +	}
>> +}
>> +
>> +/**
>> + * Set a packet Layer-4 protocol type to UDP.
>> + */
>> +static inline void cvmx_wqe_set_l4_udp(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_UDP;
>> +		else
>> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		if (!work->word2.s_cn38xx.not_IP)
>> +			work->word2.s_cn38xx.tcp_or_udp = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Set a packet Layer-4 protocol type to TCP.
>> + */
>> +static inline void cvmx_wqe_set_l4_tcp(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_TCP;
>> +		else
>> +			wqe->word2.lf_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		if (!work->word2.s_cn38xx.not_IP)
>> +			work->word2.s_cn38xx.tcp_or_udp = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Set the "software" flag in a work entry.
>> + */
>> +static inline void cvmx_wqe_set_soft(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.software = set;
>> +	} else {
>> +		work->word2.s.software = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Return true if the packet is an IP fragment.
>> + */
>> +static inline bool cvmx_wqe_is_l3_frag(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return (wqe->word2.is_frag != 0);
>> +	}
>> +
>> +	if (!work->word2.s_cn38xx.not_IP)
>> +		return (work->word2.s.is_frag != 0);
>> +
>> +	return false;
>> +}
>> +
>> +/**
>> + * Set the indicator that the packet is a fragmented IP packet.
>> + */
>> +static inline void cvmx_wqe_set_l3_frag(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		wqe->word2.is_frag = set;
>> +	} else {
>> +		if (!work->word2.s_cn38xx.not_IP)
>> +			work->word2.s.is_frag = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Set the packet Layer-3 protocol to RARP.
>> + */
>> +static inline void cvmx_wqe_set_l3_rarp(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_RARP;
>> +		else
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		work->word2.snoip.is_rarp = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Set the packet Layer-3 protocol to ARP.
>> + */
>> +static inline void cvmx_wqe_set_l3_arp(cvmx_wqe_t *work, bool set)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		if (set)
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_ARP;
>> +		else
>> +			wqe->word2.lc_hdr_type = CVMX_PKI_LTYPE_E_NONE;
>> +	} else {
>> +		work->word2.snoip.is_arp = set;
>> +	}
>> +}
>> +
>> +/**
>> + * Return true if the packet Layer-3 protocol is ARP.
>> + */
>> +static inline bool cvmx_wqe_is_l3_arp(cvmx_wqe_t *work)
>> +{
>> +	if (octeon_has_feature(OCTEON_FEATURE_CN78XX_WQE)) {
>> +		cvmx_wqe_78xx_t *wqe = (cvmx_wqe_78xx_t *)work;
>> +
>> +		return (wqe->word2.lc_hdr_type == CVMX_PKI_LTYPE_E_ARP);
>> +	}
>> +
>> +	if (work->word2.s_cn38xx.not_IP)
>> +		return (work->word2.snoip.is_arp != 0);
>> +
>> +	return false;
>> +}
>> +
>> +#endif /* __CVMX_WQE_H__ */
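To double-check the accessor model, a minimal RX-side usage sketch
(illustration only; it assumes a valid work entry was already obtained
from the SSO/POW work-request path, and the include path assumes the
mach-octeon header layout added by this series):

	#include <mach/cvmx-wqe.h>

	void example_classify(cvmx_wqe_t *work)
	{
		if (cvmx_wqe_get_rcv_err(work)) {
			cvmx_wqe_free(work);	/* drop packets with L1/L2 errors */
			return;
		}

		if (cvmx_wqe_is_l3_ip(work) && cvmx_wqe_is_l4_udp_or_tcp(work)) {
			int port = cvmx_wqe_get_port(work); /* normalized "ipd" port */
			int len = cvmx_wqe_get_len(work);   /* packet length in bytes */

			/* ... hand the packet off for L3/L4 processing ... */
			(void)port;
			(void)len;
		}
	}
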
>> diff --git a/arch/mips/mach-octeon/include/mach/octeon_qlm.h b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
>> new file mode 100644
>> index 000000000000..219625b25688
>> --- /dev/null
>> +++ b/arch/mips/mach-octeon/include/mach/octeon_qlm.h
>> @@ -0,0 +1,109 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __OCTEON_QLM_H__
>> +#define __OCTEON_QLM_H__
>> +
>> +/* Reference clock selector values for ref_clk_sel */
>> +#define OCTEON_QLM_REF_CLK_100MHZ 0 /** 100 MHz */
>> +#define OCTEON_QLM_REF_CLK_125MHZ 1 /** 125 MHz */
>> +#define OCTEON_QLM_REF_CLK_156MHZ 2 /** 156.25 MHz */
>> +#define OCTEON_QLM_REF_CLK_161MHZ 3 /** 161.1328125 MHz */
>> +
>> +/**
>> + * Configure qlm/dlm speed and mode.
>> + * @param qlm     The QLM or DLM to configure
>> + * @param speed   The speed the QLM needs to be configured in Mhz.
>> + * @param mode    The QLM to be configured as SGMII/XAUI/PCIe.
>> + * @param rc      Only used for PCIe, rc = 1 for root complex mode,
>> + *		  0 for EP mode.
>> + * @param pcie_mode Only used when qlm/dlm are in pcie mode.
>> + * @param ref_clk_sel Reference clock to use for 70XX where:
>> + *			0: 100MHz
>> + *			1: 125MHz
>> + *			2: 156.25MHz
>> + *			3: 161.1328125MHz (CN73XX and CN78XX only)
>> + * @param ref_clk_input	This selects which reference clock input
>> + *			to use. For cn70xx:
>> + *				0: DLMC_REF_CLK0
>> + *				1: DLMC_REF_CLK1
>> + *				2: DLM0_REF_CLK
>> + *			cn61xx: (not used)
>> + *			cn78xx/cn76xx/cn73xx:
>> + *				0: Internal clock (QLM[0-7]_REF_CLK)
>> + *				1: QLMC_REF_CLK0
>> + *				2: QLMC_REF_CLK1
>> + *
>> + * @return	Return 0 on success or -1.
>> + *
>> + * @note	When the 161MHz clock is used it can only be used for
>> + *		XLAUI mode with a 6316 speed or XFI mode with a 103125 speed.
>> + *		This rate is also only supported for CN73XX and CN78XX.
>> + */
>> +int octeon_configure_qlm(int qlm, int speed, int mode, int rc, int pcie_mode, int ref_clk_sel,
>> +			 int ref_clk_input);
>> +
>> +int octeon_configure_qlm_cn78xx(int node, int qlm, int speed, int mode, int rc, int pcie_mode,
>> +				int ref_clk_sel, int ref_clk_input);
>> +
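
To make the parameter encoding above concrete, a sketch of a board-level
call bringing up a QLM as a PCIe root complex; the QLM number, the
5000 MHz (gen2) speed and the CVMX_QLM_MODE_PCIE constant are assumptions
for the example, not values mandated by this header:

	#include <mach/cvmx-qlm.h>
	#include <mach/octeon_qlm.h>

	static int board_qlm_pcie_init(void)
	{
		/* QLM 2 as PCIe gen2 root complex, 100 MHz reference
		 * on clock input 0; all values illustrative.
		 */
		return octeon_configure_qlm(2,		/* qlm */
					    5000,	/* speed in MHz */
					    CVMX_QLM_MODE_PCIE,	/* mode (assumed) */
					    1,		/* rc = root complex */
					    1,		/* pcie_mode */
					    OCTEON_QLM_REF_CLK_100MHZ,
					    0);		/* ref_clk_input */
	}
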
>> +/**
>> + * Some QLM speeds need to override the default tuning parameters
>> + *
>> + * @param node     Node to configure
>> + * @param qlm      QLM to configure
>> + * @param baud_mhz Desired speed in MHz
>> + * @param lane     Lane to apply the tuning parameters to
>> + * @param tx_swing Voltage swing.  The higher the value the lower the voltage;
>> + *		   the default value is 7.
>> + * @param tx_pre   pre-cursor pre-emphasis
>> + * @param tx_post  post-cursor pre-emphasis.
>> + * @param tx_gain   Transmit gain. Range 0-7
>> + * @param tx_vboost Transmit voltage boost. Range 0-1
>> + */
>> +void octeon_qlm_tune_per_lane_v3(int node, int qlm, int baud_mhz, int lane, int tx_swing,
>> +				 int tx_pre, int tx_post, int tx_gain, int tx_vboost);
>> +
>> +/**
>> + * Some QLM speeds need to override the default tuning parameters
>> + *
>> + * @param node     Node to configure
>> + * @param qlm      QLM to configure
>> + * @param baud_mhz Desired speed in MHz
>> + * @param tx_swing Voltage swing.  The higher the value the lower the voltage;
>> + *		   the default value is 7.
>> + * @param tx_premptap bits [0:3] pre-cursor pre-emphasis, bits [4:8] post-cursor
>> + *		      pre-emphasis.
>> + * @param tx_gain   Transmit gain. Range 0-7
>> + * @param tx_vboost Transmit voltage boost. Range 0-1
>> + */
>> +void octeon_qlm_tune_v3(int node, int qlm, int baud_mhz, int tx_swing, int tx_premptap, int tx_gain,
>> +			int tx_vboost);
>> +
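
Because tx_premptap packs two fields into one argument, a short sketch of
how a caller might build it; the tuning values are placeholders, not
recommended settings:

	#include <mach/octeon_qlm.h>

	/* Pack pre-cursor (bits [3:0]) and post-cursor (bits [8:4])
	 * pre-emphasis into the tx_premptap argument described above.
	 */
	#define QLM_TX_PREMPTAP(pre, post)	(((post) << 4) | ((pre) & 0xf))

	static void board_qlm_tune(void)
	{
		/* Tune QLM 4 on node 0 for XFI (103125 baud); values
		 * other than the packing are illustrative.
		 */
		octeon_qlm_tune_v3(0, 4, 103125,
				   7,			   /* tx_swing (default) */
				   QLM_TX_PREMPTAP(2, 12), /* pre = 2, post = 12 */
				   3,			   /* tx_gain */
				   0);			   /* tx_vboost off */
	}
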
>> +/**
>> + * Disables DFE for the specified QLM lane(s).
>> + * This function should only be called for low-loss channels.
>> + *
>> + * @param node     Node to configure
>> + * @param qlm      QLM to configure
>> + * @param lane     Lane to configure, or -1 for all lanes
>> + * @param baud_mhz The speed the QLM needs to be configured at, in MHz.
>> + * @param mode     The mode the QLM is configured in (SGMII/XAUI/PCIe).
>> + */
>> +void octeon_qlm_dfe_disable(int node, int qlm, int lane, int baud_mhz, int mode);
>> +
>> +/**
>> + * Some QLMs need to override the default pre-ctle for low loss channels.
>> + *
>> + * @param node     Node to configure
>> + * @param qlm      QLM to configure
>> + * @param pre_ctle pre-ctle settings for low loss channels
>> + */
>> +void octeon_qlm_set_channel_v3(int node, int qlm, int pre_ctle);
>> +
>> +void octeon_init_qlm(int node);
>> +
>> +int octeon_mcu_probe(int node);
>> +
>> +#endif /* __OCTEON_QLM_H__ */


Best regards,
Stefan

-- 
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-51 Fax: (+49)-8142-66989-80 Email: sr@denx.de


* [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver
  2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
                   ` (51 preceding siblings ...)
  2021-04-23 17:56 ` [PATCH v3 " Stefan Roese
@ 2021-04-24 22:49 ` Daniel Schwierzeck
  52 siblings, 0 replies; 57+ messages in thread
From: Daniel Schwierzeck @ 2021-04-24 22:49 UTC (permalink / raw)
  To: u-boot

On Friday, 2020-12-11 at 17:05 +0100, Stefan Roese wrote:
> This patchset adds the serdes and (mostly networking) device helper
> macros and functions, needed to support the still missing Octeon II /
> III devices in mainline U-Boot.
> 

series applied to u-boot-mips, thanks.

-- 
- Daniel


end of thread

Thread overview: 57+ messages
2020-12-11 16:05 [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Stefan Roese
2020-12-11 16:05 ` [PATCH v1 01/50] mips: global_data.h: Add Octeon specific data to arch_global_data struct Stefan Roese
2020-12-11 16:05 ` [PATCH v1 02/50] mips: octeon: Add misc cvmx-helper header files Stefan Roese
2020-12-11 16:05 ` [PATCH v1 03/50] mips: octeon: Add cvmx-agl-defs.h header file Stefan Roese
2020-12-11 16:05 ` [PATCH v1 04/50] mips: octeon: Add cvmx-asxx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 05/50] mips: octeon: Add cvmx-bgxx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 06/50] mips: octeon: Add cvmx-ciu-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 07/50] mips: octeon: Add cvmx-dbg-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 08/50] mips: octeon: Add cvmx-dpi-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 09/50] mips: octeon: Add cvmx-dtx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 10/50] mips: octeon: Add cvmx-fpa-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 11/50] mips: octeon: Add cvmx-gmxx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 12/50] mips: octeon: Add cvmx-gserx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 13/50] mips: octeon: Add cvmx-ipd-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 14/50] mips: octeon: Add cvmx-l2c-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 15/50] mips: octeon: Add cvmx-mio-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 16/50] mips: octeon: Add cvmx-npi-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 17/50] mips: octeon: Add cvmx-pcieepx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 18/50] mips: octeon: Add cvmx-pciercx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 19/50] mips: octeon: Add cvmx-pcsx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 20/50] mips: octeon: Add cvmx-pemx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 21/50] mips: octeon: Add cvmx-pepx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 22/50] mips: octeon: Add cvmx-pip-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 23/50] mips: octeon: Add cvmx-pki-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 24/50] mips: octeon: Add cvmx-pko-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 25/50] mips: octeon: Add cvmx-pow-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 26/50] mips: octeon: Add cvmx-rst-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 27/50] mips: octeon: Add cvmx-sata-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 28/50] mips: octeon: Add cvmx-sli-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 29/50] mips: octeon: Add cvmx-smix-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 30/50] mips: octeon: Add cvmx-sriomaintx-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 31/50] mips: octeon: Add cvmx-sriox-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 32/50] mips: octeon: Add cvmx-sso-defs.h " Stefan Roese
2020-12-11 16:05 ` [PATCH v1 33/50] mips: octeon: Add misc remaining header files Stefan Roese
2020-12-11 16:05 ` [PATCH v1 34/50] mips: octeon: Misc changes required because of the newly added headers Stefan Roese
2020-12-11 16:05 ` [PATCH v1 35/50] mips: octeon: Move cvmx-lmcx-defs.h from mach/cvmx to mach Stefan Roese
2020-12-11 16:05 ` [PATCH v1 36/50] mips: octeon: Add cvmx-helper-cfg.c Stefan Roese
2020-12-11 16:05 ` [PATCH v1 37/50] mips: octeon: Add cvmx-helper-fdt.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 38/50] mips: octeon: Add cvmx-helper-jtag.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 39/50] mips: octeon: Add cvmx-helper-util.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 40/50] mips: octeon: Add cvmx-helper.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 41/50] mips: octeon: Add cvmx-pcie.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 42/50] mips: octeon: Add cvmx-qlm.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 43/50] mips: octeon: Add octeon_fdt.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 44/50] mips: octeon: Add octeon_qlm.c Stefan Roese
2020-12-11 16:06 ` [PATCH v1 45/50] mips: octeon: Makefile: Enable building of the newly added C files Stefan Roese
2020-12-11 16:06 ` [PATCH v1 46/50] mips: octeon: Kconfig: Enable CONFIG_SYS_PCI_64BIT Stefan Roese
2020-12-11 16:06 ` [PATCH v1 47/50] mips: octeon: mrvl,cn73xx.dtsi: Add PCIe controller DT node Stefan Roese
2020-12-11 16:06 ` [PATCH v1 48/50] mips: octeon: octeon_ebb7304: Add board specific QLM init code Stefan Roese
2020-12-11 16:06 ` [PATCH v1 49/50] mips: octeon: Add Octeon PCIe host controller driver Stefan Roese
2021-04-07  6:43   ` [PATCH v2 " Stefan Roese
2020-12-11 16:06 ` [PATCH v1 50/50] mips: octeon: octeon_ebb7304_defconfig: Enable Octeon PCIe and E1000 Stefan Roese
2021-04-23  3:56 ` [PATCH v2 33/50] mips: octeon: Add misc remaining header files Stefan Roese
2021-04-23 16:38   ` Daniel Schwierzeck
2021-04-23 17:57     ` Stefan Roese
2021-04-23 17:56 ` [PATCH v3 " Stefan Roese
2021-04-24 22:49 ` [PATCH v1 00/50] mips: octeon: Add serdes and device helper support incl. DM PCIe driver Daniel Schwierzeck
